A VARIABLE KRASNOSELSKI-MANN ALGORITHM FOR A NEW CLASS OF FIXED-POINT PROBLEMS

ABDELLATIF MOUDAFI*

*CEREGMIA, Université des Antilles-Guyane, Département Scientifique Interfacultaire, Campus de Schoelcher, 97230 Cedex, Martinique (F.W.I.) ([email protected]).

Abstract. We study the convergence of a variable version of the Krasnoselski-Mann algorithm applied to a primal-dual fixed-point problem. The link with Spingarn's partial inverse method is made, and an application to feasibility problems and mathematical programming is also proposed.

Key words. Fixed point, firmly nonexpansive mapping, partial inverse, K-M algorithm.

AMS subject classifications. (2000) 49M45, 90C25, 65C25
1. Introduction and preliminaries. Throughout, $H$ is a real Hilbert space with inner product $\langle \cdot, \cdot \rangle$ and induced norm $\|\cdot\|$. Let $C$ be a nonempty closed convex subset of $H$. A mapping $Q : C \to C$ is said to be nonexpansive if
$$\|Q(x) - Q(y)\| \le \|x - y\| \quad \text{for all } x, y \in C.$$
The mapping $Q : C \to C$ is said to be firmly nonexpansive if
$$\|Q(x) - Q(y)\|^2 \le \langle Q(x) - Q(y), x - y \rangle \quad \text{for all } x, y \in C.$$
It is known that a mapping $Q : C \to C$ is firmly nonexpansive if and only if
$$\|Q(x) - Q(y)\|^2 + \|(I - Q)x - (I - Q)y\|^2 \le \|x - y\|^2 \quad \text{for all } x, y \in C,$$
where $I$ is the identity operator on $H$. The class of firmly nonexpansive mappings (which are in fact inverse strongly monotone, see [7]) is very useful in optimization and nonlinear analysis; it contains, in particular, projections onto closed convex sets, proximal mappings of proper convex lower semicontinuous functions, and resolvents of maximal monotone operators, see [9] and [10]. Algorithms for finding fixed points of (firmly) nonexpansive mappings have therefore received considerable attention, since these methods find applications in a variety of applied areas such as inverse problems, partial differential equations, signal recovery, robust regression, and location problems (see [2], [3], [4] and [5]).
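As a quick numerical illustration (ours, not part of the original text), the following sketch checks both inequalities above for the metric projection onto a closed ball; the center, radius, and sampling are arbitrary choices.

```python
import numpy as np

def proj_ball(z, center, radius):
    """Metric projection onto the closed ball B(center, radius)."""
    d = z - center
    n = np.linalg.norm(d)
    return z if n <= radius else center + radius * d / n

rng = np.random.default_rng(0)
center, radius = np.array([1.0, -2.0, 0.5]), 2.0

for _ in range(1000):
    x, y = rng.normal(size=3) * 5, rng.normal(size=3) * 5
    Qx, Qy = proj_ball(x, center, radius), proj_ball(y, center, radius)
    lhs = np.linalg.norm(Qx - Qy) ** 2
    # Firm nonexpansiveness: ||Qx - Qy||^2 <= <Qx - Qy, x - y>.
    assert lhs <= np.dot(Qx - Qy, x - y) + 1e-12
    # Equivalent characterization involving the complement I - Q.
    assert lhs + np.linalg.norm((x - Qx) - (y - Qy)) ** 2 \
           <= np.linalg.norm(x - y) ** 2 + 1e-12
print("both firm-nonexpansiveness inequalities hold on all samples")
```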
[email protected]).
1
2
A. MOUDAFI
the sense of a variational distance and has been used in [4] and [12] for the multiplesets split feasibility problem. The purpose of the present note is to study the following problem (1.1)
$$(1.1) \quad \text{find } x \in A, \ y \in A^\perp \ \text{such that } x = Q(x + y),$$
where $A$ is a linear subspace of $H$, $A^\perp$ is the orthogonal complement of $A$, and $Q$ is a firmly nonexpansive mapping on $H$. Note that when $A = H$, (1.1) reduces to finding fixed points of the mapping $Q$, and that when $A = \{0\}$, (1.1) reduces to finding fixed points of its complement operator $I - Q$, which is firmly nonexpansive if, and only if, $Q$ is firmly nonexpansive. In the interesting case when $A$ is a proper subspace of $H$, inspired by J. E. Spingarn [11], we introduce the concept of the partial complement of $Q$ with respect to the subspace $A$:
$$S_A^Q := \mathrm{Proj}_A \circ Q + \mathrm{Proj}_{A^\perp} \circ (I - Q).$$
The importance of this mapping stems from the fact that $(x, y)$ solves (1.1) if and only if $(x, y)$ is a solution of the following equivalent fixed-point problem:
$$(1.2) \quad \text{find } x \in A, \ y \in A^\perp \ \text{such that } x + y = S_A^Q(x + y).$$
This suggests the following strategy: to solve (1.1), we first find a fixed point of $S_A^Q$ and then project it onto $A$ (resp. onto $A^\perp$), which provides $x$ (resp. $y$). To approximate such solutions, we develop an algorithm which we call the partial complement K-M algorithm; it amounts to applying the Krasnoselski-Mann iteration to the mapping $S_A^Q$.
Our paper is organized as follows. In Section 2, we introduce our algorithm, prove a preliminary result, and then obtain a weak convergence theorem. In Section 3, the connection with the partial inverse method is made, and applications to convex feasibility problems and, more generally, convex programming are given.

2. The main results. In what follows, if $z \in H$, then $z_A$ (resp. $z_{A^\perp}$) denotes the projection of $z$ onto $A$ (resp. onto $A^\perp$). In our context, setting $z^k = x^k + y^k$, the algorithm can be broken into the following steps:

Partial Complement Algorithm (PCA):
i) Initialization: choose $x^0 \in A$, $y^0 \in A^\perp$.
ii) Proximal step: $a^k = Q(x^k + y^k)$.
iii) Update:
$$x^{k+1} = (1 - \alpha_k)x^k + \alpha_k (a^k)_A, \qquad y^{k+1} = (1 - \alpha_k)y^k + \alpha_k \big((I - Q)(x^k + y^k)\big)_{A^\perp}.$$
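To fix ideas, here is a minimal numerical sketch of (PCA) under assumptions of our own choosing: $H = \mathbb{R}^2$, $A = \mathrm{span}\{e_1\}$, and $Q$ the projection onto the closed ball $B((2,1), 1)$, which is firmly nonexpansive. For this instance one checks by hand that $x = (2, 0)$, $y = (0, 0)$ solves (1.1), and the convergence theorem below guarantees that the iterates approach such a solution.

```python
import numpy as np

def proj_ball(z, center, radius):
    """Projection onto the closed ball B(center, radius): firmly nonexpansive."""
    d = z - center
    n = np.linalg.norm(d)
    return z if n <= radius else center + radius * d / n

# A = span{e1} in R^2, so z_A = (z1, 0) and z_{A^perp} = (0, z2).
proj_A = lambda z: np.array([z[0], 0.0])
proj_Aperp = lambda z: np.array([0.0, z[1]])
Q = lambda z: proj_ball(z, np.array([2.0, 1.0]), 1.0)

x, y = np.zeros(2), np.zeros(2)   # x^0 in A, y^0 in A^perp
alpha = 1.0                       # constant step: sum alpha(2 - alpha) = +infty
for k in range(500):
    z = x + y
    a = Q(z)                                         # ii) proximal step
    x = (1 - alpha) * x + alpha * proj_A(a)          # iii) update in A
    y = (1 - alpha) * y + alpha * proj_Aperp(z - a)  # iii) update in A^perp

print("x =", x, " y =", y)
print("residual ||x - Q(x + y)|| =", np.linalg.norm(x - Q(x + y)))
```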
Before giving our convergence result, let us state the following remark.

Remark 2.1. The following equivalence holds true:
$z$ is a fixed point of $S_A^Q$ $\iff$ $(z_A, z_{A^\perp})$ provides a solution to problem (1.1).
Indeed, according to the definition of the mapping $S_A^Q$, we can write
$$z \in \mathrm{Fix}\, S_A^Q \iff z = (Q(z))_A + ((I - Q)(z))_{A^\perp} \iff \begin{cases} z_A = (Q(z))_A \\ z_{A^\perp} = ((I - Q)(z))_{A^\perp} \end{cases} \iff \begin{cases} z_A = (Q(z))_A \\ z_{A^\perp} + (Q(z))_{A^\perp} = z_{A^\perp} \end{cases} \iff z = z_{A^\perp} + Q(z) \iff z_A = Q(z).$$
Hence $(z_A, z_{A^\perp})$ solves (1.1).

The following lemma will be needed in the proof of our theorem.

Lemma 2.1 (see [13, Theorem 2.3]). Let $N, N^k$ be firmly nonexpansive operators on a Hilbert space $H$, for $k = 0, 1, \ldots$, and let $\alpha_k \in [0, 2]$ satisfy $\sum_{k=0}^{\infty} \alpha_k(2 - \alpha_k) = +\infty$. Then the sequence $(x^k)$ defined by the iterative step
$$x^{k+1} = (1 - \alpha_k)x^k + \alpha_k N^k(x^k)$$
converges weakly to a fixed point of $N$, provided $\sum_{k=0}^{\infty} \alpha_k D_\rho(N^k, N) < +\infty$ for any given $\rho > 0$, whenever such fixed points exist, where $D_\rho(N^k, N)$ is defined as
$$D_\rho(N^k, N) := \sup_{\|x\| \le \rho} \|N^k(x) - N(x)\|.$$
Furthermore, if $N$ has no fixed points, then $(x^k)$ is an unbounded sequence.

In order to allow a practical implementation of our method, we consider a perturbed version (PPCA) of (PCA), obtained by replacing $Q$ in the $k$-th iteration by a member $Q^k$ of a sequence of approximations of $Q$ in the sense of the variational distance $D_\rho$. Doing so, we obtain the following convergence result.

Theorem 2.2. If $Q, Q^k$ are firmly nonexpansive on $H$ and $A$ is a closed linear subspace of $H$, then the mapping $S_A^Q$ is firmly nonexpansive, and the sequence $(x^k, y^k)$ generated by (PPCA) converges weakly to a solution of (1.1), whenever such solutions exist, provided $\alpha_k \in [0, 2]$, $\sum_{k=0}^{\infty} \alpha_k(2 - \alpha_k) = +\infty$, and $\sum_{k=0}^{\infty} \alpha_k D_\rho(Q^k, Q) < +\infty$ for any given $\rho > 0$.

Proof. To begin with, we prove that $S_A^Q$ is firmly nonexpansive. Indeed, taking into account the fact that the metric projection is firmly nonexpansive, we can successively write
$$\begin{aligned}
\|S_A^Q(z) - S_A^Q(z')\|^2 &= \big\|(Q(z))_A - (Q(z'))_A + ((I - Q)(z))_{A^\perp} - ((I - Q)(z'))_{A^\perp}\big\|^2 \\
&= \|(Q(z))_A - (Q(z'))_A\|^2 + \|((I - Q)(z))_{A^\perp} - ((I - Q)(z'))_{A^\perp}\|^2 \\
&\le \langle (Q(z))_A - (Q(z'))_A,\, Q(z) - Q(z') \rangle \\
&\quad + \langle ((I - Q)(z))_{A^\perp} - ((I - Q)(z'))_{A^\perp},\, (I - Q)(z) - (I - Q)(z') \rangle \\
&= \langle (Q(z))_A - (Q(z'))_A,\, z - z' \rangle - \langle (Q(z))_A - (Q(z'))_A,\, (I - Q)(z) - (I - Q)(z') \rangle \\
&\quad + \langle ((I - Q)(z))_{A^\perp} - ((I - Q)(z'))_{A^\perp},\, z - z' \rangle \\
&\quad - \langle ((I - Q)(z))_{A^\perp} - ((I - Q)(z'))_{A^\perp},\, Q(z) - Q(z') \rangle \\
&= \langle S_A^Q(z) - S_A^Q(z'),\, z - z' \rangle - \big( \langle Q(z) - Q(z'),\, z - z' \rangle - \|Q(z) - Q(z')\|^2 \big) \\
&\le \langle S_A^Q(z) - S_A^Q(z'),\, z - z' \rangle,
\end{aligned}$$
because $\langle (Q(z))_A - (Q(z'))_A,\, (I - Q)(z) - (I - Q)(z') \rangle = \langle Q(z) - Q(z'),\, ((I - Q)(z))_A - ((I - Q)(z'))_A \rangle$ and $Q$ is firmly nonexpansive. On the other hand, we have that
$$D_\rho(S_A^{Q^k}, S_A^Q) = D_\rho(Q^k, Q).$$
Indeed, since
$$S_A^{Q^k}(z) - S_A^Q(z) = (Q^k(z))_A - (Q(z))_A + ((I - Q^k)(z))_{A^\perp} - ((I - Q)(z))_{A^\perp},$$
we have
$$\|S_A^{Q^k}(z) - S_A^Q(z)\| = \|(Q^k(z))_A - (Q(z))_A - (Q^k(z))_{A^\perp} + (Q(z))_{A^\perp}\| = \|(Q^k(z))_A - (Q(z))_A + (Q^k(z))_{A^\perp} - (Q(z))_{A^\perp}\| = \|Q^k(z) - Q(z)\|,$$
where the middle equality holds because flipping the sign of the $A^\perp$-component does not change the norm. Now, by applying Lemma 2.1 with $N = S_A^Q$ and $N^k = S_A^{Q^k}$, we obtain that $(z^k)$ converges weakly to some $\bar z$, where $\bar z$ is a fixed point of $S_A^Q$; thanks to Remark 2.1, this is equivalent to saying that $(\bar x := \bar z_A, \bar y := \bar z_{A^\perp})$ is a solution of (1.1).
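The identity above uses only the orthogonal splitting $H = A \oplus A^\perp$: the difference $S_A^{Q^k}(z) - S_A^Q(z)$ has $A$-component $(Q^k(z) - Q(z))_A$ and $A^\perp$-component $-(Q^k(z) - Q(z))_{A^\perp}$. A small numerical confirmation (our own sketch; the subspace, mapping, and perturbation are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
# Orthogonal projection onto A = range(B), built via the pseudo-inverse.
B = rng.normal(size=(n, 2))
P_A = B @ np.linalg.pinv(B)
P_Aperp = np.eye(n) - P_A

Q = lambda z: z / (1.0 + np.linalg.norm(z))   # some mapping on R^n
eps = rng.normal(size=n) * 0.1
Qk = lambda z: Q(z) + eps                     # perturbed approximation

# Partial complement S_A^Op = P_A o Op + P_{A^perp} o (I - Op).
S = lambda z, Op: P_A @ Op(z) + P_Aperp @ (z - Op(z))

for _ in range(100):
    z = rng.normal(size=n) * 3
    lhs = np.linalg.norm(S(z, Qk) - S(z, Q))
    rhs = np.linalg.norm(Qk(z) - Q(z))
    assert abs(lhs - rhs) < 1e-10
print("||S_A^{Qk}(z) - S_A^Q(z)|| equals ||Qk(z) - Q(z)|| on all samples")
```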
Remark 2.2. It should be noticed that an inexact version (IPCA) of (PCA) can be obtained by taking $Q^k(z) := Q(z) + \varepsilon^k$ for some $\varepsilon^k \in H$. In this case, we have $D_\rho(Q^k, Q) = \|\varepsilon^k\|$ for all $\rho > 0$, so the condition on the variational distance in Theorem 2.2 becomes $\sum_k \|\varepsilon^k\| < +\infty$, and step ii) of (PPCA) can be written as $a^k = Q(x^k + y^k) + \varepsilon^k$. Rather than being fixed in advance, one can think of $\varepsilon^k$ as an error that one is allowed to make in evaluating the quantity $Q(x^k + y^k)$.

3. Applications.
- Monotone variational inclusions: An interesting case is obtained by letting $Q := (I + T)^{-1}$, where $T$ is a maximal monotone operator. It is well known that in this case $Q$ is firmly nonexpansive, and it is easy to check that finding fixed points of $Q$ is equivalent to finding zeroes of $T$. In this setting, $S_A^Q$ coincides with $(I + T_A)^{-1}$, where $T_A$ stands for the partial inverse of $T$ with respect to $A$, the operator described by its graph
$$\{(x_A + y_{A^\perp},\, y_A + x_{A^\perp}) : y \in T(x)\}.$$
Moreover, our algorithm (PCA) with $\alpha_k = 1$ for all $k \in \mathbb{N}$ is nothing but the associated partial inverse method introduced by Spingarn in [11].
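As a concrete sanity check of this correspondence (our own toy instance, not from [11]), take $T(x) = x - b$ on $\mathbb{R}^2$ and $A = \mathrm{span}\{e_1\}$, so that $Q(z) = (I + T)^{-1}(z) = (z + b)/2$ and (1.1) asks for $x \in A$, $y \in A^\perp$ with $y \in T(x)$; the solution is $x = (b_1, 0)$, $y = (0, -b_2)$, and (PCA) with $\alpha_k = 1$ is exactly Spingarn's iteration:

```python
import numpy as np

b = np.array([3.0, -1.0])
Q = lambda z: (z + b) / 2.0                  # resolvent of T(x) = x - b
proj_A = lambda z: np.array([z[0], 0.0])     # A = span{e1}
proj_Aperp = lambda z: np.array([0.0, z[1]])

x, y = np.zeros(2), np.zeros(2)
for k in range(60):                          # alpha_k = 1: Spingarn's method
    z = x + y
    a = Q(z)
    x = proj_A(a)
    y = proj_Aperp(z - a)

print("x =", x)                              # -> (3, 0) = (b1, 0)
print("y =", y)                              # -> (0, 1) = (0, -b2)
print("y in T(x)?", np.allclose(y, x - b))
```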
- Constrained convex optimization: Consider the convex optimization problem, which covers many signal processing and inverse problems, of minimizing a convex function $f : H \to \mathbb{R}$ over the constraint set $C := \cap_{i=1}^m C_i \ne \emptyset$:
$$(3.1) \quad \text{find } \bar x \in \cap_{i=1}^m C_i \ \text{such that } f(\bar x) = \min_{x \in C} f(x),$$
which can be formulated equivalently as problem (1.1). Using the indicator functions $\chi_{C_i}$ of the subsets $C_i$, defined by $\chi_{C_i}(x) = 0$ if $x \in C_i$ and $\chi_{C_i}(x) = +\infty$ otherwise, (3.1) can be rewritten as
$$(3.2) \quad \min_{x \in \mathbb{R}^n} \Big\{ f(x) + \sum_{i=1}^m \chi_{C_i}(x) \Big\}.$$
Via a qualification condition, the corresponding necessary and sufficient optimality condition at $\bar x$ can be written in the following form:
$$(3.3) \quad 0 \in \partial f(\bar x) + \sum_{i=1}^m \partial \chi_{C_i}(\bar x).$$
In order to formulate the latter optimality condition in the form (1.1), we introduce a Hilbert space $H$, a suitable subspace $A$, and a mapping $Q$ as follows: $H := (\mathbb{R}^n)^m$ with the inner product
$$\langle u, v \rangle := \sum_{i=1}^m \langle x_i, y_i \rangle,$$
the subspace $A := \{u = (x_1, \ldots, x_m) : x_1 = x_2 = \cdots = x_m\}$, whose orthogonal complement is
$$A^\perp = \Big\{ v = (y_1, \ldots, y_m) : \sum_{i=1}^m y_i = 0 \Big\},$$
and the operator $Q$ defined on $H$ by $Q(u) := \prod_{i=1}^m \mathrm{prox}_{f + \chi_{C_i}}(x_i)$. With these notations, the optimality conditions can be written as
$$\text{find } \bar u \in A, \ \bar v \in A^\perp \ \text{such that } \bar u = Q(\bar u + \bar v),$$
or, more precisely,
$$(3.4) \quad \text{find } \bar x, \bar y_1, \ldots, \bar y_m \in \mathbb{R}^n \text{ with } \sum_{i=1}^m \bar y_i = 0 \ \text{such that } \bar x = \mathrm{prox}_{f + \chi_{C_i}}(\bar x + \bar y_i), \quad i = 1, \ldots, m.$$
In this context, our exact algorithm becomes:

Constrained Optimization Algorithm:
i) Initialization: choose $x^0 \in \mathbb{R}^n$ and $y_1^0, \ldots, y_m^0 \in \mathbb{R}^n$ such that $\sum_{i=1}^m y_i^0 = 0$.
ii) Proximal step: for $i = 1, \ldots, m$, compute $a_i^k = \mathrm{prox}_{f + \chi_{C_i}}(x^k + y_i^k)$.
iii) Update: for $i = 1, \ldots, m$,
$$x^{k+1} = (1 - \alpha_k)x^k + \frac{\alpha_k}{m} \sum_{i=1}^m a_i^k, \qquad y_i^{k+1} = (1 - \alpha_k)y_i^k + \alpha_k\big(y_i^k - a_i^k + x^{k+1}\big) = y_i^k + \alpha_k\big(x^{k+1} - a_i^k\big).$$
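For the convex feasibility case $f \equiv 0$ (see Remark 3.1 below), the proximal step reduces to the metric projection $P_{C_i}$, and the algorithm can be run as is. A minimal sketch on a toy instance of our own (two sets in $\mathbb{R}^2$, $\alpha_k \equiv 1$):

```python
import numpy as np

# Feasibility instance: C1 = {x : x1 >= 1} (half-plane), C2 = {x : ||x|| <= 2} (ball).
def proj_halfplane(z):
    return np.array([max(z[0], 1.0), z[1]])

def proj_ball(z, radius=2.0):
    n = np.linalg.norm(z)
    return z if n <= radius else radius * z / n

projections = [proj_halfplane, proj_ball]   # prox of chi_{C_i} when f = 0
m = len(projections)

x = np.array([5.0, 4.0])                    # x^0
ys = [np.zeros(2) for _ in range(m)]        # y_i^0 with sum y_i^0 = 0
alpha = 1.0
for k in range(200):
    a = [P(x + y) for P, y in zip(projections, ys)]   # ii) proximal steps
    x_new = (1 - alpha) * x + (alpha / m) * sum(a)    # iii)
    ys = [y + alpha * (x_new - ai) for y, ai in zip(ys, a)]
    x = x_new

print("x =", x, " sum y_i =", sum(ys))      # sum y_i stays 0 when alpha = 1
print("dist to C1:", np.linalg.norm(x - proj_halfplane(x)),
      " dist to C2:", np.linalg.norm(x - proj_ball(x)))
```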
Remark 3.1.
1) In the context of convex feasibility problems (i.e., $f \equiv 0$), the proximal mapping is nothing but the metric projection $P_{C_i}$, which is a typical firmly nonexpansive mapping. The proximal step ii) is easy to evaluate if the $C_i$ are simple, in the sense that a closed-form expression of $P_{C_i}$ is known, which implies that $P_{C_i}$ can be computed within a finite number of arithmetic operations. This is the case, for example, when $C_i$ is a linear variety, a closed ball, a closed cone, or a closed polytope.
2) In the case of mathematical programming, namely when the $C_i$ are given explicitly by $C_i = \{x \in \mathbb{R}^n : f_i(x) \le 0\}$, $i = 1, \ldots, m$, the variational distance assumption can be easily verified. In what follows, as a special approximation of $g_i = f + \chi_{C_i}$, we consider $g_i^k = f + \phi_i^k$, where $(\phi_i^k)$ is a sequence of penalty functions which approximate $\chi_{C_i}$, and we assume that the Slater condition is satisfied (namely, $\exists \bar x \in \mathbb{R}^n$ such that $f_i(\bar x) < 0$).

Exact penalty functions. For $i = 1, \ldots, m$, these functions are defined by $\phi_i^k = r_k f_i^+$, with $r_k \uparrow +\infty$. According to a result in [6], we have
$$\forall \rho \ge 0, \ \exists K := K(\rho) \in \mathbb{N}: \quad D_\rho(\mathrm{prox}_{g_i^k}, \mathrm{prox}_{g_i}) = 0 \quad \forall k \ge K.$$
Thus the variational distance assumption of Theorem 2.2 is automatically satisfied.

Quadratic penalty functions. For $i = 1, \ldots, m$, these penalty functions are given by
$$\phi_i^k = \frac{r_k}{2}(f_i^+)^2, \quad \text{with } r_k \uparrow +\infty.$$
In the light of a result in [6], we obtain the following estimate:
$$\forall \rho \ge 0, \ \exists \mu := \mu(\rho) > 0: \quad D_\rho(\mathrm{prox}_{g_i^k}, \mathrm{prox}_{g_i}) \le \frac{\mu}{r_k} \quad \forall k \in \mathbb{N}.$$
The assumption on $D_\rho$ in Theorem 2.2 is satisfied as long as $\sum_{k=1}^{\infty} \frac{\alpha_k}{r_k} < +\infty$.
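To see how an estimate of order $\mu/r_k$ can arise, consider a one-dimensional toy example of our own (it is not taken from [6]): $f(x) = x^2$ and $C = \{x \le 0\}$, i.e. $f_1(x) = x$, for which both proximal mappings have closed forms.

```python
import numpy as np

# f(x) = x^2, C = {x <= 0} (f1(x) = x), g = f + chi_C, g_k = f + (r_k/2) max(x, 0)^2.
def prox_g(v):
    # argmin_x { x^2 + 0.5 (x - v)^2 : x <= 0 }  ->  min(v / 3, 0)
    return min(v / 3.0, 0.0)

def prox_gk(v, rk):
    # argmin_x { x^2 + (rk / 2) max(x, 0)^2 + 0.5 (x - v)^2 }
    return v / 3.0 if v <= 0 else v / (3.0 + rk)

rho = 10.0
vs = np.linspace(-rho, rho, 2001)
for rk in [10.0, 100.0, 1000.0]:
    d_rho = max(abs(prox_gk(v, rk) - prox_g(v)) for v in vs)
    print(f"r_k = {rk:7.1f}   D_rho = {d_rho:.5f}   rho/(3 + r_k) = {rho/(3+rk):.5f}")
# Here D_rho = rho / (3 + r_k) exactly, i.e., a decay of order mu / r_k with mu = mu(rho).
```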
Exponential penalty functions. For $i = 1, \ldots, m$, these functions are defined by
$$\phi_i^k(x) = \frac{1}{s_k} \exp(r_k f_i(x)),$$
where $r_k, s_k > 0$, $\lim_{k \to +\infty} \frac{s_k}{r_k} = +\infty$, and $\lim_{k \to +\infty} s_k = +\infty$. By virtue of a result in [6], we obtain the following estimate:
$$\forall \rho \ge 0, \ \exists \mu := \mu(\rho) > 0, \ \exists K := K(\rho) \in \mathbb{N}: \quad D_\rho(\mathrm{prox}_{g_i^k}, \mathrm{prox}_{g_i}) \le \frac{\mu}{s_k} \quad \forall k \ge K.$$
The assumption on $D_\rho$ in Theorem 2.2 is satisfied as long as $\sum_{k=1}^{\infty} \frac{\alpha_k}{s_k} < +\infty$.

Finally, we would like to emphasize that the extension of the partial inverse method to primal-dual fixed-point problems is justified by the fact that there exist (firmly) nonexpansive mappings which are neither proximal mappings nor resolvent operators; this is the case, for instance, for the periodic problems considered in [8].

Acknowledgment: The author would like to thank Professor M. Z. Nashed for his pertinent remarks about the notion of inverse strongly monotone mappings.

REFERENCES

[1] H. H. Bauschke and P. L. Combettes, A weak-to-strong convergence principle for Fejér-monotone methods in Hilbert spaces, Mathematics of Operations Research, 26 (2001), no. 2, 248-264.
[2] F. E. Browder, Nonexpansive nonlinear operators in a Banach space, Proc. Natl. Acad. Sci. USA, 54 (1965), 1041-1044.
[3] C. L. Byrne, A unified treatment of some iterative algorithms in signal processing and image reconstruction, Inverse Problems, 20 (2004), 103-120.
[4] Y. Censor, A. Motova and A. Segal, Perturbed projections and subgradient projections for the multiple-sets split feasibility problem, Journal of Mathematical Analysis and Applications, 327 (2007), 1244-1256.
[5] K. Goebel and W. A. Kirk, Topics in Metric Fixed-Point Theory, Cambridge University Press, 1990.
[6] B. Lemaire, Coupling optimization methods and variational convergence, in Trends in Mathematical Optimization, Int. Series of Numer. Math., Birkhäuser Verlag, 84 (1988), 163-179.
[7] F. Liu and M. Z. Nashed, Regularization of nonlinear ill-posed variational inequalities and convergence rates, Set-Valued Analysis, 6 (1998), 313-344.
[8] J. J. Moreau, Rafle par un convexe variable, Séminaire d'Analyse Convexe, Montpellier, exposé no. 14 (1971).
[9] M. Z. Nashed, A decomposition relative to convex sets, Proc. Amer. Math. Soc., 19 (1968), 782-786.
[10] S. Reich, Weak convergence theorems for nonexpansive mappings in Banach spaces, J. Math. Anal. Appl., 67 (1979), 274-276.
[11] J. E. Spingarn, Partial inverse of a monotone operator, Appl. Math. Optim., 10 (1983), 247-265.
[12] H.-K. Xu, A variable Krasnosel'skii-Mann algorithm and the multiple-set split feasibility problem, Inverse Problems, 22 (2006), 2021-2034.
[13] J. Zhao and Q. Yang, A note on the Krasnosel'skii-Mann theorem and its generalizations, Inverse Problems, 23 (2007), 1011-1016.