A PARTIAL COMPLEMENT METHOD FOR APPROXIMATING SOLUTIONS OF A PRIMAL-DUAL FIXED-POINT PROBLEM

ABDELLATIF MOUDAFI*

Abstract. We study the convergence of the Mann iteration applied to the partial complement of a firmly nonexpansive operator with respect to a linear subspace of a Hilbert space, a new concept introduced here. A regularized version is also proposed. Furthermore, to motivate this concept, applications to robust regression procedures and to location problems are given.

Key words. Fixed point, firmly nonexpansive mapping, partial inverse, K-M algorithm.

AMS subject classifications (2000). 90C25, 49M45, 65C25

1. Introduction and preliminaries. Throughout, H is a real Hilbert space with inner product ⟨·, ·⟩ and induced norm ‖·‖. Let C be a nonempty closed convex subset of H. A mapping Q : C → C is said to be nonexpansive if ‖Q(x) − Q(y)‖ ≤ ‖x − y‖ for all x, y ∈ C. The mapping Q : C → C is said to be firmly nonexpansive if ‖Q(x) − Q(y)‖² ≤ ⟨Q(x) − Q(y), x − y⟩ for all x, y ∈ C. It is known that a mapping Q : C → C is firmly nonexpansive if and only if

    ‖Q(x) − Q(y)‖² + ‖(I − Q)x − (I − Q)y‖² ≤ ‖x − y‖²  for all x, y ∈ C,

where I is the identity operator on H. This shows that Q is firmly nonexpansive ⇔ I − Q is firmly nonexpansive. The class of firmly nonexpansive mappings plays an important role in optimization and nonlinear analysis, and contains, in particular, projections onto closed convex sets, proximal mappings of proper convex lower semicontinuous functions, and resolvents of maximal monotone operators. This class has been extensively studied up until now. It follows from a classical result due to Opial [11, Theorem 3] that, for any initial point x₀, the sequence of successive approximations

    x_{k+1} = Q(x_k)  ∀k ∈ ℕ

converges weakly to a fixed point of Q if such a point exists. The extension of this result to the relaxed iteration

    x_{k+1} = (1 − α_k) x_k + α_k Q(x_k)  ∀k ∈ ℕ,  α_k ∈ ]0, 2[,

under the condition Σ_{k=0}^{+∞} α_k (2 − α_k) = +∞, follows from [6, Corollary 3]. To achieve strong convergence, a regularized version has been introduced in [7], namely

    x_{k+1} = α_k u + (1 − α_k) Q(x_k)  ∀k ∈ ℕ,

*CEREGMIA, Université des Antilles-Guyane, Département Scientifique Interfacultaire, Campus de Schoelcher, 97230 Cedex, Martinique (F.W.I.). ([email protected]).

and the strong convergence was obtained in [9] provided that

    lim_{k→+∞} α_k = 0,  Σ_{k=0}^{+∞} α_k = +∞  and  lim_{k→+∞} |α_{k+1} − α_k| / α_{k+1}² = 0.
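A minimal numerical sketch of the relaxed iteration above may help fix ideas (the choice of Q as the projection onto the Euclidean unit ball, and all function names, are illustrative assumptions, not from the paper):

```python
import numpy as np

def mann_iteration(Q, x0, alpha, n_iter=200):
    """Relaxed iteration x_{k+1} = (1 - alpha_k) x_k + alpha_k Q(x_k)."""
    x = np.asarray(x0, dtype=float)
    for k in range(n_iter):
        a = alpha(k)                      # alpha_k must lie in ]0, 2[
        x = (1.0 - a) * x + a * Q(x)
    return x

# Q = projection onto the Euclidean unit ball: firmly nonexpansive,
# so the iterates converge weakly (here: in R^2) to a fixed point of Q.
def proj_unit_ball(x):
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

x_star = mann_iteration(proj_unit_ball, [3.0, 4.0], alpha=lambda k: 1.0)
# x_star = [0.6, 0.8], a fixed point of the projection
```

The constant choice α_k = 1 satisfies Σ α_k(2 − α_k) = +∞ and recovers the plain Picard iteration x_{k+1} = Q(x_k).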

The purpose of the present note is to study the following problem:

(1.1)    find x ∈ A, y ∈ B such that x = Q(x + y),

where A is a linear subspace of H, B := A⊥ is the orthogonal complement of A, and Q is a firmly nonexpansive mapping on H. Note that when A = H, (1.1) reduces to finding fixed points of the mapping Q, and that when A = {0}, (1.1) reduces to finding fixed points of its complement operator I − Q. In the interesting case when A is a proper subspace of H, inspired by J. E. Spingarn [13], we introduce the concept of the partial complement of Q with respect to the subspace A by

    S_A^Q := Proj_A ∘ Q + Proj_B ∘ (I − Q),

and we will show that (1.1) is equivalent to

(1.2)    find x ∈ A, y ∈ B such that x + y = S_A^Q(x + y).

This suggests the following strategy: to solve (1.1), we find a fixed point of S_A^Q and then project it onto A (resp. onto B), which provides x (resp. y). To approximate such solutions, we develop an algorithm which we call the method of partial complement. This procedure amounts to applying the Mann algorithm to the mapping S_A^Q. Our paper is organized as follows. In Section 2, we introduce our algorithm, prove a preliminary lemma, and then obtain a weak convergence theorem; a viscosity approximation version of our algorithm is also considered. In Section 3, the connection with the partial inverse method is made, and applications to the least absolute value (LAV) and Huber-M estimators, and to location problems, are given.

2. The main results. In what follows, if z ∈ H, z_A (resp. z_B) refers to the projection of z onto A (resp. onto B). For our problem, we obtain for S_A^Q the following algorithm:

    Given z_k, compute z_{k+1} = S_A^Q(z_k), then put x_k = (z_k)_A and y_k = (z_k)_B,

which is equivalent, by setting z_k = x_k + y_k, to the

Partial Complement Algorithm (PCA):
i) Initialization: choose x_0 ∈ A, y_0 ∈ B.
ii) a_k = Q(x_k + y_k).
iii) x_{k+1} = (a_k)_A, y_{k+1} = ((I − Q)(x_k + y_k))_B = y_k − a_k + x_{k+1}.

To begin with, let us give preliminary results which will be needed in the proof of our convergence result.

Lemma 2.1. z is a fixed point of S_A^Q if, and only if, (z_A, z_B) provides a solution to problem (1.1).

Proof. According to the definition of S_A^Q, we can write

    z ∈ Fix S_A^Q ⇔ z = (Q(z))_A + ((I − Q)(z))_B
                  ⇔ { z_A = (Q(z))_A and z_B = ((I − Q)(z))_B }
                  ⇔ { z_A = (Q(z))_A and z_B + (Q(z))_B = z_B }
                  ⇔ z = z_B + Q(z) ⇔ z_A = Q(z).

Hence (z_A, z_B) solves (1.1).

The following key lemma is due to H. H. Bauschke [1, Lemma 1].

Lemma 2.2. Let C be a closed linear subspace of H and let F : H → H be firmly nonexpansive. Then P_C F + (I − P_C)(I − F) is firmly nonexpansive.

Now we are in a position to prove the convergence of the proposed algorithm.

Theorem 2.3. Let Q be firmly nonexpansive on H and let A be a closed linear subspace of H. Then the operator S_A^Q is firmly nonexpansive, and the sequence (x_k, y_k) weakly converges to (x̄, ȳ), a solution of (1.1), if such a solution exists.

Proof. Since Q is firmly nonexpansive, Lemma 2.2 gives that S_A^Q is firmly nonexpansive. Now, by applying [11, Theorem 3] to S_A^Q, we obtain that z_k weakly converges to z̄, a fixed point of S_A^Q, which is equivalent, thanks to Lemma 2.1, to saying that (x̄, ȳ) is a solution of (1.1).

It is worth mentioning that we can consider a regularized form of (PCA) by adapting the viscosity approximation method. This leads to the

Viscosity Partial Complement Algorithm (VPCA):
i) Initialization: choose x_0 ∈ A, y_0 ∈ B and u ∈ H.
ii) a_k = Q(α_k u + (1 − α_k)(x_k + y_k)).
iii) x_{k+1} = (a_k)_A, y_{k+1} = α_k u_B + (1 − α_k) y_k + x_{k+1} − a_k.

We directly obtain the following convergence result:

Theorem 2.4. Let Q be a firmly nonexpansive mapping, A a closed linear subspace of H, and assume that

    lim_{k→+∞} α_k = 0,  Σ_{k=0}^{+∞} α_k = +∞  and  lim_{k→+∞} |α_{k+1} − α_k| / α_{k+1}² = 0.

Then the sequences (x_k) and (y_k) generated by (VPCA) strongly converge to x* and y*, respectively, where (x*, y*) stands for the solution of (1.1) such that ‖u − (x* + y*)‖ is minimal.

3. Applications.
- Monotone variational inclusions: An interesting case is obtained by letting Q := (I + T)⁻¹, where T is a maximal monotone operator. It is well known that in this case Q is firmly nonexpansive, and it is easy to check that finding fixed points of Q is equivalent to finding zeros of T.
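To make this resolvent case concrete, here is a minimal numerical sketch of (PCA); all concrete choices below (H = ℝ², A = span{e₁}, T(x) = x − b with hypothetical data b) are mine for illustration and do not appear in the paper:

```python
import numpy as np

# Sketch of (PCA) with Q a resolvent: H = R^2, A = span{e1},
# B = A-perp = span{e2}, T(x) = x - b, so Q = (I + T)^{-1} = (. + b)/2.
b = np.array([2.0, 4.0])
Q = lambda z: (z + b) / 2.0                   # resolvent of T
proj_A = lambda z: np.array([z[0], 0.0])
proj_B = lambda z: np.array([0.0, z[1]])

x, y = np.zeros(2), np.zeros(2)               # x0 in A, y0 in B
for _ in range(100):
    a = Q(x + y)                              # step ii)
    x_new = proj_A(a)                         # step iii)
    y = y - a + x_new                         # = ((I - Q)(x + y))_B, stays in B
    x = x_new

# Converges geometrically to x = (2, 0), y = (0, -4),
# which indeed satisfies x = Q(x + y).
```

The iterates here are exactly z_{k+1} = S_A^Q(z_k) with z_k = x_k + y_k, i.e. the plain Mann iteration with α_k = 1.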

In this setting, S_A^Q coincides with (I + T_A)⁻¹, where T_A stands for the partial inverse operator of T with respect to A, which is described by its graph in the following way:

    {(x_A + y_B, y_A + x_B); y ∈ T(x)}.

Moreover, our algorithm (PCA) is nothing but the associated partial inverse method introduced by Spingarn in [13].

- Robust regression procedures: Robustness is a problem that has received much attention in the statistical literature. The least absolute value (LAV) and the Huber-M (HME) estimators are currently attracting considerable attention when the errors have a contaminated Gaussian or long-tailed distribution. A unified formulation, which covers the LAV and the HME at the same time, consists in solving the following problem:

(3.1)    minimize_{x∈ℝⁿ} Φ_c(x) := Σ_{i=1}^n ρ_c([x − b]_i)  subject to x ∈ range(M),

where ρ_c is the inf-convolution of the absolute value function with the kernel (1/(2c))|·|², M is a matrix of size n × m, and [x − b]_i is the i-th component of the vector x − b. The dual of (3.1) takes the form

(3.2)    minimize_{y∈ℝⁿ} (c/2)‖y‖² + ⟨y, b⟩  subject to Mᵗy = 0 and γ_∞(y) ≤ 1,

where γ_∞ stands for the Chebyshev norm. According to Rockafellar [12], a vector x is an optimal solution of (3.1) and a vector y is optimal for (3.2) if and only if the following optimality conditions hold:

(3.3)    x ∈ range(M),  Mᵗy = 0,  x ∈ {cy + b} + N_{B∞}(y),

where N_{B∞} denotes the normal cone of the unit ball of the Chebyshev norm. To apply the numerical method proposed in this note to (3.3), which generates sequences globally convergent to a primal and a dual solution, it should be noticed that (3.1) can be put in the form (1.1) by letting A = {x ∈ ℝⁿ; x ∈ range(M)} and by considering the firmly nonexpansive mapping

    prox_{Φ_c}(x) := argmin_{z∈ℝⁿ} { Φ_c(z) + (1/2)‖z − x‖² }.

The latter can be explicitly evaluated as

    prox_{Φ_c}(x) = x − proj_{B∞}((x − b)/(1 + c)),
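This closed form can be checked numerically; the sketch below compares it, componentwise, against a brute-force minimization (the explicit Huber expression for ρ_c and all function names are my own reconstruction, stated here as assumptions):

```python
import numpy as np

def rho_c(t, c):
    # inf-convolution of |.| with (1/(2c))|.|^2, i.e. the Huber function
    return np.where(np.abs(t) <= c, t**2 / (2*c), np.abs(t) - c/2)

def prox_closed_form(x, b, c):
    # prox_{Phi_c}(x) = x - proj_{B_inf}((x - b) / (1 + c))
    return x - np.clip((x - b) / (1 + c), -1.0, 1.0)

def prox_grid(x, b, c, grid=np.linspace(-10, 10, 200001)):
    # Phi_c is separable, so the prox acts componentwise; minimize
    # rho_c(z - b_i) + (z - x_i)^2 / 2 by brute force on a fine grid.
    out = np.empty_like(x)
    for i in range(len(x)):
        vals = rho_c(grid - b[i], c) + 0.5 * (grid - x[i])**2
        out[i] = grid[np.argmin(vals)]
    return out

x = np.array([3.0, -0.2, 1.5])
b = np.array([0.5,  0.0, 1.0])
c = 0.7
assert np.allclose(prox_closed_form(x, b, c), prox_grid(x, b, c), atol=1e-3)
```

For instance, with x₁ = 3, b₁ = 0.5, c = 0.7, the clipped term is 1 and the prox value is 2, which agrees with minimizing (z − 0.85) + (z − 3)²/2 on the linear branch of ρ_c.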

see [3] for more details.

- Location problems: Many authors have studied extensions of the well-known Fermat-Weber problem. Here we consider the following generalization, considered in [2] (see also [8]):

(3.4)    min_{x∈D} f(x) := ⟨c, x⟩ + Σ_{i=1}^n α_i ‖M^i x − a_i‖^{β_i},

where c ∈ H, a_i ∈ H and x ∈ H, α_i > 0, β_i ≥ 1, M^i, i = 1, ..., n, are continuous linear operators from H to H, H is a Hilbert space, D_j ⊂ H, j = 1, ..., m, are closed convex sets, D_0 is a linear subspace of H, and D = ∩_{j=0}^m D_j. Using the indicator functions δ_{D_j} of the subsets D_j, defined by δ_{D_j}(x) = 0 if x ∈ D_j and δ_{D_j}(x) = +∞ otherwise, (3.4) can be rewritten as

(3.5)    min_{x∈D} F(x) := ⟨c, x⟩ + Σ_{i=1}^n ‖M^i x − a_i‖^{β_i} + Σ_{j=1}^m δ_{D_j}(x),

where we put α_i = 1 for i = 1, ..., n without loss of generality. The corresponding necessary and sufficient optimality condition at x̄ can be written in the following form:

(3.6)    q_i ∈ ∂(‖M^i x̄ − a_i‖^{β_i}), i = 1, ..., n;
         r_j ∈ N_{D_j}(x̄), j = 1, ..., m, where N_{D_j} stands for the normal cone to D_j;
         c + Σ_{i=1}^n q_i + Σ_{j=1}^m r_j ∈ D_0⊥.

In order to put these optimality conditions in the form (1.1), we introduce a Hilbert space E, a suitable subspace A, and a mapping Q as follows: E := H^{n+m+1} with the inner product

    ⟨u, v⟩ := Σ_{i=1}^{n+m+1} ⟨u_i, v_i⟩

for u = (u_1, ..., u_{n+m+1}), v = (v_1, ..., v_{n+m+1}) ∈ E, and the subspace

    A := {y ∈ E; y = (y_1, ..., y_{n+m+1}); y_j = x, j = 1, ..., n + m + 1; x ∈ D_0},

whose orthogonal complement is

    B := {p ∈ E; p = (p_1, ..., p_{n+m+1}); Σ_{i=1}^{n+m+1} p_i ∈ D_0⊥},

and the operator Q defined on E by

    Q(y) := Π_{i=1}^{n+m+1} Q_i(x)  with y = (x, ..., x),

where

    Q_i(x) := prox_{‖M^i(·) − a_i‖^{β_i}}(x) = argmin_ξ { ‖M^i(ξ) − a_i‖^{β_i} + (1/2)‖ξ − x‖² } for i = 1, ..., n,
    Q_{n+j}(x) := proj_{D_j}(x) for j = 1, ..., m,  and  Q_{n+m+1}(x) := x − c.

With these notations, the optimality conditions can be written as

    find ȳ ∈ A, p̄ ∈ B such that ȳ = Q(ȳ + p̄).

It is worth mentioning that numerous other applications can be given in various fields, including the message routing problem, which frequently appears when dealing with the operation of communication or transportation networks. Finally, we would like to emphasize that the extension of the partial inverse method to primal-dual fixed-point problems is justified by the fact that there exist (firmly) nonexpansive mappings which are neither proximal mappings nor resolvent operators; this is the case, for instance, for the periodic problems considered in [10].

Acknowledgment: The author would like to thank Professor M. Overton for his kind invitation to the Courant Institute of Mathematical Sciences, New York University.

REFERENCES

[1] H. H. Bauschke, A note on the paper by Eckstein and Svaiter on "General projective splitting methods for sums of maximal monotone operators", SIAM Journal on Control and Optimization 48 (2009), 2513-2515.
[2] H. Benker, A. Hamel and C. Tammer, A proximal point algorithm for control approximation problems, Mathematical Methods of Operations Research 43, 3 (1996), 261-280.
[3] M. L. Bougeard and C. D. Caquineau, Parallel proximal decomposition algorithms for robust estimation, Annals of Operations Research 90, 4 (1999), 247-270.
[4] F. E. Browder, Nonexpansive nonlinear operators in Banach space, Proc. Natl. Acad. Sci. 54 (1965), 1041-1044.
[5] K. Goebel and W. A. Kirk, Topics in Metric Fixed-Point Theory, Cambridge University Press, 1990.
[6] C. W. Groetsch, A note on segmenting Mann iterates, J. Math. Anal. Appl. 40 (1972), 369-372.
[7] B. Halpern, Fixed points of nonexpanding maps, Bull. Amer. Math. Soc. 73 (1967), 957-961.
[8] H. Idrissi, O. Lefebvre and Ch. Michelot, A primal-dual algorithm for a constrained Fermat-Weber problem involving mixed norms, Operations Research 22, 4 (1988), 313-330.
[9] P.-L. Lions, Approximation de points fixes de contractions, C. R. Acad. Sci. Paris Sér. A-B 284 (1977), 1357-1359.
[10] J. J. Moreau, Rafle par un convexe variable, Séminaire d'Analyse Convexe, Montpellier, exposé no. 14 (1971).
[11] Z. Opial, Weak convergence of the sequence of successive approximations for nonexpansive mappings, Bull. Amer. Math. Soc. 73 (1967), 591-597.
[12] R. T. Rockafellar, Convex Analysis, Princeton University Press, 1970.
[13] J. E. Spingarn, Partial inverse of a monotone operator, Appl. Math. Optim. 10 (1983), 247-265.
