Wang et al. Journal of Inequalities and Applications 2014, 2014:435 http://www.journalofinequalitiesandapplications.com/content/2014/1/435
RESEARCH    Open Access

Preconditioning methods for solving a general split feasibility problem

Peiyuan Wang 1,2*, Haiyun Zhou 2,3 and Yu Zhou 2

Dedicated to Professor Shih-sen Chang on the occasion of his 80th birthday.

*Correspondence: [email protected]
1 The Second Training Base, Naval Aviation Institution, Huludao, 125001, China
2 Department of Mathematics, Shijiazhuang Mechanical Engineering College, Shijiazhuang, 050003, China
Full list of author information is available at the end of the article
Abstract
We introduce and study a new general split feasibility problem (GSFP) in a Hilbert space. This problem generalizes the split feasibility problem (SFP) by allowing a nonlinear continuous operator. We apply preconditioning methods to increase the efficiency of the CQ algorithm, and two general preconditioning CQ algorithms for solving the GSFP are presented. We also propose a new inexact method to approximate the preconditioner. Convergence theorems are established under projections with respect to special norms. Some numerical results illustrate the efficiency of the proposed methods.

MSC: 46E20; 47J20; 47J25; 47L50

Keywords: general split feasibility problem; general variational inequality; preconditioning method; projection method
1 Introduction
As preconditioning methods can improve the condition number of an ill-posed system matrix, the convergence rate of an iterative algorithm can also be improved [1]. In [2, 3], a preconditioning method is applied to modify the projected Landweber algorithm for solving a linear feasibility problem (LFP). The modified algorithm is

x_{n+1} = P_C(x_n − τDA*(Ax_n − b)), n ≥ 0,

where τ ∈ (0, 2/‖DA*A‖), A : X → Y is a linear and continuous operator, ‖·‖ denotes the 2-norm, X and Y are Hilbert spaces, and b ∈ Y is the datum of the problem, corrupted by noise or experimental errors. Under nonlinear conditions, Auslender [4] and Dafermos [5] proposed an algorithm to solve variational inequalities (VI),

x_{n+1} = P_S(x_n − τ_n G^{−1}F(x_n)), n ≥ 0,  (1.1)

where P_S is the projection operator onto S with respect to the norm ‖·‖_G. Bertsekas and Gafni [6] and Marcotte and Wu [7] improved it with variable symmetric positive definite matrices G_n, and Fukushima [8] modified it by a relaxed projection method with half-spaces.

©2014 Wang et al.; licensee Springer. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Then, in [9], Yang established the convergence of Auslender's algorithm under the weak co-coercivity of F. Further, the general variational inequality problem (GVIP) has been investigated by many authors (see [10–13]). It is to find u* ∈ R^n such that g(u*) ∈ K and

⟨F(u*), g(u) − g(u*)⟩ ≥ 0, ∀g(u) ∈ K,
where K is a nonempty closed convex set in R^n and F, g : R^n → R^n are nonlinear operators. In [12], Santos and Scheimberg extended and applied (1.1) to solve the GVI. However, although a general split feasibility problem (GSFP) is equivalent to a GVI, preconditioning methods for solving the GSFP have not been studied. By introducing a convex minimization problem, the split feasibility problem (SFP) is equivalent to a variational inequality problem (VIP) involving a Lipschitz continuous and inverse strongly monotone (ism) operator; see [14–16]. Similarly, in the same way, in this paper we introduce a new GSFP that is equivalent to a GVI involving a Lipschitz continuous and co-coercive operator [16–19]. On the other hand, Mohammad and Abdul [23] considered a general split feasibility problem in infinite-dimensional real Hilbert spaces. It is to find x* such that

x* ∈ ⋂_{i=1}^∞ C_i,  Ax* ∈ ⋂_{j=1}^∞ Q_j,

where A : H1 → H2 is a bounded linear operator and {C_i}_{i=1}^∞, {Q_j}_{j=1}^∞ are families of nonempty closed convex subsets of H1 and H2, respectively.

Let C and Q be nonempty closed convex subsets of the real Hilbert spaces H1 and H2, respectively. We consider a general split feasibility problem which is different from the one in [23]. Our GSFP is to find
x* ∈ H1, g(x*) ∈ C such that Ag(x*) ∈ Q,  (1.2)
where A : H1 → H2 is a bounded linear operator and g : H1 → C is a continuous operator. The SFP in [24] and the GSFP in [23] are particular cases of GSFP (1.2). It has applications in many special fields, such as signal decryption, demodulation of digital signals, and noise processing. In order to solve GSFP (1.2), two preconditioning algorithms are developed in this paper, following the iterative scheme

g(x_{n+1}) = P_C(g(x_n) − γDA*(I − P_Q)Ag(x_n)), n ≥ 0,

where the two general constraints C and Q are handled by projections with respect to norms corresponding to symmetric positive definite matrices. Define the solution set of (1.2) as Γ = {x* ∈ H1 | g(x*) ∈ C, Ag(x*) ∈ Q}. Assuming Γ is nonempty, and by virtue of the related G-co-coercive operator, we can establish the convergence of the proposed algorithms.

The paper is organized as follows. Section 2 presents two useful propositions. In Section 3, we define the algorithms with fixed preconditioner, variable preconditioner, relaxed projection, and preconditioner approximation, and analyze their convergence. Numerical results are reported in Section 4. Finally, Section 5 gives some concluding remarks.
2 Preliminaries
In what follows, we state some concepts and propositions. The SFP is to find a point x* ∈ C such that Ax* ∈ Q, where A : H1 → H2 is a bounded linear operator [24]. Let G be a symmetric positive definite matrix, and set D = G^{−1}. Then the norm ‖·‖_G is defined by ‖x‖_G = √⟨x, Gx⟩ for all x ∈ H. We denote by P_C the projection operator onto C with respect to the norm ‖·‖_G [], i.e.,

P_C(x) = arg min_{y∈C} ‖x − y‖_G.
Let λ_min and λ_max be the minimum and maximum eigenvalues of G, respectively. Then, for the 2-norm ‖·‖ [, ], we have

λ_min‖x‖² ≤ ‖x‖²_G ≤ λ_max‖x‖², ∀x ∈ H.  (2.1)
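The two-sided bound (2.1) can be checked numerically. The following sketch (ours, not part of the paper) builds a random symmetric positive definite G with NumPy and verifies the inequality for a random vector:

```python
import numpy as np

# Sanity check of lambda_min ||x||^2 <= ||x||_G^2 <= lambda_max ||x||^2
# for a randomly generated symmetric positive definite matrix G.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
G = M @ M.T + 5 * np.eye(5)        # symmetric positive definite by construction
lam = np.linalg.eigvalsh(G)        # eigenvalues in ascending order
lam_min, lam_max = lam[0], lam[-1]

x = rng.standard_normal(5)
norm_sq = x @ x                    # ||x||^2 (2-norm squared)
g_norm_sq = x @ G @ x              # ||x||_G^2 = <x, Gx>
```

Both inequalities hold for every x, since G is positive definite.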
Proposition 2.1 [, ] Let C be a nonempty closed convex subset of H. For all x, y ∈ H and all z ∈ C, the G-projection operator onto C has the following properties:
(i) ⟨G(x − P_C(x)), z − P_C(x)⟩ ≤ 0;
(ii) ‖x ± y‖²_G = ‖x‖²_G ± 2⟨x, Gy⟩ + ‖y‖²_G;
(iii) ‖P_C(x) − P_C(y)‖²_G ≤ ‖x − y‖²_G − ‖(P_C(x) − x) − (P_C(y) − y)‖²_G.

Let D̃ be a symmetric positive definite matrix with AᵀD̃ = DAᵀ. Then the norm ‖·‖_D̃ is defined by ‖y‖_D̃ = √⟨y, D̃y⟩ for all y ∈ H2. We denote by P_Q the projection operator onto Q with respect to the norm ‖·‖_D̃. As for the SFP, when GSFP (1.2) has no solution (refer to [, , ]), we can define

f_g(x) = ½‖(I − P_Q)Ag(x)‖²

and

f_D̃^g(x) = ½‖Ag(x) − P_Q Ag(x)‖²_D̃ = ½⟨D̃(Ag(x) − P_Q Ag(x)), Ag(x) − P_Q Ag(x)⟩;

f_D̃^g(x) is also convex and continuously differentiable on H. Its gradient operator is

∇f_D^g(x) = DAᵀ(I − P_Q)Ag(x).

When D = I, we define ∇f^g(x) = Aᵀ(I − P_Q)Ag(x); ∇f^g(x) is also Lipschitz continuous.

Proposition 2.2 Consider the constrained minimization problem
min{f_D̃^g(x) | x ∈ H s.t. g(x) ∈ C}.

Its stationary point x* ∈ H satisfies

x* ∈ H, g(x*) ∈ C such that ⟨∇f_D^g(x*), g(x) − g(x*)⟩ ≥ 0, ∀g(x) ∈ C,
which is a general variational inequality involving a Lipschitz continuous and G-co-coercive operator.

Proof For all x, y ∈ H, from (2.1) and the corresponding lemma in [], we have

‖∇f_D^g(x) − ∇f_D^g(y)‖_G ≤ √λ_max(G) ‖∇f_D^g(x) − ∇f_D^g(y)‖
 ≤ (λ_max(D)/λ_min(D)) ‖∇f^g(x) − ∇f^g(y)‖
 ≤ (λ_max(D)/λ_min(D)) L ‖g(x) − g(y)‖
 ≤ (λ_max(D)/λ_min(D)) L ‖g(x) − g(y)‖_G,

where L is the largest eigenvalue of AᵀA; therefore the operator ∇f_D^g is Lipschitz continuous. Moreover,

⟨∇f_D^g(x) − ∇f_D^g(y), g(x) − g(y)⟩ ≥ λ_min(D) ⟨∇f^g(x) − ∇f^g(y), g(x) − g(y)⟩
 ≥ (λ_min(D)/L) ‖∇f^g(x) − ∇f^g(y)‖²
 ≥ (λ_min(D)/(L·λ_max(D))) ⟨D(∇f^g(x) − ∇f^g(y)), GD(∇f^g(x) − ∇f^g(y))⟩
 = (λ_min(D)/(L·λ_max(D))) ‖∇f_D^g(x) − ∇f_D^g(y)‖²_G.

Thus, the operator ∇f_D^g is G-co-coercive. □
3 Main results
In this section, we propose several modified CQ algorithms with preconditioning techniques and prove their convergence.

3.1 General preconditioning CQ algorithm
In this part, we present our first algorithm, with fixed stepsize and preconditioner, to solve GSFP (1.2). The algorithm is as follows.

Algorithm 3.1 Choose any x_0 ∈ H such that g(x_0) ∈ C. Given x_n ∈ H such that g(x_n) ∈ C, we calculate x_{n+1} such that

g(x_{n+1}) = P_C(g(x_n) − γ∇f_D^g(x_n)), n ≥ 0,  (3.1)

where γ ∈ (0, 2/(L·L_D)), and L and L_D are the largest eigenvalues of AᵀA and D, respectively.
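To make the iteration concrete, here is an illustrative sketch (ours, not the authors' code) of the fixed-preconditioner scheme in the special case g = I, with assumed simple sets: C the nonnegative orthant, Q the Euclidean unit ball, D a diagonal Jacobi-style preconditioner (for diagonal D the G-projection onto the orthant coincides with the coordinatewise clamp), and D̃ = I. All names and set choices are our own assumptions:

```python
import numpy as np

# Hedged sketch of x_{n+1} = P_C(x_n - gamma * D A^T (I - P_Q) A x_n)
# with C = nonnegative orthant and Q = unit ball (illustrative choices).
rng = np.random.default_rng(1)
A = rng.random((4, 6))

def proj_C(x):                       # projection onto {x >= 0}; for diagonal D
    return np.maximum(x, 0.0)        # the G-projection is coordinatewise too

def proj_Q(y):                       # Euclidean projection onto the unit ball
    nrm = np.linalg.norm(y)
    return y if nrm <= 1.0 else y / nrm

D = np.diag(1.0 / np.diag(A.T @ A))  # simple Jacobi-style preconditioner
L = np.linalg.eigvalsh(A.T @ A)[-1]  # largest eigenvalue of A^T A
L_D = np.diag(D).max()               # largest eigenvalue of diagonal D
gamma = 1.0 / (L * L_D)              # inside the interval (0, 2/(L*L_D))

x = rng.random(6)
for _ in range(2000):
    Ax = A @ x
    x = proj_C(x - gamma * (D @ (A.T @ (Ax - proj_Q(Ax)))))

residual = np.linalg.norm(A @ x - proj_Q(A @ x))   # distance of Ax to Q
```

The residual measures how far Ax is from Q; since x = 0 is feasible here, the iterates approach the solution set.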
Now we establish the weak convergence of Algorithm 3.1.

Theorem 3.1 Suppose that the operators g : H → C and g^{−1} : C → H are continuous. If Γ ≠ ∅, then the sequence {x_n} generated by Algorithm 3.1 converges to a solution of GSFP (1.2).

Proof First, every x* ∈ Γ satisfies

g(x*) = P_C(g(x*) − γ∇f_D^g(x*)).

From (3.1), (iii), (ii), and the definition of ism, we have

‖g(x_{n+1}) − g(x*)‖²_G = ‖P_C(g(x_n) − γ∇f_D^g(x_n)) − P_C(g(x*) − γ∇f_D^g(x*))‖²_G
 ≤ ‖g(x_n) − g(x*) − γ(∇f_D^g(x_n) − ∇f_D^g(x*))‖²_G
 ≤ ‖g(x_n) − g(x*)‖²_G − 2γ⟨g(x_n) − g(x*), ∇f^g(x_n) − ∇f^g(x*)⟩ + γ²⟨∇f_D^g(x_n) − ∇f_D^g(x*), ∇f^g(x_n) − ∇f^g(x*)⟩
 ≤ ‖g(x_n) − g(x*)‖²_G − (2γ/L − γ²L_D)‖∇f^g(x_n) − ∇f^g(x*)‖²,  (3.2)

as 2γ/L − γ²L_D > 0. This implies that the sequence {‖g(x_n) − g(x*)‖_G}_{n∈N} is monotonically decreasing and hence convergent; in particular, the sequence {g(x_n)}_{n∈N} is bounded. Consequently, we get from (3.2)

lim_{n→∞} ‖∇f^g(x_n) − ∇f^g(x*)‖ = lim_{n→∞} ‖∇f^g(x_n)‖ = 0.  (3.3)

Moreover, for each g(x_n) ∈ C, from (iii) and (3.1) we have

‖g(x_{n+1}) − g(x_n)‖_G ≤ γ‖∇f_D^g(x_n)‖_G ≤ γL_D‖∇f^g(x_n)‖_G ≤ γL_D√λ_max(G) ‖∇f^g(x_n)‖.

Then, by virtue of (3.3), we have

lim_{n→∞} ‖g(x_{n+1}) − g(x_n)‖_G = 0.  (3.4)
Hence, there exists a subsequence {g(x_{n_j})}_{j∈N} of {g(x_n)}_{n∈N} such that

lim_{j→∞} ‖g(x_{n_j+1}) − g(x_{n_j})‖_G = 0.

Thus, {g(x_{n_j})}_{j∈N} is also bounded. Let x̄ be an accumulation point of {x_n}; then the subsequence {x_{n_j}}_{j∈N} → x̄ as j → ∞. Because of the continuity of g, there exists an accumulation point g(x̄) ∈ C of the
sequence {g(xn )}n∈N ; for the subsequence {g(xj )}j∈N , we have {g(xj )}j∈N → g(¯x) as j → ∞. After that, from (.) we obtain lim ∇fg (xnj ) = ∇fg (¯x) = AT Ag(¯x) – AT PQ Ag(¯x) = ,
j→∞
that is, Ag(¯x) ∈ Q. We use x¯ in place of x∗ in (.) and obtain that {g(xn ) – g(¯x)G } is convergent. Because its subsequence {g(xnj ) – g(¯x)G } → , then we get that {g(xn )}n∈N converges to g(¯x) as j → ∞. As well as g – is continuous, we finally have lim xn = x¯ ∈ .
n→∞
Therefore, x¯ is a solution of GSFP (.).
From Algorithm 3.1 and Theorem 3.1 we can deduce the following results easily.

Corollary 3.1 If g = I, then GSFP (1.2) reduces to the SFP, and Algorithm 3.1 reduces to a preconditioning CQ (PCQ) algorithm: for any x_0 ∈ H,

x_{n+1} = P_C(x_n − γ∇f_D(x_n)), n ≥ 0,  (3.5)

where γ ∈ (0, 2/(L·L_D)), and L and L_D are the largest eigenvalues of AᵀA and D, respectively. P_C and P_Q are still the projection operators onto C and Q with respect to the norms ‖·‖_D and ‖·‖_D̃, respectively.

Corollary 3.2 If g = I, D = I, and D̃ = I, then the GSFP reduces to the SFP, and Algorithm 3.1 reduces to the CQ algorithm proposed in [25].

Corollary 3.3 If g = I and D̃ = I, with P_C and P_Q the projections onto C and Q with respect to the norm ‖·‖, set F(x_n) = DAᵀ(I − P_Q)ADx_n; then (3.5) transforms into the algorithm in [26],

x_{n+1} = P_C(x_n − γF(x_n)), n ≥ 0,

where γ ∈ (0, 2/L̃) and L̃ = ‖DAᵀ‖². Then the GSFP reduces to the extended split feasibility problem (ESFP) in [26].
3.2 An algorithm with variable projection metric
The algorithms above can speed up the convergence of the CQ algorithm, but the stepsize and preconditioner are fixed. In this subsection, we extend the results in [] and construct an iterative scheme whose stepsize and preconditioner D_n vary from one iteration to the next. As a key ingredient, D_n changes either arbitrarily or following some rules, so as to drive the convergence and achieve better results.

Let D_n and D̃_n be two symmetric positive definite matrices for n = 0, 1, 2, . . . . Denote by P_C and P_Q the projections onto C and Q with respect to the norms ‖·‖_{D_n} and ‖·‖_{D̃_n}, respectively. Let χ be a set of symmetric positive definite matrices. We have the following algorithm.
Algorithm 3.2 Choose any x_0 ∈ H such that g(x_0) ∈ C. Given x_n ∈ H such that g(x_n) ∈ C and D_n ∈ χ, we compute x_{n+1} such that

g(x_{n+1}) = P_C(g(x_n) − γ_n D_n∇f^g(x_n)), n ≥ 0,  (3.6)

where γ_n ∈ (0, 2/(L·M_D)), L is the largest eigenvalue of AᵀA, and M_D is the minimum of the largest eigenvalues L_{D_n} of the matrices D_n.
Remark 3.1 Define d_n = ‖g(x_{n+1}) − g(x_n)‖_{G_n}. For the next iteration, D_{n+1} is either chosen arbitrarily from χ or kept equal to D_n, depending on whether d_n has decreased or not. More particularly, we define a scalar d̄_n with initial value d̄_0 = ∞. Having chosen a scalar α ∈ (0, 1), at the nth iteration d̄_{n+1} is calculated by

d̄_{n+1} = αd̄_n, if d_n ≤ d̄_n;  d̄_{n+1} = d̄_n, if d_n > d̄_n;

then we select

D_{n+1} = any D ∈ χ, if d̄_{n+1} < d̄_n;  D_{n+1} = D_n, if d̄_{n+1} = d̄_n.

Theorem 3.2 If Γ ≠ ∅, then the sequence {x_n} generated by Algorithm 3.2 converges to a solution of GSFP (1.2).

Proof To obtain a variable D_n at each iteration while keeping the convergence of Algorithm 3.2, d_n must exhibit descending behavior for n = 0, 1, 2, . . . . We first show that

lim_{n→∞} d_n = 0.  (3.7)
Indeed, if (3.7) is not true, then lim_{n→∞} d_n > 0, and D_n must have changed only a finite number of times; denote this number by κ ∈ N. Let x* ∈ Γ be a solution of the GSFP. Referring to (3.2), for n > κ we have

‖g(x_{n+1}) − g(x*)‖²_{G_κ} ≤ ‖g(x_n) − g(x*)‖²_{G_κ} − (2γ_κ/L − γ_κ²L_{D_κ})‖∇f^g(x_n) − ∇f^g(x*)‖²,

as M_D = min{L_{D_n} | n = 0, 1, . . . , κ} and 2γ_κ/L − γ_κ²L_{D_κ} > 0. Then, following the proof of Theorem 3.1, we also get

lim_{n→∞} ‖∇f^g(x_n)‖ = 0  (3.8)

and

lim_{n→∞} d_n = lim_{n→∞} ‖g(x_{n+1}) − g(x_n)‖_{G_κ} = 0.  (3.9)

Equation (3.9) contradicts the above hypothesis, so (3.7) is true.
By using (3.6) and (iii), for the nth iteration we have

d_n = ‖g(x_{n+1}) − g(x_n)‖_{G_n}
 ≥ ‖g(x_n) − P_C(g(x_n))‖_{G_n} − ‖g(x_{n+1}) − P_C(g(x_n))‖_{G_n}
 = ‖g(x_n) − P_C(g(x_n))‖_{G_n} − ‖P_C(g(x_n) − γ_n D_n∇f^g(x_n)) − P_C(g(x_n))‖_{G_n}
 ≥ ‖g(x_n) − P_C(g(x_n))‖_{G_n} − γ_n L_{D_n}‖∇f^g(x_n)‖_{G_n},

where γ_n L_{D_n} > 0. Then from (3.7) and (3.8) we know that

lim_{n→∞} ‖∇f^g(x_n)‖_{G_n} = lim_{n→∞} ‖Aᵀ(I − P_Q)Ag(x_n)‖_{G_n} = 0,

so in the limit Ag(x̄) ∈ Q. By virtue of (3.7), we also have

lim_n ‖g(x_n) − P_C(g(x_n))‖ = 0.

This means that at least a subsequence of {g(x_n)}_{n∈N} converges to a point g(x̄) with x̄ ∈ Γ. Arguing with accumulation points as in the proof of Theorem 3.1, we conclude that {x_n} converges to a solution of the GSFP. □
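The selection rule of Remark 3.1 can be sketched as follows. This is an illustration with hypothetical names; the paper initializes d̄_0 = ∞, while this sketch uses a large finite value so that α·d̄ actually shrinks in floating point:

```python
import numpy as np

def select_metric(d_n, d_bar, D_n, alpha, fresh_D):
    # Shrink the reference level when d_n has decreased below it; only a
    # shrunk level permits switching to a new preconditioner from chi.
    if d_n <= d_bar:
        return alpha * d_bar, fresh_D   # progress: shrink level, allow new D
    return d_bar, D_n                   # no progress: keep level and old D

alpha = 0.5
d_bar = 1e6              # stands in for d_bar_0 = infinity in the paper
D = np.eye(3)            # D_0 = I
d_bar, D = select_metric(1.0, d_bar, D, alpha, 2.0 * np.eye(3))
```

After this call d̄ has been halved and a new preconditioner adopted; a later iteration with larger d_n would leave both unchanged.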
3.3 Some methods for execution
In Algorithms 3.1 and 3.2 it remains difficult to implement the projections P_C and P_Q with respect to the defined norms, especially when C and Q are general closed convex sets. According to the relaxed method in [, , ], we consider the above algorithms when the closed convex subsets C and Q have the following particular form:

C = {g(x) ∈ H1 | c(g(x)) ≤ 0} and Q = {Ag(x) ∈ H2 | q(Ag(x)) ≤ 0},

where c : H1 → R and q : H2 → R are convex functions. C_n and Q_n are given as

C_n = {g(x) ∈ H1 | c(g(x_n)) + ⟨ξ_n, g(x) − g(x_n)⟩ ≤ 0},
Q_n = {Ag(x) ∈ H2 | q(Ag(x_n)) + ⟨η_n, Ag(x) − Ag(x_n)⟩ ≤ 0},

where ξ_n ∈ ∂c(g(x_n)) and η_n ∈ ∂q(Ag(x_n)). Here, we also replace P_C and P_Q by P_{C_n} and P_{Q_n}. However, in this paper (take Algorithm 3.2 for example) the projections are with respect to the norms corresponding to G_n and D̃_n, so we should use the following formulas to calculate them. For all z ∈ H1 and y ∈ H2,
P_{C_n}(z) = z − ((c(g(x_n)) + ⟨D_nξ_n, z − g(x_n)⟩)/‖ξ_n‖²_{D_n}) D_nξ_n, if c(g(x_n)) + ⟨D_nξ_n, z − g(x_n)⟩ > 0; P_{C_n}(z) = z otherwise,

and

P_{Q_n}(y) = y − ((q(Ag(x_n)) + ⟨D̃_nη_n, y − Ag(x_n)⟩)/‖η_n‖²_{D̃_n}) D̃_nη_n, if q(Ag(x_n)) + ⟨D̃_nη_n, y − Ag(x_n)⟩ > 0; P_{Q_n}(y) = y otherwise.
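For intuition, a half-space projection of this kind can be sketched as follows. This is our own illustrative implementation of the standard D-metric projection onto a linearized constraint {u : c0 + ⟨ξ, u − u0⟩ ≤ 0}; the matrix, subgradient, and anchor point are randomly generated assumptions, not data from the paper:

```python
import numpy as np

# D-metric (G = D^{-1}) projection of z onto the half-space
# H = {u : c0 + <xi, u - u0> <= 0}, i.e. a relaxed set like C_n.
rng = np.random.default_rng(2)
n = 5
M = rng.standard_normal((n, n))
D = M @ M.T + n * np.eye(n)        # symmetric positive definite preconditioner
xi = rng.standard_normal(n)        # subgradient xi_n of c at g(x_n)
u0 = rng.standard_normal(n)        # anchor point g(x_n)
c0 = 1.0                           # linearization value c(g(x_n))

def proj_halfspace_D(z):
    viol = c0 + xi @ (z - u0)      # linearized constraint value at z
    if viol <= 0:
        return z                   # already inside the half-space
    return z - (viol / (xi @ D @ xi)) * (D @ xi)   # D-metric projection step

z = rng.standard_normal(n)
p = proj_halfspace_D(z)
```

The projected point satisfies the linearized constraint with equality when z was outside, and projecting twice changes nothing.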
Set z = g(x_n) − γ_n D_n∇f^g(x_n) and y = Ag(x_n), and let x̄ ∈ H1 be an accumulation point of {x_n}_{n∈N}. From the proof above, it is easy to deduce that

lim_{n→∞} (c(g(x_n)) + ⟨D_nξ_n, z − g(x_n)⟩) = c(g(x̄)) ≤ 0,
lim_{n→∞} (q(Ag(x_n)) + ⟨D̃_nη_n, y − Ag(x_n)⟩) = q(Ag(x̄)) ≤ 0.
Therefore g(x̄) ∈ C ⊆ C_n and Ag(x̄) ∈ Q ⊆ Q_n, so with the projections P_{C_n} and P_{Q_n}, x̄ is a solution of the GSFP.

Next, we present a new approximation method to estimate γ_n and D_n in Algorithm 3.2. If Γ ≠ ∅, then for all x* ∈ Γ and n ≥ 0 we have

D_n∇f^g(x*) = D_nAᵀAg(x*) − D_nAᵀP_{Q_n}Ag(x*) = 0.

Under the ideal condition D_nAᵀA ≈ I the solution would be at hand, but unfortunately (AᵀA)^{−1} cannot be calculated directly when A is a large matrix in practice. Note that

AᵀAg(x*) = AᵀP_{Q_n}Ag(x*) = λg(x*),

where λ is an eigenvalue of AᵀA. Let D_0 = I; for the nth iteration, we have the following (j, j) approximation of D_{n+1}:
D_{n+1}^{jj} = [g(x_n)]_j / [AᵀP_{Q_n}Ag(x_n)]_j, if [g(x_n)]_j ≠ 0 and [AᵀP_{Q_n}Ag(x_n)]_j ≠ 0; D_{n+1}^{jj} = D_n^{jj} otherwise,

where j = 1, 2, . . . . So, at the nth iteration, let l_{D_n} be the minimum eigenvalue of D_n, and take M_{D_n} ≈ min{L_{D_k} | k = 0, 1, . . . , n} and L_n ≈ max{(l_{D_k})^{−1} | k = 0, 1, . . . , n}; the variable stepsize is approximated by

γ_n = ρ_n/(L_n · M_{D_n}), ρ_n ∈ (0, 2), n = 0, 1, . . . .
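A minimal sketch of the componentwise preconditioner update is given below. This is our illustration, not the authors' code; `proj_Qn` is a stand-in for P_{Q_n}, here taken to be the projection onto the unit ball:

```python
import numpy as np

# Componentwise update D_{n+1}^{jj} = [g(x_n)]_j / [A^T P_{Q_n} A g(x_n)]_j
# whenever both entries are nonzero; otherwise the old entry is kept.
rng = np.random.default_rng(3)
A = rng.random((4, 6))

def proj_Qn(y):                    # placeholder for P_{Q_n}: unit-ball projection
    nrm = np.linalg.norm(y)
    return y if nrm <= 1.0 else y / nrm

def update_D(D_diag, gxn):
    denom = A.T @ proj_Qn(A @ gxn)
    new = D_diag.copy()
    mask = (gxn != 0) & (denom != 0)   # update only where both entries are nonzero
    new[mask] = gxn[mask] / denom[mask]
    return new

D_diag = np.ones(6)                # D_0 = I, stored as its diagonal
gxn = rng.random(6) + 0.1          # stand-in for g(x_n), strictly positive here
D_diag = update_D(D_diag, gxn)
```

Storing only the diagonal keeps the update O(n) per iteration on top of the matrix-vector products.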
4 Numerical results
We consider the following problem from [] in a finite-dimensional Hilbert space. Let C = {x ∈ H1 | c(x) ≤ 0}, where c(x) = −x_1 + x_2 + · · · + x_N, and Q = {y ∈ H2 | q(y) ≤ 0}, where q(y) = y_1 + y_2 + · · · + y_M − 1. A is a random M × N matrix, every element of which lies in (0, 1), chosen so that Γ ≠ ∅. Let x_0 be a random vector in H1, every element of which lies in (0, 1). We set ‖x_{n+1} − x_n‖ ≤ ε as the stopping rule, and let N = , M = , g = I, and D̃_n = I for n ≥ 0. Using the methods of Section 3.3, we compare Algorithm 3.2 with the relaxed CQ algorithm (RCQ) in [30] under different ε and initial values. The results can be seen in Table 1. We see that the methods proposed in this paper behave better.

Table 1 The comparison between the results of the preconditioning and relaxed CQ algorithms

ε         Algorithm   n     CPU (s)   c(x)      q(y)
0.01      RCQ         16    0.0112    –0.1665   57.1092
          3.2         22    0.0049    –0.1333   1.2636
0.001     RCQ         200   0.0216    –0.2684   1.2708
          3.2         13    0.0039    –0.0770   3.5300E–02
0.0001    RCQ         360   0.0295    –0.1039   1.0710E–01
          3.2         60    0.0081    –0.0479   1.4900E–02
0.00001   RCQ         464   0.0363    –0.2561   7.5000E–03
          3.2         21    0.0043    –0.0566   3.3809E–04

5 Concluding remarks
In this paper, we have discussed a new general split feasibility problem, which is related to general variational inequalities involving a co-coercive operator. By using the G-norm
method, the variable modulus method, and the relaxed method, two modified projection algorithms for solving the GSFP, together with some approximation methods for executing the algorithms, have been presented. The numerical results show that the preconditioning method can improve the convergence speed of the CQ algorithm, but the way variable stepsizes are obtained in this paper is inexact. Improving it further, or combining it with the methods in [] and [], is another interesting subject.

Competing interests
The authors declare that they have no competing interests.

Authors' contributions
The authors took equal roles in deriving the results and writing this paper. All authors read and approved the final manuscript.

Author details
1 The Second Training Base, Naval Aviation Institution, Huludao, 125001, China. 2 Department of Mathematics, Shijiazhuang Mechanical Engineering College, Shijiazhuang, 050003, China. 3 Department of Mathematics and Information, Hebei Normal University, Shijiazhuang, 050024, China.

Acknowledgements
The authors would like to thank the associate editor and the referees for their comments and suggestions. This research was supported by the National Natural Science Foundation of China (11071053).

Received: 14 February 2014 Accepted: 17 October 2014 Published: 31 Oct 2014

References
1. Chen, K: Matrix Preconditioning Techniques and Applications. Cambridge University Press, New York (2005)
2. Piana, M, Bertero, M: Projected Landweber method and preconditioning. Inverse Probl. 13, 441-463 (1997)
3. Strand, ON: Theory and methods related to the singular-function expansion and Landweber's iteration for integral equations of the first kind. SIAM J. Numer. Anal. 11, 798-824 (1974)
4. Auslender, A: Optimisation: Méthodes Numériques. Masson, Paris (1976)
5. Dafermos, S: Traffic equilibrium and variational inequalities. Transp. Sci. 14, 42-54 (1980)
6. Bertsekas, PD, Gafni, EM: Projection methods for variational inequalities with application to the traffic assignment problem. Math. Program. Stud. 17, 139-159 (1982)
7. Marcotte, P, Wu, JH: On the convergence of projection methods: application to the decomposition of affine variational inequalities. J. Optim. Theory Appl. 85, 347-362 (1995)
8. Fukushima, M: A relaxed projection method for variational inequalities. Math. Program. 35, 58-70 (1986)
9. Yang, Q: The revisit of a projection algorithm with variable steps for variational inequalities. J. Ind. Manag. Optim. 1, 211-217 (2005)
10. He, BS: Inexact implicit methods for monotone general variational inequalities. Math. Program. 86, 199-217 (1999)
11. Noor, MA, Wang, YJ, Xiu, N: Projection iterative schemes for general variational inequalities. J. Inequal. Pure Appl. Math. 3, Article 34 (2002)
12. Santos, PSM, Scheimberg, S: A projection algorithm for general variational inequalities with perturbed constraint sets. Appl. Math. Comput. 181, 649-661 (2006)
13. Muhammad, AN, Abdellah, B, Saleem, U: Self-adaptive methods for general variational inequalities. Nonlinear Anal. 71, 3728-3738 (2009)
14. Qu, B, Xiu, N: A note on the CQ algorithm for the split feasibility problem. Inverse Probl. 21, 1655-1665 (2005)
15. Byrne, C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20, 103-120 (2004)
16. Dolidze, Z: Solution of variational inequalities associated with a class of monotone maps. Èkon. Mat. Metody 18, 925-927 (1982)
17. He, B, He, X, Liu, H, Wu, T: Self-adaptive projection method for co-coercive variational inequalities. Eur. J. Oper. Res. 196, 43-48 (2009)
18. Facchinei, F, Pang, JS: Finite-Dimensional Variational Inequality and Complementarity Problems, vol. I. Springer, New York (2003) 19. Facchinei, F, Pang, JS: Finite-Dimensional Variational Inequality and Complementarity Problems, vol. II. Springer, New York (2003) 20. Yao, YH, Postolache, M, Liou, YC: Strong convergence of a self-adaptive method for the split feasibility problem. Fixed Point Theory Appl. 2013, 201 (2013) 21. Yao, YH, Yang, PX, Kang, SM: Composite projection algorithms for the split feasibility problem. Math. Comput. Model. 57, 693-700 (2013) 22. Yao, YH, Liou, YC, Shahzad, N: A strongly convergent method for the split feasibility problem. Abstr. Appl. Anal. 2012, Article ID 125046 (2012) 23. Mohammad, E, Abdul, L: General split feasibility problems in Hilbert spaces. Abstr. Appl. Anal. 2013, Article ID 805104 (2013) 24. Censor, Y, Elfving, T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221-239 (1994) 25. Byrne, C: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 18, 441-453 (2002) 26. Wang, PY, Zhou, HY: A preconditioning method of the CQ algorithm for solving the extended split feasibility problem. J. Inequal. Appl. 2014, 163 (2014). doi:10.1186/1029-242X-2014-163 27. Yang, Q: On variable-step relaxed projection algorithm for variational inequalities. J. Math. Anal. Appl. 302, 166-179 (2005) 28. López, G, Martín-Márquez, M, Wang, F, Xu, H-K: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. 28, 085004 (2012) 29. Wang, Z, Yang, Q, Yang, Y: The relaxed inexact projection methods for the split feasibility problem. Appl. Math. Comput. 217, 5347-5359 (2011) 30. Yang, Q: The relaxed CQ algorithm solving the split feasibility problem. Inverse Probl. 20, 1261-1266 (2004)
10.1186/1029-242X-2014-435 Cite this article as: Wang et al.: Preconditioning methods for solving a general split feasibility problem. Journal of Inequalities and Applications 2014, 2014:435