REPORTS ON COMPUTATIONAL MATHEMATICS, NO. 55/1994, DEPARTMENT OF MATHEMATICS, THE UNIVERSITY OF IOWA
A predictor-corrector method for solving the $P_*(\kappa)$-matrix LCP from infeasible starting points

Jun Ji*, Florian A. Potra†, and Rongqin Sheng‡

June, 1994
Abstract
A predictor-corrector method for solving the $P_*(\kappa)$-matrix linear complementarity problem from infeasible starting points is analyzed. Two matrix factorizations and two backsolves are performed at each iteration. The algorithm terminates in $O((\kappa+1)^2 nL)$ steps, either by finding a solution or by determining that the problem is not solvable. The computational complexity depends on the quality of the starting point. If the problem is solvable, and if a certain measure of feasibility at the starting point is small enough, then the algorithm finds a solution in $O((\kappa+1)\sqrt{n}\,L)$ iterations. The algorithm is quadratically convergent for problems having a strictly complementary solution.
Key Words: linear complementarity problems, $P_*$-matrices, predictor-corrector, infeasible-interior-point algorithm, polynomiality, superlinear convergence.
Abbreviated Title: A predictor-corrector method for LCP.
* Department of Mathematics and Computer Science, Valdosta State University, Valdosta, GA 31698, USA.
† Department of Mathematics, University of Iowa, Iowa City, IA 52242, USA. The work of this author was supported in part by NSF Grant DMS 9305760.
‡ Department of Mathematics, University of Iowa, Iowa City, IA 52242, USA.
1 Introduction

The $P_*$-matrix linear complementarity problem requires the computation of a vector pair $(x,s) \in \mathbb{R}^{2n}$ satisfying

$$s = Mx + q, \qquad x^T s = 0, \qquad (x,s) \ge 0, \qquad (1.1)$$

where $q \in \mathbb{R}^n$ and $M \in \mathbb{R}^{n \times n}$ is a $P_*$-matrix. The class of $P_*$-matrices was introduced by Kojima, Megiddo, Noma and Yoshise [3], and it contains many types of matrices encountered in practical applications. Let $\kappa$ be a nonnegative number. A matrix $M$ is called a $P_*(\kappa)$-matrix if

$$(1+4\kappa)\sum_{i \in I_+(x)} x_i [Mx]_i + \sum_{i \in I_-(x)} x_i [Mx]_i \ \ge\ 0, \qquad \forall x \in \mathbb{R}^n, \qquad (1.2)$$

where

$$I_+(x) = \{\, i : x_i [Mx]_i > 0 \,\}, \qquad I_-(x) = \{\, i : x_i [Mx]_i < 0 \,\},$$

or, equivalently, if

$$x^T M x \ \ge\ -4\kappa \sum_{i \in I_+(x)} x_i [Mx]_i, \qquad \forall x \in \mathbb{R}^n. \qquad (1.3)$$
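As a concrete illustration of (1.3), the following sketch samples random vectors and reports the smallest value of $\kappa$ that the sampled vectors force. The function name and the NumPy-based setup are ours, not the paper's; a finite sample can only certify a lower bound on $\kappa$, never the $P_*(\kappa)$ property itself.

```python
import numpy as np

def kappa_lower_bound(M, trials=10000, seed=0):
    """Monte-Carlo lower bound on kappa in condition (1.3).

    Every sample x with x^T M x < 0 forces
        kappa >= -x^T M x / (4 * sum_{i in I+(x)} x_i [Mx]_i);
    if that positive-part sum vanishes while x^T M x < 0,
    then M is not a P_* matrix for any kappa."""
    rng = np.random.default_rng(seed)
    kappa = 0.0
    for _ in range(trials):
        x = rng.standard_normal(M.shape[0])
        prod = x * (M @ x)            # componentwise x_i [Mx]_i
        quad = prod.sum()             # x^T M x
        pos = prod[prod > 0.0].sum()  # sum over I_+(x)
        if quad < 0.0:
            if pos == 0.0:
                return np.inf         # (1.2) fails for every kappa
            kappa = max(kappa, -quad / (4.0 * pos))
    return kappa

# A positive semidefinite matrix should report a bound near 0 (P_*(0) = PSD).
print(kappa_lower_bound(np.array([[2.0, 1.0], [1.0, 2.0]])))
```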
The class of all $P_*(\kappa)$-matrices is denoted by $P_*(\kappa)$, and the class $P_*$ is defined by

$$P_* = \bigcup_{\kappa \ge 0} P_*(\kappa),$$

i.e., $M$ is a $P_*$-matrix if $M \in P_*(\kappa)$ for some $\kappa \ge 0$. Obviously, $P_*(0) = PSD$ (the class of positive semi-definite matrices) and $P_*(\kappa_1) \subset P_*(\kappa_2)$ for $0 \le \kappa_1 < \kappa_2$. Also, we have $P \subset P_*$, where $P$ is the class of all matrices with positive principal minors. This follows from the fact that a $P$-matrix $M$ is a $P_*(\kappa)$-matrix for $\kappa = \max\{-0.25\,\xi_{\min}(M)/\gamma(M),\, 0\}$, where $\xi_{\min}(M)$ is the least eigenvalue of $(M+M^T)/2$ and $\gamma(M) > 0$ is the so-called $P$-matrix number of $M$ (see [3, Lemma 3.3]).

Most interior-point methods for linear programming have been successfully extended to the monotone LCP, i.e., to the $P_*(0)$-matrix LCP. A potential reduction method for the $P_*$-matrix LCP was proposed by Kojima et al. [3]. Their method solves (1.1) in at most $O((1+\kappa)\sqrt{n}\,L)$ iterations. However, no superlinear convergence results have been proved so far for that method. The algorithm given by Miao [4] enjoys both polynomial complexity and quadratic convergence. All the above mentioned methods require a strictly feasible starting point. In practice, it is difficult to obtain such an initial point. Moreover, the assumption that a strictly feasible point exists, which implies the boundedness of the primal-dual feasibility set, restricts the class of problems to which the results apply.

In a recent paper [2], we extended the infeasible-interior-point algorithm of [6] to the $P_*$-matrix LCP. Like the algorithm in [6], the algorithm of [2] requires two matrix factorizations and at most three backsolves per iteration. At each iteration both "feasibility" and "optimality" are reduced at the same rate. It is quadratically convergent for problems having a strictly complementary solution. In the present paper we will show that the above mentioned algorithm can be modified along the lines of [7] so that only two backsolves are needed at each iteration. Feasibility and optimality are no longer reduced at exactly the same rate, but rather at "comparable" rates (see Theorem 2.4). The algorithm can also be modified so that it detects whether the problem has solutions of norm less than a given constant. For $\kappa = 0$ our results reduce to the corresponding results of [7]. We mention that the algorithm of [7] is similar to the algorithms independently considered by [8] and [5].
2 The predictor-corrector algorithm

We denote the feasible set of the problem (1.1) by

$$\mathcal{F} = \{(x,s) \in \mathbb{R}^{2n}_+ : s = Mx + q\}$$

and its solution set by

$$\mathcal{F}^* = \{(x,s) \in \mathcal{F} : x^T s = 0\}.$$

It is easily seen that $(x,s) \in \mathcal{F}^*$ if and only if $(x,s) \ge 0$ is a solution of the nonlinear system

$$F(x,s) := \begin{pmatrix} Xs \\ Mx - s + q \end{pmatrix} = 0, \qquad (2.1)$$

where $X = \mathrm{diag}(x)$ denotes the diagonal matrix having as diagonal entries the elements of the vector $x$. For any given $\epsilon > 0$ we define the set of $\epsilon$-approximate solutions of (1.1) as

$$\mathcal{F}_\epsilon = \{(x,s) \in \mathbb{R}^{2n}_+ : x^T s \le \epsilon,\ \|Mx - s + q\| \le \epsilon\}.$$

In what follows we will present an algorithm that finds a point in this set in a finite number of steps. The algorithm depends on two positive constants $\alpha$ and $\beta$ satisfying

$$\sqrt{(1 + 4\kappa(1+2\kappa))/8}\;\frac{\beta^2}{1-\beta} \le \alpha < \beta < 1, \qquad (2.2a)$$

$$1-\beta,\ \ \beta - \alpha = \Omega(1/(1+\kappa)). \qquad (2.2b)$$

Here the notation $f = \Omega(g)$ means that there is a constant $c_0 > 0$ such that $f \ge c_0\, g$ for all values of $\kappa$ of interest (i.e., for all $g$ sufficiently small). It is easily seen that

$$\alpha = \frac{1}{4\sqrt{1+4\kappa(1+2\kappa)}}, \qquad \beta = \frac{1}{2\sqrt{1+4\kappa(1+2\kappa)}} \qquad (2.3)$$

verify (2.2). The starting point of the algorithm can be any pair of strictly positive vectors $(x^0, s^0) \in \mathbb{R}^{2n}_{++}$ that is $(\alpha,\mu)$-centered in the sense that it belongs to the set

$$\mathcal{N}_{\alpha,\mu} = \{(x,s) \in \mathbb{R}^{2n}_{++} : \|Xs - \mu e\| \le \alpha\mu\},$$

where $e$ denotes the vector with all entries equal to one.
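In code, membership in $\mathcal{N}_{\alpha,\mu}$ is a one-line test. The helper below (our naming, NumPy assumed) is reused by the later sketches.

```python
import numpy as np

def is_centered(x, s, mu, alpha):
    """Membership test for the neighborhood N_{alpha,mu}:
    x, s strictly positive and ||X s - mu e|| <= alpha * mu."""
    x, s = np.asarray(x, float), np.asarray(s, float)
    if np.any(x <= 0.0) or np.any(s <= 0.0):
        return False
    return np.linalg.norm(x * s - mu) <= alpha * mu
```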
At a typical step of our algorithm we are given a pair $(x,s) \in \mathcal{N}_{\alpha,\mu}$ and obtain a predictor direction $(u,v)$ by solving the linear system

$$Su + Xv = -Xs, \qquad (2.4a)$$
$$Mu - v = r, \qquad (2.4b)$$

where $r$ is the residual

$$r = s - Mx - q.$$

Notice that this is just the Newton direction for the nonlinear system (2.1), whose Jacobian

$$F'(x,s) := \begin{pmatrix} S & X \\ M & -I \end{pmatrix}$$

is nonsingular whenever $x > 0$ and $s > 0$. If we take a steplength $\theta$ along this direction we obtain the points

$$x(\theta) = x + \theta u, \qquad s(\theta) = s + \theta v.$$

We define $\bar\theta$ as the largest steplength for which

$$\|X(\theta)s(\theta) - (1-\theta)\mu e\| \le \beta(1-\theta)\mu, \qquad \text{for all } 0 \le \theta \le \bar\theta, \qquad (2.5)$$

and consider the predicted pair

$$\bar x = x + \bar\theta u, \qquad \bar s = s + \bar\theta v. \qquad (2.6)$$

We will see later that these vectors are strictly positive. Therefore the Jacobian $F'(\bar x,\bar s)$ is nonsingular and we can define the corrector direction $(\bar u,\bar v)$ as the solution of the following linear system:

$$\bar S\bar u + \bar X\bar v = (1-\bar\theta)\mu e - \bar X\bar s, \qquad (2.7a)$$
$$M\bar u - \bar v = 0. \qquad (2.7b)$$

By taking a unit steplength along the corrector direction we obtain a new pair

$$x^+ = \bar x + \bar u, \qquad s^+ = \bar s + \bar v. \qquad (2.8)$$

Clearly,

$$r^+ = s^+ - Mx^+ - q = (1-\bar\theta)r. \qquad (2.9)$$

Correspondingly, we take by definition

$$\mu^+ = (1-\bar\theta)\mu. \qquad (2.10)$$
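Putting (2.4)-(2.10) together, one predictor-corrector step can be sketched as follows. This is a naive transcription for illustration only, not the authors' implementation: it uses dense solves of the full Jacobian and locates $\bar\theta$ by a crude grid search instead of the closed formula (2.11) given below.

```python
import numpy as np

def pc_step(M, q, x, s, mu, beta, grid=10000):
    """One predictor-corrector step, (2.4)-(2.10); reconstruction sketch."""
    n = len(x)
    r = s - M @ x - q
    J = np.block([[np.diag(s), np.diag(x)], [M, -np.eye(n)]])
    # Predictor direction (2.4): Su + Xv = -Xs, Mu - v = r.
    d = np.linalg.solve(J, np.concatenate([-x * s, r]))
    u, v = d[:n], d[n:]
    # Largest theta keeping (2.5): ||X(th)s(th) - (1-th)mu e|| <= beta(1-th)mu.
    theta = 0.0
    for th in np.linspace(0.0, 1.0, grid, endpoint=False):
        xs = (x + th * u) * (s + th * v)
        if np.linalg.norm(xs - (1.0 - th) * mu) <= beta * (1.0 - th) * mu:
            theta = th
        else:
            break
    xb, sb = x + theta * u, s + theta * v            # predicted pair (2.6)
    # Corrector direction (2.7), then a unit step (2.8).
    Jb = np.block([[np.diag(sb), np.diag(xb)], [M, -np.eye(n)]])
    rhs = np.concatenate([(1.0 - theta) * mu - xb * sb, np.zeros(n)])
    db = np.linalg.solve(Jb, rhs)
    return xb + db[:n], sb + db[n:], (1.0 - theta) * mu   # (2.8), (2.10)
```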
In order to have a well-defined algorithm we will show that $(x^+, s^+) \in \mathcal{N}_{\alpha,\mu^+}$, so that the above steps can be iterated with $(x^+, s^+)$ and $\mu^+$ instead of $(x,s)$ and $\mu$. In the proof we will use the following two technical lemmas. The first one is a slight modification of Lemma 2.1 of [6], while the second one corresponds to Corollary 2.3 of [2].
Lemma 2.1 If $(x,s) \in \mathcal{N}_{\alpha,\mu}$, then the largest number $\bar\theta \in [0,1]$ satisfying (2.5) is given by

$$\bar\theta = \frac{2}{1 + \sqrt{1 + 4/\varphi_1}}, \qquad \varphi_1 = \frac{\gamma_0}{\gamma_1 + \sqrt{\gamma_1^2 + \gamma_0\,\delta^2}}, \qquad (2.11a)$$

$$f = \frac{1}{\mu}Xs - e, \quad g = \frac{1}{\mu}Uv, \quad \delta = \|g\|, \quad \gamma_0 = \beta^2 - \|f\|^2, \quad \gamma_1 = f^T g, \qquad (2.11b)$$

where $(u,v)$ is the solution of the linear system (2.4) and $U = \mathrm{diag}(u)$. Moreover, the pair $(\bar x,\bar s)$ defined by (2.6) satisfies

$$\|\bar X\bar s - (1-\bar\theta)\mu e\| = \beta(1-\bar\theta)\mu, \qquad \bar x > 0, \qquad \bar s > 0. \qquad (2.12)$$
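Under the reconstruction of (2.11) above, the steplength admits a direct transcription (names ours). When $\|g\| = 0$ the predictor step preserves centering for every $\theta \le 1$, so we return $\bar\theta = 1$.

```python
import numpy as np

def predictor_steplength(x, s, u, v, mu, beta):
    """Closed-form largest steplength of Lemma 2.1 / (2.11); sketch."""
    f = (x * s) / mu - 1.0          # f = (1/mu) X s - e
    g = (u * v) / mu                # g = (1/mu) U v
    delta = np.linalg.norm(g)
    if delta == 0.0:
        return 1.0                  # X(th)s(th)-(1-th)mu e = (1-th)(Xs-mu e)
    gamma0 = beta**2 - np.dot(f, f)     # > 0 since ||f|| <= alpha < beta
    gamma1 = np.dot(f, g)
    phi1 = gamma0 / (gamma1 + np.sqrt(gamma1**2 + gamma0 * delta**2))
    return 2.0 / (1.0 + np.sqrt(1.0 + 4.0 / phi1))
```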
Lemma 2.2 Let $x, s, a, b$ be four $n$-dimensional vectors with $x > 0$ and $s > 0$, and let $M \in \mathbb{R}^{n \times n}$ be a $P_*(\kappa)$-matrix. Then the solution $(u,v)$ of the linear system

$$Su + Xv = a, \qquad (2.13a)$$
$$Mu - v = b \qquad (2.13b)$$

satisfies the following relations:

$$\|Du\| \le \|\tilde b\| + \sqrt{\|\tilde a\|^2 + \|\tilde b\|^2 + 2\kappa\|\tilde c\|^2}, \qquad (2.14a)$$

$$\|D^{-1}v\| \le \sqrt{\|\tilde a\|^2 + \|\tilde b\|^2 + 2\kappa\|\tilde c\|^2}, \qquad (2.14b)$$

$$\|Du\|^2 + \|D^{-1}v\|^2 \le \|\tilde a\|^2 + 2\kappa\|\tilde c\|^2 + 2\|\tilde b\|^2 + 2\|\tilde b\|\sqrt{\|\tilde a\|^2 + \|\tilde b\|^2 + 2\kappa\|\tilde c\|^2}, \qquad (2.14c)$$

$$\|Uv\|^2 \le \frac{1}{8}\|\tilde a\|^4 + \frac{1}{2}\kappa_1\left(\|\tilde a\|^2 + 2\kappa_1\right), \qquad \kappa_1 := \kappa\|\tilde c\|^2, \qquad (2.14d)$$

where

$$D = X^{-1/2}S^{1/2}, \qquad \tilde a = (XS)^{-1/2}a, \qquad \tilde b = D^{-1}b, \qquad \tilde c = \tilde a + \tilde b.$$
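A quick numerical spot-check of the reconstructed bound (2.14d), in the special case in which it is used later in (2.21): $M$ positive semidefinite (so $\kappa = 0$) and $b = 0$, where the bound reduces to $\|Uv\| \le \|\tilde a\|^2/\sqrt 8$. The setup and names are ours.

```python
import numpy as np

# Solve (2.13) with b = 0 for a PSD matrix and compare ||Uv|| against
# the kappa = 0 form of (2.14d).
rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
M = A @ A.T                              # positive semidefinite => P_*(0)
x, s = rng.random(n) + 0.5, rng.random(n) + 0.5
a = rng.standard_normal(n)
J = np.block([[np.diag(s), np.diag(x)], [M, -np.eye(n)]])
d = np.linalg.solve(J, np.concatenate([a, np.zeros(n)]))
u, v = d[:n], d[n:]
a_tilde = a / np.sqrt(x * s)             # \tilde a = (XS)^{-1/2} a
print(np.linalg.norm(u * v), np.linalg.norm(a_tilde)**2 / np.sqrt(8.0))
```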
We are now ready to prove that the algorithm described in this section is well defined. For ease of later reference let us first formally define our algorithm.

Algorithm 2.3 Choose $(x^0,s^0) \in \mathcal{N}_{\alpha,\mu_0}$ with $\mu_0 = (x^0)^Ts^0/[n(1+\alpha/\sqrt n)] = \tilde\mu_0/[1+\alpha/\sqrt n]$, and set $\nu_0 = 1$. For $k = 0,1,\ldots$, do A1 through A5:

A1 Set $x = x^k$, $s = s^k$ and define $\tilde\mu = (x^Ts)/n$, $r = s - Mx - q$, $\mu = \mu_k$.

A2 If $x^Ts \le \epsilon$ and $\|r\| \le \epsilon$, then report $(x,s) \in \mathcal{F}_\epsilon$ and terminate.

A3 Find the solution $(u,v)$ of the linear system (2.4), define $(\bar x,\bar s)$ as in (2.6), and set $\mu^+ = (1-\bar\theta)\mu$, where $\bar\theta$ is given by (2.11).

A4 Find the solution $(\bar u,\bar v)$ of the linear system (2.7) and define $x^+$, $s^+$ as in (2.8).

A5 Set $x^{k+1} = x^+$, $s^{k+1} = s^+$, $\mu_{k+1} = \mu^+$, $\bar\theta_k = \bar\theta$, $\nu_{k+1} = (1-\bar\theta_k)\nu_k$, $r^{k+1} = (1-\bar\theta_k)r$.
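Assembling steps A1-A5 around the pc_step sketch given earlier yields the following driver loop. This is a hypothetical assembly for illustration, not the authors' code; the starting value $\mu_0$ follows the initialization of Algorithm 2.3.

```python
import numpy as np

def solve_lcp(M, q, x0, s0, alpha, beta, eps, max_iter=500):
    """Driver loop for Algorithm 2.3 (A1-A5), built on pc_step; sketch."""
    n = len(q)
    x, s = x0.copy(), s0.copy()
    mu = x @ s / (n * (1.0 + alpha / np.sqrt(n)))   # mu_0 of Algorithm 2.3
    for _ in range(max_iter):
        r = s - M @ x - q
        if x @ s <= eps and np.linalg.norm(r) <= eps:   # step A2
            return x, s                                 # (x, s) in F_eps
        x, s, mu = pc_step(M, q, x, s, mu, beta)        # steps A3-A5
    raise RuntimeError("iteration limit reached")
```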
Before stating our main result let us note that the standard choice of starting points

$$x^0 = \zeta_p e, \qquad s^0 = \zeta_d e$$

gives

$$\mu_0 = \frac{\zeta_p\zeta_d}{1+\alpha/\sqrt n}$$

and

$$\|X^0 s^0 - \mu_0 e\| = \frac{\alpha\,\zeta_p\zeta_d}{1+\alpha/\sqrt n} = \alpha\mu_0, \qquad (2.15)$$

which shows that $(x^0,s^0) \in \mathcal{N}_{\alpha,\mu_0}$, as required in the algorithm.

Theorem 2.4 For any integer $k \ge 0$, Algorithm 2.3 defines a pair

$$(x^k, s^k) \in \mathcal{N}_{\alpha,\mu_k}, \qquad (2.16)$$

and the corresponding residuals satisfy

$$r^k = \nu_k r^0, \qquad \mu_k = \nu_k\mu_0, \qquad (2.17)$$

where

$$(1-\alpha/n)\,\mu_k \ \le\ \frac{(x^k)^Ts^k}{n} \ \le\ (1+\alpha/\sqrt n)\,\mu_k, \qquad (2.18)$$

$$\nu_0 = 1, \qquad \nu_k = \prod_{i=0}^{k-1}(1-\bar\theta_i). \qquad (2.19)$$
Proof. The proof is by induction. For $k = 0$, (2.16), (2.17) and (2.18) are clearly satisfied. Suppose they are satisfied for some $k \ge 0$. As in Algorithm 2.3 we will omit the index $k$, so that we can write

$$(x,s) \in \mathcal{N}_{\alpha,\mu}, \qquad r = \nu r^0, \qquad \mu = \nu\mu_0, \qquad (1-\alpha/n)\mu \le x^Ts/n \le (1+\alpha/\sqrt n)\mu.$$

The fact that (2.17) holds for $k+1$ follows immediately from (2.9) and (2.10). From (2.7) and (2.8) we have

$$X^+s^+ = (1-\bar\theta)\mu e + \bar U\bar v, \qquad \frac{1}{n}(x^+)^Ts^+ = (1-\bar\theta)\mu + \frac{1}{n}\bar u^T\bar v. \qquad (2.20)$$

By using (2.11), (2.20) and applying Lemma 2.2 to (2.7), we deduce that

$$\|\bar U\bar v\| \le \frac{1}{\sqrt 8}\sqrt{1+4\kappa(1+2\kappa)}\;\|(\bar X\bar S)^{-1}\|\,\|\bar X\bar s - (1-\bar\theta)\mu e\|^2 \le \frac{\sqrt{1+4\kappa(1+2\kappa)}\,\beta^2}{\sqrt 8\,(1-\beta)}\,(1-\bar\theta)\mu. \qquad (2.21)$$

On the other hand, by using (2.2a), (2.20) and (2.21), we can write

$$\|X^+s^+ - \mu^+e\| = \|\bar U\bar v\| \le \frac{\sqrt{1+4\kappa(1+2\kappa)}\,\beta^2}{\sqrt 8\,(1-\beta)}\,(1-\bar\theta)\mu \le \alpha\mu^+. \qquad (2.22)$$

The positivity of $x^+$ and $s^+$ is proved by contradiction. Suppose, for example, that $[x^+]_i \le 0$ for some $i$. Since (2.22) implies $[x^+]_i[s^+]_i > 0$, we must have $[x^+]_i < 0$ and $[s^+]_i < 0$. It follows that $[\bar u]_i < -[\bar x]_i$ and $[\bar v]_i < -[\bar s]_i$. By virtue of (2.7) we get

$$-[\bar x]_i[\bar s]_i < (1-\bar\theta)\mu - [\bar x]_i[\bar s]_i = [\bar s]_i[\bar u]_i + [\bar x]_i[\bar v]_i < -2[\bar x]_i[\bar s]_i,$$

which is a contradiction. Hence (2.16) is satisfied for $k+1$. According to [3, Lemma 3.4] we have

$$\bar u^T\bar v \ \ge\ -\kappa\,\|(\bar X\bar S)^{-1}\|\,\|\bar X\bar s - (1-\bar\theta)\mu e\|^2 \ \ge\ -\frac{\kappa\beta^2}{1-\beta}\,(1-\bar\theta)\mu. \qquad (2.23)$$

By substituting (2.23) in (2.20) and using (2.2a) (note that $\kappa \le \sqrt{(1+4\kappa(1+2\kappa))/8}$), we deduce that

$$\frac{1}{n}(x^+)^Ts^+ \ \ge\ \left(1 - \frac{\kappa\beta^2}{n(1-\beta)}\right)(1-\bar\theta)\mu \ \ge\ (1-\alpha/n)\,\mu^+. \qquad (2.24)$$

From (2.22) it follows that

$$\bar u^T\bar v \le \sqrt n\,\|\bar U\bar v\| \le \sqrt n\,\alpha\mu^+. \qquad (2.25)$$

By substituting the above inequality in (2.20) we obtain

$$\frac{1}{n}(x^+)^Ts^+ \ \le\ (1-\bar\theta)\mu + \frac{\alpha}{\sqrt n}\,\mu^+ = (1+\alpha/\sqrt n)\,\mu^+. \qquad (2.26)$$

Hence (2.24) and (2.26) imply that (2.18) is also satisfied for $k+1$. This completes the proof of our theorem. $\Box$
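Theorem 2.4 can be observed numerically: the gap $(x^k)^Ts^k$ and the residual $\|r^k\|$ shrink at comparable rates. The following small run (using the pc_step sketch above on a monotone LCP with $\kappa = 0$, where (2.3) gives $\alpha = 1/4$, $\beta = 1/2$) is illustrative only; all data are ours.

```python
import numpy as np

M = np.array([[2.0, 1.0], [1.0, 2.0]])   # positive definite => P_*(0)
q = np.array([-1.0, -1.0])
x = s = 2.0 * np.ones(2)                  # infeasible but (alpha,mu_0)-centered
alpha, beta = 0.25, 0.5                   # admissible for kappa = 0 by (2.3)
mu = x @ s / (2 * (1.0 + alpha / np.sqrt(2)))
for k in range(8):
    r = s - M @ x - q
    print(k, x @ s, np.linalg.norm(r))    # gap and residual, per Theorem 2.4
    x, s, mu = pc_step(M, q, x, s, mu, beta)
```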
3 Global convergence and polynomial complexity

In what follows we assume that $\mathcal{F}^*$ is nonempty. Under this assumption we will prove that Algorithm 2.3, with $\epsilon = 0$, is globally convergent in the sense that

$$\lim_{k\to\infty}\mu_k = 0, \qquad \lim_{k\to\infty} r^k = 0.$$
Lemma 3.1 Assume that $\mathcal{F}^*$ is nonempty. Let $(x^*,s^*) \in \mathcal{F}^*$ and let the sequence $(x^k,s^k)$ be generated by Algorithm 2.3. Then

$$\nu_k\left((x^k)^Ts^0 + (s^k)^Tx^0\right) \ \le\ (1+\alpha/\sqrt n)(1+4\kappa)(2+\lambda)\,n\mu_k, \qquad (3.1a)$$

$$(1-\nu_k)\left((x^k)^Ts^* + (s^k)^Tx^*\right) \ \le\ (1+\alpha/\sqrt n)(1+4\kappa)\left((1+\nu_k) + (1-\nu_k)\lambda\right)n\mu_k, \qquad (3.1b)$$

where

$$\lambda = \left((x^0)^Ts^* + (s^0)^Tx^*\right)/\left((x^0)^Ts^0\right). \qquad (3.2)$$

Proof. By writing $x, s, \nu$ for $x^k, s^k, \nu_k$, respectively, and by using (2.17) we have

$$\nu s^0 + (1-\nu)s^* - s = \nu(r^0 + Mx^0 + q) + (1-\nu)(Mx^* + q) - (\nu r^0 + Mx + q) = M(\nu x^0 + (1-\nu)x^* - x).$$

Using the inequalities $(x^*,s^*) \ge 0$ and $(x,s) > 0$, and the defining property (1.3) of a $P_*(\kappa)$-matrix, we can write

$$[\nu x^0 + (1-\nu)x^* - x]^T[\nu s^0 + (1-\nu)s^* - s] \ \ge\ -4\kappa\sum_{i\in I_+}[\nu x^0 + (1-\nu)x^* - x]_i\,[\nu s^0 + (1-\nu)s^* - s]_i \qquad (3.3)$$

$$\ge\ -4\kappa\sum_{i\in I_+}\left(\nu^2[x^0]_i[s^0]_i + \nu(1-\nu)([x^0]_i[s^*]_i + [x^*]_i[s^0]_i) + [x]_i[s]_i\right) \ \ge\ -4\kappa\left(\nu^2(x^0)^Ts^0 + \nu(1-\nu)((x^0)^Ts^* + (x^*)^Ts^0) + x^Ts\right), \qquad (3.4)$$

where

$$I_+ = \{\, i : [\nu x^0 + (1-\nu)x^* - x]_i\,[\nu s^0 + (1-\nu)s^* - s]_i > 0 \,\}.$$

By expanding (3.3), we obtain

$$[\nu x^0 + (1-\nu)x^* - x]^T[\nu s^0 + (1-\nu)s^* - s] = \nu^2(x^0)^Ts^0 + \nu(1-\nu)((x^0)^Ts^* + (s^0)^Tx^*) - \nu((x^0)^Ts + (s^0)^Tx) - (1-\nu)((x^*)^Ts + (s^*)^Tx) + x^Ts + (1-\nu)^2(x^*)^Ts^*. \qquad (3.5)$$

The desired inequalities (3.1) follow from (3.4) and (3.5) by using (2.17) and the relations $(x^*)^Ts^* = 0$, $(s^*)^Tx + (x^*)^Ts \ge 0$, $(s^0)^Tx + (x^0)^Ts > 0$, and $x^Ts \le (1+\alpha/\sqrt n)\,n\mu$. $\Box$

Lemma 3.1 and Lemma 2.2 will be used to derive an upper bound for the quantities

$$\delta_k = \|U^k v^k\|/\mu_k, \qquad k \ge 0, \qquad (3.6)$$

where $(u^k,v^k)$ is obtained at step A3 of Algorithm 2.3.
Lemma 3.2 Let $(u^k,v^k)$ be obtained in the $k$-th iteration at step A3 of Algorithm 2.3 and let $\delta_k$ be defined by (3.6). Then

$$\delta_k \ \le\ \bar\delta := n\,(1+\alpha/\sqrt n)\sqrt{0.125 + 0.5\,\kappa(1+2\kappa)(1+\psi)^4},$$

where

$$\psi = \sqrt n\,(1+4\kappa)(2+\lambda)\,\|(S^0)^{-1}r^0\|_\infty\,\sqrt{\frac{1+\alpha/\sqrt n}{1-\alpha}},$$

with $\lambda$ given by (3.2).

Proof. We omit the index $k$. Applying Lemma 2.2 to the linear system (2.4) (with $a = -Xs$, $b = r$) and using Lemma 3.1, we have

$$\|\tilde a\| = \|(XS)^{1/2}e\| = \sqrt{x^Ts} \le \sqrt{n(1+\alpha/\sqrt n)\mu}, \qquad (3.7a)$$

$$\|\tilde b\| = \|D^{-1}r\| = \|(XS)^{-1/2}Xr\| \le \frac{\|Xr\|_1}{\sqrt{(1-\alpha)\mu}} = \frac{\nu}{\sqrt{(1-\alpha)\mu}}\sum_{i=1}^n [x]_i[s^0]_i\,\frac{|[r^0]_i|}{[s^0]_i} \le \frac{\nu\,\|(S^0)^{-1}r^0\|_\infty\,(s^0)^Tx}{\sqrt{(1-\alpha)\mu}} \le \psi\sqrt{n(1+\alpha/\sqrt n)\mu}, \qquad (3.7b)$$

$$\|\tilde c\| \le \|\tilde a\| + \|\tilde b\| \le (1+\psi)\sqrt{n(1+\alpha/\sqrt n)\mu}. \qquad (3.7c)$$

Finally, the required inequality follows by substituting (3.7) in (2.14d). $\Box$

It is easily seen from (2.11a) that

$$\frac{1}{\varphi_1} = \frac{\gamma_1 + \sqrt{\gamma_1^2 + \delta^2\gamma_0}}{\gamma_0} \le \frac{|\gamma_1| + \sqrt{|\gamma_1|^2 + \delta^2\gamma_0}}{\gamma_0}.$$

The right hand side of the above inequality is increasing in $|\gamma_1|$ and decreasing in $\gamma_0$. Using the inequalities $\gamma_0 \ge \beta^2 - \alpha^2 > 0$ and $|\gamma_1| \le \|f\|\,\|g\| \le \alpha\delta$, we deduce

$$\frac{1}{\varphi_1} \le \frac{\alpha\delta + \sqrt{(\alpha\delta)^2 + (\beta^2-\alpha^2)\delta^2}}{\beta^2 - \alpha^2} = \frac{\delta}{\beta-\alpha}. \qquad (3.8)$$

Finally, from Lemma 3.2, (2.11b) and (3.8) we obtain

$$\bar\theta_k \ \ge\ \bar\theta_* := \frac{2}{1 + \sqrt{1 + 4\bar\delta/(\beta-\alpha)}}, \qquad k \ge 0. \qquad (3.9)$$

With the help of (3.9) and Theorem 2.4 we can easily prove the main result of this section.
Theorem 3.3 Suppose that the solution set $\mathcal{F}^*$ is nonempty.

(i) If $\epsilon = 0$, then Algorithm 2.3 either finds an optimal solution $z^* \in \mathcal{F}^*$ in a finite number of steps or produces an infinite sequence $z^k = (x^k,s^k)$ such that

$$\lim_{k\to\infty}(x^k)^Ts^k = 0, \qquad \lim_{k\to\infty} r^k = 0.$$

(ii) If $\epsilon > 0$, then Algorithm 2.3 terminates with a $z \in \mathcal{F}_\epsilon$ in at most

$$K = \left\lceil \frac{|\ln(\epsilon/\epsilon_0)|}{|\ln(1-\bar\theta_*)|} \right\rceil$$

iterations, where $\epsilon_0 = \max\{(x^0)^Ts^0, \|r^0\|\}$, and $\lceil\xi\rceil$ denotes the smallest integer greater than or equal to $\xi$.
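The bound in (ii) is directly computable once $\bar\delta$ is known; the helper below (ours) transcribes the reconstructed formulas (3.9) and the expression for $K$.

```python
import numpy as np

def iteration_bound(delta_bar, alpha, beta, eps, eps0):
    """Worst-case iteration count K of Theorem 3.3(ii); sketch."""
    theta_star = 2.0 / (1.0 + np.sqrt(1.0 + 4.0 * delta_bar / (beta - alpha)))
    return int(np.ceil(abs(np.log(eps / eps0)) / abs(np.log(1.0 - theta_star))))
```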
From the above theorem we can obtain polynomial complexity under certain assumptions on the starting point. For the case when the starting point is feasible, or close to being feasible, (2.2b), (3.9), Lemma 3.2 and Theorem 3.3 imply the following corollary.

Corollary 3.4 Assume that $\mathcal{F}^*$ is nonempty and that the starting point is chosen such that there is a constant $C$ independent of $n$ satisfying the inequality

$$(2+\lambda)\,\|(S^0)^{-1}r^0\|_\infty \ \le\ \frac{C}{(1+\kappa)\sqrt n}.$$

Then Algorithm 2.3 terminates in at most $\tilde K = O((1+\kappa)\sqrt n\,\ln(\epsilon_0/\epsilon))$ iterations.

Most of the complexity results on infeasible-interior-point methods are obtained for starting points of the form

$$x^0 = \zeta_p e, \qquad s^0 = \zeta_d e, \qquad (3.10)$$

where $\zeta_p$ and $\zeta_d$ are sufficiently large positive constants (the counterpart of a "big-$M$" initialization). For such starting points we clearly have $(x^0,s^0) \in \mathcal{N}_{\alpha,\mu_0}$ and

$$\lambda = \|x^*\|_1/(n\zeta_p) + \|s^*\|_1/(n\zeta_d), \qquad \text{for some } (x^*,s^*) \in \mathcal{F}^*,$$

$$\|(S^0)^{-1}r^0\|_\infty \le 1 + (\zeta_p/\zeta_d)\|Me\|_\infty + (1/\zeta_d)\|q\|_\infty.$$

Therefore, if $\zeta_p$ and $\zeta_d$ satisfy the inequalities

$$\zeta_p \ge n^{-1}\|x^*\|_1, \qquad \zeta_d \ge \max\{\zeta_p\|Me\|_\infty,\ \|q\|_\infty,\ n^{-1}\|s^*\|_1\}, \qquad (3.11)$$

for some $(x^*,s^*) \in \mathcal{F}^*$, then $\psi = O((1+\kappa)\sqrt n)$. Hence, from (2.2b), (3.9), Lemma 3.2 and Theorem 3.3 we obtain the following complexity result.

Corollary 3.5 Assume that $\mathcal{F}^*$ is nonempty and that the starting point is chosen of the form (3.10) such that (3.11) is satisfied for some $(x^*,s^*) \in \mathcal{F}^*$. Then Algorithm 2.3 terminates in at most

$$\tilde K = O((1+\kappa)^2 n \ln(\epsilon_0/\epsilon)) \qquad (3.12)$$

iterations.
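The initialization (3.10)-(3.11) is easy to write down once upper bounds on $n^{-1}\|x^*\|_1$ and $n^{-1}\|s^*\|_1$ are assumed to be available; the parameters rho_p and rho_d below stand for such assumed bounds, and everything else is a direct transcription.

```python
import numpy as np

def big_m_start(M, q, rho_p, rho_d):
    """Big-M starting point (3.10) satisfying (3.11); rho_p, rho_d are
    assumed bounds on n^{-1}||x*||_1 and n^{-1}||s*||_1. Sketch."""
    n = len(q)
    zeta_p = max(rho_p, 1.0)
    zeta_d = max(zeta_p * np.linalg.norm(M @ np.ones(n), np.inf),
                 np.linalg.norm(q, np.inf), rho_d, 1.0)
    return zeta_p * np.ones(n), zeta_d * np.ones(n)
```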
All the above results have been proved under the assumption that $\mathcal{F}^*$ is nonempty. It turns out that Algorithm 2.3 can be modified in such a way that it detects within polynomial time whether $\mathcal{F}^*$ contains points of norm less than quantities chosen in advance. Let $\rho_p$ and $\rho_d$ be such quantities and define

$$\lambda^* = \left(\|x^0\|\rho_d + \|s^0\|\rho_p\right)/\left((x^0)^Ts^0\right), \qquad (3.13)$$

$$\psi^* = \sqrt n\,(1+4\kappa)(2+\lambda^*)\,\|(S^0)^{-1}r^0\|_\infty\,\sqrt{\frac{1+\alpha/\sqrt n}{1-\alpha}}, \qquad (3.14)$$

$$\bar\delta^* = n\,(1+\alpha/\sqrt n)\sqrt{0.125 + 0.5\,\kappa(1+2\kappa)(1+\psi^*)^4}, \qquad (3.15)$$

$$\theta^* = 2/\left(1 + \sqrt{1 + 4\bar\delta^*/(\beta-\alpha)}\right). \qquad (3.16)$$

Now we can prove the following theorem.

Theorem 3.6 Suppose that the following instruction is inserted between instructions A2 and A3 of Algorithm 2.3:

A2.5 If $(x^0)^Ts^k + (s^0)^Tx^k > (1+4\kappa)\left[\nu_k(x^0)^Ts^0 + (x^k)^Ts^k/\nu_k + (1-\nu_k)(\rho_d\|x^0\| + \rho_p\|s^0\|)\right]$, then terminate.

Then the new algorithm terminates either at A2 with $z \in \mathcal{F}_\epsilon$, or at A2.5, in at most

$$K^* = \left\lceil \frac{|\ln(\epsilon/\epsilon_0)|}{|\ln(1-\theta^*)|} \right\rceil$$

iterations, and in the latter case there is no $z^* = (x^*,s^*) \in \mathcal{F}^*$ such that $\|x^*\| \le \rho_p$, $\|s^*\| \le \rho_d$.

Proof. Suppose that the inequality

$$(x^0)^Ts^k + (s^0)^Tx^k \ \le\ (1+4\kappa)\left[\nu_k(x^0)^Ts^0 + (x^k)^Ts^k/\nu_k + (1-\nu_k)(\rho_d\|x^0\| + \rho_p\|s^0\|)\right] \qquad (3.17)$$

holds for all $0 \le k \le K^*$. Then we have

$$\nu_k\left((x^0)^Ts^k + (s^0)^Tx^k\right) \le (1+\alpha/\sqrt n)(1+4\kappa)(2+\lambda^*)\,n\mu_k, \qquad 0 \le k \le K^*. \qquad (3.18)$$

With the help of (3.18) we can prove, as in Lemma 3.2, that $\delta_k \le \bar\delta^*$. Hence $\bar\theta_k \ge \theta^*$ for all $0 \le k \le K^*$, which implies $(x^{K^*}, s^{K^*}) \in \mathcal{F}_\epsilon$. On the other hand, if there exists $(x^*,s^*) \in \mathcal{F}^*$ such that $\|x^*\| \le \rho_p$, $\|s^*\| \le \rho_d$, then (3.4) and (3.5) imply that (3.17) holds for all $k \ge 0$. Hence, if (3.17) is violated for some $k \le K^*$, then there is no $(x^*,s^*) \in \mathcal{F}^*$ such that $\|x^*\| \le \rho_p$, $\|s^*\| \le \rho_d$. This completes the proof of our theorem. $\Box$
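The test of step A2.5 is cheap to evaluate; a direct transcription follows (names ours). Since $\nu_k = \mu_k/\mu_0$ by (2.17), a caller of the earlier driver loop can pass nu_k = mu / mu0.

```python
import numpy as np

def no_bounded_solution(x0, s0, xk, sk, nu_k, kappa, rho_p, rho_d):
    """Step A2.5 of Theorem 3.6: True means no solution (x*, s*) with
    ||x*|| <= rho_p and ||s*|| <= rho_d exists. Reconstruction sketch."""
    lhs = x0 @ sk + s0 @ xk
    rhs = (1.0 + 4.0 * kappa) * (
        nu_k * (x0 @ s0) + (xk @ sk) / nu_k
        + (1.0 - nu_k) * (rho_d * np.linalg.norm(x0)
                          + rho_p * np.linalg.norm(s0)))
    return lhs > rhs
```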
From the above theorem we can obtain polynomial complexity for our modified algorithm under certain assumptions on the starting point. Let us choose

$$x^0 = \hat\zeta_p e, \qquad s^0 = \hat\zeta_d e, \qquad (3.19)$$

where

$$\hat\zeta_p \ge \rho_p/\sqrt n, \qquad (3.20)$$

$$\hat\zeta_d \ge \max\{\hat\zeta_p\|Me\|_\infty,\ \|q\|_\infty,\ \rho_d/\sqrt n\}. \qquad (3.21)$$

Then we obtain

$$\lambda^* \le 2, \qquad (3.22)$$

$$\|(S^0)^{-1}r^0\|_\infty \le 1 + (\hat\zeta_p/\hat\zeta_d)\|Me\|_\infty + (1/\hat\zeta_d)\|q\|_\infty \le 3. \qquad (3.23)$$

From (3.14), (3.22) and (3.23) it follows that $\psi^* = O((1+\kappa)\sqrt n)$. Hence we have the following complexity result.

Corollary 3.7 Suppose that instruction A2.5 of Theorem 3.6 is inserted between instructions A2 and A3 of Algorithm 2.3, and that the starting point is chosen of the form (3.19) such that (3.20) and (3.21) are satisfied. Then the new algorithm terminates either at A2 with $z \in \mathcal{F}_\epsilon$ or at A2.5, in at most

$$\hat K = O((1+\kappa)^2 n \ln(\epsilon_0/\epsilon))$$

iterations, and in the latter case there is no $z^* = (x^*,s^*) \in \mathcal{F}^*$ such that $\|x^*\| \le \rho_p$, $\|s^*\| \le \rho_d$.
4 Quadratic convergence

In the present section we study the asymptotic convergence properties of Algorithm 2.3 under the further assumption that (1.1) has a strictly complementary solution. Let us denote by $\mathcal{F}^{*c}$ the set of all such solutions, i.e.,

$$\mathcal{F}^{*c} = \{(x,s) \in \mathcal{F}^* : [x]_i + [s]_i > 0,\ i = 1,2,\ldots,n\}.$$

It is well known that there is a unique partition

$$B \cup N = \{1,2,\ldots,n\}, \qquad B \cap N = \emptyset,$$

such that for any $(x^*,s^*) \in \mathcal{F}^{*c}$ we have $[x^*]_i > 0$, $[s^*]_i = 0$ for all $i \in B$, and $[x^*]_i = 0$, $[s^*]_i > 0$ for all $i \in N$. Let us denote the corresponding partition of $M$ by

$$M = \begin{pmatrix} M_{BB} & M_{BN} \\ M_{NB} & M_{NN} \end{pmatrix}.$$

Also, for any vector $y \in \mathbb{R}^n$ we denote by $y_B$ the vector of components $[y]_i$, $i \in B$, and by $y_N$ the vector of components $[y]_i$, $i \in N$.
Lemma 4.1 The iteration sequence $\{(x^k,s^k)\}$ generated by Algorithm 2.3 is bounded; more precisely,

$$0 < [x^k]_i,\ [s^k]_i \ \le\ \omega_0 := (1+\alpha/\sqrt n)(1+4\kappa)(2+\lambda)\,n\mu_0\,\max_{j=1,\ldots,n}\left\{\frac{1}{[x^0]_j},\ \frac{1}{[s^0]_j}\right\}. \qquad (4.1)$$

Proof. It is easily seen from (2.17) and (3.1a) that

$$(x^k)^Ts^0 + (s^k)^Tx^0 \le (1+\alpha/\sqrt n)(1+4\kappa)(2+\lambda)\,n\mu_0,$$

wherefrom we deduce the desired result. $\Box$

Lemma 4.2 Let $z^k = (x^k,s^k)$ be generated by Algorithm 2.3. For any solution $z^* = (x^*,s^*) \in \mathcal{F}^*$, there is a constant $\omega_1$ such that

$$\left|[x^k(1)]_i - [x^*]_i\right|,\ \left|[s^k(1)]_i - [s^*]_i\right| \ \le\ \omega_1\,\frac{\|z^k - z^*\|^2}{\mu_k}, \qquad i = 1,\ldots,n, \qquad (4.2)$$

where $x^k(1) = x^k + u^k$, $s^k(1) = s^k + v^k$ is the predictor point (2.6) with steplength $\theta = 1$.

Proof. For any $z^* = (x^*,s^*) \in \mathcal{F}^*$, using (2.4) we have

$$\begin{pmatrix} S^k & X^k \\ M & -I \end{pmatrix}\begin{pmatrix} x^k(1) - x^* \\ s^k(1) - s^* \end{pmatrix} = \begin{pmatrix} (X^k - X^*)(s^k - s^*) \\ 0 \end{pmatrix}.$$

Applying Lemma 2.2 to the above linear system (with $b = 0$), we obtain

$$\|D^k(x^k(1) - x^*)\| \le \sqrt{1+2\kappa}\;\|(X^kS^k)^{-1/2}(X^k - X^*)(s^k - s^*)\| \le \frac{\sqrt{1+2\kappa}\,\|x^k - x^*\|\,\|s^k - s^*\|}{\sqrt{(1-\alpha)\mu_k}} \le \frac{\sqrt{1+2\kappa}}{2\sqrt{1-\alpha}}\,\frac{\|z^k - z^*\|^2}{\sqrt{\mu_k}},$$

where $D^k = (X^k)^{-1/2}(S^k)^{1/2} = \mathrm{diag}(d)$. Thus

$$\left|[x^k(1)]_i - [x^*]_i\right| = [d]_i^{-1}\left|[d]_i([x^k(1)]_i - [x^*]_i)\right| \le \frac{[x^k]_i}{([x^k]_i[s^k]_i)^{1/2}}\,\|D^k(x^k(1) - x^*)\| \le \omega_1\,\frac{\|z^k - z^*\|^2}{\mu_k}, \qquad \text{with } \omega_1 = \frac{\sqrt{1+2\kappa}\;\omega_0}{2(1-\alpha)},$$

where $\omega_0$ is the bound from (4.1). The corresponding inequality for $s$ can be obtained similarly. $\Box$

Lemma 4.3 Let $\mathcal{F}^{*c} \ne \emptyset$. Then there is a constant $\omega_2$ such that, for all $k \ge 1$,

$$[x^k]_i \le \omega_2\mu_k,\ \ \forall i \in N; \qquad [s^k]_i \le \omega_2\mu_k,\ \ \forall i \in B. \qquad (4.3)$$

Proof. Let $(x^*,s^*) \in \mathcal{F}^{*c}$. It is easily seen from (3.1b) and the fact that $\nu_k \le \nu_1 = 1 - \bar\theta_0$ for $k \ge 1$ that

$$(x^k)^Ts^* + (s^k)^Tx^* \le (1+\alpha/\sqrt n)(1+4\kappa)\left(\frac{1+\nu_1}{1-\nu_1} + \lambda\right)n\mu_k \le (1+\alpha/\sqrt n)(1+4\kappa)\left(\frac{2-\bar\theta_0}{\bar\theta_0} + \lambda\right)n\mu_k.$$

Therefore

$$[x^k]_i \le \frac{(1+\alpha/\sqrt n)(1+4\kappa)((2-\bar\theta_0)/\bar\theta_0 + \lambda)\,n\mu_k}{[s^*]_i},\ \ \forall i \in N, \qquad \text{and} \qquad [s^k]_i \le \frac{(1+\alpha/\sqrt n)(1+4\kappa)((2-\bar\theta_0)/\bar\theta_0 + \lambda)\,n\mu_k}{[x^*]_i},\ \ \forall i \in B.$$

Hence the desired result holds with

$$\omega_2 = n(1+\alpha/\sqrt n)(1+4\kappa)\left(\frac{2-\bar\theta_0}{\bar\theta_0} + \lambda\right)\max\left\{\max_{i\in B}\frac{1}{[x^*]_i},\ \max_{i\in N}\frac{1}{[s^*]_i}\right\}. \qquad \Box$$
Lemma 4.4 Suppose that $\mathcal{F}^{*c} \ne \emptyset$ and let $z^k = (x^k,s^k)$ be generated by Algorithm 2.3. Then there is a constant $\omega_3$ such that for each $k$ there is a solution $\bar z^k \in \mathcal{F}^*$ with

$$\|z^k - \bar z^k\| \le \omega_3\,\mu_k. \qquad (4.4)$$

Proof. Consider the following problem:

$$M_{BB}x_B = -q_B, \qquad M_{NB}x_B - s_N = -q_N, \qquad x_N = 0, \qquad s_B = 0, \qquad x_B \ge 0,\ s_N \ge 0. \qquad (4.5)$$

Under the assumption that $\mathcal{F}^{*c} \ne \emptyset$, (4.5) has a solution, and the solution set of (4.5) is $\mathcal{F}^*$. By Hoffman's lemma [1], there is a constant $\omega_4$, independent of $k$, such that for any $z^k$ there is a $\bar z^k \in \mathcal{F}^*$ satisfying

$$\|z^k - \bar z^k\| \le \omega_4\left\|\left(M_{BB}x_B^k + q_B,\ M_{NB}x_B^k + q_N - s_N^k,\ x_N^k,\ s_B^k\right)\right\| \le \omega_4\left\|\left(-M_{BN}x_N^k + s_B^k,\ -M_{NN}x_N^k,\ x_N^k,\ s_B^k\right)\right\| + \omega_4\|r^k\|. \qquad (4.6)$$

We have

$$\|r^k\| = \nu_k\|r^0\| = \frac{\|r^0\|}{\mu_0}\,\mu_k. \qquad (4.7)$$

Finally, (4.4) follows from Lemma 4.3 and (4.6)-(4.7). $\Box$
Lemma 4.5 Let $z^k = (x^k,s^k)$ be generated by Algorithm 2.3. Then

$$\left|[u^k]_i\right| \le \omega_5\,\mu_k, \qquad \left|[v^k]_i\right| \le \omega_5\,\mu_k, \qquad \text{with } \omega_5 = (1+\omega_1\omega_3)\,\omega_3. \qquad (4.8)$$

Proof. Let $\bar z^k = (\bar x^k,\bar s^k) \in \mathcal{F}^*$ satisfy (4.4). From Lemma 4.2 and Lemma 4.4 we deduce that

$$\left|[u^k]_i\right| \le \left|[x^k]_i + [u^k]_i - [\bar x^k]_i\right| + \left|[\bar x^k]_i - [x^k]_i\right| = \left|[x^k(1)]_i - [\bar x^k]_i\right| + \left|[\bar x^k]_i - [x^k]_i\right| \le \omega_1\omega_3^2\,\mu_k + \omega_3\,\mu_k \le (1+\omega_1\omega_3)\,\omega_3\,\mu_k = \omega_5\,\mu_k.$$

The corresponding inequality for $|[v^k]_i|$ is obtained in a similar manner. $\Box$
We end the paper by stating and proving our quadratic convergence result.

Theorem 4.6 If the linear complementarity problem (1.1) has a strictly complementary solution, then there are two constants $\eta$ and $\zeta$, independent of $k$, such that the points produced by Algorithm 2.3 satisfy

$$(x^{k+1})^Ts^{k+1} \le \eta\left((x^k)^Ts^k\right)^2, \qquad \|r^{k+1}\| \le \zeta\,\|r^k\|^2, \qquad k \ge 1. \qquad (4.9)$$

Proof. From (2.11), (3.8), (3.6) and (4.8) it follows that

$$\bar\theta_k \ge 1 - \frac{\delta_k}{\beta-\alpha} \ge 1 - \hat\eta\,\mu_k, \qquad \text{with } \hat\eta = \frac{\sqrt n\,\omega_5^2}{\beta-\alpha},$$

which gives $\mu_{k+1} = (1-\bar\theta_k)\mu_k \le \hat\eta\,\mu_k^2$. From (4.7) and (2.18) we see that (4.9) holds with $\eta = \hat\eta(1+\alpha/\sqrt n)/[n(1-\alpha/n)^2]$ and $\zeta = \hat\eta\,\mu_0/\|r^0\|$. $\Box$
References

[1] A. J. Hoffman. On approximate solutions of systems of linear inequalities. Journal of Research of the National Bureau of Standards, 49:263-265, 1952.

[2] J. Ji and F. A. Potra. An infeasible-interior-point method for the $P_*$-matrix LCP. Reports on Computational Mathematics 52, Department of Mathematics, The University of Iowa, Iowa City, IA 52242, USA, February 1994.

[3] M. Kojima, N. Megiddo, T. Noma, and A. Yoshise. A unified approach to interior point algorithms for linear complementarity problems. Lecture Notes in Computer Science, 538, Springer-Verlag, 1991.

[4] J. Miao. A quadratically convergent $O((1+\kappa)\sqrt n\,L)$-iteration algorithm for the $P_*(\kappa)$-matrix linear complementarity problem. Research Report RRR 93, RUTCOR - Rutgers Center for Operations Research, Rutgers University, P.O. Box 5063, New Brunswick, NJ, USA, 1993.

[5] S. Mizuno, F. Jarre, and J. Stoer. A unified approach to infeasible-interior-point algorithms via geometrical linear complementarity problems. Preprint Nr. 213, Mathematisches Institut der Universität Würzburg, 97074 Würzburg, Germany, April 1994.

[6] F. A. Potra. An $O(nL)$ infeasible-interior-point algorithm for LCP with quadratic convergence. Reports on Computational Mathematics 50, Department of Mathematics, The University of Iowa, Iowa City, IA 52242, USA, January 1994.

[7] F. A. Potra and R. Sheng. A modified $O(nL)$ infeasible-interior-point algorithm for LCP with quadratic convergence. Reports on Computational Mathematics 54, Department of Mathematics, The University of Iowa, Iowa City, IA 52242, USA, April 1994.

[8] J. Stoer. The complexity of an infeasible interior-point path-following method for the solution of linear programs. Optimization Methods and Software, 3:1-12, 1994.