REPORTS ON COMPUTATIONAL MATHEMATICS, NO. 82/1995, DEPARTMENT OF MATHEMATICS, THE UNIVERSITY OF IOWA

Homogeneous Interior-Point Algorithms for Semidefinite Programming

Florian A. Potra* and Rongqin Sheng†

November, 1995

Abstract

A simple homogeneous primal-dual feasibility model is proposed for semidefinite programming (SDP) problems. Two infeasible-interior-point algorithms are applied to the homogeneous formulation. The algorithms do not need big-$M$ initialization. If the original SDP problem has a solution, then both algorithms find an $\varepsilon$-approximate solution (i.e., a solution with residual error less than or equal to $\varepsilon$) in at most $O(\sqrt{n}\,\ln(\rho^*\varepsilon_0/\varepsilon))$ steps, where $\rho^*$ is the trace norm of a solution and $\varepsilon_0$ is the residual error at the (normalized) starting point. A simple way of monitoring possible infeasibility of the original SDP problem is provided such that in at most $O(\sqrt{n}\,\ln(\rho\varepsilon_0/\varepsilon))$ steps either an $\varepsilon$-approximate solution is obtained, or it is determined that there is no solution with trace norm less than or equal to a given number $\rho > 0$.

Key Words: semidefinite programming, homogeneous interior-point algorithm, polynomial complexity.

Abbreviated Title: Homogeneous algorithms for SDP.

* This work was supported in part by NSF Grant DMS 9305760.
† Department of Mathematics, University of Iowa, Iowa City, IA 52242, USA.


1 Introduction

This paper is concerned with the semidefinite programming (SDP) problem:

(P)  $\min\{C \bullet X : A_i \bullet X = b_i,\ i = 1, \dots, m,\ X \succeq 0\}$,  (1.1)

and its associated dual problem:

(D)  $\max\{b^T y : \sum_{i=1}^m y_i A_i + S = C,\ S \succeq 0\}$,  (1.2)

where $C \in \mathbb{R}^{n \times n}$, $A_i \in \mathbb{R}^{n \times n}$, $i = 1, \dots, m$, $b = (b_1, \dots, b_m)^T \in \mathbb{R}^m$ are given data, and $X \in S_+^n$, $(y, S) \in \mathbb{R}^m \times S_+^n$ are the primal and dual variables, respectively. By $G \bullet H$ we denote the trace of $G^T H$. Without loss of generality, we assume that the matrices $C$ and $A_i$, $i = 1, \dots, m$, are symmetric (otherwise, replace $C$ by $(C + C^T)/2$ and $A_i$ by $(A_i + A_i^T)/2$). Also, for simplicity we assume that $A_i$, $i = 1, \dots, m$, are linearly independent. We consider the primal-dual SDP problem:

$A_i \bullet X = b_i,\ i = 1, \dots, m$,  (1.3a)
$\sum_{i=1}^m y_i A_i + S = C$,  (1.3b)
$X \bullet S = 0,\quad X \succeq 0,\ S \succeq 0$.  (1.3c)

Recently, Kojima, Shindoh and Hara [4], Nesterov and Todd [8], and Monteiro [5] extended some interior-point methods from LP to SDP. In the latter paper Monteiro developed a new formulation of the primal-dual search direction originally introduced in [4]. All above mentioned methods, with the exception of the infeasible-interior-point potential-reduction method of Kojima, Shindoh and Hara [4], require a strictly feasible starting point and therefore are feasible-interior-point methods. Nesterov [7] proposed several infeasible-interior-point algorithms for nonlinear conic problems, which include SDP, by trying to find a recession direction of a shifted homogeneous primal-dual problem. More recently, Zhang [13], Kojima, Shida and Shindoh [3] and Potra and Sheng [9] independently proposed new infeasible-interior-point path-following algorithms for SDP.

In the present paper, we propose a simple homogeneous primal-dual feasibility model for the problem (1.3). This formulation is an extension of the homogeneous and self-dual feasibility model for linear programming developed by Goldman and Tucker [1, 10]. We mention that Ye, Todd and Mizuno [12] proposed a homogeneous algorithm for linear programming, and their results were simplified by Xu, Hung and Ye [11], who developed an efficient code by using Goldman and Tucker's homogeneous formulation for linear programming and an infeasible-interior-point method to solve it. Using a similar idea, we apply two interior-point algorithms to solve the homogeneous formulation of SDP from infeasible starting points. The algorithms do not need big-$M$ initialization. If the original SDP problem has a solution of trace norm $\rho^*$, then both algorithms find an $\varepsilon$-approximate solution in at most $O(\sqrt{n}\,\ln(\rho^*\varepsilon_0/\varepsilon))$ steps. We also provide a simple way to monitor possible infeasibility of the original SDP problem such that in at most $O(\sqrt{n}\,\ln(\rho\varepsilon_0/\varepsilon))$ steps we can either get an $\varepsilon$-approximate solution or determine that there is no solution with trace norm less than or equal to a given number $\rho > 0$. This is better than the complexity of the algorithm considered in our previous paper [9], where we assumed that a solution exists and proved that the algorithm finds it in $O(n\,\ln(\varepsilon_0/\varepsilon))$ iterations if the starting point is large enough (big-$M$ initialization) or in $O(\sqrt{n}\,\ln(\varepsilon_0/\varepsilon))$ iterations if the starting point is almost feasible, where $\varepsilon_0$ denotes the residual error at the starting point.

The following notation and terminology are used throughout the paper:

$\mathbb{R}^p$: the $p$-dimensional Euclidean space;
$\mathbb{R}_+^p$: the nonnegative orthant of $\mathbb{R}^p$;
$\mathbb{R}_{++}^p$: the positive orthant of $\mathbb{R}^p$;
$\mathbb{R}^{p \times q}$: the set of all $p \times q$ matrices with real entries;
$S^p$: the set of all $p \times p$ symmetric matrices;
$S_+^p$: the set of all $p \times p$ symmetric positive semidefinite matrices;
$S_{++}^p$: the set of all $p \times p$ symmetric positive definite matrices;
$[M]_{ij}$: the $(i,j)$-th entry of a matrix $M$;
$\mathrm{Tr}(M)$: the trace of a $p \times p$ matrix $M$, $\mathrm{Tr}(M) = \sum_{i=1}^p [M]_{ii}$;
$M \succeq 0$: $M$ is positive semidefinite;
$M \succ 0$: $M$ is positive definite;
$\lambda_i(M)$, $i = 1, \dots, n$: the eigenvalues of $M \in S^n$;
$\lambda_{\max}(M)$, $\lambda_{\min}(M)$: the largest, smallest, eigenvalue of $M \in S^n$;
$G \bullet H \equiv \mathrm{Tr}(G^T H)$;
$\|\cdot\|$: Euclidean norm of a vector and the corresponding norm of a matrix, i.e., $\|y\| \equiv \sqrt{\sum_{i=1}^p y_i^2}$, $\|M\| \equiv \max\{\|My\| : \|y\| = 1\}$;
$\|M\|_F \equiv \sqrt{\sum_{i=1}^p \sum_{j=1}^q [M]_{ij}^2}$, $M \in \mathbb{R}^{p \times q}$: Frobenius norm of a matrix.
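Since every quantity in the paper is built from the trace inner product and these two matrix norms, it may help to see them evaluated explicitly. The following NumPy sketch (with arbitrary illustrative matrices, not data from the paper) computes $G \bullet H$, $\|M\|_F$ and $\|M\|$:

```python
import numpy as np

# Illustrative symmetric 3x3 matrices (arbitrary data, for demonstration only).
G = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 1.0]])
H = np.array([[1.0, 0.0, 2.0],
              [0.0, 2.0, 0.0],
              [2.0, 0.0, 1.0]])

# G . H = Tr(G^T H); for real matrices this equals the entrywise sum of products.
bullet = np.trace(G.T @ H)

# Frobenius norm ||M||_F and operator (spectral) norm ||M||.
fro = np.linalg.norm(H, 'fro')
spec = np.linalg.norm(H, 2)
```

For these matrices the inner product evaluates to 9, and the spectral norm is always bounded by the Frobenius norm.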

2 A homogeneous feasibility model

Our homogeneous model is given by the following homogeneous system:

$A_i \bullet X = b_i \tau,\ i = 1, \dots, m$,  (2.1a)
$\sum_{i=1}^m y_i A_i + S = C\tau$,  (2.1b)
$b^T y - C \bullet X = \kappa$,  (2.1c)
$X \succeq 0,\ S \succeq 0,\ \tau \ge 0,\ \kappa \ge 0$.  (2.1d)

It is easily seen that (2.1a), (2.1b) and (2.1c) imply

$X \bullet S + \tau\kappa = 0$.  (2.2)

The above formulation is an extension of the homogeneous and self-dual feasibility model for linear programming developed by Goldman and Tucker [1, 10]. It is also closely related to the shifted homogeneous primal-dual model considered by Nesterov [6, 7]. The following theorem is easily checked.

Theorem 2.1 The SDP problem (1.3) has a solution if and only if the homogeneous system (2.1) has a solution

$(X^*, y^*, S^*, \tau^*, \kappa^*) \in S_+^n \times \mathbb{R}^m \times S_+^n \times \mathbb{R}_+ \times \mathbb{R}_+$,

such that $\tau^*\kappa^* = 0$, $\tau^* > 0$.

We denote the solution set of the problem (1.3) by $F^*$. Also, let us define the solution set of (2.1) by $H^*$. Notice that $H^*$ is not empty since (2.1) has a trivial zero solution. The residues of (2.1a)-(2.1c) are denoted by:

$R_i = b_i \tau - A_i \bullet X,\ i = 1, \dots, m$,  (2.3a)
$R_d = C\tau - \sum_{i=1}^m y_i A_i - S$,  (2.3b)
$r = \kappa - b^T y + C \bullet X$.  (2.3c)

Throughout the paper we will use the notation:

$\mu = \dfrac{X \bullet S + \tau\kappa}{n+1}$.  (2.4)
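The residues (2.3a)-(2.3c) and the parameter $\mu$ of (2.4) are straightforward to evaluate. The sketch below is a hypothetical NumPy helper (with arbitrary illustrative data, not from the paper); at the normalized starting point $(I, 0, I, 1, 1)$ it gives $\mu = 1$:

```python
import numpy as np

def residuals(A_list, b, C, X, y, S, tau, kappa):
    """Residues (2.3a)-(2.3c) and mu (2.4) of the homogeneous model (2.1)."""
    # R_i = b_i*tau - A_i . X   (A_i assumed symmetric, as in the paper)
    R = np.array([bi * tau - np.trace(Ai @ X) for Ai, bi in zip(A_list, b)])
    # R_d = C*tau - sum_i y_i A_i - S
    Rd = C * tau - sum(yi * Ai for yi, Ai in zip(y, A_list)) - S
    # r = kappa - b^T y + C . X
    r = kappa - b @ y + np.trace(C @ X)
    n = X.shape[0]
    mu = (np.trace(X @ S) + tau * kappa) / (n + 1)
    return R, Rd, r, mu

# Illustrative data: n = 3, m = 2, evaluated at the starting point (I, 0, I, 1, 1).
n, m = 3, 2
A_list = [np.eye(n), np.diag([1.0, 2.0, 3.0])]
b = np.array([1.0, 2.0])
C = np.diag([1.0, 1.0, 2.0])
R, Rd, r, mu = residuals(A_list, b, C, np.eye(n), np.zeros(m), np.eye(n), 1.0, 1.0)
```

Here $R_i = b_i - \mathrm{Tr}(A_i)$, $R_d = C - I$ and $\mu = 1$, as the starting-point discussion in Section 4 predicts.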

Let us define

$H_{++} = S_{++}^n \times \mathbb{R}^m \times S_{++}^n \times \mathbb{R}_{++} \times \mathbb{R}_{++}$.

Then we can define the infeasible central path of the homogeneous problem (2.1) by

$C = \{(X, y, S, \tau, \kappa) \in H_{++} : XS = \mu I,\ \tau\kappa = \mu,\ R_i = (\mu/\mu_0) R_i^0,\ i = 1, \dots, m,\ R_d = (\mu/\mu_0) R_d^0,\ r = (\mu/\mu_0) r^0\}$.

In our algorithms, we will use the following neighborhood of this central path:

$N(\beta) = \{(X, y, S, \tau, \kappa) \in H_{++} : \left(\|X^{1/2} S X^{1/2} - \mu I\|_F^2 + (\tau\kappa - \mu)^2\right)^{1/2} \le \beta\mu\}$
$\phantom{N(\beta)} = \{(X, y, S, \tau, \kappa) \in H_{++} : \left(\sum_{i=1}^n (\lambda_i(XS) - \mu)^2 + (\tau\kappa - \mu)^2\right)^{1/2} \le \beta\mu\}$,

where $\beta$ is a constant such that $0 < \beta < 1$.
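The membership test for $N(\beta)$ can be implemented directly from the first expression, using the symmetric square root $X^{1/2}$. The following sketch is an illustrative NumPy helper (not part of the paper); on the central path, e.g. at $(X, S, \tau, \kappa) = (I, I, 1, 1)$, the proximity measure is zero:

```python
import numpy as np

def proximity(X, S, tau, kappa):
    """(||X^{1/2} S X^{1/2} - mu*I||_F^2 + (tau*kappa - mu)^2)^{1/2} and mu."""
    n = X.shape[0]
    mu = (np.trace(X @ S) + tau * kappa) / (n + 1)
    w, Q = np.linalg.eigh(X)               # X = Q diag(w) Q^T, X symmetric pos. def.
    Xh = Q @ np.diag(np.sqrt(w)) @ Q.T     # symmetric square root X^{1/2}
    M = Xh @ S @ Xh
    d = np.sqrt(np.linalg.norm(M - mu * np.eye(n), 'fro')**2 + (tau * kappa - mu)**2)
    return d, mu

# On the central path the measure vanishes, so the point lies in N(beta) for all beta.
dist, mu = proximity(np.eye(3), np.eye(3), 1.0, 1.0)
```

A point belongs to $N(\beta)$ exactly when the returned distance is at most $\beta\mu$.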

3 Search directions

The search direction $(U, w, V, \theta, \eta)$ of our algorithms is defined by the following linear system:

$X^{-1/2}(XV + US)X^{1/2} + X^{1/2}(VX + SU)X^{-1/2} = 2(\sigma\mu I - X^{1/2} S X^{1/2})$,  (3.1a)
$\tau\eta + \kappa\theta = \sigma\mu - \tau\kappa$,  (3.1b)
$A_i \bullet U - b_i \theta = (1 - \sigma) R_i,\ i = 1, \dots, m$,  (3.1c)
$\sum_{i=1}^m w_i A_i + V - C\theta = (1 - \sigma) R_d$,  (3.1d)
$\eta - b^T w + C \bullet U = -(1 - \sigma) r$,  (3.1e)

where $\sigma$ is a parameter such that $\sigma \in [0, 1]$. We note that equation (3.1a) can be written under the form

$2XVX + USX + XSU = 2(\sigma\mu X - XSX)$,

so that the solution of the linear system (3.1) does not require computation of square roots of matrices (see [5]). Throughout the paper we will use the following notation:

$\widetilde{X} = \begin{pmatrix} X & 0 \\ 0 & \tau \end{pmatrix}, \qquad \widetilde{S} = \begin{pmatrix} S & 0 \\ 0 & \kappa \end{pmatrix}$,  (3.2)

$\widetilde{U} = \begin{pmatrix} U & 0 \\ 0 & \theta \end{pmatrix}, \qquad \widetilde{V} = \begin{pmatrix} V & 0 \\ 0 & \eta \end{pmatrix}$.

Then it is easily seen that

$\mu = \dfrac{\widetilde{X} \bullet \widetilde{S}}{n+1}$.  (3.3)
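Concretely, the block-diagonal matrices of (3.2) and the identity (3.3) can be checked numerically. A small illustrative sketch (arbitrary data, not from the paper):

```python
import numpy as np

def augment(X, tau, S, kappa):
    """Block-diagonal augmented variables X~ = diag(X, tau), S~ = diag(S, kappa) of (3.2)."""
    n = X.shape[0]
    Xt = np.zeros((n + 1, n + 1)); Xt[:n, :n] = X; Xt[n, n] = tau
    St = np.zeros((n + 1, n + 1)); St[:n, :n] = S; St[n, n] = kappa
    return Xt, St

# (3.3): mu = (X~ . S~)/(n+1), which equals (X . S + tau*kappa)/(n+1).
X, S = 2.0 * np.eye(2), 0.5 * np.eye(2)
Xt, St = augment(X, 3.0, S, 1.0)
n = X.shape[0]
mu = np.trace(Xt @ St) / (n + 1)
```

Here $X \bullet S = 2$, $\tau\kappa = 3$, so $\mu = 5/3$.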

Lemma 3.1 Suppose $(X, y, S, \tau, \kappa) \in H_{++}$. Then the linear system (3.1) has a unique solution

$(U, w, V, \theta, \eta) \in S^n \times \mathbb{R}^m \times S^n \times \mathbb{R} \times \mathbb{R}$.

Proof. Let us consider the following homogeneous linear system:

$X^{-1/2}(XV + US)X^{1/2} + X^{1/2}(VX + SU)X^{-1/2} = 0$,  (3.4a)
$\tau\eta + \kappa\theta = 0$,  (3.4b)
$A_i \bullet U - b_i \theta = 0,\ i = 1, \dots, m$,  (3.4c)
$\sum_{i=1}^m w_i A_i + V - C\theta = 0$,  (3.4d)
$\eta - b^T w + C \bullet U = 0$.  (3.4e)

In view of (3.4a), (3.4c), (3.4d) and the proof of Lemma 2.1 of Monteiro [5], it follows that $U$ and $V$ are symmetric. From (3.4c)-(3.4e), we have $U \bullet V + \theta\eta = 0$, i.e., $\widetilde{U} \bullet \widetilde{V} = 0$ with $\widetilde{U}, \widetilde{V}$ given by (3.2) and (3.3). In view of (3.4a) and (3.4b), we have

$\widetilde{X}^{-1/2}(\widetilde{X}\widetilde{V} + \widetilde{U}\widetilde{S})\widetilde{X}^{1/2} + \widetilde{X}^{1/2}(\widetilde{V}\widetilde{X} + \widetilde{S}\widetilde{U})\widetilde{X}^{-1/2} = 0$,

or equivalently,

$2\widetilde{X}^{1/2}\widetilde{V}\widetilde{X}^{1/2} + [\widetilde{X}^{-1/2}\widetilde{U}\widetilde{S}\widetilde{X}^{1/2} + \widetilde{X}^{1/2}\widetilde{S}\widetilde{U}\widetilde{X}^{-1/2}] = 0$.  (3.5)

Therefore we obtain

$[2\widetilde{X}^{1/2}\widetilde{V}\widetilde{X}^{1/2}] \bullet [\widetilde{X}^{-1/2}\widetilde{U}\widetilde{S}\widetilde{X}^{1/2} + \widetilde{X}^{1/2}\widetilde{S}\widetilde{U}\widetilde{X}^{-1/2}] = 0$.  (3.6)

Relations (3.5) and (3.6) imply the relation

$\widetilde{X}^{1/2}\widetilde{V}\widetilde{X}^{1/2} = 0$,

which yields $\widetilde{V} = 0$ and therefore $\eta = 0$. Then from (3.4b), we obtain $\theta = 0$. Finally, from Lemma 2.1 of Monteiro [5] it follows that $U = V = 0$. Consequently, the coefficient matrix of the linear system (3.1), which has $n(n+1) + m + 2$ linear equations and $n(n+1) + m + 2$ unknowns, is nonsingular, which implies the existence of a unique solution of (3.1). The symmetry of $U$ and $V$ determined by (3.1) can be deduced as in Lemma 2.1 of Monteiro [5].

Lemma 3.2 If $(U, w, V, \theta, \eta)$ is a solution of the linear system (3.1), then $U \bullet V + \theta\eta = 0$.

Proof. Let us write

$(U', w', V', \theta', \eta') = (U + (1 - \sigma)X,\ w + (1 - \sigma)y,\ V + (1 - \sigma)S,\ \theta + (1 - \sigma)\tau,\ \eta + (1 - \sigma)\kappa)$.

Then from (3.1c)-(3.1e), we have

$A_i \bullet U' - b_i \theta' = 0,\ i = 1, \dots, m$,
$\sum_{i=1}^m w_i' A_i + V' - \theta' C = 0$,
$\eta' - b^T w' + C \bullet U' = 0$,

and therefore

$U' \bullet V' + \theta'\eta' = 0$.

Hence, using the notation of (3.2) and (3.3), we get

$(\widetilde{U} + (1 - \sigma)\widetilde{X}) \bullet (\widetilde{V} + (1 - \sigma)\widetilde{S}) = (U + (1 - \sigma)X) \bullet (V + (1 - \sigma)S) + (\theta + (1 - \sigma)\tau)(\eta + (1 - \sigma)\kappa) = 0$.  (3.7)

In view of (3.1a) and (3.1b), we obtain

$\frac{1}{2}\left[\widetilde{X}^{-1/2}(\widetilde{X}\widetilde{V} + \widetilde{U}\widetilde{S})\widetilde{X}^{1/2} + \widetilde{X}^{1/2}(\widetilde{V}\widetilde{X} + \widetilde{S}\widetilde{U})\widetilde{X}^{-1/2}\right] = \sigma\mu I - \widetilde{X}^{1/2}\widetilde{S}\widetilde{X}^{1/2}$.  (3.8)

By taking the trace of both sides of (3.8), we have

$\widetilde{X} \bullet \widetilde{V} + \widetilde{U} \bullet \widetilde{S} = (\sigma - 1)\widetilde{X} \bullet \widetilde{S}$.  (3.9)

Then by expanding (3.7) and using (3.9) we get $\widetilde{U} \bullet \widetilde{V} = 0$, i.e., $U \bullet V + \theta\eta = 0$.

Lemma 3.3 Let $(X, y, S, \tau, \kappa) \in N(\beta)$ for some $\beta \in [0, 1)$. Suppose that $(U, w, V, \theta, \eta)$ is a solution of the linear system (3.1). Then,

$\|\widetilde{X}^{-1/2}\widetilde{U}\widetilde{V}\widetilde{X}^{1/2}\|_F \le \dfrac{\|\sigma\mu I - \widetilde{X}^{1/2}\widetilde{S}\widetilde{X}^{1/2}\|_F^2}{2(1 - \beta)\mu}$.

Proof. Using the notation introduced in (3.2) and (3.3) and Lemma 3.2, we have

$2\widetilde{X}^{1/2}\widetilde{V}\widetilde{X}^{1/2} + \widetilde{X}^{-1/2}\widetilde{U}\widetilde{S}\widetilde{X}^{1/2} + \widetilde{X}^{1/2}\widetilde{S}\widetilde{U}\widetilde{X}^{-1/2} = 2(\sigma\mu I - \widetilde{X}^{1/2}\widetilde{S}\widetilde{X}^{1/2})$,  (3.10)

and

$\widetilde{U} \bullet \widetilde{V} = 0$.  (3.11)

Since $(X, y, S, \tau, \kappa) \in N(\beta)$, we obtain

$\|\widetilde{X}^{1/2}\widetilde{S}\widetilde{X}^{1/2} - \mu I\|_F \le \beta\mu$.  (3.12)

Then the desired inequality follows from (3.10), (3.11), (3.12) and a manipulation similar to that used in the proof of Lemma 4.4 of Monteiro [5].

4 A generic homogeneous algorithm

Generic Homogeneous Algorithm

Let $(X^0, y^0, S^0, \tau^0, \kappa^0) = (I, 0, I, 1, 1)$.
Repeat until a stopping criterion is satisfied:

- Choose $\sigma \in [0, 1]$ and compute the solution $(U, w, V, \theta, \eta)$ of the linear system (3.1).
- Compute a steplength $\alpha$ such that

$X + \alpha U \succ 0,\quad S + \alpha V \succ 0,\quad \tau + \alpha\theta > 0,\quad \kappa + \alpha\eta > 0$.

- Update the iterates

$(X^+, y^+, S^+, \tau^+, \kappa^+) = (X, y, S, \tau, \kappa) + \alpha(U, w, V, \theta, \eta)$.

We will define a stopping criterion later in this section.
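The paper leaves open how the steplength $\alpha$ is actually computed. One common heuristic (an assumption here, not the authors' prescription) is a fraction-to-the-boundary rule based on the smallest eigenvalue of $X^{-1/2} U X^{-1/2}$, sketched below for the matrix variable:

```python
import numpy as np

def max_step(X, U, frac=0.99):
    """Largest alpha (capped at 1) with X + alpha*U positive definite,
    backed off from the boundary by the factor `frac`.

    X + alpha*U > 0  iff  I + alpha * X^{-1/2} U X^{-1/2} > 0, so the boundary
    is at alpha = -1/lambda_min when lambda_min < 0, and unbounded otherwise."""
    w, Q = np.linalg.eigh(X)
    Xinvh = Q @ np.diag(1.0 / np.sqrt(w)) @ Q.T      # X^{-1/2}
    lam_min = np.linalg.eigvalsh(Xinvh @ U @ Xinvh).min()
    if lam_min >= 0:
        return 1.0
    return min(1.0, frac * (-1.0 / lam_min))
```

In practice the same rule would be applied to $S + \alpha V$, together with the analogous scalar ratios for $\tau + \alpha\theta$ and $\kappa + \alpha\eta$, taking the minimum of the four bounds. For example, with $X = I$ and $U = -I$ the boundary is $\alpha = 1$, and the rule returns $0.99$.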

Theorem 4.1 The iterates generated by the Generic Homogeneous Algorithm satisfy:

$\mu^+ = \dfrac{X^+ \bullet S^+ + \tau^+\kappa^+}{n+1} = (1 - \alpha(1 - \sigma))\mu$,  (4.1a)
$R_i^+ = (1 - \alpha(1 - \sigma))R_i,\ i = 1, \dots, m$,  (4.1b)
$R_d^+ = (1 - \alpha(1 - \sigma))R_d,\quad r^+ = (1 - \alpha(1 - \sigma))r$.  (4.1c)

Proof. By the definition of the Generic Homogeneous Algorithm, we have

$X^+ \bullet S^+ = (X + \alpha U) \bullet (S + \alpha V) = X \bullet S + \alpha(X \bullet V + U \bullet S) + \alpha^2\, U \bullet V = X \bullet S + \alpha(\sigma\mu n - X \bullet S) + \alpha^2\, U \bullet V$,

and

$\tau^+\kappa^+ = (\tau + \alpha\theta)(\kappa + \alpha\eta) = \tau\kappa + \alpha(\tau\eta + \kappa\theta) + \alpha^2\theta\eta = \tau\kappa + \alpha(\sigma\mu - \tau\kappa) + \alpha^2\theta\eta$.

Therefore, from Lemma 3.2, we obtain

$X^+ \bullet S^+ + \tau^+\kappa^+ = (1 - \alpha(1 - \sigma))(n + 1)\mu$,

which implies (4.1a). Finally, (4.1b) and (4.1c) can be easily deduced from (3.1c)-(3.1e).

By using the above theorem and the fact that $\mu_0 = 1$, we deduce the following corollary.

Corollary 4.2 Let $(X^k, y^k, S^k, \tau_k, \kappa_k)$ be generated by the Generic Homogeneous Algorithm. Then,

$R_i^k = \mu_k R_i^0,\ i = 1, \dots, m,\quad R_d^k = \mu_k R_d^0,\quad r^k = \mu_k r^0$.

Let us define a linear manifold

$H^0 = \{(X', y', S', \tau', \kappa') \in S^n \times \mathbb{R}^m \times S^n \times \mathbb{R} \times \mathbb{R} : A_i \bullet X' - \tau' b_i = 0,\ i = 1, \dots, m,\ \sum_{i=1}^m y_i' A_i + S' - \tau' C = 0,\ \kappa' - b^T y' + C \bullet X' = 0\}$.

Then it is easily seen that if $(X', y', S', \tau', \kappa') \in H^0$, then

$X' \bullet S' + \tau'\kappa' = 0$.

The following lemma shows that the iterates $(X^k, y^k, S^k, \tau_k, \kappa_k)$ are bounded.

Lemma 4.3 If $(X^k, y^k, S^k, \tau_k, \kappa_k)$ are generated by the Generic Homogeneous Algorithm, then

$\mathrm{Tr}(X^k + S^k) + \tau_k + \kappa_k = (n + 1)(1 + \mu_k)$.  (4.2)

Proof. For simplicity, we omit the index $k$. From Corollary 4.2, we have

$(X - \mu X^0,\ y - \mu y^0,\ S - \mu S^0,\ \tau - \mu\tau^0,\ \kappa - \mu\kappa^0) \in H^0$.

Therefore,

$(S - \mu S^0) \bullet (X - \mu X^0) + (\tau - \mu\tau^0)(\kappa - \mu\kappa^0) = 0$.  (4.3)

The desired result follows by expanding (4.3) and using the fact that $X^0 = S^0 = I$ and $\tau^0 = \kappa^0 = 1$.

Theorem 4.4 Let $(X^k, y^k, S^k, \tau_k, \kappa_k)$ be generated by the Generic Homogeneous Algorithm and let $\rho > 0$ be a given number. Suppose that the following two conditions are satisfied:

(a) there exists a solution $(X^*, y^*, S^*)$ of the original problem (1.3) such that $\mathrm{Tr}(X^* + S^*) \le \rho$;

(b) there exists a constant $\omega \in (0, 1)$ such that $\tau_k\kappa_k \ge \omega\mu_k$, $\forall k$.

Then,

$\tau_k \ge \dfrac{\omega}{\rho + 1},\ \forall k$.

Proof. Let $\tau^* = 1$, $\kappa^* = 0$. Obviously, $(X^*, y^*, S^*, \tau^*, \kappa^*)$ is a solution of (2.1). From the proof of Lemma 4.3, for any real number $\lambda$, we have

$(X - \mu X^0 + \lambda X^*,\ y - \mu y^0 + \lambda y^*,\ S - \mu S^0 + \lambda S^*,\ \tau - \mu\tau^0 + \lambda\tau^*,\ \kappa - \mu\kappa^0 + \lambda\kappa^*) \in H^0$.

Therefore, we get

$(S - \mu S^0 + \lambda S^*) \bullet (X - \mu X^0 + \lambda X^*) + (\tau - \mu\tau^0 + \lambda\tau^*)(\kappa - \mu\kappa^0 + \lambda\kappa^*) = 0$.

By expanding the above equality and noting that $\lambda$ is an arbitrary real number and that $X^* \bullet S^* + \tau^*\kappa^* = 0$, we obtain

$S \bullet X^* + S^* \bullet X + \tau\kappa^* + \kappa\tau^* = \mu(S^0 \bullet X^* + S^* \bullet X^0 + \tau^0\kappa^* + \kappa^0\tau^*) = \mu[\mathrm{Tr}(X^* + S^*) + 1]$.

Then, from condition (a) and $\tau^* = 1$, we deduce that

$\kappa \le \mu[\mathrm{Tr}(X^* + S^*) + 1] \le \mu(\rho + 1)$.  (4.4)

Hence, by (4.4) and condition (b), we have

$\tau \ge \dfrac{\omega\mu}{\kappa} \ge \dfrac{\omega}{\rho + 1}$.

Corollary 4.5 Under condition (b) of Theorem 4.4, suppose that $\mu_k \to 0$ as $k \to \infty$. Then

(i) the primal-dual problem (1.3) has a solution if and only if $\kappa_k \to 0$ as $k \to \infty$ and there exists a constant $\tau^* > 0$ such that $\tau_k \ge \tau^*$, $\forall k$;

(ii) (1.3) has no solution if and only if $\tau_k \to 0$ as $k \to \infty$.

Proof. Part (i) is an immediate consequence of Theorem 4.4. From Lemma 4.3, the sequence $\{\tau_k\}$ is bounded. Let us observe that if (1.3) has no solution, then $\{\tau_k\}$ does not have a positive accumulation point (otherwise, there exists a subsequence $\{(X^k/\tau_k, y^k/\tau_k, S^k/\tau_k)\}$ converging to a solution of (1.3)), and therefore $\tau_k \to 0$. Conversely, if $\tau_k \to 0$, then by Theorem 4.4 (1.3) has no solution.

Let us define the stopping criterion of the Generic Homogeneous Algorithm:

Stopping criterion.

$E_k \equiv \max\{\|R_d^k\|_F/\tau_k,\ |R_i^k|/\tau_k,\ i = 1, \dots, m,\ (X^k \bullet S^k + \tau_k\kappa_k)/\tau_k^2\} \le \varepsilon$,  (4.5)

where $\varepsilon > 0$ is the tolerance. Let us define the set of $\varepsilon$-approximate solutions of (1.3) by

$F_\varepsilon = \{(X, y, S) \in S_+^n \times \mathbb{R}^m \times S_+^n : X \bullet S \le \varepsilon,\ |b_i - A_i \bullet X| \le \varepsilon,\ i = 1, \dots, m,\ \|C - \sum_{i=1}^m y_i A_i - S\|_F \le \varepsilon\}$.

Obviously, if (4.5) is satisfied, then $(X^k/\tau_k, y^k/\tau_k, S^k/\tau_k)$ is an $\varepsilon$-approximate solution of the original problem (1.3). Let us observe that if our algorithm starts at a point that is a positive multiple of $(I, 0, I, 1, 1)$, say $\zeta(I, 0, I, 1, 1)$, then we obtain an iteration sequence

$(\bar{X}^k, \bar{y}^k, \bar{S}^k, \bar{\tau}_k, \bar{\kappa}_k) = \zeta(X^k, y^k, S^k, \tau_k, \kappa_k)$.

However, we note that

$(\bar{X}^k/\bar{\tau}_k,\ \bar{y}^k/\bar{\tau}_k,\ \bar{S}^k/\bar{\tau}_k) = (X^k/\tau_k,\ y^k/\tau_k,\ S^k/\tau_k)$.

Therefore, we see that the sequence $(X^k/\tau_k, y^k/\tau_k, S^k/\tau_k)$ does not depend on the magnitude of the starting point. This explains why we do not need a big-$M$ initialization. If we define the residual error at the point $(X, y, S, \tau, \kappa)$ by

$\mathrm{res}((X, y, S, \tau, \kappa)) = \max\{X \bullet S + \tau\kappa,\ \|R_d\|_F,\ |R_i|,\ i = 1, \dots, m\}$,  (4.6)

then for our starting point we have

$\varepsilon_0 \equiv \mathrm{res}((I, 0, I, 1, 1)) = \max\{n + 1,\ \|C - I\|_F,\ |b_i - \mathrm{Tr}(A_i)|,\ i = 1, \dots, m\}$,

while for a $(\rho + 1)$ multiple of our starting point we have

$\varepsilon(\rho) \equiv \mathrm{res}((\rho + 1)(I, 0, I, 1, 1))$  (4.7)
$= \max\{(\rho + 1)^2(n + 1),\ (\rho + 1)\|C - I\|_F,\ (\rho + 1)|b_i - \mathrm{Tr}(A_i)|,\ i = 1, \dots, m\} \le \varepsilon_0(\rho + 1)^2$.

Using the above introduced quantity we obtain the following result.

Lemma 4.6 If $\tau_k \ge \omega/(\rho + 1)$, then the following estimate holds:

$E_k \le \mu_k\,\varepsilon(\rho)/\omega^2$,

where $\varepsilon(\rho)$ is the residual error defined by (4.7).

Proof. From (4.5) and Corollary 4.2, we get

$E_k = \max\{\mu_k\|R_d^0\|_F/\tau_k,\ \mu_k|R_i^0|/\tau_k,\ i = 1, \dots, m,\ \mu_k(n + 1)/\tau_k^2\}$
$= \mu_k \max\{\|R_d^0\|_F/\tau_k,\ |R_i^0|/\tau_k,\ i = 1, \dots, m,\ (n + 1)/\tau_k^2\}$
$\le \mu_k \max\{\|R_d^0\|_F/(\omega/(\rho + 1)),\ |R_i^0|/(\omega/(\rho + 1)),\ i = 1, \dots, m,\ (n + 1)/(\omega/(\rho + 1))^2\}$
$\le (\mu_k/\omega^2)\max\{(\rho + 1)\|R_d^0\|_F,\ (\rho + 1)|R_i^0|,\ i = 1, \dots, m,\ (\rho + 1)^2(n + 1)\}$
$= \mu_k\,\varepsilon(\rho)/\omega^2$.

From Theorem 4.4 and Lemma 4.6, we see that the quantity $\tau_k$ is a useful certificate for monitoring possible infeasibility of the problem (1.3).

Corollary 4.7 Under condition (b) of Theorem 4.4, if

$\tau_k < \omega/(\rho + 1)\ \text{for some}\ k$,  (4.8)

then the primal-dual problem (1.3) has no solution $(X^*, y^*, S^*)$ such that $\mathrm{Tr}(X^* + S^*) \le \rho$.

In the next two sections, we will describe two examples of the Generic Homogeneous Algorithm, which achieve $O(\sqrt{n}\,\ln(\varepsilon_0/\varepsilon))$-iteration complexity.
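The stopping measure $E_k$ is just the residual of the rescaled point $(X^k/\tau_k, y^k/\tau_k, S^k/\tau_k)$, so it is cheap to monitor. The sketch below (an illustrative NumPy helper with made-up data, not from the paper) evaluates it; at the unscaled starting point $(I, 0, I, 1, 1)$ it reduces to $\max\{\|C - I\|_F,\ |b_i - \mathrm{Tr}(A_i)|,\ n + 1\}$:

```python
import numpy as np

def stopping_measure(A_list, b, C, X, y, S, tau, kappa):
    """E_k of (4.5), built from the residues R_i, R_d of (2.3)."""
    R = np.array([bi * tau - np.trace(Ai @ X) for Ai, bi in zip(A_list, b)])
    Rd = C * tau - sum(yi * Ai for yi, Ai in zip(y, A_list)) - S
    gap = np.trace(X @ S) + tau * kappa
    return max(np.linalg.norm(Rd, 'fro') / tau, np.abs(R).max() / tau, gap / tau**2)

# Illustrative data (n = 2, m = 1) at the starting point (I, 0, I, 1, 1):
# E_0 = max{0, |1 - Tr(I)|, n + 1} = n + 1 = 3.
n = 2
E0 = stopping_measure([np.eye(n)], np.array([1.0]), np.eye(n),
                      np.eye(n), np.zeros(1), np.eye(n), 1.0, 1.0)
```

The algorithm stops once this value drops below the tolerance $\varepsilon$ (or $\tau$ becomes small, signaling infeasibility).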

5 A predictor-corrector algorithm

In this section, we apply the recent infeasible-interior-point predictor-corrector algorithm of Potra and Sheng [9] to the homogeneous system (2.1) and show that it has $O(\sqrt{n}\,\ln(\varepsilon_0/\varepsilon))$-iteration complexity. Let $\beta, \gamma$ be two positive constants satisfying the inequalities

$\dfrac{\gamma^2}{2(1 - \gamma)^2} \le \beta < \gamma < 1$.  (5.1)

For example, $\beta = 0.25$, $\gamma = 0.41$ verify (5.1).

Algorithm 5.1

$(X, y, S, \tau, \kappa) \leftarrow (I, 0, I, 1, 1)$;
Repeat until the stopping criterion (4.5) is satisfied or $\tau$ is sufficiently small:

(Predictor step)
Solve the linear system (3.1) with $\sigma = 0$;
Compute

$\bar{\alpha} = \max\left\{\tilde{\alpha} \in [0, 1] : \left(\sum_{i=1}^n \bigl(\lambda_i(X(\alpha)S(\alpha)) - (1 - \alpha)\mu\bigr)^2 + \bigl(\tau(\alpha)\kappa(\alpha) - (1 - \alpha)\mu\bigr)^2\right)^{1/2} \le \gamma(1 - \alpha)\mu,\ \forall \alpha \in [0, \tilde{\alpha}]\right\}$,

where

$(X(\alpha), S(\alpha), \tau(\alpha), \kappa(\alpha)) = (X, S, \tau, \kappa) + \alpha(U, V, \theta, \eta)$;

$(X, y, S, \tau, \kappa) \leftarrow (X, y, S, \tau, \kappa) + \bar{\alpha}(U, w, V, \theta, \eta)$;

(Corrector step)
Solve the linear system (3.1) with $\sigma = 1$;
$(X, y, S, \tau, \kappa) \leftarrow (X, y, S, \tau, \kappa) + (U, w, V, \theta, \eta)$.

Using a proof similar to that of Lemma 2.5 of Potra and Sheng [9], we get the following lower bound for the steplength $\bar{\alpha}$:

$\bar{\alpha} \ge \dfrac{2}{\sqrt{1 + 4\delta/(\gamma - \beta)} + 1}$,  (5.2)

where

$\delta \equiv \dfrac{1}{\mu}\,\|\widetilde{X}^{-1/2}\widetilde{U}\widetilde{V}\widetilde{X}^{1/2}\|_F$.  (5.3)

Then, in view of Lemma 3.3, we obtain

$\delta \le \dfrac{\|\widetilde{X}^{1/2}\widetilde{S}\widetilde{X}^{1/2}\|_F^2}{2(1 - \beta)\mu^2} = \dfrac{\|\mu I\|_F^2 + \|\mu I - \widetilde{X}^{1/2}\widetilde{S}\widetilde{X}^{1/2}\|_F^2}{2(1 - \beta)\mu^2} \le \dfrac{(n + 1)\mu^2 + \beta^2\mu^2}{2(1 - \beta)\mu^2} = \dfrac{n + 1 + \beta^2}{2(1 - \beta)} = O(n)$,

which implies

$1 - \bar{\alpha} \le 1 - \dfrac{1}{O(\sqrt{n})}$.

Here we have used the relation

$(\mu I) \bullet [\mu I - \widetilde{X}^{1/2}\widetilde{S}\widetilde{X}^{1/2}] = 0$.

Then, from Theorem 4.1, we have

$\mu_{k+1} = (1 - \bar{\alpha}_k)\mu_k \le \left(1 - \dfrac{1}{O(\sqrt{n})}\right)\mu_k$.  (5.4)

Also, we can prove that

$(X^k, y^k, S^k, \tau_k, \kappa_k) \in N(\beta),\ \forall k$.

Hence, condition (b) of Theorem 4.4 is satisfied with $\omega = 1 - \beta$. Therefore, in view of (5.4), Theorem 4.4 and Lemma 4.6, we see that in at most $O(\sqrt{n}\,\ln(\varepsilon(\rho)/\varepsilon))$ iterations, either the inequality $\tau_k \ge \omega/(\rho + 1)$ holds all the time or it is violated for some $k \le O(\sqrt{n}\,\ln(\varepsilon(\rho)/\varepsilon))$. In the former case, we get an $\varepsilon$-approximate solution of (1.3), while in the latter case (1.3) has no solution $(X^*, y^*, S^*)$ such that $\mathrm{Tr}(X^* + S^*) \le \rho$. To summarize, we obtain the following polynomial complexity result.

Theorem 5.2 (i) If $F^*$ is not empty, then Algorithm 5.1 terminates with an $\varepsilon$-approximate solution $(X^k/\tau_k, y^k/\tau_k, S^k/\tau_k) \in F_\varepsilon$ in a finite number of steps $k = K^* < \infty$.

(ii) If $\rho^* = \mathrm{Tr}(X^* + S^*)$ for some $(X^*, y^*, S^*) \in F^*$, then $K^* = O(\sqrt{n}\,\ln(\varepsilon(\rho^*)/\varepsilon))$.

(iii) For any choice of $\rho > 0$ there is an index $k = \hat{K} = O(\sqrt{n}\,\ln(\varepsilon(\rho)/\varepsilon))$ such that either

(iiia) $(X^k/\tau_k, y^k/\tau_k, S^k/\tau_k) \in F_\varepsilon$, or,

(iiib) $\tau_k < \omega/(\rho + 1)$,

and in the latter case there is no solution $(X^*, y^*, S^*) \in F^*$ with $\rho \ge \mathrm{Tr}(X^* + S^*)$.


6 A short-step algorithm

In this section, we extend a recent short-step feasible path-following algorithm proposed by Kojima, Shindoh and Hara [4] and simplified by Monteiro [5] to solve the homogeneous system from infeasible starting points.

Algorithm 6.1

$(X, y, S, \tau, \kappa) \leftarrow (I, 0, I, 1, 1)$;
Repeat until the stopping criterion (4.5) is satisfied or $\tau$ is sufficiently small:
Solve the linear system (3.1) with $\sigma = 1 - 0.3/\sqrt{n + 1}$;
$(X, y, S, \tau, \kappa) \leftarrow (X, y, S, \tau, \kappa) + (U, w, V, \theta, \eta)$.

By an analysis analogous to that used in Theorem 4.1 of Monteiro [5], we can prove that $(X^k, y^k, S^k, \tau_k, \kappa_k) \in N(0.3)$, $\forall k$. From Theorem 4.1, we get

$\mu_{k+1} = (1 - 0.3/\sqrt{n + 1})\mu_k$.

Obviously, condition (b) of Theorem 4.4 is satisfied with $\omega = 0.7$. Hence, similarly to Theorem 5.2, we obtain the following complexity result for Algorithm 6.1.

Theorem 6.2 (i) If $F^*$ is not empty, then Algorithm 6.1 terminates with an $\varepsilon$-approximate solution $(X^k/\tau_k, y^k/\tau_k, S^k/\tau_k) \in F_\varepsilon$ in a finite number of steps $k = K^* < \infty$.

(ii) If $\rho^* = \mathrm{Tr}(X^* + S^*)$ for some $(X^*, y^*, S^*) \in F^*$, then $K^* = O(\sqrt{n}\,\ln(\varepsilon(\rho^*)/\varepsilon))$.

(iii) For any choice of $\rho > 0$ there is an index $k = \hat{K} = O(\sqrt{n}\,\ln(\varepsilon(\rho)/\varepsilon))$ such that either

(iiia) $(X^k/\tau_k, y^k/\tau_k, S^k/\tau_k) \in F_\varepsilon$, or,

(iiib) $\tau_k < \omega/(\rho + 1)$,

and in the latter case there is no solution $(X^*, y^*, S^*) \in F^*$ with $\rho \ge \mathrm{Tr}(X^* + S^*)$.
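The recurrence $\mu_{k+1} = (1 - 0.3/\sqrt{n+1})\mu_k$ gives $\mu_k = (1 - 0.3/\sqrt{n+1})^k$, so $\mu_k \le \varepsilon$ after roughly $(\sqrt{n+1}/0.3)\ln(1/\varepsilon)$ iterations, which is where the $O(\sqrt{n})$ dependence comes from. A sketch that simply counts these iterations (illustrative only, not an implementation of the algorithm):

```python
import math

def iterations(n, eps, mu0=1.0):
    """Number of short-step iterations until mu_k <= eps,
    under the update mu_{k+1} = (1 - 0.3/sqrt(n+1)) * mu_k."""
    rate = 1.0 - 0.3 / math.sqrt(n + 1)
    k, mu = 0, mu0
    while mu > eps:
        mu *= rate
        k += 1
    return k
```

For instance, with $n = 8$ the reduction factor is exactly $0.9$, and $0.9^k \le 10^{-4}$ first holds at $k = 88$, matching $\lceil \ln(10^4)/\ln(1/0.9) \rceil$.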

7 Further remarks

For simplicity of analysis, we have chosen the starting point $(I, 0, I, 1, 1)$. Indeed, we can use any starting point $(X^0, y^0, S^0, \tau^0, \kappa^0) \in H_{++}$ near the infeasible central path, because we are actually interested in the sequence $(X^k/\tau_k, y^k/\tau_k, S^k/\tau_k)$, which does not depend on the magnitude of the starting point.

As a final comment, we note that we can also extend the long-step algorithm for linear programming by Kojima, Mizuno and Yoshise [2] to the homogeneous system (2.1). By using the proof techniques and the results of Monteiro [5], we can prove that the iteration complexity of this extended algorithm is $O(n^{1.5}\ln(\varepsilon_0/\varepsilon))$.

References

[1] A. J. Goldman and A. W. Tucker. Polyhedral convex cones. In H. W. Kuhn and A. W. Tucker, editors, Linear Inequalities and Related Systems. Princeton University Press, Princeton, NJ, 1956. p. 19.

[2] M. Kojima, S. Mizuno, and A. Yoshise. A primal-dual interior point algorithm for linear programming. In N. Megiddo, editor, Progress in Mathematical Programming, pages 29-47. Springer Verlag, New York, 1989.

[3] M. Kojima, M. Shida, and S. Shindoh. Global and local convergence of predictor-corrector infeasible-interior-point algorithms for semidefinite programs. Research Reports on Information Sciences B-305, Department of Information Sciences, Tokyo Institute of Technology, 2-12-1 Oh-Okayama, Meguro-ku, Tokyo 152, Japan, October 1995.

[4] M. Kojima, S. Shindoh, and S. Hara. Interior-point methods for the monotone linear complementarity problem in symmetric matrices. Research Reports on Information Sciences B-282, Department of Information Sciences, Tokyo Institute of Technology, 2-12-1 Oh-Okayama, Meguro-ku, Tokyo 152, Japan, April 1994.

[5] R. D. C. Monteiro. Primal-dual path following algorithms for semidefinite programming. Working paper, School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA, September 1995.

[6] Y. E. Nesterov. Long-step strategies in interior point potential-reduction methods. Working paper, Department SES COMIN, Geneva University, 1993. To appear in Mathematical Programming.

[7] Y. E. Nesterov. Infeasible start interior point primal-dual methods in nonlinear programming. Working paper, Center for Operations Research and Econometrics, 1348 Louvain-la-Neuve, Belgium, 1994.


[8] Y. E. Nesterov and M. J. Todd. Primal-dual interior-point methods for self-scaled cones. Technical Report 1125, School of Operations Research and Industrial Engineering, Cornell University, Ithaca, New York 14853-3801, USA, 1995.

[9] F. A. Potra and R. Sheng. A superlinearly convergent primal-dual infeasible-interior-point algorithm for semidefinite programming. Reports on Computational Mathematics 78, Department of Mathematics, The University of Iowa, Iowa City, IA 52242, USA, October 1995.

[10] A. W. Tucker. Dual systems of homogeneous relations. In H. W. Kuhn and A. W. Tucker, editors, Linear Inequalities and Related Systems. Princeton University Press, Princeton, NJ, 1956. p. 3.

[11] X. Xu, P. Hung, and Y. Ye. A simplification of the homogeneous and self-dual linear programming algorithm and its implementation. Working paper, Department of Management Sciences, The University of Iowa, Iowa City, Iowa 52242, USA, 1993. To appear in Annals of Operations Research.

[12] Y. Ye, M. J. Todd, and S. Mizuno. An $O(\sqrt{n}L)$-iteration homogeneous and self-dual linear programming algorithm. Mathematics of Operations Research, 19(1):53-67, 1994.

[13] Y. Zhang. On extending primal-dual interior-point algorithms from linear programming to semidefinite programming. TR 95-20, Department of Mathematics and Statistics, University of Maryland Baltimore County, Baltimore, Maryland 21228-5398, USA, October 1995.
