REPORTS ON COMPUTATIONAL MATHEMATICS, NO. 86/1996, DEPARTMENT OF MATHEMATICS, THE UNIVERSITY OF IOWA
Superlinear Convergence of Interior-Point Algorithms for Semidefinite Programming
Florian A. Potra and Rongqin Sheng†
April, 1996 (Revised May, 1996)
Abstract
We prove the superlinear convergence of the primal-dual infeasible-interior-point path-following algorithm proposed recently by Kojima, Shida and Shindoh and the present authors, under two conditions: (1) the SDP problem has a strictly complementary solution, and (2) the size of the central path neighborhood approaches zero. The nondegeneracy condition suggested by Kojima, Shida and Shindoh is not used in our analysis. Our result implies that the modified algorithm of Kojima, Shida and Shindoh, which enforces condition (2) by using additional corrector steps, has superlinear convergence under the standard assumption of strict complementarity. Finally, we point out that condition (2) can be made weaker and show the superlinear convergence under the strict complementarity assumption and a weaker condition than (2).
Key Words: semidefinite programming, path-following, infeasible-interior-point algorithm, polynomiality, superlinear convergence. Abbreviated Title: Superlinear convergence of algorithms for SDP.
† This work was supported in part by NSF Grant DMS 9305760. Department of Mathematics, University of Iowa, Iowa City, IA 52242, USA.
1 Introduction

Many primal-dual interior-point algorithms have been proposed recently for solving semidefinite programming (SDP) problems (cf. [1, 2, 3, 4, 5, 6, 7, 9, 10, 11, 12, 13, 14, 15]). Most of these algorithms use one of the following three search directions: the Kojima-Shindoh-Hara direction [6], the Alizadeh-Haeberly-Overton direction [1] and the Nesterov-Todd direction [10]. Using different directions leads to different global and local convergence behaviors of the algorithms. The algorithm proposed by Kojima, Shida and Shindoh [3] and the present authors [12] uses the Kojima-Shindoh-Hara search direction and has polynomial complexity. Also, the present authors [12] proposed a sufficient condition for the superlinear convergence of the algorithm, while Kojima, Shida and Shindoh [3] established the superlinear convergence under the following three assumptions: (A) SDP has a strictly complementary solution; (B) SDP is nondegenerate in the sense that the Jacobian matrix of its KKT system is nonsingular; (C) the iterates converge tangentially to the central path in the sense that the size of the neighborhood in which the iterates reside must approach zero. More recently, Kojima, Shida and Shindoh [5] proposed a predictor-corrector algorithm using the Alizadeh-Haeberly-Overton search direction, and proved the quadratic convergence of the algorithm under assumptions (A) and (B), but the algorithm does not seem to be polynomial. Using the Nesterov-Todd search direction, Luo, Sturm and Zhang [7] investigated a symmetric primal-dual path-following algorithm presented in [13], proved the superlinear convergence under assumptions (A) and (C), and then dropped (C) by enforcing it in later iterations. The Kojima-Shindoh-Hara search direction is easy to compute. In a recent paper, Zhang [15] proposed a simple scheme for computing the Kojima-Shindoh-Hara direction while proposing an infeasible-interior-point algorithm for SDP.

The purpose of this paper is to establish the superlinear convergence of the algorithm proposed by Kojima, Shida and Shindoh [3] and the present authors [12] under assumptions (A) and (C). In other words, the nondegeneracy condition (B) is not necessary for the superlinear convergence of the algorithm. Our analysis is based on our previous paper [12] and, in particular, on the sufficient condition for superlinear convergence established there. We show that under assumptions (A) and (C) that sufficient condition is satisfied. We also show that the modified algorithm of Kojima, Shida and Shindoh [4], which enforces condition
(C), is superlinearly convergent under the standard assumption (A) only. Finally, we point out that condition (C) can be made weaker and show the superlinear convergence under the strict complementarity assumption and a weaker assumption than (C).

The following notation and terminology are used throughout the paper:

$\mathbb{R}^p$: the $p$-dimensional Euclidean space;
$\mathbb{R}^p_+$: the nonnegative orthant of $\mathbb{R}^p$;
$\mathbb{R}^p_{++}$: the positive orthant of $\mathbb{R}^p$;
$\mathbb{R}^{p \times q}$: the set of all $p \times q$ matrices with real entries;
$S^p$: the set of all $p \times p$ symmetric matrices;
$S^p_+$: the set of all $p \times p$ symmetric positive semidefinite matrices;
$S^p_{++}$: the set of all $p \times p$ symmetric positive definite matrices;
$[M]_{ij}$: the $(i,j)$-th entry of a matrix $M$;
$\mathrm{Tr}(M)$: the trace of a $p \times p$ matrix, $\mathrm{Tr}(M) = \sum_{i=1}^p [M]_{ii}$;
$M \succeq 0$: $M$ is positive semidefinite;
$M \succ 0$: $M$ is positive definite;
$\lambda_i(M),\ i = 1,\dots,n$: the eigenvalues of $M \in S^n$;
$\lambda_{\max}(M),\ \lambda_{\min}(M)$: the largest, smallest eigenvalue of $M \in S^n$;
$G \bullet H \equiv \mathrm{Tr}(G^T H)$;
$\|\cdot\|$: Euclidean norm of a vector and the corresponding operator norm of a matrix, i.e., $\|y\| \equiv \sqrt{\sum_{i=1}^p y_i^2}$, $\|M\| \equiv \max\{\|My\| : \|y\| = 1\}$;
$\|M\|_F \equiv \sqrt{\sum_{i=1}^p \sum_{j=1}^q [M]_{ij}^2}$, $M \in \mathbb{R}^{p \times q}$: Frobenius norm of a matrix;
$M^k = o(1)$: $\|M^k\| \to 0$ as $k \to \infty$;
$M^k = O(1)$: $\|M^k\|$ is bounded;
$M^k = \Theta(1)$: $1/\chi \le \|M^k\| \le \chi$ for some constant $\chi > 0$;
$M^k = o(\mu_k)$: $M^k/\mu_k = o(1)$;
$M^k = O(\mu_k)$: $M^k/\mu_k = O(1)$;
$M^k = \Theta(\mu_k)$: $M^k/\mu_k = \Theta(1)$.
2 Semidefinite programming

We consider the semidefinite programming (SDP) problem
$$(P) \qquad \min\{C \bullet X : A_i \bullet X = b_i,\ i = 1,\dots,m,\ X \succeq 0\}, \tag{2.1}$$
and its associated dual problem
$$(D) \qquad \max\Big\{b^T y : \sum_{i=1}^m y_i A_i + S = C,\ S \succeq 0\Big\}, \tag{2.2}$$
where $C \in \mathbb{R}^{n \times n}$, $A_i \in \mathbb{R}^{n \times n}$, $i = 1,\dots,m$, and $b = (b_1,\dots,b_m)^T \in \mathbb{R}^m$ are given data, and $X \in S^n_+$, $(y, S) \in \mathbb{R}^m \times S^n_+$ are the primal and dual variables, respectively. By $G \bullet H$ we denote the trace of $G^T H$. Without loss of generality, we assume that the matrices $C$ and $A_i$, $i = 1,\dots,m$, are symmetric (otherwise, we replace $C$ by $(C + C^T)/2$ and $A_i$ by $(A_i + A_i^T)/2$). Also, for simplicity we assume that $A_i$, $i = 1,\dots,m$, are linearly independent. Throughout this paper we assume that both (2.1) and (2.2) have finite solutions and their optimal values are equal. Under this assumption, $X$ and $(y, S)$ are solutions of (2.1) and (2.2) if and only if they are solutions of the following nonlinear system:
$$A_i \bullet X = b_i, \quad i = 1,\dots,m, \tag{2.3a}$$
$$\sum_{i=1}^m y_i A_i + S = C, \tag{2.3b}$$
$$XS = 0, \quad X \succeq 0,\ S \succeq 0. \tag{2.3c}$$
We denote the feasible set of the problem (2.3) by
$$\mathcal{F} = \{(X, y, S) \in S^n_+ \times \mathbb{R}^m \times S^n_+ : (X, y, S) \text{ satisfies (2.3a) and (2.3b)}\}$$
and its solution set by $\mathcal{F}^*$, i.e.,
$$\mathcal{F}^* = \{(X, y, S) \in \mathcal{F} : X \bullet S = 0\}.$$
The residues of (2.3a) and (2.3b) are denoted by
$$R_i = b_i - A_i \bullet X, \quad i = 1,\dots,m, \tag{2.4a}$$
$$R_d = C - \sum_{i=1}^m y_i A_i - S. \tag{2.4b}$$
For any given $\epsilon > 0$ we define the set of $\epsilon$-approximate solutions of (2.3) as
$$\mathcal{F}_\epsilon = \big\{Z = (X, y, S) \in S^n_+ \times \mathbb{R}^m \times S^n_+ : X \bullet S \le \epsilon,\ |R_i| \le \epsilon,\ i = 1,\dots,m,\ \|R_d\| \le \epsilon\big\}.$$
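For concreteness, the residues (2.4a)-(2.4b), the duality gap $X \bullet S$, and the $\epsilon$-approximate test above can be evaluated in a few lines of code. The sketch below is only an illustration of these definitions (the function names and the NumPy-based setup are ours, not part of the paper), assuming the data $C$, $A_1,\dots,A_m$, $b$ are given as dense symmetric arrays.

```python
import numpy as np

def residues(X, y, S, C, A, b):
    """Residues (2.4a)-(2.4b) and the duality gap X . S = Tr(X^T S)."""
    R = np.array([b[i] - np.trace(A[i].T @ X) for i in range(len(A))])  # R_i = b_i - A_i . X
    R_d = C - sum(y[i] * A[i] for i in range(len(A))) - S               # R_d = C - sum_i y_i A_i - S
    gap = np.trace(X.T @ S)                                             # X . S
    return R, R_d, gap

def in_F_eps(X, y, S, C, A, b, eps):
    """Membership test for the set of eps-approximate solutions of (2.3)."""
    R, R_d, gap = residues(X, y, S, C, A, b)
    return gap <= eps and np.all(np.abs(R) <= eps) and np.linalg.norm(R_d, 2) <= eps
```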
3 The infeasible-interior-point algorithm

In a recent paper [12], we proposed an infeasible-interior-point algorithm for solving (2.3), which generalizes the interior-point method for linear programming proposed by Mizuno, Todd and Ye [8]. The algorithm performs in a neighborhood of the infeasible central path
$$\mathcal{C}(\tau) = \big\{Z = (X, y, S) \in S^n_{++} \times \mathbb{R}^m \times S^n_{++} : XS = \tau I,\ R_i = (\tau/\tau_0)R_i^0,\ i = 1,\dots,m,\ R_d = (\tau/\tau_0)R_d^0\big\},$$
where $R_i^0$ and $R_d^0$ are the residues at the initial point $(X^0, y^0, S^0)$. The positive parameter $\tau$ is driven to zero, and therefore the residues are also driven to zero at the same rate as $\tau$. The iterates reside in the following neighborhood of the above central path:
$$\mathcal{N}(\beta, \tau) = \big\{(X, y, S) \in S^n_{++} \times \mathbb{R}^m \times S^n_{++} : \|X^{1/2} S X^{1/2} - \tau I\|_F \le \beta\tau\big\} = \Big\{(X, y, S) \in S^n_{++} \times \mathbb{R}^m \times S^n_{++} : \Big(\sum_{i=1}^n \big(\lambda_i(XS) - \tau\big)^2\Big)^{1/2} \le \beta\tau\Big\},$$
where $\beta$ is a constant such that $0 < \beta < 1$. Throughout the paper we also use the notation
$$\mu = (X \bullet S)/n. \tag{3.1}$$
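The membership test for $\mathcal{N}(\beta, \tau)$ amounts to one symmetric eigendecomposition. The following minimal sketch is our own illustration of the definition (with $X$ assumed symmetric positive definite), not code from the paper.

```python
import numpy as np

def centrality(X, S, tau):
    """Return ||X^{1/2} S X^{1/2} - tau*I||_F, the proximity measure defining N(beta, tau)."""
    w, Q = np.linalg.eigh(X)                     # X = Q diag(w) Q^T with w > 0
    X_half = Q @ np.diag(np.sqrt(w)) @ Q.T       # symmetric square root X^{1/2}
    n = X.shape[0]
    return np.linalg.norm(X_half @ S @ X_half - tau * np.eye(n), 'fro')

def in_neighborhood(X, S, tau, beta):
    """Check the centrality condition of N(beta, tau); equivalently,
    (sum_i (lambda_i(XS) - tau)^2)^{1/2} <= beta * tau."""
    return centrality(X, S, tau) <= beta * tau
```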
The algorithm depends on two positive parameters $\alpha$, $\beta$ satisfying the inequalities
$$\frac{\beta^2}{2(1-\beta)^2} < \alpha < 1 - \beta < 1. \tag{3.2}$$
For example, $\alpha = 0.25$, $\beta = 0.41$ verify (3.2). The search direction $(U, w, V)$ is computed by solving the following linear system:
$$X^{-1/2}(XV + US)X^{1/2} + X^{1/2}(VX + SU)X^{-1/2} = 2\gamma\mu I - 2X^{1/2} S X^{1/2}, \tag{3.3a}$$
$$A_i \bullet U = (1-\gamma)R_i, \quad i = 1,\dots,m, \tag{3.3b}$$
$$\sum_{i=1}^m w_i A_i + V = (1-\gamma)R_d. \tag{3.3c}$$
Algorithm 3.1
k ← 0; (X, y, S) ← $(X^0, y^0, S^0)$;
Repeat until $(X, y, S) \in \mathcal{F}_\epsilon$ or $\mu$ is sufficiently small:

(Predictor step)
Solve the linear system (3.3) with $\gamma = 0$;
Compute $\theta \in [\hat{\theta}, \bar{\theta}]$, where
$$\hat{\theta} = \frac{2}{\sqrt{1 + 4\delta/(\beta - \alpha)} + 1}, \tag{3.4}$$
$$\delta = \frac{1}{\mu}\,\|X^{-1/2} U V X^{1/2}\|_F, \tag{3.5}$$
$$\bar{\theta} = \max\big\{\tilde{\theta} \in [0,1] : \|X(\theta)^{1/2} S(\theta) X(\theta)^{1/2} - (1-\theta)\mu I\|_F \le \beta(1-\theta)\mu,\ \forall\,\theta \in [0,\tilde{\theta}]\big\},$$
and $(X(\theta), S(\theta)) = (X, S) + \theta(U, V)$;
(X, y, S) ← (X, y, S) + $\theta$(U, w, V);
if $\theta = 1$, then report $(X, y, S) \in \mathcal{F}^*$ and terminate;

(Corrector step)
Solve the linear system (3.3) with $\gamma = 1$;
(X, y, S) ← (X, y, S) + (U, w, V);
k ← k + 1; $(X^k, y^k, S^k)$ ← (X, y, S).
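Given the predictor directions $(U, w, V)$ obtained from (3.3) with $\gamma = 0$, the quantities (3.4)-(3.5) are cheap to evaluate. The sketch below is our illustration only; it uses the symmetric square root of $X$ and follows our reconstruction of the parameter names $\alpha$, $\beta$ in (3.2).

```python
import numpy as np

def predictor_steplength(X, U, V, mu, alpha, beta):
    """delta from (3.5) and theta_hat from (3.4) for the predictor step of Algorithm 3.1."""
    w, Q = np.linalg.eigh(X)
    X_half = Q @ np.diag(np.sqrt(w)) @ Q.T             # X^{1/2}
    X_inv_half = Q @ np.diag(1.0 / np.sqrt(w)) @ Q.T   # X^{-1/2}
    delta = np.linalg.norm(X_inv_half @ U @ V @ X_half, 'fro') / mu        # (3.5)
    theta_hat = 2.0 / (np.sqrt(1.0 + 4.0 * delta / (beta - alpha)) + 1.0)  # (3.4)
    return theta_hat, delta
```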
For each $\epsilon \ge 0$, let $K_\epsilon$ denote the number of steps needed by the above algorithm to terminate with $(X, y, S) \in \mathcal{F}_\epsilon$. Usually we have $K_0 = \infty$.

Theorem 3.2 ([12], Theorem 2.6) Let $\epsilon \ge 0$ be given. Then for any integer $0 \le k < K_\epsilon$, Algorithm 3.1 defines a triple
$$(X^k, y^k, S^k) \in \mathcal{N}(\beta, \tau_k) \tag{3.6}$$
and the corresponding residuals satisfy
$$R_d^k = \nu_k R_d^0, \qquad R_i^k = \nu_k R_i^0, \quad i = 1,\dots,m, \tag{3.7}$$
where
$$\tau_k = \nu_k \tau_0, \tag{3.8}$$
$$(1-\beta)\,\tau_k \le \mu_k = (X^k \bullet S^k)/n \le (1+\beta)\,\tau_k, \tag{3.9}$$
$$\nu_0 = 1, \qquad \nu_k = \prod_{j=0}^{k-1}(1 - \theta_j).$$

It is proved in [12] that Algorithm 3.1 is globally convergent with polynomial complexity.
4 Local convergence analysis
Definition 4.1 A triple $(X^*, y^*, S^*) \in \mathcal{F}^*$ is called a strictly complementary solution of (2.3) if $X^* + S^* \succ 0$.

Assumption 1. The SDP problem has a strictly complementary solution $(X^*, y^*, S^*)$.

Assumption 2. $\displaystyle\lim_{k\to\infty} \|(X^k)^{1/2} S^k (X^k)^{1/2} - \mu_k I\|_F/\mu_k = 0.$
Let $Q = (q_1, \dots, q_n)$ be an orthogonal matrix such that $q_1, \dots, q_n$ are eigenvectors of $X^*$ and $S^*$, and define
$$I_B = \{i : q_i^T X^* q_i > 0\}, \qquad I_N = \{i : q_i^T S^* q_i > 0\}.$$
It is easily seen that $I_B \cup I_N = \{1, 2, \dots, n\}$. For simplicity, let us assume that
$$Q^T X^* Q = \begin{pmatrix} \Lambda_B & 0 \\ 0 & 0 \end{pmatrix}, \qquad Q^T S^* Q = \begin{pmatrix} 0 & 0 \\ 0 & \Lambda_N \end{pmatrix},$$
where $\Lambda_B$ and $\Lambda_N$ are diagonal matrices. Here and in the sequel, if we write a matrix $M$ in the block form
$$M = \begin{pmatrix} M_{11} & M_{12} \\ M_{21} & M_{22} \end{pmatrix},$$
then we assume that the dimensions of $M_{11}$ and $M_{22}$ are $|I_B| \times |I_B|$ and $|I_N| \times |I_N|$, respectively.
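The index sets $I_B$ and $I_N$ can be computed from any strictly complementary pair. The sketch below is our own illustration of Definition 4.1 and of the eigenbasis $Q$ (not an algorithm from the paper); it uses the fact that $X^* S^* = 0$ makes $X^*$ and $S^*$ commute, so that $X^* - S^*$ has a common eigenbasis with both.

```python
import numpy as np

def complementary_partition(X_star, S_star):
    """Common eigenbasis Q and index sets I_B, I_N of a strictly complementary pair.

    X* S* = 0 with X*, S* symmetric PSD implies the two matrices commute; the
    eigenvectors of X* - S* diagonalize both, with positive eigenvalues giving
    I_B (q_i^T X* q_i > 0) and negative eigenvalues giving I_N (q_i^T S* q_i > 0).
    """
    d, Q = np.linalg.eigh(X_star - S_star)
    order = np.argsort(-d)        # place the I_B block first, matching the paper's convention
    d, Q = d[order], Q[:, order]
    I_B = [i for i in range(len(d)) if d[i] > 0]
    I_N = [i for i in range(len(d)) if d[i] < 0]
    return Q, I_B, I_N
```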
The following lemma can be derived from Lemma 4.4 in [12].

Lemma 4.2 If Assumption 1 is satisfied, then we have
$$Q^T (X^k)^{1/2} Q = \begin{pmatrix} O(1) & O(\sqrt{\mu_k}) \\ O(\sqrt{\mu_k}) & O(\sqrt{\mu_k}) \end{pmatrix}, \qquad Q^T (X^k)^{-1/2} Q = \begin{pmatrix} O(1) & O(1) \\ O(1) & O(1/\sqrt{\mu_k}) \end{pmatrix}, \tag{4.1}$$
$$Q^T (S^k)^{1/2} Q = \begin{pmatrix} O(\sqrt{\mu_k}) & O(\sqrt{\mu_k}) \\ O(\sqrt{\mu_k}) & O(1) \end{pmatrix}, \qquad Q^T (S^k)^{-1/2} Q = \begin{pmatrix} O(1/\sqrt{\mu_k}) & O(1) \\ O(1) & O(1) \end{pmatrix}, \tag{4.2}$$
$$Q^T X^k Q = \begin{pmatrix} O(1) & O(\sqrt{\mu_k}) \\ O(\sqrt{\mu_k}) & O(\mu_k) \end{pmatrix}, \qquad Q^T S^k Q = \begin{pmatrix} O(\mu_k) & O(\sqrt{\mu_k}) \\ O(\sqrt{\mu_k}) & O(1) \end{pmatrix}. \tag{4.3}$$
As in [12], we define a linear manifold:
$$\mathcal{M} \equiv \Big\{(X', y', S') \in S^n \times \mathbb{R}^m \times S^n : A_i \bullet X' = b_i,\ i = 1,\dots,m,\ \sum_{i=1}^m y_i' A_i + S' = C,\ q_i^T X' q_j = 0 \text{ if } i \text{ or } j \in I_N,\ q_i^T S' q_j = 0 \text{ if } i \text{ or } j \in I_B \Big\}. \tag{4.4}$$
It is easily seen that if $(X', y', S') \in \mathcal{M}$, then
$$Q^T X' Q = \begin{pmatrix} M'_B & 0 \\ 0 & 0 \end{pmatrix}, \qquad Q^T S' Q = \begin{pmatrix} 0 & 0 \\ 0 & M'_N \end{pmatrix}.$$
Lemma 4.3 ([12], Lemma 4.5) Under Assumption 1, $\mathcal{F}^* \subset \mathcal{M}$.

Lemma 4.4 ([12], Lemma 4.6) Under Assumption 1, every accumulation point of $(X^k, y^k, S^k)$ is a strictly complementary solution of (2.3).

Let us write
$$Q^T X^k Q = \begin{pmatrix} \hat{X}_B^k & \hat{X}_J^k \\ (\hat{X}_J^k)^T & \hat{X}_N^k \end{pmatrix}, \qquad Q^T S^k Q = \begin{pmatrix} \hat{S}_B^k & \hat{S}_J^k \\ (\hat{S}_J^k)^T & \hat{S}_N^k \end{pmatrix}. \tag{4.5}$$
From Lemma 4.2 and Lemma 4.4, we can easily get the following result.

Lemma 4.5 Under Assumption 1, $\hat{X}_B^k = \Theta(1)$, $\hat{S}_N^k = \Theta(1)$.
The next theorem describes a sufficient condition for the superlinear convergence of Algorithm 3.1. Define
$$\delta_k = \delta_k(\Gamma) = \frac{1}{\mu_k}\,\|(X^k)^{-1/2}(X^k - \bar{X}^k)(S^k - \bar{S}^k)(X^k)^{1/2}\|_F, \tag{4.6}$$
where $(\bar{X}^k, \bar{y}^k, \bar{S}^k)$ is the solution of the following minimization problem:
$$\min\big\{\|(X^k)^{-1/2}(X^k - X')(S^k - S')(X^k)^{1/2}\|_F : (X', y', S') \in \mathcal{M},\ \|(X', S')\|_F \le \Gamma\big\}, \tag{4.7}$$
and $\Gamma$ is a constant such that $\|(X^k, S^k)\|_F \le \Gamma$, $\forall k$. Note that every accumulation point of $(X^k, y^k, S^k)$ belongs to the feasible set of the above minimization problem and the feasible set is bounded. Therefore $(\bar{X}^k, \bar{S}^k)$ exists for each $k$.
Theorem 4.6 ([12], Theorem 4.7) Under Assumption 1, if $\delta_k \to 0$ as $k \to \infty$, then Algorithm 3.1 is superlinearly convergent. Moreover, if there exists a constant $\sigma > 0$ such that $\delta_k = O(\mu_k^\sigma)$, then the convergence has Q-order at least $1 + \sigma$ in the sense that $\mu_{k+1} = O(\mu_k^{1+\sigma})$.
The quantities $\omega_k$ and $\epsilon_k$ defined below are used in the next two lemmas:
$$\omega_k = \max\{\|\hat{X}_J^k\|_F,\ \|\hat{S}_J^k\|_F,\ \mu_k\}, \tag{4.8}$$
$$\epsilon_k = \max\big\{\|(X^k)^{1/2} S^k (X^k)^{1/2} - \mu_k I\|_F/\mu_k,\ \sqrt{\mu_k}\big\}. \tag{4.9}$$
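The quantity $\epsilon_k$ in (4.9) is directly measurable along the iteration; a small helper (our illustration, reusing the symmetric square root of $X$) is shown below.

```python
import numpy as np

def epsilon_k(X, S, mu):
    """epsilon_k from (4.9): max{ ||X^{1/2} S X^{1/2} - mu*I||_F / mu , sqrt(mu) }."""
    w, Q = np.linalg.eigh(X)
    X_half = Q @ np.diag(np.sqrt(w)) @ Q.T
    n = X.shape[0]
    prox = np.linalg.norm(X_half @ S @ X_half - mu * np.eye(n), 'fro')
    return max(prox / mu, np.sqrt(mu))
```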
Lemma 4.7 Under Assumption 1, we have
$$\|(X^k - \tilde{X}^k,\ S^k - \tilde{S}^k)\|_F = O(\omega_k), \tag{4.10}$$
where $(\tilde{X}^k, \tilde{S}^k)$ is the solution of the minimization problem
$$\min_{(X', y', S') \in \mathcal{M}} \|(X^k - X',\ S^k - S')\|_F. \tag{4.11}$$
Proof. Suppose by contradiction that (4.10) does not hold, i.e., there exists a subsequence such that
$$\phi_k = \|(X^k - \tilde{X}^k,\ S^k - \tilde{S}^k)\|_F/\omega_k \to \infty \quad \text{as } k \to \infty. \tag{4.12}$$
Let us define
$$(\check{X}^k, \check{S}^k) = \frac{1}{\omega_k\phi_k}\,(X^k - \tilde{X}^k,\ S^k - \tilde{S}^k), \qquad \check{y}^k = \frac{1}{\omega_k\phi_k}\,(y^k - \tilde{y}^k). \tag{4.13}$$
It is easily seen that
$$A_i \bullet \check{X}^k = \frac{\nu_k}{\omega_k\phi_k}\, A_i \bullet (X^0 - X^*), \quad i = 1, \dots, m, \tag{4.14a}$$
$$\sum_{i=1}^m \check{y}^k_i A_i + \check{S}^k = \frac{\nu_k}{\omega_k\phi_k}\Big[\sum_{i=1}^m (y^0_i - y^*_i) A_i + S^0 - S^*\Big]. \tag{4.14b}$$
Since $\|(\check{X}^k, \check{S}^k)\|_F = 1$ and $\check{y}^k$ depends linearly on $\check{S}^k$ (cf. (4.14)), we deduce that there exists a convergent subsequence of $(\check{X}^k, \check{y}^k, \check{S}^k)$. Without loss of generality we can write
$$(\check{X}^k, \check{y}^k, \check{S}^k) \to (X', y', S').$$
Letting $k \to \infty$ in (4.14), we obtain
$$A_i \bullet X' = 0, \quad i = 1, \dots, m, \qquad \sum_{i=1}^m y'_i A_i + S' = 0. \tag{4.15}$$
From (4.3), we have for any $i, j \in I_B$,
$$|q_i^T \check{S}^k q_j| = |q_i^T S^k q_j|/(\phi_k\omega_k) \le \|\hat{S}_B^k\|_F/(\phi_k\omega_k) = O(\mu_k)/(\phi_k\omega_k) = O(1/\phi_k) = o(1),$$
which implies $q_i^T S' q_j = 0$ for all $i, j \in I_B$. Similarly, we can show that $q_i^T X' q_j = 0$ for all $i, j \in I_N$. For each pair $i, j$ with $i \in I_B,\ j \in I_N$ or $i \in I_N,\ j \in I_B$, we have
$$|q_i^T \check{S}^k q_j| = |q_i^T S^k q_j|/(\phi_k\omega_k) \le \|\hat{S}_J^k\|_F/(\phi_k\omega_k) \le 1/\phi_k = o(1),$$
which implies $q_i^T S' q_j = 0$. Similarly, $q_i^T X' q_j = 0$ for any pair $i, j$ described above. Hence,
$$(\tilde{X}^k, \tilde{y}^k, \tilde{S}^k) + t\,(X', y', S') \in \mathcal{M} \quad \text{for all } t \in \mathbb{R}.$$
Since
$$\frac{\|(X^k - (\tilde{X}^k + \omega_k\phi_k X'),\ S^k - (\tilde{S}^k + \omega_k\phi_k S'))\|_F}{\|(X^k - \tilde{X}^k,\ S^k - \tilde{S}^k)\|_F} = \frac{1}{\omega_k\phi_k}\,\|(X^k - (\tilde{X}^k + \omega_k\phi_k X'),\ S^k - (\tilde{S}^k + \omega_k\phi_k S'))\|_F = \|(\check{X}^k - X',\ \check{S}^k - S')\|_F \to 0 \quad \text{as } k \to \infty,$$
$(\tilde{X}^k, \tilde{y}^k, \tilde{S}^k)$ is not the solution of the minimization problem (4.11) for sufficiently large $k$, which is a contradiction.
Lemma 4.8 Under Assumption 1, the quantities defined by (4.5) and (4.9) satisfy the following relation:
$$\hat{X}_J^k = O(\epsilon_k\sqrt{\mu_k}), \qquad \hat{S}_J^k = O(\epsilon_k\sqrt{\mu_k}). \tag{4.16}$$
Proof. The proof is by contradiction. Suppose that (4.16) does not hold. Without loss of generality, we may assume $\hat{X}_J^k \ne O(\epsilon_k\sqrt{\mu_k})$. Then, there exists a subsequence such that
$$\psi_k = \frac{\|\hat{X}_J^k\|_F}{\epsilon_k\sqrt{\mu_k}} \to \infty, \quad \text{as } k \to \infty. \tag{4.17}$$
By (4.9), we have
$$\|(X^k)^{1/2} S^k (X^k)^{1/2} - \mu_k I\|_F \le \epsilon_k\mu_k. \tag{4.18}$$
In view of (4.1), we obtain
$$\|X^k S^k - \mu_k I\|_F \le \|(X^k)^{1/2}\|_F\,\|(X^k)^{1/2} S^k (X^k)^{1/2} - \mu_k I\|_F\,\|(X^k)^{-1/2}\|_F = O(\epsilon_k\sqrt{\mu_k}). \tag{4.19}$$
Therefore,
$$X^k S^k = O(\epsilon_k\sqrt{\mu_k}), \tag{4.20}$$
which implies
$$\begin{pmatrix} \hat{X}_B^k & \hat{X}_J^k \\ (\hat{X}_J^k)^T & \hat{X}_N^k \end{pmatrix}\begin{pmatrix} \hat{S}_B^k & \hat{S}_J^k \\ (\hat{S}_J^k)^T & \hat{S}_N^k \end{pmatrix} = O(\epsilon_k\sqrt{\mu_k}).$$
The above equation and (4.17) yield
$$\hat{X}_B^k \hat{S}_J^k + \hat{X}_J^k \hat{S}_N^k = O(\epsilon_k\sqrt{\mu_k}) = o(\|\hat{X}_J^k\|_F). \tag{4.21}$$
Hence,
$$\hat{S}_J^k = -(\hat{X}_B^k)^{-1}\hat{X}_J^k\hat{S}_N^k + (\hat{X}_B^k)^{-1}o(\|\hat{X}_J^k\|_F). \tag{4.22}$$
Since according to Lemma 4.5 we have $\hat{X}_B^k = \Theta(1)$, $\hat{S}_N^k = \Theta(1)$, it follows that
$$\hat{S}_J^k = \Theta(\|\hat{X}_J^k\|_F). \tag{4.23}$$
Now, let us choose a convergent subsequence
$$\lim_{k\to\infty} X^k = \hat{X}, \qquad \lim_{k\to\infty} S^k = \hat{S}.$$
By virtue of (4.23), we can also choose a convergent subsequence
$$\lim_{k\to\infty} \frac{\hat{X}_J^k}{\|\hat{X}_J^k\|_F} = X'_J, \qquad \lim_{k\to\infty} \frac{\hat{S}_J^k}{\|\hat{X}_J^k\|_F} = S'_J.$$
Define
$$Q^T \hat{X} Q = \begin{pmatrix} \hat{X}_B & 0 \\ 0 & 0 \end{pmatrix}, \qquad Q^T \hat{S} Q = \begin{pmatrix} 0 & 0 \\ 0 & \hat{S}_N \end{pmatrix}.$$
Upon dividing both sides of (4.21) by $\|\hat{X}_J^k\|_F$ and letting $k \to \infty$ we get
$$\hat{X}_B S'_J + X'_J \hat{S}_N = 0.$$
According to Lemma 4.4, $\hat{X}_B$ and $\hat{S}_N$ are both positive definite matrices. Note that $\|X'_J\|_F = 1$. Hence,
$$X'_J \bullet S'_J = -X'_J \bullet \big[(\hat{X}_B)^{-1} X'_J \hat{S}_N\big] = -\mathrm{Tr}\big((\hat{X}_B)^{-1} X'_J \hat{S}_N X'^T_J\big) = -\mathrm{Tr}\big((\hat{X}_B)^{-1/2} X'_J \hat{S}_N X'^T_J (\hat{X}_B)^{-1/2}\big) = -\mathrm{Tr}\big(\big[(\hat{X}_B)^{-1/2} X'_J (\hat{S}_N)^{1/2}\big]\big[(\hat{X}_B)^{-1/2} X'_J (\hat{S}_N)^{1/2}\big]^T\big) < 0. \tag{4.24}$$
Let $(\tilde{X}^k, \tilde{y}^k, \tilde{S}^k) \in \mathcal{M}$ be defined as in Lemma 4.7 and let
$$X'' = X^k - \tilde{X}^k + \nu_k(X^* - X^0), \qquad y'' = y^k - \tilde{y}^k + \nu_k(y^* - y^0), \qquad S'' = S^k - \tilde{S}^k + \nu_k(S^* - S^0),$$
where $\nu_k = \tau_k/\tau_0$. From Theorem 3.2, it is easily seen that
$$A_i \bullet X'' = 0, \quad i = 1, \dots, m, \qquad \sum_{i=1}^m A_i y''_i + S'' = 0.$$
A simple calculation shows that
$$[X^k - \tilde{X}^k + \nu_k(X^* - X^0)] \bullet [S^k - \tilde{S}^k + \nu_k(S^* - S^0)] = 0.$$
Then, by using Lemma 4.7 we obtain
$$(X^k - \tilde{X}^k) \bullet (S^k - \tilde{S}^k) = O(\omega_k\mu_k),$$
which implies
$$[Q^T(X^k - \tilde{X}^k)Q] \bullet [Q^T(S^k - \tilde{S}^k)Q] = O(\omega_k\mu_k),$$
i.e.,
$$(\hat{X}_B^k - \tilde{X}_B^k) \bullet \hat{S}_B^k + 2\,\hat{X}_J^k \bullet \hat{S}_J^k + \hat{X}_N^k \bullet (\hat{S}_N^k - \tilde{S}_N^k) = O(\omega_k\mu_k), \tag{4.25}$$
where
$$Q^T \tilde{X}^k Q = \begin{pmatrix} \tilde{X}_B^k & 0 \\ 0 & 0 \end{pmatrix}, \qquad Q^T \tilde{S}^k Q = \begin{pmatrix} 0 & 0 \\ 0 & \tilde{S}_N^k \end{pmatrix}.$$
Note that from Lemma 4.7 we have
$$\hat{X}_B^k - \tilde{X}_B^k = O(\omega_k), \qquad \hat{S}_N^k - \tilde{S}_N^k = O(\omega_k). \tag{4.26}$$
Also, from Lemma 4.2 it follows that
$$\hat{S}_B^k = O(\mu_k), \qquad \hat{X}_N^k = O(\mu_k). \tag{4.27}$$
In view of (4.25), (4.26) and (4.27), we deduce
$$\hat{X}_J^k \bullet \hat{S}_J^k = O(\omega_k\mu_k). \tag{4.28}$$
Let us observe that
$$\frac{\omega_k\mu_k}{\|\hat{X}_J^k\|_F^2} = \frac{\mu_k \max\{\|\hat{X}_J^k\|_F,\ \|\hat{S}_J^k\|_F,\ \mu_k\}}{\|\hat{X}_J^k\|_F^2} = \frac{\mu_k}{\|\hat{X}_J^k\|_F}\,\max\Big\{1,\ \frac{\|\hat{S}_J^k\|_F}{\|\hat{X}_J^k\|_F},\ \frac{\mu_k}{\|\hat{X}_J^k\|_F}\Big\} = o(1), \tag{4.29}$$
since $\|\hat{S}_J^k\|_F/\|\hat{X}_J^k\|_F$ is bounded and
$$\frac{\mu_k}{\|\hat{X}_J^k\|_F} = \frac{\sqrt{\mu_k}}{\psi_k\epsilon_k} \le \frac{1}{\psi_k} = o(1).$$
Dividing both sides of (4.28) by $\|\hat{X}_J^k\|_F^2$, recalling (4.29), and letting $k \to \infty$, we obtain
$$X'_J \bullet S'_J = 0,$$
which contradicts (4.24).

From Lemma 4.8 and (4.8), we get
$$\omega_k = O(\epsilon_k\sqrt{\mu_k}). \tag{4.30}$$
Theorem 4.9 Under Assumptions 1 and 2, Algorithm 3.1 is superlinearly convergent. Moreover, if $\|X^{1/2} S X^{1/2} - \mu I\|_F = O(\mu^{1+\sigma})$ for some constant $\sigma > 0$, then the convergence has Q-order at least $1 + \min\{\sigma, 0.5\}$.

Proof. From Lemma 4.2, Lemma 4.5 and Lemma 4.8, we deduce that
$$Q^T X^k Q = \begin{pmatrix} \Theta(1) & O(\epsilon_k\sqrt{\mu_k}) \\ O(\epsilon_k\sqrt{\mu_k}) & O(\mu_k) \end{pmatrix}, \qquad Q^T S^k Q = \begin{pmatrix} O(\mu_k) & O(\epsilon_k\sqrt{\mu_k}) \\ O(\epsilon_k\sqrt{\mu_k}) & \Theta(1) \end{pmatrix}. \tag{4.31}$$
Let $(\tilde{X}^k, \tilde{y}^k, \tilde{S}^k) \in \mathcal{M}$ be defined as in Lemma 4.7. Then, from (4.31), (4.30) and Lemma 4.7, we have
$$Q^T(X^k - \tilde{X}^k)Q = \begin{pmatrix} O(\epsilon_k\sqrt{\mu_k}) & O(\epsilon_k\sqrt{\mu_k}) \\ O(\epsilon_k\sqrt{\mu_k}) & O(\mu_k) \end{pmatrix}, \tag{4.32}$$
$$Q^T(S^k - \tilde{S}^k)Q = \begin{pmatrix} O(\mu_k) & O(\epsilon_k\sqrt{\mu_k}) \\ O(\epsilon_k\sqrt{\mu_k}) & O(\epsilon_k\sqrt{\mu_k}) \end{pmatrix}. \tag{4.33}$$
Since $(X^k, S^k)$ is bounded, so is $(\tilde{X}^k, \tilde{S}^k)$. So, we may choose $\Gamma$ such that
$$\|(X^k, S^k)\|_F \le \Gamma, \qquad \|(\tilde{X}^k, \tilde{S}^k)\|_F \le \Gamma, \qquad \forall\, k.$$
Hence,
$$\delta_k = \delta_k(\Gamma) = \frac{1}{\mu_k}\,\|(X^k)^{-1/2}(X^k - \bar{X}^k)(S^k - \bar{S}^k)(X^k)^{1/2}\|_F \le \frac{1}{\mu_k}\,\|(X^k)^{-1/2}(X^k - \tilde{X}^k)(S^k - \tilde{S}^k)(X^k)^{1/2}\|_F = \frac{1}{\mu_k}\,\big\|[Q^T(X^k)^{-1/2}Q]\,[Q^T(X^k - \tilde{X}^k)Q]\,[Q^T(S^k - \tilde{S}^k)Q]\,[Q^T(X^k)^{1/2}Q]\big\|_F.$$
In view of (4.1), (4.32) and (4.33), we have
$$[Q^T X^{-1/2} Q][Q^T(X - \tilde{X})Q][Q^T(S - \tilde{S})Q][Q^T X^{1/2} Q] = \begin{pmatrix} O(1) & O(1) \\ O(1) & O(1/\sqrt{\mu}) \end{pmatrix}\begin{pmatrix} O(\epsilon\sqrt{\mu}) & O(\epsilon\sqrt{\mu}) \\ O(\epsilon\sqrt{\mu}) & O(\mu) \end{pmatrix}\begin{pmatrix} O(\mu) & O(\epsilon\sqrt{\mu}) \\ O(\epsilon\sqrt{\mu}) & O(\epsilon\sqrt{\mu}) \end{pmatrix}\begin{pmatrix} O(1) & O(\sqrt{\mu}) \\ O(\sqrt{\mu}) & O(\sqrt{\mu}) \end{pmatrix} = \begin{pmatrix} O(\epsilon^2\mu) & O(\epsilon^2\mu^{1.5}) \\ O(\epsilon\mu) & O(\epsilon^2\mu) \end{pmatrix},$$
where for simplicity we omit the index $k$. Therefore $\delta_k = O(\epsilon_k)$, which proves the theorem by invoking Theorem 4.6.
5 The modified algorithm

In a recent paper, Kojima, Shida and Shindoh [4] proposed a modification of Algorithm 3.1 which enforces Assumption 2. By performing additional corrector steps, the iterates get more and more centered at a quadratic rate.
Algorithm 5.1
k ← 0; (X, y, S) ← $(X^0, y^0, S^0)$;
Repeat until $(X, y, S) \in \mathcal{F}_\epsilon$ or $\mu$ is sufficiently small:

(Predictor step) Do the same as in Algorithm 3.1.
(Corrector step) Repeat the corrector step in Algorithm 3.1 until $(X, S) \in \mathcal{N}(\min\{\beta, \eta\mu\}, \mu)$;
k ← k + 1; $(X^k, y^k, S^k)$ ← (X, y, S).

In Algorithm 5.1, $\eta$ is a positive constant. Kojima, Shida and Shindoh proved in [4] that Algorithm 5.1 has polynomial complexity, and is superlinearly convergent under assumptions (A) and (B). Since the local convergence results in the preceding section also apply to Algorithm 5.1, we obtain the following theorem.
Theorem 5.2 Under Assumption 1, Algorithm 5.1 is superlinearly convergent with Q-order at least 1.5.
6 A weaker assumption

Assumption 2 is quite strong, since even in the LP case (where $X^k$ and $S^k$ are diagonal) it is not satisfied in general. This raises an interesting question: is there a weaker assumption than Assumption 2 that can be expected to hold at least in some cases, such as LP? The following weaker assumption works.

Assumption 2'. The iteration sequence satisfies $\displaystyle\lim_{k\to\infty} \|X^k S^k\|/\sqrt{X^k \bullet S^k} = 0.$

Assumption 2' is equivalent to $\lim_{k\to\infty} \|X^k S^k\|/\sqrt{\mu_k} = 0$. It is easily seen from (4.19) that Assumption 2 implies Assumption 2'. In the LP case, Assumption 2' is satisfied as long as Algorithm 3.1 is convergent.
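Assumption 2' can be monitored along the iteration at negligible cost; the helper below is our own illustration (using the spectral norm for $\|X^k S^k\|$, in line with the paper's notation for matrix norms), and the ratio it returns should tend to zero.

```python
import numpy as np

def assumption_2prime_ratio(X, S):
    """||X S|| / sqrt(X . S): Assumption 2' asks that this ratio tend to zero."""
    gap = np.trace(X.T @ S)                 # X . S = n * mu
    return np.linalg.norm(X @ S, 2) / np.sqrt(gap)
```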
Theorem 6.1 Under Assumptions 1 and 2', Algorithm 3.1 is superlinearly convergent. Moreover, if $\|X^k S^k\| = O(\mu_k^{0.5+\sigma})$ for some constant $\sigma > 0$, then the convergence has Q-order at least $1 + \min\{\sigma, 0.5\}$.

Proof. Define
$$\epsilon_k' = \max\big\{\|X^k S^k\|_F/\sqrt{\mu_k},\ \sqrt{\mu_k}\big\}.$$
Then a modification of the proof of Lemma 4.8 gives
$$\hat{X}_J^k = O(\epsilon_k'\sqrt{\mu_k}), \qquad \hat{S}_J^k = O(\epsilon_k'\sqrt{\mu_k}). \tag{6.1}$$
This can be done by replacing $\epsilon_k$ by $\epsilon_k'$ and removing (4.18) and (4.19) in the proof of Lemma 4.8. Therefore,
$$\omega_k = O(\epsilon_k'\sqrt{\mu_k}). \tag{6.2}$$
In view of (6.1) and (6.2), the theorem follows by an analogous proof to that of Theorem 4.9.
Acknowledgment
The authors would like to thank Jun Ji for carefully reading the manuscript.
References

[1] F. Alizadeh, J.-P. A. Haeberly, and M. L. Overton. Primal-dual interior point methods for semidefinite programming. Working paper, 1994.

[2] C. Helmberg, F. Rendl, R. J. Vanderbei, and H. Wolkowicz. An interior-point method for semidefinite programming. Technical report, Program in Statistics and Operations Research, Princeton University, 1994.

[3] M. Kojima, M. Shida, and S. Shindoh. Global and local convergence of predictor-corrector infeasible-interior-point algorithms for semidefinite programs. Research Reports on Information Sciences B-305, Department of Information Sciences, Tokyo Institute of Technology, 2-12-1 Oh-Okayama, Meguro-ku, Tokyo 152, Japan, October 1995.

[4] M. Kojima, M. Shida, and S. Shindoh. Local convergence of predictor-corrector infeasible-interior-point algorithms for semidefinite programs. Research Reports on Information Sciences B-306, Department of Information Sciences, Tokyo Institute of Technology, 2-12-1 Oh-Okayama, Meguro-ku, Tokyo 152, Japan, December 1995.

[5] M. Kojima, M. Shida, and S. Shindoh. A predictor-corrector interior-point algorithm for the semidefinite linear complementarity problem using the Alizadeh-Haeberly-Overton search direction. Research Reports on Information Sciences B-311, Department of Information Sciences, Tokyo Institute of Technology, 2-12-1 Oh-Okayama, Meguro-ku, Tokyo 152, Japan, January 1996.

[6] M. Kojima, S. Shindoh, and S. Hara. Interior-point methods for the monotone linear complementarity problem in symmetric matrices. Research Reports on Information Sciences B-282, Department of Information Sciences, Tokyo Institute of Technology, 2-12-1 Oh-Okayama, Meguro-ku, Tokyo 152, Japan, April 1994.
[7] Z.-Q. Luo, J. F. Sturm, and S. Zhang. Superlinear convergence of a symmetric primal-dual path following algorithm for semidefinite programming. Report 9607/A, Econometric Institute, Erasmus University Rotterdam, The Netherlands, January 1996.

[8] S. Mizuno, M. J. Todd, and Y. Ye. On adaptive-step primal-dual interior-point algorithms for linear programming. Mathematics of Operations Research, 18(4):964-981, 1993.

[9] R. D. C. Monteiro. Primal-dual path following algorithms for semidefinite programming. Working paper, School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA, September 1995.

[10] Y. E. Nesterov and M. J. Todd. Primal-dual interior-point methods for self-scaled cones. Technical Report 1125, School of Operations Research and Industrial Engineering, Cornell University, Ithaca, New York 14853-3801, USA, 1995.

[11] F. A. Potra and R. Sheng. Homogeneous interior-point algorithms for semidefinite programming. Reports on Computational Mathematics 82, Department of Mathematics, The University of Iowa, Iowa City, IA 52242, USA, November 1995.

[12] F. A. Potra and R. Sheng. A superlinearly convergent primal-dual infeasible-interior-point algorithm for semidefinite programming. Reports on Computational Mathematics 78, Department of Mathematics, The University of Iowa, Iowa City, IA 52242, USA, October 1995.

[13] J. F. Sturm and S. Zhang. Symmetric primal-dual path following algorithms for semidefinite programming. Report 9554/A, Econometric Institute, Erasmus University Rotterdam, The Netherlands, 1995.

[14] L. Vandenberghe and S. Boyd. Positive definite programming. Technical report, Information Systems Laboratory, Electrical Engineering Department, Stanford University, Stanford, CA 94305, USA, July 1994.

[15] Y. Zhang. On extending primal-dual interior-point algorithms from linear programming to semidefinite programming. TR 95-20, Department of Mathematics and Statistics, University of Maryland Baltimore County, Baltimore, Maryland 21228-5398, USA, October 1995.