Department of Mechanical Engineering SOPHIA UNIVERSITY Research Report No.16
Affine Scaling Algorithm Fails For Semidefinite Programming Masakazu MURAMATSU July 1996 (Revised: March 1997)
Abstract
In this paper, we introduce an affine scaling algorithm for semidefinite programming, and give an example of a semidefinite program such that the affine scaling algorithm converges to a non-optimal point. Both our program and its dual have interior feasible solutions and unique optimal solutions which satisfy strict complementarity, and they are nondegenerate everywhere.
Abbreviated Title: Affine scaling fails for SDP
Key Words: Semidefinite Programming, Affine Scaling Algorithm, Global Convergence Analysis.
Affiliation: Department of Mechanical Engineering, Sophia University
Address: 7-1, Kioi-cho, Chiyoda-ku, Tokyo 102 Japan
E-mail address:
[email protected] URL: http://www.or.me.sophia.ac.jp/~muramatu
1 Introduction

Semidefinite programming (SDP) bears a remarkable resemblance to linear programming (LP), and it is known that several interior point methods for LP and their polynomial convergence analyses can be naturally extended to SDP (Alizadeh [1], Alizadeh, Haeberly and Overton [2], Helmberg, Rendl, Vanderbei and Wolkowicz [9], Jarre [10], Kojima, Shindoh, and Hara [12], Kojima, Shida and Shindoh [13], Lin and Saigal [14], Luo, Sturm and Zhang [15], Monteiro [17], Monteiro and Zhang [19], Nesterov and Nemirovskii [20, 21], Nesterov and Todd [22, 23], Potra and Sheng [25], Sturm and Zhang [28], Vandenberghe and Boyd [31], and Zhang [34]). The affine scaling algorithm for LP was originally proposed by Dikin [5], and rediscovered by Barnes [4] and by Vanderbei, Meketon and Freedman [33] after Karmarkar [11] proposed the first polynomial-time interior point method. The affine scaling algorithm has been widely implemented and extensively studied, but unlike many other interior point methods, the question of whether the affine scaling algorithm is polynomially convergent is still open. The strongest convergence result so far is due to Tsuchiya and Muramatsu [30], which establishes global convergence of the affine scaling algorithm when the step is taken as a fixed fraction, less than or equal to 2/3, of the whole step to the boundary of the feasible region. Simpler proofs of the same global convergence results can also be found in Monteiro, Tsuchiya and Wang [18], Saigal [26], and Saigal [27]. The affine scaling algorithm, like other polynomial-time interior point methods, can also be naturally extended to SDP. Faybusovich [6] investigated the affine scaling vector field for SDP, Faybusovich [7] proposed the discrete version of the affine scaling algorithm, and Goldfarb and Scheinberg [8] proved the global convergence of the associated continuous trajectories. So far, there exists no convergence analysis for the discrete version of the affine scaling algorithm for SDP.
In this paper, we give an instance of SDP such that the affine scaling algorithm converges to a non-optimal point. We prove that for both the short step and the long step versions of the affine scaling algorithm, there exists a region of starting points such that the generated sequence converges to a non-optimal point. Our program and its dual have interior feasible solutions and unique optimal solutions which satisfy strict complementarity. Furthermore, both programs are nondegenerate in the sense of Alizadeh, Haeberly, and Overton [3].

This paper is organized as follows. In Section 2, we introduce the affine scaling algorithm for SDP. In Section 3, we give an instance of SDP and apply the affine scaling algorithm to this program. In Sections 4 and 5, we prove that the long step and the short step affine scaling algorithms, respectively, converge to a non-optimal point. In Section 6, we give some concluding remarks.
2 The Affine Scaling Algorithm for SDP
Let $S(n)$ denote the set of $n \times n$ real symmetric matrices. Consider the SDP problem
$$\min\ C \bullet X \quad \text{subject to} \quad A_i \bullet X = b_i,\ i = 1, \ldots, m,\quad X \succeq 0, \eqno(1)$$
where $C, X, A_i \in S(n)$, the operator $\bullet$ denotes the standard inner product in $S(n)$, i.e., $C \bullet X = \mathrm{tr}(CX) = \sum_{i,j} C_{ij} X_{ij}$, $X \succeq 0$ means that $X$ is positive semidefinite, and $X \succ 0$ means that $X$ is positive definite. The induced norm $\|X\|_F := \sqrt{X \bullet X} = \sqrt{\mathrm{tr}(X^t X)}$ is called the Frobenius norm. The dual of (1) is
$$\max\ b^t u \quad \text{subject to} \quad S + \sum_{i=1}^m u_i A_i = C,\quad S \succeq 0. \eqno(2)$$
We assume that an interior (i.e., positive definite) feasible point of the primal problem (1) exists, and that the matrices $A_i,\ i = 1, \ldots, m$, are linearly independent. There are several equivalent characterizations of the affine scaling direction for LP, and similarly we can define the affine scaling direction for SDP in different ways. Here we define the affine scaling direction for SDP by first defining the associated dual estimate. For a detailed motivation of the affine scaling algorithm for LP, we recommend the textbook by Saigal [27], which deals with this algorithm extensively.
Given an interior feasible solution $X \succ 0$, we define the dual estimate $(u(X), S(X))$ as the unique solution of the following optimization problem:
$$\min\ \|X^{1/2} S X^{1/2}\|_F^2 \quad \text{subject to} \quad S + \sum_i u_i A_i = C. \eqno(3)$$
Using the Karush-Kuhn-Tucker conditions, we solve the above problem and get the explicit formula
$$u(X) = G(X)^{-1} p(X), \qquad S(X) = C - \sum_i u_i(X) A_i, \eqno(4)$$
where $G(X) \in S(m)$ and $p(X) \in \mathbf{R}^m$ are such that $G_{ij}(X) = \mathrm{tr}(A_i X A_j X)$ and $p_j(X) = \mathrm{tr}(A_j X C X)$ for all $i, j = 1, \ldots, m$. Here, the linear independence of the $A_j$'s ensures that $G(X)$ is invertible. In fact, we have the following proposition.

Proposition 1 If $X \succ 0$, then $G(X) \succ 0$.

Proof: First, we note that for any matrices $M_1$ and $M_2$ of appropriate sizes, $\mathrm{tr}(M_1 M_2) = \mathrm{tr}(M_2 M_1)$ holds. By using this property and the linearity of the trace, we have for any $v \in \mathbf{R}^m$
$$v^t G v = \sum_{i,j} G_{ij} v_i v_j = \sum_{i,j} \mathrm{tr}(A_i X A_j X) v_i v_j = \mathrm{tr}\Big(\big(\textstyle\sum_i v_i A_i\big) X \big(\textstyle\sum_j v_j A_j\big) X\Big) = \mathrm{tr}\Big(X^{1/2} \big(\textstyle\sum_i v_i A_i\big) X \big(\textstyle\sum_j v_j A_j\big) X^{1/2}\Big) = \Big\|X^{1/2} \big(\textstyle\sum_i v_i A_i\big) X^{1/2}\Big\|_F^2 \ge 0, \eqno(5)$$
in which the equality holds if and only if $\sum_i v_i A_i = 0$. The linear independence of the $A_i$'s then implies $v = 0$. □

In (3), any decomposition $X = M M^t$ can be used instead of $X^{1/2}$, because $\|M^t S M\|_F^2 = \mathrm{tr}(SXSX) = \|X^{1/2} S X^{1/2}\|_F^2$. Namely, the dual estimate is independent of the choice of decomposition. We use $X^{1/2}$, however, for notational simplicity. The affine scaling direction $D(X)$ is defined as
$$D(X) := X S(X) X = X \Big(C - \sum_i u_i(X) A_i\Big) X. \eqno(6)$$
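The quantities in (4) and (6) are straightforward to compute numerically. The following sketch, added here purely for illustration (the function name and the use of NumPy are our own choices, not part of the original development), builds $G(X)$, $p(X)$, the dual estimate, and $D(X)$.

```python
import numpy as np

def dual_estimate_and_direction(C, A, X):
    """Dual estimate (u(X), S(X)) by (4) and direction D(X) by (6).

    C, X: symmetric n x n arrays; A: list of m symmetric n x n arrays.
    """
    m = len(A)
    # G_ij(X) = tr(A_i X A_j X),  p_j(X) = tr(A_j X C X)
    G = np.array([[np.trace(A[i] @ X @ A[j] @ X) for j in range(m)]
                  for i in range(m)])
    p = np.array([np.trace(A[j] @ X @ C @ X) for j in range(m)])
    u = np.linalg.solve(G, p)                  # u(X) = G(X)^{-1} p(X)
    S = C - sum(u[i] * A[i] for i in range(m))
    return u, S, X @ S @ X                     # D(X) = X S(X) X
```

By Proposition 1, $G(X)$ is positive definite whenever $X$ is, so the linear solve above is well posed.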
The following proposition gives some properties of $D(X)$.

Proposition 2 We have, for all $j = 1, \ldots, m$,
$$A_j \bullet D(X) = 0 \eqno(7)$$
and
$$C \bullet D(X) = \|X^{1/2} S X^{1/2}\|_F^2. \eqno(8)$$
Proof: The first equation can easily be seen from (4) as
$$A_j \bullet D(X) = \mathrm{tr}\Big(A_j X \big(C - \sum_{i=1}^m u_i(X) A_i\big) X\Big) = p_j(X) - \sum_{i=1}^m G_{ji}(X) u_i(X) = 0. \eqno(9)$$
Hence,
$$\mathrm{tr}\Big(X^{1/2} \big(C - \sum_{i=1}^m u_i(X) A_i\big) X A_j X^{1/2}\Big) = 0 \eqno(10)$$
holds for all $j$, and we have
$$C \bullet D(X) = \mathrm{tr}\Big(C X \big(C - \sum_i u_i A_i\big) X\Big) = \mathrm{tr}\Big(X^{1/2} \big(C - \sum_i u_i A_i\big) X C X^{1/2}\Big) = \mathrm{tr}\Big(X^{1/2} \big(C - \sum_i u_i A_i\big) X \big(C - \sum_j u_j A_j\big) X^{1/2}\Big) = \Big\|X^{1/2} \big(C - \sum_i u_i A_i\big) X^{1/2}\Big\|_F^2, \eqno(11)$$
where the third equality uses (10). □
For the affine scaling algorithm, there are two major strategies to define the step size: the short step strategy and the long step strategy. In the short step strategy, the step size is determined with respect to Dikin's ellipsoid. Specifically, the iteration of the short step affine scaling algorithm can be written as
$$X_{k+1} = X_k - \lambda \frac{D(X_k)}{\sqrt{C \bullet D(X_k)}} = X_k - \lambda \frac{D(X_k)}{\|X_k^{1/2} S_k X_k^{1/2}\|_F}, \eqno(12)$$
where $\lambda \le 1$ is a step size parameter. It is easily seen that the condition $\lambda \le 1$ ensures that the next point $X_{k+1}$ is feasible. The long step strategy, which is widely used in practice in the case of LP, means that the next iterate is chosen by moving a (fixed) ratio of the whole step to the boundary of the feasible region along the affine scaling direction, namely,
$$X_{k+1} = X_k - \lambda \bar{\alpha}(X_k) D(X_k), \eqno(13)$$
where
$$\bar{\alpha}(X) = \sup\{\, \alpha > 0 \mid X - \alpha D(X) \succeq 0 \,\} \eqno(14)$$
and $\lambda < 1$ is the ratio. In the following section, we give an instance of (1) for which both the long step and the short step affine scaling algorithms fail to converge to an optimal solution.
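Both step-size rules reduce to symmetric eigenvalue computations. The sketch below is an illustration under our own implementation choices (not prescribed by the paper): for the short step it uses $C \bullet D(X) = \|X^{1/2} S X^{1/2}\|_F^2 = \|X^{-1/2} D(X) X^{-1/2}\|_F^2$, and for $\bar{\alpha}(X)$ in (14) it uses the equivalence, valid for $X \succ 0$, that $X - \alpha D \succeq 0$ iff $\alpha\, \lambda_{\max}(X^{-1/2} D X^{-1/2}) \le 1$.

```python
import numpy as np

def sqrt_inv(X):
    """X^{-1/2} for symmetric positive definite X, via eigendecomposition."""
    w, V = np.linalg.eigh(X)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

def short_step(X, D, lam):
    """One short step (12): X - lam * D / ||X^{1/2} S X^{1/2}||_F, lam <= 1."""
    W = sqrt_inv(X) @ D @ sqrt_inv(X)      # equals X^{1/2} S X^{1/2}
    return X - lam * D / np.linalg.norm(W, 'fro')

def long_step(X, D, lam):
    """One long step (13): move a ratio lam of the way to the boundary."""
    W = sqrt_inv(X) @ D @ sqrt_inv(X)
    lam_max = np.linalg.eigvalsh(W).max()  # X - a*D >= 0 iff a <= 1/lam_max
    alpha_bar = 1.0 / lam_max              # assumes lam_max > 0 (finite sup)
    return X - lam * alpha_bar * D
```

With $\lambda = 1$ the long step lands exactly on the boundary of the feasible region, while the short step stays feasible because Dikin's ellipsoid is inscribed in the cone.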
3 An Example

We consider the following SDP problem,
$$\min\ \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \bullet X \quad \text{subject to} \quad \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \bullet X = 2,\quad X \succeq 0, \eqno(15)$$
and its dual
$$\max\ 2u \quad \text{subject to} \quad S + u \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},\quad S \succeq 0, \eqno(16)$$
where $X, S \in S(2)$ and $u \in \mathbf{R}$. The equality constraint of (15) implies that the off-diagonal elements of $X$ must be 1 if $X$ is feasible. Therefore, putting
$$X = \begin{pmatrix} x & 1 \\ 1 & y \end{pmatrix}, \eqno(17)$$
and noting that $X \succeq 0 \iff x \ge 0,\ y \ge 0,\ xy \ge 1$, we can transform the problem (15) into an equivalent form:
$$\min\ x + y \quad \text{subject to} \quad x \ge 0,\ y \ge 0,\ xy \ge 1. \eqno(18)$$
In fact, the relation (17) gives a one-to-one correspondence between feasible solutions of (15) and (18). From this relation, we can easily see that $X$ is an interior feasible solution if and only if $xy > 1$; thus $xy = 1$ implies that the corresponding $X$ is on the boundary of the feasible region of (15). Noting that the optimal solution of (18) is $(x, y) = (1, 1)$, we can easily verify that
$$X^* = \begin{pmatrix} 1 & 1 \\ 1 & 1 \end{pmatrix}, \quad \text{and} \quad (u^*, S^*) = \Big(1,\ \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}\Big) \eqno(19)$$
are the unique optimal solutions of the primal (15) and the dual (16), respectively, that the optimal values coincide, and that $X^*$ and $(u^*, S^*)$ satisfy strict complementarity; namely, we can decompose $X^*$ and $S^*$ as
$$X^* = Q \begin{pmatrix} 2 & 0 \\ 0 & 0 \end{pmatrix} Q^t, \quad \text{and} \quad S^* = Q \begin{pmatrix} 0 & 0 \\ 0 & 2 \end{pmatrix} Q^t, \eqno(20)$$
where
$$Q = \begin{pmatrix} 1/\sqrt{2} & -1/\sqrt{2} \\ 1/\sqrt{2} & 1/\sqrt{2} \end{pmatrix} \eqno(21)$$
is the orthogonal matrix whose columns are the eigenvectors of $X^*$ and $S^*$ corresponding to their nonzero eigenvalues.

We can calculate the dual estimate by (4) as follows:
$$G(X) = 2(xy + 1), \eqno(22)$$
$$p(X) = 2(x + y), \eqno(23)$$
$$u(X) = G(X)^{-1} p(X) = \frac{x + y}{xy + 1}, \eqno(24)$$
$$S(X) = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} - u(X) \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 1 & -u(X) \\ -u(X) & 1 \end{pmatrix}. \eqno(25)$$
By a straightforward calculation, we have the affine scaling direction
$$D(X) = \frac{xy - 1}{xy + 1} \begin{pmatrix} x^2 - 1 & 0 \\ 0 & y^2 - 1 \end{pmatrix}. \eqno(26)$$
Hence an iteration of the general affine scaling algorithm can be written as
$$\begin{pmatrix} x^{k+1} & 1 \\ 1 & y^{k+1} \end{pmatrix} = \begin{pmatrix} x^k & 1 \\ 1 & y^k \end{pmatrix} - \mu^k \begin{pmatrix} (x^k)^2 - 1 & 0 \\ 0 & (y^k)^2 - 1 \end{pmatrix} \eqno(27)$$
for some scalar $\mu^k > 0$. Thus we have
$$x^{k+1} = x^k - \mu^k ((x^k)^2 - 1), \eqno(28)$$
$$y^{k+1} = y^k - \mu^k ((y^k)^2 - 1), \eqno(29)$$
which is the expression of the affine scaling algorithm in the $(x, y)$-space. In other words, the sequence $\{X_k\}$ generated by the affine scaling algorithm for the SDP (15) corresponds to the sequence $\{(x^k, y^k)\}$ generated by (28) and (29) through the mapping (17).

Let us define $F := \{\, (x, y) \mid xy > 1,\ x < 1,\ y > 1 \,\}$. The initial point from which the affine scaling algorithm fails will be chosen in this region. In the remainder of this section, we gradually show that the sequence generated by the affine scaling algorithm cannot escape from this region, from which the convergence of the sequence (Corollary 5) follows.

Proposition 3 Assume that we do one iteration (28) and (29) of the affine scaling algorithm from $(x, y) \in F$ to produce the next iterate $(x^+, y^+)$ with $\mu^k = \mu$. Then we have
$$\frac{x^+ y^+ - 1}{xy - 1} = 1 - \mu (x + y) - \mu^2 R(x, y) < 1, \eqno(30)$$
where
$$R(x, y) := \frac{(1 - x^2)(y^2 - 1)}{xy - 1}. \eqno(31)$$
This proposition follows from (28) and (29) by a straightforward calculation, so we omit the proof. As was mentioned after (18), $(x^+, y^+)$ is on the boundary of the feasible region if and only if $x^+ y^+ = 1$, which means that the left-hand side of (30) is 0. Let us consider the following quadratic equation in $\mu$:
$$1 - \mu (x + y) - \mu^2 R(x, y) = 0. \eqno(32)$$
Since $R(x, y) > 0$ on $F$, the coefficient of $\mu^2$ and the constant term of this quadratic equation have opposite signs, from which it follows that one of the solutions of (32) is positive and the other negative. Let $\hat{\mu}(x, y)$ be the positive solution of (32), which is the distance to the boundary of the feasible region.

Proposition 4 Let $(x, y) \in F$ and let $(x^+, y^+)$ be the next iterate of the affine scaling algorithm with an arbitrary step size choice. Then we have $x^+ \ge x$, $y^+ \le y$, and $(x^+, y^+) \in F$.

Proof: Let
$$x(\mu) = x - \mu (x^2 - 1), \eqno(33)$$
$$y(\mu) = y - \mu (y^2 - 1). \eqno(34)$$
Since $x(\mu)$ is increasing and $y(\mu)$ is decreasing in $\mu$, the former relations are obvious. We prove the last assertion by showing $x(\hat{\mu}(x, y)) < 1$. We have from (32) that
$$1 - (x + y)\hat{\mu}(x, y) = R(x, y)\hat{\mu}(x, y)^2 > 0, \eqno(35)$$
from which
$$\hat{\mu}(x, y) < \frac{1}{x + y} \eqno(36)$$
follows. Noting that $x - 1 < 0$ and $y - 1 > 0$, we have
$$x(\hat{\mu}(x, y)) = 1 + (x - 1) - \hat{\mu}(x, y)(x^2 - 1) < 1 + (x - 1)\Big(1 - \frac{x + 1}{x + y}\Big) = 1 + \frac{(x - 1)(y - 1)}{x + y} < 1, \eqno(37)$$
which completes the proof. □
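The identity (30) is a finite computation and can be checked directly. The following snippet, added here for illustration with an arbitrarily chosen point of $F$ and step $\mu$ (our own sample values), evaluates both sides.

```python
def R(x, y):
    # R(x, y) as defined in (31)
    return (1 - x**2) * (y**2 - 1) / (x*y - 1)

x, y, mu = 0.5, 3.0, 0.1          # (x, y) in F, arbitrary step mu > 0
xp = x - mu * (x**2 - 1)          # iteration (28)
yp = y - mu * (y**2 - 1)          # iteration (29)
lhs = (xp * yp - 1) / (x*y - 1)
rhs = 1 - mu * (x + y) - mu**2 * R(x, y)
print(lhs, rhs)                   # both sides of (30); they agree up to rounding
```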
Corollary 5 If the initial point $(x^0, y^0)$ is contained in $F$, then the sequence generated by the affine scaling algorithm converges, regardless of the step size.

Proof: By Proposition 4, $\{x^k\}$ is monotonically increasing and bounded above by 1, and thus has a limit. Similarly, $\{y^k\}$ is monotonically decreasing and bounded below by 1, and thus also has a limit. Therefore $\{(x^k, y^k)\}$ converges. □
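The behavior described by Propositions 3-4 and Corollary 5 can be observed by iterating (28) and (29). The sketch below uses the step $\mu^k = \lambda \hat{\mu}(x^k, y^k)$ with a constant ratio $\lambda$ (one admissible step size choice; the closed form for $\hat{\mu}$ as the positive root of (32) is our own algebra).

```python
import numpy as np

def mu_hat(x, y):
    """Positive root of the quadratic (32)."""
    R = (1 - x**2) * (y**2 - 1) / (x*y - 1)
    return (-(x + y) + np.sqrt((x + y)**2 + 4*R)) / (2*R)

x, y = 0.5, 3.0        # initial point in F: xy > 1, x < 1, y > 1
lam = 0.9              # constant damping ratio for the full step to the boundary
trace = [(x, y)]
for _ in range(200):
    if x*y - 1 < 1e-12:            # numerically on the boundary; stop
        break
    mu = lam * mu_hat(x, y)
    x, y = x - mu*(x**2 - 1), y - mu*(y**2 - 1)
    trace.append((x, y))
```

Along the run, $x^k$ increases, $y^k$ decreases, and the iterates remain in $F$ until $x^k y^k - 1$ reaches rounding level, in line with Proposition 4.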
4 The Long Step Affine Scaling Algorithm

We consider the long step affine scaling algorithm in this section. Since
$$\hat{\mu}(x, y) \begin{pmatrix} x^2 - 1 & 0 \\ 0 & y^2 - 1 \end{pmatrix} \eqno(38)$$
is the whole step to the boundary of the feasible region in the $(x, y)$-space, the iteration of the long step affine scaling algorithm can be written as follows:
$$x^{k+1} = x^k - \lambda_k \hat{\mu}(x^k, y^k)((x^k)^2 - 1), \eqno(39)$$
$$y^{k+1} = y^k - \lambda_k \hat{\mu}(x^k, y^k)((y^k)^2 - 1), \eqno(40)$$
where $\lambda_k < 1$ is the step size parameter. Note that $\lambda_k$ may vary between iterations under this definition; obviously, this is a generalization of the long step strategy mentioned in the previous section. We use this definition of the long step affine scaling algorithm in what follows, which enables us to handle the short step case easily.
Proposition 6 If $(x, y) \in F$ and $(x^+, y^+)$ is the next iterate of the long step affine scaling algorithm with step size parameter $\lambda$, the following relations hold:
$$\frac{x^+ y^+ - 1}{xy - 1} < 1 - \lambda^2, \eqno(41)$$
$$\hat{\mu}(x, y) < \frac{1}{\sqrt{R(x, y)}}. \eqno(42)$$
Proof: In view of (30) with $\mu = \lambda \hat{\mu}(x, y)$, and (32), we have
$$\frac{x^+ y^+ - 1}{xy - 1} = 1 - \lambda \hat{\mu}(x, y)(x + y) - \lambda^2 \hat{\mu}(x, y)^2 R(x, y) = 1 - \lambda \hat{\mu}(x, y)(x + y) - \lambda^2 \big(1 - (x + y)\hat{\mu}(x, y)\big) = 1 - \lambda^2 - (x + y)\hat{\mu}(x, y)\lambda(1 - \lambda) < 1 - \lambda^2, \eqno(43)$$
which is the former inequality. It follows from (32) that
$$\hat{\mu}(x, y)^2 = \frac{1 - (x + y)\hat{\mu}(x, y)}{R(x, y)} < \frac{1}{R(x, y)}, \eqno(44)$$
and thus we have
$$\hat{\mu}(x, y) < \frac{1}{\sqrt{R(x, y)}}, \eqno(45)$$
which completes the proof. □
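The two inequalities of Proposition 6 are easy to check at a sample point; the snippet below (an illustration with our own sample values $\lambda = 0.7$ and $(x, y) = (0.5, 3)$) evaluates them.

```python
import numpy as np

def R(x, y):
    # R(x, y) as defined in (31)
    return (1 - x**2) * (y**2 - 1) / (x*y - 1)

def mu_hat(x, y):
    """Positive root of the quadratic (32)."""
    r = R(x, y)
    return (-(x + y) + np.sqrt((x + y)**2 + 4*r)) / (2*r)

x, y, lam = 0.5, 3.0, 0.7
mh = mu_hat(x, y)
xp = x - lam * mh * (x**2 - 1)        # long step (39)
yp = y - lam * mh * (y**2 - 1)        # long step (40)
ratio = (xp * yp - 1) / (x*y - 1)
print(ratio < 1 - lam**2, mh < 1 / np.sqrt(R(x, y)))   # prints: True True
```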
Theorem 7 Assume that we use the long step affine scaling algorithm for (15) with step sizes $\lambda_k$ satisfying $1 > \lambda_k \ge \lambda > 0$ for all $k$. For any positive number $\epsilon < 1$, let us define
$$F_\epsilon := \{\, (x, y) \in F \mid x < 1 - \epsilon \,\}. \eqno(46)$$
If the initial point $(x^0, y^0) \in F_\epsilon$ satisfies
$$\sqrt{x^0 y^0 - 1} < \frac{\lambda^2 \epsilon (1 - \epsilon - x^0)}{2}, \eqno(47)$$
then the limit point of $\{(x^k, y^k)\}$ is contained in the closure of $F_\epsilon$.
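Theorem 7 can be observed numerically. The sketch below is our illustration with a constant step size $\lambda_k = \lambda = 0.9$ and $\epsilon = 0.5$; the initial point $(0.2, 5.01)$ satisfies (47), and the iterates stay in $F_\epsilon$, so the limit is non-optimal.

```python
import numpy as np

def mu_hat(x, y):
    """Positive root of the quadratic (32)."""
    R = (1 - x**2) * (y**2 - 1) / (x*y - 1)
    return (-(x + y) + np.sqrt((x + y)**2 + 4*R)) / (2*R)

lam, eps = 0.9, 0.5
x, y = 0.2, 5.01
# (47) holds: sqrt(x*y - 1) ~ 0.0447 < lam**2 * eps * (1 - eps - x) / 2 ~ 0.0608
for _ in range(100):
    if x*y - 1 < 1e-12:               # numerically on the boundary; stop
        break
    mu = lam * mu_hat(x, y)           # constant step size lam_k = lam
    x, y = x - mu*(x**2 - 1), y - mu*(y**2 - 1)
```

The run halts with $x$ well below $1 - \epsilon$, far from the optimal solution $(1, 1)$.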
Since the closure of $F_\epsilon$ does not contain the optimal solution, this theorem implies convergence to a non-optimal point. The inequality (47) can be satisfied for any $\lambda$ and $\epsilon$. To see this, given $\lambda$ and $\epsilon$, fix $x^0 < 1 - \epsilon$, and let $y^0 \downarrow 1/x^0$. The right-hand side of (47) is then a positive constant, while the left-hand side converges to 0 from above; thus there exists some $y^0$ satisfying (47).

Proof of Theorem 7: We will prove the theorem by showing that if $\{(x^k, y^k) \mid k = 0, \ldots, L\} \subseteq F_\epsilon$, then $(x^{L+1}, y^{L+1}) \in F_\epsilon$. Then the theorem follows by induction. First, since $x^k < 1 - \epsilon$ and $y^k \ge 1/x^k \ge 1/(1 - \epsilon) \ge 1 + \epsilon$, we have
$$(1 - (x^k)^2)((y^k)^2 - 1) \ge (1 - x^k)(y^k - 1) \ge \epsilon^2. \eqno(48)$$
We can bound the sum of the $\hat{\mu}(x^k, y^k)$ as follows:
$$\sum_{k=0}^L \hat{\mu}(x^k, y^k) < \sum_{k=0}^L \frac{1}{\sqrt{R(x^k, y^k)}} \qquad \text{(use (42))}$$
$$= \sum_{k=0}^L \sqrt{\frac{x^k y^k - 1}{(1 - (x^k)^2)((y^k)^2 - 1)}}$$
$$< \frac{\sqrt{x^0 y^0 - 1}}{\epsilon} \sum_{k=0}^L \sqrt{(1 - \lambda^2)^k} \qquad \text{(use (41) and (48))}$$
$$< \frac{\sqrt{x^0 y^0 - 1}}{\epsilon} \sum_{k=0}^L (1 - \lambda^2/2)^k$$
$$< \frac{\sqrt{x^0 y^0 - 1}}{\epsilon} \sum_{k=0}^\infty (1 - \lambda^2/2)^k = \frac{2}{\lambda^2 \epsilon} \sqrt{x^0 y^0 - 1}. \eqno(49)$$
Now we have
$$x^{L+1} - x^0 = \sum_{k=0}^L (x^{k+1} - x^k) = \sum_{k=0}^L \lambda_k \hat{\mu}(x^k, y^k)(1 - (x^k)^2) \le \sum_{k=0}^L \hat{\mu}(x^k, y^k) < \frac{2}{\lambda^2 \epsilon} \sqrt{x^0 y^0 - 1} < 1 - \epsilon - x^0, \eqno(50)$$
where the last inequality follows from (47). Hence $x^{L+1} < 1 - \epsilon$, and by Proposition 4 we have $(x^{L+1}, y^{L+1}) \in F$, so that $(x^{L+1}, y^{L+1}) \in F_\epsilon$, which completes the proof. □
$$u(x^\infty, y^\infty) = \frac{x^\infty + y^\infty}{x^\infty y^\infty + 1} = \frac{x^\infty + y^\infty}{2} > 1, \eqno(63)$$
where the last inequality follows from $x^\infty < 1$. Hence we have $u(x^\infty, y^\infty) > 1$, which implies that the limit of the dual estimates is infeasible. Note that the dual estimate is continuous everywhere on the feasible region. In LP, a non-degeneracy assumption implies this condition, which yields a much simpler proof of
the global convergence of the affine scaling algorithm. Our example, however, shows that this condition is not effective in proving the global convergence of the affine scaling algorithm for SDP.

Some examples are known in which interior point methods fail to converge to an optimal solution. For LP, Mascarenhas [16] shows that the affine scaling algorithm fails if the step size is chosen as 0.999 of the way to the boundary in each iteration. His example is based on the zigzagging phenomenon (see Terlaky and Tsuchiya [29]) of the affine scaling algorithm, and as a result, it is difficult to observe the failure in numerical experiments. In contrast, our example is not based on the zigzagging phenomenon, and thus is not an extension of Mascarenhas' example to SDP. In fact, the failure can be observed numerically by choosing an initial point satisfying (47) of Theorem 7 in the long step strategy or (57) of Theorem 9 in the short step strategy. For example, in the short step affine scaling algorithm with step size $\lambda = 0.5$, the sequence generated from $(x^0, y^0) = (0.20001, 5.0)$ converges to approximately $(0.2078065619, 4.8121675791)$, a non-optimal point.

For semi-infinite programming, Powell [24] and Vanderbei [32] show that Karmarkar's projective scaling algorithm and the affine scaling algorithm, respectively, fail to converge to an optimal solution. It is well known that SDP can be regarded as a kind of semi-infinite programming. However, the connection between interior point methods for semi-infinite programming and SDP has not been established yet. Even though our example shows that global convergence from an arbitrary starting point is impossible, it may still be possible to prove global convergence from well-chosen starting points, or by allowing variable step sizes.
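The short step experiment quoted above can be reconstructed from (12) and (26): writing $c = (xy - 1)/(xy + 1)$, we have $C \bullet D(X) = c\,(x^2 + y^2 - 2)$ for this example, so the short step in the $(x, y)$-space scales $(x^2 - 1, y^2 - 1)$ by $\lambda c / \sqrt{c(x^2 + y^2 - 2)}$. The following sketch is our reconstruction of that experiment, not the author's original code.

```python
import numpy as np

lam = 0.5
x, y = 0.20001, 5.0
for _ in range(5000):
    c = (x*y - 1) / (x*y + 1)
    if c <= 0:                     # numerically on (or past) the boundary
        break
    # short step (12) in the (x, y)-space: C . D(X) = c * (x^2 + y^2 - 2)
    mu = lam * np.sqrt(c / (x**2 + y**2 - 2))
    x, y = x - mu*(x**2 - 1), y - mu*(y**2 - 1)
print(x, y)   # approaches roughly (0.2078, 4.8122), a non-optimal point
```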
Acknowledgements

The author thanks the anonymous referees and the associate editor for their valuable comments on an earlier version of this paper. The author also wishes to thank Prof. Takashi Tsuchiya, Prof. Renato Monteiro, and Prof. Shinji Mizuno of the Institute of Statistical Mathematics for many helpful conversations, and for pointing out several improvements to the first draft of this paper. This work was partially supported by the Grant-in-Aid for Encouragement of Young Scientists and the Grant-in-Aid for Developmental Scientific Research (B) of the Ministry of Education, Science, Sports and Culture of Japan.
References

[1] F. Alizadeh, "Interior point methods in semidefinite programming with application to combinatorial optimization", SIAM Journal on Optimization 5(1995)13-51.
[2] F. Alizadeh, J. P. A. Haeberly, and M. L. Overton, "Primal-dual interior-point methods for semidefinite programming", Technical report (1994).
[3] F. Alizadeh, J. P. A. Haeberly, and M. L. Overton, "Complementarity and nondegeneracy in semidefinite programming", Technical report (1995).
[4] E. R. Barnes, "A variation on Karmarkar's algorithm for solving linear programming problems", Mathematical Programming 36(1986)174-182.
[5] I. I. Dikin, "Iterative solution of problems of linear and quadratic programming", Doklady Akademii Nauk SSSR 174(1967)747-748. (Translated in: Soviet Mathematics Doklady 8(1967)674-675.)
[6] L. Faybusovich, "On a matrix generalization of affine-scaling vector fields", SIAM Journal on Matrix Analysis and Applications 16, No. 3(1995)886-897.
[7] L. Faybusovich, "Dikin's algorithm for matrix linear programming problems", in: J. Henry and J. Pavon, eds., Lecture Notes in Control and Information Sciences 197 (Springer-Verlag, 1994) pp. 237-247.
[8] D. Goldfarb and K. Scheinberg, "Interior point trajectories in semidefinite programming", Technical report, Dept. of IE/OR, Columbia University, New York, USA (March 1996, revised November 1996).
[9] C. Helmberg, F. Rendl, R. J. Vanderbei, and H. Wolkowicz, "An interior-point method for semidefinite programming", SIAM Journal on Optimization 6(1996)342-361.
[10] F. Jarre, "An interior-point method for minimizing the maximum eigenvalue of a linear combination of matrices", SIAM Journal on Control and Optimization 31(1993)1360-1377.
[11] N. Karmarkar, "A new polynomial-time algorithm for linear programming", Combinatorica 4, No. 4(1984)373-395.
[12] M. Kojima, S. Shindoh and S. Hara, "Interior point methods for the monotone semidefinite linear complementarity problems", Research Report #282, Dept. of Mathematical and Computing Sciences, Tokyo Institute of Technology, Oh-Okayama, Meguro, Tokyo 152, Japan (1994, revised 1995).
[13] M. Kojima, M. Shida and S. Shindoh, "Local convergence of predictor-corrector infeasible-interior-point algorithms for SDPs and SDLCPs", Research Report #B-306, Dept. of Mathematical and Computing Sciences, Tokyo Institute of Technology, 2-12-1 Oh-Okayama, Meguro-ku, Tokyo 152, Japan (1995).
[14] C-J. Lin and R. Saigal, "A predictor-corrector method for semi-definite programming", Working paper, Dept. of Industrial and Operations Engineering, The University of Michigan, Ann Arbor, Michigan 48109-2177 (1995).
[15] Z-Q. Luo, J. F. Sturm and S. Zhang, "Superlinear convergence of a symmetric primal-dual path-following algorithm for semidefinite programming", Report 9607/A, Econometric Institute, Erasmus University, Rotterdam, The Netherlands (1996).
[16] W. F. Mascarenhas, "The affine scaling algorithm fails for λ = 0.999", Technical report, Universidade Estadual de Campinas, Campinas S. P., Brazil (1993).
[17] R. D. C. Monteiro, "Primal-dual path-following algorithms for semidefinite programming", Working paper (1995). (To appear in SIAM Journal on Optimization.)
[18] R. D. C. Monteiro, T. Tsuchiya and Y. Wang, "A simplified global convergence proof of the affine scaling algorithm", Annals of Operations Research 47(1993)443-482.
[19] R. D. C. Monteiro and Y. Zhang, "A unified analysis for a class of path-following primal-dual interior-point algorithms for semidefinite programming", Working paper (1996).
[20] Y. E. Nesterov and A. S. Nemirovskii, "Optimization over positive semidefinite matrices: Mathematical background and user's manual", Technical report, Central Economic & Mathematical Institute, USSR Academy of Science, Moscow, USSR (1990).
[21] Yu. E. Nesterov and A. Nemirovskii, Interior-Point Polynomial Algorithms in Convex Programming (SIAM, Philadelphia, PA, USA, 1994).
[22] Y. E. Nesterov and M. J. Todd, "Self-scaled barriers and interior-point methods for convex programming", Technical Report 1091, School of Operations Research and Industrial Engineering, Cornell University, Ithaca, New York, USA (1995).
[23] Y. E. Nesterov and M. J. Todd, "Primal-dual interior-point methods for self-scaled cones", Technical Report 1125, School of Operations Research and Industrial Engineering, Cornell University, Ithaca, New York, USA (1995).
[24] M. J. D. Powell, "Karmarkar's algorithm: A view from nonlinear programming", Bulletin of the Institute of Mathematics and Its Applications 26(8-9)(1990)165-181.
[25] F. A. Potra and R. Sheng, "A superlinearly convergent primal-dual infeasible-interior-point algorithm for semidefinite programming", Reports on Computational Mathematics No. 78, Department of Mathematics, The University of Iowa, Iowa, USA (1995).
[26] R. Saigal, "A simple proof of a primal affine scaling method", Annals of Operations Research 62(1996)303-324.
[27] R. Saigal, Linear Programming: A Modern Integrated Analysis (Kluwer Academic Publishers, USA, 1995).
[28] J. F. Sturm and S. Zhang, "Symmetric primal-dual path-following algorithms for semidefinite programming", Report 9554/A, Econometric Institute, Erasmus University Rotterdam, The Netherlands (1995).
[29] T. Terlaky and T. Tsuchiya, "A note on Mascarenhas' counter example about global convergence of the affine scaling algorithm", Research Memorandum 596, The Institute of Statistical Mathematics, Tokyo, Japan (1996).
[30] T. Tsuchiya and M. Muramatsu, "Global convergence of a long-step affine scaling algorithm for degenerate linear programming problems", SIAM Journal on Optimization 5, No. 3(1995)525-551.
[31] L. Vandenberghe and S. Boyd, "A primal-dual potential reduction method for problems involving matrix inequalities", Mathematical Programming 69(1995)205-236.
[32] R. J. Vanderbei, "Affine scaling trajectories associated with a semi-infinite linear program", Mathematics of Operations Research 20, No. 1(1995)163-174.
[33] R. J. Vanderbei, M. S. Meketon and B. A. Freedman, "A modification of Karmarkar's linear programming algorithm", Algorithmica 1(1986)395-407.
[34] Y. Zhang, "On extending primal-dual interior-point algorithms from linear programming to semidefinite programming", Technical Report TR95-20, Dept. of Math/Stat, University of Maryland, Baltimore County, Baltimore, Maryland, USA (1995).