REPORTS ON COMPUTATIONAL MATHEMATICS, NO. 97/1996, DEPARTMENT OF MATHEMATICS, THE UNIVERSITY OF IOWA

Solving Semidefinite Programs in Mathematica

Nathan Brixius, Florian A. Potra, and Rongqin Sheng

October 1996

Abstract

Interior-point algorithms for solving semidefinite programs are described and implemented in Mathematica. Included are Mizuno-Todd-Ye type predictor-corrector algorithms and Mehrotra type predictor-corrector algorithms. Three different search directions are used: the AHO direction, the KSH/HRVW/M direction, and the NT direction. Homogeneous algorithms using the Potra-Sheng formulation are tested. A simple procedure is derived for the computation of the homogeneous search directions. Numerical results show that the homogeneous algorithms are generally superior to their non-homogeneous counterparts in terms of the number of iterations.

Department of Computer Science, The University of Iowa, Iowa City, IA 52242, USA. Department of Mathematics, The University of Iowa, Iowa City, IA 52242, USA. The work of the last two authors was supported in part by NSF Grant DMS 9305760.


1 Introduction

In this paper we consider the semidefinite programming (SDP) problem:

\[ \min\{\, C \bullet X \;:\; A_i \bullet X = b_i,\ i = 1,\dots,m,\ X \succeq 0 \,\}, \tag{1.1} \]

and its associated dual problem:

\[ \max\Big\{\, b^T y \;:\; \sum_{i=1}^m y_i A_i + S = C,\ S \succeq 0 \,\Big\}, \tag{1.2} \]

where $C \in \mathbb{R}^{n\times n}$, $A_i \in \mathbb{R}^{n\times n}$, $i = 1,\dots,m$, and $b = (b_1,\dots,b_m)^T \in \mathbb{R}^m$ are given data, and $X \in S^n_+$, $(y, S) \in \mathbb{R}^m \times S^n_+$ are the primal and dual variables, respectively. Here $S^n_+$ denotes the set of all $n \times n$ symmetric positive semidefinite matrices, and $X \succeq 0$ means that $X \in S^n_+$. By $G \bullet H$ we denote $\mathrm{trace}(G^T H)$. Without loss of generality, we assume that the matrices $C$ and $A_i$, $i = 1,\dots,m$, are symmetric. Also, for simplicity, we assume that the $A_i$, $i = 1,\dots,m$, are linearly independent. Throughout this paper we assume that both (1.1) and (1.2) have finite solutions and that their optimal values are equal. Under this assumption, $X^*$ and $(y^*, S^*)$ are solutions of (1.1) and (1.2) if and only if they are solutions of the following nonlinear system:

\[ A_i \bullet X = b_i,\quad i = 1,\dots,m, \tag{1.3a} \]
\[ \sum_{i=1}^m y_i A_i + S = C, \tag{1.3b} \]
\[ XS = 0,\quad X \succeq 0,\ S \succeq 0. \tag{1.3c} \]
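As a quick illustration (we use Python/NumPy sketches throughout this text, although the package described in this paper is written in Mathematica), the inner product $G \bullet H = \mathrm{trace}(G^T H)$ reduces to an elementwise sum:

```python
import numpy as np

def bullet(G, H):
    """G . H = trace(G^T H); for real matrices this equals sum_ij G_ij * H_ij."""
    return float(np.sum(G * H))
```

For example, checking primal feasibility in (1.3a) amounts to testing `bullet(A_i, X) == b_i` for each constraint.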

The residuals of (1.3a) and (1.3b) are denoted by

\[ r_i = b_i - A_i \bullet X,\quad i = 1,\dots,m, \tag{1.4a} \]
\[ R_d = C - \sum_{i=1}^m y_i A_i - S. \tag{1.4b} \]

For any given $\varepsilon > 0$ we define the set of $\varepsilon$-approximate solutions of (1.3) as

\[ \mathcal{F}_\varepsilon = \{\, Z = (X, y, S) \in S^n_+ \times \mathbb{R}^m \times S^n_+ \;:\; X \bullet S \le \varepsilon,\ |r_i| \le \varepsilon,\ i = 1,\dots,m,\ \|R_d\| \le \varepsilon \,\}. \]

We consider the symmetrization operator

\[ H_P(M) = \tfrac12 \left[ P M P^{-1} + (P M P^{-1})^T \right], \qquad \forall M \in \mathbb{R}^{n \times n}, \]


introduced by Zhang [22]. Since, as observed by Zhang [22],

\[ H_P(M) = \tau I \iff M = \tau I \]

for any nonsingular matrix $P$, any matrix $M$ with real spectrum, and any $\tau \in \mathbb{R}$, it follows that for any given nonsingular matrix $P$, (1.3) is equivalent to

\[ A_i \bullet X = b_i,\quad i = 1,\dots,m, \tag{1.5a} \]
\[ \sum_{i=1}^m y_i A_i + S = C, \tag{1.5b} \]
\[ H_P(XS) = 0,\quad X \succeq 0,\ S \succeq 0. \tag{1.5c} \]
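The symmetrization operator $H_P$ introduced above is straightforward to sketch; the following is an illustrative NumPy version (not the paper's Mathematica code):

```python
import numpy as np

def H(P, M):
    """Zhang's symmetrization: H_P(M) = (P M P^{-1} + (P M P^{-1})^T) / 2."""
    PMPinv = P @ M @ np.linalg.inv(P)
    return (PMPinv + PMPinv.T) / 2
```

For $P = I$ this reduces to the ordinary symmetric part $(M + M^T)/2$, which is the choice underlying the AHO direction.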

A perturbed Newton method applied to the system (1.5) leads to the following linear system:

\[ H_P(\Delta X\, S + X\, \Delta S) = \sigma\mu I - H_P(XS), \tag{1.6a} \]
\[ A_i \bullet \Delta X = r_i,\quad i = 1,\dots,m, \tag{1.6b} \]
\[ \sum_{i=1}^m \Delta y_i A_i + \Delta S = R_d, \tag{1.6c} \]

where $(\Delta X, \Delta y, \Delta S) \in S^n \times \mathbb{R}^m \times S^n$, $\sigma \in [0,1]$ is the centering parameter, and $\mu = (X \bullet S)/n$ is the normalized duality gap corresponding to $(X, y, S)$. Here $S^n$ denotes the set of all $n \times n$ symmetric matrices. The search direction obtained through (1.6) is called the Monteiro-Zhang (MZ) unified direction [22, 14]. It is well known that $P = I$ results in the Alizadeh-Haeberly-Overton (AHO) search direction [1], $P = X^{-1/2}$ or $P = S^{1/2}$ corresponds to the Kojima-Shindoh-Hara/Helmberg-Rendl-Vanderbei-Wolkowicz/Monteiro (KSH/HRVW/M) search direction [8, 3, 12], and the case $P^T P = X^{-1/2} [X^{1/2} S X^{1/2}]^{1/2} X^{-1/2}$ coincides with the Nesterov-Todd (NT) search direction [15].

A number of interior-point algorithms using the above-mentioned directions have been analysed or implemented. Among them are the Mizuno-Todd-Ye [11] type predictor-corrector algorithms (cf. [6, 15, 17, 13]) and the Mehrotra [10] type predictor-corrector algorithms (cf. [2, 21]). The present paper implements the Mizuno-Todd-Ye type and Mehrotra type algorithms in the Mathematica environment. There are twelve algorithms in version 1.0 of our Mathematica package. Six of them are infeasible-interior-point algorithms requiring sufficiently large starting points, while the other six use the Potra-Sheng homogeneous formulation [16], which does not need a large starting point; instead, we use a fixed starting point. The homogeneous versions can also detect infeasibility efficiently by monitoring the parameter $\tau$. Numerical results show that the homogeneous algorithms generally perform better than their non-homogeneous counterparts.


2 Infeasible predictor-corrector algorithms

2.1 Computation of search directions

As described in many other papers (cf. [2, 21, 22]), the linear system (1.6) for the Monteiro-Zhang unified search direction can be written in the following matrix form:

\[ \begin{pmatrix} 0 & A & 0 \\ A^T & 0 & I \\ 0 & E & F \end{pmatrix} \begin{pmatrix} \Delta y \\ \mathrm{vec}(\Delta X) \\ \mathrm{vec}(\Delta S) \end{pmatrix} = \begin{pmatrix} r \\ \mathrm{vec}(R_d) \\ \mathrm{vec}(R_c) \end{pmatrix}, \tag{2.7} \]

where $A^T = [\mathrm{vec}(A_1), \mathrm{vec}(A_2), \dots, \mathrm{vec}(A_m)]$, $r^T = [r_1, r_2, \dots, r_m]$, and $E, F \in \mathbb{R}^{n^2 \times n^2}$, $R_c \in \mathbb{R}^{n \times n}$ are such that (1.6a) has the equivalent vectorized form

\[ E\, \mathrm{vec}(\Delta X) + F\, \mathrm{vec}(\Delta S) = \mathrm{vec}(R_c). \]

For any $n \times n$ matrix $M$, $\mathrm{vec}(M)$ denotes the operation of stacking the columns of $M$ on top of one another, so that $\mathrm{vec}(M) = (m_{11}, m_{21}, \dots, m_{n1}, \dots, m_{nn})^T$. The linear system (1.6) can be solved by the following procedure:

Procedure 1

- Compute $\Delta y$ by solving the linear system

\[ [A E^{-1} F A^T]\, \Delta y = r + A E^{-1} [F\, \mathrm{vec}(R_d) - \mathrm{vec}(R_c)]. \]

- Compute $\Delta S$ and $\Delta X$ as follows:

\[ \Delta S = R_d - \sum_{i=1}^m \Delta y_i A_i, \qquad \mathrm{vec}(\Delta X) = E^{-1}(\mathrm{vec}(R_c) - F\, \mathrm{vec}(\Delta S)). \]
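Procedure 1 can be made concrete for the AHO direction ($P = I$), where the vectorized operators are $E = \tfrac12(S \otimes I + I \otimes S)$ and $F = \tfrac12(X \otimes I + I \otimes X)$. The following NumPy sketch is ours (the paper's implementation is in Mathematica) and forms $E^{-1}$ explicitly, which is only sensible for small $n$:

```python
import numpy as np

def aho_direction(X, S, A_list, r, Rd, sigma):
    """Procedure 1 for the AHO direction (P = I), where H(M) = (M + M^T)/2.
    vec stacks columns, i.e. Fortran ('F') order in NumPy."""
    n = X.shape[0]
    I = np.eye(n)
    mu = np.trace(X @ S) / n
    E = 0.5 * (np.kron(S, I) + np.kron(I, S))
    F = 0.5 * (np.kron(X, I) + np.kron(I, X))
    Rc = sigma * mu * I - 0.5 * (X @ S + S @ X)      # sigma*mu*I - H(XS)
    # Rows of Amat are vec(A_i)^T.
    Amat = np.array([Ai.flatten(order='F') for Ai in A_list])
    Einv = np.linalg.inv(E)
    # Step 1: Schur-complement system for dy.
    lhs = Amat @ Einv @ F @ Amat.T
    rhs = r + Amat @ Einv @ (F @ Rd.flatten(order='F') - Rc.flatten(order='F'))
    dy = np.linalg.solve(lhs, rhs)
    # Step 2: back-substitute for dS and dX.
    dS = Rd - sum(dy[i] * Ai for i, Ai in enumerate(A_list))
    dX_vec = Einv @ (Rc.flatten(order='F') - F @ dS.flatten(order='F'))
    return dX_vec.reshape(n, n, order='F'), dy, dS
```

The returned triple satisfies the Newton equations (1.6) up to roundoff, which is easy to verify numerically.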

2.2 Mizuno-Todd-Ye type algorithms

Mizuno-Todd-Ye type algorithms using the different search directions described above have been investigated in many papers (cf. [5, 9, 13, 15, 17, 16, 19, 18, 20]). Here, we present a general version of the algorithm. Let $0 < \beta < \bar\beta < 1$ be two parameters measuring the sizes of two neighborhoods of the central path. A typical Mizuno-Todd-Ye type algorithm moves the iterates forward into the larger neighborhood of size $\bar\beta$ and then back into the smaller one of size $\beta$ by alternately using the affine and centering directions.

Algorithm 2.1

Choose $\bar\beta \in (0,1)$ and $\varepsilon > 0$;
$(X, y, S) \leftarrow (\rho I, 0, \rho I)$;
Repeat until $(X, y, S) \in \mathcal{F}_\varepsilon$:

(Predictor step)
Solve the linear system (1.6) with $\sigma = 0$;
Compute

\[ \bar\theta = \max\Big\{ \tilde\theta \in [0,1] : \Big( \sum_{i=1}^n \big[ \lambda_i(X(\theta)S(\theta)) - (1-\theta)\mu \big]^2 \Big)^{1/2} \le \bar\beta (1-\theta)\mu,\ \forall\, \theta \in [0, \tilde\theta] \Big\}, \]

where

\[ (X(\theta), S(\theta)) = (X, S) + \theta(\Delta X, \Delta S), \qquad \mu = X \bullet S / n; \]

$(X, y, S) \leftarrow (X, y, S) + \bar\theta(\Delta X, \Delta y, \Delta S)$;

(Corrector step)
Solve the linear system (1.6) with $\sigma = 1$;
$(X, y, S) \leftarrow (X, y, S) + (\Delta X, \Delta y, \Delta S)$.

The computation of the step size $\bar\theta$ involves solving a complicated nonlinear equation. Alternatively, we can use a lower bound as proposed in [17, 20] for the KSH/HRVW/M and NT directions, and in [13] for the AHO direction. Specifically, for the KSH/HRVW/M and NT directions, we have

\[ \bar\theta \ge \frac{2}{\sqrt{1 + 4 \|P\, \Delta X\, \Delta S\, P^{-1}\|_F / ((\bar\beta - \beta)\mu)} + 1}. \tag{2.8} \]

As described in [13], in the case of the AHO direction, we have

\[ \bar\theta \ge \frac{2}{\dfrac{\omega}{\bar\beta - \beta} + \sqrt{\Big(\dfrac{\omega}{\bar\beta - \beta}\Big)^2 + \dfrac{4\delta}{\bar\beta - \beta}}}, \tag{2.9} \]

where

\[ \omega = \frac{1}{\mu} \big\| X^{-1/2}(\Delta X\, S + X\, \Delta S + \Delta X\, \Delta S) X^{1/2} \big\|_F, \qquad \delta = \frac{1}{\mu} \big\| X^{-1/2}\, \Delta X\, \Delta S\, X^{1/2} \big\|_F. \]

In our Mathematica package, we set $\bar\beta = 0.499$ and $\beta$ to be the size of the current neighborhood:

\[ \beta := \|X^{1/2} S X^{1/2} - \mu I\|_F / \mu. \]
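The neighborhood size $\beta$ just described can be evaluated from an eigendecomposition of $X$; a minimal NumPy sketch (the function name is ours):

```python
import numpy as np

def neighborhood_size(X, S):
    """beta = ||X^{1/2} S X^{1/2} - mu*I||_F / mu, with mu = (X . S)/n."""
    n = X.shape[0]
    mu = np.trace(X @ S) / n
    w, V = np.linalg.eigh(X)          # X = V diag(w) V^T, w > 0 for X > 0
    Xh = (V * np.sqrt(w)) @ V.T       # symmetric square root X^{1/2}
    return np.linalg.norm(Xh @ S @ Xh - mu * np.eye(n), 'fro') / mu
```

On the central path $X^{1/2} S X^{1/2} = \mu I$ and the measure is zero; it grows as the iterate drifts away.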

2.3 Mehrotra type algorithms

The Mehrotra type algorithms in our Mathematica package follow the paper of Todd, Toh and Tutuncu [21]. Let us briefly describe the algorithm.

Algorithm 2.2 (Todd-Toh-Tutuncu [21])

Choose $\gamma \in (0,1)$ and $\varepsilon > 0$;
$(X, y, S) \leftarrow (\rho I, 0, \rho I)$;
Repeat until $(X, y, S) \in \mathcal{F}_\varepsilon$:

(Predictor step)
Compute the predicted direction $(\delta X, \delta y, \delta S)$ by solving the linear system (1.6) with $\sigma = 0$;
Determine the parameter $\sigma$:

\[ \sigma := \frac{[(X + \alpha\, \delta X) \bullet (S + \beta\, \delta S)]^2}{[X \bullet S]^2}, \tag{2.10} \]

where

\[ \alpha := \min\big( 1,\ -\gamma / \lambda_{\min}(X^{-1} \delta X) \big), \qquad \beta := \min\big( 1,\ -\gamma / \lambda_{\min}(S^{-1} \delta S) \big). \tag{2.11} \]

(Corrector step)
Compute the corrected direction $(\Delta X, \Delta y, \Delta S)$ by solving the linear system (1.6) with $\sigma$ defined by (2.10) and the right side of (1.6a) modified as

\[ \sigma\mu I - H_P(XS + \delta X\, \delta S); \]

Compute $\alpha$ and $\beta$ from (2.11) with $\delta X$, $\delta S$ replaced by $\Delta X$, $\Delta S$;
$(X, y, S) \leftarrow (X, y, S) + (\alpha\, \Delta X,\ \beta\, \Delta y,\ \beta\, \Delta S)$.

In our Mathematica package, we choose $\gamma = 0.98$. Following [21], we choose $\sigma$ as follows:

\[ \sigma_{\mathrm{actual}} = \begin{cases} \max(0.05, \sigma) & \text{if } \alpha + \beta \ge 1.8, \\ \max(0.1, \sigma) & \text{if } 1.4 \le \alpha + \beta < 1.8, \\ \max(0.2, \sigma) & \text{if } \alpha + \beta < 1.4. \end{cases} \]
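The step parameters in (2.11) only require the most negative eigenvalue of $X^{-1}\delta X$ (which is real, since $X^{-1}\delta X$ is similar to the symmetric matrix $X^{-1/2}\delta X X^{-1/2}$); a sketch under our own naming:

```python
import numpy as np

def steplength(X, dX, gamma=0.98):
    """alpha = min(1, -gamma / lambda_min(X^{-1} dX)); keeps X + alpha*dX
    positive semidefinite, with safety factor gamma."""
    lam_min = np.linalg.eigvals(np.linalg.solve(X, dX)).real.min()
    return 1.0 if lam_min >= 0 else min(1.0, -gamma / lam_min)
```

For example, with $X = I$ and $\delta X = -I$ the boundary is reached at step 1, so the damped step is $\gamma = 0.98$.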

3 Homogeneous predictor-corrector algorithms

3.1 Potra-Sheng homogeneous formulation of SDP

Potra-Sheng [16] proposed a homogeneous formulation of SDP:

\[ A_i \bullet X = b_i \tau,\quad i = 1,\dots,m, \tag{3.12a} \]
\[ \sum_{i=1}^m y_i A_i + S = C \tau, \tag{3.12b} \]
\[ b^T y - C \bullet X = \kappa, \tag{3.12c} \]
\[ X \succeq 0,\quad S \succeq 0,\quad \tau \ge 0,\quad \kappa \ge 0. \tag{3.12d} \]

The search direction $(\Delta X, \Delta y, \Delta S, \Delta\tau, \Delta\kappa)$ of the homogeneous algorithms is defined by the following linear system:

\[ H_P(\Delta X\, S + X\, \Delta S) = \sigma\mu I - H_P(XS), \tag{3.13a} \]
\[ \kappa\, \Delta\tau + \tau\, \Delta\kappa = \sigma\mu - \tau\kappa, \tag{3.13b} \]
\[ A_i \bullet \Delta X - b_i\, \Delta\tau = (1-\sigma)\, r_i,\quad i = 1,\dots,m, \tag{3.13c} \]
\[ \sum_{i=1}^m \Delta y_i A_i + \Delta S - C\, \Delta\tau = (1-\sigma)\, R_d, \tag{3.13d} \]
\[ \Delta\kappa - b^T \Delta y + C \bullet \Delta X = (1-\sigma)\, r_g, \tag{3.13e} \]

where $\sigma \in [0,1]$ is a parameter, $\mu = (X \bullet S + \tau\kappa)/(n+1)$, and

\[ R_d = -\Big( \sum_{i=1}^m y_i A_i + S - C\tau \Big), \qquad r_i = -(A_i \bullet X - b_i \tau),\quad i = 1,\dots,m, \qquad r_g = -\big( \kappa - b^T y + C \bullet X \big). \]

Generic Homogeneous Algorithm

Let $(X^0, y^0, S^0, \tau^0, \kappa^0) = (I, 0, I, 1, 1)$. Repeat until a stopping criterion is satisfied:

- Choose $\sigma \in [0,1]$ and compute the solution $(\Delta X, \Delta y, \Delta S, \Delta\tau, \Delta\kappa)$ of the linear system (3.13).
- Compute a steplength $\theta$ such that

\[ X + \theta \Delta X \succeq 0,\quad S + \theta \Delta S \succeq 0,\quad \tau + \theta \Delta\tau > 0,\quad \kappa + \theta \Delta\kappa > 0. \]

- Update the iterates: $(X^+, y^+, S^+, \tau^+, \kappa^+) = (X, y, S, \tau, \kappa) + \theta(\Delta X, \Delta y, \Delta S, \Delta\tau, \Delta\kappa)$.

Properties of the above generic algorithm can be found in [17]. It was proved that $\{\tau_k\}$ is bounded away from zero if and only if the original problem (1.3) has a solution. If $\tau_k \to 0$ for some subsequence, then (1.3) is infeasible.
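The steplength computation in the generic algorithm can be sketched in the same spirit (illustrative NumPy; the damping factor $\gamma$ here is our own choice, not something prescribed by the paper):

```python
import numpy as np

def homogeneous_steplength(X, S, tau, kappa, dX, dS, dtau, dkappa, gamma=0.98):
    """Largest damped step theta keeping X + theta*dX and S + theta*dS
    positive semidefinite and tau + theta*dtau, kappa + theta*dkappa positive."""
    candidates = [1.0]
    for M, dM in ((X, dX), (S, dS)):
        lam_min = np.linalg.eigvals(np.linalg.solve(M, dM)).real.min()
        if lam_min < 0:
            candidates.append(-gamma / lam_min)
    for t, dt in ((tau, dtau), (kappa, dkappa)):
        if dt < 0:
            candidates.append(-gamma * t / dt)
    return min(candidates)
```

Only directions that actually shrink a variable contribute a boundary candidate; otherwise the full step 1 is taken.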

3.2 Computation of homogeneous search directions

We now derive a simple procedure to compute the direction $(\Delta X, \Delta y, \Delta S, \Delta\tau, \Delta\kappa)$ defined by (3.13). Let

\[ \tilde A^T = [\mathrm{vec}(A_1), \mathrm{vec}(A_2), \dots, \mathrm{vec}(A_m), -\mathrm{vec}(C)] = [A^T, -\mathrm{vec}(C)], \]
\[ b^T = [b_1, b_2, \dots, b_m], \qquad r^T = [r_1, r_2, \dots, r_m], \qquad c = \sigma\mu - \tau\kappa. \tag{3.14} \]

Further, let $E, F \in \mathbb{R}^{n^2 \times n^2}$ and $R_c \in \mathbb{R}^{n \times n}$ be such that (3.13a) has the equivalent vectorized form

\[ E\, \mathrm{vec}(\Delta X) + F\, \mathrm{vec}(\Delta S) = \mathrm{vec}(R_c). \tag{3.15} \]

From (3.13b) and (3.14), we have $\Delta\kappa = c/\tau - (\kappa/\tau)\Delta\tau$. Putting this expression for $\Delta\kappa$ into (3.13e), we obtain

\[ -C \bullet \Delta X = -(\kappa/\tau)\Delta\tau - b^T \Delta y + c/\tau - (1-\sigma)\, r_g. \tag{3.16} \]

By (3.13d), we get

\[ \tilde A^T \begin{pmatrix} \Delta y \\ \Delta\tau \end{pmatrix} + \mathrm{vec}(\Delta S) = (1-\sigma)\, \mathrm{vec}(R_d). \tag{3.17} \]

Therefore, from (3.15), (3.16) and (3.17), we deduce

\[ \tilde A E^{-1} F \tilde A^T \begin{pmatrix} \Delta y \\ \Delta\tau \end{pmatrix} = -\tilde A E^{-1} F\, \mathrm{vec}(\Delta S) + (1-\sigma)\, \tilde A E^{-1} F\, \mathrm{vec}(R_d) = \tilde A\, \mathrm{vec}(\Delta X) + \tilde A E^{-1} \big[ (1-\sigma) F\, \mathrm{vec}(R_d) - \mathrm{vec}(R_c) \big]. \]

Since $\tilde A\, \mathrm{vec}(\Delta X) = (A\, \mathrm{vec}(\Delta X);\, -C \bullet \Delta X)$ with $A\, \mathrm{vec}(\Delta X) = b\, \Delta\tau + (1-\sigma)\, r$ by (3.13c), substituting (3.16) yields

\[ \left[ \tilde A E^{-1} F \tilde A^T + \begin{pmatrix} 0 & -b \\ b^T & \kappa/\tau \end{pmatrix} \right] \begin{pmatrix} \Delta y \\ \Delta\tau \end{pmatrix} = \begin{pmatrix} (1-\sigma)\, r \\ c/\tau - (1-\sigma)\, r_g \end{pmatrix} + \tilde A E^{-1} \big[ (1-\sigma) F\, \mathrm{vec}(R_d) - \mathrm{vec}(R_c) \big]. \]

Then we have the following procedure.

Procedure 2

- Compute $\Delta y$ and $\Delta\tau$ by solving the linear system

\[ \left[ \tilde A E^{-1} F \tilde A^T + \begin{pmatrix} 0 & -b \\ b^T & \kappa/\tau \end{pmatrix} \right] \begin{pmatrix} \Delta y \\ \Delta\tau \end{pmatrix} = \tilde A E^{-1} \big[ (1-\sigma) F\, \mathrm{vec}(R_d) - \mathrm{vec}(R_c) \big] + \begin{pmatrix} (1-\sigma)\, r \\ c/\tau - (1-\sigma)\, r_g \end{pmatrix}. \]

- Compute $\Delta S$, $\Delta X$ and $\Delta\kappa$ as follows:

\[ \Delta S = (1-\sigma) R_d - \sum_{i=1}^m \Delta y_i A_i + \Delta\tau\, C, \]
\[ \mathrm{vec}(\Delta X) = E^{-1}(\mathrm{vec}(R_c) - F\, \mathrm{vec}(\Delta S)), \]
\[ \Delta\kappa = (\sigma\mu - \tau\kappa)/\tau - (\kappa/\tau)\Delta\tau. \]

3.3 Mizuno-Todd-Ye type homogeneous algorithms

Algorithm 3.1

Choose $\bar\beta \in (0,1)$ and $\varepsilon > 0$;
$(X, y, S, \tau, \kappa) \leftarrow (I, 0, I, 1, 1)$;
Repeat until $(X/\tau, y/\tau, S/\tau) \in \mathcal{F}_\varepsilon$ or $\tau$ is sufficiently small:

(Predictor step)
Solve the linear system (3.13) with $\sigma = 0$;
Compute

\[ \bar\theta = \max\Big\{ \tilde\theta \in [0,1] : \Big( \sum_{i=1}^n \big[ \lambda_i(X(\theta)S(\theta)) - (1-\theta)\mu \big]^2 + \big[ \tau(\theta)\kappa(\theta) - (1-\theta)\mu \big]^2 \Big)^{1/2} \le \bar\beta (1-\theta)\mu,\ \forall\, \theta \in [0, \tilde\theta] \Big\}, \]

where

\[ (X(\theta), S(\theta), \tau(\theta), \kappa(\theta)) = (X, S, \tau, \kappa) + \theta(\Delta X, \Delta S, \Delta\tau, \Delta\kappa); \]

$(X, y, S, \tau, \kappa) \leftarrow (X, y, S, \tau, \kappa) + \bar\theta(\Delta X, \Delta y, \Delta S, \Delta\tau, \Delta\kappa)$;

(Corrector step)
Solve the linear system (3.13) with $\sigma = 1$;
$(X, y, S, \tau, \kappa) \leftarrow (X, y, S, \tau, \kappa) + (\Delta X, \Delta y, \Delta S, \Delta\tau, \Delta\kappa)$.

In accordance with Algorithm 2.1, we can use a lower bound for the step size $\bar\theta$. If the KSH/HRVW/M or NT direction is used, then we have

\[ \bar\theta \ge \frac{2}{\sqrt{1 + 4\delta/(\bar\beta - \beta)} + 1}, \]

where

\[ \delta = \frac{1}{\mu} \Big[ \|\Delta X\, \Delta S\|_F^2 + (\Delta\tau\, \Delta\kappa)^2 \Big]^{1/2}. \]

In the case of the AHO direction, we have

\[ \bar\theta \ge \frac{2}{\dfrac{\omega}{\bar\beta - \beta} + \sqrt{\Big(\dfrac{\omega}{\bar\beta - \beta}\Big)^2 + \dfrac{4\delta}{\bar\beta - \beta}}}, \]

where

\[ \omega = \frac{1}{\mu} \Big[ \big\| X^{-1/2}(\Delta X\, S + X\, \Delta S + \Delta X\, \Delta S) X^{1/2} \big\|_F^2 + (\Delta\tau\, \kappa + \tau\, \Delta\kappa + \Delta\tau\, \Delta\kappa)^2 \Big]^{1/2}, \]
\[ \delta = \frac{1}{\mu} \Big[ \big\| X^{-1/2}\, \Delta X\, \Delta S\, X^{1/2} \big\|_F^2 + (\Delta\tau\, \Delta\kappa)^2 \Big]^{1/2}. \]

In our Mathematica package, we set $\bar\beta = 0.499$ and $\beta$ to be the size of the current neighborhood:

\[ \beta := \frac{1}{\mu} \Big[ \|X^{1/2} S X^{1/2} - \mu I\|_F^2 + (\tau\kappa - \mu)^2 \Big]^{1/2}. \]



3.4 Mehrotra type homogeneous algorithms

Algorithm 3.2

Choose $\gamma \in (0,1)$ and $\varepsilon > 0$;
$(X, y, S, \tau, \kappa) \leftarrow (I, 0, I, 1, 1)$;
Repeat until $(X/\tau, y/\tau, S/\tau) \in \mathcal{F}_\varepsilon$ or $\tau$ is sufficiently small:

(Predictor step)
Compute the predicted direction $(\delta X, \delta y, \delta S, \delta\tau, \delta\kappa)$ by solving the linear system (3.13) with $\sigma = 0$;
Determine the parameter $\sigma$:

\[ \sigma := \frac{[(X + \alpha\, \delta X) \bullet (S + \alpha\, \delta S) + (\tau + \alpha\, \delta\tau)(\kappa + \alpha\, \delta\kappa)]^2}{[X \bullet S + \tau\kappa]^2}, \tag{3.18} \]

where

\[ \alpha := \min\big( 1,\ -\gamma/\lambda_{\min}(X^{-1} \delta X),\ -\gamma\tau/\delta\tau,\ -\gamma\kappa/\delta\kappa,\ -\gamma/\lambda_{\min}(S^{-1} \delta S) \big). \tag{3.19} \]

(Corrector step)
Compute the corrected direction $(\Delta X, \Delta y, \Delta S, \Delta\tau, \Delta\kappa)$ by solving the linear system (3.13) with $\sigma$ defined by (3.18) and the right sides of (3.13a) and (3.13b) replaced by

\[ \sigma\mu I - H_P(XS + \delta X\, \delta S) \]

and

\[ \sigma\mu - \tau\kappa - \delta\tau\, \delta\kappa, \]

respectively;
Compute $\alpha$ from (3.19) with $\delta X$, $\delta S$, $\delta\tau$, $\delta\kappa$ replaced by $\Delta X$, $\Delta S$, $\Delta\tau$, $\Delta\kappa$;
$(X, y, S, \tau, \kappa) \leftarrow (X, y, S, \tau, \kappa) + \alpha(\Delta X, \Delta y, \Delta S, \Delta\tau, \Delta\kappa)$.

In our Mathematica package, we choose $\gamma = 0.98$. Similar to [21], we choose $\sigma$ as follows:

\[ \sigma_{\mathrm{actual}} = \begin{cases} \max(0.05, \sigma) & \text{if } \alpha \ge 0.9, \\ \max(0.1, \sigma) & \text{if } 0.7 \le \alpha < 0.9, \\ \max(0.2, \sigma) & \text{if } \alpha < 0.7. \end{cases} \]

4 Numerical results

Having described the algorithms used in our Mathematica package, we are now ready to present the numerical results gained from testing the package. As mentioned earlier, in our package we implement the following search directions: AHO, KSH/HRVW/M, and NT. For each of these search directions, we implement two Mizuno-Todd-Ye type predictor-corrector algorithms and two Mehrotra (M) type predictor-corrector algorithms. Half of the algorithms are infeasible-interior-point algorithms (I), which require starting points of the form $(X, y, S) = (\rho I, 0, \rho I)$ with large $\rho$, while the other half use the Potra-Sheng homogeneous formulation (H) [16], which uses the fixed starting point $(X, y, S, \tau, \kappa) = (I, 0, I, 1, 1)$. Therefore, we have implemented 12 algorithms, abbreviated as follows:

Infeasible Mizuno-Todd-Ye AHO: I-PC-AHO
Infeasible Mizuno-Todd-Ye KSH/HRVW/M: I-PC-KSH
Infeasible Mizuno-Todd-Ye NT: I-PC-NT
Infeasible Mehrotra AHO: I-M-PC-AHO
Infeasible Mehrotra KSH/HRVW/M: I-M-PC-KSH
Infeasible Mehrotra NT: I-M-PC-NT
Homogeneous Mizuno-Todd-Ye AHO: H-PC-AHO
Homogeneous Mizuno-Todd-Ye KSH/HRVW/M: H-PC-KSH
Homogeneous Mizuno-Todd-Ye NT: H-PC-NT
Homogeneous Mehrotra AHO: H-M-PC-AHO
Homogeneous Mehrotra KSH/HRVW/M: H-M-PC-KSH
Homogeneous Mehrotra NT: H-M-PC-NT

4.1 Description of the test problems

In our paper, we consider the following classes of semidefinite programs.

1. A simple SDP: $m = 3$, $n = 2$

First we test the algorithms on a semidefinite program of small size:

\[ C = \begin{pmatrix} 5 & -1 \\ -1 & 0 \end{pmatrix}, \tag{4.20a} \]
\[ A = \left[ \begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix}, \begin{pmatrix} 3 & 1 \\ 1 & -2 \end{pmatrix}, \begin{pmatrix} 1 & 3 \\ 3 & 1 \end{pmatrix} \right], \tag{4.20b} \]
\[ b = (5, -1, -4)^T. \tag{4.20c} \]

It can be verified that one solution to this SDP is:

\[ X^* = \begin{pmatrix} 1 & -3/7 \\ -3/7 & 11/7 \end{pmatrix}, \tag{4.21a} \]
\[ y^* = (10/7,\ 9/7,\ 10/35)^T, \tag{4.21b} \]
\[ S^* = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}. \tag{4.21c} \]

2. An infeasible SDP: $m = 3$, $n = 2$

Perturbing (4.20) slightly, we obtain an SDP with no solutions:

\[ C = \begin{pmatrix} 5 & -1 \\ -1 & 0 \end{pmatrix}, \tag{4.22a} \]
\[ A = \left[ \begin{pmatrix} 1 & -1 \\ -1 & 2 \end{pmatrix}, \begin{pmatrix} 3 & 1 \\ 1 & -2 \end{pmatrix}, \begin{pmatrix} 1 & 3 \\ 3 & 1.01 \end{pmatrix} \right], \tag{4.22b} \]
\[ b = (5, -1, -4)^T. \tag{4.22c} \]

A useful feature of our homogeneous algorithms is that they can determine the feasibility of a given problem after a small number of steps. If, during the execution of a homogeneous algorithm, the condition $\tau < \varepsilon^2$ holds, then the problem is considered to be infeasible and the algorithm halts.

3. Kojima-Shida-Shindoh problem

We also test our algorithms on a problem posed by Kojima, Shida, and Shindoh [7]:

\[ C = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}, \tag{4.23a} \]
\[ A = \left[ \begin{pmatrix} -2 & 0 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 1 & -2 \end{pmatrix} \right], \tag{4.23b} \]
\[ b = (-2, 0)^T, \tag{4.23c} \]

whose solution is:

\[ X^* = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \tag{4.24a} \]
\[ y^* = (0, 0)^T, \tag{4.24b} \]
\[ S^* = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}. \tag{4.24c} \]

We solve this problem with tolerances $\varepsilon$ of $10^{-6}$, $10^{-8}$, $10^{-10}$, $10^{-12}$, and $10^{-14}$. The purpose of testing our algorithms on this problem is to check whether each of the algorithms considered displays superlinear convergence: the problem (4.23) requires many more iterations to solve if the algorithm applied to it does not possess superlinear convergence.

4. Minimum eigenvalue problem:

\[ \min\{\, \lambda : \lambda \text{ is an eigenvalue of } M \,\}, \tag{4.25} \]

where $M \in S^N$. It is well known that this problem can be expressed as an SDP with $m = 1$, $n = N$, and

\[ C = M, \tag{4.26a} \]
\[ A_1 = I, \tag{4.26b} \]
\[ b = (1). \tag{4.26c} \]

To test this problem, we randomly generated five dense random symmetric matrices each for $N = 5$ and $N = 10$. To ensure the matrices are symmetric, we generate a random $N \times N$ matrix $A$ and compute $(A + A^T)/2$. Note that Mehrotra-type methods require the computation of the minimum eigenvalue of a matrix. For this reason, we do not test the Mehrotra-type algorithms on this type of problem.
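For this problem class the optimal solution is known in closed form, which makes it a convenient correctness check for a solver; a NumPy sketch (helper names are ours):

```python
import numpy as np

def min_eig_sdp_data(M):
    """SDP data (4.26): min{ M . X : I . X = 1, X >= 0 }."""
    return M, [np.eye(M.shape[0])], np.array([1.0])

def min_eig_solution(M):
    """Optimal pair: X* = v v^T for a unit eigenvector v of lambda_min,
    y* = lambda_min, S* = M - lambda_min * I."""
    w, V = np.linalg.eigh(M)
    v = V[:, 0]
    return np.outer(v, v), np.array([w[0]]), M - w[0] * np.eye(M.shape[0])
```

The pair satisfies the optimality system (1.3): $I \bullet X^* = 1$, $S^* \succeq 0$, and $X^* S^* = 0$.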

5. Max-Cut problem:

\[ \min\{\, -L \bullet X : \mathrm{diag}(X) = e/4 \,\}, \tag{4.27} \]

where $L = M - \mathrm{Diag}(Me)$, $e$ is the vector of all ones, and $M \in \mathbb{R}^{N \times N}$ is a weighted adjacency matrix of a graph [3, 21]. The authors are indebted to Todd, Toh and Tutuncu for making available their MATLAB code, which contains a program for converting such a problem into an SDP with $m = N$, $n = N$:

\[ C = M - \mathrm{Diag}(Me), \tag{4.28a} \]
\[ A_i(j,k) = 1 \quad \text{for } j = k = i, \tag{4.28b} \]
\[ A_i(j,k) = 0 \quad \text{otherwise}, \tag{4.28c} \]
\[ b = e/4. \tag{4.28d} \]

6. Norm minimization problem:

\[ \min \Big\| A_0 + \sum_{i=1}^M x_i A_i \Big\|, \tag{4.29} \]

where $A_i \in \mathbb{R}^{N \times N}$, $i = 0, \dots, M$. This problem can easily be expressed as a semidefinite program [21] with $m = M + 1$, $n = 2N$:

\[ C = A_0, \tag{4.30a} \]
\[ A = [A_1, \dots, A_M, I], \tag{4.30b} \]
\[ b = (0, \dots, 0, 1)^T. \tag{4.30c} \]

In order to test the algorithms on this class of problems, we randomly generated linearly independent dense matrices $A_0, \dots, A_M$, $A_i \in \mathbb{R}^{N \times N}$, with $M = N = 5$ and $M = N = 10$.
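The random instance generation just described (symmetrization via $(A + A^T)/2$) and the max-cut data (4.28) can be sketched as follows (function names are ours):

```python
import numpy as np

def random_symmetric(N, rng):
    """Dense random symmetric matrix via (A + A^T)/2."""
    A = rng.standard_normal((N, N))
    return (A + A.T) / 2

def maxcut_sdp_data(M):
    """Max-cut data (4.28): C = M - Diag(M e), A_i = e_i e_i^T, b = e/4."""
    N = M.shape[0]
    e = np.ones(N)
    C = M - np.diag(M @ e)
    A_list = [np.diag((np.arange(N) == i).astype(float)) for i in range(N)]
    return C, A_list, e / 4
```

The constraints $A_i \bullet X = b_i$ then read $X_{ii} = 1/4$, i.e. $\mathrm{diag}(X) = e/4$ as in (4.27).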

4.2 Results and discussion

We solved all of the preceding problems using all 12 of the algorithms comprising the Mathematica package. Unless otherwise specified, our stopping criterion is to cease execution when $E_k < 10^{-6}$, where

\[ E_k := \max\{\, X \bullet S,\ |r_i|,\ i = 1, 2, \dots, m,\ \|R_d\| \,\} \]

for the infeasible algorithms, and

\[ E_k := \max\{\, X \bullet S/\tau^2,\ |r_i|/\tau,\ i = 1, 2, \dots, m,\ \|R_d\|/\tau \,\} \]

for the homogeneous algorithms. We choose $(X, y, S) = (\rho I, 0, \rho I)$ with $\rho = 10^4$ as our starting point for the infeasible-interior-point algorithms, while for the homogeneous algorithms we use the fixed starting point $(X, y, S, \tau, \kappa) = (I, 0, I, 1, 1)$. We chose to solve only problems of a small size because the current version of the Mathematica package cannot handle large problems efficiently. With the release of Mathematica 3.0, we plan to revise our package so that it can solve larger problems more efficiently.

Tables 4.1-4.8 show the number of iterations required to solve SDPs of the various types using the various methods. Our results show that for all problems considered, methods using the homogeneous formulation converge more quickly than non-homogeneous methods, usually by 1 or 2 iterations. In nearly all cases, Mehrotra-type methods outperform Mizuno-Todd-Ye type methods. In general, the performance of the algorithms (in terms of the number of iterations required) ranks as follows:

1. H-M-PC algorithms
2. H-PC algorithms
3. I-M-PC algorithms
4. I-PC algorithms

The Kojima-Shida-Shindoh problem [7] is a good example of the differences between homogeneous and non-homogeneous algorithms. The algorithms H-PC-KSH, H-PC-AHO, and H-PC-NT all solve this problem exactly in 1 iteration. We also find that the Mehrotra-type algorithms using the homogeneous formulation take roughly half the number of iterations of the non-homogeneous algorithms. As shown in Table 4.9, the algorithms H-M-PC-KSH, H-M-PC-AHO, and H-M-PC-NT reduce the error by a factor of 20 at each step. Clearly this is a case where the homogeneous algorithms provide superlinear convergence.

Not only do homogeneous methods converge in fewer iterations, they are also less susceptible to numerical inaccuracies. In several cases, particularly with the algorithms I-PC-KSH and I-M-PC-KSH, numerical inaccuracies in the computation of the KSH/HRVW/M direction cause the algorithms to fail to converge within 50 iterations.
This limit of 50 iterations is generous, since we expect that most of the time convergence within the prescribed accuracy will occur within 10-20 iterations. An entry of 25 iterations or more in any of Tables 4.1-4.8 usually indicates that the convergence of the algorithm was affected by numerical inaccuracies. In our test problems, algorithms using the homogeneous formulation converged to the solution more frequently. Our results concur with those presented in [21], in that algorithms using the AHO direction are able to achieve a high degree of accuracy more easily than algorithms using the NT or KSH/HRVW/M directions. In our testing we have found that algorithms using the KSH/HRVW/M direction are unable to decrease $E_k$ below $10^{-6}$ in some cases. In many cases during testing we observed that algorithms using the KSH/HRVW/M direction would stagnate with $E_k$ just above the $10^{-6}$ threshold.

Our results show that for the types of problems considered, algorithm H-M-PC-AHO is the most robust. It converged rapidly for all problems, and with a high degree of accuracy. The performance of H-M-PC-NT and H-M-PC-KSH was nearly the same as that of H-M-PC-AHO. In a select few cases, these two algorithms failed to converge within 50 iterations.

4.3 Obtaining the package

Our Mathematica package (version 1.0) is freely available at

http://www.cs.uiowa.edu/~brixius/sdp.html.

At this location, we provide this paper in addition to a Mathematica notebook containing all the algorithms tested, with examples of their usage. The notebook also contains routines for generating test problems of the types presented here. We also provide the actual semidefinite programs tested to obtain the data presented in Tables 4.1-4.8. An updated version of the package, utilizing new features of Mathematica 3.0, will be released in the coming months. Finally, we remark that this package would not have been possible without the help of M. Kojima, who wrote the Mathematica code PINPAL [4], and of M. J. Todd, K. C. Toh, and R. H. Tutuncu, who generously provided their MATLAB code, allowing us to effectively implement the Mehrotra-type methods and present many of the examples considered here [21].

References

[1] F. Alizadeh, J.-P. A. Haeberly, and M. L. Overton. Primal-dual interior point methods for semidefinite programming. Working paper, 1994.

[2] F. Alizadeh, J.-P. A. Haeberly, and M. L. Overton. Primal-dual interior-point methods for semidefinite programming: convergence rates, stability and numerical results. Working paper, May 1996.

[3] C. Helmberg, F. Rendl, R. J. Vanderbei, and H. Wolkowicz. An interior-point method for semidefinite programming. Technical report, Program in Statistics and Operations Research, Princeton University, 1994.

[4] M. Kojima. A primitive interior-point algorithm for semidefinite programs in Mathematica. Research Reports on Information Sciences B-293, Department of Information Sciences, Tokyo Institute of Technology, 2-12-1 Oh-Okayama, Meguro-ku, Tokyo 152, Japan, December 1994.

[5] M. Kojima, M. Shida, and S. Shindoh. Global and local convergence of predictor-corrector infeasible-interior-point algorithms for semidefinite programs. Research Reports on Information Sciences B-305, Department of Information Sciences, Tokyo Institute of Technology, 2-12-1 Oh-Okayama, Meguro-ku, Tokyo 152, Japan, October 1995.

[6] M. Kojima, M. Shida, and S. Shindoh. Local convergence of predictor-corrector infeasible-interior-point algorithms for semidefinite programs. Research Reports on Information Sciences B-306, Department of Information Sciences, Tokyo Institute of Technology, 2-12-1 Oh-Okayama, Meguro-ku, Tokyo 152, Japan, December 1995.

[7] M. Kojima, M. Shida, and S. Shindoh. A predictor-corrector interior-point algorithm for the semidefinite linear complementarity problem using the Alizadeh-Haeberly-Overton search direction. Research Reports on Information Sciences B-311, Department of Information Sciences, Tokyo Institute of Technology, 2-12-1 Oh-Okayama, Meguro-ku, Tokyo 152, Japan, January 1996.

[8] M. Kojima, S. Shindoh, and S. Hara. Interior-point methods for the monotone linear complementarity problem in symmetric matrices. Research Reports on Information Sciences B-282, Department of Information Sciences, Tokyo Institute of Technology, 2-12-1 Oh-Okayama, Meguro-ku, Tokyo 152, Japan, April 1994.

[9] Z.-Q. Luo, J. F. Sturm, and S. Zhang. Superlinear convergence of a symmetric primal-dual path following algorithm for semidefinite programming. Report 9607/A, Econometric Institute, Erasmus University Rotterdam, The Netherlands, January 1996.

[10] S. Mehrotra. On the implementation of a primal-dual interior point method. SIAM Journal on Optimization, 2:575-601, 1992.

[11] S. Mizuno, M. J. Todd, and Y. Ye. On adaptive-step primal-dual interior-point algorithms for linear programming. Mathematics of Operations Research, 18(4):964-981, 1993.

[12] R. D. C. Monteiro. Primal-dual path following algorithms for semidefinite programming. Working paper, School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA, September 1995.

[13] R. D. C. Monteiro. Polynomial convergence of primal-dual algorithms for semidefinite programming based on the Monteiro and Zhang family of directions. Working paper, School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA, July 1996.

[14] R. D. C. Monteiro and Y. Zhang. A unified analysis for a class of path-following primal-dual interior-point algorithms for semidefinite programming. Working paper, June 1996.

[15] Y. E. Nesterov and M. J. Todd. Primal-dual interior-point methods for self-scaled cones. Technical Report 1125, School of Operations Research and Industrial Engineering, Cornell University, Ithaca, New York 14853-3801, USA, 1995.

[16] F. A. Potra and R. Sheng. Homogeneous interior-point algorithms for semidefinite programming. Reports on Computational Mathematics 82, Department of Mathematics, The University of Iowa, Iowa City, IA 52242, USA, November 1995.

[17] F. A. Potra and R. Sheng. A superlinearly convergent primal-dual infeasible-interior-point algorithm for semidefinite programming. Reports on Computational Mathematics 78, Department of Mathematics, The University of Iowa, Iowa City, IA 52242, USA, October 1995.

[18] F. A. Potra and R. Sheng. Superlinear convergence of a predictor-corrector method for semidefinite programming without shrinking central path neighborhood. Reports on Computational Mathematics 91, Department of Mathematics, The University of Iowa, Iowa City, IA 52242, USA, August 1996.

[19] F. A. Potra and R. Sheng. Superlinear convergence of interior-point algorithms for semidefinite programming. Reports on Computational Mathematics 86, Department of Mathematics, The University of Iowa, Iowa City, IA 52242, USA, April 1996. Revised May 1996.

[20] R. Sheng, F. A. Potra, and J. Ji. On a general class of interior-point algorithms for semidefinite programming with polynomial complexity and superlinear convergence. Reports on Computational Mathematics 89, Department of Mathematics, The University of Iowa, Iowa City, IA 52242, USA, June 1996.

[21] M. J. Todd, K. C. Toh, and R. H. Tutuncu. On the Nesterov-Todd direction in semidefinite programming. Technical report, School of Operations Research and Industrial Engineering, Cornell University, Ithaca, NY 14853-3801, USA, 1996.

[22] Y. Zhang. On extending primal-dual interior-point algorithms from linear programming to semidefinite programming. TR 95-20, Department of Mathematics and Statistics, University of Maryland Baltimore County, Baltimore, Maryland 21228-5398, USA, October 1995.


            I-PC            H-PC            I-M-PC          H-M-PC
            KSH  AHO  NT    KSH  AHO  NT    KSH  AHO  NT    KSH  AHO  NT
Problem 1   7    7    7     5    5    5     10   11   14    6    6    6

Table 4.1: Iterations to solve (4.20)

            I-PC            H-PC            I-M-PC          H-M-PC
            KSH  AHO  NT    KSH  AHO  NT    KSH  AHO  NT    KSH  AHO  NT
Problem 1   50+  50+  50+   11   10   11    50+  50+  50+   13   13   13

Table 4.2: Iterations to detect infeasibility in (4.22)

            I-PC            H-PC            I-M-PC          H-M-PC
Tolerance   KSH  AHO  NT    KSH  AHO  NT    KSH  AHO  NT    KSH  AHO  NT
10^-6       20   15   15    1    1    1     18   12   12    5    5    5
10^-8       21   16   16    1    1    1     21   14   14    7    7    7
10^-10      21   16   16    1    1    1     21   15   15    8    8    8
10^-12      22   16   18    1    1    1     27   17   17    10   10   10
10^-14      22   17   50    1    1    1     30   19   19    11   11   11

Table 4.3: Iterations to solve the Kojima-Shida-Shindoh problem with varying tolerances



          I-PC            H-PC
Problem   KSH  AHO  NT    KSH  AHO  NT
1         50+  16   16    10   10   10
2         50+  16   16    11   10   11
3         50+  17   17    12   11   12
4         15   15   15    10   9    10
5         50+  15   15    9    8    9

Table 4.4: Iterations to solve minimum eigenvalue problem on 5x5 matrix

          I-PC            H-PC
Problem   KSH  AHO  NT    KSH  AHO  NT
1         50+  19   19    14   13   14
2         50+  19   19    14   13   14
3         50+  19   19    14   13   14
4         20   20   20    15   14   15
5         50   19   19    14   13   14

Table 4.5: Iterations to solve minimum eigenvalue problem on 10x10 matrix


          I-PC            H-PC            I-M-PC          H-M-PC
Problem   KSH  AHO  NT    KSH  AHO  NT    KSH  AHO  NT    KSH  AHO  NT
1         50+  17   17    48   12   14    20   13   13    11   10   10
2         17   15   15    13   10   11    18   12   12    11   11   11
3         49   18   18    44   13   14    12   13   13    11   11   11
4         35   16   16    28   11   13    12   12   12    11   10   10
5         50+  15   15    9    10   11    12   12   12    10   10   10

Table 4.6: Iterations to solve max-cut problem on 5x5 matrix

          I-PC            H-PC            I-M-PC          H-M-PC
Problem   KSH  AHO  NT    KSH  AHO  NT    KSH  AHO  NT    KSH  AHO  NT
1         34   18   18    26   12   14    35   13   13    50+  11   50+
2         50+  21   21    50+  15   25    50+  13   14    12   12   15
3         45   19   19    42   13   15    50+  13   13    12   11   50+
4         50+  20   20    50+  14   19    50+  13   13    12   12   50+
5         50+  22   22    50+  16   19    50+  14   14    13   13   13

Table 4.7: Iterations to solve max-cut problem on 10x10 matrix

          I-PC            H-PC            I-M-PC          H-M-PC
Problem   KSH  AHO  NT    KSH  AHO  NT    KSH  AHO  NT    KSH  AHO  NT
1         50+  22   22    50+  14   15    40   14   14    11   10   10
2         30   19   19    20   11   12    50+  13   13    10   9    9
3         50+  18   19    22   11   12    50+  13   13    11   11   11
4         50+  17   18    15   10   12    18   12   13    10   10   12
5         50+  17   17    15   10   11    30   13   13    10   10   10

Table 4.8: Iterations to solve norm minimization problem on five 5x5 matrices

Iteration   X . S
1           2.0000 x 10^0
2           1.0000 x 10^-1
3           5.0000 x 10^-3
4           2.5000 x 10^-4
5           1.2500 x 10^-5
6           6.2500 x 10^-7
7           3.1250 x 10^-8
8           1.5625 x 10^-9
9           7.8125 x 10^-11
10          3.9063 x 10^-12
11          1.9500 x 10^-13
12          4.8828 x 10^-15

Table 4.9: Error of H-M-PC-KSH, H-M-PC-AHO, H-M-PC-NT with tolerance 10^-14
