Infeasible–Start Semidefinite Programming Algorithms via Self–Dual Embeddings∗

E. de Klerk, C. Roos, T. Terlaky
Faculty of Technical Mathematics and Informatics, Delft University of Technology,
P.O. Box 5031, 2600 GA Delft, The Netherlands.
e–mail: [email protected], [email protected], [email protected]

May 1997

Abstract The development of algorithms for semidefinite programming is an active research area, based on extensions of interior point methods for linear programming. As semidefinite programming duality theory is weaker than that of linear programming, only partial information can be obtained in some cases of infeasibility, nonzero optimal duality gaps, etc. Infeasible start algorithms have been proposed which yield different kinds of information about the solution. In this paper a comprehensive treatment of a specific initialization strategy is presented, namely self-dual embedding, where the original primal and dual problems are embedded in a larger problem with a known interior feasible starting point. A framework for infeasible start algorithms with the best obtainable complexity bound is thus presented. The main results concern embedding extended Lagrange-Slater dual (ELSD) problems (as opposed to Lagrangian duals), in order to detect general infeasibility. Remaining difficulties are stated clearly, and several open problems are posed.

Key words: Semidefinite programming, interior point method, self-dual embedding, initialization, central path, extended Lagrange-Slater duality.

1 Introduction

The extension of interior point algorithms from linear programming (LP) to semidefinite programming (SDP) became an active research area when Alizadeh [1] and Nesterov and Nemirovskii [13] independently demonstrated the rich possibilities. Most of the algorithms found in the literature require feasible starting points. So-called ‘big-M’ methods (see e.g. [23]) are often employed in practice to obtain feasible starting points.

∗ Published as: E. de Klerk, C. Roos, and T. Terlaky. Infeasible start semidefinite programming algorithms via self-dual embeddings. In H. Wolkowicz and P.M. Pardalos, editors, Topics in Semidefinite and Interior Point Methods, number 18 in Fields Institute Communications, pages 215–236, Providence, RI, 1998. American Mathematical Society. ISBN: 0-8218-0825-7.


In the LP case an elegant solution for the initialization problem is to embed the original problem in a skew–symmetric self–dual problem which has a known interior feasible solution on the central path [10, 26]. The solution of the embedding problem then yields the optimal solution to the original problem, or gives a certificate of either infeasibility or unboundedness. In this way detailed information about the solution is obtained.

The idea of self-dual embeddings for LP dates back to the 1950’s and the work of Goldman and Tucker [9]. With the arrival of interior point methods, the embedding idea was revived to be used in infeasible start algorithms. Despite the desirable theoretical properties of self-dual embeddings, the idea did not receive immediate recognition in implementations, due to the fact that the embedding problem has a dense column in the coefficient matrix. This can lead to fill-in of Choleski factorizations during computation. In spite of this perception, Xu et al. [24] have made a successful implementation for LP using the embedding, and it has even been implemented as an option in the well-known commercial LP solver CPLEX-barrier. The common consensus now is that this strategy promises to be competitive in practice [3] (see also [7] and [20]). A homogeneous embedding of monotone nonlinear complementarity problems is discussed by Andersen and Ye in [4].

For semidefinite programming the homogeneous embedding idea was first developed by Potra and Sheng [16]. The embedding strategy was extended by De Klerk et al. in [5] and independently by Luo et al. [11] to obtain self-dual embedding problems with nonempty interiors. The resulting embedding problem has a known centered starting point, unlike the homogeneous embedding; it can therefore be solved using any feasible path-following interior point method. This is an advantage in the SDP case, where many possible primal-dual algorithms are available, while none has yet emerged as clear favourite.
A so–called maximally complementary solution (e.g. the limit of the central path) of the embedding problem yields one of the following alternatives about the original problem pair:

(I) an optimal solution with zero duality gap for the original problem is obtained;

(II) a ray is obtained for either the primal and/or dual problem (strong infeasibility is detected);

(III) a certificate is obtained that no optimal solution pair with zero duality gap exists and that neither the primal nor the dual problem has a ray. This can only happen if one or both of the primal and dual SDP problems fail to satisfy the Slater regularity condition.

Loosely speaking, the original primal and dual problems are solved if a complementary solution pair exists, or if one or both of the problems are strongly infeasible. Unfortunately, some pathological duality effects can occur for SDP¹ which are absent from LP, for example:

• a positive duality gap at an optimal primal-dual solution pair;
• an arbitrarily small duality gap can be attained by feasible primal-dual pairs, but no optimal pair exists;
• an SDP problem may have an optimal solution even though its (Lagrangian) dual is infeasible.

In cases like these little or no information could be given in [5]. In this paper we elaborate on the work in [5] in three important respects:

(1) It is indicated how to extend any primal-dual algorithm to solve the embedding problem. In this way O(√(n+2) log(1/ǫ)) iteration complexity can be obtained for computing an ǫ-optimal solution to the embedding problem.

(2) The problem of how to decide which variables are zero in a maximally complementary solution of the embedding problem, if only an ǫ-optimal solution is known, is discussed. This is important in drawing conclusions about the original problem pair from an ǫ-optimal solution of the embedding problem.

(3) Solutions to the unresolved duality questions are given. We show how to detect weak infeasibility and unboundedness in general by using extended Lagrange-Slater dual problems [18] in the embedding, where necessary. In this way the optimal value of a given SDP problem can be obtained if it is finite. This solves the open problem posed by Ramana in [19], namely how to use the extended Lagrange-Slater dual problems in an infeasible-start algorithm.

¹ Examples of these effects will be given in Sections 2 and 7, and can also be found in [11] and [23].

Outline of the paper After some preliminaries in Section 2, a review of recent results concerning the convergence of the central path is given in Section 3, with simplified proofs. The embedding strategy is discussed thereafter in Section 4. Solution strategies for solving the embedding problem are given in Section 5. In Section 6 it is shown how to interpret an ǫ-optimal solution of the embedding problem in order to draw conclusions about the solution of the original problem pair (i.e. to distinguish between the abovementioned cases (I) to (III)). The remaining difficulties are highlighted. Remaining duality issues and ways of detecting weak infeasibility are discussed in Section 7. In Section 8 it is shown how extended Lagrange-Slater duals can be used in the embedding strategy instead of Lagrangian duals to give a certificate of the status of a given problem. Finally, some conclusions are drawn.

2 Preliminaries

2.1 Problem statement

We will consider the semi–definite programming problem in the standard form. Thus a problem in standard primal form may be written as:

(P) : find p∗ = inf_X Tr(CX) subject to Tr(Ai X) = bi, i = 1, . . . , m, X  0,

where C and the Ai’s are symmetric n×n matrices, b ∈ IRm, and X  0 means X is positive semi–definite. The Lagrangian dual problem of a problem of the form (P) takes the standard dual form:

(D) : find d∗ = sup_{S,y} bT y subject to ∑_{i=1}^m yi Ai + S = C, S  0.

The solutions X and (y, S) will be referred to as feasible solutions as they satisfy the primal and dual constraints respectively. The values p∗ and d∗ will be called the optimal values of (P) and (D), respectively. We use the convention that p∗ = −∞ if (P) is unbounded and p∗ = ∞ if (P) is infeasible, with the analogous convention for (D).

The primal and dual feasible sets will be denoted by P and D respectively, and P∗ and D∗ will denote the respective optimal sets, i.e.

P∗ = {X ∈ P : Tr(CX) = p∗} and D∗ = {(S, y) ∈ D : bT y = d∗}.

A problem (P) (resp. (D)) is called solvable if P∗ (resp. D∗) is nonempty.

We will assume that the matrices Ai are linearly independent. Under this assumption y is uniquely determined for a given dual feasible S.

2.2 The duality gap and orthogonality property

Recall that the duality gap for (P) and (D) at solutions X ∈ P and (y, S) ∈ D is given by

Tr(CX) − bT y = Tr((∑_{i=1}^m yi Ai + S)X) − ∑_{i=1}^m yi Tr(Ai X) = Tr(SX).

The optimal duality gap is said to be zero if

inf_{X∈P} Tr(CX) = sup_{(S,y)∈D} bT y.    (1)

Note that (1) does not imply that P∗ and D∗ are nonempty. A problem (P) (resp. (D)) is called strictly feasible if there exists X ∈ P with X ≻ 0 (resp. (y, S) ∈ D with S ≻ 0). Strict feasibility is equivalent to the well-known Slater constraint qualification or Slater regularity condition. It is well-known that if both (P) and (D) are strictly feasible, then P∗ and D∗ are nonempty and the duality gap is zero. If both (P) and (D) are feasible, and one is strictly feasible, then (1) is also guaranteed to hold. The proof of the following well-known orthogonality property is trivial.

Lemma 2.1 (Orthogonality) Let (X, S) and (X^0, S^0) be two pairs of feasible solutions. The following orthogonality relation holds:

Tr((X − X^0)(S − S^0)) = 0.
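Lemma 2.1 is easy to illustrate numerically. The toy data below (one constraint, diagonal matrices) is an arbitrary illustrative choice, not an instance taken from the text:

```python
import numpy as np

# A toy instance with m = 1: primal constraint Tr(A1 X) = 1, dual constraint y*A1 + S = C.
A1 = np.eye(2)
C = np.diag([2.0, 3.0])

# Two primal feasible solutions (both satisfy Tr(A1 X) = 1)
X, X0 = np.diag([0.5, 0.5]), np.diag([0.2, 0.8])

# Two dual feasible solutions: S = C - y*A1 is PSD for any y <= 2
y, y0 = 1.0, 0.5
S, S0 = C - y * A1, C - y0 * A1

# Lemma 2.1: Tr((X - X0)(S - S0)) = 0 for any two feasible pairs
gap = np.trace((X - X0) @ (S - S0))
print(abs(gap) < 1e-12)  # True
```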

2.3 Feasibility issues

To decide about possible infeasibility and unboundedness of the problems (P) and (D) we need the following definition.

Definition 2.1 (Primal and dual rays) We say that the primal problem (P) has a ray if there is a symmetric matrix X̄  0 such that Tr(Ai X̄) = 0, ∀i and Tr(C X̄) < 0. Analogously, the dual problem (D) has a ray if there is a vector ȳ ∈ IRm such that S̄ := −∑_{i=1}^m ȳi Ai  0 and bT ȳ > 0.

Primal rays cause infeasibility of the dual problem, and vice versa. Formally one has the following result.

Lemma 2.2 If there is a dual ray ȳ then (P) is infeasible. Similarly, a primal ray X̄ implies infeasibility of (D).

Proof: Let a dual ray ȳ be given. Assuming the existence of a primal feasible X, one has

0 < bT ȳ = ∑_{i=1}^m Tr(Ai X) ȳi = −Tr(X S̄) ≤ 0,

which is a contradiction. The proof in case of a primal ray proceeds similarly. ✷

Definition 2.2 (Strong infeasibility) Problem (P) (resp. (D)) is called strongly infeasible if (D) (resp. (P)) has a ray.

Every infeasible LP problem is strongly infeasible, but in the SDP case so-called weak infeasibility is also possible.

Definition 2.3 (Weak infeasibility) Problem (P) is weakly infeasible if P = ∅ and for each ǫ > 0 there exists an X  0 such that

|Tr(Ai X) − bi| ≤ ǫ, ∀i.

Similarly, problem (D) is called weakly infeasible if D = ∅ and for every ǫ > 0 there exist y ∈ IRm and S  0 such that

‖∑_{i=1}^m yi Ai + S − C‖ ≤ ǫ.

Example 2.1 An example of weak infeasibility is given if (D) is defined by:

find sup y1 subject to y1 [ 1 0 ; 0 0 ] + S = [ 0 1 ; 1 0 ], S  0,

where we can construct an ‘ǫ-infeasible solution’ by setting

S = [ 1/ǫ 1 ; 1 ǫ ], y1 = −1/ǫ.  ✷

It has been shown [11] that an infeasible SDP problem is either weakly infeasible or strongly infeasible.
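The ǫ-infeasible solution of Example 2.1 can be checked numerically; the sketch below verifies that S is positive semidefinite while the constraint residual equals ǫ (the tolerances are illustrative):

```python
import numpy as np

def residual(eps):
    """Constraint residual ||y1*A1 + S - C|| for the eps-infeasible point of Example 2.1."""
    A1 = np.array([[1.0, 0.0], [0.0, 0.0]])
    C = np.array([[0.0, 1.0], [1.0, 0.0]])
    y1 = -1.0 / eps
    S = np.array([[1.0 / eps, 1.0], [1.0, eps]])
    # S is PSD: trace > 0 and det = (1/eps)*eps - 1 = 0 (up to roundoff)
    assert np.min(np.linalg.eigvalsh(S)) >= -1e-8
    return np.linalg.norm(y1 * A1 + S - C)

for eps in (1e-1, 1e-3, 1e-6):
    print(residual(eps))   # the residual shrinks with eps but never reaches 0
```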

2.4 Complementarity

The optimality conditions for (P) and (D) are

Tr(Ai X) = bi, i = 1, . . . , m, X  0
∑_{i=1}^m yi Ai + S = C, S  0                  (2)
XS = 0.

Feasible solutions X and S satisfying the last equality constraint are called complementary. Since X and S are symmetric positive semi–definite matrices the complementarity of X and S (XS = 0) is equivalent to Tr(XS) = 0. Complementary feasible solutions are therefore optimal with zero duality gap.

It will be convenient to introduce subspaces B, N and T of IRn as follows: B is the subspace generated by all columns occurring in primal optimal solutions X, N the subspace generated by all columns occurring in dual optimal solutions S, and T the orthocomplement of the subspace B + N. For any primal-dual optimal pair (X, S) we have XS = 0, implying that the column spaces of X and S are orthogonal. Consequently, the subspaces B and N are orthogonal as well. Thus the subspaces B, N and T partition IRn into three mutually orthogonal subspaces.

The range (or column) space of any primal (dual) feasible X (S) is denoted as R(X) (R(S)). If X is primal optimal and R(X) = B then we call X a maximal complementary primal solution and, similarly, if S is dual optimal and R(S) = N then S is called a maximal complementary dual solution. If X and S are both maximal complementary then we call (X, S) a maximal complementary optimal pair; if moreover T = 0 we call the pair (X, S) strictly complementary.

In the next sections it will become clear that maximal complementary solutions exist.² Before proceeding we introduce some more notation. Since X  0 and S  0 and X and S commute (XS = SX) we can decompose X and S according to

X = QΛQT, S = QΣQT    (3)

where Q is orthogonal and the diagonal matrices Λ and Σ have the (nonnegative) eigenvalues of X and S on their respective diagonals. Obviously XS = 0 if and only if ΛΣ = 0, and R(X) = R(QΛ), R(S) = R(QΣ).
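The decomposition (3) and the equivalence between XS = 0 and Tr(XS) = 0 can be illustrated with a synthetic complementary pair; the basis Q and the eigenvalue patterns below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a complementary PSD pair sharing an eigenbasis Q, as in (3):
# the eigenvalue supports are disjoint, so Lambda @ Sigma = 0.
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
Lam = np.diag([2.0, 1.0, 0.0, 0.0])   # eigenvalues of X
Sig = np.diag([0.0, 0.0, 3.0, 0.5])   # eigenvalues of S
X = Q @ Lam @ Q.T
S = Q @ Sig @ Q.T

print(np.isclose(np.trace(X @ S), 0.0))   # True: zero 'duality gap' Tr(XS)
print(np.allclose(X @ S, 0.0))            # True: XS = 0, not merely Tr(XS) = 0
print(np.allclose(X @ S, S @ X))          # True: X and S commute
```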

3 Definition and features of the central path

In this section we assume that (P) and (D) are strictly feasible. The analysis of this section will then apply to the embedding problem presented in the next section, as the embedding problem will be self-dual and strictly feasible. If the optimality conditions (2) for (P) and (D) are relaxed to

Tr(Ai X) = bi, i = 1, . . . , m, X  0
∑_{i=1}^m yi Ai + S = C, S  0
XS = µI

with µ > 0, then this system has a unique solution [8], denoted by X(µ), S(µ), y(µ). This solution can be seen as the parametric representation of a smooth curve (the central path) in terms of the parameter µ. In this section we will show that the central path has accumulation points in the optimal set and that these accumulation points are maximally complementary. Then we will prove (under the simplifying assumption that T = {0}) that as µ → 0 the central path converges to a maximally complementary solution pair. The limit is the so-called analytic center of the optimal set that will be defined later on.

In what follows we consider a fixed sequence {µt} → 0 with µt > 0, t = 1, 2, . . ., and prove that there exists a subsequence of {X(µt), S(µt)} which converges to a maximally complementary solution. The existence of limit points of the sequence is an easy consequence of the following lemma.

Lemma 3.1 Given µ̄ > 0, the set {(X(µ), S(µ)) : 0 < µ ≤ µ̄} is bounded.

Proof: Let (X^0, S^0) be any strictly feasible primal-dual solution, and (X(µ), S(µ)) a central solution corresponding to some µ > 0. By orthogonality, Lemma 2.1, one has

Tr((X(µ) − X^0)(S(µ) − S^0)) = 0.    (4)

The centrality conditions imply Tr(X(µ)S(µ)) = nµ, which simplifies (4) to

Tr(X(µ)S^0) + Tr(X^0 S(µ)) = nµ + Tr(X^0 S^0).    (5)

² Results pertaining to bounds on the rank of optimal solutions may be found in [14, 15], and on nondegeneracy and strict complementarity properties of optimal solutions in [2].

The terms on the left hand side of (5) are nonnegative by feasibility. One therefore has

Tr(X(µ)S^0) ≤ nµ + Tr(X^0 S^0),

which for a given µ̄ > 0 implies

Tr(X(µ)) ≤ (nµ̄ + Tr(X^0 S^0))/λmin(S^0), ∀ µ ≤ µ̄,

where λmin(S^0) denotes the smallest eigenvalue of S^0. Now using the fact that any positive semidefinite matrix X satisfies ‖X‖ ≤ Tr(X) for the Frobenius norm, one has

‖X(µ)‖ ≤ (nµ̄ + Tr(X^0 S^0))/λmin(S^0), ∀ µ ≤ µ̄.

A similar bound can be derived for ‖S(µ)‖. ✷
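The proof uses the elementary fact that ‖X‖ ≤ Tr(X) in the Frobenius norm for positive semidefinite X (since ‖X‖² = ∑λi² ≤ (∑λi)² when all λi ≥ 0). A quick illustrative check on random PSD matrices (the data is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(100):
    M = rng.standard_normal((5, 5))
    X = M @ M.T                       # random positive semidefinite matrix
    # ||X||_F = sqrt(sum lambda_i^2) <= sum lambda_i = Tr(X) for X PSD
    assert np.linalg.norm(X, 'fro') <= np.trace(X) + 1e-12
print("Frobenius-norm bound holds on 100 random PSD matrices")
```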

Now let

X(µt) := Q(µt)Λ(µt)Q(µt)^T,   S(µt) := Q(µt)Σ(µt)Q(µt)^T

denote the spectral (eigenvector-eigenvalue) decompositions of X(µt) and S(µt). Lemma 3.1 implies that the eigenvalues of X(µt) and S(µt) are bounded. The matrices Q(µt) are orthonormal for all t, and are therefore likewise restricted to a compact set. It follows that the sequence of triples (Q(µt), Λ(µt), Σ(µt)) has an accumulation point, (Q∗, Λ∗, Σ∗) say. Thus there exists a subsequence of {µt} (still denoted by {µt} for the sake of simplicity) such that

lim_{t→∞} Q(µt) = Q∗,   lim_{t→∞} Λ(µt) = Λ∗,   lim_{t→∞} Σ(µt) = Σ∗.

Note that Λ(µt)Σ(µt) = µt I. Thus, defining

X∗ := Q∗Λ∗Q∗^T = lim_{t→∞} X(µt),   S∗ := Q∗Σ∗Q∗^T = lim_{t→∞} S(µt),

we have Λ∗Σ∗ = 0 and the pair (X∗, S∗) is optimal.

Theorem 3.1 (Maximal complementarity) The pair (X∗, S∗) is a maximally complementary pair.

Proof: Let (X, S) be an arbitrary optimal pair. Applying the orthogonality property (Lemma 2.1) and Tr(XS) = 0, Tr(X(µt)S(µt)) = nµt we obtain

Tr(X(µt)S) + Tr(XS(µt)) = nµt.

Since X(µt)S(µt) = µt I, dividing both sides by µt we obtain that

Tr(S(µt)^{-1} S) + Tr(X X(µt)^{-1}) = n    (6)

for all t. This implies

Tr(X X(µt)^{-1}) ≤ n,   Tr(S(µt)^{-1} S) ≤ n,    (7)

since both terms in the left hand side of (6) are nonnegative. We derive from this that X∗ and S∗ are maximal complementary. Below we give the derivation for X∗; the derivation for S∗ is similar and is therefore omitted. Denoting the i-th column of the orthonormal (eigenvector) matrix Q(µt) as qi(µt) and the i-th diagonal element of the (eigenvalue) matrix Λ(µt) as λi(µt), we have

X(µt)^{-1} = Q(µt)Λ(µt)^{-1}Q(µt)^T = ∑_{i=1}^n (1/λi(µt)) qi(µt) qi(µt)^T.    (8)

Combining the first inequality in (7) and (8) yields

Tr(X X(µt)^{-1}) = ∑_{i=1}^n Tr((1/λi(µt)) X qi(µt) qi(µt)^T) = ∑_{i=1}^n (qi(µt)^T X qi(µt))/λi(µt) ≤ n.    (9)

The last inequality implies qi(µt)^T X qi(µt) ≤ nλi(µt), i = 1, 2, . . . , n. Letting t go to infinity we obtain qi∗^T X qi∗ ≤ nλi∗, i = 1, 2, . . . , n, where qi∗ denotes the i-th column of Q∗ and λi∗ the i-th diagonal element of Λ∗. Thus we have qi∗^T X qi∗ = 0 whenever λi∗ = 0. This implies

X qi∗ = 0 if λi∗ = 0,    (10)

since qi∗^T X qi∗ = ‖X^{1/2} qi∗‖², where X^{1/2} is the symmetric square root factor of X. In other words, the row space of X is orthogonal to each column qi∗ of Q∗ for which λi∗ = 0. Hence the row space of X is a subspace of the space generated by the columns qi∗ of Q∗ for which λi∗ > 0. The latter space is just R(Q∗Λ∗) and this space is equal to R(X∗). Since X is symmetric we conclude that R(X) ⊆ R(X∗). X being an arbitrary primal optimal solution, this implies that R(X∗) = B, and hence the proof is complete. ✷

Now define

B := {i : λi∗ > 0},
N := {i : σi∗ > 0},
T := {1, 2, . . . , n} \ (B ∪ N).

Then the sets B, N and T form a partition of the full index set {1, 2, . . . , n}. Let Q∗_J denote the submatrix of Q∗ consisting of the columns indexed by J ⊆ {1, 2, . . . , n}. Then it follows from Theorem 3.1 that any optimal pair (X, S) can be written as

X = Q∗_B UX Q∗_B^T, and S = Q∗_N US Q∗_N^T    (11)

for suitable matrices UX and US. In fact, since Q∗_B^T Q∗_B is equal to the identity matrix IB of size |B| and Q∗_N^T Q∗_N equals the identity matrix IN of size |N|, UX and US uniquely follow from

UX = Q∗_B^T X Q∗_B, and US = Q∗_N^T S Q∗_N.    (12)

Note that UX and US are symmetric. It can easily be understood that the matrices X and UX have the same spectrum, except that the multiplicity of zero in the spectrum of X will be larger than in the spectrum of UX. Note that U_{X∗} is just the minor of Λ∗ determined by the indices in B. Hence the eigenvalues of U_{X∗} are all positive, and therefore det(U_{X∗}) > 0.

Definition 3.1 (Analytic center) The analytic center of P∗ is the (unique) solution of the maximization problem

max_{X∈P∗} det(UX).

Similarly, the analytic center of D∗ is the (unique) solution of the maximization problem

max_{S∈D∗} det(US).

The unicity of the analytic centers follows easily. Note that the analytic center is necessarily a maximally complementary solution. We now prove the convergence of the central path to the analytic center of the optimal set under the assumption that a strictly complementary solution exists (i.e. T = ∅, or, equivalently, T = {0}). This result has been proved by Ye [25] for general self-scaled conic problems. It is nevertheless insightful to derive the proof for the semidefinite case, which is analogous to the proof in the LP case. The assumption of strict complementarity simplifies things, but is not necessary: Goldfarb and Scheinberg [8] have proved the result in the general case where no strictly complementary solution is available; they showed that any limit point of the central path satisfies the KKT conditions of the optimization problem which defines the analytic center.

Theorem 3.2 If T = {0} then X∗ is the analytic center of P∗ and S∗ is the analytic center of D∗.

Proof: Just as in the proof of Theorem 3.1, let (X, S) be an arbitrary optimal pair. We may rewrite (6) as

∑_{i=1}^n (qi(µt)^T X qi(µt))/λi(µt) + ∑_{i=1}^n (qi(µt)^T S qi(µt))/σi(µt) = n.

Since all terms in the above sums are nonnegative, this implies

∑_{i∈B} (qi(µt)^T X qi(µt))/λi(µt) + ∑_{i∈N} (qi(µt)^T S qi(µt))/σi(µt) ≤ n.

Letting t go to infinity we obtain

∑_{i∈B} (qi∗^T X qi∗)/λi∗ + ∑_{i∈N} (qi∗^T S qi∗)/σi∗ ≤ n.

This can be rewritten as

Tr(X Q∗_B U_{X∗}^{-1} Q∗_B^T) + Tr(S Q∗_N U_{S∗}^{-1} Q∗_N^T) ≤ n,

or

Tr(Q∗_B^T X Q∗_B U_{X∗}^{-1}) + Tr(Q∗_N^T S Q∗_N U_{S∗}^{-1}) ≤ n.

Using the definition of UX and US, this implies

Tr(UX U_{X∗}^{-1}) + Tr(US U_{S∗}^{-1}) ≤ n.

Since T = ∅, we have |B| + |N| = n. Recall that the matrix UX U_{X∗}^{-1} has size |B| × |B| and US U_{S∗}^{-1} has size |N| × |N|. Applying the arithmetic-geometric mean inequality to the eigenvalues of these matrices we get

det(UX U_{X∗}^{-1}) det(US U_{S∗}^{-1}) ≤ ((1/n)(Tr(UX U_{X∗}^{-1}) + Tr(US U_{S∗}^{-1})))^n ≤ 1,

which implies

det(UX) det(US) ≤ det(U_{X∗}) det(U_{S∗}).    (13)

Substituting S = S∗ in (13) gives det(UX) ≤ det(U_{X∗}) and by setting X = X∗ we obtain det(US) ≤ det(U_{S∗}). Thus we have shown that X∗ is the analytic center of P∗ and S∗ the analytic center of D∗. ✷
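The arithmetic-geometric mean step above reduces to the scalar fact that, for a matrix with nonnegative real eigenvalues, det(M)^{1/n} ≤ Tr(M)/n. A quick illustrative check on random PSD matrices (used here as a stand-in for matrices of the form UX U_{X∗}^{-1}):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
for _ in range(100):
    M = rng.standard_normal((n, n))
    P = M @ M.T + 1e-9 * np.eye(n)        # PSD; the tiny shift guards against roundoff
    geo = np.linalg.det(P) ** (1.0 / n)   # geometric mean of the eigenvalues
    ari = np.trace(P) / n                 # arithmetic mean of the eigenvalues
    assert geo <= ari + 1e-9
print("AM-GM inequality verified on 100 random PSD matrices")
```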


4 The embedding strategy

In what follows, we no longer make any assumptions about feasibility of (P) and (D). Consider the following homogeneous embedding of (P) and (D):

Tr(Ai X) − τ bi = 0, ∀i
−∑_{i=1}^m yi Ai + τ C − S = 0
bT y − Tr(CX) − ρ = 0                  (14)
y ∈ IRm, X  0, τ ≥ 0, S  0, ρ ≥ 0.

A feasible solution to this system with τ > 0 yields feasible solutions (1/τ)X and (1/τ)S to (P) and (D) respectively (by dividing the first two equations by τ). The last equation guarantees optimality by requiring a nonpositive duality gap. For this reason there is no interior solution to (14). The formulation (14) was first solved by Potra and Sheng [16] using an infeasible interior point method.

In this paper we consider the extended self-dual embedding [5], in order to have a strictly feasible, self–dual SDP problem with a known starting point on the central path. The advantage is that any feasible start path-following algorithm can be applied to such a problem. This is an important consideration in SDP, where many possible search directions and algorithms are available, with no clear method of choice at this time. The strictly feasible embedding is obtained by extending the constraint set (14) and adding extra variables to obtain:

min_{y,X,τ,θ,S,ρ,ν} θβ

subject to

Tr(Ai X) − τ bi + θb̄i = 0, ∀i
−∑_{i=1}^m yi Ai + τ C − θC̄ − S = 0
bT y − Tr(CX) + θα − ρ = 0                  (15)
−b̄T y + Tr(C̄X) − τ α − ν = −β
y ∈ IRm, X  0, τ ≥ 0, θ ≥ 0, S  0, ρ ≥ 0, ν ≥ 0,

where

b̄i := bi − Tr(Ai)
C̄ := C − I
α := 1 + Tr(C)
β := n + 2.

It is straightforward to verify that a feasible interior starting solution is given by y^0 = 0, X^0 = S^0 = I, and θ^0 = ρ^0 = τ^0 = ν^0 = 1. It is also easy to check that the embedding problem is self–dual via Lagrangian duality. This implies that the duality gap is equal to 2θβ, and therefore θ∗ = 0 at an optimal solution since the self–dual embedding problem satisfies the Slater condition. It is readily verified that

θβ = Tr(XS) + τ ρ + θν.    (16)
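The feasibility of the proposed starting point for (15) can be checked mechanically. The sketch below draws random symmetric data Ai, b, C (an illustrative assumption), builds b̄, C̄, α, β as defined above, and evaluates all four constraint blocks at y = 0, X = S = I, τ = θ = ρ = ν = 1:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 4, 2
A = [(lambda M: (M + M.T) / 2)(rng.standard_normal((n, n))) for _ in range(m)]
b = rng.standard_normal(m)
C0 = rng.standard_normal((n, n)); C = (C0 + C0.T) / 2

# Embedding data as defined in (15)
bbar = np.array([b[i] - np.trace(A[i]) for i in range(m)])
Cbar = C - np.eye(n)
alpha = 1.0 + np.trace(C)
beta = n + 2

# Proposed interior starting point
y = np.zeros(m); X = np.eye(n); S = np.eye(n)
tau = theta = rho = nu = 1.0

# Residuals of the four constraint blocks of (15) (last one includes +beta)
r1 = [np.trace(A[i] @ X) - tau * b[i] + theta * bbar[i] for i in range(m)]
r2 = -sum(y[i] * A[i] for i in range(m)) + tau * C - theta * Cbar - S
r3 = b @ y - np.trace(C @ X) + theta * alpha - rho
r4 = -bbar @ y + np.trace(Cbar @ X) - tau * alpha - nu + beta

print(np.allclose(r1, 0), np.allclose(r2, 0),
      np.isclose(r3, 0), np.isclose(r4, 0))   # all True
```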

This shows that an optimal solution (where θβ = 0) satisfies the complementarity conditions:

XS = 0, ρτ = 0, θν = 0.

We can now use a maximally complementary solution of the embedding problem (15) to obtain information about the original problem pair (P) and (D). In particular, one can distinguish between the three possibilities as discussed in the Introduction, namely

(I) a primal–dual optimal pair (X∗, y∗) is obtained with zero duality gap Tr(CX∗) − bT y∗ = 0;

(II) a primal and/or dual ray is detected;

(III) a certificate is obtained that no optimal pair with zero duality gap exists, and that neither (P) nor (D) has a ray.

Given a maximally complementary solution of the embedding problem, these cases are distinguished as follows (for a proof, see [5]):

Theorem 4.1 Let (y∗, X∗, τ∗, θ∗, S∗, ρ∗, ν∗) be a maximally complementary solution to the self–dual embedding problem. Then:

(i) if τ∗ > 0 then case (I) holds;
(ii) if τ∗ = 0 and ρ∗ > 0 then case (II) holds;
(iii) if τ∗ = ρ∗ = 0 then case (III) holds.

Three important questions now arise:

• How is the embedding problem actually solved?
• How does one decide if τ∗ > 0 and ρ∗ > 0 in a maximally complementary solution, if only an ǫ-optimal solution of the embedding problem is available?
• What additional information can be obtained if case (III) holds?

These three questions will be addressed in turn in the following three sections.

5 Solving the embedding problem

The embedding problem can be solved by any path following primal-dual method. To this end, one can relax the complementarity optimality conditions of the embedding problem to

XS = µI, τ ρ = µ, νθ = µ.

If one defines new ‘primal and dual variables’ X̃, S̃ as follows:

X̃ = diag(X, τ, ν),   S̃ = diag(S, ρ, θ),

then the centrality condition becomes the usual X̃S̃ = µI. It follows from (16) that θβ = (n + 2)µ along the central path. This observation will be important in Section 7.

Furthermore, it is straightforward to verify that Tr(∆X̃∆S̃) = 0, i.e. the orthogonality principle holds for the new variables. These two observations make the application of primal-dual path following methods straightforward: the search direction at a given point (X̃, S̃) can be computed from



Tr(Ai ∆X) − ∆τ bi + ∆θb̄i = 0, ∀i
−∑_{i=1}^m ∆yi Ai + ∆τ C − ∆θC̄ − ∆S = 0
bT ∆y − Tr(C∆X) + ∆θα − ∆ρ = 0
−b̄T ∆y + Tr(C̄∆X) − ∆τ α − ∆ν = 0

and

L(∆XS + X∆S)L^{-1} + (L(∆XS + X∆S)L^{-1})^T = 2µI − (LXSL^{-1} + (LXSL^{-1})^T)
ρ∆τ + τ∆ρ = µ − τρ
ν∆θ + θ∆ν = µ − θν,

where the matrix L determines which linearization of the centrality condition is used (see e.g. [27] and [22]). In this way the embedding problem can be solved to ǫ-optimality in O(√(n+2) log(1/ǫ)) iterations.

Note that ρ and τ can be viewed as eigenvalues of X̃ and S̃ respectively, corresponding to a fixed, shared eigenvector. This interpretation will be important in the next section.
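The block structure makes identity (16) transparent: Tr(X̃S̃) = Tr(XS) + τρ + θν holds by construction, and for feasible points this common value equals θβ. A small sketch with arbitrary (not necessarily feasible) illustrative values:

```python
import numpy as np

def block_diag(*blocks):
    """Minimal block-diagonal helper (avoids a scipy dependency)."""
    blocks = [np.atleast_2d(b) for b in blocks]
    n = sum(b.shape[0] for b in blocks)
    out, k = np.zeros((n, n)), 0
    for b in blocks:
        out[k:k + b.shape[0], k:k + b.shape[0]] = b
        k += b.shape[0]
    return out

rng = np.random.default_rng(4)
M = rng.standard_normal((3, 3)); X = M @ M.T
N = rng.standard_normal((3, 3)); S = N @ N.T
tau, rho, theta, nu = 2.0, 0.5, 0.25, 4.0

Xt = block_diag(X, tau, nu)     # the 'primal' variables of the embedding
St = block_diag(S, rho, theta)  # the 'dual' variables of the embedding

lhs = np.trace(Xt @ St)
rhs = np.trace(X @ S) + tau * rho + theta * nu
print(np.isclose(lhs, rhs))    # True: the trace splits over the blocks as in (16)
```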

6 Separating small and large variables

A path following interior point method only yields an ǫ-optimal solution to the embedding problem. This solution may yield small values of ρ and τ, and to distinguish between cases (I) to (III) it is necessary to know if these values are zero in a maximally complementary solution. This is the most problematic aspect of the analysis at this time, and only partial solutions are given here. Two open problems are stated which would help resolve the current difficulties.

In what follows the set of feasible X̃ for the embedding problem is denoted by P̃ and the optimal set by P̃∗. The sets D̃ and D̃∗ are defined similarly. Finally, the dimension of the embedding problem is ñ := n + 2. To separate ‘small’ and ‘large’ variables we need the following definition:

Definition 6.1 The primal and dual condition numbers of the embedding are defined as

σP := max_{X̃∈P̃∗} min_{i: λi(X̃)>0} λi(X̃),    σD := max_{S̃∈D̃∗} min_{i: λi(S̃)>0} λi(S̃).

The condition number σ of the embedding is defined as the minimum of these numbers: σ := min{σP, σD}.

Note that σ is well defined and positive because the solution set of the strictly feasible, self-dual embedding problem is compact (see e.g. [8]). In linear programming a positive lower bound for σ can be given in terms of the problem data [20]. It is an open problem to give a similar bound in the semidefinite case:


Open problem 6.1 Given strictly feasible SDP problems (P) and (D) one can define σ similarly to Definition 6.1. Using the notation of Definition 3.1, one can alternatively write

σ = max_{UX, US, y, t} t
subject to Tr(Ai QP UX QP^T) = bi, i = 1, . . . , m
∑_{i=1}^m yi Ai + QD US QD^T = C
UX  tI, US  tI.

Derive a lower bound for σ in terms of the problem data.

Interesting pointers to this problem can be found in the paper by Freund [6]. If we have a centered solution to the embedding problem with centering parameter µ then we can use any knowledge of σ to decide the following:

Lemma 6.1 For any positive µ one has:

τ(µ) ≥ σ/ñ and ρ(µ) ≤ ñµ/σ    if τ∗ > 0 and ρ∗ = 0,
τ(µ) ≤ ñµ/σ and ρ(µ) ≥ σ/ñ    if τ∗ = 0 and ρ∗ > 0,

where the superscript ∗ indicates a maximally complementary solution.

Proof: Assume that ρ∗ is positive in a maximally complementary solution. Let S̃∗ ∈ D̃∗ be such that ρ∗ is as large as possible. By definition one therefore has ρ∗ ≥ σ. Recall that

Tr(X̃(µ)S̃∗) ≤ ñµ,

which implies that the eigenvalues of X̃(µ)S̃∗ satisfy

λi(X̃(µ)S̃∗) ≤ ñµ, ∀i.

In particular,

τ(µ)ρ∗ ≤ ñµ.

This shows that

τ(µ) ≤ ñµ/ρ∗ ≤ ñµ/σ.

Since τ(µ)ρ(µ) = µ one also has

ρ(µ) ≥ σ/ñ.

The case where τ∗ > 0 and ρ∗ = 0 is proved in the same way. ✷

The lemma shows that once the barrier parameter µ has been reduced to the point where µ ≤ (σ/ñ)², then it is known which of τ or ρ is positive in a maximally complementary solution, provided that one is indeed positive. The case ρ∗ = τ∗ = 0 cannot be detected using Lemma 6.1. It is an open problem to establish the convergence rate of τ and ρ in this case.
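The separation rule implied by Lemma 6.1 can be sketched as a small decision procedure; the numerical values in the usage line are illustrative assumptions (and knowing σ is, of course, exactly the difficulty raised in Open problem 6.1):

```python
def classify(tau_mu, rho_mu, sigma, n_tilde, mu):
    """Decide which of tau*, rho* is positive from a mu-centered solution.

    Valid once mu <= (sigma / n_tilde)**2 (Lemma 6.1), and only conclusive
    when at least one of tau*, rho* is positive."""
    if mu > (sigma / n_tilde) ** 2:
        return "mu not small enough to separate"
    if tau_mu >= sigma / n_tilde and rho_mu <= n_tilde * mu / sigma:
        return "tau* > 0: case (I), optimal pair recovered from X/tau"
    if rho_mu >= sigma / n_tilde and tau_mu <= n_tilde * mu / sigma:
        return "rho* > 0: case (II), a ray exists"
    return "tau* = rho* = 0 possible: case (III), no certificate"

# Illustrative values: sigma = 0.3, n_tilde = 6, mu = 1e-4,
# so the thresholds are sigma/n_tilde = 0.05 and n_tilde*mu/sigma = 0.002.
print(classify(0.9, 1.2e-4, 0.3, 6, 1e-4))
```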


Open problem 6.2 Consider the central path (X(µ), S(µ)) for strictly feasible problems (P) and (D), where λi(X(µ))λi(S(µ)) = µ, i = 1, . . . , n. Let T ⊂ {1, . . . , n} be the index set where λi(X(µ)) → 0 and λi(S(µ)) → 0 as µ → 0, ∀i ∈ T. Establish an upper bound for λi(X(µ)) and λi(S(µ)) for i ∈ T in terms of µ.

In a recent paper, Stoer and Wechs [21] consider the analogous problem in the case of horizontal sufficient linear complementarity problems, and prove a bound of O(√µ).

The proof of Lemma 6.1 can easily be extended to the case where the ǫ-optimal solution is only approximately centered, where approximate centrality is defined by

δ(X̃, S̃) := λmax(X̃S̃)/λmin(X̃S̃) ≤ κ,

for some parameter κ > 1. Formally one has the result:

Lemma 6.2 Let (X̃, S̃) be a feasible solution of the embedding problem such that δ(X̃, S̃) ≤ κ for some κ > 1. One has the relations:

τ ≥ σ/(κñ) and ρ ≤ Tr(X̃S̃)/σ    if τ∗ > 0 and ρ∗ = 0,
τ ≤ Tr(X̃S̃)/σ and ρ ≥ σ/(κñ)    if τ∗ = 0 and ρ∗ > 0,

where the superscript ∗ indicates a maximally complementary solution.

7 Remaining duality and feasibility issues

If ρ∗ = τ∗ = 0 in a maximally complementary solution of the embedding problem (i.e. case (III) holds), then one of the following situations has occurred:

1) the problems (P) and (D) are solvable but have a positive duality gap;

2) either (P) or (D) (or both) are weakly infeasible;

3) both (P) and (D) are feasible, but one or both are unsolvable, e.g. inf_{X∈P∗} Tr(CX) is finite but is not attained.

Case 2) was illustrated in Example 2.1. The remaining two cases occur in the following examples.

Example 7.1 The following problem (adapted from [23]) is in the form (D): find sup y2 subject to

       [ 0 0 0 ]      [ 0 1 0 ]     [ 0 0 0 ]
    y1 [ 0 1 0 ] + y2 [ 1 0 0 ]  ⪯  [ 0 0 0 ],
       [ 0 0 0 ]      [ 0 0 1 ]     [ 0 0 1 ]

which is solvable with optimal value y2∗ = 0, but the corresponding primal problem has optimal value 1.
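The gap in Example 7.1 can be spot-checked numerically. The sketch below assumes the data A1 = E22, A2 = E12 + E21 + E33 and C = E33 in the form sup y2 subject to y1A1 + y2A2 ⪯ C (this particular choice of matrices is an assumption of the sketch, following the classic gap instance from [23]): any y2 > 0 violates a 2×2 principal minor of the slack matrix, so the dual optimum is 0, while primal feasibility forces Tr(CX) = 1.

```python
from itertools import combinations

def det(M):
    # Laplace expansion along the first row (fine for these 3x3 checks).
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def is_psd(M, tol=1e-12):
    # A symmetric matrix is PSD iff ALL its principal minors are nonnegative.
    n = len(M)
    return all(det([[M[i][j] for j in idx] for i in idx]) >= -tol
               for k in range(1, n + 1)
               for idx in combinations(range(n), k))

def dual_slack(y1, y2):
    # S = C - y1*A1 - y2*A2 for the assumed data A1 = E22,
    # A2 = E12 + E21 + E33, C = E33 (see the lead-in).
    return [[0.0, -y2, 0.0],
            [-y2, -y1, 0.0],
            [0.0, 0.0, 1.0 - y2]]

# y2 = 0 is dual feasible (take any y1 <= 0), so y2* = 0 ...
assert is_psd(dual_slack(-1.0, 0.0))
# ... while any y2 > 0 makes the principal minor [[0, -y2], [-y2, -y1]]
# have determinant -y2**2 < 0, whatever y1 is:
assert not any(is_psd(dual_slack(y1, 0.1)) for y1 in (-100.0, -1.0, 0.0))
# Primal side: Tr(A1 X) = X_22 = 0 forces X_12 = 0 (by PSD-ness), so
# Tr(A2 X) = 2 X_12 + X_33 = 1 gives X_33 = 1 and Tr(C X) = X_33 = 1:
# a duality gap of 1, as stated in Example 7.1.
print("dual optimum 0, primal optimum 1")
```

The principal-minor test is used instead of an eigenvalue routine only to keep the check dependency-free; it is exponential in the matrix order and suited to toy instances only.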



Example 7.2 Another difficulty is illustrated by the following problem (adapted from [23]): find sup y2 subject to

       [ 1 0 ]      [ 0 0 ]     [ 0 1 ]
    y1 [ 0 0 ] + y2 [ 0 1 ]  ⪯  [ 1 1 ],

which is not solvable, although sup_{y∈D} y2 = 1. The corresponding primal problem is solvable with optimal value 1. ✷

The aim is therefore to see what further information can be obtained in the case τ∗ = ρ∗ = 0. To this end, recall that along the central path of the embedding problem one has

    ρ(µt)τ(µt) = µt and θ(µt)β = ñµt,    (17)

which shows that ρ(µt) → ρ∗ = 0 implies

    θ(µt)/τ(µt) → 0 as t → ∞.    (18)

This shows (by (15)) that

    Tr(Ai X(µt)/τ(µt)) → bi, for all i,    (19)

and

    Σ_{i=1}^m (yi(µt)/τ(µt)) Ai + S(µt)/τ(µt) → C.    (20)

In other words, if either or both of the sequences

    { X(µt)/τ(µt) } and { S(µt)/τ(µt) }    (21)

converge, the limit is feasible for (P) or (D) respectively. On the other hand, if (19) (resp. (20)) holds but (P) (resp. (D)) is infeasible, then (P) (resp. (D)) is weakly infeasible. If one also has

    ρ(µt)/τ(µt) → 0 as t → ∞,    (22)

then it also follows from (15) that

    b^T y(µt)/τ(µt) − Tr(CX(µt))/τ(µt) → 0.

If this happens, at least one of the sequences in (21) diverges (or else an optimal pair with zero duality gap exists). On the other hand, one always has θ(µt)/ρ(µt) → 0 if τ(µt) → 0, from (17). If it also holds that

    τ(µt)/ρ(µt) → 0 as t → ∞,

then

    b^T y(µt)/ρ(µt) − Tr(CX(µt))/ρ(µt) → 1,    (23)

    Tr(Ai X(µt)/ρ(µt)) → 0, for all i,    (24)

and

    Σ_{i=1}^m (yi(µt)/ρ(µt)) Ai + S(µt)/ρ(µt) → 0.    (25)

An asymptotic ray (or weak ray) is thus detected for (P) and/or (D). It is shown in [11] that an asymptotic ray in (P) (resp. (D)) implies weak infeasibility of (D) (resp. (P)). The problem is that none of these indicators gives a certificate of the status of a given problem. For example, there is no guarantee that (23) will hold if one (or both) of (P) and (D) have weak rays. Luo et al. [11] derive similar detectors and show that these detectors yield no information in some cases. We therefore need to go a step further, by replacing the embedding of (P) and (D) with a different embedding problem in which 'stronger' duals are embedded. This is the subject of the next section.
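In practice, the detectors above amount to monitoring a few scalar ratios along the iterates. The sketch below is purely illustrative: the iterate triples (τ, ρ, θ) and the tolerance are hypothetical, and inferring a limit from finitely many iterates is of course heuristic, which is exactly the certificate problem just described.

```python
def classify_trajectory(iterates, tol=1e-6):
    """Inspect (tau, rho, theta) triples from successive embedding
    iterates (hypothetical interface) and report which of the ratios
    discussed above appears to vanish as mu -> 0."""
    tau, rho, theta = iterates[-1]      # most central iterate seen so far
    flags = []
    if theta / tau < tol:
        flags.append("theta/tau -> 0")  # normalized iterates near feasible, cf. (18)
    if rho / tau < tol:
        flags.append("rho/tau -> 0")    # normalized duality gap vanishing, cf. (22)
    if tau / rho < tol:
        flags.append("tau/rho -> 0")    # asymptotic-ray pattern, cf. (24)-(25)
    return flags

# A made-up trajectory where tau stays bounded away from 0 while rho and
# theta tend to 0 (the 'solvable with zero gap' pattern):
trajectory = [(0.5, 10.0 ** (-k), 10.0 ** (-k)) for k in range(1, 9)]
print(classify_trajectory(trajectory))  # -> ['theta/tau -> 0', 'rho/tau -> 0']
```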

8 Embedding extended Lagrange-Slater duals

Assume now that the aim is to solve a given problem (D) in the standard dual form, like the problems in the examples. In other words, we wish to find the value

    d∗ = sup { b^T y : (y, S) ∈ D }

if it is finite, or to obtain a certificate that (D) is infeasible, or alternatively a certificate of unboundedness. For the example problems the embedding of (D) and its Lagrangian dual (P) will be insufficient for this purpose. The solution proposed here is to solve a second embedding problem, using so-called extended Lagrange-Slater duals.

To this end, the so-called gap-free primal problem (Pgf) of (D) may be formulated instead of the standard primal problem (P). The gap-free primal was first formulated by Ramana [18], and takes the form:

    min Tr(C(U0 + Wm))

subject to

    Tr(Ai(U0 + Wm)) = bi,    i = 1, . . . , m,
    Tr(C(Ui + Wi−1)) = 0,    i = 1, . . . , m,
    Tr(Ai(Ui + Wi−1)) = 0,   i = 1, . . . , m,

    W0 = 0,  U0 ⪰ 0,

    [ I     Wi ]
    [ Wi^T  Ui ]  ⪰  0,    i = 1, . . . , m,

where the variables are Ui ⪰ 0 and Wi ∈ IR^{n×n}, i = 0, . . . , m.
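The claim that (Pgf) is polynomially sized is easy to make concrete. The helper below is illustrative bookkeeping only (not from the paper): it counts the semidefinite blocks, scalar equality constraints and scalar unknowns of (Pgf) for dual data A1, . . . , Am, C of order n, under the index ranges as written above.

```python
def gapfree_primal_size(n, m):
    """Count the blocks, equality constraints and scalar variables of the
    gap-free primal (P_gf) built from dual data A_1..A_m, C of order n.
    Illustrative bookkeeping only."""
    # One PSD block for U0, plus one 2n x 2n 'arrow' block [[I, Wi], [Wi^T, Ui]]
    # per i = 1..m.
    psd_blocks = [("U0", n)] + [("arrow_%d" % i, 2 * n) for i in range(1, m + 1)]
    # W0 = 0 fixes n*n scalars; each of the three trace families gives m equalities.
    equalities = n * n + 3 * m
    # All U_i and W_i, i = 0..m, are n x n matrices of scalar unknowns.
    scalar_vars = 2 * (m + 1) * n * n
    return psd_blocks, equalities, scalar_vars

blocks, eqs, nvars = gapfree_primal_size(n=3, m=2)
print(len(blocks), eqs, nvars)  # -> 3 15 54  (polynomial in n and m)
```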

Note that the gap-free primal problem is easily cast in the standard primal form. Moreover, its size is polynomial in the size of (D). Unlike the standard primal (P), (Pgf) has the following desirable features:

• (Weak duality) If (y, S) ∈ D and (Ui, Wi) (i = 0, . . . , m) is feasible for (Pgf), then b^T y ≤ Tr(C(U0 + Wm)).

• (Dual boundedness) If (D) is feasible, its optimal value is finite if and only if (Pgf) is feasible.

• (Zero duality gap) The optimal value of (Pgf) equals the supremal value of (D) if and only if both (Pgf) and (D) are feasible.

• (Attainment) If the supremal value of (D) is finite, then it is attained by (Pgf).

The standard (Lagrangian) dual problem associated with (Pgf) is called the corrected dual (Dcor). The surprising result is that the pair (Pgf) and (Dcor) is now 'gap-free' [19], i.e. (1) is satisfied. Moreover, a feasible solution to (D) can be extracted from a feasible solution to (Dcor). The only problem is that (Dcor) does not necessarily attain its supremum, even if (D) does.

A natural question is whether (Dcor) is strongly infeasible if (D) is only weakly infeasible. This would simplify matters greatly, as strong infeasibility can be detected more easily. Unfortunately this is not the case: it is readily verified that the weakly infeasible problem (D) in Example 2.1 has a weakly infeasible corrected problem (Dcor).

In what follows we solve the embedding problem using (Pgf) and (Dcor) for our problem (D). We assume therefore that the solution of the embedding of (D) and its Lagrangian dual (P) has yielded τ∗ = ρ∗ = 0, so we already know that (D) is not strongly infeasible. Three possibilities remain:

(i) the problem (D) is feasible and has a finite supremal value;

(ii) the problem (D) is feasible and unbounded but does not have a ray;

(iii) the problem (D) is weakly infeasible.

If (and only if) case (i) holds, then (Pgf) and (Dcor) will have the same (finite) optimal value (zero duality gap). Problem (Pgf) will certainly attain this common optimal value, but (Dcor) may not. The possible duality relations are listed in Table 1.

    Status of (D)    Status of (Pgf)    Status of (Dcor)
    d∗ < ∞           p∗gf = d∗          d∗cor = d∗
    unbounded        infeasible         unbounded
    infeasible       unbounded          infeasible

Table 1: Duality relations for a given problem (D), its gap-free primal (Pgf) and its corrected problem (Dcor).

In what follows the variables (y, X, τ, θ, S, ρ, ν) refer to the embedding of (Pgf) and (Dcor). The feasible sets of (Pgf) and (Dcor) are denoted by Pgf and Dcor respectively. We will use the subscripts 'gf' and 'cor' for the variables of (Pgf) and (Dcor) respectively, but the problem data for (Pgf) and (Dcor) will be denoted by C, b, Ai for simplicity. We aim to identify or exclude the general situation where (Pgf) and (Dcor) are such that

    sup_{(ycor,Scor)∈Dcor} b^T ycor = min_{Xgf∈Pgf} Tr(CXgf),    (26)

and the optimal value sup_{(ycor,Scor)∈Dcor} b^T ycor may or may not be attained. If the optimal value is attained, the embedding yields a solution with τ∗ > 0 and we are done. Similarly, if τ∗ = 0 and ρ∗ > 0, a ray is detected and the status of (D) follows from Table 1. We therefore need only consider the case where the embedding of (Pgf) and (Dcor) has τ∗ = ρ∗ = 0 in a maximally complementary solution. To proceed, we first show that (22) must hold if d∗ is finite.

Lemma 8.1 Assume that a given problem (D) has finite optimal value d∗. Then (22) holds for the embedding of (Pgf) and (Dcor).

Proof: Let ǫt := θ(µt)/τ(µt) and (Xgf, ycor, Scor) ∈ Pgf × Dcor. Note that ǫt → 0 as t → ∞ by (18). For ease of notation we further define

    Xt := X(µt)/τ(µt),  St := S(µt)/τ(µt).

In terms of this notation one has from (15):

    Tr(Ai Xt) + ǫt b̄i = bi,    i = 1, . . . , m,
    Σ_{i=1}^m (yt)i Ai + St + ǫt C̄ = C.

Using the feasibility of Xgf and Scor it is easy to show that

    Tr(Xt Scor + St Xgf) = Tr(Xgf Scor) − ǫt Tr(C̄Xgf) + ǫt b̄^T ycor + Tr(CXt) − b^T yt.

Substitution of b̄i = bi − Tr(Ai) and C̄ = C − I, and using Tr(Scor) = Tr(C − Σ_{i=1}^m (ycor)i Ai), yields

    Tr(Xt Scor + St Xgf) = (1 + ǫt)Tr(Xgf Scor) − ǫt Tr(Xgf + Scor) − ǫt Tr(C) + Tr(CXt) − b^T yt.    (27)

If (22) does not hold, then there exists an ǭ > 0 such that

    Tr(CXt̄) − b^T yt̄ < −ǭ    (28)

for some t̄ which can be chosen arbitrarily large. Since Xgf and Scor were arbitrary, we may assume that Tr(Xgf Scor) < ǭ/2. Choose t̄ such that (28) holds and

    ǫt̄ Tr(Xgf Scor) − ǫt̄ Tr(Xgf + Scor) − ǫt̄ Tr(C) < ǭ/2.

The left-hand side of (27) is always nonnegative, while the right-hand side is negative for the above choice of t̄. This contradiction shows that if a pair (Xgf, Scor) exists with arbitrarily small duality gap, then (22) must hold. This completes the proof, since (Pgf) and (Dcor) are feasible with zero gap if and only if (D) is feasible with finite optimal value. ✷

The next question is how to obtain the value d∗ if it is finite. The following lemma shows that this value can be obtained as a limit along a sequence of centered iterates of the embedding.

Lemma 8.2 Assume the optimal value of (D) to be finite, i.e. d∗ < ∞, and let X∗gf be an optimal solution of (Pgf). One now has

    d∗ = Tr(CX∗gf) = lim_{t→∞} b^T y(µt)/τ(µt) = lim_{t→∞} Tr(CX(µt))/τ(µt).

Proof: Let X∗gf be any optimal solution of (Pgf). (Recall that (Pgf) is always solvable and its optimal value equals the optimal value of (D).) Using the 'subscript t' notation from the previous lemma, and the statement of the self-dual problem in (15), one can easily show that

    Tr(X∗gf St) = Tr(CX∗gf) − b^T yt − ǫt Tr(C̄X∗gf),

or

    Tr(CX∗gf) − b^T yt = Tr(X∗gf St) + ǫt Tr(C̄X∗gf) ≤ Tr(X∗gf St) + ǫt |Tr(C̄X∗gf)|.

The second right-hand side term converges to zero as ǫt → 0. The first right-hand side term can be made arbitrarily small, as can easily be seen from (27). This completes the proof. ✷

We now show how to detect infeasibility or unboundedness of (D). Recall that if

    lim_{t→∞} τ(µt)/ρ(µt) = 0,    (29)

then an asymptotic ray is detected for (Dcor) or (Pgf). This implies weak infeasibility of either (Dcor) or (Pgf), and thus the status of (D) is known from Table 1. The possible combinations are listed in Table 2. It is therefore only necessary to consider the cases where (29) does not hold; this is done in the following lemma.

    Status of (D)    limµ→0 Tr(CX(µ))/ρ(µ)    limµ→0 b^T y(µ)/ρ(µ)
    unbounded        [0, ∞)                    (−∞, 0)
    infeasible       (−∞, 0)                   [0, ∞)

Table 2: Indicators of the status of problem (D) via the embedding of (Dcor) and (Pgf), for the case where limµ→0 τ(µ)/ρ(µ) = 0. In this case d∗ cannot be finite.

Lemma 8.3 Assume that limt→∞ ρ(µt)/τ(µt) = k, where 0 ≤ k < ∞, in the embedding of (Pgf) and (Dcor). The status of (D) is now decided as follows:

    lim_{µ→0} b^T y(µ)/τ(µ) = lim_{µ→0} Tr(CX(µ))/τ(µ) = ∞    if (D) is unbounded;
    lim_{µ→0} b^T y(µ)/τ(µ) = lim_{µ→0} Tr(CX(µ))/τ(µ) = −∞   if (D) is infeasible.

Proof: Recall from Table 1 that (D) is infeasible if and only if (Pgf) is unbounded. Let us assume that (Pgf) is unbounded, and let K > 0 be given. By the assumption, there exists an Xgf ∈ Pgf such that Tr(CXgf) ≤ −K. It is straightforward to derive the following relation from the statement of the self-dual problem (15):

    Tr(CX(µt))/τ(µt) = Tr(CXgf) + (θ(µt)/τ(µt))α − ρ(µt)/τ(µt) − (θ(µt)/τ(µt))Tr(C̄Xgf) − Tr(S(µt)Xgf)/τ(µt)
                     ≤ −K + (θ(µt)/τ(µt))α − ρ(µt)/τ(µt) − (θ(µt)/τ(µt))Tr(C̄Xgf),

where we have discarded the last term (which is nonpositive) to obtain the inequality. Taking the limit as t → ∞ yields

    lim_{t→∞} Tr(CX(µt))/τ(µt) ≤ −k − K.

Since K > 0 was arbitrary, the second result follows. The case where (D) is unbounded is proved in a similar way. ✷

In Table 3 the results of the lemma are summarized. The only question that cannot be answered by this analysis is whether or not (D) actually attains its optimal value, if it is finite. This question can be answered by solving a third embedding problem, where (D) and (Pgf) are combined as a single SDP

    Status of (D)    limµ→0 ρ(µ)/τ(µ)    limµ→0 Tr(CX(µ))/τ(µ)    limµ→0 b^T y(µ)/τ(µ)
    d∗ < ∞           0                    d∗                        d∗
    unbounded        [0, ∞)               ∞                         ∞
    infeasible       [0, ∞)               −∞                        −∞

Table 3: Indicators of the status of problem (D) via the embedding of (Dcor) and (Pgf), for the case where limµ→0 ρ(µ)/τ(µ) < ∞.
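The case analysis of Lemma 8.3 and Table 3 amounts to a simple decision rule once the limits are at hand. The sketch below encodes it; the interface is hypothetical, with math.inf standing in for the divergent limits.

```python
import math

def status_from_limits(rho_over_tau, obj_limit):
    """Decision rule of Table 3 (illustrative sketch): classify (D) from
    lim rho(mu)/tau(mu) and the common limit of b^T y(mu)/tau(mu) and
    Tr(C X(mu))/tau(mu), assuming rho(mu)/tau(mu) stays bounded."""
    if math.isfinite(obj_limit):
        # A finite limit occurs only together with rho/tau -> 0, and equals d*.
        assert rho_over_tau == 0
        return "optimal value d* = %s" % obj_limit
    if obj_limit == math.inf:
        return "(D) is unbounded"
    return "(D) is infeasible"  # obj_limit == -math.inf

print(status_from_limits(0, 3.5))          # -> optimal value d* = 3.5
print(status_from_limits(0.7, -math.inf))  # -> (D) is infeasible
```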

problem, with a zero objective function and the added constraint that the objective values of (D) and (Pgf) must be equal. The resulting SDP problem is feasible if and only if (D) attains its optimal value, and its infeasibility can be detected as described in this paper. The need for three embedding problems (in the worst case) is somewhat unfortunate, and unifying the approach so that a single embedding suffices remains a topic for future research.

9 Conclusion

The embedding strategy yields an ǫ-optimal solution of a given semidefinite program and its Lagrangian dual in O(√(n + 2) log(1/ǫ)) iterations, provided a complementary solution pair exists. If no complementary pair exists, strong infeasibility and unboundedness are detected instead, if either occurs. The underlying assumption is that enough information concerning a maximally complementary solution of the embedding problem can be obtained from an ǫ-optimal solution. This issue is not yet satisfactorily resolved.

If neither strong infeasibility nor a complementary solution pair is found, a second embedding problem can be solved, using extended Lagrange-Slater dual problems instead of standard (Lagrangian) duals. This embedding is used to generate sequences in terms of which weak infeasibility or a (finite) optimal value of a given problem can be characterized. In this way infeasibility and unboundedness can be detected, or the optimal value can be obtained. It is again assumed that some information concerning a maximally complementary solution of the second embedding problem can be obtained from an ǫ-optimal solution.

References

[1] F. Alizadeh. Combinatorial optimization with interior point methods and semi-definite matrices. PhD thesis, University of Minnesota, Minneapolis, USA, 1991.

[2] F. Alizadeh, J.-P.A. Haeberly, and M.L. Overton. Complementarity and nondegeneracy in semidefinite programming. Working Paper, RUTCOR, Rutgers University, New Brunswick, NJ, 1995.

[3] E.D. Andersen, J. Gondzio, C. Mészáros, and X. Xu. Implementation of interior-point methods for large scale linear programs. In T. Terlaky, editor, Interior point methods of mathematical programming, pages 189–252. Kluwer, Dordrecht, The Netherlands, 1996.

[4] E.D. Andersen and Y. Ye. On a homogeneous algorithm for the monotone complementarity problem. Technical Report, Department of Management Sciences, University of Iowa, Iowa City, USA, 1995.

[5] E. de Klerk, C. Roos, and T. Terlaky. Initialization in semidefinite programming via a self-dual, skew-symmetric embedding. Technical Report 96-10, Faculty of Technical Mathematics and Computer Science, Delft University of Technology, Delft, The Netherlands, 1996. (To appear in OR Letters.)

[6] R.M. Freund. Complexity of an algorithm for finding an approximate solution of a semidefinite program with no regularity assumption. Technical Report OR302-94, Operations Research Center, MIT, Boston, USA, 1994.

[7] R.M. Freund and S. Mizuno. Interior point methods: current status and future directions. OPTIMA, 51, 1996.

[8] D. Goldfarb and K. Scheinberg. Interior point trajectories in semidefinite programming. Working Paper, Dept. of IEOR, Columbia University, New York, NY, 1996.

[9] A.J. Goldman and A.W. Tucker. Theory of linear programming. In H.W. Kuhn and A.W. Tucker, editors, Linear inequalities and related systems, Annals of Mathematical Studies, No. 38, pages 53–97. Princeton University Press, Princeton, New Jersey, 1956.

[10] B. Jansen, C. Roos, and T. Terlaky. The theory of linear programming: skew symmetric self-dual problems and the central path. Optimization, 29:225–233, 1994.

[11] Z.-Q. Luo, J.F. Sturm, and S. Zhang. Duality and self-duality for conic convex programming. Technical Report 9620/A, Tinbergen Institute, Erasmus University Rotterdam, 1996.

[12] A. Marshall and I. Olkin. A convexity proof of Hadamard's inequality. American Mathematical Monthly, 89:687–688, 1982.

[13] Yu. Nesterov and A.S. Nemirovskii. Interior point polynomial algorithms in convex programming. SIAM Studies in Applied Mathematics, Vol. 13. SIAM, Philadelphia, USA, 1994.

[14] G. Pataki. On the rank of extreme matrices in semidefinite programming and the multiplicity of optimal eigenvalues. Technical Report MSRR-604, Carnegie Mellon University, Pittsburgh, USA, 1994. Revised Aug. 1995.

[15] G. Pataki. On the facial structure of cone-LPs and semi-definite programs. Technical Report MSRR-595, Carnegie Mellon University, Pittsburgh, USA, 1995.

[16] F.A. Potra and R. Sheng. A superlinearly convergent primal-dual infeasible-interior-point algorithm for semidefinite programming. Reports on Computational Mathematics 78, Dept. of Mathematics, The University of Iowa, Iowa City, USA, 1995.

[17] F.A. Potra and R. Sheng. Homogeneous interior-point algorithms for semidefinite programming. Reports on Computational Mathematics 82, Dept. of Mathematics, The University of Iowa, Iowa City, USA, 1995.

[18] M. Ramana. An exact duality theory for semidefinite programming and its complexity implications. Mathematical Programming, 77(2), 1997.

[19] M.V. Ramana and P.M. Pardalos. Semidefinite programming. In T. Terlaky, editor, Interior point methods of mathematical programming, pages 369–398. Kluwer, Dordrecht, The Netherlands, 1996.

[20] C. Roos, T. Terlaky, and J.-Ph. Vial. Theory and Algorithms for Linear Optimization: An Interior Point Approach. John Wiley & Sons, New York, 1997.

[21] J. Stoer and M. Wechs. Infeasible-interior-point paths for sufficient linear complementarity problems and their analyticity. Manuscript, Institut für Angewandte Mathematik und Statistik, Universität Würzburg, Würzburg, Germany, 1996.

[22] M.J. Todd, K.C. Toh, and R.H. Tütüncü. On the Nesterov-Todd direction in semidefinite programming. Working paper, School of OR and Industrial Engineering, Cornell University, Ithaca, New York 14853-3801, 1996.

[23] L. Vandenberghe and S. Boyd. Semidefinite programming. SIAM Review, 38:49–95, 1996.

[24] X. Xu, P.-F. Hung, and Y. Ye. A simplified homogeneous and self-dual linear programming algorithm and its implementation. Annals of OR, 62:151–171, 1996.

[25] Y. Ye. Convergence behavior of central paths for convex homogeneous self-dual cones. Technical note, Dept. of Management Science, University of Iowa, Iowa City, USA, 1996. Available at http://www.mcs.anl.gov/home/InteriorPoint/archive.html.

[26] Y. Ye, M.J. Todd, and S. Mizuno. An O(√nL)-iteration homogeneous and self-dual linear programming algorithm. Mathematics of Operations Research, 19:53–67, 1994.


[27] Y. Zhang. On extending primal-dual interior point algorithms from linear programming to semidefinite programming. Technical Report, Department of Mathematics and Statistics, University of Maryland at Baltimore County, Baltimore, USA, 1995.

