Scaling relationship between the copositive cone and Parrilo’s first level approximation

Peter J.C. Dickinson∗    Mirjam Dür†    Luuk Gijben‡    Roland Hildebrand§

February 29, 2012

∗ Johann Bernoulli Institute of Mathematics and Computer Science, University of Groningen, P.O. Box 407, 9700 AK Groningen, The Netherlands. Email: [email protected]
† Department of Mathematics, University of Trier, 54286 Trier, Germany. Email: [email protected]
‡ Johann Bernoulli Institute of Mathematics and Computer Science, University of Groningen, P.O. Box 407, 9700 AK Groningen, The Netherlands. Email: [email protected]
§ LJK, Université Grenoble 1 / CNRS, 51 rue des Mathématiques, BP53, 38041 Grenoble cedex, France. Email: [email protected]

Abstract

We investigate the relation between the cone C^n of n × n copositive matrices and the approximating cone K_n^1 introduced by Parrilo. While these cones are known to be equal for n ≤ 4, we show that for n ≥ 5 they are unequal. This result is based on the fact that K_n^1 is not invariant under diagonal scaling. We show in fact that for any copositive matrix which is not the sum of a nonnegative and a positive semidefinite matrix we can find a scaling which is not in K_n^1. For the 5 × 5 case, we show the more surprising result that we can scale any copositive matrix X into K_5^1, and in fact that any scaling D such that (DXD)_ii ∈ {0, 1} for all i yields DXD ∈ K_5^1. From this we are able to use the cone K_5^1 to check whether any matrix of order 5 is copositive. Another consequence is a complete characterisation of C^5 in terms of K_5^1. We end the paper by formulating several conjectures.

Key words: copositive cone, Parrilo’s approximations, sum-of-squares conditions, 5 × 5 copositive matrices

AMS Subject Classification: 15A48, 15A21, 90C22

1 Introduction

In recent years, many combinatorial and quadratic problems have been shown to possess a copositive formulation, i.e. a formulation as a conic linear program over the cone C^n of copositive matrices:

\[
C^n = \{ A \in S^n : x^T A x \ge 0 \text{ for all } x \ge 0 \},
\]

where S^n denotes the set of symmetric n × n matrices. While this is an elegant way of writing a combinatorial problem as a convex problem in matrix variables, it does not resolve the combinatorial nature of the problem, since the copositive cone is intractable [8]. However, the copositive formulation has opened new possibilities to approximate and relax the problem. One immediate option is to replace C^n with S_+^n (the semidefinite cone), yielding well known semidefinite relaxations. Another option is to impose sum-of-squares conditions. This approach was pioneered by Parrilo [9], who introduced a hierarchy of approximating cones

\[
K_n^0 \subseteq K_n^1 \subseteq K_n^2 \subseteq \dots \subseteq C^n \qquad \text{with} \qquad \operatorname{cl}\Big(\bigcup_{r \in \mathbb{N}} K_n^r\Big) = C^n,
\]




where the r-level cone K_n^r is defined as

\[
K_n^r = \bigg\{ A \in S^n : \Big(\sum_{i,j=1}^n A_{ij} x_i^2 x_j^2\Big)\Big(\sum_{i=1}^n x_i^2\Big)^{r} \text{ has a sum-of-squares decomposition} \bigg\}.
\]

These cones are of interest in optimisation since sum-of-squares conditions can be expressed through linear matrix inequalities and solved with interior point algorithms.

Parrilo showed in [9] that K_n^0 = S_+^n + N^n, where N^n is the cone of entrywise nonnegative matrices. It then follows from [7] that C^n = K_n^0 if and only if n ≤ 4. In the 5 × 5 case, a counterexample is given by the Horn matrix [4],

\[
H = \begin{pmatrix}
 1 & -1 &  1 &  1 & -1 \\
-1 &  1 & -1 &  1 &  1 \\
 1 & -1 &  1 & -1 &  1 \\
 1 &  1 & -1 &  1 & -1 \\
-1 &  1 &  1 & -1 &  1
\end{pmatrix} \in C^5 \setminus (S_+^5 + N^5).
\]

For K_n^1, the combined proofs of Parrilo [9] (sufficient) and Bomze and de Klerk [1] (necessary) show that X ∈ K_n^1 if and only if the following system of LMIs has a feasible solution M^1, ..., M^n:

\[
\begin{aligned}
X - M^i &\in S_+^n, && i = 1, \dots, n, && \text{(1a)}\\
(M^i)_{ii} &= 0, && i = 1, \dots, n, && \text{(1b)}\\
(M^i)_{jj} + 2 (M^j)_{ij} &= 0, && i, j = 1, \dots, n \text{ s.t. } i \ne j, && \text{(1c)}\\
(M^i)_{jk} + (M^j)_{ik} + (M^k)_{ij} &\ge 0, && i, j, k = 1, \dots, n \text{ s.t. } i < j < k. && \text{(1d)}
\end{aligned}
\]

Parrilo [9] showed that H ∈ K_5^1 and posed the question of determining the minimum n for which C^n ≠ K_n^1. In this paper, we answer this question and show that already C^5 ≠ K_5^r for all r ≥ 0.

A central ingredient of our proof is the observation that if we are given a diagonal matrix D with strictly positive diagonal, then for each of the cones C^n, S_+^n, N^n and S_+^n + N^n a matrix X belongs to the cone if and only if DXD does. However, this invariance does not hold for K_n^r with r ≥ 1. We will show in fact that for any matrix X ∈ C^n \ (S_+^n + N^n) and for any r ≥ 0, there exists a diagonal matrix D with strictly positive diagonal such that DXD ∉ K_n^r. We shall refer to such a transformation of a matrix as a ‘scaling’, as effectively we are scaling the underlying coordinate basis.

For the 5 × 5 case, we will also show the more surprising result that for any matrix X ∈ C^5, scaling it in such a way that (DXD)_ii ∈ {0, 1} for all i will yield DXD ∈ K_5^1. Our main result (Theorem 3.5) is a complete characterisation of C^5 in terms of K_5^1. We end the paper by formulating several conjectures and open problems.
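Conditions (1a)–(1d) form a finite system of linear matrix inequalities, so membership in K_n^1 can be checked numerically with any SDP solver. The following is a minimal sketch, assuming the Python packages numpy and cvxpy (with the SCS solver) are available; the verdict is only reliable up to solver tolerances. Applied to the Horn matrix it should report feasibility, in line with the fact that H ∈ K_5^1 noted above.

```python
import numpy as np
import cvxpy as cp

def in_K1(X):
    """Feasibility of the LMI system (1a)-(1d); True means X appears to lie in K_n^1."""
    n = X.shape[0]
    M = [cp.Variable((n, n), symmetric=True) for _ in range(n)]
    cons = []
    for i in range(n):
        cons.append(X - M[i] >> 0)                                      # (1a)
        cons.append(M[i][i, i] == 0)                                    # (1b)
        for j in range(n):
            if j != i:
                cons.append(M[i][j, j] + 2 * M[j][i, j] == 0)           # (1c)
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                cons.append(M[i][j, k] + M[j][i, k] + M[k][i, j] >= 0)  # (1d)
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve(solver=cp.SCS)
    return prob.status in (cp.OPTIMAL, cp.OPTIMAL_INACCURATE)

# The Horn matrix: copositive, not in S_+^5 + N^5, but in K_5^1.
H = np.array([[ 1, -1,  1,  1, -1],
              [-1,  1, -1,  1,  1],
              [ 1, -1,  1, -1,  1],
              [ 1,  1, -1,  1, -1],
              [-1,  1,  1, -1,  1]])
print(in_K1(H))   # expected: True
```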

Notation. The scalar product of two matrices A and B is defined as ⟨A, B⟩ = trace(B^T A). Given a vector d ∈ R^n, we denote by Diag(d) the diagonal matrix with d on its diagonal. Conversely, given a matrix A, we denote by diag(A) the vector of diagonal entries of A. We shall denote the set of scalings by 𝒟 := {Diag(d) | d ∈ R^n_{++}}, where R^n_{++} is the set of strictly positive vectors.


2 Scaling a matrix out of K_n^r

In this section we will show that for n ≥ 5 we have K_n^r ≠ C^n for all r ≥ 0. Instead of giving a specific example of a matrix in C^n but not in K_n^r, we will show that in fact any matrix X ∈ C^n \ (S_+^n + N^n) can be “scaled out” of K_n^r. As S_+^n + N^n ≠ C^n if and only if n ≥ 5, this then gives the required result. First we show an auxiliary result on the relationship between the cones K_n^r for r ≥ 1 and K_n^0.

Lemma 2.1. Let n ≥ 1, r ≥ 0 be integers. Then

\[
\{ X \in S^n \mid DXD \in K_n^r \ \forall D \in \mathcal{D} \} = K_n^0.
\]

Proof. Since the cone K_n^0 = S_+^n + N^n is invariant under arbitrary scalings, X ∈ K_n^0 implies DXD ∈ K_n^0 ⊆ K_n^r for all D ∈ 𝒟. This proves one inclusion, and the whole statement for r = 0. We shall now prove the other inclusion for r ≥ 1. Let X ∈ S^n be such that DXD ∈ K_n^r for all D ∈ 𝒟. Then

\[
\Big(\sum_{i,j=1}^n X_{ij} d_i d_j x_i^2 x_j^2\Big)\Big(\sum_{i=1}^n x_i^2\Big)^{r}
\]

is a sum of squares of polynomials in x_1, ..., x_n for all d_1, ..., d_n > 0. Equivalently,

\[
\Big(\sum_{i,j=1}^n X_{ij} y_i^2 y_j^2\Big)\Big(\sum_{i=1}^n d_i^{-1} y_i^2\Big)^{r}
\]

is a sum of squares of polynomials in the variables y_i = \sqrt{d_i}\, x_i, i = 1, ..., n. Let us now fix d_1 = 1 and let d_i → +∞ for i > 1. Since the cone of sums of squares of polynomials is closed, the limit polynomial (\sum_{i,j=1}^n X_{ij} y_i^2 y_j^2)\, y_1^{2r} must also be a sum of squares of polynomials in y_1, ..., y_n, say

\[
\Big(\sum_{i,j=1}^n X_{ij} y_i^2 y_j^2\Big) y_1^{2r} = \sum_{k=1}^N q_k^2(y).
\]

But then q_k(y) = 0 for all k whenever y_1 = 0. It follows that y_1 can be factored out of each q_k, i.e., (\sum_{i,j=1}^n X_{ij} y_i^2 y_j^2)\, y_1^{2(r-1)} is also a sum of squares. Continuing this process, we arrive at the conclusion that \sum_{i,j=1}^n X_{ij} y_i^2 y_j^2 is a sum of squares, i.e., X ∈ K_n^0. This concludes the proof.

Theorem 2.2. For any matrix X ∈ C^n \ (S_+^n + N^n) and any r ≥ 0, there exists D ∈ 𝒟 such that DXD ∈ C^n \ K_n^r.

Proof. For any X ∈ C^n and D ∈ 𝒟 we have that DXD ∈ C^n. Therefore we only need to show that for any X ∈ C^n \ (S_+^n + N^n) there exists D ∈ 𝒟 such that DXD ∉ K_n^r. Assume that such a D does not exist. Then for all D ∈ 𝒟 we have DXD ∈ K_n^r, and by the preceding lemma it follows that X ∈ K_n^0 = S_+^n + N^n, a contradiction.

Corollary 2.3. Let r ≥ 0 be an integer. Then C^n = K_n^r if and only if n ≤ 4.
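Theorem 2.2 is purely an existence statement and does not point to a concrete scaling. As an illustration only, one can search for such a scaling numerically. The sketch below reuses in_K1 and the Horn matrix H from the earlier sketch and assumes numpy; the theorem guarantees that scalings of H outside K_5^1 exist, but this crude random search may or may not hit one, and any verdict is again subject to solver tolerances.

```python
import numpy as np

rng = np.random.default_rng(1)
for _ in range(20):
    d = np.exp(rng.uniform(-2.0, 2.0, size=5))   # a random strictly positive diagonal
    D = np.diag(d)
    if not in_K1(D @ H @ D):                     # D H D stays copositive, but may leave K_5^1
        print("scaling with diagonal", np.round(d, 3), "takes H out of K_5^1")
        break
else:
    print("no such scaling found in this small random sample")
```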

3 Scaling a matrix into K_5^1

In this section, we will show that in the 5 × 5 case it is possible to scale any copositive matrix into K_5^1. More precisely, we show that for any X ∈ C^5 there exists a scaling D such that DXD ∈ K_5^1. To this end, we make use of the theory developed in [3]. That paper investigates matrices of the form

\[
S(\theta) := \begin{pmatrix}
1 & -\cos\theta_1 & \cos(\theta_1+\theta_2) & \cos(\theta_4+\theta_5) & -\cos\theta_5\\
-\cos\theta_1 & 1 & -\cos\theta_2 & \cos(\theta_2+\theta_3) & \cos(\theta_5+\theta_1)\\
\cos(\theta_1+\theta_2) & -\cos\theta_2 & 1 & -\cos\theta_3 & \cos(\theta_3+\theta_4)\\
\cos(\theta_4+\theta_5) & \cos(\theta_2+\theta_3) & -\cos\theta_3 & 1 & -\cos\theta_4\\
-\cos\theta_5 & \cos(\theta_5+\theta_1) & \cos(\theta_3+\theta_4) & -\cos\theta_4 & 1
\end{pmatrix}, \tag{2}
\]

where θ ∈ Θ := {θ ∈ R^5_+ : e^T θ < π}. These matrices are transformed versions of the matrices T(φ) that were introduced by Hildebrand in [6], obtained by making the substitution φ_j = −π/2 + θ_{(j−3) mod 5}. We use S(θ) instead of T(φ) since it makes the notation easier.
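For concreteness, the matrix S(θ) is easy to generate numerically; below is a small sketch assuming numpy, with 0-based indices in the code standing for the 1-based indices θ_1, ..., θ_5 of (2).

```python
import numpy as np

def S(theta):
    """The 5x5 matrix S(theta) of (2); theta should satisfy theta >= 0 and sum(theta) < pi."""
    t = np.asarray(theta, dtype=float)
    A = np.eye(5)
    for i in range(5):
        # first off-diagonal: -cos(theta_i), second off-diagonal: cos(theta_i + theta_{i+1})
        A[i, (i + 1) % 5] = A[(i + 1) % 5, i] = -np.cos(t[i])
        A[i, (i + 2) % 5] = A[(i + 2) % 5, i] = np.cos(t[i] + t[(i + 1) % 5])
    return A
```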

For these matrices, the following result was shown in [3]:

Theorem 3.1. Let X ∈ C^5 \ (S_+^5 + N^5). Then X can be decomposed as X = R + N for some N ∈ N^5 with diag(N) = 0 and R = D P^T S(θ) P D for some D ∈ 𝒟, a permutation matrix P, and some S(θ) as defined in (2) with θ ∈ Θ.

We will show below in Theorem 3.2 that S(θ) ∈ K_5^1. This will then immediately imply the proposed scaling result: Take X ∈ C^5 \ (S_+^5 + N^5). If X has a zero diagonal entry, say X_11 = 0, then copositivity of X implies that its first row and column must be nonnegative, so we can decompose X as

\[
X = \begin{pmatrix} 0 & n^T \\ n & 0 \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 0 & C \end{pmatrix}, \tag{3}
\]

where n ∈ R^4 is nonnegative and C ∈ C^4 = S_+^4 + N^4, from which we immediately get X ∈ K_5^0 ⊆ K_5^1. Now assume diag(X) > 0. Then we use Theorem 3.1 to get X = D P^T S(θ) P D + N. But this is equivalent to

\[
D^{-1} X D^{-1} = P^T S(\theta) P + D^{-1} N D^{-1},
\]

which is a decomposition of D^{-1} X D^{-1} as a sum of two elements of K_5^1 (observe that K_5^1 is closed under permutations), so we conclude D^{-1} X D^{-1} ∈ K_5^1. Observe that the diagonal entries of D^{-1} X D^{-1} all equal 1 because (S(θ))_ii = 1 and (N)_ii = 0. In case there is a zero entry (X)_ii = 0, we can analogously scale the submatrix C in (3) to 0/1 diagonal. This shows that for any matrix X ∈ C^5, scaling X to 0/1 diagonal yields a matrix in K_5^1. So for the above arguments to be valid, it remains to show the following theorem.

Theorem 3.2. For all θ ∈ Θ, we have that S(θ) ∈ K_5^1.

Proof. Recall from the introduction that S(θ) ∈ K_5^1 if and only if the following system of LMIs has a feasible solution M^1(θ), ..., M^5(θ):

\[
\begin{aligned}
S(\theta) - M^i(\theta) &\in S_+^5, && i = 1, \dots, 5, && \text{(4a)}\\
(M^i(\theta))_{ii} &= 0, && i = 1, \dots, 5, && \text{(4b)}\\
(M^i(\theta))_{jj} + 2 (M^j(\theta))_{ij} &= 0, && i, j = 1, \dots, 5 \text{ s.t. } i \ne j, && \text{(4c)}\\
(M^i(\theta))_{jk} + (M^j(\theta))_{ik} + (M^k(\theta))_{ij} &\ge 0, && i, j, k = 1, \dots, 5 \text{ s.t. } i < j < k. && \text{(4d)}
\end{aligned}
\]

We claim that the following matrices constitute a feasible solution for this system:

\[
M^1(\theta) = \begin{pmatrix}
0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & \alpha_1(\theta) & 0\\
0 & 0 & 0 & \beta_1(\theta) & \gamma_1(\theta)\\
0 & \alpha_1(\theta) & \beta_1(\theta) & 0 & 0\\
0 & 0 & \gamma_1(\theta) & 0 & 0
\end{pmatrix},
\qquad
M^2(\theta) = \begin{pmatrix}
0 & 0 & 0 & \gamma_2(\theta) & 0\\
0 & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & \alpha_2(\theta)\\
\gamma_2(\theta) & 0 & 0 & 0 & \beta_2(\theta)\\
0 & 0 & \alpha_2(\theta) & \beta_2(\theta) & 0
\end{pmatrix},
\]
\[
M^3(\theta) = \begin{pmatrix}
0 & 0 & 0 & \alpha_3(\theta) & \beta_3(\theta)\\
0 & 0 & 0 & 0 & \gamma_3(\theta)\\
0 & 0 & 0 & 0 & 0\\
\alpha_3(\theta) & 0 & 0 & 0 & 0\\
\beta_3(\theta) & \gamma_3(\theta) & 0 & 0 & 0
\end{pmatrix},
\qquad
M^4(\theta) = \begin{pmatrix}
0 & \beta_4(\theta) & \gamma_4(\theta) & 0 & 0\\
\beta_4(\theta) & 0 & 0 & 0 & \alpha_4(\theta)\\
\gamma_4(\theta) & 0 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0\\
0 & \alpha_4(\theta) & 0 & 0 & 0
\end{pmatrix},
\]
\[
M^5(\theta) = \begin{pmatrix}
0 & 0 & \alpha_5(\theta) & 0 & 0\\
0 & 0 & \beta_5(\theta) & \gamma_5(\theta) & 0\\
\alpha_5(\theta) & \beta_5(\theta) & 0 & 0 & 0\\
0 & \gamma_5(\theta) & 0 & 0 & 0\\
0 & 0 & 0 & 0 & 0
\end{pmatrix},
\]

where for all i = 1, ..., 5 (the indices being taken modulo 5) we define

\[
\begin{aligned}
\alpha_i(\theta) &= \cos(\theta_{i-2} + \theta_{i-1} + \theta_i) + \cos(\theta_{i+1} + \theta_{i+2}),\\
\beta_i(\theta) &= -\cos(\theta_{i+2}) - \cos(\theta_{i-2} + \theta_{i-1} + \theta_i + \theta_{i+1}),\\
\gamma_i(\theta) &= \cos(\theta_{i-1} + \theta_i + \theta_{i+1}) + \cos(\theta_{i+2} + \theta_{i-2}).
\end{aligned}
\]
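Before the analytic verification that follows, this claimed solution can be sanity-checked numerically. The sketch below assumes numpy, reuses the function S(theta) from the earlier sketch, builds M^1(θ), ..., M^5(θ) for a random θ ∈ Θ, and tests conditions (4a) and (4d) up to a small tolerance.

```python
import numpy as np

def M_matrices(theta):
    """The matrices M^1(theta),...,M^5(theta) above; 0-based indices in the code."""
    t = np.asarray(theta, dtype=float)
    c = lambda *idx: np.cos(sum(t[k % 5] for k in idx))
    Ms = []
    for i in range(5):
        alpha = c(i - 2, i - 1, i) + c(i + 1, i + 2)
        beta = -c(i + 2) - c(i - 2, i - 1, i, i + 1)
        gamma = c(i - 1, i, i + 1) + c(i + 2, i - 2)
        Mi = np.zeros((5, 5))
        # nonzero pattern of M^1 (0-based): (1,3)=alpha, (2,3)=beta, (2,4)=gamma;
        # the pattern of M^{i+1} is this pattern shifted cyclically by i
        for (r, s), val in {(1, 3): alpha, (2, 3): beta, (2, 4): gamma}.items():
            r, s = (r + i) % 5, (s + i) % 5
            Mi[r, s] = Mi[s, r] = val
        Ms.append(Mi)
    return Ms

rng = np.random.default_rng(0)
theta = rng.random(5)
theta *= 0.99 * np.pi / theta.sum()           # a random point of Theta
A, Ms = S(theta), M_matrices(theta)
print(all(np.linalg.eigvalsh(A - Mi).min() > -1e-9 for Mi in Ms))               # (4a)
print(all(Ms[i][j, k] + Ms[j][i, k] + Ms[k][i, j] > -1e-9
          for i in range(5) for j in range(i + 1, 5) for k in range(j + 1, 5)))  # (4d)
```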

From the fact that (M^i(θ))_{jj} = (M^i(θ))_{ij} = 0 for all i, j = 1, ..., 5 it is immediately clear that (4b) and (4c) hold for M^1(θ), ..., M^5(θ).

To show (4a), we first note that due to the cyclic symmetry in S(θ) and the M^i(θ) we need only prove it for a single i. Taking i = 1 we have the following factorisation, where we use the well known formula cos(a + b) = cos a cos b − sin a sin b:

\[
S(\theta) - M^1(\theta) = \begin{pmatrix}
1 & -\cos\theta_1 & \cos(\theta_1+\theta_2) & \cos(\theta_4+\theta_5) & -\cos\theta_5\\
-\cos\theta_1 & 1 & -\cos\theta_2 & -\cos(\theta_4+\theta_5+\theta_1) & \cos(\theta_5+\theta_1)\\
\cos(\theta_1+\theta_2) & -\cos\theta_2 & 1 & \cos(\theta_4+\theta_5+\theta_1+\theta_2) & -\cos(\theta_5+\theta_1+\theta_2)\\
\cos(\theta_4+\theta_5) & -\cos(\theta_4+\theta_5+\theta_1) & \cos(\theta_4+\theta_5+\theta_1+\theta_2) & 1 & -\cos\theta_4\\
-\cos\theta_5 & \cos(\theta_5+\theta_1) & -\cos(\theta_5+\theta_1+\theta_2) & -\cos\theta_4 & 1
\end{pmatrix}
\]
\[
= \begin{pmatrix} 1\\ -\cos\theta_1\\ \cos(\theta_1+\theta_2)\\ \cos(\theta_4+\theta_5)\\ -\cos\theta_5 \end{pmatrix}
\begin{pmatrix} 1\\ -\cos\theta_1\\ \cos(\theta_1+\theta_2)\\ \cos(\theta_4+\theta_5)\\ -\cos\theta_5 \end{pmatrix}^T
+ \begin{pmatrix} 0\\ \sin\theta_1\\ -\sin(\theta_1+\theta_2)\\ \sin(\theta_4+\theta_5)\\ -\sin\theta_5 \end{pmatrix}
\begin{pmatrix} 0\\ \sin\theta_1\\ -\sin(\theta_1+\theta_2)\\ \sin(\theta_4+\theta_5)\\ -\sin\theta_5 \end{pmatrix}^T.
\]

Hence S(θ) − M^1(θ) ∈ S_+^5 and (4a) is shown.

Showing (4d) requires a bit more work. We first note that for most index triples this condition is equivalent to 0 ≥ 0, and for the remaining triples it is equivalent to (the indices being taken modulo 5)

\[
\alpha_j(\theta) + \beta_{j-2}(\theta) + \gamma_{j+1}(\theta) \ge 0, \qquad j = 1, \dots, 5, \ \theta \in \operatorname{cl}\Theta.
\]

We recall that for a, b, c ∈ R we have

\[
\begin{aligned}
\cos a - \cos b &= 2 \sin\big(\tfrac12(a+b)\big)\sin\big(\tfrac12(b-a)\big),\\
\cos a + \cos b &= 2 \cos\big(\tfrac12(a+b)\big)\cos\big(\tfrac12(b-a)\big).
\end{aligned}
\]

From this we also get

\[
\begin{aligned}
\cos(a+b-c) + \cos(a-b+c) - \cos(-a+b+c)
&= \big[\cos(a+b-c) + \cos(a-b+c)\big] - \big[\cos(-a+b+c) + \cos(a+b+c)\big] + \cos(a+b+c)\\
&= 2\cos a \cos(b-c) - 2\cos a\cos(b+c) + \cos(a+b+c)\\
&= \cos(a+b+c) + 4 \cos a \sin b \sin c.
\end{aligned}
\]

Using these trigonometric identities, we get

\[
\begin{aligned}
\alpha_j(\theta) &+ \beta_{j-2}(\theta) + \gamma_{j+1}(\theta)\\
&= \cos(\theta_{j-2}+\theta_{j-1}+\theta_j) + \cos(\theta_{j+1}+\theta_{j+2}) - \cos(\theta_j) - \cos(\theta_{j+1}+\theta_{j+2}+\theta_{j-2}+\theta_{j-1})\\
&\qquad + \cos(\theta_j+\theta_{j+1}+\theta_{j+2}) + \cos(\theta_{j-2}+\theta_{j-1})\\
&= 2\cos\big(\tfrac12 e^T\theta\big)\Big(\cos\big(\tfrac12(\theta_j + (\theta_{j-2}+\theta_{j-1}) - (\theta_{j+1}+\theta_{j+2}))\big)
 - \cos\big(\tfrac12(-\theta_j + (\theta_{j-2}+\theta_{j-1}) + (\theta_{j+1}+\theta_{j+2}))\big)\\
&\qquad + \cos\big(\tfrac12(\theta_j - (\theta_{j-2}+\theta_{j-1}) + (\theta_{j+1}+\theta_{j+2}))\big)\Big)\\
&= 2\cos\big(\tfrac12 e^T\theta\big)\Big(\cos\big(\tfrac12 e^T\theta\big) + 4\cos\big(\tfrac12\theta_j\big)\sin\big(\tfrac12(\theta_{j-2}+\theta_{j-1})\big)\sin\big(\tfrac12(\theta_{j+1}+\theta_{j+2})\big)\Big).
\end{aligned}
\]

Since θ ∈ Θ means θ ≥ 0 and e^T θ < π, all the half-angle arguments above lie in [0, π/2], so every factor in this last expression is nonnegative. Hence α_j(θ) + β_{j−2}(θ) + γ_{j+1}(θ) ≥ 0 for all j = 1, ..., 5 and θ ∈ Θ, which shows (4d) and completes the proof.

We can now summarise the discussion of this section in the following corollary and theorems.

Corollary 3.3. Let X ∈ S^5 be such that (X)_ii ∈ {0, 1} for all i. Then X ∈ C^5 if and only if X ∈ K_5^1.

This gives us the following interesting result.

Theorem 3.4. Let X ∈ S^5 and D ∈ 𝒟 be such that (DXD)_ii ∈ {0, 1} for all i. Then X ∈ C^5 if and only if DXD ∈ K_5^1.

This means that if we want to test a matrix X ∈ S^5 with nonnegative diagonal entries for copositivity, it suffices to scale it so that its diagonal entries become binary and to test the scaled matrix for inclusion in K_5^1: the original matrix is copositive if and only if the scaled matrix lies in K_5^1 (see the sketch below).

Finally, we have the following characterisation of C^5.

Theorem 3.5. For matrices of order five, we have

\[
C^5 = \{ D X D \mid X \in K_5^1,\ D \in \mathcal{D} \}.
\]
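Theorem 3.4 translates directly into a computational copositivity test for 5 × 5 matrices: scale the diagonal to binary values and check membership in K_5^1. A minimal sketch, reusing in_K1 and the Horn matrix H from the sketch in the introduction (and hence inheriting its assumptions and solver tolerances):

```python
import numpy as np

def scale_to_binary_diagonal(X):
    """A scaling D with (DXD)_ii in {0,1}, assuming X_ii >= 0 for all i."""
    d = np.array([1.0 / np.sqrt(x) if x > 0 else 1.0 for x in np.diag(X)])
    return np.diag(d)

def is_copositive_5x5(X):
    """Copositivity test for a symmetric 5x5 matrix via Theorem 3.4."""
    if np.any(np.diag(X) < 0):
        return False                       # a negative diagonal entry already violates copositivity
    D = scale_to_binary_diagonal(X)
    return in_K1(D @ X @ D)

print(is_copositive_5x5(H))   # the Horn matrix is copositive: expected True
```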

4 Conjectures and Open Problems

In this paper we established some very surprising results, and we are left with some open questions which we discuss in this section. We formulate a number of conjectures for which we also provide supporting evidence.

In Section 3 we saw that for n ≤ 5 and r = 1 we have

\[
C^n = \{ D X D \mid X \in K_n^r,\ D \in \mathcal{D} \}. \tag{5}
\]

It is an open question whether a similar result holds for higher n. It could be that (5) holds with r = 1 for all n ≥ 1, or, more weakly, that for every n ≥ 1 there exists an r ≥ 0 such that (5) holds. In fact, in Section 3 we found that for n = 5 and r = 1 it was useful to scale the matrix to have binary values on the diagonal. We extend this with the following conjecture.

Conjecture 4.1. For all n ≥ 1, there exists a finite r ≥ 0 such that for any matrix X ∈ S^n with (X)_ii ≥ 0 for all i, and any D ∈ 𝒟 such that (DXD)_ii ∈ {0, 1} for all i, we have that X ∈ C^n if and only if DXD ∈ K_n^r.

Numerical experiments with randomly generated instances carried out by the authors suggest that if we take a matrix in K_n^1 and scale it such that the diagonal is binary, then the scaled version of the matrix is also in K_n^1. An extension and reinterpretation of this observation is the following.

Conjecture 4.2. Let X ∈ S^n be such that (X)_ii ∈ {0, 1} for all i. Then X ∉ K_n^r implies that DXD ∉ K_n^r for all D ∈ 𝒟.

This conjecture effectively says that if we wish to use one of the Parrilo cones to test an arbitrary matrix for copositivity, then the best thing to do is to first scale the matrix so that its diagonal entries are binary: if this scaled matrix is not in the Parrilo cone, then no scaling of it will be either.

These conjectures suggest that scaling the diagonal to binary values is important for the Parrilo cones. One piece of support for this idea for n ≥ 6 is the following theorem.

Theorem 4.3. Let X ∈ C^n be such that (X)_ij ∈ {−1, +1} for all i, j = 1, ..., n. Then X ∈ K_n^1.

Proof. We consider an arbitrary X ∈ C^n such that (X)_ij ∈ {−1, +1} for all i, j. First note that we must have (X)_ii = 1 for all i. We now let M^1, ..., M^n ∈ S^n be given by

\[
(M^i)_{jk} = (X)_{jk} - (X)_{ij}(X)_{ik} \qquad \text{for all } i, j, k = 1, \dots, n.
\]

We claim that these provide a feasible solution to the system of LMIs (1a)–(1d), and thus a certificate for X ∈ K_n^1. By construction it is immediately apparent that for all i the matrix X − M^i is a rank one, positive semidefinite matrix, so (1a) holds. For all i, j = 1, ..., n we have

\[
\begin{aligned}
(M^i)_{jj} &= (X)_{jj} - (X)_{ij}^2 = 1 - (\pm 1)^2 = 0,\\
(M^i)_{ij} &= (X)_{ij} - (X)_{ii}(X)_{ij} = (X)_{ij} - (X)_{ij} = 0.
\end{aligned}
\]

From this we immediately get that (1b) and (1c) hold. We are now left to show that (1d) holds. Suppose, for the sake of contradiction, that there exist i < j < k such that

\[
0 > (M^i)_{jk} + (M^j)_{ik} + (M^k)_{ij} = (X)_{jk} + (X)_{ik} + (X)_{ij} - (X)_{ij}(X)_{ik} - (X)_{ij}(X)_{jk} - (X)_{ik}(X)_{jk}.
\]

As all the entries of X are in {−1, +1}, we must then have −1 = (X)_{jk} = (X)_{ik} = (X)_{ij}. However, if we now let z ∈ {0, 1}^n ⊂ R^n_+ be such that

\[
(z)_l = \begin{cases} 1 & \text{if } l \in \{i, j, k\},\\ 0 & \text{otherwise}, \end{cases}
\]

then we get the contradiction

\[
0 \le z^T X z = (X)_{ii} + (X)_{jj} + (X)_{kk} + 2(X)_{ij} + 2(X)_{ik} + 2(X)_{jk} = -3 < 0.
\]
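The certificate in this proof is completely explicit, so it can be checked mechanically. A small sketch, assuming numpy and reusing the Horn matrix H from the introduction (a copositive matrix with all entries in {−1, +1}):

```python
import numpy as np

def pm1_certificate(X):
    """M^1,...,M^n with (M^i)_{jk} = X_{jk} - X_{ij} X_{ik}, as in the proof of Theorem 4.3."""
    return [X - np.outer(X[:, i], X[:, i]) for i in range(X.shape[0])]

n = H.shape[0]
M = pm1_certificate(H)
ok_1a = all(np.linalg.eigvalsh(H - Mi).min() > -1e-12 for Mi in M)   # X - M^i is rank-1 PSD
ok_1b = all(abs(M[i][i, i]) < 1e-12 for i in range(n))
ok_1c = all(abs(M[i][j, j] + 2 * M[j][i, j]) < 1e-12
            for i in range(n) for j in range(n) if i != j)
ok_1d = all(M[i][j, k] + M[j][i, k] + M[k][i, j] > -1e-12
            for i in range(n) for j in range(i + 1, n) for k in range(j + 1, n))
print(ok_1a, ok_1b, ok_1c, ok_1d)   # expected: True True True True
```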


Copositive matrices with all entries equal to ±1 were previously studied in [5]. We have shown that these matrices must be in K_n^1, even though this class includes matrices which are not in S_+^n + N^n (for example the Horn matrix) and which, as we know from Section 2, can therefore be scaled out of K_n^1.

Equivalent to scaling to a binary diagonal would be to scale such that all the diagonal entries are either equal to zero or equal to the same positive scalar. Further support for this type of scaling comes from the use of the Parrilo cones in approximating the stability number of a graph. It was shown in [2] that the stability number α(G) of a graph G with n nodes and adjacency matrix A_G can be computed as

\[
\alpha(G) = \min\{\lambda \in \mathbb{R} \mid \lambda(I + A_G) - E \in C^n\},
\]

where E is the all-ones matrix and I is the identity matrix. An approximation of this can then be obtained by replacing the copositive cone with one of the Parrilo cones, and this provides a very good approximation in practice. Since A_G has a zero diagonal, for any λ the diagonal entries of λ(I + A_G) − E are all equal. This means that the matrix is already scaled in the way that these conjectures suggest is best, which could give an interpretation as to why the approximations are so good.

In fact, for the 5 × 5 case an analogue of Corollary 3.3 applies. Consequently, our results show that for any graph G with five nodes we have

\[
\alpha(G) = \min\{\lambda \in \mathbb{R} \mid \lambda(I + A_G) - E \in C^5\} = \min\{\lambda \in \mathbb{R} \mid \lambda(I + A_G) - E \in K_5^1\}. \tag{6}
\]

This fact has been observed by de Klerk and Pasechnik [2] for the case where G is the 5-cycle. All other graphs with five vertices are perfect, and it is known [10, Corollary 15] that the K_n^0-approximation (and hence also the K_n^1-approximation) provides an exact answer for perfect graphs. Therefore, (6) is implicitly known, but it seems that it has never been stated explicitly.
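The approximation of α(G) obtained by replacing C^n with K_n^1 is itself an SDP, since membership in K_n^1 is described by the LMIs (1a)–(1d). The sketch below is a hedged illustration in the same spirit as the earlier ones, assuming numpy and cvxpy with the SCS solver; the function and variable names are illustrative. For the 5-cycle it should return a value close to 2, in agreement with (6).

```python
import numpy as np
import cvxpy as cp

def stability_bound_K1(AG):
    """min{lam : lam*(I + A_G) - E in K_n^1}, written out via the LMIs (1a)-(1d)."""
    n = AG.shape[0]
    lam = cp.Variable()
    X = lam * (np.eye(n) + AG) - np.ones((n, n))
    M = [cp.Variable((n, n), symmetric=True) for _ in range(n)]
    cons = []
    for i in range(n):
        cons += [X - M[i] >> 0, M[i][i, i] == 0]
        for j in range(n):
            if j != i:
                cons.append(M[i][j, j] + 2 * M[j][i, j] == 0)
    for i in range(n):
        for j in range(i + 1, n):
            for k in range(j + 1, n):
                cons.append(M[i][j, k] + M[j][i, k] + M[k][i, j] >= 0)
    cp.Problem(cp.Minimize(lam), cons).solve(solver=cp.SCS)
    return lam.value

A_C5 = np.array([[0, 1, 0, 0, 1],
                 [1, 0, 1, 0, 0],
                 [0, 1, 0, 1, 0],
                 [0, 0, 1, 0, 1],
                 [1, 0, 0, 1, 0]])
print(stability_bound_K1(A_C5))   # expected: approximately 2 = alpha(C_5)
```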

Acknowledgements

This work was mostly done while M. Dür was working at the Johann Bernoulli Institute of Mathematics and Computer Science, University of Groningen, The Netherlands, and initiated during a visit of R. Hildebrand to that institute. M. Dür, L. Gijben and R. Hildebrand gratefully acknowledge support from the Netherlands Organisation for Scientific Research through Vici grant no. 639.033.907. The first three authors would also like to thank Janez Povh for stimulating discussions on the topic.

References

[1] Immanuel M. Bomze and Etienne de Klerk. Solving standard quadratic optimization problems via linear, semidefinite and copositive programming. Journal of Global Optimization, 24(2):163–185, 2002.

[2] Etienne de Klerk and Dmitrii V. Pasechnik. Approximation of the stability number of a graph via copositive programming. SIAM J. Optim., 12(4):875–892, 2002.

[3] Peter J.C. Dickinson, Mirjam Dür, Luuk Gijben, and Roland Hildebrand. Irreducible elements of the 5 × 5 copositive cone. Preprint, 2011.

[4] Marshall Hall, Jr. and Morris Newman. Copositive and completely positive quadratic forms. Proc. Cambridge Philosoph. Soc., 59:329–339, 1963.

[5] Emilie Haynsworth and A. J. Hoffman. Two remarks on copositive matrices. Linear Algebra and its Applications, 2:387–392, 1969.

[6] Roland Hildebrand. The extreme rays of the 5 × 5 copositive cone. http://www.optimization-online.org/DB_HTML/2011/03/2959.html, 2011.

[7] John E. Maxfield and Henryk Minc. On the matrix equation X′X = A. Proc. Edinburgh Math. Soc. (2), 13:125–129, 1962/1963.

[8] Katta G. Murty and Santosh N. Kabadi. Some NP-complete problems in quadratic and nonlinear programming. Math. Programming, 39(2):117–129, 1987.

[9] Pablo A. Parrilo. Structured semidefinite programs and semialgebraic geometry methods in robustness and optimization. PhD thesis, California Institute of Technology, 2000.

[10] Javier Peña, Juan Vera, and Luis F. Zuluaga. Computing the stability number of a graph via linear and semidefinite programming. SIAM J. Optim., 18(1):87–105, 2007.
