JOURNAL OF INDUSTRIAL AND MANAGEMENT OPTIMIZATION Volume 9, Number 3, July 2013
doi:10.3934/jimo.2013.9.703 pp. 703–721
COMPUTABLE REPRESENTATION OF THE CONE OF NONNEGATIVE QUADRATIC FORMS OVER A GENERAL SECOND-ORDER CONE AND ITS APPLICATION TO COMPLETELY POSITIVE PROGRAMMING
Ye Tian School of Business Administration Southwestern University of Finance and Economics Chengdu, 611130, China
Shu-Cherng Fang and Zhibin Deng∗ Department of Industrial and Systems Engineering North Carolina State University, Raleigh, NC 27606, USA
Wenxun Xing Department of Mathematical Sciences Tsinghua University, Beijing, 10084, China
(Communicated by Kok Lay Teo) Abstract. In this paper, we provide a computable representation of the cone of nonnegative quadratic forms over a general nontrivial second-order cone using linear matrix inequalities (LMI). By constructing a sequence of such computable cones over a union of second-order cones, an efficient algorithm is designed to find an approximate solution to a completely positive programming problem using semidefinite programming techniques. In order to accelerate the convergence of the approximation sequence, an adaptive scheme is adopted, and “reformulation-linearization technique” (RLT) constraints are added to further improve its efficiency.
1. Introduction. Studying the cones formed by quadratic forms that achieve nonnegative values over a given region in R^n has proven useful in optimization. For example, the quadratic forms that are nonnegative over R^n lead to the cone of positive semidefinite matrices, which is of central importance to semidefinite programming. Given a nonempty set F ⊆ R^n, Sturm and Zhang [28] defined the cone of nonnegative quadratic forms over F as follows:

C_F = {M ∈ S^n | x^T M x ≥ 0 for all x ∈ F},

2010 Mathematics Subject Classification. Primary: 90C26, 90C59, 90C22; Secondary: 30E10.
Key words and phrases. Cone of nonnegative quadratic forms, general nontrivial second-order cone, completely positive programming, approximation, adaptive scheme.
Tian and Deng's research has been supported by the Edward P. Fitts Fellowship at NCSU. Fang's research has been supported by the US National Science Foundation Grant # DMI-0553310. Xing's research has been supported by the China National Science Foundation # 11171177.
where S^n denotes the set of real symmetric matrices of order n. Its dual cone is

C*_F = cl Cone{xx^T ∈ S^n | x ∈ F},

where "cl" means closure and "Cone" stands for the conic hull of a set. Here, the conic hull of a set F ⊆ R^n is defined as Cone(F) := {Σ_{i=1}^k α_i x_i | x_i ∈ F, α_i ≥ 0, k ∈ N_+}, with N_+ being the set of positive integers. In general, a cone of nonnegative quadratic forms is not computable. For example, the cone of nonnegative quadratic forms over the nonnegative orthant R^n_+ is the cone of copositive matrices, and checking the copositivity of a given matrix is co-NP-complete (ref. [21]). However, for some special cases, computable representations can be found. Characterizations of computable cones help solve certain subclasses of optimization problems, e.g., minimizing a nonconvex quadratic objective function over a domain defined by one nonconvex quadratic inequality constraint, by one strictly convex/concave quadratic equality constraint, by one convex quadratic inequality and one linear inequality (ref. [28]), or by one elliptic constraint and two parallel linear constraints (ref. [9]). In this paper, we focus on a general nontrivial second-order cone, which is an unbounded set. Here, nontriviality means the cone contains at least one point other than the origin. Notice that the computable representation of C_F over the classical second-order cone SOC(n) = {x ∈ R^n | sqrt(x_1^2 + ... + x_{n-1}^2) ≤ x_n} has been studied as a special case of the homogenization of the spherical domain in [28]. However, to the best of our knowledge, for a general nontrivial second-order cone F_SOC = {x ∈ R^n | sqrt(x^T Q x) ≤ f^T x}, where Q ∈ S^n_{++}, the set of positive definite matrices, and f ∈ R^n, the existence of a computable representation remains unresolved. We aim to answer this question and provide a computable representation of the cone of nonnegative quadratic forms over F_SOC. In addition, inspired by Lu et al. [18], who use the computable cone of nonnegative quadratic functions over an ellipsoid to solve quadratically constrained quadratic programming problems, we use the computable cone of nonnegative quadratic forms over a general nontrivial second-order cone as a new tool to approximate the completely positive programming problem, which is known to be NP-hard (ref. [12]). A completely positive programming problem (ref. [2]) has the following form:

(CPP)   min  f(X) = D · X
        s.t. A_i · X = b_i, i = 1, 2, ..., m,
             X ∈ C*,

where D, A_i ∈ S^n, b_i ∈ R, i = 1, 2, ..., m, and C* = {X ∈ S^n | X = Σ_{i=1}^r x_i x_i^T, x_i ∈ R^n_+, r ∈ N_+} denotes the cone of completely positive matrices. For two matrices A, B ∈ S^n, A · B = trace(A^T B) = Σ_{i=1}^n Σ_{j=1}^n A_{ij} B_{ij}, where A_{ij} and B_{ij} denote the elements of A and B in the ith row and jth column, respectively. In this paper, we let S^n_+ and N^n_+ denote the set of positive semidefinite matrices and the set of nonnegative matrices, respectively. Let b^T = (b_1, b_2, ..., b_m); the dual problem of (CPP) becomes

(DCPP)  max  h(y) = b^T y
        s.t. Σ_{i=1}^m y_i A_i + S = D,
             S ∈ C,
where C = {A ∈ S^n | x^T A x ≥ 0 for all x ∈ R^n_+} denotes the cone of copositive matrices. This dual problem is the commonly seen copositive programming problem. Completely positive programming models have become useful in quadratic and combinatorial optimization (ref. [12]). For example, Burer [8] proved that every quadratic programming problem with linear and binary constraints can be written as a completely positive programming problem. Many combinatorial problems can also be stated as completely positive programming problems; see [17, 13, 22]. Since the completely positive programming problem is NP-hard, there is no known polynomial-time algorithm for finding an exact solution. Bomze et al. [4] proposed a new attempt using a feasible descent method in C*. However, the method is not guaranteed to converge, and it requires extra work for finding a feasible starting point, which is as difficult as solving the original problem. Most recently, Žilinskas [30] developed a simplicial partition algorithm that may solve the dual problems of completely positive programming problems of size up to 24 in a reasonable period of time. Deng et al. [10] transformed the copositivity detection problem into a quadratic programming problem, which is then approximated by solving a sequence of linear conic programming problems defined on the dual cones of the cones of nonnegative quadratic functions over a union of ellipsoids. In this paper, after showing that the cone of nonnegative quadratic forms over a nontrivial F_SOC is computable, a sequence of computable cones of nonnegative quadratic forms over a union of nontrivial second-order cones is used to provide a new way of approximating the completely positive programming problem. The results obtained become better as the partitions of the underlying standard simplex become finer.
We also introduce an adaptive scheme with some RLT (reformulation-linearization technique) constraints to improve the approximation and reduce the computational burden. The layout of this paper is as follows. Section 2 studies the cone of nonnegative quadratic forms over a general nontrivial second-order cone and its properties. In particular, we prove that the cone of nonnegative quadratic forms over a general nontrivial second-order cone is computable. Section 3 designs a new way to approximate the completely positive programming problem based on this computable cone. Section 4 improves the proposed approximation by an adaptive scheme and some RLT constraints. A special implementation issue is addressed in Section 5. Numerical tests are provided in Section 6. At last, a summary is given in Section 7.

2. Computable cone of nonnegative quadratic forms over F_SOC. We first introduce some properties of the cone of nonnegative quadratic forms and its dual cone over a set F ⊆ R^n. Some monotonicity properties are shown in the next lemma.

Lemma 2.1. If F_1 ⊆ F_2 ⊆ R^n, then C_{F_2} ⊆ C_{F_1} and C*_{F_1} ⊆ C*_{F_2}. Moreover, for any F ⊆ R^n, C*_F ⊆ S^n_+ ⊆ C_F.

Proof. Notice that S^n_+ = C_{R^n} = C*_{R^n}. The result is a direct consequence of the definitions of the cone of nonnegative quadratic forms and its dual cone.
Recall that if a cone is closed, pointed and has an interior point, we say that the cone is proper.

Theorem 2.2. Let F ⊆ R^n. If F has an interior point, then C_F and C*_F are proper cones.
Proof. By definition, it is easy to verify that C_F is closed and convex. Let f_M(x) = M · xx^T. Since F has an interior point x*, we can find a ball B ⊆ R^n such that x* ∈ B ⊆ F. For any M ∈ S^n, if f_M(x) = 0 for all x ∈ B, then M = 0. Therefore, for any M ∈ C_F with M ≠ 0, there exists x̄ ∈ B such that f_M(x̄) > 0. Then we have f_{−M}(x̄) = −f_M(x̄) < 0, thus −M ∉ C_F. Consequently, C_F is a pointed cone. Moreover, since S^n_+ ⊆ C_F and S^n_+ has an interior point, C_F has an interior point too. Hence C_F is a proper cone. Since the dual cone of a proper cone is also a proper cone [2], C*_F must also be proper.

Let K_1, ..., K_s ⊆ R^n be s nonempty sets; the summation of sets is defined by Σ_{i=1}^s K_i = {Σ_{i=1}^s x_i ∈ R^n | x_i ∈ K_i, i = 1, ..., s}. The following two lemmas indicate that a covering of R^n_+ can be used to construct or approximate C and C*.
Lemma 2.3 (Corollary 16.4.2 of [24]). Let K_1, ..., K_s be nonempty closed convex cones in R^n. Then

(∩_{i=1}^s K_i)* = cl(Σ_{i=1}^s K_i*).
If the relative interiors of the K_i, i = 1, ..., s, contain a common point, then the closure sign can be removed from the right-hand side of the above statement.

Lemma 2.4. Assume that R^n_+ = ∪_{i=1}^k F_i and each F_i ⊆ R^n is nonempty. Then C = ∩_{i=1}^k C_{F_i} and C* = Σ_{i=1}^k C*_{F_i}.

Proof. Since F_i ⊆ R^n_+, C ⊆ C_{F_i} for i = 1, ..., k. Hence C ⊆ ∩_{i=1}^k C_{F_i}. If M ∈ ∩_{i=1}^k C_{F_i}, then M · xx^T ≥ 0 for all x ∈ F_i, i = 1, ..., k. This means M · xx^T ≥ 0 for all x ∈ ∪_{i=1}^k F_i = R^n_+. Therefore, M ∈ C and ∩_{i=1}^k C_{F_i} ⊆ C. Consequently, C = ∩_{i=1}^k C_{F_i}. Notice that S^n_+ ⊆ C_{F_i} for i = 1, ..., k, and S^n_+ has an interior point; thus each C_{F_i} is a nonempty closed convex cone, and they share a common interior point for i = 1, ..., k. Then we know, by Lemma 2.3, that C* = (∩_{i=1}^k C_{F_i})* = Σ_{i=1}^k C*_{F_i}.

The next theorem shows that the cones of nonnegative quadratic forms over F and over Cone(F) are actually the same.

Theorem 2.5. Suppose that F ⊆ R^n is a closed convex set. Then C_F = C_{Cone(F)}.

Proof. Let f_M(x) = x^T M x = M · xx^T. If M ∈ C_F, then f_M(x) ≥ 0 for all x ∈ F. For each x̄ ∈ Cone(F), we can find an x ∈ F and a scalar λ ≥ 0 such that x̄ = λx. Therefore, f_M(x̄) = λ^2 f_M(x) ≥ 0. Consequently, M ∈ C_{Cone(F)} and C_F ⊆ C_{Cone(F)}. On the other hand, since F ⊆ Cone(F), Lemma 2.1 leads to C_{Cone(F)} ⊆ C_F.

For the special set F, we consider a general nontrivial second-order cone F_SOC = {x ∈ R^n | sqrt(x^T Q x) ≤ f^T x} with Q ∈ S^n_{++} and f ∈ R^n. The set G_SOC = {x ∈ R^n | x^T Q x ≤ 1, f^T x = 1} ⊆ F_SOC is an ellipsoid depicting a cross-section of F_SOC. The following two results reveal some structures of F_SOC.

Lemma 2.6. Suppose that F_SOC is a nontrivial second-order cone with Q ∈ S^n_{++} and f ∈ R^n. Then F_SOC = Cone(G_SOC).

Proof. Since F_SOC is nontrivial and Q ∈ S^n_{++}, for any x̄ ∈ F_SOC with x̄ ≠ 0, we have 0 < sqrt(x̄^T Q x̄) ≤ f^T x̄. Let x = x̄ / (f^T x̄); then it is easy to verify that x ∈ G_SOC. Hence F_SOC ⊆ Cone(G_SOC). Moreover, since G_SOC ⊆ F_SOC, we know Cone(G_SOC) ⊆ F_SOC. Consequently, F_SOC = Cone(G_SOC).
For a nontrivial F_SOC, we now show that the cones of nonnegative quadratic forms over F_SOC and over G_SOC are exactly the same.

Lemma 2.7. Suppose that F_SOC is a nontrivial second-order cone with Q ∈ S^n_{++} and f ∈ R^n. Then C_{F_SOC} = C_{G_SOC}.
Proof. Lemma 2.6 says F_SOC = Cone(G_SOC). The conclusion comes from Theorem 2.5.

Based on the above results, for a nontrivial F_SOC with Q ∈ S^n_{++} and f ∈ R^n, the problem of checking whether a matrix M ∈ S^n belongs to the cone C_{F_SOC} is equivalent to checking whether the optimal value of the following optimization problem is nonnegative:

(P1)   min  x^T M x
       s.t. x^T Q x ≤ 1,
            f^T x = 1.

Notice that when F_SOC is nontrivial, (P1) is feasible and achieves a finite optimal value. In this paper, for an optimization problem (∗), we denote its optimal value by V(∗).
Theorem 2.8. Suppose that M ∈ S^n and F_SOC is a nontrivial second-order cone with Q ∈ S^n_{++} and f ∈ R^n. Then M ∈ C_{F_SOC} if and only if V(P1) ≥ 0.

Proof. If M ∈ C_{F_SOC}, then M ∈ C_{G_SOC}. From the definition of C_{G_SOC}, we know V(P1) ≥ 0. On the other hand, for a matrix M ∈ S^n, if V(P1) ≥ 0, then for any feasible solution x of (P1), we have x^T M x ≥ 0. Notice that the feasible domain of (P1) is actually G_SOC. Therefore, M ∈ C_{G_SOC} = C_{F_SOC}.

By letting B = f f^T ∈ S^n_+, (P1) can be written as

(P2)   min  M · X
       s.t. Q · X ≤ 1,
            B · X = 1,
            rank(X) = 1,
            X ∈ S^n_+.

Removing the nonconvex constraint rank(X) = 1, (P2) can be relaxed to the following positive semidefinite programming problem:

(P3)   min  M · X
       s.t. Q · X ≤ 1,
            B · X = 1,
            X ∈ S^n_+.

The dual problem of (P3) becomes

(P4)   max  −σ_1 + σ_2
       s.t. M + σ_1 Q − σ_2 B ∈ S^n_+,
            σ_1 ≥ 0, σ_2 ∈ R.

We restate a decomposition scheme of a positive semidefinite matrix below.
Lemma 2.9 (Lemma 2.2 in [29]). Given G ∈ S^n and X ∈ S^n_+, suppose that G · X ≤ 0 and rank(X) = r. Then there exists a rank-one decomposition for X such that X = Σ_{i=1}^r x_i x_i^T and x_i^T G x_i ≤ 0 with x_i ∈ R^n for i = 1, 2, ..., r. In particular, if G · X = 0, then x_i^T G x_i = 0 for i = 1, 2, ..., r.
Theorem 2.10. Suppose that Q ∈ S^n_{++}, f ∈ R^n and B = f f^T. If a nonzero matrix X ∈ S^n with rank(X) = r satisfies

(1)   Q · X ≤ 1,  B · X = 1,  X ∈ S^n_+,

then there exists a decomposition X = Σ_{i=1}^r μ_i x_i x_i^T with r ∈ N_+ and x_i ∈ R^n such that Σ_{i=1}^r μ_i = 1, μ_i > 0, x_i^T Q x_i ≤ 1 and f^T x_i = 1 for i = 1, 2, ..., r.

Proof. Let α = Q · X. Since X is not a zero matrix and Q ∈ S^n_{++}, we have 0 < α ≤ 1. Then αB · X = Q · X, i.e., (Q − αB) · X = 0. By Lemma 2.9, there exists a matrix decomposition X = Σ_{i=1}^r x̂_i x̂_i^T such that x̂_i^T (Q − αB) x̂_i = 0 for i = 1, 2, ..., r. Notice that x̂_i^T B x̂_i = (f^T x̂_i)^2 = (1/α) x̂_i^T Q x̂_i > 0. Let x_i = x̂_i / (f^T x̂_i) and μ_i = x̂_i^T B x̂_i; then Σ_{i=1}^r μ_i = 1, μ_i > 0, x_i^T Q x_i ≤ 1 and f^T x_i = 1 for i = 1, 2, ..., r.

From the relationship among (P1), (P2), (P3) and (P4), the next result can be derived.

Lemma 2.11. Suppose that M ∈ S^n and F_SOC is a nontrivial second-order cone with Q ∈ S^n_{++} and f ∈ R^n. Then problems (P1), (P2), (P3) and (P4) are feasible and achieve the same finite optimal value.

Proof. Since F_SOC is nontrivial, (P1) is feasible and achieves a finite optimal value. Let x be a feasible solution of (P1); it is easy to verify that X = xx^T is feasible to (P2) and (P3). Due to the fact that Q ∈ S^n_{++}, by letting σ_2 = 0, we can find a sufficiently large σ_1 > 0 such that M + σ_1 Q − σ_2 B ∈ S^n_+. Hence (P4) is strongly feasible. Let X* be the optimal solution of (P3). By Theorem 2.10, there exists a decomposition X* = Σ_{i=1}^r μ_i x*_i (x*_i)^T with r ∈ N_+ and x*_i ∈ R^n such that Σ_{i=1}^r μ_i = 1, μ_i > 0, (x*_i)^T Q x*_i ≤ 1 and f^T x*_i = 1 for i = 1, 2, ..., r. Note V(P1) = V(P2) ≥ V(P3) = Σ_{i=1}^r μ_i (x*_i)^T M x*_i. Since each x*_i is a feasible point of (P1), we have (x*_i)^T M x*_i ≥ V(P1) for i = 1, 2, ..., r. Hence each x*_i in this decomposition is an optimal solution to (P1) and V(P1) = V(P3). Because (P3) and (P4) are dual problems, from the SDP theory (ref. [2]), we have V(P3) = V(P4).

Based on the above results, we know M ∈ C_{G_SOC} if and only if V(P4) ≥ 0.

Theorem 2.12. Suppose that M ∈ S^n and F_SOC is a nontrivial second-order cone with Q ∈ S^n_{++}, f ∈ R^n and B = f f^T. Then M ∈ C_{F_SOC} if and only if there exists a λ ≥ 0 such that

(2)   M + λ(Q − B) ∈ S^n_+.

Proof. Recall that C_{F_SOC} = C_{G_SOC} and M ∈ C_{G_SOC} if and only if V(P4) ≥ 0. Therefore, M ∈ C_{F_SOC} if and only if there exist σ_1 and σ_2 such that

(3)   −σ_1 + σ_2 ≥ 0,  M + σ_1 Q − σ_2 B ∈ S^n_+,  σ_1 ≥ 0, σ_2 ∈ R.

Let λ = σ_1; system (3) becomes

(4)   −λ + σ_2 ≥ 0,  M + λ(Q − B) − (σ_2 − λ)B ∈ S^n_+,  λ ≥ 0, σ_2 ∈ R.
Since B = f f^T ∈ S^n_+, if M + λ(Q − B) − (σ_2 − λ)B ∈ S^n_+ for some σ_2 ≥ λ, then M + λ(Q − B) ∈ S^n_+. Hence M ∈ C_{F_SOC} if and only if the following system is feasible:

M + λ(Q − B) ∈ S^n_+,  λ ≥ 0.
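Theorem 2.12's LMI condition can be checked numerically. The sketch below is our own illustration, not code from the paper: it scans the multiplier λ over a grid for a 2 × 2 instance with Q = I and f = e, in which case F_SOC coincides with R^2_+; in practice one would certify λ with an SDP solver rather than a grid search.

```python
# Illustrative sketch of Theorem 2.12: M is in C_{F_SOC} iff some lam >= 0
# makes M + lam*(Q - B) positive semidefinite (B = f f^T).
# For a symmetric 2x2 matrix, PSD <=> nonnegative diagonal and determinant.

def is_psd_2x2(A, tol=1e-9):
    """PSD test for a symmetric 2x2 matrix via diagonal and determinant."""
    (a, b), (_, c) = A
    return a >= -tol and c >= -tol and a * c - b * b >= -tol

def in_cone_over_fsoc(M, Q, B, lam_grid):
    """Return a certificate lam >= 0 with M + lam*(Q - B) PSD, or None."""
    for lam in lam_grid:
        A = [[M[i][j] + lam * (Q[i][j] - B[i][j]) for j in range(2)]
             for i in range(2)]
        if is_psd_2x2(A):
            return lam
    return None

# F_SOC with Q = I and f = (1, 1); in this 2-D case F_SOC equals R^2_+.
Q = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0, 1.0], [1.0, 1.0]]           # B = f f^T with f = (1, 1)
grid = [k * 0.05 for k in range(201)]  # lam in [0, 10]

M_pos = [[0.0, 1.0], [1.0, 0.0]]    # x^T M x = 2 x1 x2 >= 0 on R^2_+
M_neg = [[0.0, -1.0], [-1.0, 0.0]]  # x^T M x = -2 x1 x2 < 0 inside R^2_+

print(in_cone_over_fsoc(M_pos, Q, B, grid))  # certificate lam = 1.0
print(in_cone_over_fsoc(M_neg, Q, B, grid))  # None: not in the cone
```

Here M_pos receives the certificate λ = 1 (at which M + λ(Q − B) is the zero matrix), while M_neg admits no certificate for any λ ≥ 0, matching Theorem 2.12.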
An equivalent system describing the dual cone C*_{F_SOC} is shown in the next result.
Theorem 2.13. Suppose that X ∈ S^n and F_SOC is a nontrivial second-order cone with Q ∈ S^n_{++}, f ∈ R^n and B = f f^T. Then X ∈ C*_{F_SOC} if and only if X satisfies

(5)   (Q − B) · X ≤ 0,  X ∈ S^n_+.

Moreover, C*_{F_SOC} = Cone(Z) with Z = {xx^T | x ∈ G_SOC}.

Proof. By the definition of dual cone, X ∈ C*_{F_SOC} if and only if X · Y ≥ 0 for all Y ∈ C_{F_SOC}. From Theorem 2.12, Y ∈ C_{F_SOC} if and only if there exist a matrix S ∈ S^n_+ and a scalar λ ≥ 0 such that Y = S − λ(Q − B). Hence X ∈ C*_{F_SOC} if and only if X · S − λX · (Q − B) ≥ 0 for all S ∈ S^n_+ and λ ≥ 0, which is equivalent to X ∈ S^n_+ and (Q − B) · X ≤ 0. For the second part of the theorem, since G_SOC ⊆ F_SOC, from the definition, we have Cone(Z) ⊆ C*_{F_SOC}. On the other hand, if X ∈ C*_{F_SOC} with X ≠ 0, then B · X ≥ Q · X > 0. Based on the proof of Theorem 2.10, we know there exists a decomposition X = Σ_{i=1}^r μ_i x_i x_i^T with r ∈ N_+ and x_i ∈ R^n such that Σ_{i=1}^r μ_i = B · X, μ_i > 0, x_i^T Q x_i ≤ 1 and f^T x_i = 1 for i = 1, 2, ..., r. Hence X ∈ Cone(Z). Consequently, we have C*_{F_SOC} = Cone(Z).
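The dual-cone test of Theorem 2.13 involves only a trace inequality and a PSD check. The following pure-Python sketch (ours, not from the paper) verifies it on the same 2 × 2 setup: a matrix assembled from rank-one terms xx^T with x in the cross-section G_SOC passes the test, while a PSD matrix violating (Q − B) · X ≤ 0 fails.

```python
# Illustrative check of Theorem 2.13: X is in C*_{F_SOC} iff X is PSD and
# (Q - B) . X <= 0, where "." is the Frobenius inner product and B = f f^T.

def dot(A, X):
    """Frobenius inner product A . X = trace(A^T X) for 2x2 matrices."""
    return sum(A[i][j] * X[i][j] for i in range(2) for j in range(2))

def in_dual_cone(X, Q, B, tol=1e-9):
    (a, b), (_, c) = X
    psd = a >= -tol and c >= -tol and a * c - b * b >= -tol
    QmB = [[Q[i][j] - B[i][j] for j in range(2)] for i in range(2)]
    return psd and dot(QmB, X) <= tol

Q = [[1.0, 0.0], [0.0, 1.0]]
B = [[1.0, 1.0], [1.0, 1.0]]  # f = (1, 1)

# X assembled from rank-one terms x x^T with x in the cross-section
# G_SOC = {x | x^T Q x <= 1, f^T x = 1}, so it must lie in the dual cone.
pts = [(1.0, 0.0), (0.0, 1.0), (0.5, 0.5)]
X_in = [[sum(x[i] * x[j] for x in pts) for j in range(2)] for i in range(2)]

# PSD matrix with a negative off-diagonal entry: here (Q - B) . X > 0.
X_out = [[1.0, -0.5], [-0.5, 1.0]]

print(in_dual_cone(X_in, Q, B))   # True
print(in_dual_cone(X_out, Q, B))  # False
```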
In summary, for a general nontrivial second-order cone F_SOC, we have provided computable representations for C_{F_SOC} and C*_{F_SOC} by linear matrix inequalities (LMI). Hence we know C_{F_SOC} and C*_{F_SOC} are both computable.

3. Approximation to completely positive programming. In this section, we consider how to approximate the completely positive programming problem (CPP) using the computable cones C_{F_SOC} and C*_{F_SOC}. Notice that a symmetric matrix M ∈ C if and only if x^T M x ≥ 0 for any x ∈ ∆, where ∆ = {x ∈ R^n_+ | e^T x = 1} is the standard simplex. If ∆ is divided into small simplices, by Lemma 2.4 and Lemma 2.7, the cones of nonnegative quadratic forms over these simplices can be used to approximate the cone of completely positive matrices C*. Let V be an arbitrary simplex generated by a set of vertices {v_1, v_2, ..., v_n}, with v_i ∈ R^n, i = 1, 2, ..., n, being linearly independent. Now we show how to provide an approximation to the cone of nonnegative quadratic forms C_V over the simplex V. First we consider the following system:

(6)   Q · [v_i v_i^T] ≤ 1, i = 1, 2, ..., n,  Q ∈ S^n_{++}.
Lemma 3.1. Suppose that V is an arbitrary simplex generated by a set of linearly independent vertices {v_1, v_2, ..., v_n} with v_i ∈ R^n for i = 1, ..., n. Then system (6) is feasible.
Proof. Since {v_1, v_2, ..., v_n} are linearly independent, V = [v_1, v_2, ..., v_n]^T is an invertible matrix. Let e_i denote the n × 1 vector with all elements being 0 except the ith element being 1. Let u_i = V^{-1} e_i for i = 1, 2, ..., n and Q = Σ_{i=1}^n u_i u_i^T; then Q is positive definite with Q · [v_i v_i^T] = 1 for i = 1, 2, ..., n. Hence system (6) is feasible.

Moreover, since {v_1, v_2, ..., v_n} are linearly independent, the following system of linear equations:

(7)   v_i^T y = 1, i = 1, 2, ..., n,

must have a unique solution. Let f = y* be this unique solution. By combining Q and f, we construct a second-order cone F_SOC = {x ∈ R^n | sqrt(x^T Q x) ≤ f^T x} in the next lemma.

Lemma 3.2. Suppose that V is an arbitrary simplex generated by a set of linearly independent vertices {v_1, v_2, ..., v_n} with v_i ∈ R^n for i = 1, ..., n. If Q is a feasible solution of system (6), f is the unique solution of system (7), and F_SOC = {x ∈ R^n | sqrt(x^T Q x) ≤ f^T x}, then V ⊆ F_SOC.

Proof. Since V and F_SOC are convex, V ⊆ F_SOC if and only if v_i ∈ F_SOC for i = 1, 2, ..., n. Note that Q · [v_i v_i^T] = v_i^T Q v_i ≤ f^T v_i = 1; thus v_i ∈ F_SOC and V ⊆ F_SOC.

However, there may exist multiple feasible choices of Q for system (6). In order to obtain a unique solution, we consider the following problem:

(P5)   min  log det(Q^{-1})
       s.t. Q · [v_i v_i^T] ≤ 1, i = 1, 2, ..., n,
            Q ∈ S^n_{++}.

Notice that if Q · [v_i v_i^T] = 1, then v_i is a boundary point of the ellipsoid {x ∈ R^n | x^T Q x ≤ 1} and of the cross-section G_SOC = {x ∈ R^n | x^T Q x ≤ 1, f^T x = 1}. It also implies that v_i is a boundary point of the second-order cone F_SOC. The optimal solution of (P5) corresponds to the minimal-volume ellipsoid that is centered at the origin and whose cross-section G_SOC covers the simplex V. Problem (P5) can be solved efficiently using an interior-point algorithm. Numerical software such as cvx (ref. [14]) and SeDuMi (ref. [27]) may apply.
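While the minimal-volume Q of (P5) requires an SDP solver, the feasible Q constructed in the proof of Lemma 3.1 is available in closed form: with V = [v_1, ..., v_n]^T, taking u_i = V^{-1} e_i gives Q = V^{-1} V^{-T}, and f solves V f = e. The sketch below (our own 2-D illustration, not code from the paper) verifies that this pair satisfies v_i^T Q v_i = 1 and f^T v_i = 1, as Lemmas 3.1 and 3.2 require.

```python
# Closed-form feasible construction from the proof of Lemma 3.1 (2-D case):
# Q = V^{-1} V^{-T} satisfies Q . [v_i v_i^T] = 1, and f = V^{-1} e solves
# the linear system (7): v_i^T f = 1 for each vertex v_i.

def inv2(V):
    """Closed-form inverse of a 2x2 matrix."""
    (a, b), (c, d) = V
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

v1, v2 = (2.0, 0.0), (1.0, 1.0)   # linearly independent simplex vertices
V = [list(v1), list(v2)]          # rows of V are the vertices
Vinv = inv2(V)
VinvT = [[Vinv[j][i] for j in range(2)] for i in range(2)]
Q = matmul(Vinv, VinvT)           # Q = V^{-1} V^{-T}, positive definite
f = [Vinv[i][0] + Vinv[i][1] for i in range(2)]  # f = V^{-1} e

for v in (v1, v2):
    vQv = sum(v[i] * Q[i][j] * v[j] for i in range(2) for j in range(2))
    fv = f[0] * v[0] + f[1] * v[1]
    print(round(vQv, 10), round(fv, 10))  # 1.0 1.0 for each vertex
```

By Lemma 3.2, the resulting cone F_SOC = {x | sqrt(x^T Q x) ≤ f^T x} then contains the simplex with vertices v_1, v_2, with both vertices on its boundary.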
In particular, for the standard simplex ∆, v_i = e_i for i = 1, ..., n, where e_i denotes the n × 1 vector with 1 as its ith element and 0 elsewhere. In this case, Q = I, the identity matrix, solves problem (P5). Moreover, ∆ ⊆ F^0_SOC = {x ∈ R^n | sqrt(x^T I x) ≤ e^T x}, where e is the n × 1 vector with all elements being 1. Let B = ee^T ∈ S^n_+; our previous results assure that a lower bound of problem (CPP) can be obtained by solving the following problems:

(8)   min  D · X
      s.t. A_i · X = b_i, i = 1, 2, ..., m,
           (I − B) · X ≤ 0,
           X ∈ S^n_+,

and

(9)   max  b^T y
      s.t. Σ_{i=1}^m y_i A_i + S = D,
           S + λ(I − B) ∈ S^n_+,
           λ ≥ 0.

Therefore, we have obtained a computable approximation to problem (CPP).
Now we consider further improvement of the lower bound. An intuitive idea is to partition the standard simplex ∆ into a collection of k (> 1) small simplices {∆_1, ..., ∆_k} such that ∆ = ∆_1 ∪ ∆_2 ∪ ... ∪ ∆_k. For each ∆_j, we then follow the procedure in this section to get a corresponding second-order cone F^j_SOC = {x ∈ R^n | sqrt(x^T Q_j x) ≤ f_j^T x} such that ∆_j ⊆ G^j_SOC = {x ∈ R^n | x^T Q_j x ≤ 1, f_j^T x = 1}. As the partition becomes finer, these second-order cones may cover R^n_+ more precisely. Notice that for any simplex ∆_j ⊆ ∆, y = e is a solution to system (7). Therefore, for each simplex ∆_j in the partition, B = ee^T is fixed and we only need to compute Q_j for F^j_SOC.

Theorem 3.3. Suppose that ∆ is the standard simplex in R^n and ∆_1, ..., ∆_k ⊆ ∆ such that ∆ = ∆_1 ∪ ∆_2 ∪ ... ∪ ∆_k. If ∆_j ⊆ G^j_SOC, j = 1, 2, ..., k, then ∩_{j=1}^k C_{F^j_SOC} ⊆ C and C* ⊆ Σ_{j=1}^k C*_{F^j_SOC}.
Proof. For ∆ = ∆_1 ∪ ∆_2 ∪ ... ∪ ∆_k, R^n_+ = Cone(∆) = Cone(∆_1) ∪ Cone(∆_2) ∪ ... ∪ Cone(∆_k). Since each C_{Cone(∆_j)}, j = 1, ..., k, is not empty, from Lemma 2.4, we know that C = ∩_{j=1}^k C_{Cone(∆_j)} and C* = Σ_{j=1}^k C*_{Cone(∆_j)}. Furthermore, Theorem 2.5 leads to C_{Cone(∆_j)} = C_{∆_j} and C*_{Cone(∆_j)} = C*_{∆_j}. Therefore, C = ∩_{j=1}^k C_{∆_j} and C* = Σ_{j=1}^k C*_{∆_j}. Since ∆_j ⊆ G^j_SOC ⊆ F^j_SOC, from Lemma 2.1, we have C_{F^j_SOC} ⊆ C_{∆_j} and C*_{∆_j} ⊆ C*_{F^j_SOC}. Consequently, ∩_{j=1}^k C_{F^j_SOC} ⊆ C and C* ⊆ Σ_{j=1}^k C*_{F^j_SOC}.
Now, assume a simplex partition ∆ = ∆_1 ∪ ... ∪ ∆_k is obtained. We can further partition ∆_j, 1 ≤ j ≤ k, into j_k (> 1) smaller simplices such that ∆_j = ∆_{j_1} ∪ ... ∪ ∆_{j_k}. As the number of simplices increases by repeating this process, the simplex partition gets finer and the approximation error becomes smaller. Notice that the approximation of problem (CPP) under a fixed simplex partition ∆ = ∆_1 ∪ ... ∪ ∆_k can be formulated as below:

(ACPP)   min  D · X
         s.t. A_i · X = b_i, i = 1, 2, ..., m,
              X = X_1 + ... + X_k,
              (Q_j − B) · X_j ≤ 0,
              X_j ∈ S^n_+, j = 1, 2, ..., k.

Its dual problem is in the following form:

(ADCPP)  max  b^T y
         s.t. Σ_{i=1}^m y_i A_i + S = D,
              S + λ_j (Q_j − B) ∈ S^n_+,
              λ_j ≥ 0, j = 1, 2, ..., k.

Furthermore, in order to measure the accuracy of the approximation, we define a σ-neighborhood of a domain F.

Definition 3.4 (Definition 3 of [10]). For a set F ⊆ R^n and σ > 0, the σ-neighborhood of F is defined as N_{σ,F} = {x ∈ R^n | ∃ y ∈ F s.t. ||x − y||_∞ < σ}, where ||·||_∞ denotes the infinity norm.

We expect that, when the original problem achieves a finite optimal value, the lower bounds obtained from the approximation converge to the optimal value as the number of simplices increases.
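Definition 3.4's σ-neighborhood test is a simple infinity-norm check. As a minimal sketch (ours; the finite sampling of F is our simplification, since the definition quantifies over all of F), the snippet below tests membership in N_{σ,F} for a sampled standard simplex in R^2.

```python
# Minimal sketch of Definition 3.4: x lies in the sigma-neighborhood
# N_{sigma,F} if some y in F satisfies ||x - y||_inf < sigma. Here F is
# approximated by a finite sample of the standard simplex in R^2.

def in_sigma_neighborhood(x, F_samples, sigma):
    """True if some sampled y in F is within sigma of x in the inf-norm."""
    return any(max(abs(xi - yi) for xi, yi in zip(x, y)) < sigma
               for y in F_samples)

# Sample the standard simplex in R^2: {(t, 1 - t) | 0 <= t <= 1}.
simplex_pts = [(k / 100.0, 1.0 - k / 100.0) for k in range(101)]

print(in_sigma_neighborhood((0.52, 0.55), simplex_pts, 0.1))  # True
print(in_sigma_neighborhood((2.0, -1.0), simplex_pts, 0.1))   # False
```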
Theorem 3.5. Assume that problem (CPP) achieves a finite optimal value. Let l_i, i = 1, 2, ..., be the lower bounds sequentially returned by solving the corresponding problem (ACPP) with i simplices in the partition. Then, for any ε > 0, there exists an N_ε ∈ N_+ such that |V(CPP) − max_{1≤i≤N} l_i| < ε for any N ≥ N_ε.

Proof. Let F_CPP = {X ∈ S^n | X ∈ C* and A_i · X = b_i for i = 1, 2, ..., m} be the feasible domain of problem (CPP). For any σ > 0, let B_σ = {X ∈ S^n | X ∈ C*_{N_{σ,∆}} and A_i · X = b_i for i = 1, 2, ..., m} be a closed neighborhood of F_CPP. Since problem (CPP) achieves a finite optimal value and its objective function is continuous, for any ε > 0, there exists a σ > 0 such that |V(CPP) − min_{X∈B_σ} D · X| < ε. Note that the volume of a simplex with equal edge length s is (s^n / n!) sqrt((n+1) / 2^n). Thus, we can use up to (⌈2/σ + 1⌉)^n simplices with edges equal to σ to cover ∆ (otherwise, the total volume of all these simplices becomes larger than the volume of ∆). Then for each simplex ∆^i and the corresponding cross-section G^i_SOC, we have G^i_SOC ⊆ N_{σ,∆^i}. Thus, for each simplex ∆^i, the corresponding second-order cone satisfies F^i_SOC ⊆ Cone(N_{σ,∆^i}). Let N_ε = (⌈2/σ + 1⌉)^n; we have C*_{F^i_SOC} ⊆ C*_{N_{σ,∆}} for each i and Σ_{i=1}^{N_ε} C*_{F^i_SOC} ⊆ C*_{N_{σ,∆}}. Therefore, min_{X∈B_σ} D · X ≤ V(ACPP) = l_{N_ε} ≤ V(CPP) and |V(CPP) − max_{1≤i≤N} l_i| < ε for any N ≥ N_ε.

In summary, we have shown that, for a completely positive programming problem with a finite optimal value, the task of finding an ε-optimal solution is computable.

4. Adaptive scheme and RLT for improvement. In order to obtain a precise covering of R^n_+, many second-order cones may be needed. For computational reasons, involving fewer second-order cones is preferred. Lu et al. [19] pointed out that, for an optimization problem, not every feasible point is equally important to solving the problem. A sensitive subregion, which has a high potential to contain an optimal solution, should receive more attention. Following this idea, we may arrange more second-order cones to approximate a sensitive subregion and only a few second-order cones to coarsely cover the insensitive subregions. In this way, as the sensitive subregion is approximated more finely, a better lower bound may be achieved.

Now, assuming a simplex partition ∆ = ∆_1 ∪ ∆_2 ∪ ... ∪ ∆_k is obtained, we consider how to efficiently refine it for better approximation. Since some of these simplices may not be involved in the current optimal solution of problem (ACPP), we only need to focus on the region most related to the current optimal solution. In this way, the precision over the sensitive subregion can be improved and a better optimal solution may be obtained.
However, for a current optimal solution, there may exist several simplices that are simultaneously involved. If all of them are divided at each iteration, then the number of simplices will increase quickly. Therefore, we need to consider which of them is the most important one. From the SDP theory (ref. [2]), the optimality conditions for problems (ACPP) and (ADCPP) are:

(Optimality Condition)
    (X, X_1, ..., X_k) is feasible to (ACPP),
    (S, y, λ) is feasible to (ADCPP),
    X_i · [S + λ_i (Q_i − B)] = 0, i = 1, 2, ..., k.
For a pair of optimal solutions of problems (ACPP) and (ADCPP), if X_i ≠ 0 for some i, then X_i · S + λ_i X_i · (Q_i − B) = 0. Hence S + λ_i (Q_i − B) cannot be positive definite. Moreover, S + λ_i (Q_i − B) is on the boundary of the positive semidefinite cone, and S + λ_i (Q_i − B) ∈ S^n_+ is an active constraint for the current optimal solution of problem (ADCPP). We only need to partition the simplices ∆_i with i ∈ {1 ≤ i ≤ k | X_i ≠ 0} into finer simplices.

Theorem 4.1. Let X, X_1, ..., X_k ∈ S^n_+ be such that X = X_1 + ... + X_k. If (Q_i − B) · X_i ≤ 0 and rank(X_i) = r_i for i = 1, 2, ..., k, then there exists a decomposition

(10)   X = Σ_{i=1}^k Σ_{j=1}^{r_i} μ_{ij} x_{ij} x_{ij}^T

with μ_{ij} > 0, x_{ij} ∈ R^n, x_{ij}^T Q_i x_{ij} ≤ 1, e^T x_{ij} = 1 and Σ_{i=1}^k Σ_{j=1}^{r_i} μ_{ij} = B · X.

Proof. By Lemma 2.9, for each X_i we have a decomposition X_i = Σ_{j=1}^{r_i} x̄_{ij} x̄_{ij}^T such that x̄_{ij}^T (Q_i − B) x̄_{ij} ≤ 0 for j = 1, 2, ..., r_i. Therefore, x̄_{ij}^T Q_i x̄_{ij} ≤ x̄_{ij}^T ee^T x̄_{ij}. Since Q_i ∈ S^n_{++} and x̄_{ij} is nonzero, we have 0 < e^T x̄_{ij}. Let x_{ij} = x̄_{ij} / (e^T x̄_{ij}) and μ_{ij} = x̄_{ij}^T B x̄_{ij}; then X_i = Σ_{j=1}^{r_i} μ_{ij} x_{ij} x_{ij}^T, x_{ij}^T Q_i x_{ij} ≤ 1, e^T x_{ij} = 1 and Σ_{j=1}^{r_i} μ_{ij} = B · X_i. Since X = X_1 + ... + X_k, the conclusion comes by summing up the decompositions of the X_i.
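The normalization step in Theorem 4.1 can be verified numerically. The snippet below is a worked check of ours (not from the paper): starting from arbitrary rank-one pieces x̂_j with e^T x̂_j > 0, it normalizes x_j = x̂_j / (e^T x̂_j) and sets μ_j = x̂_j^T B x̂_j = (e^T x̂_j)^2, then confirms that the weighted terms rebuild X and that Σ_j μ_j = B · X with B = ee^T.

```python
# Worked check of the normalization in Theorem 4.1 (2x2, pure Python):
# given X = sum_j xh_j xh_j^T with e^T xh_j > 0, set x_j = xh_j/(e^T xh_j)
# and mu_j = (e^T xh_j)^2; then X = sum_j mu_j x_j x_j^T and
# sum_j mu_j = B . X for B = e e^T.

def outer(x, s=1.0):
    """Scaled outer product s * x x^T for a 2-vector x."""
    return [[s * x[0] * x[0], s * x[0] * x[1]],
            [s * x[1] * x[0], s * x[1] * x[1]]]

def madd(A, C):
    return [[A[i][j] + C[i][j] for j in range(2)] for i in range(2)]

raw = [(2.0, 1.0), (0.5, 1.5)]        # rank-one pieces xh_j, e^T xh_j > 0
X = [[0.0, 0.0], [0.0, 0.0]]
for xh in raw:
    X = madd(X, outer(xh))

mus, X_rebuilt = [], [[0.0, 0.0], [0.0, 0.0]]
for xh in raw:
    s = xh[0] + xh[1]                 # e^T xh_j
    x = (xh[0] / s, xh[1] / s)        # normalized: e^T x_j = 1
    mus.append(s * s)                 # mu_j = (e^T xh_j)^2 = xh^T B xh
    X_rebuilt = madd(X_rebuilt, outer(x, mus[-1]))

BX = sum(X[i][j] for i in range(2) for j in range(2))  # B . X, B = ee^T
max_diff = max(abs(X[i][j] - X_rebuilt[i][j])
               for i in range(2) for j in range(2))
print(sum(mus), BX, max_diff < 1e-9)  # 13.0 13.0 True
```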
Let I = {(i, j) | i = 1, 2, ..., k, 1 ≤ j ≤ r_i} be an index set. If x_{ij} ∈ R^n_+ for each (i, j) ∈ I in the decomposition, then X is completely positive and feasible to (ACPP); hence X is optimal to (CPP). Otherwise, there exists at least one (i, j) ∈ I such that x_{ij} ∉ R^n_+. Denoting I_p = {(i, j) ∈ I | x_{ij} ∉ R^n_+} as the index set for the infeasible solutions, we can define the sensitive points as follows:

Definition 4.2. Let X = Σ_{i=1}^k Σ_{j=1}^{r_i} μ_{ij} x_{ij} x_{ij}^T be a decomposition of an optimal solution of problem (ACPP) such that μ_{ij} > 0, x_{ij} ∈ R^n, x_{ij}^T Q_i x_{ij} ≤ 1, e^T x_{ij} = 1 and Σ_{i=1}^k Σ_{j=1}^{r_i} μ_{ij} = B · X. Define any x* = argmin_{x ∈ {x_{ij} | (i,j) ∈ I_p}} x^T D x to be a sensitive point. Let t be the smallest first index i among all sensitive points x*; then ∆_t is defined to be the "corresponding sensitive simplex."

Notice that, for a relaxation problem, redundant constraints may be added to improve the lower or upper bound of the problem. One of the most important techniques in this area is the reformulation-linearization technique (RLT) (ref. [1]). Since C* = C*_{R^n_+}, a simple RLT inequality X ≥ 0 can be added to problem (ACPP) (here X ≥ 0 indicates that every element of matrix X is nonnegative). Then, we have the following relaxation problem with RLT constraints:

(RACPP)  min  D · X
         s.t. A_i · X = b_i, i = 1, 2, ..., m,
              X = X_1 + ... + X_k,
              X ≥ 0,
              (Q_j − B) · X_j ≤ 0,
              X_j ∈ S^n_+, j = 1, 2, ..., k.

Now, we design an adaptive algorithm to approximate the completely positive programming problem.

Adaptive Approximation for Completely Positive Programming (AACPP) Algorithm:

Step 1 (Initialization Step): Set the second-order cone F^0_SOC = {x ∈ R^n | sqrt(x^T I x) ≤ e^T x} to cover ∆. Let k = 1.
Step 2: Solve (RACPP) with approximation cones to obtain the optimal solution X ∗ = X1∗ + ... + Xk∗ . Record the optimal value of (RACPP) as lk . Step 3: Decompose X ∗ , if there is no sensitive point, then return X ∗ as the optimal solution of problem (CPP). Otherwise, find all sensitive points x∗ and the corresponding sensitive simplex ∆t . Step 4: If all sensitive points x∗ ∈ / Cone(Nσ,∆ ) and the computation time is less than the pre-assigned maximum time, go to Step 5. Otherwise, return max{l1 , ..., lk } as a lower bound for the problem (CPP). t Step 5: Drop FSOC from the approximation cones. Pick the longest edge of ∆t and split it into two equal pieces to get two new simplices. After that, approximate each of these two simplices by two smaller second-order cones. Set k = k + 1 and go to Step 2. Remark 1. The reason we choose the longest edge is that this choice can prevent the simplex from being narrowly shaped. In this way, numerical solvers can easily find a starting interior point and hence improve the efficiency. Moreover, if algorithm (AACPP) converges, the convergence speed is not correlated with the √ theoretical upper bound (d σ2 + 1e)n , but a good approximation to problem (CPP) is indeed provided. We now use Example 4.1 of [7] to illustrate the power of the AACPP algorithm: Example 1. max s.t.
max  [0 0; 0 1] · X
s.t. [2 1; 1 2] · X = 2,
     X ∈ C*.                    (11)
The optimal solution of this problem is obtained at the first iteration of (AACPP), while the method proposed in [7] takes more than 6 iterations. The reason is that C* = S^n_+ ∩ N^n_+ and C = S^n_+ + N^n_+ for n ≤ 4. Thus, our LMI representation and RLT constraints can precisely depict C* from the very beginning.

5. Implementation issue. Adding RLT constraints may bring a much better lower bound V_RLT for problem (CPP) at the beginning of the algorithm. However, from our computational tests, we notice that this improvement trend may slow down as algorithm (AACPP) continues, especially for higher dimensional problems. This phenomenon is related to the validity of the second-order cone constraints. Notice that in the first k partitions (k varies across problems), the second-order cone constraints may not be active in problem (RACPP) due to a coarse covering, while the nonnegative RLT inequalities X ≥ 0 play the key role in the approximation problem. However, as the partition of the simplices becomes finer, the second-order cone constraints eventually get involved and improve the lower bound as expected. To improve the efficiency of algorithm (AACPP), we may want to design an initial covering strategy that gets the second-order cone constraints actively involved much earlier in the iterations.

An intuitive idea to accelerate the process is to find a smaller sensitive subregion and use it as an initial covering at the beginning of algorithm (AACPP). For the standard simplex ∆ with the second-order cone F^0_SOC = {x ∈ R^n | √(x^T I x) ≤ e^T x}, let V_RLT be the optimal value of the corresponding problem (RACPP). If the obtained optimal solution is infeasible to problem (CPP), then we shrink ∆ uniformly by a ratio α, 0 < α ≤ 1, to get a new simplex ∆_α which satisfies the following properties: (i) ∆_α and ∆ have the same shape and share the same center point (the mean of the n vertices of a simplex); (ii) the size of the new simplex ∆_α is α times the size of the standard simplex ∆; (iii) the corresponding faces of ∆ and ∆_α are parallel. Notice that ∆_α also lies on the hyperplane H = {x ∈ R^n | Σ_{i=1}^n x_i = 1}. Therefore, for ∆_α, we can easily get the corresponding smallest-volume cross-section G^α_SOC = {x ∈ R^n | x^T Q_α x ≤ 1, e^T x = 1} and the second-order cone F^α_SOC = {x ∈ R^n | √(x^T Q_α x) ≤ e^T x} such that ∆_α ⊆ G^α_SOC ⊆ F^α_SOC. Then, we solve the following problem based on F^α_SOC:
min  D · X
s.t. A_i · X = b_i, i = 1, 2, ..., m,
     X ≥ 0,                                      (CPPα)
     (Q_α − B) · X ≤ 0,
     X ∈ S^n_+.
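Properties (i)-(iii) above say that ∆_α is ∆ shrunk about its centroid. The following minimal Python sketch of that shrinking step is ours (the paper's implementation is in MATLAB):

```python
import math

def shrink_simplex(vertices, alpha):
    """Shrink a simplex about its centroid by ratio alpha (0 < alpha <= 1).

    vertices: list of n points (each a list of coordinates).
    Returns the vertices of the shrunken simplex Delta_alpha.
    """
    n = len(vertices)
    dim = len(vertices[0])
    # Centroid: the common center point of Delta and Delta_alpha.
    c = [sum(v[d] for v in vertices) / n for d in range(dim)]
    # Move every vertex toward the centroid: v' = c + alpha * (v - c).
    return [[c[d] + alpha * (v[d] - c[d]) for d in range(dim)] for v in vertices]

def edge_len(u, v):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# Standard simplex in R^3 (vertices e_1, e_2, e_3), shrunk by alpha = 0.5.
delta = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
delta_half = shrink_simplex(delta, 0.5)
```

Each shrunken vertex still has coordinates summing to 1, so ∆_α stays on the hyperplane H, and every edge length is scaled by exactly α, consistent with properties (i)-(iii).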
As α decreases from 1 to 0, the domain of problem (CPPα) becomes smaller. Therefore, the optimal value of problem (CPPα) remains equal to V_RLT at the beginning and then monotonically increases after α passes an inflection point. We call this inflection point the breaking value.

Remark 2. For a fixed α > 0, due to the special structure of ∆_α, the corresponding second-order cone F^α_SOC can be easily derived by solving a simple system of linear equations. Moreover, any efficient line search method can be used to find the breaking value.

Suppose that we have obtained a ratio α* which is close to the breaking value. Let ∆^{α*}_i ⊂ ∆ denote the simplex which satisfies the following properties: (i) ∆^{α*}_i has the same shape as ∆; (ii) ∆^{α*}_i shares the vertex e_i with ∆; (iii) the n − 1 edges of ∆^{α*}_i attached to e_i are parts of the n − 1 edges of ∆; (iv) ∆^{α*}_i has only one tangent point with the cross-section G^{α*}_SOC. Moreover, let S^{α*} = ∆ \ (∆^{α*}_1 ∪ ∆^{α*}_2 ∪ ... ∪ ∆^{α*}_n) be the remaining region in ∆. Then ∆^{α*}_1 ∪ ∆^{α*}_2 ∪ ... ∪ ∆^{α*}_n ∪ S^{α*} is a covering of ∆. Notice that S^{α*} is the desired sensitive subregion. These simplices constitute the initial status of algorithm (AACPP).
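The line search mentioned in Remark 2 can, for instance, be organized as a bisection on α: below the breaking value the optimal value of (CPPα) exceeds V_RLT, and at or above it the value is flat. The sketch below is ours; the mock oracle simply stands in for solving (CPPα), and its parameters are illustrative:

```python
def find_breaking_value(value_of_cpp_alpha, v_rlt, tol=1e-9, iters=60):
    """Bisection for the breaking value: the largest alpha at which the
    optimal value of (CPP_alpha) still equals V_RLT."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if value_of_cpp_alpha(mid) > v_rlt + tol:
            lo = mid   # still on the increasing part: breaking value is above mid
        else:
            hi = mid   # flat part: breaking value is at or below mid
    return 0.5 * (lo + hi)

# Mock oracle: flat at V_RLT for alpha >= 0.6, increasing below (illustrative only).
V_RLT = 1.0
mock = lambda a: V_RLT + max(0.0, 0.6 - a) * 5.0
alpha_star = find_breaking_value(mock, V_RLT)
```

Any other one-dimensional search (e.g., golden-section) would do equally well here; each probe costs one solve of (CPPα).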
Remark 3. Since the simplices ∆^{α*}_i, i = 1, ..., n, are rotations of one another, we only need to compute one of them. From the relationship between the center point of ∆ and the shrinking ratio α*, the simplices ∆^{α*}_i, i = 1, ..., n, and ∆^{α*} can be derived without much difficulty. Due to the special structures of these simplices, the corresponding second-order cones can also be calculated without much difficulty.

From Remarks 2 and 3, we see that the initial status of algorithm (AACPP) can be efficiently determined so that the lower bound V_RLT may be improved more effectively. Here we use Example 5.1 of [3] to show the effectiveness of this strategy for setting an initial covering of ∆.

Example 2.
min  D · X
s.t. A · X = b,
     X ∈ C*.                    (12)
where

    [ 1 0 1 1 0 ]        [ 1 1 1 1 1 ]
    [ 0 1 0 1 1 ]        [ 1 1 1 1 1 ]
D = [ 1 0 1 0 1 ],   A = [ 1 1 1 1 1 ],   b = 1.
    [ 1 1 0 1 0 ]        [ 1 1 1 1 1 ]
    [ 0 1 1 0 1 ]        [ 1 1 1 1 1 ]

The optimal value of this problem is 0.5. Given σ = 10^-3, the results are shown in Table 1, in which "#Simplices" denotes the number of simplices used in the algorithm, "Time" denotes the computation time in seconds, and "Value" denotes the lower bound calculated by the algorithm.
Table 1. Results of algorithm (AACPP) with or without the strategy for initial covering

Type      #Simplices   Time     Value
With      23           7.12s    0.4999
Without   86           49.38s   0.4999
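As a sanity check on Example 2 (under our reading of the recovered data, D = E − A_G for the 5-cycle G, with A the all-ones matrix), the optimal value 0.5 in Table 1 is attained by splitting weight evenly over two coordinates i, j with D_ij = 0. The following check is ours, not part of the paper:

```python
# D = E - A_G for the 5-cycle G: our reconstruction of the data in Example 2.
D = [[1, 0, 1, 1, 0],
     [0, 1, 0, 1, 1],
     [1, 0, 1, 0, 1],
     [1, 1, 0, 1, 0],
     [0, 1, 1, 0, 1]]

def qform(M, x):
    """Evaluate x^T M x."""
    n = len(x)
    return sum(M[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

# A minimizer of the underlying standard quadratic program:
# x = (1/2, 1/2, 0, 0, 0), where D[0][1] = 0.
x_star = [0.5, 0.5, 0.0, 0.0, 0.0]
val = qform(D, x_star)

# D should be symmetric, as required of an objective matrix.
symmetric = all(D[i][j] == D[j][i] for i in range(5) for j in range(5))
```

The value x*^T D x* = 0.25 + 0.25 + 0 = 0.5 matches the optimal value reported for the example.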
6. Numerical results. In this section, some computational results on four types of commonly studied completely positive programming problems are presented. Algorithm (AACPP) was implemented in MATLAB 7.9.0 on a computer with an Intel Core 2 CPU at 2.8 GHz and 4 GB of memory. The solvers CVX [14] and SeDuMi 1.3 [27] were used to solve the subproblems. The optimal values of the test problems were calculated by the commercial software BARON [25].

6.1. Box constrained quadratic programming problem. A box constrained quadratic programming problem has the following form:

min  x^T Q x + c^T x
s.t. 0 ≤ x_i ≤ 1, i = 1, ..., n.                 (13)
It can be reformulated as the following completely positive programming problem (ref. [5]):

min  Q · X + c^T x
s.t. x_i + s_i = 1, i = 1, ..., n,
     X_ii + S_ii + 2R_ii = 1, i = 1, ..., n,
     [ 1  x^T  s^T ]
     [ x  X    R^T ] ∈ C*,                       (14)
     [ s  R    S   ]
     x ≥ 0, s ≥ 0, X, S, R ∈ S^n.
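The diagonal constraints in (14) can be checked on the canonical rank-one lifting: for any box-feasible x, put s = e − x, X = xx^T, S = ss^T and R = sx^T; then X_ii + S_ii + 2R_ii = (x_i + s_i)^2 = 1, and the bordered matrix equals [1; x; s][1; x; s]^T, which is completely positive. A small numeric check (our illustration, not from the paper):

```python
def lift_box_point(x):
    """Rank-one lifting of a box-feasible point for formulation (14)."""
    n = len(x)
    s = [1.0 - xi for xi in x]                              # slacks: x_i + s_i = 1
    X = [[x[i] * x[j] for j in range(n)] for i in range(n)]  # X = x x^T
    S = [[s[i] * s[j] for j in range(n)] for i in range(n)]  # S = s s^T
    R = [[s[i] * x[j] for j in range(n)] for i in range(n)]  # R = s x^T
    return s, X, S, R

x = [0.3, 0.8]
s, X, S, R = lift_box_point(x)
# Each diagonal constraint of (14) should evaluate to (x_i + s_i)^2 = 1.
diag_sums = [X[i][i] + S[i][i] + 2 * R[i][i] for i in range(len(x))]
```

This is only a feasibility check for one lifted point; (14) itself optimizes over all completely positive liftings.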
Hansen et al. [15] provided a classical method for generating Q and c. We followed their method to randomly generate the instances. For example, Spar030-060-2 indicates the second instance of the problem type whose dimension and density are 30 and 0.6, respectively. Our computational results are shown in Table 2. In this table, "n" denotes the dimension of the original problem, "fopt" denotes the optimal value of the original problem obtained by BARON, "fAACPP" denotes the lower bound calculated by (AACPP), "Iters" denotes the number of iterations used in the approximation process, and "Time" denotes the CPU time in seconds. For all instances, we choose σ = 5 × 10^-4.
Table 2. Box constrained quadratic programming problems

Instance        n    fopt        fAACPP      Iters   Time
spar020-100-1   20   -966        -966.001    1       24.93s
spar020-100-2   20   -769        -769.001    1       27.85s
spar020-100-3   20   -1019       -1019.001   1       26.20s
spar030-060-1   30   -1514       -1514.057   1       333.39s
spar030-060-2   30   -1811       -1811.691   1       338.30s
spar030-060-3   30   -1917       -1917.001   1       261.05s
spar030-080-1   30   -3072       -3072.003   1       285.13s
spar030-080-2   30   -2186.208   -2186.209   1       278.43s
spar030-080-3   30   -1897.522   -1897.534   1       311.47s
spar040-030-1   40   -2671       -2671.023   1       1284.32s
spar040-030-2   40   -2367       -2367.332   1       1322.56s
spar040-030-3   40   -1943.972   -1944.481   22      36447.23s
The results clearly show that, for most of the completely positive programming problems derived from box constrained quadratic programming problems, algorithm (AACPP) provides excellent lower bounds at the first iteration. For the only exceptional instance, algorithm (AACPP) returns an excellent lower bound in 22 iterations but takes much more computation time, which is a direct outcome of the low-speed SDP solver in MATLAB.

6.2. Standard quadratic programming problem. A standard quadratic programming problem has the following form:

min  x^T Q x
s.t. Σ_{i=1}^n x_i = 1,
     x_i ≥ 0, i = 1, ..., n.                     (15)
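For intuition, tiny instances of a problem of form (15) can be bounded from above by enumerating grid points on the simplex; with Q = I in R^3 the minimum is 1/n = 1/3 at the barycenter, which lies on the grid whenever N is divisible by 3. This brute-force sketch is ours, not the method of the paper:

```python
def simplex_grid(total, parts):
    """All nonnegative integer vectors of length `parts` summing to `total`."""
    if parts == 1:
        yield (total,)
        return
    for first in range(total + 1):
        for rest in simplex_grid(total - first, parts - 1):
            yield (first,) + rest

def std_qp_grid_min(Q, N):
    """Minimum of x^T Q x over grid points x = k/N on the simplex."""
    n = len(Q)
    best = None
    for k in simplex_grid(N, n):
        # Work in integers: x^T Q x = (k^T Q k) / N^2.
        val = sum(Q[i][j] * k[i] * k[j] for i in range(n) for j in range(n))
        if best is None or val < best:
            best = val
    return best / (N * N)

# Q = identity: the minimum of ||x||^2 over the simplex is 1/n at the barycenter.
Q = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
val = std_qp_grid_min(Q, 30)
```

Grid enumeration grows combinatorially in n and N and is only a reference check; the conic reformulation below is what makes the problem tractable.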
Bomze and de Klerk [3] showed that this problem can be reformulated as the following completely positive programming problem:

min  Q · X
s.t. E · X = 1,
     X ∈ C*,                                     (16)
where E is the n × n matrix with all elements equal to 1. The method in [15] was used to generate Q and c for instances with different dimensions and densities. Computational results are shown in Table 3. The notations in Table 3 have the same meanings as those in Table 2. For these instances, we choose σ = 10^-3. The results clearly show that, for completely positive programming problems derived from standard quadratic programming problems, algorithm (AACPP) finds the optimal values of the original problems at the first iteration. The computation time is again dominated by the low-speed SDP solver in MATLAB.

6.3. Maximum clique problem. Let w(G) and A_G denote the maximum clique number and the adjacency matrix of a graph G, respectively. Motzkin and Straus [20] gave a completely positive formulation of the maximum clique problem as
Table 3. Standard quadratic programming problems

Instance        n    fopt       fAACPP     Iters   Time
spar020-100-1   20   -44.9231   -44.9231   1       1.12s
spar020-100-2   20   -49        -49        1       0.58s
spar020-100-3   20   -48        -48        1       0.44s
spar030-060-1   30   -48        -48        1       2.82s
spar030-060-2   30   -39.0508   -39.0508   1       3.86s
spar030-060-3   30   -45        -45        1       2.76s
spar030-080-1   30   -49        -49        1       3.01s
spar030-080-2   30   -49        -49        1       3.37s
spar030-080-3   30   -38        -38        1       3.39s
spar040-030-1   40   -46        -46        1       14.09s
spar040-030-2   40   -35.2      -35.2      1       15.29s
spar040-030-3   40   -30.3707   -30.3707   1       15.60s
follows:

w(G) = max  E · X
       s.t. (E − A_G) · X = 1,
            X ∈ C*,                              (17)
where E is the n × n matrix with all elements equal to 1. Computing w(G), or even getting a reasonable approximation of w(G), is a difficult task [5]. We apply our algorithm to two types of graphs, known as Sanchis [26] and Brock [6] graphs, from the challenging DIMACS collection [11]. We choose the depth of hiding u = 0 in generating the Brock graphs. Computational results are shown in Table 4. In this table, "node" and "edge" denote the numbers of nodes and edges in the graph G, respectively, "w(G)" denotes the maximum clique number of the graph G, and "wAACPP" denotes the upper bound calculated by (AACPP). For these instances, we set σ = 10^-3. The results show that algorithm (AACPP) returns the optimal maximum clique numbers for both graph types at the first iteration of the algorithm. The computation time is determined by the speed of the SDP solver in MATLAB.

6.4. Binary constrained quadratic programming problem. A binary constrained quadratic programming problem has the following form:

min  x^T Q x + c^T x
s.t. x_i ∈ {0, 1}, i = 1, ..., n.                (18)
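On a tiny instance, (18) can be solved exactly by enumerating all 2^n binary vectors, which gives a ground truth to compare lower bounds against. The instance data below is our own illustration:

```python
from itertools import product

def binary_qp_min(Q, c):
    """Exact minimum of x^T Q x + c^T x over x in {0,1}^n by enumeration."""
    n = len(c)
    best = None
    for x in product((0, 1), repeat=n):
        val = (sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
               + sum(c[i] * x[i] for i in range(n)))
        if best is None or val < best:
            best = val
    return best

# Illustrative 2-dimensional instance: the minimum -3 is attained at x = (1, 1).
Q = [[2, -3], [-3, 2]]
c = [-1, 0]
opt = binary_qp_min(Q, c)
```

Enumeration is exponential in n, which is why the completely positive reformulation below, rather than brute force, is the object of study.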
Burer [8] pointed out that the optimal value of problem (18) equals the optimal value of the following completely positive programming problem:

min  Q · X + c^T x
s.t. x_i = X_ii, i = 1, ..., n,
     [ 1  x^T ]
     [ x  X   ] ∈ C*.                            (19)
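The constraint x_i = X_ii in (19) is exactly what enforces binary behavior: under a rank-one lifting X = xx^T it reads x_i = x_i^2, which holds if and only if x_i ∈ {0, 1}. A quick check of this observation (ours, not from [8]):

```python
def diag_matches(x):
    """Check x_i == X_ii for the rank-one lifting X = x x^T,
    i.e., x_i == x_i^2 for every coordinate."""
    return all(abs(xi - xi * xi) < 1e-12 for xi in x)

binary_point = [0, 1, 1, 0]        # satisfies the diagonal constraint
fractional_point = [0.5, 1.0]      # violates it in the first coordinate
```

For general (non-rank-one) completely positive X the constraint is a relaxation of this pointwise condition, but Burer's result shows the optimal values still coincide.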
We followed the method in [15] to randomly generate the instances. Computational results are shown in Table 5. The notations in Table 5 have the same meanings as those in Table 2. Moreover, "%Error" denotes the percentage of the relative gaps
Table 4. Maximum clique problems

Instance     node   edge   w(G)   wAACPP   Iters   Time
Sanchis20    20     100    5      5        1       0.42s
Sanchis30    30     200    8      8        1       4.41s
Sanchis40    40     500    9      9        1       16.06s
Sanchis50    50     900    10     10       1       69.46s
Sanchis60    60     1100   14     14       1       216.98s
Sanchis70    70     1500   20     20       1       453.44s
Sanchis80    80     1800   20     20       1       1075.94s
Sanchis90    90     2000   22     22       1       2123.63s
Sanchis100   100    2500   25     25       1       4081.67s
Brock20      20     70     5      5        1       0.43s
Brock30      30     120    8      8        1       3.54s
Brock40      40     242    9      9        1       15.22s
Brock50      50     350    10     10       1       54.73s
Brock60      60     541    14     14       1       171.52s
Brock70      70     701    20     20       1       411.03s
Brock80      80     903    20     20       1       929.74s
Brock90      90     1568   22     22       1       1282.75s
Brock100     100    1978   25     25       1       3625.51s
between the optimal values and the lower bounds. For these instances, we choose σ = 10^-2.

Table 5. Binary constrained quadratic programming problems

Instance        n    fopt        fAACPP      %Error   Iters   Time
spar010-100-1   10   -535        -537.437    0.46%    8       16.92s
spar010-100-2   10   -390        -391.088    0.28%    34      68.19s
spar010-100-3   10   -554        -554.001    0.00%    1       0.21s
spar020-100-1   20   -915        -919.745    0.52%    23      213.43s
spar020-100-2   20   -984        -986.233    0.23%    18      174.81s
spar020-100-3   20   -910        -914.531    0.50%    21      193.45s
spar030-060-1   30   -1917       -1920.863   0.20%    257     35211.51s
spar030-060-2   30   -2362       -2366.173   0.18%    228     34735.73s
spar030-060-3   30   -2003       -2010.641   0.38%    385     53287.61s
spar030-080-1   30   -2187       -2192.226   0.24%    312     43726.12s
spar030-080-2   30   -1761       -1767.003   0.34%    307     40273.47s
spar030-080-3   30   -1777       -1784.365   0.41%    288     35238.88s
For completely positive programming problems derived from binary constrained quadratic programming problems, Table 5 shows that algorithm (AACPP) efficiently approximates the optimal values of small and medium size problems and obtains good lower bounds for large size problems at much greater computational cost. We notice that the number of iterations is not large, which implies that algorithm (AACPP) is actually efficient in approximating the underlying completely positive cone. The long computation time is due to the low-speed SDP solver in MATLAB.
One aim of this paper is to provide an effective computational scheme for approximating a general completely positive programming problem; such a general scheme has not been developed before. The low iteration numbers in our computational results for all the different types of test problems indicate that algorithm (AACPP) has the potential to approximate the cone of completely positive matrices effectively in a general problem. The long computation time is a direct consequence of using the low-speed SDP solver in the MATLAB environment. For any specific type of problem formulated as a completely positive program, the proposed algorithm (AACPP) can be further customized for computational efficiency. Also, an implementation in C/C++ with a faster SDP solver may greatly reduce the computation time required.

7. Concluding remarks. In this paper, we have provided a computable representation of the cone of nonnegative quadratic forms over a general nontrivial second-order cone. Based on such computable cones over a union of second-order cones, a new approximation scheme has been proposed for the completely positive programming problem. In order to reduce the computational burden and save memory in the process, an adaptive scheme and Reformulation-Linearization Technique (RLT) constraints were adopted. Furthermore, we designed an initial covering strategy to improve the efficiency. It is worth mentioning that our results provide the first proof of the existence of a computable representation of the cone of nonnegative quadratic forms over a general nontrivial second-order cone. It may be used for approximating other optimization problems with unbounded domains in our future study. Algorithm (AACPP) has been designed to approximate a general completely positive programming problem. The numerical results on four types of commonly studied problems strongly support the effectiveness of the algorithm.
For many problems, our algorithm returns very good lower bounds or optimal values at the first iteration. Moreover, the small iteration numbers in all tests indicate that algorithm (AACPP) indeed approximates the underlying completely positive problem efficiently. This presents great potential for solving large size problems with a faster SDP solver.

REFERENCES

[1] K. Anstreicher, Semidefinite programming versus the Reformulation-Linearization Technique for nonconvex quadratically constrained quadratic programming, Journal of Global Optimization, 43 (2009), 471–484.
[2] A. Ben-Tal and A. Nemirovski, "Lectures on Modern Convex Optimization: Analysis, Algorithms and Engineering Applications," SIAM, Philadelphia, 2001.
[3] I. Bomze and E. de Klerk, Solving standard quadratic optimization problems via linear, semidefinite and copositive programming, Journal of Global Optimization, 24 (2002), 163–185.
[4] I. Bomze and G. Eichfelder, Copositivity detection by difference-of-convex decomposition and ω-subdivision, to appear in Mathematical Programming.
[5] I. Bomze, F. Jarre and F. Rendl, Quadratic factorization heuristics for copositive programming, Mathematical Programming Computation, 3 (2011), 37–57.
[6] M. Brockington and J. Culberson, "Camouflaging Independent Sets in Quasi-random Graphs," Cliques, Coloring and Satisfiability: Second DIMACS Implementation Challenge, 26, American Mathematical Society, 1994.
[7] S. Bundfuss and M. Dür, An adaptive linear approximation algorithm for copositive programs, SIAM Journal on Optimization, 20 (2009), 30–53.
[8] S. Burer, On the copositive representation of binary and continuous nonconvex quadratic programs, Mathematical Programming, 120 (2009), 479–495.
[9] S. Burer and K. Anstreicher, Second-order cone constraints for extended trust-region subproblems, submitted to SIAM Journal on Optimization, (2011).
[10] Z. Deng, S.-C. Fang, Q. Jin and W. Xing, Detecting copositivity of a symmetric matrix by an adaptive ellipsoid-based approximation scheme, accepted by European Journal of Operational Research, (2013).
[11] DIMACS Implementation Challenges. ftp://dimacs.rutgers.edu/pub/challenge/graph/benchmarks/clique
[12] M. Dür, "Copositive Programming: A Survey," Recent Advances in Optimization and Its Applications in Engineering, Springer, New York, 2012.
[13] N. Gvozdenović and M. Laurent, The operator Ψ for the chromatic number of a graph, SIAM Journal on Optimization, 19 (2008), 572–591.
[14] M. Grant and S. Boyd, "CVX: Matlab Software for Disciplined Convex Programming," version 1.2, 2010. http://cvxr.com/cvx
[15] P. Hansen, B. Jaumard, M. Ruiz and J. Xiong, Global minimization of indefinite quadratic functions subject to box constraints, Naval Research Logistics, 40 (1993), 373–392.
[16] K. Ikramov, Linear-time algorithm for verifying the copositivity of an acyclic matrix, Computational Mathematics and Mathematical Physics, 42 (2002), 1701–1703.
[17] E. de Klerk and D. Pasechnik, Approximation of the stability number of a graph via copositive programming, Journal of Global Optimization, 12 (2002), 875–892.
[18] C. Lu, S.-C. Fang, Q. Jin, Z. Wang and W. Xing, KKT solution and conic relaxation for solving quadratically constrained quadratic programming problems, SIAM Journal on Optimization, 21 (2010), 1475–1490.
[19] C. Lu, Q. Jin, S.-C. Fang, Z. Wang and W. Xing, An LMI based adaptive approximation scheme to cones of nonnegative quadratic functions, submitted to Mathematical Programming, (2011).
[20] T. Motzkin and E. Straus, Maxima for graphs and a new proof of a theorem of Turán, Canadian Journal of Mathematics, 17 (1965), 533–540.
[21] K. Murty and S. Kabadi, Some NP-complete problems in quadratic and nonlinear programming, Mathematical Programming, 39 (1987), 117–129.
[22] J. Povh and F. Rendl, Copositive and semidefinite relaxations of the quadratic assignment problem, Discrete Optimization, 6 (2009), 231–241.
[23] J. Preisig, Copositivity and the minimization of quadratic functions with nonnegativity and quadratic equality constraints, SIAM Journal on Control and Optimization, 34 (1996), 1135–1150.
[24] R. Rockafellar, "Convex Analysis," Princeton University Press, Princeton, 1996.
[25] N. Sahinidis and M. Tawarmalani, "BARON 9.0.4: Global Optimization of Mixed-Integer Nonlinear Programs," 2010. http://archimedes.cheme.cmu.edu/baron/baron.html
[26] L. Sanchis, Test case construction for the vertex cover problem, in "Computational Support for Discrete Mathematics," DIMACS Series in Discrete Mathematics and Theoretical Computer Science, 15, American Mathematical Society, Providence, Rhode Island, (1992).
[27] J. Sturm, Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones, Optimization Methods and Software, 11&12 (1999), 625–653.
[28] J. Sturm and S. Zhang, On cones of nonnegative quadratic functions, Mathematics of Operations Research, 28 (2003), 246–267.
[29] Y. Ye and S. Zhang, New results on quadratic minimization, SIAM Journal on Optimization, 14 (2003), 245–267.
[30] J. Žilinskas, Copositive programming by simplicial partition, Informatica, 22 (2011), 601–614.
Received October 2012; 1st revision December 2012; final revision February 2013.

E-mail address: [email protected]
E-mail address: [email protected]
E-mail address: [email protected]
E-mail address: [email protected]