A New Finite Cone Covering Algorithm for Concave Minimization

Christophe Meyer

École Polytechnique de Montréal, Département de Mathématiques et de Génie Industriel, C.P. 6079, succ. Centre-ville, Montréal (Québec), Canada H3C 3A7. Fax: (514) 340-5665, email: [email protected]

Brigitte Jaumard

GERAD and École Polytechnique de Montréal, Département de Mathématiques et de Génie Industriel, C.P. 6079, succ. Centre-ville, Montréal (Québec), Canada H3C 3A7. Fax: (514) 340-5665, email: [email protected]

December, 1998

Les Cahiers du GERAD G-98-70

Abstract

We propose a new finite cone covering algorithm for concave minimization over a polytope, in which the cones are defined by extreme points of the polytope. The main novelties are the use of cones defined by an arbitrary number of edges, and the subdivision process. The latter is shown to have a "descent property", i.e., all subcones are strictly better in some sense than the subdivided cone, which eliminates the possibility of cycling. The main task in the subdivision process consists in expressing a given point of a face of the polytope as a convex combination of extreme points of this face.

Keywords: concave minimization, cone covering, finite convergence.

Résumé

Nous proposons un nouvel algorithme fini de recouvrement de cônes pour la minimisation d'une fonction concave sur un polytope, dans lequel les cônes sont définis par des points extrêmes du polytope. Les principales nouveautés sont l'utilisation de cônes définis par un nombre quelconque de génératrices, et le processus de subdivision. On montre que ce dernier possède une propriété de descente, c'est-à-dire que les sous-cônes sont strictement meilleurs, dans un certain sens, que le cône subdivisé, ce qui élimine la possibilité de cyclage. L'opération principale de la procédure de subdivision consiste à exprimer un point donné d'une face du polytope comme combinaison convexe de points extrêmes de cette face.

Mots clés: minimisation concave, recouvrement de cônes, convergence finie.

Acknowledgments: Work of the first author was supported by NSERC-network grant NET0200815. Work of the second author was supported by FCAR (Fonds pour la Formation de Chercheurs et l'Aide à la Recherche) grant 95-ER-1048 and by NSERC (Natural Sciences and Engineering Research Council of Canada) grant GP0036426.


1 Introduction

We consider the following concave minimization problem:
$$(CP)\qquad \min\{\, f(x) \mid x \in P \,\}$$

where $f$ is a concave function defined on $\mathbb{R}^n$ and $P$ is a full-dimensional polytope of $\mathbb{R}^n$.

The first conical algorithm for concave minimization was proposed by Tuy [17] in 1964. The main idea was to cover the polytope by polyhedral cones, each of which has exactly $n$ edges corresponding to extreme points of the polytope. If it can be shown that a cone cannot contain a better solution than the one at hand, the cone is fathomed; otherwise it is replaced by a set of subcones that cover it. Unfortunately, Zwart [22] showed that this algorithm is not finitely convergent, as was first expected, by exhibiting a small example on which the algorithm cycles. In order to avoid the possibility of cycling, Bali [1] and Zwart [23] proposed a small modification which has the effect of transforming Tuy's algorithm into a cone partitioning one. During the next two decades, numerous cone partitioning algorithms were developed and shown to be convergent ([4, 5, 6, 8, 16, 18, 19, 20, 21] to cite only a few; for additional references, see the surveys of Horst and Tuy [7] and Benson [2]). Note however that up to now only infinite convergence could be shown when an exact solution is sought; the fact that the cones are no longer defined by extreme points of the original polytope significantly complicates the search for a finitely convergent cone partitioning algorithm. In contrast, the only other paper on cone covering is due to Gallo [3], who proposed a modification of Tuy's 1964 algorithm that results in a search tree which is a subtree of Tuy's; however, no convergence proof was given. It is only recently that Meyer [12] proved the infinite convergence of Gallo's [3] and Tuy's [17] algorithms. Only infinite convergence can be shown because cycling can occur in both algorithms. Two methods were proposed to transform these algorithms into finite ones. These methods are passive with respect to cycling: the first does not attempt to detect cycling but relies on a stopping criterion that guarantees an optimal solution, although only after a very large number of iterations; the second ensures that a cone is not generated twice by keeping a list of all generated cones.

In this paper, we propose a finite cone covering algorithm which prevents cycling. The key point is the subdivision process: it was initially proposed in [11] and used to develop a cone covering algorithm that can be shown to be finitely convergent under the (rather unrealistic) assumption that the best simplicial lower bound is computed exactly for each cone (the notion of best simplicial lower bound was introduced in [4]; computing it amounts to solving a convex program, as shown in [11]). The nonexistence of cycles is proved by defining a function whose value depends only on the cone and on the current incumbent value, and which is shown to decrease strictly when going from one cone to one of its subcones. This function is essentially equal to the optimal value of the linear program that is solved in cone partitioning algorithms in order to check whether the cone can be fathomed.

The paper is organized as follows. In Section 2, the basic operations of the cone covering algorithm are recalled, i.e., the construction of an initial cover, the fathoming test, and the subdivision procedure with its descent property. In Section 3, we give the algorithm and prove its finite convergence. Conclusions are drawn in the last section.

2 Basic operations

In this section, we recall the basic operations needed to define the algorithm, namely the construction of an initial conical cover (Section 2.1), the fathoming test (Section 2.2) and the subdivision procedure (Section 2.3). The descent property satisfied by the subdivision process is given in Section 2.3.3.

2.1 Initial cover

Assume that a nondegenerate vertex of $P$ is available. By performing a change of variables if necessary, we may assume that this vertex is the origin $O$. Let $K^0$ be the polyhedral cone of origin $O$ defined by the $n$ vertices adjacent to $O$. This cone is used to define the initial cover of $P$. If no nondegenerate vertex is available, we compute an interior point of $P$ and use it to decompose $P$ into $n+1$ subpolytopes. This point defines a nondegenerate vertex for each of the $n+1$ subpolytopes (see, e.g., Meyer [12]).

In this paper, we consider polyhedral cones of origin $O$ that may have more than $n$ edges. Moreover, each edge corresponds to an extreme point of $P$, i.e., the edge intersects the boundary of $P$ at an extreme point. The set of cones of origin $O$ whose edges correspond to extreme points of $P$ will be denoted by $\mathcal{K}$. If $u^j$, $j = 1, \dots, p$, are the directions of a cone, this cone will be denoted $\operatorname{cone}\{u^1, u^2, \dots, u^p\}$ (since the cone has vertex $O$, each $u^j$ can be viewed indifferently as a point or as a vector of $\mathbb{R}^n$). For a subset $X$ of $\mathbb{R}^n$, we denote by $\operatorname{conv}(X)$ the convex hull of $X$.
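For illustration, here is a minimal sketch of how the edge directions of $K^0$ can be obtained once the change of variables has been performed, assuming $P$ is given as $\{x : Ax \le b\}$ and the origin is a nondegenerate vertex (so exactly $n$ constraints are tight there); the function name and the tolerance are illustrative choices, not notation from the paper.

```python
import numpy as np

def initial_cone_directions(A, b, tol=1e-9):
    """Edge directions of K^0 at the origin of P = {x : A x <= b}, assuming the
    origin is a nondegenerate vertex (exactly n constraints tight, i.e. b_i = 0)."""
    m, n = A.shape
    tight = np.flatnonzero(np.abs(b) <= tol)      # constraints active at O
    assert tight.size == n, "the origin must be a nondegenerate vertex of P"
    A_I = A[tight]                                # n x n, invertible by nondegeneracy
    # Edge k keeps every tight constraint active except the k-th one.
    return [np.linalg.solve(A_I, -np.eye(n)[:, k]) for k in range(n)]
```

Each direction, followed until it leaves $P$, reaches the vertex of $P$ adjacent to $O$ along that edge, so the cone spanned by these directions is the cone $K^0$ described above.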

2.2 Fathoming of a cone

In order to define the fathoming procedure, we first recall the definition of $\gamma$-extensions. Basically, the $\gamma$-extension along a halfline of origin $O$ (with $\gamma \le f(O)$) is the farthest point $y$ on the halfline with value $f(y) \ge \gamma$. More precisely, if $u$ denotes the direction of the halfline, the $\gamma$-extension is the point $y = \theta u$ with $\theta = \max\{\theta \mid f(\theta u) \ge \gamma,\ \theta u \in C\}$, where $C$ is a large set containing the polytope $P$ whose purpose is to ensure that the $\gamma$-extensions are at finite distance. The notion of $\gamma$-extension was introduced in Tuy [17] (see also Horst and Tuy [7]).
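As a concrete illustration, the following sketch computes a $\gamma$-extension by bisection, assuming $f$ is available as a callable and $C$ is the box $[-R, R]^n$; the bisection is valid because, $f$ being concave with $f(O) \ge \gamma$, the set $\{\theta \ge 0 : f(\theta u) \ge \gamma\}$ is an interval. The function name, the box $C$ and the tolerance are assumptions of this sketch, not prescriptions of the paper.

```python
import numpy as np

def gamma_extension(f, u, gamma, R=1e4, tol=1e-9):
    """Farthest point y = theta*u on the halfline of origin O with f(y) >= gamma,
    restricted to the box C = [-R, R]^n (assumes gamma <= f(0))."""
    u = np.asarray(u, dtype=float)
    theta_max = R / np.max(np.abs(u))      # largest theta keeping theta*u inside C
    if f(theta_max * u) >= gamma:
        return theta_max * u               # extension limited by C, not by f
    lo, hi = 0.0, theta_max                # invariant: f(lo*u) >= gamma > f(hi*u)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid * u) >= gamma:
            lo = mid
        else:
            hi = mid
    return lo * u
```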

Now consider a cone $K = \operatorname{cone}\{u^1, \dots, u^p\}$, let $\gamma = \bar f$ be the value of the best known solution, and let $y^j$, $j = 1, \dots, p$, be the $\gamma$-extensions along the directions $u^1, \dots, u^p$, respectively. Writing $P = \{x \in \mathbb{R}^n \mid a^i x \le b_i,\ i = 1, \dots, m\}$, consider the following pair of primal-dual linear programs:
$$PLP(K):\qquad \max\Big\{\sum_{j=1}^{p} \lambda_j \;\Big|\; \sum_{j=1}^{p} \lambda_j\, a^i y^j \le b_i\ (i = 1, \dots, m),\ \lambda \ge 0\Big\}$$
$$DLP(K):\qquad \min\Big\{\sum_{i=1}^{m} \pi_i b_i \;\Big|\; \sum_{i=1}^{m} \pi_i\, a^i y^j \ge 1\ (j = 1, \dots, p),\ \pi \ge 0\Big\}$$

It can be shown that problem $PLP(K)$ has a finite optimal value which, by linear programming duality (see, e.g., Luenberger [10]), is also the optimal value of problem $DLP(K)$. Let $\hat\mu$ denote this optimal value. Consider the optimal solutions of problem $PLP(K)$. We denote by $\tilde\rho$ the number of edges of $K$ that are involved with a strictly positive coefficient in at least one optimal solution. Note that a convex combination of optimal solutions is still an optimal solution; therefore there exists an optimal solution (not necessarily basic) with exactly $\tilde\rho$ strictly positive components $\lambda_j$. The quantity $\tilde\rho$ will be used in the definition of the descent function in Section 2.3.3. Moreover, for a given optimal solution $\tilde\lambda$, we define

$$\tilde\omega = \sum_{j=1}^{p} \tilde\lambda_j\, y^j. \qquad (1)$$

Now, let $\hat\pi$ be an optimal solution of problem $DLP(K)$. We define

$$\hat\beta = \sum_{i=1}^{m} \hat\pi_i\, a^i \qquad (2)$$

and let $\hat H$ be the hyperplane of equation $\hat\beta x = \hat\mu$. The following property holds for $\hat H$.
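For concreteness, the sketch below sets up $PLP(K)$ and $DLP(K)$ with scipy, given the constraint data $(A, b)$ of $P$ and a matrix $Y$ whose columns are the $\gamma$-extensions $y^j$; it returns $\hat\mu$, a primal optimal $\tilde\lambda$ together with $\tilde\omega$ of (1), and a dual optimal $\hat\pi$ together with $\hat\beta$ of (2). Solving the dual as a separate linear program is a simplification of this sketch (it avoids solver-specific conventions for dual values); the function name is illustrative.

```python
import numpy as np
from scipy.optimize import linprog

def solve_plp_dlp(A, b, Y):
    """A: (m, n), b: (m,) with P = {x : A x <= b}; Y: (n, p) whose columns are
    the gamma-extensions y^1, ..., y^p of the cone K."""
    p = Y.shape[1]
    AY = A @ Y                                          # entries a^i y^j, shape (m, p)

    # PLP(K): max sum(lambda)  s.t.  AY @ lambda <= b,  lambda >= 0
    primal = linprog(-np.ones(p), A_ub=AY, b_ub=b, bounds=(0, None))
    mu_hat = -primal.fun                                # optimal value mu-hat
    lam = primal.x                                      # lambda-tilde
    omega = Y @ lam                                     # omega-tilde, equation (1)

    # DLP(K): min b^T pi  s.t.  (AY)^T pi >= 1,  pi >= 0
    dual = linprog(b, A_ub=-AY.T, b_ub=-np.ones(p), bounds=(0, None))
    pi = dual.x                                         # pi-hat
    beta = A.T @ pi                                     # beta-hat, equation (2)

    return mu_hat, lam, omega, pi, beta
```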


Proposition 1 $\hat H \cap P$ is a face of $P$.

Proof: In order to prove that $\hat H \cap P$ is a face of $P$, we have to show that $P$ is entirely contained in one of the halfspaces defined by $\hat H$ and that $\hat H \cap P \ne \emptyset$. We first show that $P \subseteq \{x \in \mathbb{R}^n : \hat\beta x \le \hat\mu\}$. Indeed, let $x \in P$: then $a^i x \le b_i$ for $i = 1, \dots, m$. Multiplying each inequality by $\hat\pi_i \ge 0$ and summing, we obtain $\hat\beta x = \sum_{i=1}^{m} \hat\pi_i a^i x \le \sum_{i=1}^{m} \hat\pi_i b_i = \hat\mu$.

Now, by the complementary slackness conditions, we have
$$\hat\pi_i \Big( \sum_{j=1}^{p} \tilde\lambda_j\, a^i y^j - b_i \Big) = 0, \qquad i = 1, \dots, m.$$
Summing over $i$, we obtain $\hat\beta \tilde\omega = \hat\mu$, which shows that $\tilde\omega \in \hat H \cap P$.

The following result is well known when the cone has exactly $n$ edges (see, e.g., Horst and Tuy [7]):

Proposition 2 If $\hat\mu \le 1$, then $\min_{x \in K \cap P} f(x) \ge \bar f$.

Proof: This result is usually proved by reasoning on the primal problem (see, e.g., Horst and Tuy [7]). Here we give a proof using problem $DLP(K)$. By Proposition 1, the hyperplane $\hat H$ supports $P$. Furthermore, $\hat H$ intersects the edges of $K$ at the points $\hat z^j = \frac{\hat\mu}{\hat\beta y^j}\, y^j$, $j = 1, \dots, p$. Thus $K \cap P \subseteq \hat S = \operatorname{conv}\{O, \hat z^1, \dots, \hat z^p\}$. But since $\hat\mu \le 1$ and $\hat\beta y^j \ge 1$ by feasibility of $\hat\pi$ and definition of $\hat\beta$, we have $\hat z^j \in [O, y^j]$ for $j = 1, \dots, p$. Hence, by concavity of $f$, $f(\hat z^j) \ge \min\{f(O), f(y^j)\} \ge \bar f$ for $j = 1, \dots, p$. Using again the concavity of $f$, we deduce that $\min_{x \in \hat S} f(x) = \min\{f(O), f(\hat z^1), \dots, f(\hat z^p)\} \ge \bar f$. Using the inclusion $K \cap P \subseteq \hat S$, we then obtain $\min_{x \in K \cap P} f(x) \ge \bar f$.

This result implies that if $\hat\mu \le 1$, the portion of the polytope $P$ contained in the cone $K$ cannot contain a point that improves the best known solution, hence the cone can be fathomed.
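Continuing the earlier sketch (with the same hypothetical solve_plp_dlp helper and the matrix Y of $\gamma$-extensions of the edges of $K$), the fathoming test then reads:

```python
mu_hat, lam, omega, pi, beta = solve_plp_dlp(A, b, Y)
if mu_hat <= 1:
    fathomed = True    # K ∩ P holds no point better than the incumbent (Proposition 2)
else:
    fathomed = False   # K must be subdivided via omega (Section 2.3)
```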

2.3 Subdivision of a cone

The subdivision of a cone of $\mathcal{K}$ is done in two steps. In the first step, the cone is subdivided into subcones with at most one edge not corresponding to an extreme point of the polytope: this is done by the well-known $\omega$-subdivision process, which is recalled in Section 2.3.1. In a second step, the subcones with an edge not corresponding to an extreme point are extended (by the addition of new edges) in such a way that all their edges correspond to extreme points: this operation is described in Section 2.3.2. Finally, in Section 2.3.3, we show that this subdivision process satisfies a descent property that will be used to show the finiteness of the algorithm presented in Section 3.

2.3.1 $\omega$-subdivision

Let $K = \operatorname{cone}\{u^1, \dots, u^p\}$ be a cone to be subdivided and let $w = \sum_{j=1}^{p} \lambda_j y^j$ be a point of $\mathbb{R}^n$ distinct from $O$. Let $J_> = \{j \mid \lambda_j > 0\}$. Assume that $|J_>| \ge 1$ (note that this assumption is satisfied if $\sum_{j=1}^{p} \lambda_j > 0$, as is the case if $\lambda$ is an optimal solution of problem $PLP(K)$). For each $j \in J_>$, define $K^j$ as the cone obtained from $K$ by replacing the $j$th edge of $K$ by the halfline of origin $O$ passing through $w$. We have the following result.

Proposition 3 $K \subseteq \bigcup_{j \in J_>} K^j$.

Proof: Let $x$ be a point of $K$. There exists at least one vector $\sigma \ge 0$ such that $x = \sum_{j=1}^{p} \sigma_j u^j$. Since each $y^j$ is a positive multiple of $u^j$, we may assume, after rescaling the coefficients $\lambda_j$ (which leaves $J_>$ unchanged), that $w = \sum_{j=1}^{p} \lambda_j u^j$. Now let $\ell$ be such that $\frac{\sigma_\ell}{\lambda_\ell} = \min_{j \in J_>} \frac{\sigma_j}{\lambda_j}$. We have
$$x = \sum_{j=1, j \ne \ell}^{p} \sigma_j u^j + \frac{\sigma_\ell}{\lambda_\ell}\Big(w - \sum_{j=1, j \ne \ell}^{p} \lambda_j u^j\Big) = \sum_{j=1, j \ne \ell}^{p} \Big(\sigma_j - \frac{\sigma_\ell}{\lambda_\ell}\, \lambda_j\Big) u^j + \frac{\sigma_\ell}{\lambda_\ell}\, w.$$
By definition of $\ell$, $\sigma_j - \frac{\sigma_\ell}{\lambda_\ell}\, \lambda_j \ge 0$ for $j = 1, \dots, p$, $j \ne \ell$, and $\frac{\sigma_\ell}{\lambda_\ell} \ge 0$. Hence $x$ belongs to $K^\ell$, which proves the inclusion $K \subseteq \bigcup_{j \in J_>} K^j$.

By Proposition 3, the set of cones $\{K^j\}_{j \in J_>}$ defines a cover of the cone $K$. This subdivision process was proposed by Tuy [17] for the case where $w \in K$ (in which case we obtain a partition of $K$), and extended by Gallo [3] to the case where $w$ may lie outside $K$. The cones $K^j$ are called subcones of $K$. If $\lambda = \tilde\lambda$ is an optimal solution of problem $PLP(K)$, then $w = \tilde\omega$ and the subdivision process is referred to as an $\omega$-subdivision.
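A minimal sketch of this covering step follows; a cone is represented simply by the list of its edge directions, and the function returns the subcones $K^j$, $j \in J_>$. The representation, the function name and the tolerance are illustrative choices of this sketch.

```python
import numpy as np

def omega_subdivide(U, w, lam, eps=1e-12):
    """U: edge directions u^1, ..., u^p of K; w: the subdivision point;
    lam: coefficients with w = sum_j lam[j] * y^j.  Returns the subcones K^j
    (each as a list of directions) for the indices j in J_> = {j : lam[j] > 0}."""
    subcones = []
    for j, lam_j in enumerate(lam):
        if lam_j > eps:                          # j belongs to J_>
            K_j = list(U)
            K_j[j] = np.asarray(w, dtype=float)  # replace edge j by the ray through w
            subcones.append(K_j)
    return subcones
```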


2.3.2 Extension of a cone

Since $\tilde\omega = \sum_{j=1}^{p} \tilde\lambda_j y^j$ may not be an extreme point of $P$, a subcone $K'$ of a cone $K \in \mathcal{K}$ obtained via $\omega$-subdivision is not in general a cone of $\mathcal{K}$. The purpose of this section is to construct a cone $K'' \in \mathcal{K}$ such that $K' \subseteq K''$.

Recall that $\hat H$ is the hyperplane constructed from an optimal solution of the dual $DLP(K)$. Denote by $\hat z^j$, $j = 1, \dots, p$, the intersections of the edges of $K$ with the hyperplane $\hat H$. By renumbering the edges of $K$ if necessary, assume that $K'$ is the cone obtained from $K$ by replacing the first edge by the halfline passing through $\tilde\omega$, so that $K' = \operatorname{cone}\{\tilde\omega, \hat z^2, \dots, \hat z^p\}$. Let $X = \{x^1, \dots, x^r\}$ be a set of extreme points of $\hat H \cap P$ such that $\tilde\omega \in \operatorname{conv}(X)$. Since $\tilde\omega \in \hat H \cap P$, such points always exist by Carathéodory's theorem (see, e.g., Rockafellar [14]), which in addition states that there exists a set $X$ of cardinality $r \le \dim(\hat H \cap P) + 1 \le n$. Note that since $\hat H \cap P$ is a face of $P$, extreme points of $\hat H \cap P$ are actually extreme points of $P$. The cone $K''$ is then simply defined as
$$K'' = \operatorname{cone}\{\hat z^2, \dots, \hat z^p, x^1, \dots, x^r\}.$$
It may happen that $x^\ell = \hat z^{\ell'}$ for some $\ell$ and $\ell'$: in this case, we simply remove the redundant edges from $K''$. The following proposition shows that if this situation occurs, we may sometimes improve the current best solution.

Proposition 4 If $\hat\mu > 1$ and if $x^\ell = \hat z^{\ell'}$ for some $\ell$ and $\ell'$ with $\tilde\lambda_{\ell'} > 0$, then $f(x^\ell) < \bar f$.

Proof: Since $\tilde\lambda_{\ell'} > 0$, the complementary slackness conditions imply $\hat\beta y^{\ell'} = 1$, hence $\hat z^{\ell'} = \hat\mu\, y^{\ell'}$ and $x^\ell = \hat z^{\ell'}$ lies beyond $y^{\ell'}$ on the edge. Since $x^\ell$ is an extreme point of $P$, we have $x^\ell \in P \subseteq C$; hence, by the maximality in the definition of the $\gamma$-extension $y^{\ell'}$, $f(x^\ell) < \gamma = \bar f$.

The following result is immediate.

Proposition 5 The inclusion $K' \subseteq K''$ holds.

Proof: Let $x$ be a point of $K'$: there exists $\sigma \ge 0$ such that $x = \sigma_1 \tilde\omega + \sum_{j=2}^{p} \sigma_j \hat z^j$. On the other hand, since $\tilde\omega \in \operatorname{conv}(X)$, there exists $\tau \ge 0$ satisfying $\sum_{\ell=1}^{r} \tau_\ell = 1$ such that $\tilde\omega = \sum_{\ell=1}^{r} \tau_\ell x^\ell$. Replacing $\tilde\omega$ by this expression, we obtain $x = \sum_{j=2}^{p} \sigma_j \hat z^j + \sigma_1 \sum_{\ell=1}^{r} \tau_\ell x^\ell$, which shows that $x$ belongs to $K''$.

A set $X$ as in Carathéodory's theorem can be constructed by the following procedure, which is based on one of the numerous proofs of this theorem (Scherk [15]). We first define some notation: for a given point $x$ of $P$, we denote by $I(x)$ the set of indices of the constraints of $P$ satisfied at equality by $x$; for a given subset $I$ of constraints of $P$, we define the polytope $P(I)$ as $P(I) = \{x \in \mathbb{R}^n : a^i x \le b_i\ (i \notin I),\ a^i x = b_i\ (i \in I)\}$. We now give the procedure:

Step 1 (initialization): set $\omega^1 \leftarrow \tilde\omega$, $I^1 \leftarrow I(\omega^1)$ and $k \leftarrow 1$.

Step 2 (extreme point): find an extreme point $x^k$ of $P(I^k)$.

Step 3 (update of $\omega$): compute the intersection point $\omega^{k+1}$ of the halfline $[x^k \omega^k)$ with the boundary of $P(I^k)$. If $\omega^{k+1}$ is an extreme point of $P(I^k)$, stop: $\tilde\omega$ is a convex combination of the extreme points $x^1, \dots, x^k, \omega^{k+1}$.

Step 4 (update of $I$): let $I^{k+1} = I(\omega^{k+1})$. Increment $k$ and return to Step 2.

If Step 2 is implemented by solving a linear program with an arbitrary objective function, the set $X$ is obtained after solving at most $n$ linear programs. These linear programs can be solved more efficiently by observing that the optimal solution at iteration $k$ is dual-feasible for the linear program solved at iteration $k+1$. Also note that the dimension of the polytopes $P(I^k)$ decreases by at least 1 at each iteration.
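The sketch below implements Steps 1-4 with scipy, representing $P$ by $(A, b)$ and the index sets $I^k$ by boolean masks. The random objective in Step 2 (an "arbitrary objective function"), the rank test used to recognize an extreme point, and the assumption that the LP solver returns a vertex solution are implementation choices of this sketch, not prescriptions of the paper.

```python
import numpy as np
from scipy.optimize import linprog

def caratheodory_points(A, b, omega, tol=1e-9):
    """Express a point omega of P = {x : A x <= b} as a convex combination of
    extreme points of the smallest face of P containing it (Steps 1-4 above).
    Returns the list x^1, ..., x^k, omega^{k+1}."""
    m, n = A.shape
    rng = np.random.default_rng(0)

    def active(x):                             # I(x): constraints tight at x
        return A @ x >= b - tol

    def is_extreme(x, eq):                     # extreme point of P(I): tight rows span R^n
        return np.linalg.matrix_rank(A[active(x) | eq]) == n

    eq = active(omega)                         # I^1 = I(omega^1)
    pts = []
    while not is_extreme(omega, eq):
        # Step 2: an extreme point of P(I^k), via an LP with an arbitrary objective.
        res = linprog(rng.standard_normal(n), A_ub=A[~eq], b_ub=b[~eq],
                      A_eq=A[eq], b_eq=b[eq], bounds=(None, None))
        x = res.x
        pts.append(x)
        # Step 3: push omega along the halfline [x^k, omega^k) to the boundary of P(I^k).
        d = omega - x
        Ad, slack = A[~eq] @ d, b[~eq] - A[~eq] @ x
        pos = Ad > tol
        t = np.min(slack[pos] / Ad[pos])       # largest step keeping x + t*d in P(I^k)
        omega = x + t * d                      # omega^{k+1}
        # Step 4: I^{k+1} = I(omega^{k+1}); at least one new constraint becomes tight.
        eq = eq | active(omega)
    pts.append(omega)
    return pts                                 # the input point lies in conv(pts)
```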

2.3.3 Descent property

Let $K$ be a cone of $\mathcal{K}$, and $K''$ another one obtained from $K$ by $\omega$-subdivision and extension. We show in this section that $\Phi(K'') \prec \Phi(K)$, where $\Phi$ is a function from $\mathcal{K}$ to $\mathbb{R} \times \mathbb{N}$ that maps a cone $K$ to the 2-dimensional vector $(\hat\mu(K), \tilde\rho(K))$ (where $\hat\mu(K)$ and $\tilde\rho(K)$ are the quantities $\hat\mu$ and $\tilde\rho$ defined in Section 2.2). The symbol $\prec$ is the less-than symbol for the lexicographic order.

Before giving the first result, we introduce some notation. Let $K$ be a cone, and let $Y(K) = \{y^1(K), \dots, y^{p(K)}(K)\}$ be the set of its $\bar f$-extensions. Let $\tilde\omega(K) = \sum_{j=1}^{p(K)} \tilde\lambda_j(K)\, y^j(K)$, where $\tilde\lambda(K)$ is an optimal solution of the primal problem $PLP(K)$. Let $\tilde J_>(K) = \{j : \tilde\lambda_j(K) > 0\}$, and let $Y_>(K) = \{y^j \in Y(K) : j \in \tilde J_>(K)\}$ be the restriction of $Y(K)$ to $\tilde J_>(K)$. Let $\hat H(K) = \{x \in \mathbb{R}^n : \hat\beta(K)x = \hat\mu(K)\}$ be the hyperplane associated with an optimal solution of the dual problem $DLP(K)$.

Proposition 6 Let $K$ and $K'$ be two cones such that $\hat\beta(K)\, y' \ge 1$ for all $y' \in Y(K')$. Then
$$\hat\mu(K') \le \hat\mu(K). \qquad (3)$$
Furthermore, if the following three conditions are satisfied: (i) inequality (3) is satisfied at equality, (ii) $\hat\beta(K)\, y' > 1$ for all $y' \in Y(K') \setminus Y(K)$, (iii) $Y_>(K) \not\subseteq Y(K')$, then
$$\tilde\rho(K') < \tilde\rho(K). \qquad (4)$$

Proof: Recall that $\hat\beta(K) = \sum_{i=1}^{m} \hat\pi_i(K)\, a^i$, where $\hat\pi(K)$ is an optimal solution of problem $DLP(K)$. Since $\hat\beta(K)\, y^j(K') \ge 1$ for $j = 1, \dots, p(K')$, $\hat\pi(K)$ is a feasible solution for $DLP(K')$, hence $\hat\mu(K') \le \hat\mu(K)$.

Now let us show the second part. Recall that $\tilde\rho(K')$ is the number of edges that can be part of an optimal solution of problem $PLP(K')$ with a strictly positive coefficient. Let $\dot\lambda'$ be such an optimal solution and consider $\dot\omega' = \sum_{j=1}^{p(K')} \dot\lambda'_j\, y^j(K')$. We have $\hat\beta(K)\, y^j(K') \ge 1$ for $j = 1, \dots, p(K')$, with strict inequality, by assumption (ii), for the new edges, i.e., those that are not already edges of cone $K$. Observe that $\dot\lambda'_j = 0$ for all $j$ such that $\hat\beta(K)\, y^j(K') > 1$. Indeed, if not, we would have $\hat\beta(K)\, \dot\omega' > \sum_{j=1}^{p(K')} \dot\lambda'_j = \hat\mu(K') = \hat\mu(K)$, which is impossible since $\dot\omega' \in P$ and $P \subseteq \{x \in \mathbb{R}^n \mid \hat\beta(K)x \le \hat\mu(K)\}$. Hence $\dot\omega'$ is a positive combination of points of $Y(K) \cap Y(K')$, which means that $\dot\lambda'$ is a feasible solution for $PLP(K)$. Since $\hat\mu(K') = \hat\mu(K)$, it is actually an optimal solution, hence $\tilde\rho(K') \le \tilde\rho(K)$. The inequality is strict since, by assumption (iii), at least one edge that is involved with a strictly positive component in an optimal solution of $PLP(K)$ does not belong to $Y(K')$, and hence cannot appear in the decomposition $\dot\omega'$ of any optimal solution of $PLP(K')$.

Now assume that $K \in \mathcal{K}$, and let $K''$ be the cone obtained by $\omega$-subdivision and extension as explained in Sections 2.3.1 and 2.3.2. From Proposition 6, we conclude:

Corollary 1 If $\hat\mu(K) > 1$, then either we find a point $x^\ell$ of $\hat H(K) \cap P$ satisfying $f(x^\ell) < \bar f$, or $\Phi(K'') \prec \Phi(K)$.


Proof: The cone $K''$ differs from $K$ by:

- the removal of an edge $[O, y^j(K))$ corresponding to some $j \in \tilde J_>(K)$, which counts for 1 in $\tilde\rho(K)$;
- the addition of edges corresponding to extreme points $x^\ell$ of $\hat H(K) \cap P$. Assume that none of these points improves the best known solution. Then their $\bar f$-extensions satisfy $\hat\beta(K)\, y^\ell > 1$. These $\bar f$-extensions correspond to the set $Y(K'') \setminus Y(K)$ in Proposition 6.

Clearly, for the edges common to $K$ and $K''$, we have $\hat\beta(K)\, y^j(K) \ge 1$ by feasibility of $\hat\pi(K)$ for $DLP(K)$. Hence $\hat\mu(K'') \le \hat\mu(K)$ by Proposition 6. If $\hat\mu(K'') < \hat\mu(K)$, we are done. Hence assume that $\hat\mu(K'') = \hat\mu(K)$. Since we already know that $\hat\beta(K)\, y' > 1$ for $y' \in Y(K'') \setminus Y(K)$, it suffices to show (iii), i.e., $Y_>(K) \not\subseteq Y(K'')$. This is true if the best known solution is not improved, since an element of $Y_>(K)$ was removed and, by Proposition 4, could not be added again in the extension process.

3 Algorithm

The proposed algorithm is a two-phase algorithm: the first phase consists in a local search, while the second phase aims to prove that the current best point is optimal or to find a better point.

Phase 1 (local search): starting from a point $z$ of $P$, find an extreme point $\bar x$ of $P$ satisfying $f(\bar x) \le f(z)$. Let $\bar f = f(\bar x)$. Go to Phase 2.

Phase 2 (transcending the incumbent):

Step 1 (initialization): construct an initial conical cover $\mathcal{C}$ of $P$ as indicated in Section 2.1. For each cone $K$ of $\mathcal{C}$, solve the linear program $PLP(K)$, obtaining the optimal value $\hat\mu(K)$, the point $\tilde\omega(K)$ and the hyperplane $\hat H(K)$. Let $\mathcal{L}$ be the set of cones of $\mathcal{C}$ for which $\hat\mu(K) > 1$.

Step 2 (optimality test and selection): if $\mathcal{L} = \emptyset$, stop: $\bar x$ is an optimal solution of problem (CP) with value $\bar f$. Otherwise select $K^* \in \arg\max\{\hat\mu(K) \mid K \in \mathcal{L}\}$. In case of a tie, select the cone that maximizes $\tilde\rho(K)$. Subsequent ties can be broken by defining an order on the extreme points defining a cone and by selecting the greatest cone with respect to this order. Remove $K^*$ from $\mathcal{L}$.


Step 3 (subdivision): $\omega$-subdivide the cone $K^*$ via the point $\tilde\omega(K^*)$ as indicated in Section 2.3.1, and extend the subcones using a set $X$ of extreme points of the face $\hat H(K^*) \cap P$ as explained in Section 2.3.2. Let $\mathcal{C}'$ be the set of subcones obtained.

Step 4 (update of the incumbent): if $f(x) < \bar f$ for some extreme point $x$ of $X$, go to Phase 1 with $z = x$.

Step 5 (fathoming): for each cone $K$ in $\mathcal{C}'$, construct the linear program $PLP(K)$. Let $\hat\mu(K)$ be its optimal value, and let $\tilde\omega(K)$ and $\hat H(K)$ be respectively the point and the hyperplane associated with the primal and dual optimal solutions. Add to $\mathcal{L}$ all cones $K$ of $\mathcal{C}'$ for which $\hat\mu(K) > 1$ and return to Step 2.
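The skeleton below assembles Phase 2 from the hypothetical helpers sketched in Section 2 (gamma_extension, solve_plp_dlp, caratheodory_points); none of these names come from the paper. It is deliberately simplified: a cone is stored as the list of extreme points spanning its edges, the linear programs of all active cones are re-solved at every pass, the $\tilde\rho$ tie-breaking of Step 2 is omitted, and an incumbent improvement triggers a full restart (which mirrors the return to Phase 1, except that the local search is skipped since the improving point is already an extreme point of $P$).

```python
import numpy as np

def phase2(f, A, b, V0, x_bar, R=1e4):
    """Simplified Phase 2.  V0: extreme points spanning the initial cone K^0;
    x_bar: incumbent extreme point of P.  Returns an optimal extreme point."""
    f_bar = f(x_bar)
    L = [[np.asarray(v, dtype=float) for v in V0]]       # Step 1: initial cover
    while L:
        # Steps 2/5 (merged here): solve PLP/DLP for every cone, keep the unfathomed ones.
        scored = []
        for K in L:
            Y = np.column_stack([gamma_extension(f, v, f_bar, R) for v in K])
            scored.append((K,) + tuple(solve_plp_dlp(A, b, Y)))
        scored = [s for s in scored if s[1] > 1 + 1e-9]
        if not scored:
            return x_bar, f_bar                          # Step 2: incumbent is optimal
        K, mu, lam, omega, pi, beta = max(scored, key=lambda s: s[1])
        L = [s[0] for s in scored if s[0] is not K]
        # Step 3: omega-subdivision of K and extension on the face containing omega.
        X = caratheodory_points(A, b, omega)             # extreme points with omega in conv(X)
        # Step 4: update of the incumbent (restart with the better extreme point).
        best = min(X, key=f)
        if f(best) < f_bar:
            return phase2(f, A, b, V0, best, R)
        # Extended subcones K'': drop edge j (j in J_>) and add the points of X.
        for j in np.flatnonzero(lam > 1e-9):
            K_ext = [v for i, v in enumerate(K) if i != j] + list(X)
            L.append(K_ext)                              # duplicate edges could be pruned here
    return x_bar, f_bar
```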

Theorem 1 After a finite number of iterations, the algorithm terminates with an optimal solution $\bar x$ of problem (CP).

Proof: Since the number of extreme points of a polytope is finite, and since at each occurrence of Phase 1 we obtain an extreme point with strictly smaller value, the number of occurrences of Phase 1 is finite. Hence we only have to show that Phase 2 is finite. Let $K^h$ be the cone selected at iteration $h$ of Phase 2, $\hat\mu_h = \hat\mu(K^h)$ and $\tilde\rho_h = \tilde\rho(K^h)$. By definition of the selection rule (Step 2) and by the descent property (Corollary 1), the function $h \mapsto (\hat\mu_h, \tilde\rho_h, K^h)$ is decreasing with respect to the lexicographic order (by $K^h$, we mean for example the vector obtained by concatenating the vectors corresponding to the extreme points defining the cone). But since the cones are defined by extreme points of $P$ and since the incumbent is updated with extreme points of $P$ (see Step 4), there is a finite number of distinct pairs $(K, \bar f)$, and hence a finite number of values $(\hat\mu, \tilde\rho, K)$. It follows that the algorithm terminates after a finite number of iterations.

The selection rule in Step 2 merely ensures that the same cone is not considered several times, but it is not necessary for the finiteness of the algorithm. The finiteness comes from the fact that, when subdividing, we replace a cone by subcones with strictly smaller value, and from the finiteness of the number of possible cones. In particular, since $\tilde\rho$ is only used in the selection test, it is not necessary to compute this quantity. Proposition 6 can also be applied to the cone $K'$ obtained from $K$ by $\omega$-subdivision. Hence the cone partitioning algorithm (the modified version of Tuy's 1964 algorithm [17] by Bali [1] and Zwart [23]; see also Jaumard and Meyer [8, 9]) also enjoys the descent property. However, since no upper bound on the number of possible cones is available, we cannot use the same argument to show the finite convergence of the cone partitioning algorithm.


4 Conclusions

In this paper, we have presented a new finite cone covering algorithm for concave minimization. As in previous cone covering algorithms, the cones are defined by extreme points of the polytope, which is a desirable property since an optimal solution belongs to the set of extreme points. But contrary to these algorithms, cycling does not occur, due to the descent property of the subdivision process. In particular, it is not necessary to maintain a list of all generated cones.

References

[1] S. Bali. Minimization of a Concave Function on a Bounded Convex Polyhedron. PhD thesis, University of California at Los Angeles, 1973.

[2] H. P. Benson. Concave minimization: Theory, applications and algorithms. In Reiner Horst and Panos M. Pardalos, editors, Handbook of Global Optimization. Kluwer Academic Publishers, 1996.

[3] G. Gallo. On Hoang Tui's concave programming algorithm. Nota scientifica S-76-1, Istituto di Scienze dell'Informazione, University of Pisa, Italy, 1975.

[4] P. Hansen, B. Jaumard, C. Meyer, and H. Tuy. Best simplicial and double-simplicial bounds for concave minimization. Les Cahiers du GERAD G-96-17, GERAD, Montréal, Canada, 1996. Submitted for publication.

[5] R. Horst and N. V. Thoai. Modification, implementation and comparison of three algorithms for globally solving linearly constrained concave minimization problems. Computing, 42:271–289, 1989.

[6] R. Horst, N. V. Thoai, and H. P. Benson. Concave minimization via conical partitions and polyhedral outer approximation. Mathematical Programming, 50:259–274, 1991.

[7] R. Horst and H. Tuy. Global Optimization (Deterministic Approaches). Springer-Verlag, Berlin, third, revised and enlarged edition, 1996.

[8] B. Jaumard and C. Meyer. On the convergence of cone splitting algorithms with ω-subdivisions. Les Cahiers du GERAD G-96-36, GERAD, July 1996 (revised February 1997). Submitted for publication.

[9] B. Jaumard and C. Meyer. A Simplified Convergence Proof for the Cone Partitioning Algorithm. Les Cahiers du GERAD G-98-07, GERAD, March 1998. To appear in Journal of Global Optimization.


[10] D. G. Luenberger. Linear and Nonlinear Programming. Addison-Wesley Publishing Company, second edition, 1973.

[11] C. Meyer. Algorithmes coniques pour la minimisation quasiconcave. PhD thesis, École Polytechnique de Montréal, 1996.

[12] C. Meyer. On Tuy's 1964 cone splitting algorithm for concave minimization. Les Cahiers du GERAD G-97-48, GERAD, July 1997 (revised October 1997). To appear in From Local to Global Optimization, proceedings of the workshop in honor of Professor Tuy's 70th birthday (Linköping), Kluwer Academic Publishers.

[13] M. Nast. Subdivision of simplices relative to a cutting plane and finite concave minimization. Journal of Global Optimization, 9:65–93, 1996.

[14] R. T. Rockafellar. Convex Analysis. Princeton University Press, Princeton, New Jersey, 1970.

[15] P. Scherk. On Carathéodory's theorem. Canadian Mathematical Bulletin, 9(4):463–465, 1966.

[16] N. V. Thoai and Hoang Tuy. Convergent algorithms for minimizing a concave function. Mathematics of Operations Research, 5:556–566, 1980.

[17] H. Tuy. Concave programming under linear constraints. Soviet Mathematics, 5:1437–1440, 1964.

[18] H. Tuy. Effect of the subdivision strategy on convergence and efficiency of some global optimization algorithms. Journal of Global Optimization, 1:23–36, 1991.

[19] H. Tuy. Normal conical algorithm for concave minimization over polytopes. Mathematical Programming, 51:229–245, 1991.

[20] H. Tuy, V. Khatchaturov, and S. Utkin. A class of exhaustive cone splitting procedures in conical algorithms for concave minimization. Optimization, 18(6):791–807, 1987.

[21] H. Tuy, T. V. Thieu, and Ng. Q. Thai. A conical algorithm for globally minimizing a concave function over a closed convex set. Mathematics of Operations Research, 10:498–514, 1985.

[22] P. B. Zwart. Nonlinear programming: Counterexamples to two global optimization algorithms. Operations Research, 21:1260–1266, 1973.

[23] P. B. Zwart. Global maximization of a convex function with linear inequality constraints. Operations Research, 22:602–609, 1974.
