
An Algorithm for Monotonic Global Optimization Problems

Alex Rubinov, Hoang Tuy and Heather Mays

Abstract. We propose an algorithm to locate a global maximum of an increasing function subject

to an increasing constraint on the cone of vectors with nonnegative coordinates. The algorithm is based on the outer approximation of the feasible set. We establish the convergence of the algorithm and provide a number of numerical experiments. We also discuss the types of constraints and objective functions for which the algorithm is best suited.

Key words: Monotonic global optimization, increasing functions, outer approximation method, abstract quasiconvexity.

1 Introduction

The outer approximation method [1, 2, 10, 11, 12, 13] is a general approach for solving global optimization problems. Its successful implementation is based on specific properties of the problem under consideration. In particular, the outer approximation method for concave minimization [1, 2, 12] exploits separation properties of convex sets and linear approximation properties of convex functions. It is well known that a lower semicontinuous function is convex if and only if it can be represented as the upper envelope of a set of affine functions. Generalizing this property, in recent years a wide class of functions, called abstract convex and abstract quasiconvex functions, has been introduced and studied (see [4, 9]). It turned out that many results and ideas from convex analysis can be extended to abstract convex analysis. One may wonder whether certain numerical methods for convex and quasiconvex functions can also be adapted to handle optimization problems involving abstract convex and abstract quasiconvex functions. This question is of interest not only from a theoretical but also from a practical point of view, because many nonconvex problems arising from applications, especially from mathematical economics, can be described in terms of abstract convex and quasiconvex functions.

The aim of the present paper is to extend the outer approximation method for concave minimization to the global optimization of an increasing function subject to an increasing constraint. It is assumed that all functions are defined on the cone R^n_+ of all vectors with nonnegative coordinates. We consider monotonicity as a certain kind of abstract quasiconvexity (compare with [6], where such a kind of abstract quasiconvexity was considered

Alex Rubinov and Heather Mays: School of Information and Mathematical Sciences, University of Ballarat, Australia. Hoang Tuy: Institute of Mathematics, Hanoi, Vietnam.


for functions defined on the cone R^n_++ of all vectors with positive coordinates). The class of increasing functions is characterized by a property of level sets which is analogous to the separation property of convex sets. Specifically, any point which does not belong to the level set of a function of this class can be separated from this set by means of a cone (rather than by a halfspace, as with quasiconvex functions). This allows cutting cones to be used for discarding unfit solutions in just the same way as cutting hyperplanes in outer approximation procedures for solving concave minimization problems. The resulting algorithm, which is extremely simple conceptually and very easy to implement, turns out to be practically quite efficient as well. In fact, preliminary numerical experiments using conventional PC computers have shown that this algorithm can solve fairly quickly problems of dimension over 10 (while it is known that general concave minimization problems of this dimension are already very hard to solve by the standard outer approximation method). Hopefully, certain results from the present paper can be extended to other problems involving larger classes of abstract quasiconvex functions.

The paper contains 8 sections. In Section 2 we define some preliminary concepts and notations. In Section 3 we describe the problem, then in Section 4 we study properties which serve as the theoretical basis of the method. Section 5 is devoted to a description of the proposed solution method, while Section 6 discusses its convergence. A small illustrative example is given in Section 7 and finally some computational experience is presented in Section 8.

2 Preliminaries

We begin with introducing some notations and concepts. For any two vectors x′, x ∈ R^n we write x′ ≥ x to mean that x′_i ≥ x_i for all i = 1, ..., n. If x′ ≥ x and x′_i > x_i for at least one i = 1, ..., n, then we say that x′ dominates x. We also write x′ > x and say that x′ strictly dominates x if x′_i > x_i for all i = 1, ..., n. Let R^n_+ = {x ∈ R^n | x ≥ 0} and R^n_++ = {x ∈ R^n | x > 0}. For x ∈ R^n_+ denote

K_x = x + R^n_++ = {x′ ∈ R^n_+ | x′ > x},

cl K_x = x + R^n_+ = {x′ ∈ R^n_+ | x′ ≥ x}.

If a ≤ b we define the rectangle [a, b] to be the set of all x such that a ≤ x ≤ b. We also write (a, b] := {x | a < x ≤ b}. As usual, e is the vector of all ones and e^i the i-th unit vector of R^n.

A function f : R^n → R is said to be increasing on R^n_+ if x′, x ∈ R^n_+ and x′ ≥ x imply that f(x′) ≥ f(x). Many functions encountered in various applications are increasing in this sense. Outstanding examples are the production functions and the utility functions in mathematical economics (under the assumption that all goods are useful). The sum of two increasing functions is increasing, and a product λf is increasing if λ > 0 and f is increasing. Consequently, polynomials with nonnegative coefficients and posynomials Σ_{j=1}^m c_j Π_{i=1}^n x_i^{a_ij} with c_j ≥ 0 and a_ij ≥ 0, such as the well known Cobb-Douglas function

f(x) = Π_i x_i^{α_i},   α_i ≥ 0,

are increasing. If a family (f_α)_{α∈A} of increasing functions is bounded from above, then sup_α f_α is increasing. If this family is bounded from below, then inf_α f_α is increasing as well. Other nontrivial examples of increasing functions are functions of the form f(x) = sup_{y∈a(x)} g(y), where g : R^n_+ → R is an arbitrary function and a : R^n_+ → R^n_+ is a set-valued mapping with bounded images such that a(x′) ⊇ a(x) for x′ ≥ x. Let us also mention the simplest example of an increasing function, the so-called min-type function

l(x) ≡ ⟨l, x⟩ = min_{i∈I_+(l)} l_i x_i,   (1)

where l = (l_1, ..., l_n) ∈ R^n_+ and I_+(l) = {i | l_i > 0}. Note that here K_x = {y ∈ R^n_++ | ⟨l, y⟩ > 1} where l = (l_1, ..., l_n) with

l_i = 1/x_i if x_i > 0,   l_i = 0 if x_i = 0.   (2)

A set D ⊆ R^n_+ is called normal if (x ∈ D, 0 ≤ x′ ≤ x) ⟹ x′ ∈ D. Normality is a mathematical formalization of the notion of free disposal in mathematical economics (see [3]). Let Ω ⊆ R^n_+. The set N[Ω] = (Ω − R^n_+) ∩ R^n_+ is called the normal hull of Ω. Alternatively, N[Ω] = ∪{[0, x] | x ∈ Ω}. It is well known [3] that N[Ω] is the intersection of all normal sets containing Ω.

Let D be a normal set in R^n_+. A point y ∈ D is said to be a vertex of D if there is no point of D dominating it. Since a point y ∈ D is dominated by a point x ∈ D if and only if y ∈ [0, x] \ {x}, it follows that y ∈ D is a vertex of D if and only if the set D \ {y} is still normal.

Proposition 1 A compact normal set D is the normal hull of its vertex set V.

Proof. Let y ∈ D. Since D is compact, the set N_y = {x ∈ D | x ≥ y} is nonempty and compact. Using Zorn's lemma it is then easy to see that there exists in N_y a maximal element v with respect to the order ≥. Clearly v cannot be dominated by any point of D, hence v is a vertex of D. Thus y ∈ [0, v] for some vertex v of D, i.e. D ⊆ N[V]. Since the converse inclusion is obvious, we have D = N[V]. □

A normal set M of the form M = ∪{[0, z] | z ∈ Z} where |Z| < +∞ is called a polyblock generated by Z. It is easily seen that the vertex set of M consists of all z ∈ Z such that there is no z′ ∈ Z dominating z. It follows directly from the definition that the following assertions are equivalent: 1) M is a polyblock; 2) M is the normal hull of a finite set; 3) M is a compact normal set with a finite number of vertices.

A point y ∈ D is said to be an upper boundary point of D if there is no point of D strictly dominating y. The set of upper boundary points of D is called the upper boundary of D and denoted by ∂⁺D. Of course any vertex of D is an upper boundary point, but not conversely.
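The dominance relation and the vertex characterization of a polyblock translate directly into code. A minimal Python sketch (the function names are ours, for illustration only) that filters a finite generating set Z down to the vertex set of the polyblock it generates:

```python
def dominates(zp, z):
    """zp dominates z: zp >= z componentwise, with strict inequality somewhere."""
    return all(a >= b for a, b in zip(zp, z)) and any(a > b for a, b in zip(zp, z))

def polyblock_vertices(Z):
    """Vertex set of the polyblock generated by Z: the points of Z
    not dominated by any other point of Z."""
    return [z for z in Z if not any(dominates(zp, z) for zp in Z if zp is not z)]

# (1,3) and (2,1) are vertices; (1,1) lies in [0,(1,3)], hence is dominated.
Z = [(1.0, 3.0), (2.0, 1.0), (1.0, 1.0)]
print(polyblock_vertices(Z))  # -> [(1.0, 3.0), (2.0, 1.0)]
```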

Remark 1 In mathematical economics, the theory of games and the theory of multiobjective optimization, vertices are called Pareto points and upper boundary points are called weak Pareto points.

If Ω is a normal closed set, then x ∈ R^n_+ and x ∉ Ω imply that K_x ∩ Ω = ∅; in other words, ⟨l, x′⟩ ≤ 1 for all x′ ∈ Ω, where ⟨l, ·⟩ is defined by (1) and (2). Note that ⟨l, x⟩ = 1 and ⟨l, x′⟩ > 1 for x′ ∈ K_x. The relationship between the two concepts of increasing function and normal set is expressed in the following fact:

Proposition 2 A closed set Ω is normal if and only if there exists an l.s.c. increasing function g(x) such that Ω = {x | g(x) ≤ 1}.

Proof. The "if" part is obvious. Conversely, let Ω be a closed normal set and let μ_Ω(x) = inf{λ > 0 | x ∈ λΩ} (the Minkowski functional of Ω). Then μ_Ω(x) is an increasing function (see e.g. [6]) and obviously Ω = {x | μ_Ω(x) ≤ 1}. □

Let H be a set of functions defined on a set X. A function f : X → R is called abstract convex with respect to H (or H-convex) if f(x) = sup{h(x) | h ∈ H, h ≤ f} for all x ∈ X [4, 9]. A set Ω ⊆ X is called abstract convex with respect to H (or H-convex) if for every x ∉ Ω there exists a function h ∈ H separating x from Ω, i.e. such that h(x) > sup_{x′∈Ω} h(x′). The empty set is abstract convex by definition. A function g : X → R is called abstract quasiconvex with respect to H if its level sets {x ∈ X | g(x) ≤ c} are abstract convex with respect to H for all c ∈ R. Let L be the set of all min-type functions (1) with l ∈ R^n_+. Then (see [8]) a set is L-convex if and only if it is normal and closed. It follows directly from this assertion that a function g : R^n_+ → R is L-quasiconvex if and only if it is l.s.c. and increasing. It can be shown [7, 8] that a function f is L-convex if and only if it is increasing and positively homogeneous of the first degree.
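The separation of a point x ∉ Ω from a closed normal set Ω by the min-type function of (1)-(2) can be checked numerically. A small Python sketch (helper names are ours; Ω here is the hypothetical normal set {x ∈ R^2_+ | x_1 x_2 ≤ 1}):

```python
def min_type(l, x):
    """Min-type function <l, x> = min over {i : l_i > 0} of l_i * x_i, cf. (1)."""
    vals = [li * xi for li, xi in zip(l, x) if li > 0]
    return min(vals) if vals else 0.0

def separator(x):
    """l as in (2): l_i = 1/x_i if x_i > 0, and l_i = 0 if x_i = 0."""
    return [1.0 / xi if xi > 0 else 0.0 for xi in x]

# Hypothetical normal set Omega = {x in R^2_+ : x1 * x2 <= 1} (g increasing).
g = lambda x: x[0] * x[1]
x = (2.0, 2.0)                   # x lies outside Omega: g(x) = 4 > 1
l = separator(x)                 # l = [0.5, 0.5]
assert abs(min_type(l, x) - 1.0) < 1e-12          # <l, x> = 1
for xp in [(0.5, 2.0), (1.0, 1.0), (0.1, 0.1)]:   # sample points of Omega
    assert g(xp) <= 1 and min_type(l, xp) <= 1.0  # <l, x'> <= 1
```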

3 The Problem

We will be concerned with the following nonconvex global optimization problem:

(P)   max{f(x) | g(x) ≤ 1, x ∈ R^n_+},

where f, g : R^n_+ → R are continuous (but not necessarily smooth) increasing functions and g(0) < 1. Denote by D the feasible set of this problem:

D = {x ∈ R^n_+ | g(x) ≤ 1}.

Since g is increasing and continuous, it follows that D is normal and closed. The set {x | g(x) < 1} contains 0 and is open (in R^n_+), so cone(D) (the cone hull of D) coincides with R^n_+. We assume that D is bounded, so that an optimal solution of the problem (P) exists. Note that to problem (P) one can easily reduce any problem of the form

(P′)   max{f(x) | x ∈ Ω},

where Ω is an arbitrary closed bounded subset of R^n_+. In fact, since f is increasing, it follows that any optimal solution of (P′) is also an optimal solution of the problem

(P″)   max{f(x) | x ∈ N[Ω]}.

But, as we saw in the previous section, there exists an increasing function g(x) such that N[Ω] = {x | g(x) ≤ 1}. Hence the problem (P′) with an arbitrary compact feasible set Ω is equivalent to a problem (P) with D = N[Ω]. Also, many nonconvex optimization problems can be rewritten as a monotonic optimization problem (P′), and hence as a problem (P). For instance, if f : S → R_+ is a positive-valued Lipschitz function defined on the unit simplex S = {x ∈ R^n_+ | Σ_{i=1}^n x_i = 1}, then one can find an increasing function F on R^n_+ which coincides with f on S (see [5]). Consequently, the problem of minimizing f(x) on a subset U of S reduces to minimizing F on U, which in turn is a problem (P′), and can be reduced to a problem (P). Furthermore, as was proved in [5], an explicit expression for the function F can be given if the Lipschitz constant of f is known.

The problem (P) is in general a multi-extremal problem, as shown by the following simple example.

Example 1 Let n = 2 and let D = [0, x^1] ∪ [0, x^2] be a polyblock with vertices x^1 = (1, 3), x^2 = (2, 1). Let f(x) = x_1 + x_2 for x ∈ R^2_+. Clearly x^2 is a local maximum and x^1 is a global maximum of the function f over the set D. We can represent D in the form D = {x ∈ R^2_+ | g(x) ≤ 1} with g(x) = min(max(x_1, x_2/3), max(x_1/2, x_2)).

Therefore, (P) cannot be solved efficiently by local optimization techniques. However, we shall show that by exploiting the monotonic structure, it is possible to devise a variant of the outer approximation method for handling this problem.
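The equivalence of (P′) and (P″) for increasing f can be verified by brute force on a tiny instance. In the Python sketch below (our construction, for illustration only), the maximum of an increasing f over a finite set Ω coincides with its maximum over a dense grid of the normal hull N[Ω]:

```python
f = lambda x: x[0] + 2 * x[1]        # an increasing function on R^2_+

Omega = [(1.0, 3.0), (2.0, 1.0), (1.0, 1.0)]   # a finite compact subset of R^2_+

# Maximum of f over Omega itself, i.e. problem (P').
v1 = max(f(x) for x in Omega)

# Maximum of f over a fine grid of the normal hull N[Omega] = union of
# the boxes [0, x], x in Omega, i.e. a discretization of problem (P'').
grid = [(a / 50 * x[0], b / 50 * x[1])
        for x in Omega for a in range(51) for b in range(51)]
v2 = max(f(p) for p in grid)

assert v1 == v2 == 7.0    # both maxima are attained at the point (1, 3)
```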

4 Basic Properties

We first present some properties of normal sets and monotonic functions which play a basic role in the analysis of the problem.

Proposition 3 Let y ∈ D. Then y is an upper boundary point of D if and only if g(x) > 1 for all x > y, i.e. if and only if the open cone K_y := {x ∈ R^n_+ | x > y} is disjoint from D.

Proof. Obvious from the definition of an upper boundary point. □

Corollary 1 For any two distinct y, y′ ∈ ∂⁺D we must have y ∉ K_{y′} and y′ ∉ K_y.

Proposition 4 For any x ∈ R^n_+ \ D the segment {λx | 0 ≤ λ ≤ 1} contains a uniquely defined upper boundary point of D.


Proof. Since D is normal and bounded and the equality cone(D) = R^n_+ holds, it follows that the set D ∩ {tx | t ≥ 0} is a segment. Let y be the endpoint of this segment, i.e. y = λx where

λ = max{t | tx ∈ D}.   (3)

Then clearly y is the unique upper boundary point of D on the segment {λx | 0 ≤ λ ≤ 1}. □

Denote the point y = λx, with λ defined as in (3), by π(x).

Proposition 5 We have D = R^n_+ \ ∪_{y∈∂⁺D} K_y = ∩_{y∈∂⁺D} (R^n_+ \ K_y).

Proof. Obviously D ⊆ R^n_+ \ ∪_{y∈∂⁺D} K_y. To prove the converse, let x ∉ D and let y = π(x). Then x ∈ K_y and therefore x ∉ R^n_+ \ ∪_{y∈∂⁺D} K_y. □

Proposition 6 Let D be a compact normal set contained in a polyblock M with vertex set V. Let z ∈ V \ D, y = π(z) and Z′ = (V \ {z}) ∪ {x^1, ..., x^n} where

x^i = z − (z_i − y_i)e^i,   i = 1, ..., n.   (4)

Then the polyblock M′ generated by Z′ satisfies

M ⊇ M′ ⊇ D,   z ∈ M \ M′.   (5)

Proof. We have z ∈ K_y ⊆ R^n_+ \ D, i.e. K_y separates z from D. Hence [0, z] \ K_y ⊇ [0, z] ∩ D. But K_y = ∩_{i=1}^n {x | x_i > y_i}, so

[0, z] \ K_y = ∪_{i=1}^n {x ∈ [0, z] | x_i ≤ y_i} = ∪_{i=1}^n {x | 0 ≤ x_i ≤ y_i, 0 ≤ x_j ≤ z_j ∀j ≠ i} = ∪_{i=1}^n [0, x^i].

Consequently, the polyblock M′ = N[Z′] satisfies (5) (the right relation in (5) holds because z is a vertex of M, so z ∉ N[V \ {z}], while z ∉ [0, x^i] for each i since x^i_i = y_i < z_i). □
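Formula (4) of Proposition 6 prescribes exactly how a vertex z is cut out of the current polyblock. A one-function Python sketch (the name is ours); applied to z = (5, 5, 5) and y = (1, 1, 1) it produces the three new vertices used in the illustrative example of Section 7:

```python
def refine(z, y):
    """New generating points x^i = z - (z_i - y_i) e^i of Proposition 6,
    given an infeasible vertex z and the upper boundary point y = pi(z)."""
    n = len(z)
    return [tuple(y[j] if j == i else z[j] for j in range(n)) for i in range(n)]

print(refine((5, 5, 5), (1, 1, 1)))
# -> [(1, 5, 5), (5, 1, 5), (5, 5, 1)]
```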

Proposition 7 The maximum of an increasing function f(x) over a polyblock M is attained at a vertex of M.

Proof. Immediate. □

Propositions 5 and 6 show that the feasible set D (a compact normal set) can be approximated as closely as desired by a polyblock M ⊇ D. That is, an approximate optimal solution to the problem (P) can be obtained by solving an approximate problem of the form max{f(x) | x ∈ M}, where M is a suitable polyblock containing D. But by Proposition 7 the latter problem can be solved by simple enumeration of the vertex set of M (which is finite). Therefore, all is reduced to finding a polyblock M ⊇ D sufficiently close to D in a certain neighbourhood of a global maximizer. This can be done by a procedure similar to the outer approximation method for concave programs (see [1], [2] and also [12]), namely: start from an initial polyblock M_1. If the maximizer of f(x) over M_1 belongs to D, we are done; otherwise, this maximizer, say x^1, does not belong to D; then using Proposition 6 we construct a polyblock M_2 still containing D but excluding x^1. Then repeat, until a polyblock M_k is obtained such that the maximizer x^k of f(x) over M_k is sufficiently close to D.

5 Proposed Solution Method

We now describe the algorithm in precise terms. Given a tolerance ε > 0, denote

D_ε = D ∩ cl K_{εe} = {x ∈ D | x_i ≥ ε, i = 1, ..., n}.

Assuming that ε is so small that D_ε ≠ ∅, a global optimal solution of the problem

(P_ε)   max{f(x) | x ∈ D_ε}

will be called an ε-optimal solution of (P). A solution x ∈ D such that f(x) differs from the optimal value of (P_ε) by at most δ > 0 will be referred to as an (ε, δ)-approximate optimal solution of (P).

For initialization of the outlined outer approximation procedure we need a simple polyblock containing D. This initial polyblock is taken to be a box [0, b] ⊇ D constructed as follows. For each i = 1, ..., n find an upper estimate b_i ≥ max{x_i | x ∈ D}. Then D ⊆ [0, b], because for any x ∈ D we have x_i ≤ b_i for all i. We shall assume that the tolerance ε > 0 is so small that b ≥ εe.

Algorithm.

Step 0. (Initialization) Compute b_i ≥ max{x_i | x ∈ D}, i = 1, ..., n. Let M_1 = [0, b], V_1 = {b}, x^1 = b, and let ȳ^1 = y^1 be the intersection point of the upper boundary of D with the segment between 0 and x^1. Set k = 1.

Step 1. Compute x^k ∈ argmax{f(x) | x ∈ V_k, x ≥ εe}. If x^k ∈ D, terminate: x^k is an ε-optimal solution.

Step 2. Compute the intersection point y^k of the upper boundary of D with the segment between 0 and x^k. Set ȳ^k to be the better of ȳ^{k−1} and y^k, i.e. f(ȳ^k) = max{f(ȳ^{k−1}), f(y^k)}. If f(ȳ^k) ≥ f(x^k) − δ, terminate: ȳ^k is an (ε, δ)-approximate solution of (P).

Step 3. Compute the n extreme points of the rectangle [y^k, x^k] that are adjacent to x^k:

x^{k,i} = x^k − (x^k_i − y^k_i)e^i,   i = 1, ..., n   (6)

(where, as mentioned above, e^i is the i-th unit vector of R^n). Set Z_{k+1} = (V_k \ {x^k}) ∪ {x^{k,1}, ..., x^{k,n}}. Let V_{k+1} be the set obtained from Z_{k+1} after dropping all those z ∈ Z_{k+1} that are not vertices (i.e. such that z is dominated by some other z′ ∈ Z_{k+1}).

Step 4. Set k ← k + 1 and return to Step 1.
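The complete procedure is compact enough to sketch in full. The Python implementation below is our illustrative rendering of Steps 0-4 (names, guards and tolerances are ours, not the paper's): it locates the boundary point π(x^k) by Bolzano bisection, as Section 8 recommends, and is applied to problem (10) of Section 7.

```python
def bisect_boundary(g, x, tol=1e-9):
    """pi(x) = lam * x with lam = max{t in [0,1] : g(t x) <= 1}, cf. (3),
    located by Bolzano bisection (g is increasing along the ray through x)."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g([mid * xi for xi in x]) <= 1:
            lo = mid
        else:
            hi = mid
    return [lo * xi for xi in x]

def polyblock_max(f, g, b, eps=1e-3, delta=1e-2, max_iter=10000):
    """Outer approximation by polyblocks: maximize the increasing f over
    D = {x in [0, b] : g(x) <= 1}; returns an (eps, delta)-approximate solution."""
    V = [list(b)]                                  # vertex set of current polyblock
    best, best_val = None, float("-inf")
    for _ in range(max_iter):
        cand = [v for v in V if all(vi >= eps for vi in v)]
        if not cand:
            break
        xk = max(cand, key=f)                      # Step 1
        if g(xk) <= 1:
            return xk, f(xk)                       # xk is eps-optimal
        yk = bisect_boundary(g, xk)                # Step 2
        if f(yk) > best_val:
            best, best_val = yk, f(yk)
        if best_val >= f(xk) - delta:
            break                                  # (eps, delta)-approximate
        # Step 3: replace xk by the n adjacent corners of [yk, xk], cf. (6)
        new = [xk[:i] + [yk[i]] + xk[i + 1:] for i in range(len(xk))]
        V = [v for v in V if v is not xk] + new
        V = [v for v in V                          # drop dominated points
             if not any(all(wi >= vi for wi, vi in zip(w, v)) and w != v
                        for w in V)]
    return best, best_val

# Problem (10) of Section 7: g(x) = x1*x2*x3, box [0,5]^3,
# f(x) = min(x1 + x2 + x3, 0.5*x1 + 2*x2 + x3).
f = lambda x: min(x[0] + x[1] + x[2], 0.5 * x[0] + 2 * x[1] + x[2])
g = lambda x: x[0] * x[1] * x[2]
x_opt, val = polyblock_max(f, g, [5.0, 5.0, 5.0], eps=1e-3, delta=1e-2)
print(round(val, 2))   # close to the reported optimal value 10.04
```

Because the tie-breaking in Step 1 is implementation-dependent, the iterates need not match Section 7 exactly, but the returned value lies within δ of the reported optimum 10.040.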

Remark 2 In usual outer approximation algorithms for concave minimization, the number of vertices of the enclosing polytope increases exponentially at each iteration, with the new vertices becoming more and more difficult to compute accurately (see e.g. [12]). By contrast, in the present algorithm the vertex set V_k increases by at most n − 1 elements at each iteration and the new vertices are extremely easy to compute. In spite of that, the vertex set of the current polyblock may all the same reach a prohibitively large size, creating storage problems. Should this happen, it is recommended to break off the procedure and to restart it from the last x^k. Specifically, let S be the critical size for the vertex set (so difficulties may arise if there are more than S vertices to be recorded). Step 4 should be modified as follows:

Step 4. If |V_{k+1}| ≤ S then set k ← k + 1 and return to Step 1. Otherwise go to Step 5.

Step 5. Redefine V_{k+1} = {b − (b_i − y^k_i)e^i, i = 1, ..., n} (i.e. M_{k+1} = [0, b] \ (y^k, b]); then set k ← k + 1 and return to Step 1.

This restarting expedient enables us to overcome memory space limitations, and sometimes also to speed up the convergence.

6 Convergence

To establish the convergence of the above algorithm we first observe the following

Proposition 8 For every k we have M_k ⊇ D and x^k ∈ argmax{f(x) | x ∈ M_k, x ≥ εe}. Hence

f(ȳ^k) ≤ max{f(x) | x ∈ D_ε} ≤ f(x^k).   (7)

Proof. By Proposition 6 we have M_k ⊇ D for all k. Furthermore, since V_k is the vertex set of M_k, it follows that x^k ∈ argmax{f(x) | x ∈ M_k, x ≥ εe}. Thus the right inequality in (7) holds. Since y^1, ..., y^k ∈ D_ε and ȳ^k is one of these points, we conclude that the left inequality in (7) holds as well. □

As a consequence, the answer given by the algorithm when it terminates is correct (x^k is an ε-optimal solution if termination occurs at Step 1, while ȳ^k is an (ε, δ)-approximate solution if termination occurs at Step 2). It remains to prove finiteness of the algorithm.

Theorem 1 The algorithm terminates after finitely many steps, yielding an ε-optimal solution or an (ε, δ)-approximate solution.

Proof. We first prove that for any η > 0 there exists k such that min_{i=1,...,n}(x^k_i − y^k_i) ≤ η. Suppose the contrary, that min_{i=1,...,n}(x^k_i − y^k_i) > η for however large k. Observe that for any x^k generated by the algorithm there exists a sequence

x^1 = x^{h_1} ≥ x^{h_2} ≥ ... ≥ x^{h_p} = x^k

such that

x^{h_{q+1}} = x^{h_q} − (x^{h_q}_{i_q} − y^{h_q}_{i_q}) e^{i_q},   q = 1, ..., p − 1.   (8)

Hence, setting

ρ_q = x^{h_q}_{i_q} − y^{h_q}_{i_q} > η

and adding the p − 1 equalities in (8), we obtain

x^k = x^1 − Σ_{q=1}^{p−1} ρ_q e^{i_q}.   (9)

Let i_q = j for N_j values of q = 1, ..., p − 1. Since x^1 = b, the equalities (9) imply that, for every j = 1, ..., n,

b_j ≥ b_j − x^k_j = Σ_{i_q=j} ρ_q ≥ N_j η,

so that N_j ≤ b_j/η for all j = 1, ..., n. Hence p = 1 + N_1 + ... + N_n ≤ 1 + Σ_{j=1}^n (b_j/η), conflicting with p becoming arbitrarily large as k → +∞. We have thus proved that for any η > 0 there exists k such that min_{i=1,...,n}(x^k_i − y^k_i) ≤ η. Noting that y^k = λ_k x^k and x^k_i ≥ ε for all i, we then have

min_{i=1,...,n}(x^k_i − y^k_i) = min_{i=1,...,n}(1 − λ_k)x^k_i ≥ (1 − λ_k)ε,

hence 1 − λ_k ≤ (1/ε) min_{i=1,...,n}(x^k_i − y^k_i) ≤ η/ε, i.e.

‖x^k − y^k‖ = (1 − λ_k)‖x^k‖ ≤ (η/ε)‖b‖.

Thus, if η > 0 is chosen so small that f(x^k) − f(y^k) ≤ δ whenever ‖x^k − y^k‖ ≤ (η/ε)‖b‖, then for some k sufficiently large we will have f(x^k) − f(ȳ^k) ≤ f(x^k) − f(y^k) ≤ δ and the algorithm will stop. □

7 Illustrative Example

To illustrate how the algorithm works, we give a small example. Consider the problem

max{f(x) | x_1 x_2 x_3 ≤ 1, 0 ≤ x_i ≤ 5, i = 1, 2, 3},   (10)

where the objective function f(x) is defined by f(x) = min{x_1 + x_2 + x_3, 0.5x_1 + 2x_2 + x_3}. This is a problem (P), since both f(x) and g(x) := x_1 x_2 x_3 are increasing on R^3_+. To solve this problem with tolerances ε = 0.001 and δ = 0.01, the proposed algorithm proceeds as follows.

At iteration 1, V_1 = {(5, 5, 5)} and x^1 = (5, 5, 5). The line through 0 and x^1 intersects the upper boundary of D at the point y^1 = (1, 1, 1), and we have f(x^1) = 15, f(y^1) = 3. The extreme points of the rectangle [y^1, x^1] that are adjacent to x^1 are (5, 1, 5), (1, 5, 5) and (5, 5, 1), so V_2 = {(5, 1, 5), (1, 5, 5), (5, 5, 1)}.

At iteration 2, the values of the objective function at each point of V_2 are computed:

x      (5, 1, 5)   (1, 5, 5)   (5, 5, 1)
f(x)   9.5         11          11

So x^2 = (5, 5, 1) ∈ argmax{f(x) | x ∈ V_2}. The line through 0 and x^2 intersects the upper boundary of D at y^2 = (1.710, 1.710, 0.342) with f(y^2) = 3.7619. The current best solution is then ȳ^2 = y^2. The extreme points of the rectangle [y^2, x^2] that are adjacent to x^2 are (5, 1.710, 1), (1.710, 5, 1) and (5, 5, 0.342), so

V_3 = {(5, 1.710, 1), (1.710, 5, 1), (5, 5, 0.342), (5, 1, 5), (1, 5, 5)}.

At iteration 3, the point x^3 ∈ argmax{f(x) | x ∈ V_3} is x^3 = (1, 5, 5), with f(x^3) = 11. The line through 0 and x^3 intersects the upper boundary of D at y^3 = (0.342, 1.710, 1.710) with f(y^3) = 3.7619. The current best solution is now ȳ^3 = y^2, and so on. Continuing this way, the algorithm terminates after 48 iterations, yielding the optimal value 10.040 and an optimal solution x_opt = (5, 5, 0.040).

8 Computational Experience

The algorithm has been tested on problems with 2, 4, 8 and 16 variables. The following types of increasing functions were chosen for both the objective function f(x) and the constraint function g(x):

- Cobb-Douglas functions f(x) = Π_i x_i^{α_i} (the cases with sums of powers less than 1, equal to 1 and greater than 1 were considered separately);
- quadratic functions f(x) = x^T Ax, where A is a random matrix with nonnegative entries a_ij such that either 0 ≤ a_ij ≤ 10, 0 ≤ a_ij ≤ 100 or 0 ≤ a_ij ≤ 1000;
- polynomials in x with nonnegative coefficients, where x ∈ R^n with n = 2, 4, 8, 16;
- minimax functions (the cases with linear and nonlinear terms were considered separately).

The results are summarized in Table 1 below.

Table 1 reports, for each tested combination of objective function type (Cobb-Douglas, quadratic, polynomial, minimax) and constraint function type (minimax with linear or nonlinear terms, polynomial, sum of Cobb-Douglas functions, quadratic), the number of variables (2, 4, 8 or 16) and the average solution time in seconds; the recorded average times range from 2 seconds for the smallest problems to about 700 seconds for the largest.

These results were obtained using Maple V version 3.0 running under Windows 95 on a Pentium machine with 32 MB RAM. In many instances we found that, whilst the algorithm itself proved to be efficient (in terms of the number of iterations required to reach the optimum), much of the computational time was spent on locating the upper boundary points of the feasible set. We have used two approaches for finding the upper boundary points. One is a version of Newton's technique for solving equations, the other is Bolzano's bisection procedure. For small problems (of dimension 2 or 4) the two approaches produced roughly similar results, but for larger problems Bolzano's bisection procedure was found to perform considerably better. For example, for an 8-variable problem where the constraint function was of minimax type or of the separable form Σ_i φ_i(x_i) and the objective function was either quadratic or of Cobb-Douglas type, the computational time when using this procedure never exceeded five minutes (even for very small values of the tolerance parameters ε and δ), while it exceeded 1 hour when using Newton's method, and in a worst case the process had to be prematurely aborted after 2 hours of unsuccessful computation. This definitely confirms that for the implementation of our algorithm the bisection technique, rather than Newton's method, should be used to locate the upper boundary points.

Execution times were also sensitive to the values of ε and δ. Typically, we set each of these values to 0.01 and, when a solution was obtained, we reset these values depending upon the relative sizes of the x_i values and the final value of the function. Typically, a reduction in the values of ε and δ by a factor of 10 led to an increase in calculation times by a factor of 3 to 4.

The final important factor was the dimensionality of the problem. Because the number of vertices to be examined at each iteration increases at a linear rate, the execution time for each iteration also increases linearly, and so the total execution time to locate a solution increases at a greater rate. For example, the total execution time for an 8-dimensional problem would be about 4 times that for a 4-dimensional problem of the same type.

For the 2-dimensional case, the algorithm always managed to solve the problem regardless of the form of the objective function, the constraint function or the values set for ε and δ. Execution times were less than 1 minute and the number of iterations typically did not exceed 100. The best results were obtained when the constraint function was a minimax function, and the worst results were recorded for problems where the constraint function took the form x + y ≤ 1. Similarly, the best results occurred when the objective function took the form x^α y^β and the worst results occurred when the objective function took the form x^{α_1} y^{β_1} + x^{α_2} y^{β_2}. Execution times for the former type of problem were about one quarter of those for the latter.

For the 4-, 8- and 16-variable cases, we found that the best results were obtained when the constraint function took the form x^T Ax, particularly when the entries of the matrix A were not very large. The numbers of iterations required to solve the problems were very small (typically less than 10) and the execution times were also small (less than 1 minute), regardless of the form of the objective function. Similar results were found when the constraint function was a minimax function (regardless of whether the terms were linear or nonlinear). In all cases, similar execution times were obtained regardless of whether Newton's method or Bolzano's method was used to locate the upper boundary points. On the other hand, for problems with a constraint function of Cobb-Douglas or polynomial type, solutions often could not be found in a suitable time when Newton's method was used to locate the upper boundary points. However, Bolzano's method resulted in satisfactory execution times in all cases.

Acknowledgement. Most of this work was completed during the visit of Hoang Tuy to the School of Information Technology and Mathematical Sciences, University of Ballarat, July 1998.


References

[1] R. Horst and H. Tuy: 1996, Global Optimization (Deterministic Approaches), third edition, Springer-Verlag, Berlin, New York.
[2] H. Konno, P.T. Thach and H. Tuy: 1997, Optimization on Low Rank Nonconvex Structures, Kluwer Academic Publishers, Dordrecht/Boston/London.
[3] V.L. Makarov and A.M. Rubinov: 1977, Mathematical Theory of Economic Dynamics and Equilibria, Springer-Verlag, Berlin, New York.
[4] D. Pallaschke and S. Rolewicz: 1997, Foundations of Mathematical Optimization, Kluwer Academic Publishers, Dordrecht/Boston/London.
[5] A. Rubinov and M. Andramonov: 1999, Lipschitz programming via increasing convex-along-rays functions, Optimization Methods and Software, 10, 763-781.
[6] A.M. Rubinov and B.M. Glover: 1998, Duality for increasing positively homogeneous functions and normal sets, RAIRO Operations Research, 32, 105-123.
[7] A.M. Rubinov and B.M. Glover: 1997, On generalized quasiconvex conjugation, Contemporary Mathematics, 204, 199-217.
[8] A.M. Rubinov and I. Singer: 1999, Best approximation by normal and conormal sets, Research Report 32/99, SITMS, University of Ballarat.
[9] I. Singer: 1997, Abstract Convex Analysis, John Wiley and Sons, New York.
[10] H. Tuy: 1995, D.C. optimization: theory, methods and algorithms, in R. Horst and P.M. Pardalos, eds., Handbook of Global Optimization, Kluwer Academic Publishers, Dordrecht/Boston/London, 149-216.
[11] H. Tuy: 1995, Canonical D.C. programming: outer approximation methods revisited, Operations Research Letters, 18, 99-106.
[12] H. Tuy: 1998, Convex Analysis and Global Optimization, Kluwer Academic Publishers, Dordrecht/Boston/London.
[13] H. Tuy and B.T. Tam: 1995, Polyhedral annexation vs outer approximation for decomposition of monotonic quasiconcave minimization problems, Acta Mathematica Vietnamica, 20, 86-99.
