Global minimization of increasing positively homogeneous functions over the unit simplex

A.M. Bagirov and A.M. Rubinov
School of Information Technology and Mathematical Sciences, The University of Ballarat, Vic 3353, Australia.

Research supported by Australian Research Council grants.

Abstract

In this paper we study a method for the global optimization of increasing positively homogeneous functions over the unit simplex, which is a version of the cutting angle method. Some properties of the auxiliary subproblem are studied and a special algorithm for its solution is proposed. A cutting angle method based on this algorithm allows one to find approximate solutions of some global optimization problems with up to 50 variables. Results of numerical experiments are discussed.

Key words: Global optimization, cutting angle method, increasing positively homogeneous function.

1 Introduction

The cutting angle method for global minimization of some functions on a finite-dimensional set $X$ was introduced and studied in [1, 2]. This method reduces to the solution of the following auxiliary problem:

$$h(x) \to \min \quad \text{subject to } x \in X \tag{1}$$
where
$$h(x) = \max_{k \le j}\Big(\min_{i \in I(l^k)} l^k_i x_i - c_k\Big),$$
$$l^k = (l^k_1, \ldots, l^k_n) \in \mathbb{R}^n_+, \quad I(l^k) = \{i : l^k_i > 0\}, \quad c_k \in \mathbb{R}, \quad k = 1, \ldots, j, \ j \ge n.$$

This problem is essentially of a combinatorial nature. Some methods for its solution, based on dynamic programming and integer programming, were proposed in [1, 7, 8]. These methods allow one to apply the cutting angle method successfully when the dimension $n$ of the space is not very high. In this paper we propose an alternative approach to the solution of problem (1) in the case when $c_k = 0$ for all $k \le j$ and $X$ coincides with the unit simplex.



The approach is based on a convenient description of all local minima of the function $h$; a global minimizer can then be found by sorting the local minima. The proposed approach allows us to apply the cutting angle method to the minimization of IPH (increasing positively homogeneous) functions over the unit simplex. The class of IPH functions is fairly large (see the examples in Section 2). It follows from the results of [8] that the minimization of a Lipschitz function over the unit simplex can be transformed into the minimization of an IPH function. (An example which includes the solution of a problem of this kind can be found in Section 7.) Using a transformation of variables and penalization, we can represent the minimization of a Lipschitz function subject to linear (and even some nonlinear) constraints as the minimization of a Lipschitz function over the simplex (see [8] for details). Thus the version of the cutting angle method studied in this paper can be applied (at least theoretically) to problems of Lipschitz minimization.

The cutting angle method is suitable for the search for an approximate solution of a global optimization problem. Very often such a solution is enough, since it can be refined by an appropriate local method. The cutting angle method based on the proposed approach allows us to solve many problems with up to 50 variables using a conventional PC. An appropriate combination of the cutting angle method with a local search can also be applied to the search for a global minimizer (see [1] for details). However, the choice of local methods and the adjustment of some parameters require additional investigation; we will present the corresponding results later.

The structure of the paper is as follows. In Section 2 we provide brief preliminary definitions and results related to IPH functions. In Section 3 we discuss the representation of an IPH function by min-type functions, which plays a key role in the discussed approach. A version of the cutting angle method is presented in Section 4, and the local minima of problem (1) are described in Section 5. The solution of this problem is discussed in Section 6. Results of numerical experiments are described in Section 7. A brief conclusion is provided in Section 8.

2 IPH functions

Consider the $n$-dimensional linear space $\mathbb{R}^n$. We shall use the following notation:

- $I = \{1, \ldots, n\}$;
- $x_i$ is the $i$-th coordinate of a vector $x \in \mathbb{R}^n$;
- $[l, x] = \sum_{i \in I} l_i x_i$ is the inner product of the vectors $l$ and $x$;
- if $x, y \in \mathbb{R}^n$ then $x \ge y \iff x_i \ge y_i$ for all $i \in I$;
- if $x, y \in \mathbb{R}^n$ then $x \gg y \iff x_i > y_i$ for all $i \in I$;
- $\mathbb{R}^n_+ := \{x = (x_1, \ldots, x_n) \in \mathbb{R}^n : x_i \ge 0 \text{ for all } i \in I\}$ (the nonnegative orthant);
- $S = \{x \in \mathbb{R}^n_+ : \sum_{i \in I} x_i = 1\}$ (the unit simplex).

We shall study the following optimization problem:

$$f(x) \to \min \quad \text{subject to } x \in S \tag{2}$$

where $f$ is an IPH (increasing positively homogeneous of degree one) function defined on $\mathbb{R}^n_+$. Recall that a function $f$ defined on $\mathbb{R}^n_+$ is called increasing if $x \ge y$ implies $f(x) \ge f(y)$; the function $f$ is positively homogeneous of degree one if $f(\lambda x) = \lambda f(x)$ for all $x \in \mathbb{R}^n_+$ and $\lambda > 0$. We now give some examples of IPH functions.

Example 2.1 The following functions are IPH:
1) $a(x) = \sum_{i \in I} a_i x_i$ with $a_i \ge 0$;
2) $p_k(x) = \big(\sum_{i \in I} x_i^k\big)^{1/k}$ ($k > 0$);
3) $f(x) = \sqrt{[Ax, x]}$, where $A$ is a matrix with nonnegative entries;
4) $f(x) = \prod_{j \in J} x_j^{t_j}$, where $J \subseteq I$, $t_j > 0$, $\sum_{j \in J} t_j = 1$.
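For illustration, here is a minimal Python sketch (our own, assuming NumPy; the test points are arbitrary) that evaluates the first three functions of Example 2.1 and checks the two defining IPH properties numerically:

```python
import numpy as np

# Functions 1)-3) of Example 2.1 (illustrative sketch).
def linear(x, a):                 # a(x) = sum_i a_i x_i, a_i >= 0
    return a @ x

def p_norm(x, k):                 # p_k(x) = (sum_i x_i^k)^(1/k), k > 0
    return np.sum(x ** k) ** (1.0 / k)

def quad_sqrt(x, A):              # f(x) = sqrt([Ax, x]), A nonnegative
    return np.sqrt(x @ A @ x)

rng = np.random.default_rng(0)
x = rng.random(4)
y = x + rng.random(4)             # y >= x componentwise
a, A, lam = rng.random(4), rng.random((4, 4)), 2.5

for f in (lambda z: linear(z, a),
          lambda z: p_norm(z, 3.0),
          lambda z: quad_sqrt(z, A)):
    assert f(y) >= f(x)                        # increasing
    assert np.isclose(f(lam * x), lam * f(x))  # positively homogeneous
```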

It is easy to check that the following properties hold:
1) the sum of two IPH functions is an IPH function;
2) if $f$ is IPH, then the function $\lambda f$ is IPH for all $\lambda > 0$;
3) if $T$ is an arbitrary index set and $(f_t)_{t \in T}$ is a family of IPH functions, then the function $f_{\inf}(x) = \inf_{t \in T} f_t(x)$ is IPH;
4) if $(f_t)_{t \in T}$ is the same family and there exists a point $y \gg 0$ such that $\sup_{t \in T} f_t(y) < +\infty$, then the function $f_{\sup}(x) = \sup_{t \in T} f_t(x)$ is finite and IPH.
These properties allow us to give two more examples of IPH functions.

Example 2.2 The following maxmin functions are IPH:
1) $f(x) = \max_{k \in K} \min_{j \in J} \sum_{i \in I} a^{jk}_i x_i$, where $a^{jk}_i \ge 0$, $k \in K$, $j \in J$, $i \in I$. Here $J$ and $K$ are finite sets of indices;
2)
$$f(x) = \max_{k \in K}\ \min_{j \in J_k}\ \sum_{i \in I} a^j_i x_i \tag{3}$$
where $a^j_i \ge 0$, $j \in J_k$, $k \in K$. Here $K$ and $J_k$ are finite sets of indices.

Note that an arbitrary piecewise linear function $f$ generated by a collection of linear functions $a^1, \ldots, a^m$ can be represented in the form (3) (see [3]); hence an arbitrary piecewise linear function generated by nonnegative vectors is IPH.
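A short sketch (ours, with illustrative data) of evaluating the maxmin form (3); each element of `A_list` stacks the nonnegative rows $a^j$, $j \in J_k$, for one $k$:

```python
import numpy as np

def maxmin(x, A_list):
    """f(x) = max_k min_{j in J_k} [a^j, x] of (3); each element of
    A_list is a matrix whose rows are the vectors a^j for one k."""
    return max(np.min(A @ x) for A in A_list)

# two groups J_1, J_2 of nonnegative rows (illustrative data)
A_list = [np.array([[1.0, 2.0, 0.5], [0.2, 1.0, 3.0]]),
          np.array([[2.0, 0.1, 1.0]])]
x = np.array([0.3, 0.3, 0.4])
print(maxmin(x, A_list))
```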

Remark 2.1 Let $f$ be a Lipschitz function defined on the unit simplex $S$. Let $\min_{x \in S} f(x) \ge c > 0$ and let $L$ be a Lipschitz constant of the function $f$ in the $\|\cdot\|_1$ norm, that is,
$$|f(x^1) - f(x^2)| \le L \sum_{i \in I} |x^1_i - x^2_i|$$
for all $x^1, x^2 \in S$. Assume that $2L/c \le 1$. Then the function $g$ defined on $\mathbb{R}^n_+$ by
$$g(x) = \Big(\sum_{i \in I} x_i\Big)\, f\Big(\frac{x}{\sum_{i \in I} x_i}\Big) \tag{4}$$
is IPH ([7]). Note that $g(x) = f(x)$ for $x \in S$, so $g$ is an IPH extension of $f$.

Consider now an arbitrary Lipschitz function $\varphi$ defined on $S$, and consider the function $f(x) = \varphi(x) + M$, where $M$ is a very large number. Since the Lipschitz constant of the function $f$ coincides with the Lipschitz constant of $\varphi$, we can obtain the inequality $2L/c \le 1$ by choosing a sufficiently large $M$. Using (4) we can construct the IPH function $g(x)$, which coincides with $\varphi(x) + M$ for $x \in S$. Note that the functions $\varphi$, $f$ and $g$ have the same global minimizers on $S$. Hence global minimization of a Lipschitz function over $S$ can be accomplished by global minimization of IPH functions.
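The extension (4) and the shift by $M$ are easy to express in code; the following Python sketch is our own illustration (the function `phi` and the value $M = 10$ are arbitrary choices):

```python
import numpy as np

def iph_extension(f, x):
    """g of (4): g(x) = (sum_i x_i) * f(x / sum_i x_i), x in R^n_+, x != 0."""
    s = np.sum(x)
    return s * f(x / s)

def phi(x):                       # some Lipschitz function on S (arbitrary)
    return np.sin(5.0 * x[0]) + x[1] ** 2

M = 10.0                          # shift ensuring min f >= c with 2L/c <= 1
g = lambda x: iph_extension(lambda y: phi(y) + M, x)

x = np.array([0.2, 0.3, 0.5])     # a point of the unit simplex
print(g(x), phi(x) + M)           # the two values coincide on S
```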

3 Presentation of an IPH function by min-type functions

Let $l \in \mathbb{R}^n_+$, $l \ne 0$, and $I(l) = \{i \in I : l_i > 0\}$. Consider the function
$$x \mapsto \langle l, x\rangle \tag{5}$$
where
$$\langle l, x\rangle = \min_{i \in I(l)} l_i x_i. \tag{6}$$
The function (5) is called a min-type function generated by the vector $l$. We shall denote this function by the same symbol $l$. Clearly a min-type function is IPH. Let $L$ be the set of all functions $x \mapsto \langle l, x\rangle$ with $l \in \mathbb{R}^n_+ \setminus \{0\}$. We shall use the following notation:
$$\Big(\frac{c}{l}\Big)_i = \begin{cases} \dfrac{c}{l_i} & \text{if } i \in I(l), \\ 0 & \text{if } i \notin I(l). \end{cases} \tag{7}$$
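In code, the min-type function (6) and the vector $c/l$ of (7) might look as follows (a sketch of ours, assuming NumPy):

```python
import numpy as np

def min_type(l, x):
    """<l, x> of (6): min over the set I(l) = {i : l_i > 0} of l_i * x_i."""
    idx = l > 0
    return np.min(l[idx] * x[idx])

def ratio(c, l):
    """The vector c/l of (7): c / l_i on I(l), zero elsewhere."""
    out = np.zeros_like(l)
    idx = l > 0
    out[idx] = c / l[idx]
    return out
```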

The following result holds (see [7]).

Theorem 3.1
1) A finite function $f$ defined on $\mathbb{R}^n_+$ is IPH if and only if
$$f(x) = \max\{\langle l, x\rangle : l \in L,\ l \le f\}.$$
2) Let $x^0 \in \mathbb{R}^n_+$ be a vector such that $f(x^0) > 0$ and let $l = f(x^0)/x^0$. Then $\langle l, x\rangle \le f(x)$ for all $x \in \mathbb{R}^n_+$ and $\langle l, x^0\rangle = f(x^0)$.

Subsequently we will use the unit orths of the space $\mathbb{R}^n$. Consider the $m$-th orth $e^m = (0, \ldots, 0, 1, 0, \ldots, 0)$. Clearly $I(e^m) = \{m\}$; hence the vector $l = f(e^m)/e^m$ can be represented in the form $l = f(e^m) e^m$. Clearly
$$\langle f(e^m) e^m, x\rangle = f(e^m) x_m.$$
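Theorem 3.1(2) suggests the following construction of a min-type minorant, sketched here (our illustration) with the IPH function $p_2$ of Example 2.1 as a test case, reusing `min_type` from the previous sketch:

```python
import numpy as np

def support_vector(f, x0):
    """l = f(x0)/x0 in the sense of (7); by Theorem 3.1(2), <l, x> <= f(x)
    for all x in R^n_+ and <l, x0> = f(x0)."""
    l = np.zeros_like(x0)
    idx = x0 > 0
    l[idx] = f(x0) / x0[idx]
    return l

f = lambda x: np.sqrt(np.sum(x ** 2))     # p_2 of Example 2.1, an IPH function
x0 = np.array([0.2, 0.5, 0.3])
l = support_vector(f, x0)

x = np.array([0.1, 0.7, 0.2])
assert min_type(l, x) <= f(x)             # minorant property
assert np.isclose(min_type(l, x0), f(x0)) # exact at x0
```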

4 Algorithm

We now present an algorithm for the search for a global minimizer of an IPH function $f$ over the simplex $S$. Note that an IPH function is nonnegative on $\mathbb{R}^n_+$, since $f(x) \ge f(0) = 0$. We assume that $f(x) > 0$ for all $x \in S$. It follows from the positivity of $f$ that $I(l) = I(x)$ for all $x \in S$ and $l = f(x)/x$.

Algorithm

Step 0. (Initialization) Take the points $x^m = e^m$, $m = 1, \ldots, n$. Choose arbitrary points $x^{n+i} \in S$ with $x^{n+i} \gg 0$, $i = 1, \ldots, q$. Let $l^k = f(x^k)/x^k$, $k = 1, \ldots, n + q$. Define the function $h_{n+q}$:
$$h_{n+q}(x) = \max_{k=1,\ldots,n+q}\ \min_{i \in I(x^k)} l^k_i x_i.$$

Step 1. Let the point $x^j$ have been constructed. Let $l^j = f(x^j)/x^j$ and
$$h_j(x) = \max\Big(h_{j-1}(x),\ \min_{i \in I(x^j)} l^j_i x_i\Big) \equiv \max_{k=1,\ldots,j}\ \min_{i \in I(x^k)} l^k_i x_i.$$

Step 2. Solve the problem
$$h_j(x) \to \min \quad \text{subject to } x \in S. \tag{8}$$

Step 3. Let $y$ be a solution of problem (8). Set $x^{j+1} = y$ and go to Step 1.

This algorithm can be considered as a version of the cutting angle method ([1, 2]). A more general version of this algorithm, a bundle-type method, has been discussed in [6]; its convergence can be proved under very mild assumptions (see [6]). The algorithm provides lower and upper estimates of the global minimum $f_*$ of problem (2).


Let
$$\lambda_j = \min_{x \in S} h_j(x) \tag{9}$$
be the value of problem (8). It follows from Theorem 3.1 that
$$\langle l^k, x\rangle = \min_{i \in I(x^k)} l^k_i x_i \le f(x) \quad \text{for all } x \in S,\ k = 1, \ldots, j.$$
Hence $h_j(x) \le f(x)$ for all $x \in S$ and
$$\lambda_j = \min_{x \in S} h_j(x) \le \min_{x \in S} f(x).$$
Thus $\lambda_j$ is a lower estimate of the global minimum $f_*$. Consider the number $\beta_j = f(x^j)$. Clearly $\beta_j \ge f_*$, so $\beta_j$ is an upper estimate of $f_*$. It can be shown ([1]) that $(\lambda_j)$ is an increasing sequence and $\beta_j - \lambda_j \to 0$ as $j \to +\infty$. Thus we have a stopping criterion, which enables us to obtain an approximate solution with an arbitrary given tolerance.
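To make Steps 0-3 and the stopping criterion concrete, here is a schematic Python sketch of our own (not the authors' Fortran implementation): the exact subproblem (8) of Step 2 is replaced by a brute-force search over a grid on $S$, feasible only for very small $n$; the actual method uses the procedure of Section 6 instead.

```python
import numpy as np
from itertools import product

def support(f, x):
    """Support vector l = f(x)/x (Theorem 3.1(2))."""
    l = np.zeros_like(x)
    idx = x > 0
    l[idx] = f(x) / x[idx]
    return l

def cutting_angle(f, n, tol=1e-2, max_iter=100, grid=20):
    # grid points of the unit simplex (crude placeholder for Step 2)
    pts = [np.array(p, float) / grid
           for p in product(range(grid + 1), repeat=n) if sum(p) == grid]

    # Step 0: vertices e^1, ..., e^n plus one strictly positive point
    xs = [np.eye(n)[m] for m in range(n)] + [np.full(n, 1.0 / n)]
    ls = [support(f, x) for x in xs]

    def h(x):  # h_j(x) = max_k min_{i in I(l^k)} l^k_i x_i
        return max(np.min(l[l > 0] * x[l > 0]) for l in ls)

    beta = min(f(x) for x in xs)          # upper estimate of f_*
    for _ in range(max_iter):
        y = min(pts, key=h)               # Step 2, solved approximately
        lam = h(y)                        # lower estimate lambda_j of (9)
        beta = min(beta, f(y))
        if beta - lam <= tol:             # stopping criterion
            return y, beta, lam
        ls.append(support(f, y))          # Step 1 with x^{j+1} = y (Step 3)
    return y, beta, lam
```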

Remark 4.1 Let $f_*$ be the value of the global minimum of a function $f$ over the simplex $S$. The precision $\varepsilon_r$ of the current point $x^j$ is defined as follows:
$$\varepsilon_r(x^j) = \min\Big(f(x^j) - f_*,\ \frac{f(x^j) - f_*}{f_*}\Big).$$
Very often the quantity $f_*$ is unknown, so we shall consider the number
$$\varepsilon(x^j) = \min\Big(f(x^j) - \lambda_j,\ \frac{f(x^j) - \lambda_j}{\lambda_j}\Big)$$
as an estimate of the precision. Note that $\varepsilon(x^j) \ge \varepsilon_r(x^j)$. Numerical experiments show that $\varepsilon_r(x^j)$ is substantially less than $\varepsilon(x^j)$ in many instances. Thus we often have a substantially more precise solution than the estimate $\varepsilon$ of the precision indicates.

Remark 4.2 Consider a Lipschitz function $\varphi$ defined on the simplex $S$. Let $f(x) = \varphi(x) + M$, where $M$ is a number such that $c = \min_{x \in S} f(x) > 0$ and $2L/c \le 1$, $L$ being a Lipschitz constant of the function $\varphi$ in the $\|\cdot\|_1$ norm. Then the function $g$ defined by (4) is IPH. Since $g$ and $\varphi$ have the same minimizers over the simplex, we can apply the cutting angle method to the minimization of the function $g$ over $S$ in order to find a global minimizer of the function $\varphi$. It is easy to express this method in terms of the function $\varphi$ itself. Indeed, since $g(x) = \varphi(x) + M$ for $x \in S$, we need only change the construction of the vectors $l^k$ (see Step 0 and Step 1). These vectors should be defined in the following way:
$$l^k = \frac{\varphi(x^k) + M}{x^k}.$$

Remark 4.3 The computation time for solving the subproblem (8) sharply increases with the number $j$ of vectors $l^k$. One way to reduce the computation time is to use a renewal process: when adding a new vector, we exclude one of the vectors $l^{n+1}, \ldots, l^j$. Hence the maximal number $j = n + m$ of vectors $l^k$ is fixed. Starting with iterate $n + m + 1$, each stored vector is shifted one position down and the oldest vector $l^{n+1}$ is removed, so that the new vector takes the last position. Here the value of $m$ depends on the concrete problem. It should be noted that the convergence of the main algorithm is proved without any renewal process, so we cannot guarantee that this process leads to a global minimizer. However, at each iterate $j$, having the lower bound $\lambda_j$ as defined by (9), we can easily evaluate how far from the global minimum we are. Numerical experiments demonstrate that this method allows one to find an approximate solution quickly enough.
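One possible reading of this renewal scheme in code (a sketch; the exact shifting rule is a judgment call on our part):

```python
def renew(ls, l_new, n, m):
    """Renewal process of Remark 4.3 (one reading): the first n vertex
    vectors are always kept and at most m further vectors are stored;
    when the buffer is full, the oldest non-vertex vector l^{n+1} is
    removed before the new vector is appended."""
    if len(ls) >= n + m:
        del ls[n]          # drop l^{n+1}; later vectors shift down by one
    ls.append(l_new)       # the new vector takes the last position
```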

5 Auxiliary problem

Step 2 (finding the global minimum in problem (8)) is the most difficult part of the algorithm. This problem can be represented in the following form (we omit the index $j$ for the sake of simplicity):
$$h(x) \to \min \quad \text{subject to } x \in S \tag{10}$$
where
$$h(x) = \max_{k \le j}\Big(\min_{i \in I(l^k)} l^k_i x_i\Big) \tag{11}$$
and $j \ge n + q$, $l^k = f(x^k)/x^k$ are given vectors ($k = 1, \ldots, j$). Note that $x^k = e^k$, $k = 1, \ldots, n$. Let
$$\psi_k(x) = \min_{i \in I(l^k)} l^k_i x_i; \tag{12}$$
then $h(x) = \max_{k \le j} \psi_k(x)$.

Proposition 5.1 Let $j > n$, $l^k = l^k_k e^k$, $k = 1, \ldots, n$, and $l^k \gg 0$, $k = n + 1, \ldots, j$. Then each local minimizer of the function $h$ defined by (11) over the simplex $S$ is a strictly positive vector.

Proof: It is sufficient to show that for each $x \in S$ that is not strictly positive and for each $\varepsilon > 0$ there exists $x' \in S$ such that $x' \gg 0$, $\|x' - x\| < \varepsilon$ and $h(x') < h(x)$. For $x \in S$ consider the set $I_0(x) = I \setminus I(x) = \{i \in I : x_i = 0\}$ and assume that it is nonempty. Let us calculate the functions $\psi_k$ defined by (12) at the point $x$. We have
$$\psi_k(x) = \min_{i \in I(l^k)} l^k_i x_i = l^k_k x_k, \quad k = 1, \ldots, n. \tag{13}$$
In particular,
$$\psi_k(x) = 0, \quad k \in I_0(x). \tag{14}$$

We also have
$$\psi_k(x) = \min_{i \in I} l^k_i x_i = 0, \quad k = n + 1, \ldots, j. \tag{15}$$
It follows from (13), (14) and (15) that $\psi_k(x) > 0$ if and only if $k \le n$ and $k \notin I_0(x)$, that is, $k \in I(x)$. Hence
$$h(x) = \max_k \psi_k(x) = \max_{k \in I(x)} l^k_k x_k. \tag{16}$$
Let $\varepsilon > 0$ be a small number and $m = |I_0(x)|$. Consider the point $x(\varepsilon)$, where
$$x(\varepsilon)_i = \begin{cases} x_i - \varepsilon & \text{if } i \in I(x), \\[2pt] \dfrac{n - m}{m}\,\varepsilon & \text{if } i \in I_0(x). \end{cases}$$
We have $x(\varepsilon) \in S$ and $x(\varepsilon) \gg 0$ for fairly small $\varepsilon$. Let us calculate $h(x(\varepsilon)) = \max_{k \le j} \psi_k(x(\varepsilon))$.

Let $k \ge n + 1$. For sufficiently small $\varepsilon$ we have
$$\psi_k(x(\varepsilon)) = \min_{i \in I} l^k_i x(\varepsilon)_i = \frac{n - m}{m}\,\varepsilon\ \min_{i \in I_0(x)} l^k_i. \tag{17}$$
Let $k \in I_0(x)$. Then
$$\psi_k(x(\varepsilon)) = \min_{i \in I(l^k)} l^k_i x(\varepsilon)_i = l^k_k\,\frac{n - m}{m}\,\varepsilon. \tag{18}$$
Let $k \in I(x)$. Then
$$\psi_k(x(\varepsilon)) = \min_{i \in I(l^k)} l^k_i x(\varepsilon)_i = l^k_k (x_k - \varepsilon). \tag{19}$$
It follows from (17), (18) and (19) that for very small $\varepsilon$
$$h(x(\varepsilon)) = \max_{k \le j} \psi_k(x(\varepsilon)) = \max_{k \in I(x)} l^k_k (x_k - \varepsilon). \tag{20}$$
It follows from (16) and (20) that $h(x(\varepsilon)) < h(x)$. □

Corollary 5.1 Let $(x^j)$ be a sequence generated by the algorithm. Then $x^j \gg 0$ for all $j > n$; hence $l^j \gg 0$ for all $j > n$.

Proof: It follows from Proposition 5.1 by induction on $j$. □

Let $\operatorname{ri} S = \{x \in S : x_i > 0 \text{ for all } i \in I\}$ ($\operatorname{ri} S$ is the relative interior of the simplex $S$). It follows from Proposition 5.1 and Corollary 5.1 that we can solve problem (10) by sorting the local minima of the function $h$ over the set $\operatorname{ri} S$. We now describe some properties of the local minima of $h$ on $\operatorname{ri} S$.

We need the following well-known definition. Let $f$ be a function defined on a set $X \subseteq \mathbb{R}^n$, let $x \in X$, and let an element $u \in \mathbb{R}^n$ be such that $x + \alpha u \in X$ for all fairly small $\alpha > 0$. Then the limit
$$f'(x, u) = \lim_{\alpha \to +0} \frac{1}{\alpha}\big(f(x + \alpha u) - f(x)\big)$$

is called the directional derivative at the point $x$ in the direction $u$. It is well known that the functions $\psi_k$ and $h$ are directionally differentiable. Let
$$R(x) = \{k : \psi_k(x) = h(x)\}, \qquad Q_k(x) = \{i \in I(l^k) : \psi_k(x) = l^k_i x_i\}. \tag{21}$$

Proposition 5.2 (see, for example, [4]) Let $x \gg 0$. Then
$$\psi_k'(x, u) = \min_{i \in Q_k(x)} l^k_i u_i, \qquad h'(x, u) = \max_{k \in R(x)} \psi_k'(x, u) = \max_{k \in R(x)}\ \min_{i \in Q_k(x)} l^k_i u_i.$$

Let $x \in S$. The cone
$$K(x, S) = \{u \in \mathbb{R}^n : \exists\, \alpha_0 > 0 \text{ such that } x + \alpha u \in S\ \ \forall \alpha \in (0, \alpha_0)\}$$
is called the tangent cone at the point $x$ with respect to the simplex $S$. The following necessary conditions for a local minimum hold (see, for example, [4]).

Proposition 5.3 Let $x \in S$ be a local minimizer of the function $h$ over the set $S$. Then $h'(x, u) \ge 0$ for all $u \in K(x, S)$.

Proposition 5.4 Let $x \in \operatorname{ri} S$. Then
$$K(x, S) = \Big\{u : \sum_{i \in I} u_i = 0\Big\}.$$

Proof: It follows directly from the definition. □

Applying Propositions 5.2, 5.3 and 5.4 we can obtain the following result.

Proposition 5.5 Let $x \gg 0$ be a local minimizer of the function $h$ over the set $\operatorname{ri} S$ such that $h(x) > 0$. Then there exists a subset $\{l^{k_1}, l^{k_2}, \ldots, l^{k_n}\}$ of the set $\{l^1, \ldots, l^j\}$ such that

1)
$$x = \Big(\frac{d}{l^{k_1}_1}, \ldots, \frac{d}{l^{k_n}_n}\Big), \quad \text{where } d = \frac{1}{\sum_{i \in I} 1/l^{k_i}_i}; \tag{22}$$

2)
$$\max_{k \le j}\ \min_{i \in I(l^k)} \frac{l^k_i}{l^{k_i}_i} = 1; \tag{23}$$

3) either $k_i = i$ for all $i \in I$ or there exists $m \in I$ such that $k_m \ge n + 1$; if $k_m \le n$ then $k_m = m$;

4) if $k_m \ge n + 1$ then $l^{k_m}_i > l^{k_i}_i$ for all $i \ne m$.

Proof: Let $x \gg 0$ be a local minimizer. Then for each $u \in K(x, S)$ there exists $k \in R(x)$ such that
$$\psi_k'(x, u) = \min_{i \in Q_k(x)} l^k_i u_i \ge 0. \tag{24}$$
Let $m \in I$. For each $i \ne m$ choose a number $\alpha_i > 0$ such that $\sum_{i \ne m} \alpha_i = 1$, and consider the vector $u$ such that
$$u_i = \begin{cases} 1 & \text{if } i = m, \\ -\alpha_i & \text{if } i \ne m. \end{cases}$$

It follows from Proposition 5.4 that $u \in K(x, S)$. Hence there exists $k \in R(x)$ such that (24) holds. If $Q_k(x) \ne \{m\}$ then
$$\psi_k'(x, u) = \min_{i \in Q_k(x)} l^k_i u_i < 0,$$
which contradicts (24). Hence $Q_k(x) = \{m\}$. Thus for each $m \in I$ there exists $k_m \in R(x)$ such that $Q_{k_m}(x) = \{m\}$. Let $m_1 \ne m_2$ and let $k_{m_i}$ be an index such that $Q_{k_{m_i}} = \{m_i\}$, $i = 1, 2$. Since $m_1 \ne m_2$ it follows that $k_{m_1} \ne k_{m_2}$.

Let $m \in I$ and let $k_m \in R(x)$ be an arbitrary index such that $Q_{k_m} = \{m\}$. Since $k_m \in R(x)$ and $m \in Q_{k_m}(x)$, it follows that $h(x) = \psi_{k_m}(x) = l^{k_m}_m x_m$. Hence
$$x_m = \frac{h(x)}{l^{k_m}_m} \quad \text{for all } m \in I. \tag{25}$$
It follows from (25) that
$$x = \Big(\frac{h(x)}{l^{k_1}_1}, \ldots, \frac{h(x)}{l^{k_n}_n}\Big). \tag{26}$$
Since $x \in S$ we have
$$1 = \sum_{i \in I} x_i = h(x) \sum_{i \in I} \frac{1}{l^{k_i}_i}.$$
Hence
$$h(x) = \frac{1}{\sum_{i \in I} 1/l^{k_i}_i}. \tag{27}$$
Thus item 1) of the Proposition has been proven.

We now demonstrate that item 2) holds. Let us calculate $h(x)$, keeping (26) in mind:
$$h(x) = \max_{k \le j}\ \min_{i \in I(l^k)} l^k_i x_i = \max_{k \le j}\ \min_{i \in I(l^k)} l^k_i \frac{h(x)}{l^{k_i}_i}.$$
Thus
$$1 = \max_{k \le j}\ \min_{i \in I(l^k)} \frac{l^k_i}{l^{k_i}_i}.$$
Item 2) has been proven.

Let
$$K = \{k_1, \ldots, k_n\}, \quad K_1 = \{k_m \in K : k_m \le n\}, \quad K_2 = \{k_m \in K : k_m \ge n + 1\}.$$
Let $k_m \in K_1$, that is $k_m \le n$. It follows from (25) that $l^{k_m}_m \ne 0$. Since the unique nonzero coordinate of the vector $l^{k_m}$ is $l^{k_m}_{k_m}$, we have $k_m = m$ and
$$x_m = \frac{h(x)}{l^m_m} \quad (k_m = m \in K_1). \tag{28}$$
If $K_2 = \emptyset$ then $K_1 = \{1, 2, \ldots, n\}$ and $x = \bar{x}$, where
$$\bar{x} = h(x)\Big(\frac{1}{l^1_1}, \ldots, \frac{1}{l^n_n}\Big) \quad \text{with} \quad h(x) = \frac{1}{\sum_{i \in I} 1/l^i_i}.$$
Thus, if $x \ne \bar{x}$ then $K_2 \ne \emptyset$. Item 3) has been proven.

Let $k_m \in K_2$. Then $I(l^{k_m}) = I$ and $Q_{k_m} = \{i \in I : \psi_{k_m}(x) = l^{k_m}_i x_i\}$. Since $k_m \in R(x)$ and $Q_{k_m} = \{m\}$, we have $l^{k_m}_i x_i > \psi_{k_m}(x) = h(x)$ for $i \ne m$. It follows from (26) that
$$l^{k_m}_i \frac{h(x)}{l^{k_i}_i} > h(x), \quad i \ne m.$$
Hence
$$l^{k_m}_i > l^{k_i}_i \quad \text{for all } k_m \in K_2,\ i \ne m.$$
Item 4) has been proven. □

Remark 5.1 Consider the equality (23). Let
$$v = \max_{k \le j}\ \min_{i \in I(l^k)} \frac{l^k_i}{l^{k_i}_i}.$$
It follows from (23) that $v = 1$. Since $I(l^m) = \{m\}$ for $m \le n$ and $I(l^k) = I$ for $k \ge n + 1$, we have
$$v = \max\Big(\max_{m \le n} \frac{l^m_m}{l^{k_m}_m},\ \max_{k \ge n+1}\ \min_{i \in I} \frac{l^k_i}{l^{k_i}_i}\Big).$$
Assume that $K_1 \ne \emptyset$ and $m \in K_1$. Then $k_m = m$, hence $l^m_m / l^{k_m}_m = 1$. Thus $v = 1$ if and only if
$$l^m_m \le l^{k_m}_m \ \text{ for all } m \le n,\ m \notin K_1, \qquad \min_{i \in I} \frac{l^k_i}{l^{k_i}_i} \le 1 \ \text{ for all } k \ge n + 1.$$

6 Solution of the auxiliary problem

It follows from Propositions 5.1 and 5.5 that we can find a global minimizer of the function $h$ defined by (11) over the unit simplex using the following procedure (a brute-force sketch of it is given below):

- sort all subsets $(l^{k_1}, \ldots, l^{k_n})$ of the given set of vectors $\{l^1, \ldots, l^j\}$ such that (23) holds, $l^{k_m}_i > l^{k_i}_i$ ($i \ne m$) whenever $k_m \ge n + 1$, and $k_m = m$ whenever $k_m \le n$;
- for each such subset, find the vector $x$ defined by (22);
- choose, among all the vectors described above, the vector with the least value of the function $h$.
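The following Python sketch (ours) implements this procedure by plain enumeration, using (22) to build each candidate and (23) to filter; it ignores the pruning supplied by items 3) and 4) of Proposition 5.5, which the actual algorithm exploits, so it is exponential in $n$ and only meant to clarify the construction:

```python
import numpy as np
from itertools import product

def h(x, ls):
    """h of (11): max over k of min over I(l^k) of l^k_i x_i."""
    return max(np.min(l[l > 0] * x[l > 0]) for l in ls)

def solve_auxiliary(ls, tol=1e-9):
    j, n = len(ls), len(ls[0])
    best_x, best_val = None, np.inf
    for ks in product(range(j), repeat=n):
        li = np.array([ls[k][i] for i, k in enumerate(ks)])
        if np.any(li <= 0):
            continue                        # (22) requires l^{k_i}_i > 0
        d = 1.0 / np.sum(1.0 / li)          # d of (22)
        x = d / li                          # candidate local minimizer (22)
        # condition (23): max_k min_{i in I(l^k)} l^k_i / l^{k_i}_i = 1
        v = max(np.min(l[l > 0] / li[l > 0]) for l in ls)
        if abs(v - 1.0) > tol:
            continue
        val = h(x, ls)
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val
```

For instance, on the six vectors of Example 6.1 below, the subset $\{l^1, l^2, l^3\}$ passes the test (23) and yields $x = (3/13, 4/13, 6/13)$ with $h(x) = 12/13$.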

Thus the search for a global minimizer reduces to sorting certain subsets containing $n$ elements of the given set $\{l^1, \ldots, l^j\}$ with $j > n$. Fortunately, Proposition 5.5 allows one to substantially diminish the number of subsets to be sorted. The following simple example clarifies this point.

Example 6.1 Assume we have the following set of three-dimensional vectors:
$$l^1 = (4, 0, 0),\ l^2 = (0, 3, 0),\ l^3 = (0, 0, 2),\ l^4 = (3, 3, 1),\ l^5 = (1, 2, 1),\ l^6 = (8, 3, 1).$$
We would like to find all subsets $(l^{k_1}, l^{k_2}, l^{k_3})$ of this set such that

1)
$$\max\Big(\max\Big(\frac{4}{l^{k_1}_1}, \frac{3}{l^{k_2}_2}, \frac{2}{l^{k_3}_3}\Big),\ \max_{k=4,5,6}\ \min_{i=1,2,3} \frac{l^k_i}{l^{k_i}_i}\Big) = 1; \tag{29}$$

2) either $k_i = i$, $i = 1, 2, 3$, or there exists $m \in \{1, 2, 3\}$ such that $k_m \ge 4$; if $k_m \le 3$ then $k_m = m$, $m = 1, 2, 3$; (30)

3) if $k_m \ge 4$ then $l^{k_m}_i > l^{k_i}_i$, $i = 1, 2, 3$. (31)

One of these subsets, $I^* = \{1, 2, 3\}$, is known. Every other subset should contain at least one vector with index greater than 3. It follows from (30) that $k_1 \ne 2, 3$. Since $l^1_1 / l^4_1 > 1$, condition (29) does not hold if $k_1 = 4$. Hence there are no subsets $I'$ with the required properties which contain $l^4$ as the first vector. The same argument shows that $l^5$ also cannot be the first vector of any subset $I'$. Assume now that $k_1 = 6$. Since $l^6_1 = 8 > l^k_1$ for all $k$, condition (31) cannot hold. Thus $k_1 = 1$ is the unique possibility, and we do not need to sort subsets $\{l^{k_1}, l^{k_2}, l^{k_3}\}$ with $k_1 \ne 1$.


The algorithm for the search for a global minimizer of the auxiliary function, based on Proposition 5.5, consists of two parts. First, we sort the subsets consisting of $n$ different elements of the given set $\{l^1, \ldots, l^j\}$, excluding those which cannot provide the global minimum; we also include in the algorithm the renewal procedure described in Remark 4.3. All this together allows us to significantly reduce the computation time. Second, we choose the subset with the least value of the function $h$. We now describe an algorithm for sorting all subsets of the set $\{l^1, \ldots, l^j\}$. Let $\varepsilon > 0$ be a given tolerance. It should be noted that the choice of the number $\varepsilon > 0$ depends on the concrete problem. For a broad class of problems a suitable tolerance is $\varepsilon = 10^{-4}$ to $10^{-7}$.

Algorithm

Step 0. Set $m = 1$ and $i = 1$.
Step 1. Take a vector $l^i$.
Step 2. If $l^i_m \le \varepsilon$ or $l^m_m / l^i_m > 1$, then set $i = i + 1$ and go to Step 1. Otherwise set $k_m = i$.
Step 3. Set $m = m + 1$, $i = 1$. If $m > n$ then stop.
Step 4. Take a vector $l^i$. If $l^i_{m-1} > l^{k_{m-1}}_{m-1}$, $l^i_m > \varepsilon$, $l^i_m \ge l^m_m$ and $l^i_m < l^{k_m}_m$, then set $k_m = i$ and go to Step 3.
Step 5. Otherwise set $i = i + 1$. If $i \le j$ then go to Step 4, otherwise stop.

7 Results of numerical experiments

A number of numerical experiments have been carried out in order to verify the practical efficiency of the suggested algorithm. We consider 5 different classes of problems. Four of them are problems with IPH objective functions; we also consider a problem with a special kind of Lipschitz function. The codes have been written in Microsoft Fortran 90, and the numerical experiments have been carried out on an IBM PC (Pentium-S CPU, 150 MHz). We present results of numerical experiments for some concrete typical problems from each of these classes. To describe these results we use the following notation:

- $f = f(x)$ is the objective function;
- $n$ is the number of variables;
- $k$ is the number of iterations;
- $t$ is the computation time (in seconds).

The estimate $\varepsilon(x^k)$ of the precision $\varepsilon_r(x^k)$ of the current point $x^k$ is defined as follows:
$$\varepsilon(x^k) = \min\Big(f(x^k) - \lambda_k,\ \frac{f(x^k) - \lambda_k}{\lambda_k}\Big)$$
where $\lambda_k$ is the lower estimate of the global minimum defined by (9). See Remark 4.1 for a discussion of this estimate. Results of numerical experiments with the precision estimate $\varepsilon = 10^{-2}$ are presented below. To carry out the numerical experiments we use the following problems.

Problem 1
$$f_1(x) = \max\{a_i x_i : i = 1, 2, \ldots, n\} + \min\{b_j x_j : j = 1, 2, \ldots, n\}, \quad x \in \mathbb{R}^n,$$
$$a_i = 2 + 0.5\,i,\ i = 1, 2, \ldots, n; \qquad b_j = (j + 2)(n - j + 2),\ j = 1, 2, \ldots, n.$$
Results of numerical experiments are given in Table 1.

Table 1

n     k     t
5     80    2.5
10    47    25.8
20    32    52.5
30    26    167.0
50    21    470.2
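As reconstructed above, Problem 1 is straightforward to code; this sketch (ours) can be fed to any of the earlier snippets, e.g. the `cutting_angle` sketch of Section 4:

```python
import numpy as np

def f1(x):
    """Problem 1 objective: max_i a_i x_i + min_j b_j x_j with
    a_i = 2 + 0.5 i and b_j = (j + 2)(n - j + 2), i, j = 1, ..., n."""
    n = len(x)
    i = np.arange(1, n + 1)
    a = 2.0 + 0.5 * i
    b = (i + 2.0) * (n - i + 2.0)
    return np.max(a * x) + np.min(b * x)
```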

Problem 2
$$f_2(x) = \max\{[a^i, x] : i = 1, 2, \ldots, 40\} + \min\{[b^j, x] : j = 1, 2, \ldots, 20\}, \quad x \in \mathbb{R}^n,$$
$$a^i_k = \frac{20\,i}{k(1 + |i - k|)},\ k = 1, 2, \ldots, n,\ i = 1, 2, \ldots, 40; \qquad b^j_k = 5\,|\sin(j)\sin(k)|,\ k = 1, 2, \ldots, n,\ j = 1, 2, \ldots, 20.$$

Results of numerical experiments are given in Table 2.

Table 2

n     k     t
5     18    1.4
10    35    123.2
20    35    1477.2
30    30    3565.4
50    26    9205.1

Problem 3

$$f_3(x) = \max_{1 \le i \le 20}\ \min_{1 \le j \le n}\ [a^{ij}, x], \quad x \in \mathbb{R}^n,$$
$$a^{ij}_k = \frac{10\,j}{k(1 + |k - j|)}\,|\cos(i - 1)|, \quad i = 1, 2, \ldots, 20;\ j = 1, 2, \ldots, n;\ k = 1, 2, \ldots, n.$$
Results of numerical experiments are given in Table 3.

Table 3

n     k     t
5     108   7.6
10    107   927.7
15    167   3881.9

Problem 4
$$f_4(x) = \Big(\sum_{i=1}^n \sum_{j=1}^n a_{ij} x_i x_j\Big)^{1/2},$$
where
$$a_{ij} = \begin{cases} 12 + n/i & \text{if } i = j, \\ 0 & \text{if } i = j + 1, \\ 0 & \text{if } j = i + 2, \\ 15/(i + 0.1\,j) & \text{otherwise.} \end{cases}$$

Results of numerical experiments are given in Table 4.

Table 4

n     k     t
5     20    2.9
10    44    136.8
20    49    2843.0
30    34    7383.7
50    22    9304.3

Problem 5
$$f_5(x) = \sum_{i=1}^n \min\{0,\ 15\|x - a^i\|^2 - b_i\}, \quad x \in \mathbb{R}^n,$$
where $a^i$, $i = 1, 2, \ldots, n$, are $n$-vectors with coordinates
$$a^i_j = \begin{cases} (n + 1)/(2n) & \text{if } j = i, \\ 1/(2n) & \text{if } j \ne i, \end{cases}$$
and $b = (b_1, b_2, \ldots, b_n) \in \mathbb{R}^n$, $b_1 = 4$, $b_i = b_{i-1} - 2/(n - 1)$, $n > 1$. Results of numerical experiments are given in Table 5.

Table 5

n     k     t
5     29    2.7
10    21    23.7
15    34    234.4

Comments. Problem 3 is a linear maxmin problem (see [5] for a discussion of problems of this kind). Problems 1 and 2 are special cases of Problem 3. Problem 4 is equivalent to the minimization of the quadratic function $f(x) = \sum_{i=1}^n \sum_{j=1}^n a_{ij} x_i x_j$. In contrast with the previous problems, the objective function $f_5$ of Problem 5 is not IPH: it is a nonsmooth Lipschitz function with $n$ local minima. The function $f_5$ attains its global minimum over the unit simplex at the point $a^1$. The version of the cutting angle method described in Remark 4.2 was applied for the solution of this problem: we consider the function $g(x) = f_5(x) + M$ with $M = 10$ in order to apply our algorithm. Experiments show that values $M' > 10$ do not improve the convergence. Since the value of the global minimum of $f_5$ is known, we can compare, for Problem 5, the number of iterations needed to reach an approximate solution with the precision $\varepsilon_r(x^k)$ and with its estimate $\varepsilon(x^k)$. Experiments show that almost half the number of iterations indicated in Table 5 suffices to obtain an approximate solution with the prescribed precision $\varepsilon_r(x^k)$.

8 Conclusion

In this paper we have proposed a method for the global minimization of increasing positively homogeneous functions over the unit simplex, which is a version of the cutting angle method. The most difficult part of this method is solving the auxiliary subproblem: the objective function of the subproblem has many local minima, so finding its global minimizer is a difficult problem. We have studied some properties of this function and suggested a special algorithm, based on these properties, for solving the subproblem. A number of numerical experiments have been carried out; in them, IPH functions represented as maxmins of linear functions and as the square root of a quadratic function were considered. The results show that the suggested algorithm is effective for finding approximate global minimizers of a broad class of IPH functions over the unit simplex with dimension $n \le 50$. The algorithm can also serve for the minimization of some Lipschitz functions.

Acknowledgements

The authors are very thankful to Y.G. Evtushenko and V.G. Zhadan for very useful discussions. The authors also thank H. Mays for her help in the preparation of the final version of this paper.

References

[1] M.Yu. Andramonov, A.M. Rubinov and B.M. Glover, Cutting angle method for minimizing increasing convex-along-rays functions, Research Report 97/7, SITMS, University of Ballarat, 1997.
[2] M.Yu. Andramonov, A.M. Rubinov and B.M. Glover, Cutting angle methods in global optimization, Applied Mathematics Letters 12 (1999), 95-100.
[3] S.G. Bartels, L. Kuntz and S. Scholtes, Continuous selections of linear functions and nonsmooth critical point theory, Nonlinear Analysis, TMA 24 (1995), 385-407.
[4] V.F. Demyanov and A.M. Rubinov, Constructive Nonsmooth Analysis, Peter Lang, Frankfurt am Main, 1995.
[5] D.-Z. Du and P.M. Pardalos (eds), Minimax and Applications, Kluwer Academic Publishers, Dordrecht, 1995.
[6] D. Pallaschke and S. Rolewicz, Foundations of Mathematical Optimization (Convex Analysis without Linearity), Kluwer Academic Publishers, Dordrecht, 1997.
[7] A.M. Rubinov and M.Yu. Andramonov, Minimizing increasing star-shaped functions based on abstract convexity, Journal of Global Optimization, to appear.
[8] A. Rubinov and M. Andramonov, Lipschitz programming via increasing convex-along-rays functions, Optimization Methods and Software, to appear.
