Relaxed Cutting Plane Method for Solving Linear Semi-Infinite Programming Problems^1

S.-Y. Wu^2, S.-C. Fang^3, and C.-J. Lin^4

Communicated by Paul Tseng

1 This research was partially supported by the North Carolina Supercomputing Center, a Cray Research Grant, and a National Textile Center Research Grant.

2 Professor, National Cheng Kung University, Department of Mathematics, Tainan, Taiwan, R.O.C.

3 Walter Clark Professor, North Carolina State University, Industrial Engineering and Operations Research, Raleigh, North Carolina.

4 Graduate Student, University of Michigan, Industrial and Operations Engineering, Ann Arbor, Michigan.

Abstract. One of the major computational tasks in using the traditional cutting plane approach to solve linear semi-infinite programming problems lies in finding a global optimizer of a nonlinear and nonconvex program. This paper generalizes Gustafson and Kortanek's scheme to relax this requirement. In each iteration, the proposed method chooses a point at which the infinite constraints are violated to a degree, rather than a point at which the violation is maximized. A convergence proof of the proposed scheme is provided. Some computational results are included. An explicit algorithm which allows unnecessary constraints to be dropped in each iteration is also introduced, to reduce the size of the computed programs.

Key Words: Linear semi-infinite programming, cutting plane method.


1 Introduction

Consider the following linear semi-infinite programming problem:

(LSIP)
$$
\begin{aligned}
\min\ & \sum_{j=1}^{n} c_j x_j \\
\text{s.t.}\ & \sum_{j=1}^{n} x_j f_j(t) \ge g(t), \quad \forall t \in T, \qquad (1) \\
& x_j \ge 0, \quad j = 1, \ldots, n, \qquad (2)
\end{aligned}
$$

where $T$ is a compact metric space with infinite cardinality, and $f_j$, $j = 1, \ldots, n$, and $g$ are real-valued continuous functions defined on $T$. Note that $T$ can be extended to a compact Hausdorff space (Refs. 1 and 2) without much difficulty. The dual problem can be formulated in the following form:

(DLSIP)
$$
\begin{aligned}
\max\ & \int_T g(t)\, d\mu(t) \\
\text{s.t.}\ & \int_T f_j(t)\, d\mu(t) \le c_j, \quad j = 1, \ldots, n, \\
& \mu \in M^+(T),
\end{aligned}
$$

where $M^+(T)$ is the space of all nonnegative bounded regular Borel measures on $T$. Under some regularity conditions, it can be shown (Refs. 1 and 2) that there is no duality gap between (LSIP) and (DLSIP), and that the latter achieves its optimum at an "extreme point" of its feasible domain. For applications of linear semi-infinite programming, see Refs. 3 and 4.

Many papers (Refs. 1, 3, 5, and 6) have dealt with solution methods for (LSIP). According to a recent review article (Ref. 7), the so-called "cutting plane method," or "implicit exchange method," is one of the key solution techniques. Basically, this approach finds a sequence of optimal solutions of corresponding regular linear programs in a systematic way and shows that the sequence converges to an optimal solution of (LSIP). More precisely, in the $k$th iteration, let $T_k = \{t_1, t_2, \ldots, t_k\} \subset T$ and consider the following linear program:

(LP_k)
$$
\begin{aligned}
\min\ & \sum_{j=1}^{n} c_j x_j \qquad (3) \\
\text{s.t.}\ & \sum_{j=1}^{n} x_j f_j(t_i) \ge g(t_i), \quad i = 1, 2, \ldots, k, \qquad (4) \\
& x_j \ge 0, \quad j = 1, 2, \ldots, n. \qquad (5)
\end{aligned}
$$

First, we solve (LP_k) for an optimal solution $x^k = (x_1^k, x_2^k, \ldots, x_n^k)^T$. Then we define

$$\sigma_k(t) = \sum_{j=1}^{n} f_j(t) x_j^k - g(t). \qquad (7)$$

Second, we find an optimizer

$$t_{k+1} \in \arg\min_{t \in T} \sigma_k(t). \qquad (8)$$

If $\sigma_k(t_{k+1}) = \sum_{j=1}^{n} f_j(t_{k+1}) x_j^k - g(t_{k+1}) \ge 0$, then $x^k$ must be an optimal solution of (LSIP), because the feasible domain of (LP_k) contains that of (LSIP). Otherwise, we let $T_{k+1} = T_k \cup \{t_{k+1}\}$ to construct (LP_{k+1}) and continue the iterative process. The convergence proof of $\{x^1, x^2, \ldots, x^k, \ldots\}$ to an optimal solution of (LSIP) can be found in Ref. 3. A refined version, which allows us to drop some redundant points in $T_k$ in order to reduce the size of (LP_k), can be found in Ref. 2.

In the above approach, one constraint (corresponding to one cut) is added at a time, and the major computational work in each iteration involves (i) solving a linear program (LP_k) and (ii) finding a global minimizer $t_{k+1}$ of $\sigma_k(t)$. To reduce the computational requirement for solving (LP_k), an "inexact approach" was proposed earlier (Refs. 8 and 9). However, when the dimensionality of the compact metric space $T$ becomes high, finding a minimizer of a continuous function $\sigma_k(t)$ over $T$ can be extremely time-consuming, in particular when $f_j(t)$ and $g(t)$ are highly nonlinear and nonconvex. In this case, the computational bottleneck of the cutting plane approach lies in finding a global minimizer $t_{k+1}$ of $\sigma_k(t)$.

Ideas of relaxing the requirement of finding the global minimizer in different settings can be found in Refs. 10-13. In these approaches, at each iteration, instead of finding the global minimizer $t_{k+1}$ of $\sigma_k(t)$, one either has to check that $\sigma_k(t) \ge 0$, $\forall t \in T$, in order to add a constraint (Ref. 10), or has to find the value of $\phi(x^k) = \min_{t \in T} \sigma_k(t)$ in order to find a new cut at $t_{k+1}$ such that $\sigma_k(t_{k+1}) < 0$ and $\sigma_k(t_{k+1}) \le \phi(x^k) + \eta_k$, where $\{\eta_k\}$ is a given sequence of nonnegative numbers converging to zero as $k$ increases to infinity (Refs. 12 and 13). Even so, the required computation can still be a bottleneck.

In this paper, we further relax Gustafson and Kortanek's scheme such that a new cut is found at any $t_{k+1} \in T$ with $\sigma_k(t_{k+1}) < -\epsilon$, where $\epsilon > 0$ is a sufficiently small number which can be prescribed. In this way, we avoid the task of finding the global minimizer $t_{k+1}$ and/or checking the minimum value $\phi(x^k)$ in every iteration. After introducing the relaxed scheme in Section 2, we show that, under appropriate conditions, the proposed scheme terminates in finitely many iterations and generates an approximate solution with the desired accuracy. Some computational experiments and analysis are included in Section 3. Based on the relaxed scheme, an explicit algorithm which allows unnecessary constraints to be dropped in each iteration is developed in Section 4, while some concluding remarks are made in Section 5.
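To fix ideas before the relaxation is introduced, the traditional iteration (3)-(8) can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' code: solve_lp, global_minimize, and sample_point are hypothetical placeholders for a finite LP solver, a global optimizer over $T$, and an initial-point selector.

```python
# Minimal sketch of the traditional cutting plane iteration (3)-(8).
# `solve_lp`, `global_minimize`, and `sample_point` are hypothetical
# placeholders (not from the paper): a finite LP solver, a global
# optimizer over T, and an initial-point selector, respectively.

def traditional_cutting_plane(c, f, g, T, solve_lp, global_minimize,
                              sample_point, tol=1e-7):
    """c: cost vector; f(t): the vector (f_1(t), ..., f_n(t)); g(t): scalar."""
    points = [sample_point(T)]                 # T_1 = {t_1}
    while True:
        # (LP_k): min c^T x  s.t.  sum_j x_j f_j(t_i) >= g(t_i), x >= 0
        x = solve_lp(c, [f(t) for t in points], [g(t) for t in points])

        def sigma(t):                          # sigma_k(t) as in (7)
            return sum(fj * xj for fj, xj in zip(f(t), x)) - g(t)

        t_next = global_minimize(sigma, T)     # step (8): the expensive part
        if sigma(t_next) >= -tol:              # no violated constraint remains
            return x                           # x solves (LSIP) up to tol
        points.append(t_next)                  # T_{k+1} = T_k + {t_{k+1}}
```

The call to global_minimize is exactly the bottleneck discussed above; the relaxed scheme of Section 2 replaces it with a much weaker requirement.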

2 Relaxed Approach

Given that $\epsilon > 0$ is a prescribed small number, a general scheme is proposed as follows:

Step 1. Set $k \leftarrow 1$, choose any $t_1 \in T$, and set $T_1 = \{t_1\}$.

Step 2. Solve (LP_k) for an optimal solution $x^k = (x_1^k, \ldots, x_n^k)^T$. Define $\sigma_k(t)$ according to (7).

Step 3. Find any $t_{k+1} \in T$ such that $\sigma_k(t_{k+1}) < -\epsilon$. If no such $t_{k+1}$ exists, stop and output $x^k$ as the solution. Otherwise, set $T_{k+1} = T_k \cup \{t_{k+1}\}$.

Step 4. Update $k \leftarrow k + 1$ and go to Step 2.
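The scheme can be made concrete with an off-the-shelf LP solver. The sketch below assumes $T = [0, 1]$ and searches a fixed grid for an $\epsilon$-violating point in Step 3; SciPy stands in for the MATLAB routines of Section 3, so this is an illustration of the scheme, not the authors' implementation.

```python
# A runnable sketch of the relaxed scheme (Steps 1-4), assuming T = [0, 1].
# SciPy replaces the MATLAB routines of Section 3; the grid search in
# Step 3 is one simple way to find an eps-violating point.
import numpy as np
from scipy.optimize import linprog

def relaxed_cutting_plane(c, f, g, eps=1e-4, grid=1001, max_iter=100,
                          bounds=(0, None), ts0=(0.5,)):
    """f(t) returns the vector (f_1(t), ..., f_n(t)); g(t) is a scalar."""
    ts = list(ts0)                              # Step 1: T_1
    cand = np.linspace(0.0, 1.0, grid)          # candidate points for Step 3
    F = np.array([f(t) for t in cand])          # precomputed f_j values
    G = np.array([g(t) for t in cand])
    for _ in range(max_iter):
        A = np.array([f(t) for t in ts])
        b = np.array([g(t) for t in ts])
        # Step 2, (LP_k): min c^T x  s.t.  A x >= b  (passed as -A x <= -b).
        res = linprog(c, A_ub=-A, b_ub=-b, bounds=[bounds] * len(c),
                      method="highs")
        if res.status != 0:                     # infeasible/unbounded (LP_k);
            raise RuntimeError(res.message)     # cf. the remarks in Section 3
        x = res.x
        sigma = F @ x - G                       # sigma_k on the grid, per (7)
        j = int(np.argmin(sigma))
        if sigma[j] >= -eps:                    # Step 3: no eps-violator found
            return x, ts                        # output x^k
        ts.append(float(cand[j]))               # T_{k+1} = T_k + {t_{k+1}}
    raise RuntimeError("no convergence within max_iter iterations")
```

Any grid point with $\sigma_k(t) < -\epsilon$ would do in Step 3; taking the grid minimizer is merely convenient.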

Note that if (LP_k) is found to be infeasible in Step 2, then (LSIP) is infeasible; in this case, there is no reason to continue the iterations. Therefore, without loss of generality, we may assume throughout this paper that (LP_k) is feasible. Also note that $t_{k+1} \notin T_k$, and that if the scheme terminates in Step 3, then the output solution $x^k$ indeed solves (LSIP) when $\epsilon$ is small enough (up to machine accuracy, say $10^{-7}$). More on the effects of the size of $\epsilon$ will be discussed later. Moreover, the linear dual of (LP_k) can be formulated as the following problem:

(DLP_k)
$$
\begin{aligned}
\max\ & \sum_{i=1}^{k} g(t_i) y_i \\
\text{s.t.}\ & \sum_{i=1}^{k} f_j(t_i) y_i \le c_j, \quad j = 1, 2, \ldots, n, \qquad (9) \\
& y_i \ge 0, \quad i = 1, 2, \ldots, k. \qquad (10)
\end{aligned}
$$

For ease of description of the proposed approach, we assume that (LP_k) is solvable with an optimal value denoted by $V(LP_k)$, and that (DLP_k) is also solvable with an optimal value denoted by $V(DLP_k)$. Remember that $x^k = (x_1^k, \ldots, x_n^k)^T$ solves (LP_k). Let $B_k = \{j_1^k, \ldots, j_{p_k}^k\}$ be an index set such that

$$x_j^k > 0 \quad \text{if and only if} \quad j \in B_k. \qquad (11)$$

Then we have the following result.

Theorem 2.1. If (DLP_{k+1}) is nondegenerate, then $V(LP_{k+1}) > V(LP_k)$.

Proof. Define a $k \times n$ matrix $A$ with $a_{ij} = f_j(t_i)$ as its $(i,j)$th element, for $i = 1, \ldots, k$ and $j = 1, \ldots, n$. Denoting $b = (g(t_1), \ldots, g(t_k))^T$, we can rewrite (LP_k) in the following form:

(LP_k)
$$
\min \sum_{j=1}^{n} c_j x_j
\quad \text{s.t.} \quad
(A \mid -I_{k \times k}) \begin{pmatrix} x \\ x' \end{pmatrix} = b,
\qquad x, x' \ge 0.
$$

Let $\bar{x}^k = \begin{pmatrix} x^k \\ x'^k \end{pmatrix}$ be an optimal solution, with basis $B$. Its corresponding optimal basic variables are $\bar{x}^k_B = B^{-1} b \ge 0$. From the choice of $t_{k+1}$ in Step 3 of the proposed scheme, we have

$$
\sigma_k(t_{k+1}) = \sum_{j=1}^{n} f_j(t_{k+1}) x_j^k - g(t_{k+1})
= \sum_{j \in B_k} f_j(t_{k+1}) x_j^k - g(t_{k+1}) < -\epsilon. \qquad (12)
$$

If we let $f = (f_1(t_{k+1}), \ldots, f_n(t_{k+1}))$, then (LP_{k+1}) becomes

(LP_{k+1})
$$
\min \sum_{j=1}^{n} c_j x_j
\quad \text{s.t.} \quad
\left( \begin{matrix} A \\ f \end{matrix} \;\middle|\; -I_{(k+1)\times(k+1)} \right)
\begin{pmatrix} x \\ x'' \end{pmatrix}
= \begin{pmatrix} b \\ g(t_{k+1}) \end{pmatrix},
\qquad x, x'' \ge 0.
$$

Denoting $c = (c_1, \ldots, c_n)^T$, we have the linear dual program

(DLP_{k+1})
$$
\max \sum_{i=1}^{k+1} g(t_i) y_i
\quad \text{s.t.} \quad
\left( \begin{matrix} A \\ f \end{matrix} \;\middle|\; -I_{(k+1)\times(k+1)} \right)^{\!T} y
\le \begin{pmatrix} c \\ 0_{(k+1)\times 1} \end{pmatrix}.
$$

Moreover, we let $f' = (f_1(t_{k+1}), \ldots, f_n(t_{k+1}), 0, \ldots, 0) \in R^{n+k}$, and we let $f'_B = (f_{j_1^k}(t_{k+1}), \ldots, f_{j_{p_k}^k}(t_{k+1}), 0, \ldots, 0)$ consist of the components of $f'$ corresponding to $B$. In this way, if

$$
\left( \begin{array}{c|c} B & 0_{k \times 1} \\ \hline f'_B & -1 \end{array} \right)
\begin{pmatrix} \bar{x}^k \\ \bar{x} \end{pmatrix}
= \begin{pmatrix} b \\ g(t_{k+1}) \end{pmatrix}, \qquad (13)
$$

then, by (12),

$$
\bar{x} = \sum_{j \in B_k} f_j(t_{k+1}) x_j^k - g(t_{k+1}) = \sigma_k(t_{k+1}) < -\epsilon. \qquad (14)
$$

Moreover,

$$
\begin{pmatrix} \bar{x}^k \\ \bar{x} \end{pmatrix}
= \left( \begin{array}{c|c} B & 0_{k \times 1} \\ \hline f'_B & -1 \end{array} \right)^{-1}
\begin{pmatrix} b \\ g(t_{k+1}) \end{pmatrix}. \qquad (15)
$$

Let $\bar{y} = (y_1, \ldots, y_k)^T = (B^{-1})^T c_B$; the theory of linear programming (Refs. 14 and 15) implies that $\bar{y}$ is an optimal solution of (DLP_k). Note that

$$
\left( \left( \begin{array}{c|c} B & 0_{k \times 1} \\ \hline f'_B & -1 \end{array} \right)^{-1} \right)^{\!T}
\begin{pmatrix} c_B \\ 0 \end{pmatrix}
= \begin{pmatrix} \bar{y} \\ 0 \end{pmatrix}.
$$

Hence we know that

$$
\left( \begin{matrix} A \\ f \end{matrix} \;\middle|\; -I_{(k+1)\times(k+1)} \right)^{\!T}
\begin{pmatrix} \bar{y} \\ 0 \end{pmatrix}
\le \begin{pmatrix} c \\ 0_{(k+1)\times 1} \end{pmatrix}. \qquad (16)
$$

In other words, $\begin{pmatrix} \bar{y} \\ 0 \end{pmatrix}$ is a feasible solution of (DLP_{k+1}), with a corresponding basic solution $\begin{pmatrix} \bar{x}^k_B \\ \bar{x} \end{pmatrix}$ for (LP_{k+1}). Now, since $\sum_{i=1}^{k} g(t_i) \bar{y}_i + g(t_{k+1}) \cdot 0 = V(DLP_k)$, (DLP_{k+1}) is nondegenerate, and $\bar{x} < -\epsilon < 0$, the basic property of the dual simplex method (Ref. 15, p. 97) guarantees that

$$V(LP_{k+1}) = V(DLP_{k+1}) > V(DLP_k) = V(LP_k). \qquad (17)$$

□

The incremental amount in Theorem 2.1 can be further analyzed. Let $y^k = (y_1^k, \ldots, y_k^k)^T$ be an optimal solution of (DLP_k). We define a discrete measure $\lambda_k$ on $T$ such that

$$
\lambda_k(t) = \begin{cases} y_i^k, & \text{if } t = t_i \in T_k, \\ 0, & \text{if } t \in T \setminus T_k. \end{cases} \qquad (18)
$$

In this way, $\lambda_k(t) \ge 0$, $\forall t \in T$. Furthermore, let

$$T'_k = \{ t \in T_k \mid \lambda_k(t) > 0 \} \qquad (19)$$

and let $B'_k = \{ i_1^k, \ldots, i_{m_k}^k \}$ be an index set such that

$$\lambda_k(t_i) > 0 \quad \text{if and only if} \quad i \in B'_k. \qquad (20)$$

In other words,

$$y_i^k > 0 \quad \text{if and only if} \quad i \in B'_k. \qquad (21)$$

We claim that

$$t_{k+1} \in T'_{k+1}. \qquad (22)$$

Note that if $t_{k+1} \notin T'_{k+1}$, then $\lambda_{k+1}(t_{k+1}) = 0$. In this case, the measure $\lambda_{k+1}$ achieves nonzero values only at points in $T_k$; hence we have $V(DLP_{k+1}) = V(DLP_k)$, which contradicts Theorem 2.1. Therefore, without loss of generality, we can rearrange $t_{k+1}$ to be the last element of $T'_{k+1}$.

We define an $n \times m_k$ matrix $H_k$ with its $j$th row vector being

$$(f_j(t_{i_1^k}), \ldots, f_j(t_{i_{m_k}^k})), \quad \text{for } j = 1, \ldots, n. \qquad (23)$$

Since $\lambda_k(t_i) > 0$, $\forall i \in B'_k$, from the "complementary slackness theorem" of linear programming (Refs. 14 and 15), we have

$$H_k^T x^k = g_k, \qquad (24)$$

where

$$g_k = (g(t_{i_1^k}), \ldots, g(t_{i_{m_k}^k}))^T. \qquad (25)$$

Let $M_k$ be a $p_k \times m_k$ matrix with its $j$th row vector being

$$(f_j(t_{i_1^k}), \ldots, f_j(t_{i_{m_k}^k})), \quad \text{for } j \in B_k. \qquad (26)$$

Remember that $t_{k+1}$ is the last element of $T'_{k+1}$. Now define an $n \times (m_k + 1)$ matrix $H_k^+$ with its $j$th row vector being

$$(f_j(t_{i_1^k}), \ldots, f_j(t_{i_{m_k}^k}), f_j(t_{k+1})), \quad \text{for } j = 1, \ldots, n. \qquad (27)$$

For $\bar{g}_{k+1} = (H_k^+)^T x^{k+1}$, from the constraints satisfied by $x^{k+1}$, we know that

$$(\bar{g}_{k+1})^T \ge ((g_k)^T, g(t_{k+1})) \equiv (\hat{g}_{k+1})^T. \qquad (28)$$

We define

$$s_k = \bar{g}_{k+1} - \hat{g}_{k+1}. \qquad (29)$$

Then $s_k \ge 0$. Moreover, the last element of $s_k$ must be zero; otherwise, we would have $\lambda_{k+1}(t_{k+1}) = 0$, which contradicts the fact that $\lambda_{k+1}(t_{k+1}) > 0$.

Remembering the definition of $\lambda_k(t)$, on the dual side we define

$$\beta_j^k = \sum_{i \in B'_k} f_j(t_i) \lambda_k(t_i) - c_j, \quad \text{for } j = 1, \ldots, n. \qquad (30)$$

Note that $\beta_j^k \le 0$, $\forall j, k$, and we have the following result.

Theorem 2.2. If (DLP_{k+1}) is nondegenerate, then

$$
V(LP_{k+1}) - V(LP_k)
= \sum_{j=1}^{n} \beta_j^{k+1} x_j^k - \sum_{i \in B'_{k+1}} \sigma_k(t_i) \lambda_{k+1}(t_i)
= \sum_{j=1}^{m_k} s_j^k \lambda_k(t_{i_j^k}) - \sum_{j=1}^{n} \beta_j^k x_j^{k+1}.
$$

Proof. By the definition of $\lambda_{k+1}(t)$, we have

$$
\begin{aligned}
\sum_{i \in B'_{k+1}} \sigma_k(t_i) \lambda_{k+1}(t_i)
&= \sum_{i \in B'_{k+1}} \Big( \sum_{j=1}^{n} f_j(t_i) x_j^k - g(t_i) \Big) \lambda_{k+1}(t_i) \\
&= \sum_{i \in B'_{k+1}} \sum_{j=1}^{n} f_j(t_i) x_j^k \lambda_{k+1}(t_i) - \sum_{i \in B'_{k+1}} g(t_i) \lambda_{k+1}(t_i) \\
&= \sum_{j=1}^{n} \Big( \sum_{i \in B'_{k+1}} f_j(t_i) \lambda_{k+1}(t_i) \Big) x_j^k - V(LP_{k+1}) \\
&= \sum_{j=1}^{n} (\beta_j^{k+1} + c_j) x_j^k - V(LP_{k+1}) \\
&= \sum_{j=1}^{n} \beta_j^{k+1} x_j^k + \sum_{j=1}^{n} c_j x_j^k - V(LP_{k+1}) \\
&= \sum_{j=1}^{n} \beta_j^{k+1} x_j^k + V(LP_k) - V(LP_{k+1}).
\end{aligned}
$$

Therefore,

$$V(LP_{k+1}) - V(LP_k) = \sum_{j=1}^{n} \beta_j^{k+1} x_j^k - \sum_{i \in B'_{k+1}} \sigma_k(t_i) \lambda_{k+1}(t_i).$$

On the other hand, by the definition of $\beta_j^k$, we have

$$
\begin{aligned}
\sum_{j=1}^{n} \beta_j^k x_j^{k+1}
&= \sum_{j=1}^{n} \Big( \sum_{i \in B'_k} f_j(t_i) \lambda_k(t_i) - c_j \Big) x_j^{k+1} \\
&= \sum_{j=1}^{n} \sum_{i \in B'_k} f_j(t_i) \lambda_k(t_i) x_j^{k+1} - \sum_{j=1}^{n} c_j x_j^{k+1} \\
&= \sum_{i \in B'_k} \Big( \sum_{j=1}^{n} f_j(t_i) x_j^{k+1} \Big) \lambda_k(t_i) - V(LP_{k+1}) \\
&= \sum_{j=1}^{m_k} \big( s_j^k + g(t_{i_j^k}) \big) \lambda_k(t_{i_j^k}) - V(LP_{k+1}) \\
&= \sum_{j=1}^{m_k} s_j^k \lambda_k(t_{i_j^k}) + \sum_{i \in B'_k} g(t_i) \lambda_k(t_i) - V(LP_{k+1}) \\
&= \sum_{j=1}^{m_k} s_j^k \lambda_k(t_{i_j^k}) + V(LP_k) - V(LP_{k+1}).
\end{aligned}
$$

Hence we have

$$V(LP_{k+1}) - V(LP_k) = \sum_{j=1}^{m_k} s_j^k \lambda_k(t_{i_j^k}) - \sum_{j=1}^{n} \beta_j^k x_j^{k+1}.$$

□

Theorem 2.2 is a fundamental result; with it, we show that, under proper conditions and for any given $\epsilon > 0$, the proposed scheme terminates in a finite number of iterations.

Theorem 2.3. Given any $\epsilon > 0$, assume that, in each iteration,

(A1) (LP_k) has a bounded feasible domain;
(A2) (DLP_k) is nondegenerate.

Moreover, assume that there exists a $\delta > 0$ such that

(A3) $\lambda_k(t_i) \ge \delta$, $\forall i \in B'_k$;
(A4) $\beta_j^k < -\delta$, $\forall j \notin B_k$;
(A5) $M_k^T$ has a square submatrix $D_k$ with rank $p_k$ ($= |B_k|$) and $|\det(D_k)| > \delta$.

Then the proposed scheme terminates in a finite number of iterations.

Proof. Suppose that the scheme does not stop in a finite number of iterations. When (DLP_k) is nondegenerate for each $k$, Theorem 2.1 implies that

$$V(LP_1) < V(LP_2) < \cdots \le V(LSIP).$$

Thus $\lim_{r \to \infty} V(LP_r) = \alpha \le V(LSIP)$. We claim this is impossible. By assumption (A1), the infinite sequence $\{x^k\}$ is confined in a compact set $C$ in $R^n$. Hence there exists a subsequence $\{x^{k_r}\}$ of $\{x^k\}$ such that $x^{k_r}$ converges to $\hat{x}$, and the subsequence $\{t_{k_r+1}\}$ converges to some point $\hat{t}$, as $r \to \infty$. Now we let

$$\hat{\sigma}(t) = \sum_{i=1}^{n} f_i(t) \hat{x}_i - g(t). \qquad (31)$$

Then $\sigma_{k_r}(t_{k_r+1})$ converges to $\hat{\sigma}(\hat{t})$. Since $\sigma_{k_r}(t_{k_r+1}) < -\epsilon$ for each $r$, we have $0 \ne \hat{\sigma}(\hat{t}) \le -\epsilon$.

Now let $\bar\epsilon \in (0, \delta)$ be an arbitrary number. We can find a large integer $N \in \{k_r\}_{r=1}^{\infty}$ such that

$$|V(LP_N) - \alpha| \le \bar\epsilon^2/2 \quad \text{and} \quad |\sigma_N(t_{N+1}) - \hat{\sigma}(\hat{t})| \le \bar\epsilon^2/2. \qquad (32)$$

By Theorem 2.2, we have

$$|V(LP_{N+1}) - V(LP_N)| = \Big| \sum_{j=1}^{m_N} s_j^N \lambda_N(t_{i_j^N}) - \sum_{j=1}^{n} \beta_j^N x_j^{N+1} \Big| \le \bar\epsilon^2. \qquad (33)$$

Remember that $\beta_j^N \le 0$ for $j = 1, 2, \ldots, n$; hence each term under the first summation sign in (33) is nonnegative, and each term under the second summation sign in (33) is nonpositive. It follows that $0 \le s_j^N \lambda_N(t_{i_j^N}) \le \bar\epsilon^2$, for $j = 1, \ldots, m_N$, and $0 \le -\beta_j^N x_j^{N+1} \le \bar\epsilon^2$, for $j = 1, \ldots, n$. By assumption (A3), we have $\lambda_N(t_i) \ge \delta$, $\forall i \in B'_N$. Hence, since $\bar\epsilon < \delta$,

$$s_j^N \le \bar\epsilon^2/\delta \le \bar\epsilon, \quad \text{for } j = 1, \ldots, m_N. \qquad (34)$$

By assumption (A4), we have

$$\beta_j^N < -\delta, \quad \text{if } j \notin B_N. \qquad (35)$$

It follows from (33) that

$$x_j^{N+1} \le \bar\epsilon^2/\delta \le \bar\epsilon, \quad \text{if } j \notin B_N. \qquad (36)$$

By (29), (34), and (36), we have

$$\sum_{j \in B_N} f_j(t_i) x_j^{N+1} = g(t_i) + O_i(\bar\epsilon), \quad \text{for } i \in B'_N, \qquad (37)$$

where $O_i(\bar\epsilon) \to 0$ as $\bar\epsilon \to 0$. By (24), we have

$$\sum_{j \in B_N} f_j(t_i) x_j^N = g(t_i), \quad \text{for } i \in B'_N. \qquad (38)$$

By the definition of $M_N$, the matrix $M_N$ has row vectors $(f_j(t_{i_1^N}), f_j(t_{i_2^N}), \ldots, f_j(t_{i_{m_N}^N}))$, for $j \in B_N$. Let $\tilde{x} = (x_j^N)_{j \in B_N}$ and $\tilde{x}' = (x_j^{N+1})_{j \in B_N}$; then (37) and (38) can be expressed as

$$M_N^T \tilde{x}' = (g(t_{i_1^N}) + O_1(\bar\epsilon), \ldots, g(t_{i_{m_N}^N}) + O_{m_N}(\bar\epsilon))^T$$

and

$$M_N^T \tilde{x} = (g(t_{i_1^N}), \ldots, g(t_{i_{m_N}^N}))^T.$$

It follows from assumption (A5) that

$$|x_j^N - x_j^{N+1}| < \epsilon'(\bar\epsilon), \quad \text{for } j \in B_N, \qquad (39)$$

where $\epsilon'(\bar\epsilon) \to 0$ as $\bar\epsilon \to 0$. Combining the facts that $\sigma_{N+1}(t_{N+1}) = 0$, (36), and (39), we have

$$\sigma_N(t_{N+1}) \to 0 \quad \text{as } \bar\epsilon \to 0. \qquad (40)$$

But $\sigma_N(t_{N+1}) \to \hat{\sigma}(\hat{t}) \ne 0$ as $N \to \infty$. Hence (40) cannot be true, and we have a contradiction. Therefore our claim is valid and the proof is complete.

□

Note that (A1) is commonly assumed in linear semi-infinite programming to simplify proofs; it can be relaxed by using "bounded level sets." (A2) is also a technical condition commonly used in linear programming. Moreover, when $\delta$ is chosen to be sufficiently small, (A3), (A4), and (A5) can in general be satisfied without much difficulty. The violation of any of these three assumptions leads to rare instances of degeneracy or inconsistency. In particular, the violation of (A3) results in a subsequence of (DLP_k) whose limit optimal solution is dual degenerate; a similar situation arises for the violation of (A4). Moreover, the violation of (A5) produces a subsequence of (LP_k) whose optimal basis matrices have determinant values tending to zero.

Under these conditions, Theorem 2.3 assures that the proposed scheme terminates in finitely many iterations, say $k^*$ iterations, with an optimal solution $x^{k^*} = (x_1^{k^*}, \ldots, x_n^{k^*})^T$ such that $x_j^{k^*} > 0$ for $j \in B_{k^*}$. In this case, $x^{k^*}$ satisfies $\sigma_{k^*}(t) \ge -\epsilon$ for all $t \in T$, and it can be viewed as an approximate solution of (LSIP). The next theorem tells us how good such an approximate solution can be.

Theorem 2.4. For any given $\epsilon > 0$, if there exists $\hat{x} = (\hat{x}_1, \ldots, \hat{x}_n)^T$, with $\hat{x}_j \ge -x_j^{k^*}/\epsilon$, $\forall j \in B_{k^*}$, and $\hat{x}_j \ge 0$, $\forall j \notin B_{k^*}$, such that

$$\sum_{j=1}^{n} \hat{x}_j f_j(t) \ge 1, \quad \forall t \in T, \qquad (41)$$

then

$$|V(LP_{k^*}) - V(LSIP)| \le \epsilon \Big| \sum_{j=1}^{n} c_j \hat{x}_j \Big|.$$

Proof. By the definition of $x^{k^*}$, we have

$$\sum_{j=1}^{n} f_j(t) x_j^{k^*} - g(t) \ge -\epsilon, \quad \text{for each } t \in T. \qquad (42)$$

By (41), we have

$$\sum_{j=1}^{n} f_j(t)\, \epsilon \hat{x}_j \ge \epsilon, \quad \text{for each } t \in T. \qquad (43)$$

It follows from (42) and (43) that

$$\sum_{j=1}^{n} f_j(t) (x_j^{k^*} + \epsilon \hat{x}_j) - g(t) \ge 0, \quad \text{for each } t \in T. \qquad (44)$$

By our assumption, we know that $x_j^{k^*} + \epsilon \hat{x}_j \ge 0$, for $j = 1, 2, \ldots, n$. Hence $x^{k^*} + \epsilon \hat{x}$ is a feasible solution of (LSIP). Therefore,

$$
|V(LP_{k^*}) - V(LSIP)|
\le \Big| \sum_{j=1}^{n} c_j x_j^{k^*} - \sum_{j=1}^{n} c_j (x_j^{k^*} + \epsilon \hat{x}_j) \Big|
= \epsilon \Big| \sum_{j=1}^{n} c_j \hat{x}_j \Big|.
$$

□

Note that the required condition (41) implies that (1) has an interior solution. Also, when $\epsilon > 0$ is chosen small enough, since $\{f_j(t) \mid j \in B_{k^*}\}$ are in general linearly independent, the existence of the required $\hat{x}$ in Theorem 2.4 is not a problem. Theorems 2.3 and 2.4 guarantee that the proposed scheme generates an approximate solution of (LSIP) with any level of accuracy in a finite number of iterations.
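As a hypothetical illustration of the bound (not an example from the paper), suppose the constant function happens to be among the $f_j$; the assumption $f_1(t) \equiv 1$ is ours.

```latex
% Hypothetical illustration of Theorem 2.4; the assumption f_1(t) = 1 is ours.
% If f_1(t) \equiv 1 on T, take \hat{x} = (1, 0, \ldots, 0)^T. Then
% \sum_{j=1}^{n} \hat{x}_j f_j(t) = 1 \ge 1 for all t \in T, so (41) holds,
% and \hat{x}_j \ge -x_j^{k^*}/\epsilon holds trivially since x_j^{k^*} \ge 0.
% Theorem 2.4 then gives
\[
  |V(LP_{k^*}) - V(LSIP)| \;\le\;
  \epsilon \Big| \sum_{j=1}^{n} c_j \hat{x}_j \Big| \;=\; \epsilon\, |c_1| ,
\]
% so the guaranteed optimality gap shrinks linearly with \epsilon.
```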

3 Computational Experiments

In this section, the following two commonly seen $L_1$ problems (Refs. 3 and 16) are used to illustrate the computational behavior of the proposed scheme.

Problem 3.1.
$$
\begin{aligned}
\max\ & \sum_{j=1}^{8} -w_j/j \\
\text{s.t.}\ & \sum_{j=1}^{8} -t^{j-1} w_j \le \frac{-1}{2-t}, \quad t \in [0, 1].
\end{aligned}
$$

Problem 3.2.
$$
\begin{aligned}
\max\ & \sum_{j=1}^{7} -w_j/j \\
\text{s.t.}\ & \sum_{j=1}^{7} -t^{j-1} w_j \le \sum_{j=0}^{4} t^{2j}, \quad t \in [0, 1].
\end{aligned}
$$
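In the min-form data format of (LSIP), the two problems can be encoded as follows. The sign conventions reflect our reading of the displays above (an assumption on our part), chosen so that the resulting optimal values match those reported below.

```python
# The two test problems as (LSIP) data: min c^T w  s.t.  sum_j w_j f_j(t) >= g(t).
# The sign conventions are our min-form reading of the displays above.
import numpy as np

# Problem 3.1: min sum_{j=1}^{8} w_j/j  s.t.  sum_j t^{j-1} w_j >= 1/(2 - t).
c1 = np.array([1.0 / j for j in range(1, 9)])
f1 = lambda t: np.array([t ** (j - 1) for j in range(1, 9)])
g1 = lambda t: 1.0 / (2.0 - t)

# Problem 3.2: min sum_{j=1}^{7} w_j/j
#              s.t.  sum_j t^{j-1} w_j >= -(1 + t^2 + t^4 + t^6 + t^8).
c2 = np.array([1.0 / j for j in range(1, 8)])
f2 = lambda t: np.array([t ** (j - 1) for j in range(1, 8)])
g2 = lambda t: -sum(t ** (2 * j) for j in range(5))
```

With the sketch from Section 2, one could call, e.g., relaxed_cutting_plane(c1, f1, g1, bounds=(None, None), ts0=tuple(np.linspace(0, 1, 9))); the coefficients $w_j$ are sign-free here, so the early (LP_k) can be unbounded below unless $T_1$ is seeded with several points, mirroring the remarks on unboundedness later in this section.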

It was reported in Refs. 3 and 16 that 0.6931 and -1.78688 are approximate optimal values of Problem 3.1 and Problem 3.2, respectively. We used MATLAB version 4.2c (Ref. 17) on a SUN UltraSparc 1 workstation for the experiments. Three different approaches, namely the traditional cutting plane method, the proposed method, and the well-known discretization method, were implemented.

For all three methods, the lp subroutine of the MATLAB Optimization Toolbox was used to solve the linear programs. For the discretization method, we discretized the interval [0, 1] into 101 evenly spaced points and solved a single linear program with 101 explicit constraints. For the traditional cutting plane method, the fmin subroutine of MATLAB was utilized to find the global minimizer in problem (8). For the proposed method, we set $\epsilon = 0.0001$ and recursively discretized $T$ to find $t_{k+1}$ in Step 3. In other words, at the beginning, the interval [0, 1] was discretized by 11 evenly spaced points, and each point was tested to see whether $\sigma_k(t) < -\epsilon$. If all points failed, we refined the discretization to 101 points and tested again; if this failed, we tested 1001 points; and if this failed again, the MATLAB subroutine fmin was finally employed to find $t_{k+1}$. Our numerical experiments showed that, for the proposed method, the subroutine fmin was called only in the last iteration.
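In Python, the same coarse-to-fine search could look as follows; minimize_scalar plays the role of MATLAB's fmin here, which is an assumption of ours, not the paper's code.

```python
# Sketch of the recursive-refinement search for Step 3, assuming T = [0, 1];
# scipy's minimize_scalar stands in for MATLAB's fmin.
import numpy as np
from scipy.optimize import minimize_scalar

def find_violator(sigma, eps=1e-4):
    for grid in (11, 101, 1001):                  # coarse-to-fine grids
        for t in np.linspace(0.0, 1.0, grid):
            if sigma(t) < -eps:                   # any eps-violator will do
                return t                          # found t_{k+1}
    # all grid points failed: fall back to a bounded scalar minimization
    res = minimize_scalar(sigma, bounds=(0.0, 1.0), method="bounded")
    return res.x if res.fun < -eps else None      # None signals termination
```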

In our experiments, the condition

$$\sigma_k(t_{k+1}) > -0.0001$$

was used as the stopping criterion for both the traditional and the proposed methods. Also notice that the relaxed scheme was stated in Section 2 as a theoretical algorithm, without loss of generality, under the assumption that (LP_k) is solvable with an optimal solution $x^k$. In practice, even when (LSIP) is solvable, some (LP_k) at the early stages may still be unbounded below. This problem can easily be handled by adding enough points to $T_1$ to start the first iteration; alternatively, we can simply choose $x^k$ in Step 2 to be a feasible solution of (LP_k) whose objective value is very negative. In our experiments, we adopted the latter approach. The numerical results are shown in Table 1.

Table 1: Comparison of different methods

                            Problem 3.1                 Problem 3.2
                        obj. value  k   time(sec)   obj. value    k    time(sec)
  Traditional cp method  0.693146  14     6.87      -2.2395e+11*  71     40.96
  Proposed cp method     0.693148  12     4.82      -1.786917     14      4.94
  Discretized method     0.693148   -     6.18      -1.786901      -      1.46

  *: the algorithm did not converge in 70 iterations.

In the table, obj. value is the final objective value, k indicates the number of iterations required, and time is in CPU seconds. Note that k also represents the number of linear programs solved (and the number of explicit constraints in the final one). For both problems, the proposed method generates a better approximate solution (comparable to the result obtained by the discretization method with 101 constraints) in a shorter time than the traditional cutting plane method. For the second problem, the traditional cutting plane method failed to find an approximate solution within 70 iterations; in this case, the optimizers $t_k$ generated by the traditional cutting plane method stay quite close to one another, so the traditional method fails to converge quickly. This experiment shows the potential of the proposed method.

To further understand the role $\epsilon$ plays in the proposed method, we ran the proposed method on both problems with different values of $\epsilon$. The results are shown in Table 2.

Table 2: Numerical behavior with different $\epsilon$

              Problem 3.1                 Problem 3.2
  eps       obj. value  k   time(sec)   obj. value   k   time(sec)
  0.0001     0.693148  12     4.82      -1.786917   14     4.94
  0.01       0.693148  12     4.75      -1.787018   10     2.95
  1.0        0.693148  11     4.46      -1.787018    9     2.75

Although in theory $\epsilon$ should be very small (say, $10^{-7}$ for our workstation implementation), in practice $\epsilon$ can be chosen much larger than expected. In our experiments, the results were still good even when $\epsilon = 1.0$.

4 Explicit Algorithm

The proposed scheme adds one inequality constraint in each iteration; hence the size of (LP_k) becomes larger and larger. In order to avoid solving overly large problems, we would like to exploit the possibility of dropping unnecessary constraints as a new constraint is added in each iteration. This is referred to as an "explicit exchange method" in this paper.

To introduce such an explicit algorithm, let us start with some notation. Let $T' = \{t_1, \ldots, t_m\}$ be a subset of $T$ with $m$ elements. We denote by LP(T') the following linear program with $m$ explicit constraints induced by $T'$:

LP(T')
$$
\begin{aligned}
\min\ & \sum_{j=1}^{n} c_j x_j \\
\text{s.t.}\ & \sum_{j=1}^{n} x_j f_j(t_i) \ge g(t_i), \quad i = 1, 2, \ldots, m, \qquad (45) \\
& x_j \ge 0, \quad j = 1, 2, \ldots, n. \qquad (46)
\end{aligned}
$$

Similar to the situation faced in Section 2, for ease of description, given that $\epsilon > 0$ is a prescribed small number, we state our explicit algorithm in the following steps, under the assumption that LP(T_k) and its dual DLP(T_k) are both solvable.

Step 0. Let $k \leftarrow 1$, choose any $t_1^0 \in T$, set $T_1 = \{t_1^0\}$, and set $m_0 = 0$.

Step 1. Solve LP(T_k). Let $x^k = (x_1^k, \ldots, x_n^k)^T$ be an optimal solution. Define $\sigma_k(t)$ according to (7).

Step 2. Solve DLP(T_k). Let $y^k = (y_1^k, \ldots, y_{m_{k-1}+1}^k)^T$ be an optimal solution. Define a discrete measure $\lambda_k$ on $T$ by letting

$$\lambda_k(t) = \begin{cases} y_i^k, & \text{if } t = t_i^{k-1} \in T_k, \\ 0, & \text{if } t \in T \setminus T_k. \end{cases}$$

Set $E_k = \{ t \in T_k \mid \lambda_k(t) > 0 \} = \{ t_1^k, \ldots, t_{m_k}^k \}$.

Step 3. Find any $t_{m_k+1}^k \in T$ such that $\sigma_k(t_{m_k+1}^k) < -\epsilon$. If no such $t_{m_k+1}^k$ exists, stop and output $x^k$ as a solution. Otherwise, set $T_{k+1} = E_k \cup \{ t_{m_k+1}^k \}$.

Step 4. Update $k \leftarrow k + 1$ and go to Step 1.
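The constraint-dropping rule of Step 2 can be sketched by reading the dual values off an LP solver; using the HiGHS duals exposed by SciPy is an implementation choice of ours, not the paper's.

```python
# Sketch of Steps 1-2 of the explicit algorithm: solve LP(T_k), read off
# the dual variables y_i, and keep only points with positive dual weight.
# Using scipy's HiGHS duals is our implementation choice, not the paper's.
import numpy as np
from scipy.optimize import linprog

def solve_and_drop(c, f, g, ts, zero_tol=1e-9):
    A = np.array([f(t) for t in ts])
    b = np.array([g(t) for t in ts])
    # LP(T_k): min c^T x  s.t.  A x >= b, x >= 0  (passed as -A x <= -b)
    res = linprog(c, A_ub=-A, b_ub=-b, bounds=[(0, None)] * len(c),
                  method="highs")
    if res.status != 0:                    # unbounded LP(T_k): no duals exist
        raise RuntimeError(res.message)
    y = -res.ineqlin.marginals             # dual variables y_i >= 0 of (45)
    E = [t for t, yi in zip(ts, y) if yi > zero_tol]   # E_k: lambda_k(t) > 0
    return res.x, E                        # Step 3 then sets T_{k+1} = E + [t_new]
```

When LP(T_k) is unbounded below and DLP(T_k) is infeasible, as in the early iterations of the experiment reported below, no dual values exist; the authors then keep $E_k = T_k$, and the sketch would need the same fallback instead of raising.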

Set Ek = ft 2 Tk j k (t) > 0g = ftk1 ;    ; tkmk g: Step 3: Find any tkmk +1 2 T such that k (tkmk +1) < ?. If such tkmk +1 does not exist, stop and output xk as a solution. Otherwise, set Tk+1 = Ek [ ftkmk +1 g. Step 4: Update k k + 1 and go to Step 1. Note that tkmk +1 2= Tk and if the algorithm terminates in Step 3, then the output solution xk indeed solves (LSIP ) with  being suciently small. Also note that when it is applicable, a primal-dual algorithm can be employed to nd solutions for LP (Tk ) and DLP (Tk ) simultaneously for computational eciency. Remember that xk = (xk1 ;    ; xkn)T solves LP (Tk ). Let Bk = fj1k ;    ; jpkk g be an index set such that xkj > 0; if and only if j 2 Bk . Also let Mk be a pk  mk matrix with its j th row vector being (47) (fj (tk1 );    ; fj (tkmk )); for j 2 Bk : On the dual side, remembering the de nition of k (t), we de ne

kj

=

mk X i=1

fj (tki )k (tki ) ? cj ; for j = 1;    ; n:

(48)

In this way, we know that $\beta_j^k \le 0$, $\forall j, k$. With this setting, and parallel to the proof given for Theorem 2.3, we have the following result.

Theorem 4.1. Given any $\epsilon > 0$, assume that, in each iteration,

(A1) the set $\{x^1, x^2, \ldots\}$ is bounded;
(A2) DLP(T_k) is nondegenerate.

Moreover, assume that there exists a $\delta > 0$ such that

(A3) $\lambda_k(t_i^k) \ge \delta$, $\forall i = 1, \ldots, m_k$;
(A4) $\beta_j^k < -\delta$, $\forall j \notin B_k$;
(A5) $M_k^T$ has a square submatrix $D_k$ with rank $p_k$ ($= |B_k|$) and $|\det(D_k)| > \delta$.

Then the explicit algorithm terminates in a finite number of iterations.

The above theorem asserts that, under proper assumptions, the explicit algorithm terminates with a solution in a finite number of iterations. Similarly, the result of Theorem 2.4 can be used to tell how good such a solution is. Note that, for the relaxed scheme of Theorem 2.3, if (LP_k) has a bounded feasible domain, then so do (LP_{k+1}), (LP_{k+2}), .... However, for the explicit algorithm, $T_k$ may not be a subset of $T_{k+1}$; therefore, we directly assume that the set of optimal solutions to LP(T_k), $k = 1, 2, \ldots$, is bounded.

We now use Problem 3.2 of Section 3 to illustrate the potential of the explicit algorithm. The computational experiment was conducted in the same test environment as described in Section 3, and the results are shown in Table 3.

Table 3: Comparison of the relaxed scheme and the explicit algorithm

                      Number of constraints in LP(T_k)
  iteration k       relaxed scheme    explicit algorithm
  1-9                    1-9                 1-9
  10                      10                   6
  11                      11                   6
  12                      12                   6
  13                      13                   6
  14                      14                   6
  time (sec)            4.94                4.73
  obj. value       -1.786917           -1.786917

Observe that in the first nine iterations, since LP(T_k) is unbounded below and hence DLP(T_k) is infeasible, we simply find a feasible solution of LP(T_k) and define $\sigma_k(t)$ in Step 1, set $E_k = T_k$, and jump to Step 3. In this way, both methods add one constraint per iteration. After the ninth iteration, DLP(T_k) becomes feasible and the explicit algorithm starts dropping unnecessary constraints. In this experiment, both methods stop at the 14th iteration with essentially the same final objective value. Although the time reduction is not large for this small problem, the potential of the explicit algorithm is clearly seen.

5 Concluding Remarks

In this paper, we have presented a relaxed cutting plane scheme for solving linear semi-infinite programming problems. In each iteration, the proposed scheme chooses a point at which the infinite constraints are violated to a degree, rather than a point at which the violation is maximized, to generate a new cut for the next iteration. Under proper conditions, it has been proven that the proposed scheme generates an approximate solution of any level of accuracy in a finite number of iterations. Based on this scheme, an explicit algorithm which allows the dropping of unnecessary constraints has been developed. Since the requirement of finding an optimizer of a nonlinear and nonconvex program is relaxed, we see the potential advantage of the proposed scheme, in particular in higher-dimensional spaces. Our very limited computational results support the theory. Although the method we implemented for finding $t_{k+1}$ in Step 3 of the proposed scheme is very primitive, and may not be good for general purposes, the potential advantage of the proposed method is clearly illustrated.

References

1. ANDERSON, E. J., and NASH, P., Linear Programming in Infinite-Dimensional Spaces, Wiley, Chichester, England, 1987.
2. LAI, H.-C., and WU, S.-Y., On Linear Semi-Infinite Programming Problems: An Algorithm, Numerical Functional Analysis and Optimization, Vol. 13, pp. 287-304, 1992.
3. GLASHOFF, K., and GUSTAFSON, S.-Å., Linear Optimization and Approximation, Springer Verlag, New York, New York, 1982.
4. GUSTAFSON, S.-Å., and KORTANEK, K. O., Semi-Infinite Programming and Applications, Mathematical Programming: The State of the Art, Edited by A. Bachem, M. Grötschel, and B. Korte, Springer Verlag, Berlin, Germany, pp. 132-157, 1983.
5. GUSTAFSON, S.-Å., On the Computational Solution of a Class of Generalized Moment Problems, SIAM Journal on Numerical Analysis, Vol. 7, pp. 343-357, 1970.
6. GUSTAFSON, S.-Å., and KORTANEK, K. O., Numerical Treatment of a Class of Semi-Infinite Programming Problems, Naval Research Logistics Quarterly, Vol. 20, pp. 473-504, 1973.
7. HETTICH, R., and KORTANEK, K. O., Semi-Infinite Programming: Theory, Methods, and Applications, SIAM Review, Vol. 35, pp. 380-429, 1993.
8. FANG, S.-C., and WU, S.-Y., An Inexact Approach to Solving Linear Semi-Infinite Programming Problems, Optimization, Vol. 28, pp. 291-299, 1994.
9. FANG, S.-C., LIN, C.-J., and WU, S.-Y., On Solving Convex Quadratic Semi-Infinite Programming Problems, Optimization, Vol. 31, pp. 107-125, 1994.
10. GRIBIK, P. R., A Central Cutting Plane Algorithm for Semi-Infinite Programming, Semi-Infinite Programming, Edited by R. Hettich, Springer Verlag, Berlin, Germany, pp. 66-82, 1979.
11. TICHATSCHKE, R., and NEBELING, V., A Cutting Plane Algorithm for Solving Quadratic Semi-Infinite Programs, Optimization, Vol. 19, pp. 803-817, 1988.
12. GUSTAFSON, S.-Å., and KORTANEK, K. O., Computation of Optimal Experimental Designs for Air Quality Surveillance, Quantitative Modelle für Ökonomisch-Ökologische Analysen, Edited by P. J. Jansen, O. Moeschlin, and O. Rentz, Verlag Anton Hain, Meisenheim, Germany, pp. 43-60, 1976.
13. GUSTAFSON, S.-Å., and KORTANEK, K. O., Computational Schemes for Semi-Infinite Programs, Technical Report, Department of Mathematics, Carnegie-Mellon University, Pittsburgh, Pennsylvania, 1977.
14. FANG, S.-C., and PUTHENPURA, S. C., Linear Optimization and Extensions: Theory and Algorithms, Prentice Hall, Englewood Cliffs, New Jersey, 1993.
15. LUENBERGER, D. G., Linear and Nonlinear Programming, Addison-Wesley, Reading, Massachusetts, 1984.
16. FERRIS, M. C., and PHILPOTT, A. B., An Interior Point Algorithm for Semi-Infinite Linear Programming, Mathematical Programming, Vol. 43, pp. 257-276, 1989.
17. MATLAB User Guide, The MathWorks, Natick, Massachusetts, 1994.
