The finite criss-cross method for hyperbolic programming Report 96-103

T. Illés, Á. Szirmai, T. Terlaky

Faculty of Technical Mathematics and Informatics, Delft University of Technology

ISSN 0922-5641

Copyright © 1996 by the Faculty of Technical Mathematics and Informatics, Delft, The Netherlands. No part of this Journal may be reproduced in any form, by print, photoprint, microfilm, or any other means without permission from the Faculty of Technical Mathematics and Informatics, Delft University of Technology, The Netherlands. Copies of these reports may be obtained from the bureau of the Faculty of Technical Mathematics and Informatics, Julianalaan 132, 2628 BL Delft, phone +31152784568. A selection of these reports is available in PostScript form at the Faculty's anonymous ftp-site. They are located in the directory /pub/publications/tech-reports at ftp.twi.tudelft.nl

DELFT UNIVERSITY OF TECHNOLOGY

REPORT Nr. 96-103: The Finite Criss-Cross Method for Hyperbolic Programming

T. Illés, Á. Szirmai, T. Terlaky

ISSN 0922-5641. Reports of the Faculty of Technical Mathematics and Informatics, Nr. 96-103, Delft, October 1996.

Tibor Illés, Tamás Terlaky, Delft University of Technology, Faculty of Technical Mathematics and Informatics, P.O. Box 5031, 2600 GA Delft, The Netherlands. E-mail: [email protected], [email protected]. Ákos Szirmai, Eötvös Loránd University, Computer Science Department, Múzeum krt. 6-8., H-1088 Budapest, Hungary. E-mail: sz [email protected]. Tibor Illés is on leave from Eötvös Loránd University, Operations Research Department, Budapest, Hungary.


Abstract. In this paper the finite criss-cross method is generalized to solve hyperbolic programming problems. Just as in the case of linear or quadratic programming, the criss-cross method can be initialized with any, not necessarily feasible, basic solution. Finiteness of the procedure is proved under the usual mild assumptions. Some small numerical examples illustrate the main features of the algorithm. Key words: hyperbolic programming, pivoting, criss-cross method.


1 Introduction

The hyperbolic (fractional linear) programming problem is a natural generalization of the linear programming problem. The linear constraints are kept, but the linear objective function is replaced by a quotient of two linear functions. Such fractional linear objective functions arise in economic models when the goal is to optimize profit/allocation type functions (see for instance [12]). The objective function of the hyperbolic programming problem is neither linear nor convex; nevertheless, there are several efficient solution methods for this class of nonlinear programming problems. The existence of efficient algorithms is due to the following nice properties of the objective function. Martos proved in his early work [13] that the objective function is not only pseudoconvex but also pseudolinear, thus any local minimum of it is also a global minimum. Another attractive property of the fractional linear problem was identified by Charnes and Cooper [4]. They showed that any hyperbolic programming problem is equivalent to a special linear programming problem. Thus it is not surprising that suitable adaptations of linear programming algorithms, like simplex methods [3, 4, 6] or Karmarkar's interior point algorithm [1, 2], solve the hyperbolic programming problem. A thorough survey on this topic, with nearly 1200 references, was written by Schaible [14].

The criss-cross method offers a new view of pivot methods for optimization problems. Its main features are: (1) it can be initialized with any, not necessarily feasible, basic solution; (2) it solves the problem in one phase, without introducing artificial variables; (3) in a finite number of steps it either solves the problem or shows that the optimization problem is infeasible or unbounded. The first criss-cross methods were designed for linear [15, 16] and oriented matroid linear programming [17, 18]. Generalizations to quadratic programming [9], sufficient complementarity problems [7] and oriented matroid linear complementarity problems [5] followed shortly. Up till now, to the best of our knowledge, no criss-cross method has been designed for hyperbolic programming. The goal of this paper is to fill this gap by generalizing the criss-cross method to this important class of problems.

In Section 2, after the formulation of the primal and dual problems, we summarize some of the basic properties of the hyperbolic programming problem pair. We discuss the relations between specific sign structures of pivot tableaus and infeasibility or unboundedness of the problems. Then, in Section 3, after presenting the adaptation of the algorithm, its finiteness is proved under the usual weak assumptions. As we will see, all the basic characteristics of the criss-cross method are preserved. Finally, our hyperbolic criss-cross algorithm is illustrated on two simple examples chosen from Martos' book [13]. These examples show that different basic solutions are visited during the solution procedure than with other known simplex-type algorithms.
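Because the Charnes and Cooper reduction mentioned above reappears later (the problem (SP) in the proof of Lemma 3.1 is exactly this reformulation), it is worth recording explicitly; the name $\lambda$ for the scaling variable is our choice, and the equivalence assumes the positivity condition introduced in Section 2:
\[
\min\left\{\frac{c^T x}{d^T x} : Ax = b,\; x \ge 0\right\}
\;\Longleftrightarrow\;
\min\left\{c^T u : Au - \lambda b = 0,\; d^T u = 1,\; u \ge 0,\; \lambda \ge 0\right\},
\]
with the correspondence $u = x/(d^T x)$ and $\lambda = 1/(d^T x)$, so that $x = u/\lambda$ whenever $\lambda > 0$.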


2 Basic properties of the hyperbolic (fractional linear) programming problem

The primal hyperbolic programming (PHP) problem in standard form is as follows:

\[
\begin{array}{rl}
\min & \dfrac{c^T x}{d^T x} \\[1ex]
\text{s.t.} & Ax = b \\
 & x \ge 0
\end{array}
\qquad (PHP)
\]
where $c, d, x \in \mathbb{R}^n$, $b \in \mathbb{R}^m$, $A \in \mathbb{R}^{m \times n}$, $\mathrm{rank}(A) = m$, and $J = \{1, 2, \dots, n\}$ is the index set of the variables. The set $P := \{x \in \mathbb{R}^n : Ax = b,\ x \ge 0\}$ contains all feasible solutions of the primal problem. Let us introduce the standard [2] positivity assumption, which is a necessary condition for obtaining a dual problem without a duality gap. This assumption is used in proving the correctness and finiteness of several algorithms [2, 4].

Assumption 2.1 $P \subseteq \{x \in \mathbb{R}^n : d^T x > 0\}$.

If $P \subseteq \{x \in \mathbb{R}^n : d^T x < 0\}$, then using $-d$ and $-c$ in (PHP) we find that the previous assumption holds. Thus the only case excluded by the assumption above is that $\exists x \in P : d^T x = 0$. It is well known that the dual of (PHP) is a special linear programming problem:

\[
\begin{array}{rl}
\max & -y_0 \\[0.3ex]
\text{s.t.} & A^T y + d\, y_0 \ge -c \\[0.3ex]
 & b^T y \le 0
\end{array}
\qquad (DHP)
\]
where $y \in \mathbb{R}^m$ and $y_0 \in \mathbb{R}$.
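As a sanity check, weak duality between (PHP) and (DHP) can be verified in one line under Assumption 2.1 (this short verification is ours, not part of the original text): for any primal feasible $x$ and dual feasible $(y, y_0)$,
\[
0 \le x^T\!\left(A^T y + d\, y_0 + c\right) = b^T y + y_0\, d^T x + c^T x \le y_0\, d^T x + c^T x,
\]
using $x \ge 0$, $Ax = b$ and $b^T y \le 0$; dividing by $d^T x > 0$ gives $-y_0 \le c^T x / d^T x$, so the dual objective value never exceeds the primal one.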

As we are going to present a finite pivot algorithm for the hyperbolic programming problem, we need a suitable simplex (or, in other words, basic) tableau. This tableau differs from the usual simplex tableau, thus we give a formal definition of it below. Let us denote by $a_j$ the $j$th column of the matrix $A$, and by $t^{(i)}$ the $i$th row of the matrix $\bar T$.


[Figure 1: the extended pivot tableau. Its body is $\bar T$ with the right-hand-side column $\bar b$; it is bordered by the row $\bar c^T$ with corner entry $-\bar c_0$, the row $\bar d^T$ with corner entry $-\bar d_0$, the unit column $(0, 0, \dots, 0, 1)^T$, the auxiliary row $p$ and the auxiliary column $q$; $v_j$ marks a generic column and $u^{(i)T}$ a generic row of the extended tableau.]

The following notations are used in Figure 1:
\[
\begin{aligned}
\bar c_0 &:= c_B^T B^{-1} b, &\qquad \bar d_0 &:= d_B^T B^{-1} b, &\qquad \bar b &:= B^{-1} b,\\
\bar c^T &:= c^T - c_B^T B^{-1} A, & \bar d^T &:= d^T - d_B^T B^{-1} A, & \bar T &:= B^{-1} A,\\
p &:= \bar c - (\bar c_0 / \bar d_0)\, \bar d, & q &:= (1/\bar d_0)\, \bar b, & &\\
u^{(i)T} &:= t^{(i)} + (\bar b_i / \bar d_0)\, \bar d^T, & v_j &:= B^{-1} a_j + (\bar d_j / \bar d_0)\, B^{-1} b, & &
\end{aligned}
\]
where the matrix $B$ denotes a basis of the linear system $Ax = b$. Let us denote the index set of the basic variables by $J_B$, while $J_N = J \setminus J_B$ denotes the index set of the nonbasic variables.
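To make these definitions concrete, the following small sketch (our own illustration, not part of the report) computes the tableau quantities with NumPy; the function name, the dense matrix inverse and the data layout are our choices:

```python
import numpy as np

def hyperbolic_tableau(A, b, c, d, basis):
    """Quantities of Figure 1 for the basis column-index list `basis`."""
    B = A[:, basis]
    Binv = np.linalg.inv(B)        # a factorization would be used in practice
    T = Binv @ A                   # T-bar = B^{-1} A
    b_bar = Binv @ b               # values of the basic variables
    c0 = c[basis] @ b_bar          # c-bar_0 = c_B^T B^{-1} b
    d0 = d[basis] @ b_bar          # d-bar_0 = d_B^T B^{-1} b
    c_bar = c - c[basis] @ T       # reduced costs of c
    d_bar = d - d[basis] @ T       # reduced costs of d
    p = c_bar - (c0 / d0) * d_bar  # row used to detect dual infeasibility
    q = b_bar / d0                 # column used to detect primal infeasibility
    return T, b_bar, c_bar, d_bar, c0, d0, p, q
```

The row $u^{(i)}$ and the column $v_j$ are then obtained as `T[i] + (b_bar[i]/d0) * d_bar` and `T[:, j] + (d_bar[j]/d0) * b_bar`, respectively.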

Definition 2.1 The variable $x_i < 0$, $i \in J_B$, is called primal infeasible, and the variable $x_j$, $j \in J_N$, is called dual infeasible if $\bar c_j < (\bar c_0 / \bar d_0)\, \bar d_j$.

The well-known [13] optimality, primal and dual infeasibility criteria for the hyperbolic programming pair (PHP ) and (DHP ) are summarized in Propositions 2.1-2.3.

Proposition 2.1 If $\bar b \ge 0$ and $p \ge 0$, then we have an optimal tableau: the vector $x = (x_B, x_N) = (B^{-1} b, 0)$ is a primal optimal solution, and $y_0 = -\bar c_0 / \bar d_0$, together with the corresponding vector $y$, is a dual optimal solution.

Proposition 2.2 If there exists $r \in J_B$ such that $\bar b_r < 0$ and $u^{(r)} \ge 0$, then there is no primal feasible solution.

Proposition 2.3 If there exists $r \in J_N$ such that $p_r < 0$ and $v_r \le 0$, then there is no dual feasible solution.

Propositions 2.1-2.3 serve as the stopping criteria for our algorithm. If we simply generalized the criss-cross method [15, 16] in a formal way, we would obtain the following pivot rule.

Pivot rule P.

I. (i) If $\bar b \ge 0$ and $p \ge 0$, then the pivot tableau is optimal. STOP.
(ii) Otherwise, let $k := \min\{i : \bar b_i < 0 \text{ or } p_i < 0,\ i = 1, 2, \dots, n\}$.

II. (i) If $p_k < 0$ and $v_k \le 0$, then there is no dual feasible solution. STOP.
(ii) Otherwise, let $s := \min\{j \in J_B : v_{jk} > 0\}$; make a pivot at position $(s, k)$ and go to I.

III. (i) If $\bar b_k < 0$ and $u^{(k)} \ge 0$, then there is no primal feasible solution. STOP.
(ii) Otherwise, let $r := \min\{j \in J_N : u^{(k)}_j < 0\}$; make a pivot at position $(k, r)$ and go to I.

The pivot rule P. described above has to be modified, because $\bar t_{sk}$ at step II.(ii) or $\bar t_{kr}$ at step III.(ii) can be zero. This is an immediate consequence of the fact that we use the artificially defined vectors $p$, $v_k$ and $u^{(k)}$ for finding the leaving and entering variables. Instead of discussing the disadvantages of Pivot rule P., let us explain a useful property of it, which will be very important for the correctness and finiteness of our algorithm.

Proposition 2.4 Let us suppose that $\bar d_0' \ne 0$, where $B'$ is the present basis. Choose the pivot position $(s, r)$ by using the vectors $\bar b$, $u^{(s)}$ (or $p$, $v_r$). If $\bar t_{sr} \ne 0$ then $\bar d_0'' \ne 0$, where $\bar d_0''$ denotes the new value of $\bar d_0$.

Proof: Based on Pivot rule P., we have that $\bar b_s < 0$ and
\[
u_{sr} = \bar t_{sr} + \bar b_s\, \frac{\bar d_r}{\bar d_0'} < 0.
\]
Let us suppose to the contrary that $\bar d_0'' = 0$. Then
\[
-\bar d_0'' = -\bar d_0' - \bar d_r\, \frac{\bar b_s}{\bar t_{sr}} = 0,
\qquad \text{therefore} \qquad
\bar d_0' = -\bar d_r\, \frac{\bar b_s}{\bar t_{sr}}.
\]
Using the assumptions $\bar d_0' \ne 0$ and $\bar t_{sr} \ne 0$ we obtain
\[
\bar t_{sr} + \bar b_s\, \frac{\bar d_r}{\bar d_0'} = 0,
\]
which is a contradiction. The proof goes analogously for the case $p_k < 0$ and $\bar t_{sr} > 0$. □

If we apply Pivot rule P. starting from a basis with $\bar d_0 \ne 0$, and the pivot position holds a nonzero element, then by Proposition 2.4 the new tableau will have a nonzero entry at the position of $\bar d_0$ as well. We can repeat such pivots until $\bar t_{sk} = 0$ (or $\bar t_{kr} = 0$) occurs, when it is obviously impossible to pivot at position $(s, k)$ (or $(k, r)$, respectively). If we want a pivot rule which uses only the sign structure of the vectors $p$, $\bar b$, $v_j$ and $u^{(i)}$, then we should adjust Pivot rule P. to cover this pathological case, i.e. when $\bar t_{sk} = 0$ (or $\bar t_{kr} = 0$). In such a case it seems reasonable to choose either the position of $-\bar c_0$ or the position of $-\bar d_0$. (We refer to such a pivot as an external pivot.) Considering the definition of the vectors $p$, $v_j$ and $u^{(i)}$, it is not surprising that the position of $-\bar d_0$ is the only suitable position for an external pivot. Figure 2 shows the changes of the entries after an external pivot. Obviously the sufficient condition for an external pivot is $\bar d_0 \ne 0$, and this property of the tableau is preserved during the pivot sequence (Assumption 2.1 and Proposition 2.4).

[Figure 2: the effect of an external pivot at the position of $-\bar d_0$. Before the pivot the relevant entries are $-\bar c_0$, $\bar c_j$, $\bar b_i$, $\bar t_{ij}$, $\bar d_j$ and $-\bar d_0$; after the pivot they become]
\[
\bar c_j \to \bar c_j - \frac{\bar c_0}{\bar d_0}\,\bar d_j,\qquad
-\bar c_0 \to -\frac{\bar c_0}{\bar d_0},\qquad
\bar t_{ij} \to \bar t_{ij} + \frac{\bar d_j}{\bar d_0}\,\bar b_i,\qquad
\bar b_i \to \frac{\bar b_i}{\bar d_0},\qquad
\bar d_j \to -\frac{\bar d_j}{\bar d_0},\qquad
-\bar d_0 \to -\frac{1}{\bar d_0}.
\]

After an external pivot our pivot tableau has the following form (Figure 3).

[Figure 3: the transformed tableau. The first row is $p^T$ with corner entry $-\bar c_0/\bar d_0$, the body is $\bar T = V = U$ with last column $q$, and the last row is $-(1/\bar d_0)\,\bar d^T$ with corner entry $-1/\bar d_0$.]

Observe that the new vectors $p$ and $q$ occur in the first row and the last column, respectively. This means that $\bar b$ is no longer a part of our pivot tableau. If we decide about the pivot position by using the same Pivot rule P., then the pivot position will still be the same position $(s, k)$ (or $(k, r)$). But now at that position we have either

\[
\bar t_{sk} + \frac{\bar d_k}{\bar d_0}\,\bar b_s = \frac{\bar d_k}{\bar d_0}\,\bar b_s
\qquad \text{or} \qquad
\bar t_{kr} + \frac{\bar d_r}{\bar d_0}\,\bar b_k = \frac{\bar d_r}{\bar d_0}\,\bar b_k.
\]

These values are nonzero, i.e. we can pivot at position $(s, k)$ (or $(k, r)$) after an external pivot. Summarizing the above discussion: if the coefficient at the originally chosen position was zero, then after an external pivot it becomes nonzero, so the pivot can be made there. Since this procedure involves two pivots, we speak about a double pivot. An advantage of double pivoting is that afterwards we can decide about the next pivot position by using the same kind of vectors ($p$, $v_j$ and $u^{(i)}$) without destroying the structure of our pivot tableau. Hence double pivoting has no effect on the sign structure used for pivot selection. If we adjust Pivot rule P. by making a double pivot when necessary, one can easily verify the following lemma.

Lemma 2.1 Double pivoting may occur at most once. □

If our initial basic solution $x^0$ was such that $\bar d_0 = d^T x^0 \ne 0$, then after a pivot at any position selected by Pivot rule P., the next basic solution $x''$ has the property $\bar d_0'' = d^T x'' \ne 0$. The sufficient condition for being able to make an external pivot (or a double pivot) is the same, i.e. $\bar d_0 \ne 0$. For this purpose let us assume the following.

Assumption 2.2
\[
\{x \in \mathbb{R}^n : Ax = b\} \cap \{x \in \mathbb{R}^n : d^T x = 0\} = \emptyset.
\]

Assumption 2.2 is similar to the usual assumptions given in hyperbolic programming [1, 10, 11]. It is easy to check and, in practice, it is not a strong condition.

3 The criss-cross algorithm

In this section we present our criss-cross algorithm for the fractional linear programming problem. We prove its finiteness and illustrate the solution process on two simple examples. The positivity assumption is necessary to prove the correctness of the algorithm.

Algorithm 3.1

Initialization: Let us suppose that an initial basis $B_0$ is given and that Assumptions 2.1 and 2.2 hold.

Step 1: Compute the values $\bar d_0 = d_B^T B^{-1} b$ and $\bar c_0 = c_B^T B^{-1} b$ and the vectors $p := \bar c - (\bar c_0/\bar d_0)\,\bar d$ and $q := (1/\bar d_0)\,\bar b$. Let $I := \{i \in J_B : \bar b_i < 0\} \cup \{i \notin J_B : p_i < 0\}$. If $I = \emptyset$ then one of the following two cases may occur:

(1) [Figure 4: the row of $p$ and the column of $\bar b$ are nonnegative and the entry at the position of $-\bar d_0$ is negative.] Optimal solution. STOP.

(2) [Figure 5: the row of $p$ and the column of $q$ are nonnegative and the entry at the position of $-1/\bar d_0$ is nonpositive.] Optimal solution. STOP.

Otherwise, let $r := \min\{i : i \in I\}$ and go to Step 2.

Step 2: If $r \in J_B$ then (dual iteration): compute the vector $u^{(r)}$ and let $K := \{j \notin J_B : u^{(r)}_j < 0\}$. If $K = \emptyset$ then $P = \emptyset$; STOP. Otherwise, let $s := \min\{j : j \in K\}$. If $\bar t_{rs} = 0$ then make a double pivot at the position of $-\bar d_0$ and at $(r, s)$, else pivot at position $(r, s)$.

Else (primal iteration): compute the vector $v_r$ and let $K := \{j \in J_B : v_{jr} > 0\}$. If $K = \emptyset$ then $D = \emptyset$; STOP. Otherwise, let $s := \min\{j : j \in K\}$. If $\bar t_{sr} = 0$ then make a double pivot at the position of $-\bar d_0$ and at $(s, r)$, else pivot at position $(s, r)$.

Go to Step 1. □
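The steps above translate almost mechanically into code. The sketch below is ours, not the report's: it reuses the `hyperbolic_tableau` helper from Section 2 and follows the minimal-index rule, and where the report performs a double pivot we simply assert that the pivot entry is nonzero, so the pathological case is flagged rather than handled:

```python
import numpy as np

def criss_cross_hyperbolic(A, b, c, d, basis, max_iter=1000):
    """Minimal-index criss-cross sketch for (PHP), after Algorithm 3.1."""
    m, n = A.shape
    basis = sorted(basis)
    for _ in range(max_iter):
        T, b_bar, c_bar, d_bar, c0, d0, p, q = hyperbolic_tableau(A, b, c, d, basis)
        assert abs(d0) > 1e-12  # Assumption 2.2 keeps d-bar_0 nonzero
        I = sorted([basis[i] for i in range(m) if b_bar[i] < -1e-9] +
                   [j for j in range(n) if j not in basis and p[j] < -1e-9])
        if not I:
            return 'optimal', basis
        r = I[0]
        if r in basis:  # dual iteration: primal infeasible x_r leaves
            row = basis.index(r)
            u = T[row] + (b_bar[row] / d0) * d_bar
            K = [j for j in range(n) if j not in basis and u[j] < -1e-9]
            if not K:
                return 'primal infeasible', None  # Proposition 2.2
            s = min(K)
            assert abs(T[row, s]) > 1e-12  # zero entry would need a double pivot
            basis[row] = s
        else:  # primal iteration: dual infeasible x_r enters
            v = T[:, r] + (d_bar[r] / d0) * b_bar
            K = [i for i in range(m) if v[i] > 1e-9]
            if not K:
                return 'dual infeasible', None  # Proposition 2.3
            row = min(K, key=lambda i: basis[i])  # minimal basic variable index
            assert abs(T[row, r]) > 1e-12
            basis[row] = r
        basis.sort()
    raise RuntimeError('iteration limit reached')
```

On Example 3.2 below this sketch reproduces the basis sequence $\{y_1, y_2\}$, $\{x_1, y_2\}$, $\{x_1, x_2\}$ reported in the text.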

Before verifying the finiteness of the algorithm, let us analyze the stopping situations. The sign structure of Figure 4 corresponds to a primal feasible solution, and $-\bar d_0 < 0$ means that Assumption 2.1 is (also) satisfied. The case $\bar d_0 = 0$ is allowed by Assumption 2.1, but the pivot rule excludes it, as proved in Proposition 2.4.

[Figure 6: as Figure 4, but with a positive entry at the position of $-\bar d_0$.]

The sign structure shown in Figure 6 is excluded by Assumption 2.1, because $-\bar d_0 > 0$ cannot occur at a primal feasible solution.

Figure 5 deals with the case $-1/\bar d_0 \le 0$. If $-1/\bar d_0 < 0$ then $\bar b \ge 0$, because $q = (1/\bar d_0)\,\bar b$, therefore we have a primal feasible solution (and Assumption 2.1 is satisfied). Primal feasibility together with $p \ge 0$ means that our solution is an optimal one [13]. If $-1/\bar d_0 = 0$, then the following result holds.

Lemma 3.1 Let us suppose that, sometime after a double pivot, the sign structure of Figure 5 with $-1/\bar d_0 = 0$ occurs (a nonnegative row $p$ and column $q$ with a zero corner entry). Then problem (PHP) has a primal feasible solution and the objective function is bounded from below, but there is no optimal solution: the optimal value of the objective function can only be approached in the limit.

Proof: The obtained sign structure says that an optimal solution $(\hat u, \hat\lambda) := (q, 0)$ of the problem
\[
\begin{array}{rl}
\min & c^T u \\
\text{s.t.} & Au - \lambda b = 0 \\
 & d^T u = 1 \\
 & u \ge 0,\ \lambda \ge 0
\end{array}
\qquad (SP)
\]
is found, where the optimal value is $\hat\varphi := c^T \hat u$ and $\hat\lambda = 0$. Thus $\hat u$ solves

\[
Au = 0, \qquad d^T u = 1, \qquad u \ge 0.
\]
In this case the feasible solution set of (PHP) is unbounded, and a direction $\hat u$ is found such that the sequence of solutions defined by $x^k := x^0 + k\hat u \in P$ stays feasible, where $x^0 \in P$ and $d^T x^0 = 1$ (then $\varphi_0 = c^T x^0$). Further,
\[
\varphi_{0,k} := \frac{c^T x^k}{d^T x^k} = \frac{c^T x^0 + k\, c^T \hat u}{d^T x^0 + k\, d^T \hat u} = \frac{\varphi_0 + k\hat\varphi}{k+1}.
\]
It is easy to verify that the values $\varphi_{0,k}$ form a monotonically decreasing sequence of real numbers (indeed, $\varphi_{0,k} - \hat\varphi = (\varphi_0 - \hat\varphi)/(k+1)$). If we want to compute the objective function value with $\varepsilon > 0$ accuracy, then it suffices to choose
\[
k(x^0, \varepsilon) := \frac{\varphi_0 - \hat\varphi}{\varepsilon}.
\]
It remains to show that in such a case there is no optimal solution. Let us assume to the contrary that there is an optimal solution $\bar x$ with objective value $\bar\varphi$. But for the sequence of solutions produced similarly as before (just using $x^0 := \bar x$), the objective function value is (strictly) monotonically decreasing. This contradicts the assumption that $\bar x$ is an optimal solution. □

The sign structure of Figure 7 is related to Step 1, case (2) (Figure 5), but it is excluded by Assumption 2.1.

[Figure 7: as Figure 5, but with a positive entry at the position of $-1/\bar d_0$.]

The finiteness of Algorithm 3.1 will be verified by using the pivot tableau described in Figure 3; the only difference is that instead of the vector $q$ we will use the vector $\bar b$. The proof is based on the orthogonality theorem ([8], Theorem 2.3). The double pivot has no influence on the indices of the entering and the leaving variables (Proposition 2.4 and Lemma 2.1), thus we need not distinguish between the cases when such a pivot occurs or not. Our proof follows the main steps of Terlaky's original proof for the linear programming case [15, 16].

Theorem 3.1 The criss-cross method (Algorithm 3.1) for the hyperbolic programming problem is finite.

Proof: Let us assume to the contrary that the algorithm is not finite. The number of all possible bases is finite, thus at least one basis must occur infinitely many times during the computation. This can happen only if the algorithm is cycling. Let us denote by $I^*$ the index set of those variables which enter the basis during a cycle. (These indices correspond to the variables which leave the basis during a cycle as well.) Let $l := \max_{i \in I^*} i$.

Let us examine when the variable $x_l$ enters/leaves the basis. We have four cases: the variable $x_l$

(a) enters and leaves the basis at primal iterations;
(b) enters the basis at a primal iteration, but leaves at a dual iteration;
(c) enters the basis at a dual iteration, but leaves at a primal iteration;
(d) enters and leaves the basis at dual iterations.

The sign structures of the four cases when the variable $x_l$ enters or leaves the basis are summarized in Figure 8. (The top row of each tableau shows the sign structure of the vector $p$, while the right-hand-side column contains the sign structure of the vector $\bar b$, instead of that of the vector $q$ of Figure 3.) The cases when the variable $x_l$ enters and leaves the basis at primal iterations are shown in tableaus (A) and (B), respectively; the cases when it enters and leaves the basis at dual iterations are shown in tableaus (C) and (D), respectively.

[Figure 8: the sign structures of the tableaus (A), (B), (C) and (D) at the iterations where $x_l$ enters or leaves the basis; the relevant rows and columns are reproduced in the case analysis below.]

Let us analyze all possible cases. For simplicity, we use the sign vectors of the rows and (extended) columns as defined in the paper of Klafszky and Terlaky ([8], page 102). The hyperbolic programming problem can be treated as a natural extension of linear programming, but for verifying the finiteness of Algorithm 3.1, instead of the row space of the matrix
\[
\begin{pmatrix} A & -b \\ c^T & 0 \end{pmatrix}
\]
as in the case of linear programming, we have to use the row space of the following matrix:
\[
\begin{pmatrix} A & -b & 0 \\ c^T & 0 & 0 \\ d^T & 0 & 1 \end{pmatrix}.
\]
To show that the cases (a), (c) and (d) cannot occur is relatively easy, and the three arguments are very similar to each other, while case (b) needs a little more attention. Let us start with the easier cases.

( )

( )

l b c d

t p =  : : :  ? 0 1 ? dc ( )

0 0

ts = : : : + 0 c s

ds

should be orthogonal, but tsT t p < 0 holds, taking into consideration that ps = cs ? c0 d0 ds < 0. ( )

(c) Variable xl enters the basis at dual iteration while variable xr leaves the basis and variable xl leaves the basis at primal iteration. Using tableaus (C) and (B) of the Figure 8, from the orthogonality theorem we have that the rth row of tableau (C) is orthogonal to the sth (extended) column of tableau (B), i.e. l b c d

t r =  :::  ? ? 0 0 ( )

13

ts = : : : + 0   are orthogonal. From the sign structures of these vectors obviously follows that tsT t r < 0, where  means that we have no information about the sign of the corresponding elements. The coordinates of t r at the positions related to vectors c and d are zeros because those are row vectors ('basic' vectors). ( )

( )

(d) Variable xl enters the basis at dual iteration while variable xr leaves the basis and variable xl leaves the basis at dual iteration, too. Using tableaus (C) and (D) of the Figure 8, from the orthogonality theorem we have that the rth row of tableau (C) is orthogonal to the (extended) column of vector b of tableau (B), i.e. l b c d

t r =  :::  ? ? 0 0 ( )

tb =  : : :  ? ?1   are orthogonal. The values of t r at the positions related to vectors c and d are zeros because those are row vectors, while elements of tb at the same positions are denoted by  because in both cases no information about their values is available. Then tbT t r > 0 contradicting the orthogonality of these two vectors. All the cases (a), (c) and (d) contradict to the orthogonality theorem, thus they cannot occur. ( )

( )

Finally, let us consider case (b).

(b) Variable xl enters the basis at a primal iteration and leaves at a dual one. The tableaus (A) and (D) of Figure 8 are used. The vectors of tableaus (A) and (D) are marked with ' and ", respectively. From both tableaus, the row vector of p and the (extended) column vector of b are produced. The column vectors of b are normalized such that at the position belonging to vector d there is ?1. Then vectors with the following sign

structures are obtained.

l b c

d

t p =  : : :  ? 0 1 ? dc

0 0 0

( )0

0

14

t00b =  : : :  ?

?001 d0

and

? dc0000 ?1 0 0

l b c

d

t p =  : : :  0 0 1 ? dc

00 0 00

( )00

0

t0b =  : : :  ?

?01 d0

? dc00 ?1 0 0

00 T t p 0 = 0 and t0 T t p 00 = 0, therefore 0 = From the orthogonality theorem we have t b b t00bT t p 0 + t0bT t p 00 . But from the sign structures above the sum of the two scalar products is a positive number, which leads to contradiction. This completes our proof. 2 ( )

( )

( )

( )

Finally, let us illustrate the performance of our algorithm on two small examples choosen from the book of Martos [13]. In this way we can immediately compare the sequence of bases produced by our algorithm to those which are produced in [13]. Martos used the following example ([13], page 170) to illustrate his own algorithm.

Example 3.1 Find the minimum of the function ', where '(x ; x ) = 5x24+x x+ +6 1 1

1

2

1

under the constraints

?x + x  1 x ?x  1 x ;x  0 1

2

1

2

1

2

15

2

Vertices of the feasible solution set are x^1 = (1; 0), x^2 = (0; 0); x^3 = (0; 1) with the following objective function values '^ = 6; '^ = 6; '^ = 3, respectively. Using Martos' method from vertex x^1 there is no direct way to rich vertex ^x3, which corresponds to the optimal basic solution, because there is no other feasible solution with smaller objective value than '^ , but bigger than '^ . In such a case Martos in his algorithm apply a special step, called regularization (for more detail see [13], page 169). The regularization means that a new, special constraint should be added to the set of constraints. In our example this constraint is 1

1

2

3

3

x +x 2 1

2

which generates two more primal feasible basic solutions x^4 = (1=2; 3=2), ^x5 = (3=2; 1=2). Starting from ^x1 through x^5; x^4 we can rich x^3 in three pivots by using Martos algorithm. Applying our criss-cross algorithm to the same problem, starting from the same feasible solution x^1, we make a double-pivot immediately. After that double-pivot we arrive to the vertex x^2 from which the optimal solution x^3 is obtained with a single pivot. Our algorithm used two steps but three pivots to solve the example. The sequence of basic solutions followed by our algorithm is di erent from that of generated by Martos' method. The second example is also from Martos' book ([13], page 177) and is used to illustrate the steps of the hyperbolic simplex algorithm.

Example 3.2 Find the minimum of function ', where '(x ; x ) = ?6x ? 5x 1

1

2x + 7

2

2

1

under the constraints

x + 2x  3 3x + 2x  6 x ;x  0 1

2

1

2

1

2

For this problem there is a trivial, starting feasible solution, when x = x = 0. In this case the arti cal variables y and y are in the basis. Using the hyperbolic simplex algorithm the following bases are obtained: fy ; y g, fx ; y g and fx ; x g. The last basis is optimal. All other are primal feasible and the objective value is monotonically decreasing on this sequence of solutions. 1

1

2

1

2

2

16

2

2

1

2

Applying our criss-cross algorithm to solve this problem starting from the same basic solution, di erent sequence of bases are obtained: fy ; y g, fx ; y g es fx ; x g. In our case the second basis is dual feasible, but primal infeasible ! The changes of the objective value is not monotone any more. 1

2

1

2

1

2

However, with both methods Example 3.2 is solved using only two pivots; the obtained sequences of bases are different, and all the important characteristics of these algorithms are illustrated.

Acknowledgement. This research was partially supported by the Hungarian National Research Council (grants OTKA T 014302 and OTKA T 019492). The first version of this paper was finished in July 1995, while the first author was visiting DIKU, University of Copenhagen, sponsored by the Hungarian State Eötvös Fellowship. We gratefully acknowledge all of this support.

References

[1] Anstreicher, K.M., Analysis of Karmarkar's algorithm for fractional linear programming, Technical Report, November 1985, Yale School of Management, Yale University, New Haven, CT 06520, USA.
[2] Anstreicher, K.M., A monotonic projective algorithm for fractional linear programming, Algorithmica, 1 (1986) 4, 483-498.
[3] Bitran, G.R., Novaes, A.G., Linear programming with a fractional objective function, Operations Research, 21 (1973) 22-29.
[4] Charnes, A., Cooper, W.W., Programming with linear fractionals, Naval Research Logistics Quarterly, 9 (1962) 181-186.
[5] Fukuda, K., Terlaky, T., Linear complementarity and oriented matroids, Journal of the Operations Research Society of Japan, 35 (1992) 1, 45-61.
[6] Gilmore, P.C., Gomory, R.E., A linear programming approach to the cutting stock problem, part II, Operations Research, (1963) 863-888.
[7] Hertog, D. den, Roos, C., Terlaky, T., The linear complementarity problem, sufficient matrices and the criss-cross method, Linear Algebra and its Applications, 187 (1993) 1-14.
[8] Klafszky, E., Terlaky, T., The role of pivoting in proving some fundamental theorems of linear algebra, Linear Algebra and its Applications, 151 (1991) 97-118.
[9] Klafszky, E., Terlaky, T., Some generalization of the criss-cross method for quadratic programming, Math. Oper. u. Statist. Ser. Optim., 24 (1992) 2, 127-139.

[10] Martos, B., Hiperbolikus programozás [Hyperbolic programming], Az MTA Matematikai Kutató Intézetének Közleményei, 5, Budapest (1960) 383-406.
[11] Martos, B., Hyperbolic programming, Naval Research Logistics Quarterly, 11 (1964) 135-155.
[12] Martos, B., Nem-lineáris programozási módszerek hatóköre [The scope of nonlinear programming methods], Az MTA Közgazdaságtudományi Intézetének Közleményei, 20, Budapest, 1966.
[13] Martos, B., Nonlinear Programming: Theory and Methods, Akadémiai Kiadó, Budapest, 1975.
[14] Schaible, S., Fractional programming, in: Horst, R., Pardalos, P. (eds.), Handbook of Global Optimization, Kluwer Academic Publishers, 1995.
[15] Terlaky, T., Egy új, véges criss-cross módszer lineáris programozási feladatok megoldására [A new, finite criss-cross method for solving linear programming problems], Alkalmazott Matematikai Lapok, 10 (1984) 289-296.
[16] Terlaky, T., A convergent criss-cross method, Math. Oper. u. Statist. Ser. Optim., 16 (1985) 5, 683-690.
[17] Terlaky, T., A finite criss-cross method for oriented matroids, Journal of Combinatorial Theory, Series B, 42 (1987) 319-327.
[18] Wang, Zh., A conformal elimination free algorithm for oriented matroid programming, Chinese Annals of Mathematics, 8 (1987) B 1.
