A monotonic build-up simplex algorithm for linear programming Report 91-82
K.M. Anstreicher T. Terlaky
Faculty of Technical Mathematics and Informatics, Delft University of Technology
ISSN 0922-5641
Copyright © 1991 by the Faculty of Technical Mathematics and Informatics, Delft, The Netherlands. No part of this Journal may be reproduced in any form, by print, photoprint, microfilm, or any other means without permission from the Faculty of Technical Mathematics and Informatics, Delft University of Technology, The Netherlands. Copies of these reports may be obtained from the bureau of the Faculty of Technical Mathematics and Informatics, Julianalaan 132, 2628 BL Delft, phone +3115784568. A selection of these reports is available in PostScript form at the Faculty's anonymous ftp-site. They are located in the directory /pub/publications/tech-reports at ftp.twi.tudelft.nl
DELFT UNIVERSITY OF TECHNOLOGY
REPORT 91-82 A MONOTONIC BUILD-UP SIMPLEX ALGORITHM FOR LINEAR PROGRAMMING K. M. Anstreicher and T. Terlaky
ISSN 0922-5641 Reports of the Faculty of Technical Mathematics and Informatics no. 91-82, Delft, 1991
Address of the authors:

Kurt M. ANSTREICHER
Department of Management Sciences
University of Iowa
Iowa City, IA 52240, USA
e-mail: [email protected]

Tamas TERLAKY
Faculty of Technical Mathematics and Computer Science
Delft University of Technology
Mekelweg 4, 2628 CD Delft, The Netherlands
e-mail: [email protected]

The second author is on leave from the Eötvös University, Budapest, and partially supported by OTKA no. 2115.
Abstract

We devise a new simplex pivot rule which has interesting theoretical properties. Beginning with a basic feasible solution, and any nonbasic variable having a negative reduced cost, the pivot rule produces a sequence of pivots such that ultimately the originally chosen nonbasic variable enters the basis, and all reduced costs which were originally nonnegative remain nonnegative. The pivot rule thus monotonically builds up to a dual feasible, and hence optimal, basis. A surprising property of the pivot rule is that the pivot sequence results in intermediate bases which are neither primal nor dual feasible. We prove correctness of the procedure, give a geometric interpretation, and relate it to other pivoting rules for linear programming.
Key Words: Simplex method, linear programming, pivoting, shadow vertex algorithm.
1 Introduction
We consider primal and dual linear programming (LP) problems in the standard form:

    (P)  min {cx : Ax = b, x ≥ 0},
    (D)  max {yb : yA + s = c, s ≥ 0},

where A is an m × n matrix, and b and c are conforming column and row vectors. The main goal of this paper is to present a new simplex algorithm for (P), where the number of dual feasible variables s_j ≥ 0 is (strictly) monotonically increasing. The algorithm may visit 'exterior' basic solutions, which are neither primal nor dual feasible, but always eventually returns to a primal feasible basis with an increased number of dual feasible variables.

Linear programming has been an extremely active area in applied mathematics during the last forty years. Dantzig's [6] simplex method still seems to be the most efficient algorithm for the majority of practical problems. The practical efficiency of the simplex method has been theoretically justified by proving polynomial behavior for the expected number of pivot steps on random problems; [2], [4], [12], [26]. The simplex method allows considerable flexibility in the choice of pivot element on each iteration, and dozens of variants have been developed in the last decades. (See for example [3], [5], [6], [7], [9], [19], [18], and [29].) Unfortunately there is still no provably polynomial-time simplex algorithm; finding such a method, and settling the related 'Hirsch conjecture,' remain perhaps the most challenging open problems in the theory of linear programming. Exponential examples for different simplex variants are given in [1], [10], [11], [15], [18], [20], and [23].

There have been a number of efforts to relax the requirement that a (primal) simplex method maintain feasibility in the problem (P). A first step was the parametric self-dual algorithm, [6], [9], which can be interpreted [17] as Lemke's [16] algorithm for a corresponding linear complementarity problem. A different approach to relaxing feasibility, the so-called criss-cross methods, was designed by Zionts [28] and Terlaky [24], [25]. Criss-cross methods, like the parametric self-dual method, can be initiated with any basic solution. Termination with an optimal basis, if one exists, is assured for Terlaky's variant, but the algorithm's purely combinatorial nature renders it inefficient in practice.

One widely studied simplex variant, based on parametric programming, is the shadow vertex algorithm [4]. This method is known to be exponential in the worst case [10], but under reasonable probabilistic assumptions its expected number of pivot steps is polynomial [4], [12]. Although it is motivated differently, the algorithm we develop here can be considered to be a generalization of the shadow vertex algorithm. Our algorithm can also be viewed as a generalization of the recently developed
Exterior Point Simplex Algorithm (EPSA) of Paparizzos [21].

The paper is organized as follows. The monotonic build-up (MBU) pivot rule is introduced in the next section, and correctness of the method is proved. Section 3 is devoted to different interpretations of the algorithm, and numerical illustrations of its behavior. The relationship to other simplex pivot rules is discussed in Section 4.
2 The MBU Pivot Rule
We use standard simplex notation and terminology. Given a linear program (P), a basis B is a square, nonsingular submatrix of A. The columns and variables corresponding to B are called basic, and the remaining columns and variables are called nonbasic. The nonbasic columns of A are denoted by N. A basis is (primal) feasible if b̄ = B^{-1}b ≥ 0, and in this case the basic feasible solution corresponding to B is x_B = b̄, x_N = 0, where x_B and x_N denote the basic and nonbasic variables, respectively. For a given basis B, the reduced costs are c̄ = c - yA, where y = c_B B^{-1} are the simplex multipliers and c_B are the basic components of c. A basis is dual feasible if c̄ ≥ 0, and in this case the multipliers y are a feasible solution to (D). A basis is optimal if it is both primal and dual feasible, and in this case the basic feasible solution x_B = b̄, x_N = 0 and the simplex multipliers y are optimal solutions of (P) and (D), respectively.

Given any feasible basis, our new pivot rule for (P) is as follows:
The MBU Pivot Rule

1) If c̄ ≥ 0, the current basis is optimal. Otherwise, pick s with c̄_s < 0. We refer to x_s as the driving variable.

2) (Choose leaving variable) If ā_{is} ≤ 0, i = 1, ..., m, (P) is unbounded. Otherwise let r = argmin {b̄_i / ā_{is} : ā_{is} > 0}, and let θ_1 = |c̄_s| / ā_{rs}.

3) (Choose entering variable) If ā_{rj} ≥ 0 for every j with c̄_j ≥ 0, let θ_2 = +∞. Otherwise, let t = argmin {c̄_j / |ā_{rj}| : c̄_j ≥ 0, ā_{rj} < 0}, and let θ_2 = c̄_t / |ā_{rt}|.

3a) If θ_1 ≤ θ_2, pivot on ā_{rs}, and go to step 1.

3b) If θ_2 < θ_1, pivot on ā_{rt}, and go to step 2 with the same driving variable x_s. In this case we refer to x_t as the blocking variable.
The basis is updated after a pivot in step 3, as usual. Note that the pivot in step 3a) corresponds to the ordinary choice of the leaving variable, with x_s the entering variable. The additional logic in step 3b) prevents ordinary pivots which would destroy the nonnegativity of reduced costs. Although this is an intuitive concept, the result of a pivot in 3b) will be a basis which is neither primal nor dual feasible.
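To make the bookkeeping concrete, here is a minimal sketch of the rule on a dense tableau. It is our illustration, not code from the report: NumPy is assumed, T is a float array whose first m rows hold B^{-1}A and b̄ and whose last row holds c̄ and -z_0, the driving variable is chosen Dantzig-style in step 1, and all tie-breaking is arbitrary.

```python
import numpy as np

def pivot(T, r, j):
    """Gauss-Jordan pivot on element T[r, j], updating the whole tableau."""
    T[r, :] /= T[r, j]
    for i in range(T.shape[0]):
        if i != r:
            T[i, :] -= T[i, j] * T[r, :]

def mbu(T, basis, eps=1e-9):
    """One possible reading of the MBU rule; returns 'optimal' or 'unbounded'."""
    m, n = T.shape[0] - 1, T.shape[1] - 1
    while True:
        cbar = T[m, :n]
        if np.all(cbar >= -eps):
            return "optimal"
        s = int(np.argmin(cbar))                 # step 1: driving variable (Dantzig-style choice)
        while True:
            rows = [i for i in range(m) if T[i, s] > eps]
            if not rows:
                return "unbounded"               # step 2: no positive entry in column s
            r = min(rows, key=lambda i: T[i, n] / T[i, s])
            theta1 = -T[m, s] / T[r, s]
            # step 3: blocking candidates are columns whose nonnegative reduced
            # cost would be destroyed by the ordinary pivot on a_bar_{rs}
            cand = [j for j in range(n) if T[m, j] >= -eps and T[r, j] < -eps]
            if not cand:
                theta2, t = np.inf, None
            else:
                t = min(cand, key=lambda j: T[m, j] / abs(T[r, j]))
                theta2 = T[m, t] / abs(T[r, t])
            if theta1 <= theta2:                 # 3a): the driving variable enters
                pivot(T, r, s); basis[r] = s
                break
            pivot(T, r, t); basis[r] = t         # 3b): the blocking variable enters
```

Started from Tableau 1 of the next section, this sketch should reproduce the pivot sequence of Tableaus 1-5.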
The pivot selection in 3b) can also be viewed as a "dual" pivot, since the purpose of such a pivot is to maintain as much dual feasibility as currently exists. However, the choice of subsequent pivots is not based on negative right-hand-side entries, as would be typical for a dual simplex method, but is rather motivated by the continued desire to get the driving variable x_s into the basis. The typical situation leading to a pivot in 3b), rather than 3a), can be depicted in a simplex tableau (with the basic columns omitted) as:

                  N̄                    |
            s    ...    t              |   b̄
    r  ... [+]   ...   (-)   ...       |   +
    -----------------------------------+------
    c̄  ...  -    ...    +    ...       | -z_0

where N̄ = B^{-1}N, and z_0 is the value of the objective for the current basic solution. A simple example which fully illustrates the various stages of the algorithm is given in the next section.

We will now prove that the method as described is well-defined, and solves (P) under a nondegeneracy assumption. The key to our analysis is the following theorem, which characterizes the bases which are obtained when the algorithm performs a sequence of 3b) pivots.
Theorem 1 Consider any pivot sequence produced by the MBU algorithm, corresponding to an initial feasible basis and the choice of a driving variable x_s. Then following a pivot in step 3b), the next basis produced by the algorithm has the following three properties:

a) c̄_s < 0.
b) If b̄_i < 0, then ā_{is} < 0.
c) max {b̄_i / ā_{is} : b̄_i < 0} ≤ min {b̄_i / ā_{is} : ā_{is} > 0}.
Proof:
The proof is by induction on the number of pivots from step 3b). Note that before the first such pivot a), b), and c) all hold. Therefore it will suffice to assume that the current basis satisfies a), b), and c), and prove that following a pivot in step 3b) the three properties continue to hold.

a) After a pivot on ā_{rt}, the new value of c̄_s will be

    c̄'_s = c̄_s - (ā_{rs} c̄_t) / ā_{rt}.

Then

    c̄'_s < 0  ⇔  ā_{rt} c̄_s - ā_{rs} c̄_t > 0  ⇔  -c̄_s / ā_{rs} > c̄_t / |ā_{rt}|  ⇔  θ_1 > θ_2,

which holds since the pivot was from 3b), not 3a).

b) This is immediate in the pivot row r, so consider a row i ≠ r such that the right hand side value is negative after the pivot, that is,

    b̄'_i = b̄_i - (ā_{it} / ā_{rt}) b̄_r < 0.                                   (1)

We must prove that

    ā'_{is} = ā_{is} - (ā_{it} / ā_{rt}) ā_{rs} < 0.                            (2)

Note that if b̄_r = 0, then from (1) we must have b̄_i < 0, and also ā_{is} < 0 since b) holds before the pivot. But then c) is violated before the pivot, a contradiction. Therefore b̄_r > 0, and (1) is equivalent to

    ā_{it} / ā_{rt} > b̄_i / b̄_r.

Also ā_{rs} > 0, so (2) is equivalent to

    ā_{it} / ā_{rt} > ā_{is} / ā_{rs}.

It therefore suffices to show that

    b̄_i / b̄_r ≥ ā_{is} / ā_{rs}.                                               (3)

If b̄_i ≥ 0 and ā_{is} ≤ 0, then (3) is immediate, since its left side is nonnegative and its right side is nonpositive. If ā_{is} > 0, then b̄_i ≥ 0 as well (since b) holds before the pivot), and (3) is exactly equivalent to

    b̄_r / ā_{rs} ≤ b̄_i / ā_{is},

which holds by the choice of the pivot row r. Assume alternatively that b̄_i < 0. Then ā_{is} < 0, since b) holds before the pivot, and (3) is equivalent to

    b̄_i / ā_{is} ≤ b̄_r / ā_{rs},

which is true since c) holds before the pivot. Thus (3) holds in all cases, as claimed.

c) Let θ_0 = b̄_r / ā_{rs} denote the min ratio in step 2), before the pivot in step 3b). We will show that after the pivot,

    max {b̄'_i / ā'_{is} : b̄'_i < 0} = θ_0 ≤ min {b̄'_i / ā'_{is} : ā'_{is} > 0},    (4)

where as above the primed notation denotes entries in the tableau following the pivot in step 3b). Clearly (4) will establish that c) holds after the pivot. To begin, note that in the pivot row we have ā'_{rs} = ā_{rs} / ā_{rt} < 0, and b̄'_r = b̄_r / ā_{rt} < 0, so b̄'_r / ā'_{rs} = b̄_r / ā_{rs} = θ_0. We will first show that if i ≠ r, and b̄'_i < 0, then

    b̄'_i / ā'_{is} ≤ θ_0,                                                      (5)

which together with the result for row r will establish the equality in (4). Note that ā'_{is} < 0 was established in the proof of b), above. Substituting in the formulas for b̄'_i and ā'_{is} from (1) and (2), and the definition of θ_0, (5) is equivalent to

    b̄_i - (ā_{it} / ā_{rt}) b̄_r ≥ (b̄_r / ā_{rs}) (ā_{is} - (ā_{it} / ā_{rt}) ā_{rs}),    (6)

which reduces to exactly (3). But in the proof of b) we showed that (3) always holds, and therefore (5) holds. To complete the proof, we need only show that when b̄'_i ≥ 0 and ā'_{is} > 0, then

    b̄'_i / ā'_{is} ≥ θ_0.

Substituting in the expressions from (1) and (2), and the definition of θ_0, again obtains exactly (6), which reduces to (3). Since (3) always holds, the proof is complete.  □
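As a sanity check of these properties (our addition, not part of the report), the following snippet verifies a)-c) at the three exterior bases produced in the example of Section 3, with the driving-variable column, right-hand side, and reduced cost copied from Tableaus 2-4 below.

```python
# Check properties a)-c) of Theorem 1 on the exterior bases of the Section 3
# example.  Each entry is (column of driving variable x4, b_bar, c_bar of x4),
# transcribed from Tableaus 2, 3, and 4.
import numpy as np

exterior = [
    (np.array([-0.5, 0.5, -0.25]),  np.array([-0.5, 1.0, 1.0]),  -0.5),    # Tableau 2
    (np.array([-1.0, -0.25, 0.25]), np.array([-1.5, -0.5, 2.0]), -0.25),   # Tableau 3
    (np.array([0.25, -0.25, -0.5]), np.array([8.5, -0.5, -4.0]), -0.125),  # Tableau 4
]

for a_s, b_bar, c_s in exterior:
    assert c_s < 0                                               # property a)
    assert all(a_s[i] < 0 for i in range(3) if b_bar[i] < 0)     # property b)
    neg = [b_bar[i] / a_s[i] for i in range(3) if b_bar[i] < 0]
    pos = [b_bar[i] / a_s[i] for i in range(3) if a_s[i] > 0]
    assert max(neg) <= min(pos)                                  # property c)
print("Theorem 1 a)-c) verified on the exterior bases of the example.")
```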
Corollary 2 Whenever the MBU algorithm performs a pivot in step 3a), the next basis is feasible.

Proof:
This is immediate, from the choice of r, if the choice of a driving variable x_s in step 1) results directly in the 3a) pivot, with no 3b) pivots first. When the 3a) pivot is preceded by a sequence of 3b) pivots, the result follows easily from the choice of r, and parts b) and c) of Theorem 1.  □

Corollary 3 Assume that each pivot of the MBU algorithm is nondegenerate, that is, min{θ_1, θ_2} > 0. Then in a finite number of pivots the algorithm either demonstrates that (P) is unbounded, or terminates with an optimal solution.

Proof:
From part a) of Theorem 1, on each nondegenerate pivot, whether from 3a) or 3b), the objective value strictly decreases. The result is then immediate since there are only finitely many bases.  □

Corollary 2 establishes that the MBU pivot rule is well-defined, and Corollary 3 proves finiteness under nondegeneracy. Note that the fact that the objective is nonincreasing, as used in the proof of Corollary 3, also means that although the algorithm encounters bases which are infeasible, none of them have objective values lower than the optimal value for (P). In the degenerate case, finiteness can be assured by using dual lexicography [6] to break any possible ties for the entering variable. (Tie-breaking for the leaving variable is unnecessary.) To see this, note that if dual lexicography is used, then c̄_s, the reduced cost of the driving variable, will always be lexicographically strictly increasing. Thus no basis can repeat on the pivot sequence corresponding to a given driving variable, and therefore the driving variable must enter the basis after a finite number of pivots (unless unboundedness is established). Finiteness for the entire algorithm then follows from the build-up property. The dual lexicography would be initiated with the set of nonbasic variables having nonnegative reduced costs, and augmented as more variables become dual feasible. We omit the straightforward details. It is also possible to prove finiteness using least-index rules to break ties for both the entering and leaving variables, but the argument is substantially more complex than the one for dual lexicography and is omitted here.

To close this section, we describe a dual version of the MBU pivot rule which will be particularly convenient to illustrate the geometry of the method in the next section. The dual MBU rule can easily be derived from the primal rule, and has the "build-up" feature with respect to the number of nonnegative primal variables. We assume an initial dual feasible basis is available.
The Dual MBU Pivot Rule

1) If b̄ ≥ 0, the current basis is optimal. Otherwise, pick r with b̄_r < 0. We refer to r as the driving row.

2) (Choose entering variable) If ā_{rj} ≥ 0, j = 1, ..., n, (P) is infeasible. Otherwise let s = argmin {c̄_j / |ā_{rj}| : ā_{rj} < 0}, and let θ_1 = b̄_r / ā_{rs}.

3) (Choose leaving variable) If ā_{is} ≤ 0 for every i with b̄_i ≥ 0, let θ_2 = +∞. Otherwise, let q = argmin {b̄_i / ā_{is} : b̄_i ≥ 0, ā_{is} > 0}, and let θ_2 = b̄_q / ā_{qs}.

3a) If θ_1 ≤ θ_2, pivot on ā_{rs}, and go to step 1.

3b) If θ_2 < θ_1, pivot on ā_{qs}, and go to step 2 with the same driving row r. In this case we refer to q as the blocking row.
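A mirror-image sketch of the dual rule, under the same illustrative conventions as the primal sketch in Section 2 (NumPy assumed, our own tableau layout and tie-breaking; the starting basis is assumed dual feasible):

```python
import numpy as np

def pivot(T, r, j):
    """Gauss-Jordan pivot on element T[r, j]."""
    T[r, :] /= T[r, j]
    for i in range(T.shape[0]):
        if i != r:
            T[i, :] -= T[i, j] * T[r, :]

def dual_mbu(T, basis, eps=1e-9):
    """One possible reading of the dual MBU rule; returns 'optimal' or 'infeasible'."""
    m, n = T.shape[0] - 1, T.shape[1] - 1
    while True:
        if np.all(T[:m, n] >= -eps):
            return "optimal"
        r = int(np.argmin(T[:m, n]))             # step 1: driving row (most negative b-bar)
        while True:
            cand = [j for j in range(n) if T[r, j] < -eps]
            if not cand:
                return "infeasible"              # step 2: driving row is nonnegative
            s = min(cand, key=lambda j: T[m, j] / abs(T[r, j]))
            theta1 = T[r, n] / T[r, s]           # both factors negative, so theta1 > 0
            rows = [i for i in range(m) if T[i, n] >= -eps and T[i, s] > eps]
            if not rows:
                theta2, q = np.inf, None
            else:
                q = min(rows, key=lambda i: T[i, n] / T[i, s])
                theta2 = T[q, n] / T[q, s]
            if theta1 <= theta2:                 # 3a): pivot in the driving row
                pivot(T, r, s); basis[r] = s
                break
            pivot(T, q, s); basis[q] = s         # 3b): pivot in the blocking row q
```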
3 Illustrations and Interpretations
To illustrate our new pivot rule fully, we consider its application to a simple example. The solution of this problem by the MBU algorithm demonstrates the various stages of the method. The first tableau is as follows:
   x1    x2    x3     x4     x5    x6     x7  |   b̄
    1     0     0    [1]   (-2)     4     -5  |   1
    0     1     0      0      1    -4    5/2  | 1/2
    0     0     1   -1/4      0     2   -1/2  |   1
    0     0     0     -1      1    -1   11/4  |   0
                       Tableau 1.
We begin with x4 as the driving variable. The min ratio occurs in row r = 1, and we obtain θ_1 = 1, θ_2 = 1/2. Thus x5 is a blocking variable, and the pivot is on ā_{1,5} from step 3b), rather than the usual pivot on ā_{1,4}. The pivot produces:
   x1    x2    x3      x4    x5     x6     x7  |    b̄
 -1/2     0     0    -1/2     1     -2    5/2  | -1/2
  1/2     1     0   [1/2]     0   (-2)      0  |    1
    0     0     1    -1/4     0      2   -1/2  |    1
  1/2     0     0    -1/2     0      1    1/4  |  1/2
                       Tableau 2.
Note that the basis is now neither primal nor dual feasible, but all the reduced costs are now nonnegative, except for that of the driving variable x4. The min ratio occurs in row r = 2, and we obtain θ_1 = 1, θ_2 = 1/2. Thus x6 is blocking, and we perform a 3b) pivot on ā_{2,6} to obtain:
   x1    x2    x3      x4    x5    x6      x7  |    b̄
   -1    -1     0      -1     1     0     5/2  | -3/2
 -1/4  -1/2     0    -1/4     0     1       0  | -1/2
  1/2     1     1   [1/4]     0     0  (-1/2)  |    2
  3/4   1/2     0    -1/4     0     0     1/4  |    1
                       Tableau 3.
The driving variable remains x4. The min ratio is in row r = 3, and we obtain θ_1 = 1, θ_2 = 1/2. Thus x7 is blocking, and we perform a 3b) pivot on ā_{3,7}:
   x1    x2    x3      x4    x5    x6    x7  |    b̄
  3/2     4     5   [1/4]     1     0     0  | 17/2
 -1/4  -1/2     0    -1/4     0     1     0  | -1/2
   -1    -2    -2    -1/2     0     0     1  |   -4
    1     1   1/2    -1/8     0     0     0  |    2
                       Tableau 4.
The driving variable remains x4, and the min ratio occurs in row r = 1. Since there are no negative elements in row 1, we have θ_2 = +∞, and a usual 3a) pivot on ā_{1,4} is performed:
   x1    x2    x3    x4    x5    x6    x7  |     b̄
    6    16    20     1     4     0     0  |    34
  5/4   7/2     5     0     1     1     0  |     8
    2     6     8     0     2     0     1  |    13
  7/4     3     3     0   1/2     0     0  |  25/4
                       Tableau 5.
We have obtained a basis which is both primal and dual feasible, hence optimal.

The solution of this example demonstrates several typical features of the algorithm. In general, the choice of a driving variable in step 1) leads to a sequence of 3b) pivots, culminating in an ordinary 3a) pivot. Following the 3a) pivot a feasible basis is obtained, with x_s a basic variable, and all nonnegative reduced costs are preserved. Note that in fact the number of variables with nonnegative reduced costs may actually increase on the pivot sequence, as happened in the example after the first pivot. When by serendipity a reduced cost becomes nonnegative, it is preserved as nonnegative on all subsequent pivots.

The MBU pivot rule can be viewed as an exterior point method, since in general it produces a sequence of basic solutions some of which are neither primal nor dual feasible. The interesting feature of the pivot rule is the guarantee that primal feasibility will eventually be regained. It turns out, however, that the method can also be viewed as a primal simplex method. The reason for this is that at any stage of the algorithm, by b) and c) of Theorem 1, an ordinary pivot in step 3a) would result in a primal feasible solution. Thus the sequence of bases produced by the
algorithm can be associated with a sequence of basic feasible solutions, obtained by at each stage performing a pivot on ā_{rs}. When this is done for the pivot sequence corresponding to the example solved above, the sequence of basic feasible solutions x^k is as follows:

    x^1 = (1, 1/2, 1,    0,    0,   0,  0)^T
    x^2 = (0, 1/2, 5/4,  1,    0,   0,  0)^T
    x^3 = (0, 0,   3/2,  2,  1/2,   0,  0)^T
    x^4 = (0, 0,   0,    8, 13/2, 3/2,  0)^T
    x^5 = (0, 0,   0,   34,    0,   8, 13)^T
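As a quick numerical check (ours, not from the report), the data of (P) can be read off Tableau 1, taking its cost row as the original objective (i.e., assuming the initial basic variables have zero cost); each solution listed above is then feasible, and the objective value decreases along the sequence:

```python
# Feasibility and objective check for the basic feasible solutions x^1, ..., x^5
# associated with the MBU pivot sequence of the example.
import numpy as np

A = np.array([[1, 0, 0,     1, -2,  4, -5.0],
              [0, 1, 0,     0,  1, -4,  2.5],
              [0, 0, 1, -0.25,  0,  2, -0.5]])
b = np.array([1, 0.5, 1.0])
c = np.array([0, 0, 0, -1, 1, -1, 2.75])   # assumed objective: the cost row of Tableau 1

X = [(1, 0.5, 1,    0, 0,   0,   0),
     (0, 0.5, 1.25, 1, 0,   0,   0),
     (0, 0,   1.5,  2, 0.5, 0,   0),
     (0, 0,   0,    8, 6.5, 1.5, 0),
     (0, 0,   0,   34, 0,   8,  13)]

for k, x in enumerate(X, start=1):
    x = np.array(x, dtype=float)
    assert np.allclose(A @ x, b) and np.all(x >= 0)
    print(f"x^{k}: feasible, objective c x = {c @ x}")
# Printed objectives: 0, -1, -1.5, -3, -6.25 -- strictly decreasing, and the
# final value -25/4 matches the optimal tableau above.
```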
Note that in the above sequence of basic solutions, corresponding to a single MBU pivot sequence with driving variable x4, the values of x4 are increasing. This is not coincidental: it follows from (4), in the proof of Theorem 1, that the values of x_s in the basic feasible solutions associated with a sequence of 3b) pivots are always nondecreasing. This is another interesting property of the MBU pivot rule, which will be discussed further in the next section.

To illustrate the geometry of the MBU algorithm, it is convenient to work with the inequality-constrained dual problem (D), rather than the primal problem (P). We will therefore consider an application of the dual MBU algorithm, as described at the end of the previous section. Consider the following linear program:
    min   (3/2)x1 + x2 + x3 + (3/2)x4 + x5 + x6
    s.t.  -x1 - x2 - x3 -  x4      + x6 = 1
          -x1      + x3 + 2x4 + x5      = 2
           x ≥ 0
Applying the dual MBU algorithm, initiated with an easily obtained dual feasible basis, results in the following pivot sequence. The driving row is r = 2 throughout:

   x1    x2     x3     x4    x5    x6  |   b̄
    0     1    (2)      3     1    -1  |   1
    1     0   [-1]     -2    -1     0  |  -2
    0     0    1/2    3/2   3/2     2  |   2

   x1    x2     x3     x4    x5    x6  |    b̄
    0   1/2      1  (3/2)   1/2  -1/2  |  1/2
    1   1/2      0 [-1/2]  -1/2  -1/2  | -3/2
    0  -1/4      0    3/4   5/4   9/4  |  7/4

   x1    x2     x3     x4     x5    x6  |    b̄
    0   1/3    2/3      1  (1/3)  -1/3  |  1/3
    1   2/3    1/3      0 [-1/3]  -2/3  | -4/3
    0  -1/2   -1/2      0      1   5/2  |  3/2

   x1    x2     x3     x4    x5    x6  |    b̄
    0     1      2      3     1    -1  |    1
    1     1      1      1     0  [-1]  |   -1
    0  -3/2   -5/2     -3     0   7/2  |  1/2

   x1    x2     x3     x4    x5    x6  |    b̄
   -1     0      1      2     1     0  |    2
   -1    -1     -1     -1     0     1  |    1
  7/2     2      1    1/2     0     0  |   -3
In Figure 1 we graph the dual of the above linear program. The dual constraints s_j = 0, j = 1, ..., 6 are labeled 1 to 6. The dual solutions encountered on the above pivot sequence are marked by solid circles, labeled 1 to 5, while the corresponding dual feasible solutions obtained by performing an ordinary pivot on [ā_{rs}] in the first, second, and third tableaus are indicated by open circles, labeled 2a, 3a, and 4a. (Note that the dual solution (y_1, y_2) corresponding to a given basis can be easily obtained from columns 5 and 6 of the tableau, since c̄_5 = 1 - y_2 and c̄_6 = 1 - y_1.) Note that the slack in the first dual constraint, corresponding to the driving row for the pivot sequence, is increasing in the sequence of dual feasible solutions; this is analogous to the fact that in the primal MBU algorithm the value of the driving variable increases in the associated sequence of primal feasible solutions, as described above and discussed further in the next section.
4 Relationship to Other Methods
There is already some history of monotonic build-up schemes for LP. In fact the recursive-type algorithms ([7], [13], [25]) can be interpreted as algorithms that solve
[Figure 1: A Dual MBU Pivot Sequence]
subproblems with monotonically increasing size. The Edmonds-Fukuda rule [7] is the best known among the real simplex variants, but in contrast to MBU the primal Edmonds-Fukuda rule preserves primal feasibility (see [5]), not dual feasibility, in solving the subproblems, and obtains a build-up via a recursive scheme which is critically dependent on an ordering of the variables.

The criss-cross methods [28], [24], [25], [27] are exterior 'simplex-like' methods that can be initiated with any basis whatsoever. Furthermore, in contrast to our algorithm, if primal or dual feasibility occurs, it is purely by accident. In Zionts' [28] criss-cross method feasibility is preserved once attained, but in Terlaky's finite variant it is not. These methods visit vertices which are neither primal nor dual feasible, like the MBU algorithm, but unlike the MBU algorithm (respectively, dual MBU algorithm) such vertices have no association with adjacent primal feasible (resp. dual feasible) basic solutions.

The primal MBU pivot rule is most closely related to a dual version of the shadow vertex algorithm [4], [12], which is itself a particular case of the parametric method [6], [9]. Let B be a dual feasible basis for (P), and let a be any vector with B^{-1}a > 0. Consider a parametric family of linear programs:

    (P(μ))   min {cx : Ax = a - μd, x ≥ 0},

where d = a - b. Then B is an optimal basis for P(0), and P(1) is the original problem P. Increasing μ from zero, a sequence of parametric dual simplex pivots can be performed to solve P(μ) for all μ ∈ [0, 1], or demonstrate that there is a μ̄ ∈ (0, 1) such that P(μ) is infeasible for μ ≥ μ̄. This is the (dual) shadow vertex algorithm for (P).

Next consider solving (P) by replacing the right hand side b with the vector a, as above, and augmenting the problem with an additional column A_{n+1} = d having objective coefficient -M, where M is a very large positive number. Then starting with the basis B, the MBU algorithm can be applied with x_{n+1} as the driving variable. From (4) in the proof of Theorem 1, the k'th basis has an associated value θ_0^k = b̄_r^k / ā_{rs}^k > 0, with θ_0^k ≤ θ_0^{k+1}, such that the basis is optimal for the problem P(μ) for μ ∈ [θ_0^{k-1}, θ_0^k], where θ_0^0 = 0. It is then obvious that applying the MBU pivot rule to the augmented problem is equivalent to solving (P) using the dual shadow vertex algorithm. In solving the problem using MBU, pivoting would be terminated as soon as θ_0^k ≥ 1, since at this point an optimal solution to P can be recovered. Note that the only way that such a point is not reached is when at some iteration there is no blocking variable, meaning that all coefficients in the pivot row are nonnegative and the original problem is infeasible.

From the above discussion it is clear that the dual shadow vertex algorithm is a special case of the MBU pivot rule, and similarly the shadow vertex algorithm is a special case of the dual MBU rule. This analogy suffices to settle the issue of the worst-case complexity of the MBU rule, since exponential examples for the shadow vertex method are known [10]. It is also clear that the MBU algorithm itself can be viewed as a kind of repeated application of the shadow vertex method, where the column corresponding to the driving variable plays the role of the secondary right-hand-side vector d, and other columns which are currently dual infeasible are ignored. In this interpretation x_s, playing the role of the parameter μ, is increased until further increase would degrade the objective, and at this point x_s is finally introduced into the basis. What is perhaps most remarkable is that all of the logic required to realize this interpretation is contained in the very simple MBU pivot rules.

During the process of preparing this paper, we learned of Paparizzos' ingenious Exterior Point Simplex Algorithm (EPSA) [21]; see also [22] for a specialization of EPSA to the assignment problem. Interestingly, it turns out that EPSA, like the shadow vertex algorithm, is a special case of the dual MBU pivot rule. Given a problem (P), and a dual feasible basis B, the EPSA algorithm augments the problem with an additional column A_{n+1} = a such that B^{-1}a > 0. The reduced cost c̄_{n+1} is set to zero, and by an appropriate min ratio test the additional variable x_{n+1} is pivoted into the basis, becoming the only primal infeasible variable; say in row r. At this point the EPSA algorithm uses a pivot rule which is identical to the dual MBU rule, with r as the driving row. The algorithm either terminates with optimality, when x_{n+1} leaves the basis, or demonstrates that the original problem is infeasible, when no ā_{rj} < 0 can be found in step 2) of the dual MBU procedure.
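For concreteness, the following sketch (our own, with hypothetical function names; NumPy assumed) builds the two augmented problems used in the equivalences above: the shadow-vertex construction, which replaces b by a and appends the column d = a - b with cost -M, and the EPSA construction, which appends the column a with cost y·a so that its reduced cost with respect to the given dual feasible basis is zero.

```python
import numpy as np

def augment_for_shadow_vertex(A, b, c, a, M=1e6):
    """Augmented data for the MBU / dual shadow vertex equivalence: right hand
    side a, extra column d = a - b with a very large negative cost -M; the MBU
    rule is then run with the new variable x_{n+1} as the driving variable."""
    d = a - b
    A_aug = np.hstack([A, d.reshape(-1, 1)])
    c_aug = np.append(c, -M)
    return A_aug, a.copy(), c_aug

def augment_for_epsa(A, b, c, a, y):
    """EPSA construction: extra column a with cost y a, where y = c_B B^{-1}
    are the multipliers of the given dual feasible basis, so that the new
    column's reduced cost is zero; the dual MBU rule is then applied after
    x_{n+1} is pivoted into the basis."""
    A_aug = np.hstack([A, a.reshape(-1, 1)])
    c_aug = np.append(c, float(y @ a))
    return A_aug, b.copy(), c_aug
```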
References

[1] D. Avis and V. Chvatal (1978), Notes on Bland's Pivoting Rule, Mathematical Programming Study 8, 24-34.
[2] I. Adler and N. Megiddo (1985), A Simplex Algorithm Whose Average Number of Steps is Bounded Between Two Quadratic Functions of the Smaller Dimension, Journal of the Association for Computing Machinery 32, 891-895.
[3] R.G. Bland (1977), New Finite Pivoting Rules for the Simplex Method, Mathematics of Operations Research 2, 103-107.
[4] K.H. Borgwardt (1987), The Simplex Method: A Probabilistic Analysis, Algorithms and Combinatorics Vol. 1, Springer Verlag.
[5] J. Clausen (1987), A Note on Edmonds-Fukuda Pivoting Rule for the Simplex Method, European Journal of Operational Research 29, 378-383.
[6] G.B. Dantzig (1963), Linear Programming and Extensions, Princeton University Press, Princeton, NJ.
[7] K. Fukuda (1982), Oriented Matroid Programming, Ph.D. Thesis, Waterloo University, Waterloo, Ontario, Canada.
[8] K. Fukuda and T. Terlaky (1989), A General Algorithmic Framework for Quadratic Programming and a Generalization of Edmonds-Fukuda Rule as a Finite Version of Van de Panne-Whinston Method, Mathematical Programming (to appear).
[9] S. Gass and T. Saaty (1955), The Computational Algorithm for the Parametric Objective Function, Naval Research Logistics Quarterly 2, 39-45.
[10] D. Goldfarb (1983), Worst Case Complexity of the Shadow Vertex Simplex Algorithm, Department of Industrial Engineering and Operations Research, Columbia University, New York, NY.
[11] D. Goldfarb and W.Y. Sit (1979), Worst Case Behavior of the Steepest Edge Simplex Method, Discrete Applied Mathematics 1, 277-285.
[12] M. Haimovich (1983), The Simplex Method is Very Good! - On the Expected Number of Pivot Steps and Related Properties of Random Linear Programs, Dept. of IE and OR, Columbia University, New York, NY.
[13] D. Jensen (1985), Coloring and Duality: Combinatorial Augmentation Methods, Ph.D. Thesis, School of OR and IE, Cornell University, Ithaca, NY.
[14] E. Klafszky and T. Terlaky (1989), Variants of the Hungarian Method for Solving Linear Programming Problems, Optimization 20, 79-91.
[15] V. Klee and G.J. Minty (1972), How Good is the Simplex Algorithm?, in O. Shisha, editor, Inequalities III, Academic Press, New York, NY, 158-172.
[16] C.E. Lemke (1965), Bimatrix Equilibrium Points and Mathematical Programming, Management Science 11, 681-689.
[17] I. Lustig (1987), The Equivalence of Dantzig's Self-Dual Parametric Algorithm for Linear Programs to Lemke's Algorithm for Linear Complementarity Problems Applied to Linear Programming, Technical Report SOL 87-4, Department of Operations Research, Stanford University, Stanford, CA.
[18] K.G. Murty (1976), Linear and Combinatorial Programming, Krieger Publishing Company, Malabar, Florida, USA.
[19] K.G. Murty (1980), Computational Complexity of Parametric Linear Programming, Mathematical Programming 19, 213-219.
[20] K. Paparizzos (1989), Pivoting Rules Directing the Simplex Method Through All Feasible Vertices of Klee-Minty Examples, Opsearch 26, 77-95.
[21] K. Paparizzos (1991), A Simplex Algorithm with a New Monotonic Property, Dept. of Mathematics, Democritus University of Thrace, Xanthi, Greece.
[22] K. Paparizzos (1991), An Infeasible (Exterior Point) Simplex Algorithm for Assignment Problems, Mathematical Programming 51, 45-54.
[23] C. Roos (1990), An Exponential Example for Terlaky's Pivoting Rule for the Criss-cross Simplex Method, Mathematical Programming 46, 78-94.
[24] T. Terlaky (1987), A Finite Criss-Cross Method for Oriented Matroids, Journal of Combinatorial Theory (B) 42, 319-327.
[25] T. Terlaky (1985), A Convergent Criss-Cross Method, Optimization 16, 683-690.
[26] M. Todd (1986), Polynomial Expected Behavior of a Pivoting Algorithm for Linear Complementarity and Linear Programming Problems, Mathematical Programming 35, 173-192.
[27] Z. Wang (1987), A Conformal Elimination Free Algorithm for Oriented Matroid Programming, Chinese Annals of Mathematics 8 B 1.
[28] S. Zionts (1969), The Criss-Cross Method for Solving Linear Programming Problems, Management Science 15, 426-445.
[29] S. Zhang (1989), On Anti-Cycling Pivoting Rules of the Simplex Method, Operations Research Letters 10, 189-192.