Mathematical Programming 23 (1982) 34-49
North-Holland Publishing Company
AN IMPLEMENTATION OF THE SIMPLEX METHOD FOR LINEAR PROGRAMMING PROBLEMS WITH VARIABLE UPPER BOUNDS*

Michael J. TODD
Cornell University, Ithaca, New York, U.S.A. and University of Cambridge, Cambridge, England
Received 16 June 1980
Revised manuscript received 26 February 1981

Special methods for dealing with constraints of the form x_j ≤ x_k ...

... d̄_j ≥ 0 for j ∈ J_ZS and for j ∈ J_(k) with k ∈ K_Z; and d̄_k = c̄_k + Σ_{j∈J_(k)} c̄_j ≥ 0 for k ∈ K_Z. Thus y' is optimal in (P'') (since it is optimal in the relaxed problem with the variable upper bounds deleted), implying that ŷ is optimal. Suppose now that the conditions do not hold. Then either we have d̄_j < 0 for j ∈ J_ZX with k(j) ∈ K_P or for j ∈ J_ZS, or we have d̄_k < 0 for k ∈ K_Z. But our nondegeneracy assumption assures us that all basic variables are positive. Also, if d̄_j < 0, we have ŷ_{k(j)} > 0. Thus, as in the standard linear programming case, we can improve the current solution by increasing ŷ_k or ŷ_j; hence ŷ is not optimal.
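To make the pricing just described concrete, here is a minimal dense sketch in Python of computing the reduced cost d̄_q = d_q − π^T b^q; NumPy, the dense handling of F, and all names here are illustrative assumptions (an implementation would keep F in the stable triangular factorizations discussed later).

import numpy as np

def reduced_cost(F, d_basis, b_q, d_q):
    # prices pi solve F^T pi = d_basis, with d_basis the costs of the
    # working-basis columns (indexed by J_B and K_P in the text)
    pi = np.linalg.solve(F.T, d_basis)
    return d_q - pi @ b_q      # in all cases, d_bar_q = d_q - pi^T b^q

def candidate_for_father(A, c, q, favorite_sons):
    # for q in K_Z, the candidate column and cost aggregate q with its
    # favorite sons J_(q): b^q = a^q + sum a^j, d_q = c_q + sum c_j
    b_q = A[:, q] + A[:, list(favorite_sons)].sum(axis=1)
    d_q = c[q] + sum(c[j] for j in favorite_sons)
    return b_q, d_q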
The last of the three conditions is the one that allows us to determine optimality without concerning ourselves with the degeneracy caused by nonbasic fathers. Suppose now that ŷ is not optimal, so that one of the optimality conditions, say that corresponding to the index q, is violated. Let b^q, d_q and d̄_q denote the corresponding column to enter the basis, cost and reduced cost. Thus, if q ∈ J_ZX with k(q) ∈ K_P, we have b^q = a^q, d_q = c_q and d̄_q = c̄_q < 0; if q ∈ J_ZS, we have b^q = −a^q, d_q = −c_q and d̄_q = −c̄_q < 0; and if q ∈ K_Z, then b^q = a^q + Σ_{j∈J_(q)} a^j, d_q = c_q + Σ_{j∈J_(q)} c_j, and d̄_q = c̄_q + Σ_{j∈J_(q)} c̄_j. Note that in all cases d̄_q = d_q − π^T b^q.

If q ∈ J_ZX with k(q) ∈ K_P, we wish to increase y_q and thus x_q from zero; if q ∈ J_ZS we wish to increase y_q from zero (and thus move x_q away from its father x_{k(q)}); if q ∈ K_Z we wish to increase y_q and thus x_q from zero along with its 'favorite sons' y_j and x_j for j ∈ J_(q). To determine how far we can increase y_q, we first solve Fz = b^q for z; the components of z are again indexed by J_B ∪ K_P. The corresponding search direction in y-space is w, defined by

  w_q = 1;
  w_j = 1      if j ∈ J_(q), q ∈ K_Z,
      = −z_j   if j ∈ J_B,
      = 0      if j ∈ J − q otherwise;
  w_k = −z_k   if k ∈ K_P,
      = 0      if k ∈ K_Z − q.
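A short sketch, under the same illustrative assumptions as above, of how this direction could be assembled; basis_order, the list of working-basis indices in the order matching the components of z, is hypothetical bookkeeping.

import numpy as np

def search_direction(F, b_q, basis_order, q, favorite_sons, n):
    z = np.linalg.solve(F, b_q)   # solve F z = b^q
    w = np.zeros(n)
    w[q] = 1.0
    for j in favorite_sons:       # nonempty only when q is in K_Z
        w[j] = 1.0
    for pos, j in enumerate(basis_order):
        w[j] -= z[pos]            # w = -z on the indices J_B and K_P
    return w                      # all other components remain 0

By construction Bw = b^q − Fz = 0, which is the fact the proof of Theorem 2 uses.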
Theorem 2. For λ > 0,

  d^T(ŷ + λw) = d^T ŷ + λd̄_q < d^T ŷ.

Furthermore, ŷ + λw is feasible as long as

  ŷ_j + λw_j ≥ 0  for j ∈ J_B;
  ŷ_k + λw_k ≥ 0  for k ∈ K_P;
  (ŷ_k + λw_k) − (ŷ_p + λw_p) ≥ 0  for k ∈ K_P and p ∈ J_B(k).
Proof. We have d^T w = d_q − d_B^T z = d_q − d_B^T F⁻¹ b^q = d_q − π^T b^q = d̄_q, establishing the first part of the theorem. For the second part, note first that Bw = b^q − Fz = 0, so that B(ŷ + λw) = Bŷ = r for all λ ≥ 0. The stated inequalities for λ are merely translations of the variable upper bounds and nonnegativities on ȳ = ŷ + λw. For example, if k ∈ K_P and p ∈ J_B(k), ȳ_p = ŷ_p − λz_p ≤ ŷ_k − λz_k = ȳ_k yields, if z_k − z_p > 0, the bound λ ≤ (ŷ_k − ŷ_p)/(z_k − z_p); if q ∈ J_Z and k = k(q), ȳ_q = 0 + λ ≤ ŷ_k − λz_k = ȳ_k yields, if 1 + z_k > 0, the bound λ ≤ ŷ_k/(1 + z_k). ...

... > 0, in which case we wish to move x̂_q together with all x̂_j, j ∈ J_S(q), away from its upper bound x̂_{k(q)}. We will concentrate on the second case; the first is simpler and can be dealt with by similar methods. We now solve the linear system

  Fz = b^q ≡ a^q + Σ_{j∈J_S(q)} a^j;
the vector z determines the search direction. We compute the minimum ratio λ from among

  ŷ_p/z_p  for p ∈ N_PX with z_p > 0;

  (ŷ_k − ŷ_p)/(z_k − z_p)  for p, k ∈ N_PX, k(p) = k or k(p) ∈ J_S(k) − (q ∪ J_S(q)), with z_k − z_p > 0;

  (ŷ_k − ŷ_p)/(1 + z_k − z_p)  for p, k ∈ N_PX, k(p) = q or k(p) ∈ J_S(q), q ∈ J_S(k), with 1 + z_k − z_p > 0;

  ŷ_k/(1 + z_k)  for k ∈ N_PX, q ∈ J_S(k), if 1 + z_k > 0.
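The four families above are exactly the nonnegativities and variable upper bounds on ŷ + λw written out in terms of z. For prototyping, the same test can be run directly on a full direction vector w; the function below is a schematic stand-in (the names and the dict layout for the VUB pairs are assumptions, and a real code would work with z and the index sets to avoid forming w densely).

import math

def max_step(y, w, father):
    # father maps each variable p subject to a VUB to its father k(p)
    lam = math.inf
    for p in range(len(y)):
        if w[p] < 0:                        # keep y_p + lam*w_p >= 0
            lam = min(lam, -y[p] / w[p])
    for p, k in father.items():
        dw = w[p] - w[k]
        if dw > 0:                          # keep y_p + lam*w_p <= y_k + lam*w_k
            lam = min(lam, (y[k] - y[p]) / dw)
    return lam

Recording which constraint attains the minimum determines which of the four cases below governs the basis update.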
Case 1: λ = ŷ_p/z_p. Then b^q is brought into the basis to replace b^p. Suppose q ∈ J_S(k), k ∈ N_FX. Since q must be removed from J_S (and join N_PX) we must then subtract b^q from b^k. This post-processing should be done after the column exchange.

Case 2: λ = (ŷ_k − ŷ_p)/(z_k − z_p). First we pre-process by adding b^p to b^k (p joins J_S), then we proceed as in Case 1.

Case 3: The two columns b^k and b^p must be replaced by b^q + b^p and b^k − b^q. This can be accomplished by first adding b^p to b^k to get b̃^k (pre-processing); then replacing b^p with b^p + b^q; and finally subtracting b^p + b^q from b̃^k to get b̂^k = b^k − b^q.

Case 4: The column b̂^k = b^k − b^q replaces b^k.
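The column arithmetic of Case 3 can be checked on a dense copy of the working basis; the positions kp, pp of the columns b^k and b^p and the dense update are illustrative only (an implementation would apply the same three steps to a factorization of F).

import numpy as np

def case3_update(F, kp, pp, b_q):
    F = F.copy()
    F[:, kp] += F[:, pp]   # pre-process: b^k becomes b^k + b^p
    F[:, pp] += b_q        # exchange: b^p becomes b^p + b^q
    F[:, kp] -= F[:, pp]   # post-process: (b^k + b^p) - (b^p + b^q) = b^k - b^q
    return F

The net effect replaces b^p and b^k by b^p + b^q and b^k − b^q, as the case requires.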
It will be observed that the first three cases above give rise to 'post-processing', the subtraction of a column that has just entered the basis from another column of the basis. This post-processing can be performed trivially if F⁻¹ is maintained explicitly or in product form, and for row factorizations the method of Section 3.2 can be modified to allow column addition and post-processing to be carried out with virtually no increase in cost over simple column addition. However, if a column factorization of F is used, post-processing requires an additional column exchange. In the dense case, using the factorization F = LUP, this requires an extra ½m² operations to obtain the new U and adds ½m eliminations to the L⁻¹ file on average. However, note that no extra linear systems need to be solved, either for pre- or post-processing.

In order to carry out the various steps of an iteration, it would be useful to have a pointer for each j ∈ N that either indicates that j lies in N_PX or N_ZX or gives the index k ∈ N_FX with j ∈ J_S(k). In addition, pre-order and post-order threads facilitate the computation of d̄_k and the formation of b^q. Overall, the cost of an iteration is likely to be higher than for a comparably sized linear programming problem with only nonnegativities, but not by much. Of course, this statement requires that U be maintained in core if a column factorization is used; also, partial pricing is more complicated in the new method, since several reduced costs must be computed for each optimality condition to be checked.
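A sketch of the bookkeeping suggested in this paragraph; the class and field names are assumptions, not the paper's.

class VubStatus:
    def __init__(self):
        self.status = {}   # j -> 'PX', 'ZX', or ('SON', k) meaning j in J_S(k), k in N_FX
        self.sons = {}     # k -> list of indices J_S(k)

    def form_b_q(self, A, q):
        # b^q = a^q plus the columns of the sons of q, gathered in one
        # pass; A is a dense NumPy array of the constraint columns, and a
        # thread over the son lists would avoid materializing them
        b = A[:, q].copy()
        for j in self.sons.get(q, ()):
            b += A[:, j]
        return b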
5. Conclusion
We have described a method for solving linear programming problems with variable upper bounds that circumvents problems of degeneracy in these special
constraints. The method can be implemented using numerically stable triangular factorizations. The additional work per iteration compared to a problem without such bounds is small.
Acknowledgement

I am very grateful to Professor Robert Fourer, whose suggestions improved not only the presentation but also the proposed method; the introduction of the index set J_PS, with its computational advantages, is due to his comments.