Applied Mathematics and Computation 188 (2007) 1549–1561 www.elsevier.com/locate/amc
A computationally stable solution algorithm for linear programs

H. Arsham
University of Baltimore, 1420 N. Charles Street, Baltimore, MD 21201, USA
Abstract

An active area of research in linear programming is the construction of an initial simplex tableau that is close to the optimal solution, together with efficient pivoting selection strategies. In this paper we present a new approach to the problems of initialization and pivoting: the algorithm is free of the artificial variables and artificial constraints used by the big-M methods. Because the simplex method is supplied with no large numbers, the result is computationally stable, and the better initial basis reduces the number of pivoting iterations. A three-phase simplex-type solution algorithm is developed for solving general linear programs. In Phase 1, the greater-than constraints are relaxed and the problem is solved starting at the origin. If the relaxed problem is unbounded, the objective function is modified so as to obtain a bounded feasible solution. In Phase 2 the greater-than constraints are introduced in a dual simplex framework. Finally, in case the original objective function was modified, Phase 3 restores optimality for the original objective function. The algorithm is numerically stable and works with the original decision variables. Our initial experimental study indicates that the proposed algorithm is more efficient than both the primal and the dual simplex in terms of the number of iterations.
© 2006 Elsevier Inc. All rights reserved.

Keywords: Computational linear programming; Artificial-free; Pivot algorithm; Advance basis; Big-M-free
1. Introduction

An active area of linear programming research is the construction of a good initial simplex tableau and the improvement of pivoting selection strategies; see e.g. Vieira and Lins [1, and the references therein]. In this paper we present a new approach to the problems of initialization and pivoting: the algorithm is free of the artificial variables and artificial constraints used by the big-M methods. Because the simplex method is supplied with no large numbers, the result is computationally stable, and the better initial basis reduces the number of pivoting iterations.
E-mail address: [email protected]
0096-3003/$ - see front matter © 2006 Elsevier Inc. All rights reserved. doi:10.1016/j.amc.2006.11.031
1.1. Groups of simplex methods

The primal and the dual simplex methods follow the same general procedure. In the first step a full basis is established. In the succeeding steps we move from one basic solution to another while trying to reduce the degree of infeasibility of the primal and the dual problem, respectively.

The primal simplex method starts with the initial basic solution X = 0; the first phase tries to find a feasible solution (an extreme point). If this search is unsuccessful, the procedure terminates with "no feasible solution"; otherwise the second phase is initialized with a basic feasible solution. One version of the primal simplex, known as the two-phase method, introduces an artificial objective function which is the sum of the artificial variables, while the other version adds a penalty term which is the sum of the artificial variables with very large coefficients. The latter approach is known as the big-M method.

The second group of simplex methods is that of the dual simplex algorithm. When some coefficients in the objective function are positive (i.e., not dual feasible) we must add to the tableau an artificial constraint of the type ΣXj ≤ M, with M ≫ 0, where the summation is taken over all decision variables with positive coefficients in the objective function.

Both groups require special tools to start the algorithm when the respective starting conditions are violated. The primal simplex method starts with the feasibility conditions satisfied and strives for the optimality condition. When the primal feasibility conditions are not satisfied, one must introduce artificial variables in some constraints and penalty (i.e., big-M) terms in the objective function; see e.g. [2]. For the dual simplex the opposite is true: the algorithm starts with the optimality conditions satisfied and strives for feasibility. When the dual feasibility conditions are not satisfied, one must introduce an artificial constraint with a large positive number for its RHS.
Both groups include all the constraints in their initial tableau. All these big-M methods can cause serious errors. Too small a value of M can make an infeasible problem appear feasible; too large a value can create floating-point overflow which destroys numerical accuracy. Paparrizos [4] introduced an algorithm which avoids the use of artificial variables through a tedious evaluation of a series of extra objective functions besides the original objective function; the algorithm checks both the optimality and feasibility conditions after completion of iterations. He indicates, "We have not been able to construct such rules" for entering and exiting variables to and from the basic variable set. The pivot-and-probe type algorithm suggested in [5] is much more complicated. Spivey and Thrall [6] also made an attempt to come up with an algorithm; however, it is not a general-purpose method. Arsham [7–10] introduced artificial-free algorithms for general linear programs and the classical network problems.

This paper develops a simple alternative approach to solving general LP problems without using any artificial variables or artificial constraints. Our goal is a unification of the simplex and the dual simplex which is efficient and eliminates the complexities of artificial variables, the artificial constraint, and the artificial objective function. In most cases the proposed approach has a much smaller number of rows in its final tableau than either the simplex or the dual simplex. The current algorithm uses a concept of "active constraints" through relaxation-and-restoration of some constraints. At the start, all ≥ constraints are relaxed (i.e., temporarily ignored). After solving the sub-problem using only the ≤ constraints, we bring in the relaxed constraints. Since the solution may not require all constraints to become active, this process frequently uses smaller tableaux with fewer iterations.
Since the complete final tableau may be required for post-optimality analysis, it can be obtained through an additional efficient procedure. Only in one special case does our approach replace and restore the objective function, and then only once for the entire algorithm. The method does not introduce new rules; in each phase it uses the customary rules of either the simplex or the dual simplex.

We present the new solution algorithm in Section 2 and give a proof of finiteness. Section 3 deals with the computational aspects of the algorithm, with examples to illustrate all of its aspects. Section 4 presents concluding remarks and future research directions.

2. The proposed solution algorithm

The proposed solution algorithm is a three-phase simplex-type method which avoids the use of the large numbers known as big-M's. It combines the primal and dual simplex methods; however, it uses parts of the
objective function as an auxiliary objective function whenever necessary to locate a corner point. In Phase 1 some constraints are relaxed to make the origin feasible (if possible) and the relaxed problem is solved. Using the current solution as the starting point, Phase 2 applies the dual simplex to solve the original problem, while Phase 3 applies the primal simplex. Whenever the feasible region of the relaxed problem in Phase 1 is unbounded, the origin serves as the starting point for Phase 2 to generate a proper corner point by using a "temporary" objective function; Phase 3 then restores the original objective function and applies the primal simplex rule to a conclusion. Our initial experimental study with some small-size problems indicates that the proposed algorithm is more efficient than both the primal and the dual simplex in terms of the number of iterations.

We start with an LP problem in the following standard form:

Max Z = CX
Subject to: AX ≤ a, BX ≥ b, all X ≥ 0, where b ≥ 0, a > 0.

Note that the main constraints have been separated into two subgroups. A and B are the respective matrices of constraint coefficients, and a and b are the respective RHS vectors (all with appropriate dimensions). Without loss of generality we assume all the RHS elements are non-negative. We do not deal with trivial cases such as A = B = 0 (no constraints) or a = b = 0 (all boundaries pass through the origin). The customary notation is used: Z for the objective, Cj for the cost coefficients, and X for the decision vector. Here, the word constraint means any constraint other than the non-negativity conditions. In this standard form, whenever the RHS is zero the constraint should be considered as ≥, since b is allowed to be 0 but a is not. We depart from the usual convention by separating the constraints into two groups according to ≤ and ≥. To arrive at this form from any general problem, some of the following preliminaries may be necessary.

2.1. Preliminaries
• Convert unrestricted variables (if any) to non-negative variables.
• Make all RHS elements non-negative.
• Convert equality constraints (if any) into a pair of ≥ and ≤ constraints.
• Remove ≤ constraints for which all coefficients are non-positive. This eliminates constraints which are redundant with the first-quadrant conditions.
• The existence of any ≥ constraint for which all coefficients are non-positive and the RHS is positive indicates that the problem is infeasible.
• Convert all ≤ inequality constraints with RHS = 0 into ≥ constraints (by multiplying through by -1). This may avoid degeneracy at the origin.
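The preliminary conversions above can be sketched in code. This is a minimal illustration under our own conventions, not the paper's: the triple-based constraint representation and the function name are assumptions, and the redundancy and infeasibility screens of the last three bullets are omitted.

```python
FLIP = {'<=': '>=', '>=': '<=', '=': '='}

def to_standard_form(constraints):
    """constraints: list of (coeffs, rel, rhs) with rel in {'<=', '>=', '='}.
    Returns the rows rewritten so that every row is '<=' or '>=' with a
    non-negative RHS, equalities split into a <=/>= pair, and any <= row
    with zero RHS turned into a >= row (to avoid degeneracy at the origin)."""
    out = []
    for coeffs, rel, rhs in constraints:
        # Make the RHS non-negative by multiplying the row through by -1.
        if rhs < 0:
            coeffs, rel, rhs = [-c for c in coeffs], FLIP[rel], -rhs
        if rel == '=':
            # Split an equality into a pair and re-normalize each half.
            out += to_standard_form([(coeffs[:], '<=', rhs), (coeffs[:], '>=', rhs)])
        elif rel == '<=' and rhs == 0:
            out.append(([-c for c in coeffs], '>=', 0))
        else:
            out.append((coeffs, rel, rhs))
    return out
```

For instance, an equality with zero RHS splits into two ≥ rows, since its ≤ half is flipped by the last preliminary.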
The following provides an overview of the algorithm's strategy for the most common and interesting case, a mixture of ≥ and ≤ constraints. This establishes a framework which we then extend to complete generality. We proceed as follows:

Phase 1: Relax the ≥ constraints and solve the reduced problem with the usual simplex. In most cases we obtain an optimal simplex tableau, i.e., a tableau which satisfies both the optimality and feasibility conditions. (The discussion of several special cases is deferred for the moment.)

Phase 2: If the solution satisfies all the relaxed constraints, we are done. Otherwise we bring the most "violated" constraint into the tableau and use the dual simplex to restore feasibility. This process is repeated until all constraints are satisfied. This step involves several special cases, discussed after the primary case.

Phase 3: Restore the original objective function (if needed) as follows. Replace the last row of the current tableau by its original values and catch up. Perform the usual simplex. If this is not successful, the problem is unbounded; otherwise the current tableau contains the optimal solution. Extract the optimal solution and the multiple solutions (if any).
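The relax-and-restore strategy of Phases 1 and 2 can be sketched at the level of constraint bookkeeping. This is an illustration only, not the paper's tableau mechanics: it uses scipy.optimize.linprog as a stand-in sub-solver in place of the primal/dual simplex pivots, and the function name and sample data are our own.

```python
import numpy as np
from scipy.optimize import linprog

def relax_and_restore(c, A_le, a_rhs, B_ge, b_rhs):
    """Maximize c @ x s.t. A_le @ x <= a_rhs, B_ge @ x >= b_rhs, x >= 0.
    The >= rows start out relaxed and are brought in one at a time,
    most-violated first (the pivot-level details are abstracted away)."""
    B_ge, b_rhs = np.asarray(B_ge, float), np.asarray(b_rhs, float)
    active = []   # indices of >= constraints brought in so far
    while True:
        # Solve with the <= rows plus the active (negated) >= rows.
        if active:
            A = np.vstack([A_le] + [-B_ge[i:i + 1] for i in active])
            b = np.concatenate([a_rhs, [-b_rhs[i] for i in active]])
        else:
            A, b = np.asarray(A_le, float), np.asarray(a_rhs, float)
        res = linprog(-np.asarray(c, float), A_ub=A, b_ub=b,
                      bounds=(0, None), method="highs")
        if not res.success:
            return res   # sub-problem infeasible or unbounded
        # Most violated still-relaxed >= constraint, if any (cf. Phase 2).
        viol = [(b_rhs[i] - B_ge[i] @ res.x, i)
                for i in range(len(b_rhs)) if i not in active]
        worst = max(viol, default=(0.0, None))
        if worst[0] <= 1e-9:
            return res   # all relaxed constraints already satisfied
        active.append(worst[1])
```

With the illustrative data c = (-1, 2), x2 ≤ 3, and the relaxed rows x1 + x2 ≥ 2 and x1 + x2 ≥ 1, the relaxed optimum (0, 3) already satisfies both ≥ rows, so no constraint is ever brought in.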
2.2. Special cases

Three special cases may arise in Phase 1 which require special treatment. If all constraints are ≥ and the objective function coefficients satisfy the optimality condition, the origin satisfies the optimality condition but is infeasible; relax all constraints and start at the origin. Some cases may not prove optimal (the relaxed problem has an unbounded feasible region); in Phase 1 the current objective function is then changed in a prescribed manner. If all constraints are ≥, the origin is not feasible, and if there is any positive Cj we have to change it to 0. This requires restoration of the original objective function, which is done in Phase 3.

Notice that, unlike the simplex and dual simplex, we focus on the most active constraints rather than carrying all constraints in the initial and all subsequent tableaux. We bring the ≥ constraints one by one into our tableaux; this may also reduce the computational effort. An overview of the new solution algorithm is outlined in a flowchart (figure not reproduced here).
2.3. Detailed process

2.3.1. Phase 1
Recognize three cases as follows:

Step 1.1. Set FLAG = 0; this indicates that the original objective function is still in use. Check the existence of ≤ constraints and the signs of the Cj's. Distinguish three possibilities:
(a) There is at least one ≤ constraint. Go to Step 1.2.
(b) There are no ≤ constraints, and no Cj is positive. Go to Step 2.5.
(c) There are no ≤ constraints, and at least one Cj is positive (i.e., A = 0 and Cj > 0 for at least one j). Go to Step 1.4.

Step 1.2. From the standard problem, temporarily relax all ≥ constraints with positive RHS. Solve the sub-problem using the usual simplex. If successful, go to Phase 2; otherwise, go to Step 1.3.

Step 1.3. Are there any ≥ constraints? If yes, go to Step 1.4; otherwise go to Step 9.1.

Step 1.4. Change all Cj > 0 elements to 0 and set FLAG = 1. The origin is the starting point. Go to Phase 2.

Step 9.1. The original problem is unbounded.

2.3.2. Phase 2
Bring in the relaxed constraints (if needed) as follows:

Step 2.1. If there are any remaining relaxed ≥ constraint(s), go to Step 2.2; otherwise, go to Phase 3.

Step 2.2. Does the current optimal solution satisfy all the still-relaxed constraints? If yes, go to Phase 3; otherwise, go to Step 2.3.

Step 2.3. Find the most violated relaxed constraint: substitute the current solution X into each constraint and select the constraint with the largest absolute difference between the right and left sides (ties broken arbitrarily). Continue with Step 2.4.

Step 2.4. Bring the chosen constraint into the current tableau.
(a) Convert the chosen constraint into a ≤ constraint and add a slack variable.
(b) Use the Gauss operation to express the new constraint in terms of the current non-basic variables (i.e., multiply the row of each basic variable by the coefficient of that variable in the new constraint, sum these product rows, and subtract the computed row from the new row). Go to Step 2.5.
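Step 2.3 amounts to a simple scan over the still-relaxed rows. A minimal sketch follows; the function name and data layout are our own, and the usage numbers below are illustrative.

```python
def most_violated(x, rows):
    """rows: list of (coeffs, rhs) for relaxed constraints sum(c*x) >= rhs.
    Returns (index, violation) of the worst row, or (None, 0.0) if all hold."""
    worst_i, worst_v = None, 0.0
    for i, (coeffs, rhs) in enumerate(rows):
        v = rhs - sum(c * xj for c, xj in zip(coeffs, x))  # positive => violated
        if v > worst_v:
            worst_i, worst_v = i, v
    return worst_i, worst_v
```

For example, at the point (1, 0, 0) with relaxed rows X1 + X2 - X3 ≥ 1 (satisfied with equality) and X2 ≥ 2 (violated by 2), the second row is selected.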
Step 2.5. Iterate the dual simplex rules to a conclusion. The dual simplex rule: use the most negative RHS to select the pivot row (if there is none, Step 2.5 has concluded successfully). The pivot column must have a negative coefficient in the pivot row; among such columns, choose the one with the smallest R/R (i.e., the column's Cj divided by the pivot-row element, computed wherever the pivot-row element is negative). If there is a tie, choose arbitrarily. (If there is no admissible column, the problem is infeasible, i.e., Step 9.2.) Generate the next tableau by using the Gauss pivot operation and repeat the above. If successful (the RHS is non-negative), go to Step 2.1. Otherwise (no negative element in the outgoing row), the problem proves infeasible; go to Step 9.2.

Step 9.2. The original problem is infeasible.

2.3.3. Phase 3
Restore the original objective function (if needed) as follows:

Step 3.1. If FLAG = 0, go to Step 4.1; otherwise go to Step 3.2.

Step 3.2. Replace the Cj row in the current tableau by its original values and catch up. Go to Step 3.3.
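The pivot selection of Step 2.5 can be sketched as follows. This is a minimal illustration with our own names; the tableau is assumed to be a dense array whose last row is the Cj row and whose last column is the RHS.

```python
import numpy as np

def dual_simplex_pivot(T):
    """T: tableau; rows are the basic rows followed by the Cj row,
    last column is the RHS. Returns (row, col) of the dual simplex pivot,
    None if the RHS is already non-negative, and raises on infeasibility."""
    rhs = T[:-1, -1]
    r = int(np.argmin(rhs))              # most negative RHS picks the row
    if rhs[r] >= 0:
        return None                      # feasible: Step 2.5 concluded
    neg = np.flatnonzero(T[r, :-1] < 0)  # admissible pivot columns
    if neg.size == 0:
        raise ValueError("problem is infeasible")   # Step 9.2
    ratios = T[-1, neg] / T[r, neg]      # Cj / pivot-row element
    return r, int(neg[np.argmin(ratios)])
```

On the initial tableau of Example 3 below (rows S1, S2, then Cj, as we read it), the rule selects the S1 row and the X1 column, matching the pivot reported there.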
Step 3.3. Perform the usual simplex pivoting. If this is not successful, the problem is unbounded; go to Step 9.3. Otherwise, go to Step 4.1.

Step 9.3. The original problem is unbounded.

Step 9.4. FINISHED. The current tableau contains the optimal solution. Extract the optimal solution and the multiple solutions, if any (i.e., whenever the number of Cj = 0 in the final tableau exceeds the number of basic variables).

In order to perform the usual post-optimality analysis we can augment the optimal tableau, whenever there are some unused constraints, to obtain the usual final tableau. This can be done easily and efficiently by applying operations (a) and (b) of Step 2.4 in turn to each of the non-active constraints. Notice that unboundedness is detected when the selected pivot column has no positive element; infeasibility is detected whenever the selected pivot row has no negative element.

Proof. The general LP problem is deformed into a sequence of sub-problems, each of which is handled by either the simplex or the dual simplex as appropriate. The proof that these algorithms terminate successfully is well known. The details of the deformation and reformation of the problem are given in sufficient detail that the proof that the algorithm works and terminates successfully for all cases becomes obvious. □

Notice that the proposed algorithm starts with a basic solution to the sub-problem and then reduces the dual and primal infeasibility in Phases 2 and 3, respectively. The algorithm takes advantage of the optimal solution available from the previous iteration (i.e., re-optimization rather than solving each problem from scratch). The initial active set includes all constraints with a > 0. As an alternative to Phase 1, one may use the simple bounds X ≥ 0 as the active constraints.

3. Applications and illustrative numerical examples

This section deals with the computational aspects of the algorithm, with numerical examples illustrating all of its aspects.
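The Gauss operation of Step 2.4(b), which is also used above to augment the final tableau, can be sketched as follows; the function name and the usage numbers are our own, not taken from the paper's examples.

```python
import numpy as np

def absorb_constraint(rows, basis_cols, new_row):
    """Express new_row (coefficients plus RHS) in terms of the current
    non-basic variables: for each basic variable, subtract its tableau row
    scaled by that variable's coefficient in the new row. Because each
    basic column is a unit column, one pass zeroes out every basic entry."""
    new_row = np.asarray(new_row, float).copy()
    for row, col in zip(rows, basis_cols):
        new_row -= new_row[col] * np.asarray(row, float)
    return new_row
```

With the hypothetical basic row (1, 1, -1, 1 | 1) for X1 and the hypothetical incoming row (-1, -1, 0, 0 | -3), the X1 coefficient -1 is eliminated, giving (0, 0, -1, 1 | -2).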
Example 1. Consider the following problem:

Max -X1 - 4X2 - 2X3
Subject to: X1 - X2 + X3 = 0, X2 + 4X3 = 1, and X1, X2, X3 ≥ 0.

Converting the equality constraints to pairs of inequality constraints, we have:

Max -X1 - 4X2 - 2X3
Subject to: X1 - X2 + X3 ≤ 0, X2 + 4X3 ≤ 1, X1 - X2 + X3 ≥ 0, X2 + 4X3 ≥ 1, and X1, X2, X3 ≥ 0.

Relaxing all ≥ constraints, we have:

Max -X1 - 4X2 - 2X3
Subject to: X1 - X2 + X3 ≤ 0, X2 + 4X3 ≤ 1, and X1, X2, X3 ≥ 0.
Phase 1: The initial tableau with the two slack variables as its BV is:
S1 S2 Cj
X1
X2
X3
S1
S2
RHS
1 0 1
1 1 4
1 4 2
1 0 0
0 1 0
0 1
The origin is the solution to the relaxed problem.

Phase 2: Bringing in the violated constraint, we have:

BVS    X1    X2    X3    S1    S2    S3    RHS
S1      1    -1     1     1     0     0     0
S2      0     1     4     0     1     0     1
S3      0    -1    -4     0     0     1    -1
Cj     -1    -4    -2     0     0     0
R/R     –     –    1/2    –     –
The variable to go out is S3 and the variable to come in is X3. After pivoting we have:

BVS    X1    X2    X3    S1    S2    S3    RHS
S1      1   -3/4    0     1     0    1/4   -1/4
S2      0     0     0     0     1     1     0
X3      0    1/4    1     0     0   -1/4    1/4
Cj     -1   -9/2    0     0     0   -1/2
R/R     –     6     –     –     –
The variable to go out is S1 and the variable to come in is X2. After pivoting we have:

BVS    X1    X2    X3    S1    S2    S3    RHS
X2    -4/3    1     0   -4/3    0   -1/3    1/3
S2      0     0     0     0     1     1     0
X3     1/3    0     1    1/3    0    1/3    1/3
Cj     -7     0     0    -6     0    -2
This is the end of Phase 2. The solution of this tableau is X1 = 0, X2 = 1/3, X3 = 1/3, which satisfies all the constraints; therefore it must be the optimal solution to the problem. Notice that, since all constraints are in equality form, the slack/surplus variables are all equal to zero, as expected. Plugging the optimal decision variables into the original objective function, we get the optimal value = -2.

Example 2. Now consider the following problem:

Maximize 3X1 + X2 - 4X3
Subject to: X1 + X2 - X3 = 1, X2 ≥ 2, X1 ≥ 0.
Converting the equality constraint to inequality constraints, we have:

Maximize 3X1 + X2 - 4X3
Subject to: X1 + X2 - X3 ≤ 1, X1 + X2 - X3 ≥ 1, X2 ≥ 2, X1 ≥ 0.

Phase 1: Relaxing all ≥ constraints, we have:

Maximize 3X1 + X2 - 4X3
Subject to: X1 + X2 - X3 ≤ 1.

The initial tableau, with the one slack variable as its BVS, is:

BVS    X1    X2    X3    S1    RHS
S1      1     1    -1     1     1
Cj      3     1    -4     0
The variable to come in is X1; after pivoting we have:

BVS    X1    X2    X3    S1    RHS
X1      1     1    -1     1     1
Cj      0    -2    -1    -3
The optimal solution is X1 = 1, X2 = 0, X3 = 0. This solution does not satisfy the constraint X2 ≥ 2.

Phase 2: Bringing in the violated constraint, we have:
BVS    X1    X2    X3    S1    S2    RHS
X1      1     1    -1     1     0     1
S2      0    -1     0     0     1    -2
Cj      0    -2    -1    -3     0
R/R     –     2     –     –
The variable to go out is S2, and the variable to come in is X2. After pivoting, we have:

BVS    X1    X2    X3    S1    S2    RHS
X1      1     0    -1     1     1    -1
X2      0     1     0     0    -1     2
Cj      0     0    -1    -3    -2
R/R     –     –     1     –
The variable to go out is X1, and the variable to come in is X3. After pivoting, we have:

BVS    X1    X2    X3    S1    S2    RHS
X3     -1     0     1    -1    -1     1
X2      0     1     0     0    -1     2
Cj     -1     0     0    -4    -3
The optimal solution for the relaxed problem is X1 = 0, X2 = 2, X3 = 1. This solution satisfies all the constraints; therefore, it must be the optimal solution to the original problem.

Example 3. Consider the following problem:

Max -40X1 - 50X2
Subject to: 2X1 + X2 ≥ 5, X1 + 2X2 ≥ 3, and X1, X2 ≥ 0.

Phase 1: There are no ≤ constraints, and no Cj is positive; go to Step 2.5. To perform the dual simplex, we convert the problem into the dual standard form:

Max -40X1 - 50X2
Subject to: -2X1 - X2 + S1 = -5, -X1 - 2X2 + S2 = -3, and X1, X2 ≥ 0.

The initial tableau, with the two slack variables as its BVS, is:
BVS    X1    X2    S1    S2    RHS
S1     -2    -1     1     0    -5
S2     -1    -2     0     1    -3
Cj    -40   -50     0     0
S1 goes out and X1 comes in; after pivoting we have:

BVS    X1    X2    S1    S2    RHS
X1      1    1/2  -1/2    0    5/2
S2      0   -3/2  -1/2    1   -1/2
Cj      0   -30   -20     0
Now S2 goes out and X2 comes in; after pivoting we have:

BVS    X1    X2    S1    S2    RHS
X1      1     0   -2/3   1/3   7/3
X2      0     1    1/3  -2/3   1/3
Cj      0     0   -10   -20
The optimal solution for this problem is X1 = 7/3, X2 = 1/3.
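As a sanity check, Example 3 can be verified with an off-the-shelf solver. A minimal sketch using scipy.optimize.linprog follows; maximization is expressed by negating the objective, and the ≥ rows are negated into ≤ form, since linprog minimizes over A_ub @ x <= b_ub.

```python
from scipy.optimize import linprog

# Example 3: Max -40*X1 - 50*X2  s.t.  2*X1 + X2 >= 5,  X1 + 2*X2 >= 3,  X >= 0.
res = linprog(c=[40, 50],                 # minimize 40*X1 + 50*X2
              A_ub=[[-2, -1], [-1, -2]],  # -2*X1 - X2 <= -5, -X1 - 2*X2 <= -3
              b_ub=[-5, -3],
              bounds=(0, None),
              method="highs")
print(res.x, -res.fun)   # optimum (7/3, 1/3), original objective value -110
```

This agrees with the final tableau above.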
Example 4. As our last numerical example, consider the following problem:

Max -X1 + 2X2
Subject to: X1 + X2 ≥ 2, X1 + X2 ≥ 1, X2 ≤ 3, and X1, X2 ≥ 0.

Phase 1: Relaxing the ≥ constraints, the initial tableau is:

BVS    X1    X2    S3    RHS    C/R
S3      0     1     1     3     3/1
Cj     -1     2     0     0
The variable to come in is X2; after pivoting we have:

BVS    X1    X2    S3    RHS
X2      0     1     1     3
Cj     -1     0    -2
The optimal solution for the relaxed problem is X1 = 0, X2 = 3. This solution satisfies all the constraints; therefore, it must be the optimal solution to the original problem.

4. Concluding remarks

A new solution algorithm for general linear programs has been presented which compares favorably with both the primal and dual simplex methods. The classical primal simplex and dual simplex include all of the constraints in the first and all subsequent tableaux. We start with only the ≤ constraints and bring the ≥ constraints into the tableaux one by one; this frequently reduces the computational effort. The proposed algorithm is simple, has potential for wide adaptation, and deals with all cases.

It is well known that the initial basic solution used by the usual primal simplex and dual simplex can be far away from the optimal solution; see e.g. [11]. This motivated the relaxation of some of the constraints. Clearly, the application of relaxation to large-scale problems lowers complexity, see e.g. [12], but the disadvantage is that the rate of convergence can be very slow. As an alternative, one may adapt the constraint selection criteria given in [3]; however, that approach is limited to constraints with all RHS ≥ 0.

A degenerate vertex can cause computational cycling which is difficult to detect. In the event of degeneracy one must use some efficient anti-degeneracy device, such as ordering; see e.g. [13]. The current algorithm is less likely to cycle, since constraints are brought into the tableau one by one. Similarly, redundant constraints create difficulties in most existing solution algorithms; again, since constraints are brought into the tableau one by one, redundancy is less likely to cause trouble in the algorithm presented here. In comparison with the primal and dual simplex methods, the new algorithm generated the final tableau with fewer iterations in the tested examples.

Our proposed approach is a general-purpose method for solving LP problems.
Clearly, more specialized methods exist that can be used for solving LP problems with specific properties [14–16]. To reduce the number of iterations in the proposed algorithm even further, one may select the pivot element so as to maximize the "total increase" rather than the "rate of increase" used in the classical simplex and dual simplex [17].
Linear programming modeling and its applications form an ever faster growing area; see e.g. [18–25, and references therein] for applications and decision-making tools in all sectors of private and public administration. Therefore this new algorithm and others [26–52], with their nice features, will be of interest in these and other areas of LP application.

An immediate task is to construct the largest and most general sensitivity analysis regions (SAR) for LP models. Unlike ordinary sensitivity analysis, the SAR allows us to analyze any type of change, including dependent, independent, and multiple changes in both the right-hand-side values of the constraints and the coefficients of the objective function simultaneously. The study should include SAR construction for problems with primal and/or dual degenerate optimal solutions; the following LP problem is a good case to work with: Maximize
f(X) = X1 + X2 + 4X3
Subject to: X1 + 2X3 ≤ 2, X2 + 2X3 ≤ 2, X1 ≤ 2, X2 ≤ 2, X3 ≤ 2, X1 ≥ 0, X2 ≥ 0, X3 ≥ 0.

The feasible region is illustrated in the following figure:
(Figure: the feasible region in (X1, X2, X3)-space, showing Vertex 1 and Vertex 2 and the planes X1 + 2X3 = 2, X2 + 2X3 = 2, and X2 = 0.)
This problem has two optimal vertices: Vertex 1 with coordinates (0, 0, 1) and Vertex 2 with coordinates (2, 2, 0). Although the problem is three-dimensional, more than three constraints are binding at each vertex; therefore both are degenerate solutions. Moreover, both exhibit genuine degeneracy, not degeneracy caused by redundant constraints, since deleting any one of the above constraints changes the set of feasible solutions. Therefore, besides having degenerate solutions, this nice problem also has multiple solutions. The set of all optimal solutions is the edge (line segment) Vertex 1–Vertex 2 shown in the figure, which can be expressed as the convex combination of the two optimal vertices, i.e.:
(X1 = 2λ2, X2 = 2λ2, X3 = λ1), with the parametric optimal value f(X) = X1 + X2 + 4X3 = 4λ1 + 4λ2 = 4, where λ1 + λ2 = 1, λ1 ≥ 0, λ2 ≥ 0.

Multiple solutions means having more than one optimal solution; in such a case the decision maker may wish to make a further selection or simply accept what the solver reports. For example, the interior point methods often report a linear combination of the optimal basic solutions. Degeneracy and multiple optima are different concepts. Having multiple optimal solutions does imply that the dual problem has a degenerate solution; however, one may have multiple non-degenerate optima, or a unique solution that is degenerate, or even a combination, as is the case for the above interesting problem.

Another area for immediate future research is the development of an efficient code implementing this approach, together with an extensive study of its computational efficiency compared with existing methods. The implementation should use state-of-the-art practice instead of tableau-type computation. At this stage, a convincing evaluation for readers would be the application of this approach to a problem they have solved by other methods.

Acknowledgements

I am most appreciative to the reviewers for their useful comments and suggestions on an earlier version of this paper. This work is supported by NSF Grant CCR-9505732.

References

[1] H. Vieira Jr., M. Lins, An improved initial basis for the simplex algorithm, Computers & Operations Research 32 (2005) 1983–1993.
[2] J. Camm, A. Raturi, A. Tsubakitani, Cutting big M down to size, Interfaces 20 (1990) 61–66.
[3] D. Myers, A dual simplex implementation of a constraint selection algorithm for linear programming, Journal of the Operational Research Society 43 (1992) 177–180.
[4] K. Paparrizos, The two-phase simplex without artificial variables, Methods of Operations Research 61 (1990) 73–83.
[5] A. Sethi, G. Thompson, The pivot and probe algorithm for solving a linear program, Mathematical Programming 29 (1984) 219–233.
[6] A. Spivey, W. Thrall, Linear Optimization, Holt, Rinehart & Winston, London, 1970.
[7] H. Arsham, An algorithm for simplex tableau reduction: the push-to-pull solution strategy, Applied Mathematics and Computation 137 (2003) 525–547.
[8] H. Arsham, Initialization of the simplex algorithm: an artificial-free approach, SIAM Review 39 (1997) 736–744.
[9] H. Arsham, Affine geometric method for linear programs, Journal of Scientific Computing 12 (1997) 289–303.
[10] H. Arsham, A comprehensive simplex-like algorithm for network optimization and perturbation analysis, Optimization 32 (1995) 211–267.
[11] A. Ravindran, A comparison of the primal-simplex and complementary pivot methods for linear programming, Naval Research Logistics Quarterly 20 (1973) 95–100.
[12] I. Adler, R. Karp, R. Shamir, A family of simplex variants solving an m×d linear program in expected number of pivot steps depending on d only, Mathematics of Operations Research 11 (1986) 570–589.
[13] G. Dantzig, Making progress during a stall in the simplex algorithm, Linear Algebra and its Applications 114 (1989) 251–259.
[14] J. Barnes, R. Crisp Jr., Linear programming: a survey of general purpose algorithms, AIIE Transactions 7 (1975) 212–221.
[15] J. Moré, S. Wright, Optimization Software Guide, SIAM, Philadelphia, PA, 1993.
[16] T. Terlaky, S. Zhang, Pivot rules for linear programming: a survey on recent theoretical developments, Annals of Operations Research 46 (1993) 203–233.
[17] J. Forrest, D. Goldfarb, Steepest-edge simplex algorithm for linear programming, Mathematical Programming 57 (1992) 341–374.
[18] I. Longarela, A simple linear programming approach to gain, loss and asset pricing, Topics in Theoretical Economics 2 (2003) 1064.
[19] G. Ferrier, C. Lovell, Measuring cost efficiency in banking: econometric and linear programming evidence, Journal of Econometrics 46 (1–2) (1990) 229–245.
[20] A. Ruszczynski, R. Vanderbei, Frontiers of stochastically nondominated portfolios, Econometrica 71 (4) (2003) 1287–1297.
[21] P. Asabere, F. Huffman, S. Mehdian, Mispricing and optimal time on the market, Journal of Real Estate Research 8 (1993) 149–155.
[22] J. Wojciechowski, G. Ames, S. Turner, B. Miller, Marketing of cotton fiber in the presence of yield and price risk, Journal of Agricultural & Applied Economics 32 (2000) 521–529.
[23] Y. Mensah, C. Chiang, The informativeness of operating statements for commercial nonprofit enterprises, Journal of Accounting and Public Policy 15 (1996) 219–246.
[24] I. Ossella, A turnpike property of optimal programs for a class of simple linear models of production, Economic Theory 14 (1999) 597–607.
[25] R. Anderson, R. Fok, L. Zumpano, H. Elder, Measuring the efficiency of residential real estate brokerage firms, Journal of Real Estate Research 16 (2000) 139–158.
[26] H. Arsham, A tabular simplex type algorithm as a teaching aid for general LP models, Mathematical and Computer Modelling 12 (1989) 1051–1056.
[27] H. Arsham, A simplex type algorithm for general transportation problem: an alternative to stepping stone, Journal of the Operational Research Society 40 (1989) 581–590.
[28] H. Arsham, Refined simplex algorithm for the classical transportation problem with application to parametric analysis, Mathematical and Computer Modelling 12 (1989) 1035–1044.
[29] H. Arsham, Perturbation analysis of general LP models: a unified approach to sensitivity, parametric, tolerance, and more-for-less analysis, Mathematical and Computer Modelling 13 (1990) 79–102.
[30] H. Arsham, A complete algorithm for linear fractional programs, Computers & Mathematics with Applications 20 (1990) 11–23.
[31] H. Arsham, Postoptimality analysis for the transportation problem, Journal of the Operational Research Society 43 (1992) 121–139.
[32] H. Arsham, A solution algorithm with sensitivity analysis for optimal matching and related problems, Congressus Numerantium 102 (1994) 193–230.
[33] H. Arsham, Managing project activity-duration uncertainties, Omega: International Journal of Management Science 21 (1993) 111–122.
[34] H. Arsham, Stability of essential strategy in two-person zero-sum games, Congressus Numerantium 110 (1995) 167–180.
[35] H. Arsham, On the more-for-less paradoxical situations in linear programs: a parametric optimization approach, Journal of Information & Optimization Sciences 17 (1996) 485–500.
[36] H. Arsham, A refined algebraic method for linear programs and their geometric representation, Congressus Numerantium 102 (1996) 65–81.
[37] H. Arsham, Computational geometry and linear programs, International Journal of Modelling and Algorithms (the successor to the International Journal of Mathematical Algorithms) 1 (1999) 251–266.
[38] H. Arsham, Stability analysis of the critical path in project activity networks, Civil Engineering and Environmental Systems 15 (1999) 305–334.
[39] H. Arsham, A combined gradient and feasible direction pivotal solution algorithm for general LP, Revista Investigacion Operacional 19 (1998) 3–19.
[40] H. Arsham, Stability analysis for the shortest path problems, Congressus Numerantium 133 (1998) 171–210.
[41] H. Arsham, A warm-start dual simplex solution algorithm for the minimum flow network with postoptimality analysis, Mathematical and Computer Modelling 27 (1998) 85–115.
[42] H. Arsham, Distribution-routes stability analysis of the transportation problem, Optimization: A Journal of Mathematical Programming and Operations Research 43 (1998) 47–72.
[43] H. Arsham, Simplifying teaching the transportation problem, International Journal of Mathematical Education in Science & Technology 29 (1998) 271–285.
[44] H. Arsham, Managing cost uncertainties in transportation and assignment problems, Journal of Applied Mathematics & Decision Sciences 2 (1998) 65–102.
[45] H. Arsham, Managing cost uncertainties in transportation and assignment problems, International Journal of Mathematical Education in Science & Technology 29 (1998) 764–769.
[46] H. Arsham, LP-based solution algorithms for convex hull and related problems, Congressus Numerantium 127 (1997) 179–191.
[47] H. Arsham, An artificial-free simplex algorithm for general LP models, Mathematical and Computer Modelling 25 (1997) 107–123.
[48] H. Arsham, An algorithm for simplex tableau reduction with numerical comparisons, International Journal of Pure and Applied Mathematics 4 (2003) 57–85.
[49] H. Arsham, A computer implementation of the push-and-pull solution algorithm and its computational comparison with LP simplex method, Applied Mathematics and Computation 170 (2005) 36–63.
[50] H. Arsham, A big-M free solution algorithm for general linear programs, International Journal of Pure and Applied Mathematics 32 (2007) 24–47.
[51] H. Arsham, A hybrid gradient and feasible direction pivotal solution algorithm for general linear programs, Applied Mathematics and Computation 193 (2007) 972–975.
[52] H. Arsham, Perturbed matrix inversion with application to LP simplex method, Applied Mathematics and Computation 194 (2007) 1003–1013.