An Integer Programming Approach for Linear Programs with Probabilistic Constraints

James Luedtke, Shabbir Ahmed, George Nemhauser
H. Milton Stewart School of Industrial and Systems Engineering
Georgia Institute of Technology

October 24, 2006

Contact: [email protected]
Keywords: Integer programming, branch and cut algorithms

Abstract

Linear programs with joint probabilistic constraints (PCLP) are known to be highly intractable due to the non-convexity of the feasible region. We consider a special case of PCLP in which only the right-hand side is random and this random vector has a finite distribution. We present a mixed integer programming formulation and study the relaxation corresponding to a single row of the probabilistic constraint, yielding two strengthened formulations. As a byproduct of this analysis, we obtain new results for the previously studied mixing set, subject to an additional knapsack inequality. We present computational results indicating that, by using our strengthened formulations, large-scale instances can be solved to optimality.

1 Introduction

Consider a linear program with a probabilistic or chance constraint

\[
(PCLP) \qquad \min \left\{ cx \,:\, x \in X,\; P\{\tilde{T}x \ge \xi\} \ge 1 - \epsilon \right\} \tag{1}
\]

where X = {x ∈ R^d_+ : Ax = b} is a polyhedron, c ∈ R^d, T̃ is an m × d random matrix, ξ is a random vector taking values in R^m, and ε is a confidence parameter chosen by the decision maker, typically near zero, e.g., ε = 0.01 or ε = 0.05. Note that in (1) we enforce a single probabilistic constraint over all rows, rather than requiring that each row independently be satisfied with high probability. Such a constraint is known as a joint probabilistic constraint, and is appropriate in a context in which it is important to have all constraints satisfied simultaneously and there may be dependence between random variables in different rows.

Problems with joint probabilistic constraints have been extensively studied; see [19] for background and an extensive list of references. Probabilistic constraints have been used in various applications including supply chain management [14], production planning [16], optimization of chemical processes [12, 13] and surface water quality management [22].

Unfortunately, linear programs with probabilistic constraints are still largely intractable except for a few very special cases. There are two primary reasons for this intractability. First, in general, for a given x ∈ X, the quantity φ(x) := P{T̃x ≥ ξ} is hard to compute, as it requires multi-dimensional integration. Second, except for a few special cases, the feasible region defined by a probabilistic constraint is not convex.

Recently, several approaches have been proposed which can find highly reliable feasible solutions to probabilistic programs. Examples of these conservative approximations include scenario approximation [6, 17] and robust optimization, e.g., [4, 5, 9]. These methods are attractive in a context in which high reliability is critically important and solution cost is a secondary objective. However, in a context in which very high reliability is not crucial, for example if the probabilistic constraint represents a service level constraint, a decision maker may wish to explore the trade-off between solution cost and system reliability, and would be interested in obtaining solutions on or near the efficient frontier of these competing objectives. The aforementioned conservative approximations generally do not yield bounds on the optimal solution cost at a given reliability level ε, and hence cannot determine whether the produced solutions are close to the efficient frontier. This latter context is the motivation for seeking to use integer programming to solve PCLP, so that we can obtain solutions that are provably optimal or near optimal.

In this work, we demonstrate that by using integer programming techniques, PCLP can be solved efficiently under the following two simplifying assumptions:

(A1) Only the right-hand side vector ξ is random; the matrix T̃ = T is deterministic.

(A2) The random vector ξ has a finite distribution.

Despite its restrictiveness, the special case given by assumption A1 has received considerable attention in the literature; see, e.g., [7, 8, 19]. A notable result for this case is that if the distribution of the right-hand side is log-concave, then the feasible region defined by the joint probabilistic constraint is convex [18]. This allows problems in which the random vector has small dimension to be solved to optimality, but higher-dimensional problems remain intractable due to the previously mentioned difficulty in checking feasibility of the probabilistic constraint. Specialized methods have been

developed in [8] for the case in which assumption A1 holds and the random vector has a discrete but not necessarily finite distribution. However, these methods also do not scale well with the dimension of the random vector.

Assumption A2 may also seem very restrictive. However, if the possible values for ξ are generated by taking Monte Carlo samples from a general distribution, we can think of the resulting problem as an approximation of the problem with this distribution. Under some reasonable assumptions it can be shown that the optimal solution of the sampled problem converges exponentially fast to the optimal solution of the original problem as the number of scenarios goes to infinity. Also, the optimal objective of the sampled problem can be used to develop statistical lower bounds on the optimal objective of the original problem. See [1, 3, 21] for some related results. It seems that the reason such a sampling approach has not been seriously considered for PCLP in the past is that the resulting sampled problem has a non-convex feasible region, and thus is still generally intractable. Our contribution is to demonstrate that, at least under assumption A1, it is nonetheless possible to solve the sampled problem in practice.

Under assumption A2 it is possible to write a mixed integer programming formulation for PCLP, as has been done, for example, in [20]. In the general case, such a formulation requires the introduction of "big-M" type constraints, and hence is difficult to solve. However, by restricting attention to the case of assumption A1, we are able to develop strong mixed integer programming formulations. Our approach in developing these formulations is to consider the relaxation obtained from a single row of the probabilistic constraint. It turns out that this yields a system similar to the mixing set introduced by Günlük and Pochet [11], subject to an additional knapsack inequality. We are able to derive strong valid inequalities for this system by first using the knapsack inequality to "preprocess" the mixing set and then applying the mixing inequalities of [11]; see also [2, 10]. We also derive an extended formulation, equivalent to one given by Miller and Wolsey in [15]. Making further use of the knapsack inequality, we are able to derive more general classes of valid inequalities, for both the original and extended formulations. If all scenarios are equally likely, the knapsack inequality reduces to a cardinality restriction. In this case, we are able to characterize the convex hull of feasible solutions to the extended formulation for the single-row case. We emphasize that although these results are motivated by the application to PCLP, they can be used in any problem in which a mixing set appears along with a knapsack constraint.
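As a concrete illustration of the sampled approximation discussed above, the following sketch (not part of the paper; the gamma distribution and the helper name are assumptions made purely for the example) draws n equiprobable Monte Carlo scenarios for the right-hand side ξ. The finite-support data it produces, scenarios ξ_1, . . . , ξ_n with probabilities π_i = 1/n, is exactly the kind of input the MIP formulation of Section 2 consumes.

```python
import numpy as np

def sample_scenarios(rng, n, m, shape=2.0, scale=5.0):
    """Draw n equiprobable Monte Carlo scenarios for the right-hand side xi.

    A gamma distribution is used purely for illustration (it keeps the
    scenarios nonnegative, matching the assumption xi_i >= 0); any sampler
    for the true distribution of xi could be substituted.
    """
    xi = rng.gamma(shape, scale, size=(n, m))  # scenario matrix, shape (n, m)
    pi = np.full(n, 1.0 / n)                   # equal probabilities pi_i = 1/n
    return xi, pi

rng = np.random.default_rng(0)
xi, pi = sample_scenarios(rng, n=1000, m=3)
```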

2 The MIP Formulation

We now consider a probabilistically constrained linear programming problem with random right-hand side, given by

\[
\begin{aligned}
(PCLP) \quad \min\;\; & cx \\
\text{s.t.}\;\; & Ax = b \\
& P\{Tx \ge \xi\} \ge 1 - \epsilon \\
& x \ge 0.
\end{aligned} \tag{2}
\]

Here A is an r × d matrix, b ∈ R^r, T is an m × d matrix, ξ is a random vector in R^m, ε ∈ (0, 1) (typically small) and c ∈ R^d. We assume that ξ has finite support, that is, there exist vectors ξ_i, i = 1, . . . , n, such that P{ξ = ξ_i} = π_i for each i, where π_i ≥ 0 and Σ_{i=1}^n π_i = 1. We will refer to the possible outcomes as scenarios. We assume without loss of generality that ξ_i ≥ 0 and π_i ≤ ε for each i. We also define the set N = {1, . . . , n}. Before proceeding, we note that PCLP is NP-hard even under assumptions A1 and A2.


Theorem 1. PCLP is NP-hard, even in the special case in which π_i = 1/n for all i ∈ N, the constraints Ax = b are not present, T is the m × m identity matrix, and c = (1, 1, . . . , 1) ∈ R^m.

We now formulate PCLP as a mixed integer program. To do so, we introduce for each i ∈ N a binary variable z_i, where z_i = 0 will guarantee that Tx ≥ ξ_i. Also introducing variables v ∈ R^m to summarize Tx, we obtain the MIP formulation given by

\[
\begin{aligned}
(PMIP) \quad \min\;\; & cx && \\
\text{s.t.}\;\; & Ax = b, \quad Tx - v = 0 && (3)\\
& v + \xi_i z_i \ge \xi_i, \quad i = 1, \dots, n && (4)\\
& \sum_{i=1}^{n} \pi_i z_i \le \epsilon && (5)\\
& x \ge 0, \quad z \in \{0,1\}^n. &&
\end{aligned}
\]
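For concreteness, a minimal sketch of how (PMIP) might be assembled is given below. It uses the open-source PuLP modeling library purely as an example (the paper does not prescribe any particular modeling tool) and assumes the data c, A, b, T, the scenario matrix ξ (one scenario per row), the probabilities π, and ε are supplied as NumPy arrays and a float.

```python
import numpy as np
import pulp

def build_pmip(c, A, b, T, xi, pi, eps):
    """Assemble (PMIP): objective cx with constraints (3)-(5).

    c: (d,), A: (r, d), b: (r,), T: (m, d), xi: (n, m), pi: (n,), eps: float.
    """
    r, d = A.shape
    n, m = xi.shape
    prob = pulp.LpProblem("PMIP", pulp.LpMinimize)

    x = [pulp.LpVariable(f"x_{j}", lowBound=0) for j in range(d)]      # x >= 0
    v = [pulp.LpVariable(f"v_{k}") for k in range(m)]                  # v summarizes Tx
    z = [pulp.LpVariable(f"z_{i}", cat=pulp.LpBinary) for i in range(n)]

    prob += pulp.lpSum(float(c[j]) * x[j] for j in range(d))           # objective cx

    for s in range(r):                                                 # Ax = b
        prob += pulp.lpSum(float(A[s, j]) * x[j] for j in range(d)) == float(b[s])
    for k in range(m):                                                 # Tx - v = 0      (3)
        prob += pulp.lpSum(float(T[k, j]) * x[j] for j in range(d)) - v[k] == 0
    for i in range(n):                                                 # v + xi_i z_i >= xi_i  (4)
        for k in range(m):
            prob += v[k] + float(xi[i, k]) * z[i] >= float(xi[i, k])
    prob += pulp.lpSum(float(pi[i]) * z[i] for i in range(n)) <= eps   # knapsack (5)

    return prob, x, v, z
```

Calling prob.solve() then hands the model to whichever solver PuLP is configured with; Section 3 develops stronger replacements for the rows (4).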

3 Strengthening the Formulation

Our approach is to strengthen PMIP by ignoring (3) and finding strong formulations for the set

\[
F := \left\{ (v, z) \in \mathbb{R}^m_+ \times \{0,1\}^n : (v, z) \text{ satisfy } (4), (5) \right\}. \tag{6}
\]

Note that

\[
F = \bigcap_{j=1}^{m} \left\{ (v, z) : (v_j, z) \in G_j \right\},
\]

where, for j = 1, . . . , m,

\[
G_j = \left\{ (v_j, z) \in \mathbb{R}_+ \times \{0,1\}^n : \sum_{i=1}^{n} \pi_i z_i \le \epsilon,\;\; v_j + \xi_i^j z_i \ge \xi_i^j, \; i = 1, \dots, n \right\}.
\]
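Concretely, for a fixed row j the set G_j depends on the knapsack data (π, ε) and only on the j-th components of the scenarios. A small helper such as the following (a sketch with hypothetical names) extracts that data and sorts it nonincreasingly, the indexing convention adopted for the generic set introduced below; the permutation is kept so that inequalities derived in the sorted index space can be mapped back to the original scenario indices.

```python
import numpy as np

def single_row_data(xi, pi, j):
    """Data defining G_j for row j: thresholds h_i = xi_i^j sorted so that
    h_1 >= h_2 >= ... >= h_n, with pi reordered consistently.

    Returns (h_sorted, pi_sorted, order); order[k] is the original scenario
    index of the k-th largest threshold.
    """
    h = xi[:, j]
    order = np.argsort(-h, kind="stable")   # indices giving nonincreasing h
    return h[order], pi[order], order
```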

Thus, a natural first step in developing a strong formulation for F is to develop a strong formulation for each G_j. In particular, note that if an inequality is facet-defining for conv(G_j), then it is also facet-defining for conv(F). This follows because an inequality valid for G_j that is supported by n + 1 affinely independent points in R^{n+1} has no coefficients on v_i for any i ≠ j, so the set of supporting points can trivially be extended to a set of n + m affinely independent supporting points in R^{n+m} by appropriately setting the v_i values for each i ≠ j.

The above discussion leads us to consider the generic set

\[
G = \left\{ (y, z) \in \mathbb{R}_+ \times \{0,1\}^n : \sum_{i=1}^{n} \pi_i z_i \le \epsilon,\;\; y + h_i z_i \ge h_i, \; i = 1, \dots, n \right\} \tag{7}
\]

obtained by dropping the index j in the set G_j and setting y = v_j and h_i = ξ_i^j for each i. For convenience, we assume that the h_i are ordered so that h_1 ≥ h_2 ≥ · · · ≥ h_n. The mixing set

\[
P = \left\{ (y, z) \in \mathbb{R}_+ \times \{0,1\}^n : y + h_i z_i \ge h_i, \; i = 1, \dots, n \right\}
\]

has been extensively studied, in varying degrees of generality, by Atamtürk et al. [2], Günlük and Pochet [11], Guan et al. [10] and Miller and Wolsey [15]. If we ignore the knapsack constraint in G,

we can apply these results to obtain the set of valid inequalities, which we call the star inequalities following [2],

\[
y + \sum_{j=1}^{l} \left( h_{t_j} - h_{t_{j+1}} \right) z_{t_j} \ge h_{t_1} \qquad \forall\, T = \{t_1, \dots, t_l\} \subseteq N, \tag{8}
\]

where t_1 < t_2 < · · · < t_l and h_{t_{l+1}} := 0. In addition, these inequalities can be separated in polynomial time [2, 11, 10]. It has been shown in these same works that these inequalities define the convex hull of P and are facet-defining if and only if t_1 = 1.

We can do considerably better, however, by using the knapsack constraint in G to first strengthen the inequalities and then derive the star inequalities. In particular, let

\[
p := \max \Big\{ k : \sum_{i=1}^{k} \pi_i \le \epsilon \Big\}.
\]

Then, due to the knapsack constraint, we cannot have z_i = 1 for all i = 1, . . . , p + 1, and thus we have y ≥ h_{p+1}. This also implies that the mixed integer constraints in G are redundant for i = p + 1, . . . , n. Thus, we can write a tighter formulation of G as

\[
G = \left\{ (y, z) \in \mathbb{R}_+ \times \{0,1\}^n : \sum_{i=1}^{n} \pi_i z_i \le \epsilon,\;\; y + (h_i - h_{p+1}) z_i \ge h_i, \; i = 1, \dots, p \right\}. \tag{9}
\]
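Both the preprocessing step and the star inequalities are straightforward to implement. The sketch below is an illustration only (helper names and tolerances are assumptions; it takes the thresholds h already sorted nonincreasingly with π indexed consistently, as above). It computes p, writes out the rows of the tightened description (9), and separates the star inequalities (8) at a fractional point (y*, z*) with a simple greedy rule: keep exactly the indices at which the running minimum of z* strictly decreases, which maximizes the violation over the family (8). Applying the same routine to the first p entries with floor h_{p+1} yields inequalities that are valid for the tightened set (9).

```python
import numpy as np

def compute_p(pi, eps, tol=1e-9):
    """p = max{k : pi_1 + ... + pi_k <= eps}, with pi in the h-sorted order."""
    p, total = 0, 0.0
    for val in pi:
        if total + val > eps + tol:
            break
        total += val
        p += 1
    return p

def tightened_rows(h, pi, eps):
    """Rows of the tightened description (9): y + (h_i - h_{p+1}) z_i >= h_i, i <= p.

    h must be nonincreasing and pi indexed consistently with h.  The knapsack
    constraint (5) is kept unchanged; y >= h_{p+1} is implied by any of these
    rows.  Returns p, h_{p+1}, and the (coefficient_of_z_i, rhs) pairs.
    """
    p = compute_p(pi, eps)
    h_p1 = h[p] if p < len(h) else 0.0
    return p, h_p1, [(h[i] - h_p1, h[i]) for i in range(p)]

def separate_star(y_star, z_star, h, floor=0.0, tol=1e-9):
    """Greedy separation of the star inequalities (8) at a point (y*, z*).

    Assumes h[0] >= h[1] >= ...; 'floor' plays the role of h_{t_{l+1}}
    (0 for (8); pass floor=h_{p+1} and restrict h, z_star to the first p
    entries to obtain inequalities valid for (9)).  Returns (T, violation)
    with T a list of 0-based positions in the sorted order, or (None, 0.0)
    if no violated star inequality of this form exists.
    """
    T, run_min, prev_h, value = [], 1.0, None, 0.0
    for i, (hi, zi) in enumerate(zip(h, z_star)):
        if zi < run_min - tol:                       # running minimum decreases
            if T:                                    # previous kept index covers [hi, prev_h]
                value += (prev_h - hi) * (1.0 - run_min)
            T.append(i)
            run_min, prev_h = zi, hi
    if not T:
        return None, 0.0
    value += (prev_h - floor) * (1.0 - run_min)      # last segment down to the floor
    violation = value - (y_star - floor)
    return (T, violation) if violation > tol else (None, 0.0)

# Example usage on toy data (h nonincreasing, pi consistent with h):
h = np.array([9.0, 7.0, 4.0, 2.0]); pi = np.array([0.03, 0.03, 0.03, 0.05])
p, h_p1, rows = tightened_rows(h, pi, eps=0.10)       # p = 3, h_{p+1} = 2.0
T, viol = separate_star(1.0, np.array([0.9, 0.2, 0.6, 1.0]), h)
```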

Remark 1. In addition to yielding a tighter relaxation, this description of G is also more compact. In typical applications, ε will be near 0, suggesting that p will be much smaller than n.