RC xxxxx (yyyyy) revised March xx, 1998 Computer Science/Mathematics

IBM Research Report

Approximation Algorithms for the Multiple Knapsack Problem with Assignment Restrictions

M. Dawande¹, J. Kalagnanam¹, P. Keskinocak¹, R. Ravi², F. S. Salman²

LIMITED DISTRIBUTION NOTICE: This report has been submitted for publication outside of IBM and will probably be copyrighted if accepted for publication. It has been issued as a Research Report for early dissemination of its contents. In view of the transfer of copyright to the outside publisher, its distribution outside of IBM prior to publication should be limited to peer communications and specific requests. After outside publication, requests should be filled only by reprints or legally obtained copies of the article (e.g., payment of royalties).

IBM Research Division: Almaden, Austin, T.J. Watson, Beijing, Haifa, Tokyo, Zurich











¹ IBM Research Division, T. J. Watson Research Center, Yorktown Heights, New York 10598, {milind,jayant,poms}@us.ibm.com

² GSIA, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh PA 15213, {ravi,ssalman}@andrew.cmu.edu

Abstract

Motivated by a real-world application, we study the multiple knapsack problem with assignment restrictions (MKAR). We are given a set of items N = {1, …, n} and a set of knapsacks M = {1, …, m}. Each item j ∈ N has a positive real weight w_j and each knapsack i ∈ M has a positive real capacity c_i associated with it. In addition, for each item j ∈ N, a set A_j ⊆ M of knapsacks that can hold item j is specified. In a feasible assignment of items to knapsacks, for each knapsack i ∈ M we choose a subset S_i of items in N to be assigned to knapsack i, such that (i) each item is assigned to at most one knapsack, (ii) the assignment restrictions are satisfied, and (iii) the capacity constraint of each knapsack is satisfied. We consider two objectives: (i) maximize the assigned weight Σ_{i∈M} Σ_{j∈S_i} w_j, and (ii) minimize the utilized capacity Σ_{i : S_i ≠ ∅} c_i. Our results include two 1/3-approximation algorithms and two 1/2-approximation algorithms for the single-objective problem of maximizing assigned weight. For the bicriteria problem, which considers both objectives, we present two algorithms with performance ratios (1/3, 2) and (1/2, 3), respectively.

1 Introduction

In this paper, we study the Multiple Knapsack Problem with Assignment Restrictions (MKAR), which is a variant of the Multiple Knapsack Problem [Kar72], [MT90], [MT80], [MT81a], [HF78], [FMW96]. In MKAR, we are given a set of items N = {1, …, n} and a set of knapsacks M = {1, …, m}. Each item j ∈ N has a positive real weight w_j and each knapsack i ∈ M has a positive real capacity c_i associated with it. In addition, for each item j ∈ N, a set A_j ⊆ M of knapsacks that can hold item j is specified. Although the A_j's suffice to represent the assignment restrictions, for convenience we also specify, for each knapsack i ∈ M, the set B_i ⊆ N of items that can be assigned to the knapsack. We say item j is admissible to knapsack i if j ∈ B_i. In a feasible assignment of items to knapsacks, for each knapsack i ∈ M we need to choose a subset S_i of items in N to be assigned to knapsack i, such that:

(1) All S_i's are disjoint. (Each item is assigned to at most one knapsack.)

(2) Each S_i is a subset of B_i, for i = 1, …, m. (Assignment restrictions are satisfied.)

(3) Σ_{j∈S_i} w_j ≤ c_i, for i = 1, …, m. (The total weight of items assigned to a knapsack does not exceed the capacity of the knapsack.)

The main motivation for the study of the MKAR problem came from inventory matching applications in the steel industry [KDTL97], [DK98]. The problem of assigning a given set of orders to the production units in the inventory arises frequently in production planning and scheduling. Manufacturability considerations, such as the compatibility of orders and production units in terms of quality, size, etc., impose additional assignment constraints. As production operations involve more complex processes and a larger product variety, the problem becomes more constrained. Thus, in general, a given order may not be assignable to all the production units.

One of the main objectives for such a problem is to find a feasible assignment maximizing the assigned weight Σ_{i∈M} Σ_{j∈S_i} w_j. In many applications, in addition to maximizing assigned weight, minimizing the total capacity of the utilized knapsacks is also important. We therefore also study MKAR considering the objectives of minimizing utilized capacity Σ_{i : S_i ≠ ∅} c_i (or, equivalently, minimizing waste) and maximizing assigned weight simultaneously. We refer to this problem as the Bicriteria Multiple Knapsack Problem with Assignment Restrictions (BMKAR). For the objective of minimizing total utilized capacity alone, BMKAR is a variant of the variable-size bin packing problem [CGJ97], [CGJ84], [MT89], [FL86]. We focus on obtaining approximate solutions for MKAR and BMKAR in polynomial computation time. An α-approximation algorithm produces, for every instance, a solution whose objective function value is within a factor α of the optimum. The factor α is called the performance ratio or the worst-case factor of the algorithm.

The paper is organized as follows. In Section 1.1, we define some terms that are frequently used later in the paper, and we give a summary of our results in Section 1.2. Next, we present some hardness results for MKAR in Section 2. In Section 3, we present our algorithms and their analyses for MKAR with the objective of maximizing assigned weight only. We present our results on the bicriteria problem BMKAR in Section 4. We conclude in Section 5 and give some future research directions.

1.1 Preliminaries

Without loss of generality, we assume that w_j ≤ c_i for all j ∈ B_i; otherwise j can be removed from B_i. The problem becomes trivial if all A_j's are disjoint, or if Σ_{j∈B_i} w_j ≤ c_i for all i ∈ M. In the case that all B_i's are disjoint, the problem decomposes into m single 0-1 knapsack problems. Thus, we exclude these cases from consideration. The assignment restrictions can also be represented by a bipartite graph, where the two disjoint node sets of the graph correspond to the sets N and M. Let G = (V, E) be the corresponding bipartite graph with V = N ∪ M. Then there exists an edge (i, j) ∈ E between nodes i and j if and only if j ∈ B_i. We define the density of a bipartite graph G as the ratio of the number of edges in G to the number of edges in a complete bipartite graph on the vertices of G, i.e., the density of G equals |E|/(|N||M|). We call a graph sparse if its density is low. Note that in the related multiple knapsack problem, the underlying bipartite graph is always complete.

1.2 Summary of Results

We first consider the single objective of maximizing assigned weight under assignment restrictions. We show that this problem is NP-hard in the strong sense, even when all knapsack capacities are equal and the density of the underlying bipartite graph approaches zero. For the objective of maximizing assigned weight, we show that two simple algorithms have a worst-case factor of 1/3 for MKAR. The analyses of these algorithms are existentially tight. We give a 1/2-approximation algorithm that is based on solving single knapsack problems successively; this algorithm has a pseudo-polynomial running time. We then give a 1/2-approximation algorithm by linear programming (LP) rounding. The rounding scheme is fast, and the LP can be solved by a maximum flow algorithm. This result implies that the bound obtained from the LP relaxation of the IP formulation is at most twice the optimal IP value. Finally, we present a greedy algorithm with performance ratio 1/2.

Next, we consider BMKAR with the objective of minimizing utilized capacity, subject to a lower bound on the assigned weight. We show that it is not possible to have a polynomial algorithm with constant performance ratio that satisfies the lower bound exactly, unless P = NP. We present two algorithms, with performance ratios (1/3, 2) and (1/2, 3), respectively. The first algorithm solves single knapsack problems, hence its running time is pseudo-polynomial. The second algorithm is a greedy algorithm and runs in polynomial time.

2 Complexity

In this section, we show that MKAR is NP-hard in the strong sense, even when all knapsack capacities are equal and the bipartite graph representing the assignment restrictions is very sparse.

Theorem 1 MKAR is NP-hard in the strong sense, even when all of the knapsack capacities are equal.

The proof follows by a reduction from the Numerical 3-Dimensional Matching (N3DM) problem.

Numerical 3-Dimensional Matching (N3DM)

INSTANCE: Integers n, d and a_i, b_i, c_i, i = 1, …, n, satisfying

  Σ_i (a_i + b_i + c_i) = nd  and  0 < a_i, b_i, c_i < d for i = 1, …, n.

QUESTION: Are there permutations ρ and σ of {1, …, n} such that

  a_i + b_{ρ(i)} + c_{σ(i)} = d for i = 1, …, n?

This problem is NP-complete in the strong sense (Garey and Johnson [GJ79]).

Proof of Theorem 1

Given an instance I of N3DM, construct the following instance I′ of MKAR. For each a_i, b_i, c_i in N3DM, there is an item of weight a_i, b_i, c_i + d, respectively, in MKAR. There are n knapsacks in MKAR, with capacity 2d each. Item a_i can be assigned to knapsack i only, i = 1, …, n. The remaining items can be assigned to any knapsack (Figure 1(a)). We will show that there exists an assignment of total weight n(2d) for I′ if and only if there exists a 3-dimensional matching for I. First, suppose that there exists an assignment of weight n(2d) for I′. In this assignment, (1) all items must be assigned, (2) each knapsack must contain exactly one c item, since no two c items can be put into the same knapsack, (3) each knapsack must contain exactly one a item, because of the assignment restrictions, and (4) each knapsack must contain exactly one b item, because of (2) and (3), the sizes of the b items, and the knapsack capacities. Now a matching can easily be constructed. Conversely, given a matching for I, one can easily construct an assignment for I′ in which all the items are assigned: simply assign items a_i, b_{ρ(i)} and c_{σ(i)} to knapsack i. □

Next, we slightly modify the proof of Theorem 1 to show that MKAR is NP-hard even if the bipartite graph representing the assignment restrictions has arbitrarily low edge density.

Theorem 2 MKAR is NP-hard even if the density of the underlying bipartite graph tends to zero.

Proof Given an instance of N3DM, we modify the instance I′ of MKAR defined in the proof of Theorem 1 to obtain an instance I″. We add knapsacks n+1, …, n+k and items 1, …, k+1, where k is an arbitrary positive integer. Each new knapsack has unit capacity. The first new item has weight 1/nd and can be assigned to knapsacks n and n+1. Items 2 through k+1 have unit weight. While each new item i can be assigned to knapsacks n+i−1 and n+i, i = 1, …, k, the (k+1)-st item can only be assigned to knapsack n+k (see Figure 1(b)). Note that the bipartite graph representation of I″ is connected. It is easy to show that there exists a solution to N3DM if and only if there is an assignment of value 2nd + k for I″. If there exists a solution to N3DM, we can easily find an assignment of value 2nd + k in which none of the new items is assigned to the old knapsacks (the first n knapsacks). Similarly, given an assignment of value 2nd + k, the new items must have been assigned to the new knapsacks, hence the proof of Theorem 1 remains valid. A complete bipartite graph with the item and knapsack sets of I″ has (3n + k + 1)(n + k) edges. On the other hand, the bipartite graph representation of I″ has 2n² + n + 2k edges. As k gets arbitrarily large (in particular, an order of magnitude larger than n), the edge density of the bipartite graph representation of I″ is O(2/k). Note that this is the minimum possible density of any instance with the same item and knapsack set sizes, since the underlying bipartite graph must be connected. Hence, we can claim that there are very sparse instances of MKAR which are as hard as the N3DM problem. □


Figure 1: Instances of MKAR

Recently, we have heard by personal communication with A. Caprara that he has shown that there exists no PTAS for the classical Multiple Knapsack Problem, even when item weights equal profits. This result implies that there exists no PTAS for MKAR, either.

3 Maximizing Assigned Weight under Assignment Restrictions

In this section, we first show that solutions with a certain property, which we call the half-full property, are 1/3-approximations. We present two algorithms that produce solutions with the half-full property, i.e., the performance ratio of these algorithms is 1/3. We also show that this performance ratio is existentially tight for both algorithms. Next, we present three different algorithms with performance ratio 1/2. The first algorithm solves m single knapsack problems, the second algorithm is based on rounding an optimum vertex solution of the LP relaxation, and the third algorithm is a modified greedy algorithm. The running time of the first algorithm is pseudo-polynomial, whereas the second and third algorithms are polynomial.

3.1 Greedy Approaches

We first characterize a property common to the solutions generated by certain greedy approaches.

Definition 3.1 A solution for MKAR has the half-full property if the following condition holds: for every unassigned item in the solution, every knapsack to which this item is admissible is at least half full.
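Definition 3.1 can be checked mechanically; the following is a small Python sketch, where the data layout (dicts keyed by item and knapsack ids) and the function name are our own:

```python
def has_half_full_property(weights, capacities, admissible, assignment):
    """Check Definition 3.1: every knapsack admissible to an unassigned
    item must be at least half full.

    weights[j]    -- weight w_j of item j
    capacities[i] -- capacity c_i of knapsack i
    admissible[j] -- set A_j of knapsacks that can hold item j
    assignment[j] -- knapsack holding item j, or None if unassigned
    """
    # load[i] = total weight currently assigned to knapsack i
    load = {i: 0.0 for i in capacities}
    for j, i in assignment.items():
        if i is not None:
            load[i] += weights[j]
    for j, i in assignment.items():
        if i is None:  # item j is unassigned
            if any(load[k] < capacities[k] / 2 for k in admissible[j]):
                return False
    return True
```
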

We show that any solution with the half-full property achieves at least one third of the weight assigned in an optimum solution.

Theorem 3 Any solution with the half-full property is a 1/3-approximation.

Proof Let U be the set of items not assigned to any knapsack and let A_U be the set of knapsacks eligible to accommodate at least one of the items in U, i.e., A_U = ∪_{j∈U} A_j. Let C(A_U) be the total capacity of the knapsacks in A_U and W be the total weight of the items assigned to the knapsacks in the solution. Similarly, let W(A_U) denote the total weight of the items assigned to the knapsacks in A_U in the solution. By the half-full property, at least half of the capacity of every knapsack in A_U is filled in the solution, i.e., C(A_U) ≤ 2W(A_U). The proof follows from the following claims:

(1) In an arbitrary feasible solution, let U′ ⊆ U be the set of items of U that are assigned to knapsacks and let W(U′) be the total weight of these items. In any solution, including the optimum solution, W(U′) ≤ C(A_U).

(2) The total weight assigned in any solution, including the optimum solution, is at most W + C(A_U).

Proof of Claim (1): The items in U′ can only be assigned to the knapsacks in A_U, and the total capacity of the knapsacks in A_U is C(A_U). Therefore, the assigned weight of the items in U′ is at most C(A_U) in any feasible solution.

Proof of Claim (2): The maximum weight one could obtain by assigning items in U to the knapsacks is C(A_U) (from Claim (1)). The total weight of the remaining items, i.e., the items in N − U, is W. Hence, W + C(A_U) is an upper bound on the total weight of any solution.

Since W + C(A_U) ≤ W + 2W(A_U) ≤ 3W, the result follows. □

The following greedy algorithms generate solutions that have the half-full property.

Algorithm Greedy
Sort the items by weight in non-increasing order. Sort the knapsacks by capacity in non-decreasing order. At each step, assign the next item in the list to the first eligible knapsack (if any), considering capacity and assignment restrictions.

Algorithm Greedy Ratio
Let W_i^t denote the total weight of the remaining items that can be assigned to knapsack i at the beginning of iteration t. Note that W_i^1 = Σ_{j∈B_i} w_j. At iteration t, pick a knapsack i with minimum ratio W_i^t/c_i over all remaining knapsacks. Sort the remaining items admissible to knapsack i in non-increasing order of weight. Assign as many items from the list as possible to knapsack i.
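Both greedy algorithms can be sketched in a few lines of Python. The data layout (dicts keyed by item and knapsack ids) and function names are our own, and ties are broken arbitrarily:

```python
def greedy(weights, capacities, admissible):
    """Algorithm Greedy: items by non-increasing weight, knapsacks by
    non-decreasing capacity; assign each item to the first eligible
    knapsack with enough residual capacity."""
    residual = dict(capacities)
    assignment = {}
    knaps = sorted(capacities, key=capacities.get)            # non-decreasing capacity
    for j in sorted(weights, key=weights.get, reverse=True):  # non-increasing weight
        for i in knaps:
            if i in admissible[j] and weights[j] <= residual[i]:
                assignment[j] = i
                residual[i] -= weights[j]
                break
    return assignment

def greedy_ratio(weights, capacities, admissible):
    """Algorithm Greedy Ratio: repeatedly pick the remaining knapsack i
    minimizing W_i^t / c_i (remaining admissible weight over capacity),
    then fill it greedily with its admissible items, heaviest first."""
    remaining_items = set(weights)
    remaining_knaps = set(capacities)
    assignment = {}
    while remaining_knaps and remaining_items:
        def adm_weight(i):
            return sum(weights[j] for j in remaining_items if i in admissible[j])
        k = min(remaining_knaps, key=lambda i: adm_weight(i) / capacities[i])
        cap = capacities[k]
        for j in sorted((j for j in remaining_items if k in admissible[j]),
                        key=weights.get, reverse=True):
            if weights[j] <= cap:
                assignment[j] = k
                cap -= weights[j]
                remaining_items.discard(j)
        remaining_knaps.discard(k)
    return assignment
```
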

Theorem 4 Algorithms Greedy and Greedy Ratio have performance ratio 1/3 for MKAR.

Proof: The proof follows by showing that the solutions generated by Algorithms Greedy and Greedy Ratio have the half-full property. Consider a solution generated by one of these algorithms. In that solution, let U be the set of items not assigned to any knapsack and A_U be the set of knapsacks eligible to accommodate at least one of the items in U. Pick any item j ∈ U. All the knapsacks in A_j (i.e., the knapsacks to which item j is admissible) must be (partially) filled; otherwise, item j would be assigned. Suppose there is a knapsack i ∈ A_j which has more than half of its capacity c_i free in the greedy assignment. If w_j ≤ c_i/2, then both algorithms would assign item j to knapsack i, a contradiction. If w_j > c_i/2, then by the choice of the algorithms there must be another item k with w_k ≥ w_j > c_i/2 which was assigned to knapsack i before item j. In this case, again, at least half of the capacity of knapsack i is filled. □

Note that Algorithm Greedy is a 1/2-approximation algorithm for the multiple knapsack problem with the objective of maximizing assigned weight, but its performance ratio is 1/3 if there are assignment restrictions. The performance ratio 1/3 is tight, as shown by the following examples.

Example 1: There are three items with weights M + ε, M and M, and two knapsacks with capacities 2M and M + ε. Item 1 can be assigned to either of the knapsacks, whereas items 2 and 3 can only be assigned to knapsack 1. Algorithm Greedy assigns item 1 to knapsack 1 (and makes no other assignments), with total assigned weight M + ε. In the optimum solution, item 1 is assigned to knapsack 2 and items 2 and 3 are assigned to knapsack 1, with total weight 3M + ε. Algorithm Greedy Ratio finds the optimal solution for this example. However, we can easily extend this example to one for which Algorithm Greedy Ratio gives a solution with performance ratio 1/3.

Example 2: We make n copies of the instance in Example 1. Let knapsacks 1, …, n be those with capacity 2M, and knapsacks n+1, …, 2n be those with capacity M + ε. Let items i_1, i_2 and i_3, of weight M + ε, M and M, respectively, be admissible to knapsack i, i = 1, …, n. In addition, item i_1 can be assigned to each of the knapsacks n+1, …, 2n, for i = 1, …, n. Algorithm Greedy Ratio assigns item i_1 to knapsack i for i = 1, …, n−1, item n_1 to one of the knapsacks of capacity M + ε, and items n_2 and n_3 to knapsack n. This makes a total assigned weight of (M + ε)n + 2M. In the optimum solution, items i_2 and i_3 are assigned to knapsack i, while item i_1 is assigned to knapsack n + i, for i = 1, …, n. Thus, the maximum assigned weight is 2Mn + (M + ε)n ≈ 3Mn.

The idea of Algorithm Greedy Ratio, which is to sort the knapsacks by remaining admissible weight over capacity, can be used as a preprocessing step to reduce the size of the problem before applying any other algorithm for MKAR.

Preprocessing Step
Initially, t = 1. At iteration t, calculate W_i^t for all remaining knapsacks. Let k be a knapsack with the smallest ratio W_k^t/c_k. If this ratio is smaller than 1, assign to knapsack k all the remaining items admissible to it and continue to the next iteration; otherwise, stop.

It is easy to show that the partial assignment generated by the preprocessing step, combined with an optimal assignment for the residual problem, is an optimum solution to the original problem. Hence, the preprocessing step partitions the MKAR problem into two independent subproblems and solves one of the subproblems optimally.
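The preprocessing step can be sketched as follows (function and variable names are hypothetical); it returns the fixed partial assignment together with the residual items and knapsacks:

```python
def preprocess(weights, capacities, admissible):
    """Preprocessing step: while the knapsack k with the smallest ratio
    W_k^t / c_k has ratio < 1 (i.e., all its remaining admissible items
    fit), assign all of them to k and repeat; otherwise stop."""
    remaining = set(weights)      # unassigned items
    open_knaps = set(capacities)  # knapsacks not yet considered
    fixed = {}                    # item -> knapsack, fixed by preprocessing
    while open_knaps and remaining:
        def adm(i):  # W_i^t: remaining admissible weight of knapsack i
            return sum(weights[j] for j in remaining if i in admissible[j])
        k = min(open_knaps, key=lambda i: adm(i) / capacities[i])
        if adm(k) >= capacities[k]:
            break  # smallest ratio is >= 1: stop
        for j in [j for j in remaining if k in admissible[j]]:
            fixed[j] = k
            remaining.discard(j)
        open_knaps.discard(k)
    return fixed, remaining, open_knaps
```
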

3.2 Solving Single Knapsack Problems

Consider an algorithm that runs through the knapsacks in any order and solves a single knapsack problem for each knapsack to find a maximum weight packing of admissible items. We call such an algorithm a successive knapsack algorithm. The generic algorithm is as follows.

Successive Knapsack Algorithm:
- Initialize: S = N, Weight_i = 0, for i = 1, …, m.
- For each knapsack i:
  - Solve a single knapsack problem for knapsack i with item set S ∩ B_i to maximize assigned weight.
  - Let S_i be the set of items packed, with total weight Weight_i.
  - Update S by removing S_i from it.
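A sketch of the generic algorithm, assuming integer item weights so that the single knapsack subproblem (profit equals weight) can be solved by pseudo-polynomial dynamic programming over reachable weight sums; the function names and data layout are our own:

```python
def knapsack_max_weight(items, weights, cap):
    """0-1 knapsack with profit == weight: maximize total weight <= cap.
    DP over reachable integer weight sums; returns (chosen items, weight)."""
    best = {0: []}  # best[c] = a set of items whose total weight is exactly c
    for j in items:
        w = weights[j]
        # iterate over a snapshot so each item is used at most once
        for c, chosen in sorted(best.items(), reverse=True):
            if c + w <= cap and (c + w) not in best:
                best[c + w] = chosen + [j]
    top = max(best)
    return best[top], top

def successive_knapsack(weights, capacities, admissible_items):
    """Successive Knapsack Algorithm: visit the knapsacks in any order,
    each time solving a single knapsack problem on the still-unassigned
    admissible items (admissible_items[i] is the set B_i)."""
    S = set(weights)
    solution, total = {}, 0
    for i in capacities:
        chosen, w = knapsack_max_weight(S & admissible_items[i],
                                        weights, capacities[i])
        for j in chosen:
            solution[j] = i
        S -= set(chosen)
        total += w
    return solution, total
```
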

Theorem 5 A successive knapsack algorithm is a 1/2-approximation algorithm for MKAR.

Proof: Let AW denote the total weight assigned by the algorithm, i.e., AW = Σ_{i∈M} Weight_i. Let S be the set of items remaining unassigned at the end of the algorithm. The proof follows from the following claim: in any feasible solution to MKAR, the total weight of the items from S that are assigned is at most AW. To prove this claim, consider any feasible solution x. Suppose the weight of the items in S that are assigned in this solution equals AW + δ for some δ > 0. Then in solution x at least one knapsack, say i, has weight greater than Weight_i assigned to it from S. However, this is a contradiction, because when we filled knapsack i in the algorithm, we used an item set that included S and chose a maximum weight assignment of value Weight_i. Thus, any feasible solution can have total assigned weight at most 2AW. □

3.3 LP Rounding: A 1/2-Approximation Algorithm

The IP formulation of the MKAR problem is as follows:

  max  Σ_{i∈M} Σ_{j∈B_i} w_j x_ij
  s.t. Σ_{j∈B_i} w_j x_ij ≤ c_i,  i ∈ M
       Σ_{i∈A_j} x_ij ≤ 1,  j ∈ N
       x_ij ∈ {0, 1},  i ∈ A_j, j ∈ N

ow algorithm. The continuous problem reduces to a maximum ow problem on a directed graph constructed from the bigraph G as follows. Each edge (j; i) of G is directed from the item node j to the knapsack node i and is assigned capacity wj . A source node s is connected to each item node j via an arc (s; j ) with capacity wj . In addition a sink node t is connected to each knapsack node i via an arc (i; t) with capacity ci . Then, the maximum ow from s to t equals the LP relaxation value and the amount of ow on arc (j; i) divided by wj gives the value of xij . Thus, if ow on (j; i) equals wj , i.e. xij = 1, then item j is assigned to knapsack i. If 0 < xij < 1, the variable is said to be fractional (in the corresponding solution). Let x be an optimal basic feasible solution to the LP relaxation and let f denote the vector of fractional variables of x. We denote the subgraph of the bigraph representation G induced by the edges in f by RG and call it the residual graph.

Lemma 6 The residual graph RG of the LP relaxation is a forest.

Proof: Suppose there exists a cycle, say C. Note that C has to be even. Let C = I_1, K_1, I_2, K_2, …, K_l, I_1, where the I_j denote item nodes and the K_i denote knapsack nodes in the cycle. Let f_C = (f_1, …, f_k) denote the vector of fractional values on the edges of C when C is traversed in the above order. The main idea is that, keeping all other values fixed, we can perturb the variables on this cycle so that x is a convex combination of two feasible solutions, which is a contradiction. The two feasible solutions x_1 and x_2 are obtained by adding the vectors f_ε^1 and f_ε^2 to f_C, respectively. The first perturbation vector is f_ε^1 = (ε, −(w_1/w_2)ε, (w_1/w_2)ε, …, −ε) and the second one is f_ε^2 = (−ε, (w_1/w_2)ε, −(w_1/w_2)ε, …, ε). Note that x_1 and x_2 are feasible to LP-MKAR, since the perturbations preserve both the capacity and the assignment constraints. Note also that x is a convex combination of x_1 and x_2. □

We can consider each connected component separately. Let OPT denote the optimum value of LP-MKAR and let AW(K_i) denote the total weight assigned to knapsack K_i in x. In RG, if a knapsack has degree 1, then we refer to that knapsack as a singleton knapsack.

Lemma 7 If RG has a singleton knapsack node, say K_1, then there is a rounding scheme for knapsack K_1 such that

  R_1 ≥ OPT − (1/2)AW(K_1),

where R_1 denotes the objective function value of the solution obtained after the rounding step for K_1.

Proof: Let S denote the set of items which have been assigned (integrally) to knapsack K_1 by the solution x. Note that S is nonempty; otherwise there would be no fractional variable incident on K_1. Let W(S) denote the total weight of the items in S. In addition, let I_1 be the item which is fractionally assigned to knapsack K_1, with value f_1. For ease of notation, we denote w_{I_1} by w_1. If W(S) ≥ w_1 f_1, then we round f_1 down to zero. Now K_1 has no fractional variable incident on it. Let us denote the new solution by x_1. Then x_1 has objective value R_1 = OPT − w_1 f_1. Since AW(K_1) = w_1 f_1 + W(S) ≥ 2 w_1 f_1, it follows that R_1 ≥ OPT − (1/2)AW(K_1). On the other hand, if w_1 f_1 > W(S), then we unassign all items in S and round f_1 to 1. We also round all other fractional variables incident on I_1 to 0. Then R_1 ≥ OPT − W(S) ≥ OPT − (1/2)AW(K_1). □

Note that after the rounding step for knapsack K_1, node K_1 is removed from RG, and the new residual graph is still a forest, since the rounding step does not add any new edges to RG. We call the above rounding step a "type 1" step. Next we define a second type of rounding step: a "type 2" step.

Lemma 8 If RG has no singleton knapsack node, then the fractional solution corresponding to RG can be perturbed such that (i) the objective function value does not change and (ii) the number of arcs in RG is reduced by at least 1.

Proof: Suppose there exists no singleton knapsack in the current bigraph RG. Consider a knapsack node K of degree at least 2. Let e_1 and e_2 be two edges incident on K, with fractional values f_1 and f_2, respectively. Consider the two disjoint paths originating from node K and starting with edges e_1 and e_2. Each of these paths must have an item node as its endpoint, for otherwise we would have a singleton knapsack node. The key observation is that, for each of these paths, the leaf node has an assignment constraint which is not tight. Along each path we can perturb the f values so that we obtain a feasible solution having one less fractional variable and the same objective value. The perturbations are as follows. Let f^1 = (f_1, …, f_l) denote the vector of fractional values on the edges of the first path and let I_1, …, I_l denote the items on this path. Similarly, let f^2 = (f_{l+1}, …, f_p) denote the fractional vector and I_{l+1}, …, I_p the items of the second path. We modify the current solution x on f^1 and f^2 by adding the vectors f_ε^1 and f_ε^2, respectively. The first perturbation vector is f_ε^1 = ((w_l/w_1)ε, −(w_l/w_2)ε, …, ε) and the second one is f_ε^2 = (−(w_l/w_{l+1})ε, (w_l/w_{l+1})ε, …, −(w_l/w_p)ε) (see Figure 2). Note that the new solution is feasible to LP-MKAR, since the perturbations preserve both the capacity and the assignment constraints. Now we can increase ε until one of the fractional values hits 0 or 1. This new solution has at least one less fractional variable; thus, in RG, at least one edge is removed. The perturbations do not change the objective function value, so a new feasible solution with the same objective value is obtained. □


Figure 2: Perturbation of the fractional variables

Using the two types of rounding steps, we obtain the following rounding scheme.

Step 0: Obtain a vertex solution x to LP-MKAR. Construct the residual graph RG based on the fractional variables.

Step 1: If there exists a singleton knapsack node in RG, perform a type 1 rounding and update RG. Repeat Step 1 until there are no singleton knapsacks in RG.

Step 2: Pick a knapsack node, say K, of degree at least 2. Perform a type 2 rounding and update RG. Go to Step 1.

Theorem 9 There is a 1/2-approximation algorithm for MKAR that is based on an LP rounding scheme and runs in time bounded by a polynomial in the input size.

Proof: We show that the rounding scheme outlined above yields a 1/2-approximation algorithm.

After every type 1 rounding step, a knapsack node is removed from RG. After every type 2 rounding step, at least one edge is removed from RG. Therefore, the rounding algorithm has at most m + n iterations, each of which takes at most O(mn) time. The LP relaxation can be solved by a maximum flow algorithm, and a vertex solution can be found in polynomial time (Ahuja, Magnanti and Orlin [AMO93]). After any type 2 step, the objective value does not change. Let R_k denote the objective function value after the k-th type 1 step. Suppose we pick knapsack K_i at the i-th type 1 step. Then, by Lemma 7, we have

  R_k ≥ OPT − (1/2) Σ_{i=1}^{k} AW(K_i).

Let p denote the last type 1 step. Note that p ≤ m. Then,

  R_p ≥ OPT − (1/2) Σ_{i=1}^{p} AW(K_i) ≥ OPT − (1/2)OPT = (1/2)OPT.

Hence, the feasible integral solution output by the algorithm has objective value at least half of the LP relaxation value. □

Corollary 10 The LP relaxation bound is 2-good. That is, the optimal objective value of the LP relaxation for the sparse multiple knapsack problem is at most 2 times the optimal objective value of the sparse multiple knapsack problem.

Proof: Given an instance I of MKAR, let LP(I) denote the optimal value of the LP relaxation and let OPT(I) denote the optimal value of MKAR. In addition, let H(I) denote the value of the solution output by the rounding algorithm for instance I. By Theorem 9, (1/2)LP(I) ≤ H(I). Since H(I) ≤ OPT(I), it follows that LP(I) ≤ 2·OPT(I) for any instance I. □

The IP-LP gap of 2 is tight, as shown by the following simple instance. Consider an instance with one knapsack of capacity 2M and only two items, of weight M + ε and M. The optimum IP solution has assigned weight M + ε, whereas the optimum value of the LP relaxation is 2M.

3.4 Modified Greedy

The performance ratio of the greedy algorithms discussed in Section 3.1 is relatively low because those algorithms assign items to the "wrong" knapsacks in certain instances: they create assignments in which the remaining unassigned items are admissible only to knapsacks that are already filled with items that are also admissible to some of the unutilized knapsacks. To overcome this drawback, we have developed a modified greedy algorithm, which we present in this section. The main idea is to give priority, while creating assignments, to the items that are admissible to only a few knapsacks.

Algorithm Modified Greedy
(1) For each knapsack i, compute the remaining admissible weight RA_i.
(2) Pick a knapsack, say knapsack k, with the smallest RA_k.
(2.a) If the weight admissible to knapsack k, excluding the weight admissible to the remaining unfilled knapsacks, is larger than c_k/2, assign as much as possible from that weight to knapsack k (note that we can assign at least c_k/2).
(2.b) If (2.a) does not hold, then greedily assign as much as possible to knapsack k (sorting the items by decreasing order of weight). Again, if the total remaining admissible weight to knapsack k is at least c_k, we can assign at least c_k/2.
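The steps above can be sketched as follows. Reading "admissible weight to k, excluding the admissible weight to the remaining unfilled knapsacks" as the weight of items admissible only to k among the still-unfilled knapsacks is our interpretation, and the helper names are hypothetical:

```python
def modified_greedy(weights, capacities, admissible):
    """A sketch of Algorithm Modified Greedy. exclusive(k) below collects
    the items admissible to k but to no other unfilled knapsack, which is
    our reading of Step (2.a)."""
    remaining = set(weights)
    unfilled = set(capacities)
    assignment = {}

    def fill(k, candidates):
        # greedily pack candidates (heaviest first) into knapsack k
        cap = capacities[k]
        for j in sorted(candidates, key=weights.get, reverse=True):
            if weights[j] <= cap:
                assignment[j] = k
                cap -= weights[j]
                remaining.discard(j)

    while unfilled and remaining:
        def RA(i):  # remaining admissible weight of knapsack i
            return sum(weights[j] for j in remaining if i in admissible[j])
        k = min(unfilled, key=RA)
        unfilled.discard(k)
        exclusive = [j for j in remaining
                     if k in admissible[j] and not (admissible[j] & unfilled)]
        if sum(weights[j] for j in exclusive) > capacities[k] / 2:
            fill(k, exclusive)                                     # Step (2.a)
        else:
            fill(k, [j for j in remaining if k in admissible[j]])  # Step (2.b)
    return assignment
```

On the data of Example 1 (M = 10, ε = 1) this sketch recovers the optimal assignment, since the small knapsack is processed first.
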

Theorem 11 Algorithm Modified Greedy is a 1/2-approximation for MKAR.

Proof: Based on the assignment generated by the algorithm, partition the knapsacks as follows:
Type 1. Knapsacks which are assigned weight in Step (2.a).
Type 2. Knapsacks which are assigned weight in Step (2.b) and are at least half full.
Type 3. Knapsacks which are assigned weight in Step (2.b) and are less than half full.

Let C_t be the total capacity of and W_t the total weight assigned to type t knapsacks, t = 1, 2, 3. Let W = W_1 + W_2 + W_3 denote the total weight assigned by Algorithm Modified Greedy and let W* denote the weight assigned in an optimal solution. Let W_2' be the total weight of the remaining unassigned items that are admissible to type 2 knapsacks. (Note that any remaining unassigned item is admissible only to type 1 or type 2 knapsacks.) The proof follows from the following claims.
Claim 1: The weight assigned in an optimal solution is at most C_1 + W_2 + W_2' + W_3.
Claim 2: W_2' ≤ C_2/2.
Claim 3: The total weight assigned by modified greedy is at least C_1/2 + W_2 + W_3.

So, the ratio W/W* is at least

  (C_1/2 + W_2 + W_3) / (C_1 + W_2 + W_2' + W_3) ≥ 1/2,

because (C_1/2 + W_3)/(C_1 + W_3) ≥ 1/2 and W_2/(W_2 + W_2') = 1/(1 + W_2'/W_2) ≥ 1/2, the latter since W_2 ≥ C_2/2 ≥ W_2' by Claim 2, so that W_2'/W_2 ≤ 1.

Proof of Claim 1: The maximum weight that can be assigned to type 1 knapsacks is at most C_1. Any remaining unassigned items can only be assigned to type 2 knapsacks, and the total weight of those items is at most W_2'.
Proof of Claim 2: By induction, at the j-th execution of Step (2.b), the unassigned weight admissible to the knapsack under consideration, say knapsack k, excluding the weight admissible to all the remaining knapsacks, is less than c_k/2.
Proof of Claim 3: Each type 1 knapsack is assigned at least half of its capacity in Step (2.a), so W_1 ≥ C_1/2, and hence W = W_1 + W_2 + W_3 ≥ C_1/2 + W_2 + W_3. □

4 Bicriteria Problems

In the surplus inventory matching application that we model, in addition to maximizing assigned weight, minimizing total utilized capacity (or, equivalently, the total unused capacity of the utilized knapsacks) is also an important objective. With two objectives, the choice of the knapsack to which an item is assigned becomes more critical. As the problem gets sparser, a solution that maximizes assigned weight does not necessarily have small utilized capacity, so the bicriteria problem becomes more interesting for sparse problems. In approximating a bicriteria problem, it is common practice to choose one of the criteria as the objective and to bound the value of the other by means of a constraint. In this section, we consider BMKAR with the objective of minimizing total utilized capacity, subject to a lower bound on the assigned weight. The integer programming formulation of this capacity minimization problem (CM) is as follows.

  min   Σ_{i∈M} c_i z_i
  s.t.  Σ_{i∈M} Σ_{j∈B_i} w_j x_ij ≥ T
        Σ_{j∈B_i} w_j x_ij ≤ c_i,  i ∈ M
        Σ_{i∈A_j} x_ij ≤ 1,  j ∈ N
        z_i ≥ x_ij,  i ∈ A_j, j ∈ N
        x_ij ∈ {0, 1}, z_i ∈ {0, 1}

It is easy to construct an instance of CM for which the algorithms we have presented so far fail to perform well in terms of minimizing utilized capacity. Consider an instance with k + 1 unit-weight items, k knapsacks of capacity k − 1, and an additional knapsack of capacity k. Item i can be assigned to knapsack i, i = 1, ..., k + 1. In addition, each of the first k items can be assigned to the (k + 1)st knapsack, which has capacity k. The target weight to be assigned is T = k. The optimal solution is to assign k items to the knapsack of capacity k, so that the utilized capacity is k. The greedy, greedy ratio, modified greedy and successive knapsack algorithms will all assign one item to each of the first k knapsacks and utilize k(k − 1) units of capacity, which is k − 1 times the optimum capacity.

One natural question to ask is: what is the best possible performance guarantee of an algorithm for the objective of minimizing utilized capacity, if the weight constraint must be satisfied exactly? The following theorem shows that the best possible performance ratio for any polynomial time algorithm is log n.

Theorem 12 Any algorithm which satisfies the minimum assigned weight constraint exactly cannot have a performance ratio better than log n (where n is the number of items) for BMKAR with the objective of minimizing utilized capacity, unless NP ⊆ DTIME(n^{log log n}).

Proof: The reduction is from minimum set cover. Given an instance of minimum set cover, i.e., a set S with |S| = n and a collection of subsets C_1, C_2, ..., C_k of S, construct an instance of BMKAR as follows. For each element of S, create an item of weight ε, where 0 < ε ≤ 1/n. For each subset C_i, create a knapsack with capacity 1; an item is admissible to knapsack i if and only if the corresponding element belongs to C_i. Let the minimum weight we want to assign equal T = nε. The minimum capacity needed to assign weight nε equals the minimum number of knapsacks needed to hold all the items, which equals the cardinality of the minimum set cover. However, minimum set cover is not approximable within (1 − ε) log n unless NP ⊆ DTIME(n^{log log n}) [CK98][F96]. □

In the remaining part of this section, we present two algorithms which relax the assigned weight constraint in order to minimize the utilized capacity. An (α, β)-approximation algorithm for CM generates solutions with utilized capacity at most β times the optimum capacity and with assigned weight at least α times the lower bound T. The algorithms we present are (1/3, 2)- and (1/2, 3)-approximation algorithms.
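The set cover reduction in the proof of Theorem 12 can be sketched as follows. The ε-weight reading of the construction and all helper names are assumptions for illustration; the brute-force cover routine is only there to check the correspondence on a tiny instance.

```python
# Elements -> items of weight eps; subsets -> knapsacks of capacity 1;
# target T = n * eps (assign every item).
from itertools import combinations

def reduce_set_cover(universe, subsets, eps):
    items = [(e, eps) for e in universe]                 # weight eps each
    knapsacks = [1.0] * len(subsets)                     # capacity 1 each
    admissible = {e: {i for i, C in enumerate(subsets) if e in C}
                  for e in universe}
    T = len(universe) * eps
    return items, knapsacks, admissible, T

def min_cover_size(universe, subsets):
    """Brute-force minimum set cover, to check the correspondence."""
    for r in range(1, len(subsets) + 1):
        for combo in combinations(range(len(subsets)), r):
            if set().union(*(subsets[i] for i in combo)) >= set(universe):
                return r
    return None

U = [1, 2, 3, 4]
C = [{1, 2}, {2, 3}, {3, 4}, {1, 4}]
items, caps, adm, T = reduce_set_cover(U, C, eps=0.1)
# Minimum utilized capacity to assign weight T = size of a minimum cover:
# here {1, 2} and {3, 4} cover U, so two knapsacks suffice.
print(min_cover_size(U, C))  # -> 2
```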

4.1 Solving Single Knapsack Problems

Given a set S ⊆ N of items, let Weight_i denote the maximum weight of items from S that can be packed into knapsack i. Note that Weight_i can be found by a pseudo-polynomial algorithm that solves a single knapsack problem. Let us denote the unused capacity, or waste, in this knapsack by Waste_i = c_i − Weight_i. Let AW denote the total weight assigned by the following algorithm.

Selective Successive Knapsack Algorithm
Initialize: S = N, R = M, Weight_i = 0 for i ∈ M, AW = 0.

Phase I:
While AW < T/3 and R is non-empty:
- For each i ∈ R, calculate Weight_i and Waste_i by solving a single knapsack problem for knapsack i with item set S ∩ B_i.
- Pick the knapsack with minimum ratio Waste_i/Weight_i, say knapsack k.
- Pack items into knapsack k to obtain Weight_k, and set AW = AW + Weight_k.
- Remove the assigned items from S, and knapsack k from R.

Phase II:
- Sort the knapsacks in R in non-decreasing order of capacity.
- Pick the next knapsack in this order, say k.
- If Weight_k + AW ≥ T/3, pack items from S ∩ B_k to obtain Weight_k and stop.

Suppose we run the selective successive knapsack algorithm. If the algorithm ends with AW < T/3, then we can conclude that there exists no assignment of value at least T for CM: since the algorithm is a successive knapsack algorithm, by Theorem 5 it would assign at least T/2 if there were any feasible solution with assigned weight T or more. If the algorithm outputs AW ≥ T/3, then, in order to make sure that the capacity of the knapsack picked in Phase II is small enough, we rerun the algorithm excluding the largest capacity knapsack. We repeat the runs, excluding the k − 1 largest capacity knapsacks in the kth run, as long as the AW output by the algorithm is at least T/3, and we stop the runs when AW gets smaller than T/3. Among the solutions output over all runs, the minimum capacity solution is picked.
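A single run of the algorithm can be sketched as below. Integer item weights are assumed so the single-knapsack subproblem can be solved exactly by a subset-sum style dynamic program; the stopping rule follows my reading of the two phases (Phase I stops before AW reaches T/3, and Phase II adds one cheapest knapsack that lifts AW past T/3). All names are illustrative.

```python
def best_packing(weights, items, capacity):
    """Max total weight of a subset of `items` fitting in `capacity`
    (subset-sum DP); returns (weight, chosen item list)."""
    reach = {0: []}                       # achievable weight -> items used
    for j in items:
        for w, used in list(reach.items()):
            nw = w + weights[j]
            if nw <= capacity and nw not in reach:
                reach[nw] = used + [j]
    best = max(reach)
    return best, reach[best]

def selective_successive(weights, capacities, admissible, T):
    """admissible[j]: knapsacks item j may enter.
    Returns (assignment dict item -> knapsack, AW)."""
    S = set(range(len(weights)))
    R = set(range(len(capacities)))
    assignment, AW = {}, 0
    # Phase I: repeatedly fill the knapsack with minimum Waste/Weight ratio.
    while R:
        packs = {}
        for i in R:
            wt, chosen = best_packing(weights,
                                      [j for j in S if i in admissible[j]],
                                      capacities[i])
            if wt > 0:
                packs[i] = ((capacities[i] - wt) / wt, wt, chosen)
        if not packs:
            break
        k = min(packs, key=lambda i: packs[i][0])
        _, wt, chosen = packs[k]
        if AW + wt >= T / 3:
            break                         # leave the final push to Phase II
        for j in chosen:
            assignment[j] = k
            S.discard(j)
        AW += wt
        R.discard(k)
    # Phase II: smallest-capacity remaining knapsack lifting AW to >= T/3.
    for k in sorted(R, key=lambda i: capacities[i]):
        wt, chosen = best_packing(weights,
                                  [j for j in S if k in admissible[j]],
                                  capacities[k])
        if AW + wt >= T / 3:
            for j in chosen:
                assignment[j] = k
            AW += wt
            break
    return assignment, AW
```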

Theorem 13 By running the selective successive knapsack algorithm at most m times, a (1/3,2)-approximate solution is obtained for the CM problem, if the problem is feasible.

Proof: Suppose the problem is feasible. Consider the run which outputs the solution with minimum capacity. This solution has AW at least T/3 by the above argument. We need to show the bound on utilized capacity.

Consider a solution x_H output by the algorithm and an optimal solution x_OPT. Modify x_OPT to obtain a solution x_OPT' as follows. Let OPT be the set of knapsacks used by the optimal solution, and let H be the set of knapsacks picked in Phase I of the algorithm. Remove from OPT all knapsacks that are also in H; among the items assigned to the remaining knapsacks of OPT, unassign those assigned to H in x_H. Let W_H denote the weight assigned to H in x_H. The weight of the items removed from OPT is at most 2W_H. Clearly, the weight of items assigned both in x_H and in x_OPT is at most W_H. Let S be the set of items that are assigned to a knapsack in H ∩ OPT in x_OPT but are not assigned anywhere in x_H. We claim that the weight of S is at most W_H. Let S_i be the items from S assigned to knapsack i ∈ H in x_OPT. These items were available when knapsack i was filled by the algorithm; therefore, the weight of S_i can be at most Weight_i. Thus, the weight of S is at most W_H, which is at most T/3 by the stopping condition of Phase I. The total weight removed from OPT is at most 2T/3.

For a knapsack subset I, let C(I) denote Σ_{i∈I} c_i. For a solution x, let AW(x) denote the total weight assigned in x and Waste(x) the total waste in x. The following are true:
1) C(OPT') ≤ C(OPT), since OPT' ⊆ OPT.
2) AW(x_OPT') ≥ T/3, because AW(x_OPT) ≥ T and the weight of the items removed from OPT is at most 2T/3.
3) Waste(x_OPT') + T/3 ≤ C(OPT'), by the definition of waste and 2).

For any knapsack i ∈ H,

  Waste_i / Weight_i ≤ Waste(x_OPT') / AW(x_OPT') ≤ Waste(x_OPT') / (T/3).

The first inequality follows since the ratio of a knapsack picked by the algorithm is no larger than the ratio of the knapsacks in OPT' at the time that choice was made, and the average ratio of the knapsacks in OPT' is at least as large as it was at that time, since items have only been removed from OPT'. The second inequality follows directly from 2). Summing these inequalities over i ∈ H, we obtain

  Σ_{i∈H} Waste_i ≤ (Waste(x_OPT') / (T/3)) Σ_{i∈H} Weight_i.

Since Σ_{i∈H} Weight_i ≤ T/3,

  Σ_{i∈H} Waste_i ≤ Waste(x_OPT').

Using this inequality together with 1) and 3), we get

  C(H) = Σ_{i∈H} Waste_i + Σ_{i∈H} Weight_i ≤ Waste(x_OPT') + T/3 ≤ C(OPT') ≤ C(OPT).

We can also bound the capacity of the knapsack picked in Phase II by C(OPT), as follows. Let p be a knapsack in OPT with maximum capacity. There is a run of the algorithm which excludes exactly the knapsacks of capacity greater than c_p. In that run, the capacity of the knapsack picked in Phase II, say knapsack t, is at most the capacity of any knapsack in OPT. Since the run with minimum capacity is chosen, C(H) + c_t ≤ 2C(OPT). Therefore, the capacity utilized by x_H is at most twice the capacity utilized by x_OPT, while the weight assigned by x_H is at least one third of the target T. □

4.2 Bicriteria Modified Greedy

In this section, we present an algorithm which, at each step, selects a knapsack with large admissible weight but small capacity. The algorithm has two phases. In the first phase, we assign items to knapsacks such that the total assigned weight at the end of Phase 1 is close to, but does not exceed, half of the target weight. In the second phase, we choose one more knapsack and assign items to it, such that the total assigned weight at the end of the second phase is at least half of the target weight.

Algorithm Bicriteria Modified Greedy
Phase 1:
(1) For each remaining knapsack i, compute the remaining admissible weight RA_i and sort the knapsacks in non-increasing order of RA_i/c_i.
(2) Let k be the knapsack with the largest ratio RA_k/c_k.
(2.a) If the weight admissible to knapsack k, excluding the weight admissible to the remaining unfilled knapsacks, is larger than c_k/2, assign as much as possible from that weight to knapsack k (note that we can assign at least c_k/2).
(2.b) If (2.a) does not hold, then greedily assign as much as possible to knapsack k (sorting the items in non-increasing order of weight). Again, if the total admissible weight is at least c_k/2, we can assign at least c_k/2.
(3) If the total assigned weight is more than T/2, remove the items assigned to the last knapsack and go to Phase 2. Otherwise, go to step (1).

Phase 2:
For each remaining knapsack b:
(1) Sort the items admissible to knapsack b in non-increasing order of weight.
(2) Greedily assign items to knapsack b until the total assigned weight exceeds T/2.
Among the remaining knapsacks for which the total assigned weight exceeds T/2, pick the one with the smallest capacity. (We know such a knapsack exists from Step (3) of Phase 1.)
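The two phases above can be sketched as follows. The data layout is an assumption, not from the report; ties and the packing order inside a knapsack follow the non-increasing-weight rule of steps (2.a)/(2.b), and the last knapsack undone in step (3) remains available to Phase 2.

```python
def bicriteria_modified_greedy(weights, capacities, admissible, T):
    """Returns a dict item -> knapsack assigning, when possible,
    total weight more than T/2."""
    unassigned = set(range(len(weights)))
    remaining = set(range(len(capacities)))
    assignment = {}

    def pack(k, pool, room):
        # greedy packing in non-increasing order of weight
        for j in sorted(pool, key=lambda j: -weights[j]):
            if weights[j] <= room:
                assignment[j] = k
                room -= weights[j]
                unassigned.discard(j)

    def assigned_weight():
        return sum(weights[j] for j in assignment)

    # Phase 1: pick knapsacks in non-increasing order of RA_i / c_i.
    while remaining:
        ra = {i: sum(weights[j] for j in unassigned if i in admissible[j])
              for i in remaining}
        k = max(remaining, key=lambda i: ra[i] / capacities[i])
        others = remaining - {k}
        exclusive = [j for j in unassigned
                     if k in admissible[j] and not (admissible[j] & others)]
        before = set(assignment)
        if sum(weights[j] for j in exclusive) > capacities[k] / 2:
            pack(k, exclusive, capacities[k])          # step (2.a)
        else:
            pack(k, [j for j in unassigned if k in admissible[j]],
                 capacities[k])                        # step (2.b)
        if assigned_weight() > T / 2:                  # step (3): undo, stop
            for j in set(assignment) - before:
                unassigned.add(j)
                del assignment[j]
            break
        remaining.discard(k)

    # Phase 2: smallest-capacity remaining knapsack that lifts the
    # total assigned weight past T/2.
    for b in sorted(remaining, key=lambda i: capacities[i]):
        room, extra, chosen = capacities[b], 0.0, []
        for j in sorted((j for j in unassigned if b in admissible[j]),
                        key=lambda j: -weights[j]):
            if assigned_weight() + extra > T / 2:
                break
            if weights[j] <= room:
                chosen.append(j)
                room -= weights[j]
                extra += weights[j]
        if assigned_weight() + extra > T / 2:
            for j in chosen:
                assignment[j] = b
                unassigned.discard(j)
            break
    return assignment
```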

In the analysis of the algorithm, we first show that the total capacity used in the first phase of the algorithm is at most twice the capacity in an optimum solution.

Lemma 14 The capacity used in the first phase of the algorithm is at most twice the optimum capacity needed to assign weight T.

Proof: Based on the assignment in Phase 1, partition the knapsacks into four types:
Type 1. Knapsacks which are assigned weight in Step (2.a).
Type 2. Knapsacks which are assigned weight in Step (2.b) and are at least c_i/2 full.
Type 3. Knapsacks which are assigned weight in Step (2.b) and are less than c_i/2 full.
Type 4. Knapsacks which are not utilized.

Let S_t denote the set of type t knapsacks. Let W_t be the total weight assigned to, C_t the total capacity of, and k_t the number of type t knapsacks, t = 1, ..., 4.

Claim 1: The type 3 knapsacks (if any) are the last k_3 knapsacks filled in Phase 1.
Proof of Claim 1: This holds because the algorithm chooses the knapsacks in non-increasing order of RA_i/c_i.

Let W_2' be the total weight of the items which are admissible to type 2 knapsacks but are either not assigned at all or assigned to type 1 knapsacks.
Claim 2: W_2' ≤ C_2/2.
Proof of Claim 2: By induction, at the j-th execution of Step (2.b), the remaining weight admissible to the knapsack under consideration, say knapsack i, but not admissible to the remaining knapsacks, is less than c_i/2.
Claim 3: Any item assigned to a type 1 knapsack is not admissible to type 3 or type 4 knapsacks.
Proof of Claim 3: When an item is assigned to a type 1 knapsack, that item is admissible only to knapsacks considered before it. By Claim 1, type 3 and type 4 knapsacks are not considered until all type 1 and type 2 knapsacks are assigned, so the items assigned to type 1 knapsacks are not admissible to type 3 and type 4 knapsacks.

Let A(S) be the total weight admissible to the knapsacks in set S (and possibly to other knapsacks as well). Let RA{1,2}(S) denote the total remaining weight admissible to the knapsacks in set S after the type 1 and type 2 knapsacks are assigned by the heuristic.
Claim 4: If k_3 > 0, then for any subset S of type 3 and type 4 knapsacks, A(S_2 ∪ S) ≤ 2W_2 + RA{1,2}(S).
Proof of Claim 4: Partition the weight admissible to the type 2 knapsacks and to the knapsacks in S as A(S_2 ∪ S) = A(S_2 − S) + A(S − S_2) + A(S_2 ∩ S), where the three terms count the items admissible only to S_2, only to S, and to both, respectively. Define:

W21: the total weight of the items assigned to type 1 knapsacks which are also admissible to type 2 knapsacks, but not to the knapsacks in S.
W20: the total weight of the remaining unassigned items which are admissible to type 2 knapsacks.
WS1: the total weight of the items assigned to type 1 knapsacks which are also admissible to the knapsacks in S, but not to type 2 knapsacks.
W(2S)1: the total weight of the items assigned to type 1 knapsacks which are admissible to type 2 knapsacks and to the knapsacks in S.
W(2S)2: the total weight of the items assigned to type 2 knapsacks which are also admissible to the knapsacks in S.
W22: the total weight of the items assigned to type 2 knapsacks which are not admissible to the knapsacks in S.
W(2S)S: the total weight of the remaining items admissible to the knapsacks in S which are also admissible to type 2 knapsacks.
WSS: the total weight of the remaining items admissible to the knapsacks in S which are not admissible to type 2 knapsacks.

From Claim 3, we know that WS1 and W(2S)1 are zero, so we have

  A(S − S_2) ≤ WSS,
  A(S_2 − S) ≤ W21 + W20 + W22,
  A(S_2 ∩ S) ≤ W(2S)2 + W(2S)S.

Summing up these inequalities, we get

  A(S_2 ∪ S) ≤ W21 + W20 + W_2 + WSS + W(2S)S ≤ 2W_2 + RA{1,2}(S),

since W22 + W(2S)2 = W_2, W21 + W20 ≤ W_2 by Claim 2, and WSS + W(2S)S = RA{1,2}(S).

Claim 5: If k_3 > 0, the maximum weight one can assign to the type 1 and type 2 knapsacks and to a subset S of type 3 and type 4 knapsacks in any solution is at most C_1 + 2W_2 + RA{1,2}(S).
Proof of Claim 5: The maximum weight one can assign to the type 1 knapsacks is C_1. The maximum weight one can assign to the type 2 knapsacks and S is at most A(S_2 ∪ S) ≤ 2W_2 + RA{1,2}(S). So, the total weight one can assign to the knapsacks in S_1 ∪ S_2 ∪ S is at most C_1 + 2W_2 + RA{1,2}(S).
Corollary 5: If k_3 > 0, then the maximum weight one can assign to the type 1, type 2 and type 3 knapsacks in any solution is at most C_1 + 2W_2 + W_3 ≤ 2W − W_3 ≤ T − W_3.

Let S_t*, t = 1, 2, 3, 4, denote the subset of type t knapsacks used by an optimum solution and let C* denote the total capacity of those knapsacks. Clearly, S_4* is not empty (from Corollary 5). Let W*(S) denote the weight assigned to the knapsacks in set S by the optimum solution. Let RA{1,2,3}(S) be the remaining weight admissible to a subset S of type 4 knapsacks after the type 1, 2 and 3 knapsacks are filled by Algorithm Bicriteria Modified Greedy.
Claim 6: If k_3 > 0, then RA{1,2,3}(S_4*) ≥ W_3.
Proof of Claim 6: From Claim 5,

  T ≤ W*(S_1*) + W*(S_2*) + W*(S_3* ∪ S_4*) ≤ C_1 + 2W_2 + RA{1,2}(S_3* ∪ S_4*).

We also have T ≥ 2W ≥ C_1 + 2W_2 + 2W_3 from Corollary 5. Hence,

  C_1 + 2W_2 + 2W_3 ≤ C_1 + 2W_2 + RA{1,2}(S_3* ∪ S_4*),

and this implies RA{1,2}(S_3* ∪ S_4*) ≥ 2W_3. Furthermore, we have

  2W_3 ≤ RA{1,2}(S_3* ∪ S_4*) = RA{1,2}(S_3*) + RA{1,2,3}(S_4*) ≤ W_3 + RA{1,2,3}(S_4*),

and hence RA{1,2,3}(S_4*) ≥ W_3.
Claim 7: If k_3 > 0, then C(S_4*) ≥ C_3.
Proof of Claim 7: From Claim 6, S_4* satisfies RA{1,2,3}(S_4*) ≥ W_3. By the choice of the algorithm, RA_i/c_i ≥ RA{1,2,3}(S_4*)/C(S_4*) for every type 3 knapsack i used by the algorithm. Hence, W_3/C_3 ≥ RA{1,2,3}(S_4*)/C(S_4*), and since RA{1,2,3}(S_4*) ≥ W_3, we have C(S_4*) ≥ C_3.
Claim 8: Let H = S_1 ∪ S_2 ∪ S_3. If k_3 > 0, then C* ≥ (1/2) C(H).
Proof of Claim 8: We consider two cases.
Case 1: C_3 ≥ C_1 + C_2. In this case, C(H) ≤ 2C_3 ≤ 2C(S_4*) ≤ 2C*.
Case 2: C_3 < C_1 + C_2. We have C(H) = C_1 + C_2 + C_3 ≤ 2(C_1 + C_2) ≤ 2T ≤ 2C*.
Claim 9: If k_3 = 0, then C* ≥ C(H).
Proof of Claim 9: The total capacity used by the first phase of the heuristic is C_1 + C_2 ≤ T, since each utilized knapsack is at least half full, and T ≤ C*.

From Claims 8 and 9 it follows that C(H) ≤ 2C*. □

Lemma 15 The capacity of the knapsack picked in Phase 2 of Algorithm Bicriteria Modified Greedy is at most the optimum capacity.

Proof: The result follows by arguments similar to those used for Phase II of the selective successive knapsack algorithm. □

Theorem 16 Algorithm Bicriteria Modified Greedy assigns at least half of the target weight and uses at most three times the optimum capacity. Hence, it is a (1/2, 3)-approximation.

5 Conclusions

In this paper, we considered the multiple knapsack problem with assignment restrictions (MKAR). For the objective of maximizing assigned weight, we presented two 1/3-approximation algorithms and two 1/2-approximation algorithms. For the bicriteria problem, where we consider maximizing assigned weight as well as minimizing utilized capacity, we gave two approximation algorithms with performance ratios (1/3, 2) and (1/2, 3), respectively. One fundamental issue which remains unanswered is the existence of a polynomial time approximation scheme (PTAS) for the single objective problem. The existence of an approximation algorithm with a performance ratio better than 1/2 is also an open question.

References

[AMO93] R.K. Ahuja, T.L. Magnanti, and J.B. Orlin. Network Flows. Prentice Hall, New Jersey, 1993.
[CGJ84] E.G. Coffman, M.R. Garey, and D.S. Johnson. Approximation algorithms for bin-packing: An updated survey. In G. Ausiello, M. Lucertini, and P. Serafini, editors, Algorithm Design for Computer System Design, pages 49-106. Springer-Verlag, Wien, 1984.
[CGJ97] E.G. Coffman, M.R. Garey, and D.S. Johnson. Approximation algorithms for bin-packing: A survey. In D.S. Hochbaum, editor, Approximation Algorithms for NP-hard Problems, pages 46-93. PWS Publishing Company, Boston, 1997.
[CK98] P. Crescenzi and V. Kann. A compendium of NP optimization problems. Available at http://www.nada.kth.se/~viggo/problemlist/compendium.html.
[DK98] M.W. Dawande and J.R. Kalagnanam. The multiple knapsack problem with color constraints. Technical Report RC21138, IBM T.J. Watson Research Center, Yorktown Heights, NY, 1998.
[F96] U. Feige. A threshold of ln n for approximating set cover. In Proc. 28th Annual ACM Symposium on Theory of Computing, pages 314-318, 1996.
[FMW96] C.E. Ferreira, A. Martin, and R. Weismantel. Solving multiple knapsack problems by cutting planes. SIAM J. Optimization, 6(3):858-877, 1996.
[FL86] D.K. Friesen and M.A. Langston. Variable sized bin packing. SIAM J. Computing, 15:222-230, 1986.
[GJ79] M.R. Garey and D.S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W.H. Freeman and Co., San Francisco, 1979.
[HF78] M.S. Hung and J.C. Fisk. An algorithm for 0-1 multiple knapsack problems. Naval Res. Logist. Quarterly, 24:571-579, 1978.
[Kann91] V. Kann. Maximum bounded 3-dimensional matching is MAX SNP-complete. Information Process. Lett., 37:27-35, 1991.
[Kar72] R.M. Karp. Reducibility among combinatorial problems. In R.E. Miller and J.W. Thatcher, editors, Complexity of Computer Computations, pages 85-103. Plenum Press, New York, 1972.
[KDTL97] J. Kalagnanam, M. Dawande, M. Trumbo, and H.S. Lee. The surplus inventory matching problem in the process industry. Technical Report RC21071, IBM T.J. Watson Research Center, Yorktown Heights, NY, 1997.
[MT80] S. Martello and P. Toth. Solution of the zero-one multiple knapsack problem. Euro. J. Oper. Res., 4:322-329, 1980.
[MT81a] S. Martello and P. Toth. A bound and bound algorithm for the zero-one multiple knapsack problem. Discrete Applied Math., 3:275-288, 1981.
[MT81b] S. Martello and P. Toth. Heuristic algorithms for the multiple knapsack problem. Computing, 27:93-112, 1981.
[MT89] S. Martello and P. Toth. Knapsack Problems. John Wiley and Sons, Ltd., New York, 1989.
[MT90] S. Martello and P. Toth. Lower bounds and reduction procedures for the bin packing problem. Discrete Applied Math., 28:59-70, 1990.
