THE ORDERED OPEN-END BIN-PACKING PROBLEM

JIAN YANG
Department of Industrial and Manufacturing Engineering, New Jersey Institute of Technology, Newark, New Jersey 07102, [email protected]

JOSEPH Y.-T. LEUNG
Department of Computer Science, New Jersey Institute of Technology, Newark, New Jersey 07102, [email protected]

Operations Research, Vol. 51, No. 5, September–October 2003, pp. 759–770. © 2003 INFORMS.

We study a variant of the classical bin-packing problem, the ordered open-end bin-packing problem, in which, first, a bin can be filled to a level above 1 as long as the removal of its last piece brings the bin's level back to below 1, and, second, the last piece of a bin must be the largest-indexed piece among all pieces in the bin. We conduct both worst-case and average-case analyses for the problem. In the worst-case analysis, pieces of size 1 play distinct roles and their presence renders the analysis more difficult. We give lower bounds on the performance ratio of any online algorithm for the cases both with and without the 1-pieces, and in the case without the 1-pieces, we identify an online algorithm whose worst-case performance ratio is less than 2 and an offline algorithm with good worst-case performance. In the average-case analysis, assuming that pieces are independently and uniformly drawn from [0, 1], we find the optimal asymptotic average ratio of the number of occupied bins to the number of pieces. We also introduce other online algorithms and conduct a simulation study of the average-case performances of all the proposed algorithms.

Received April 2001; revisions received July 2001, December 2001, July 2002, October 2002; accepted November 2002.
Subject classifications: Mathematics: combinatorics. Probability: stochastic model applications. Transportation: costs.
Area of review: Optimization.

1. INTRODUCTION

In this paper, we consider a variant of the classical bin-packing (CBP) problem, which we call the ordered open-end bin-packing (OOBP) problem. Like CBP, we are given a list L = (p_1, p_2, ..., p_n) of n pieces, with each p_i being a real number in (0, 1) or (0, 1] (as will be shown later, pieces of size 1, hereafter abbreviated as 1-pieces, play distinct roles in the worst-case analysis), and an infinite collection of unit-capacity bins, and our goal is to pack the pieces into the minimum number of bins. However, unlike CBP, a bin can be filled to a level exceeding 1, provided that there exists a designated last piece in the bin such that the removal of this piece brings the bin's level back to less than 1. In addition, we require that the designated last piece in a bin be the largest-indexed piece among all pieces in that bin.

OOBP models an optimization problem in fare payment in the subway stations in Hong Kong. There, a passenger can purchase a ticket with a standard denomination, say $20, and the amount is magnetically recorded on the ticket. Every time the passenger reaches his/her destination of a subway trip, the ticket is presented to a machine which automatically deducts the fare from the ticket. If the remaining balance is still positive, the ticket is returned to the passenger; otherwise, it is not. Thus, the passenger can gain if the fare of a trip is more than the balance on the ticket at the start of the trip. In this situation, the bins correspond to the tickets and the pieces correspond to the fares. For a traveler who makes several trips, his/her goal is to minimize the number of tickets that he/she needs to purchase. The additional "ordered" requirement is imposed on the problem to reflect that the trips charged to each ticket follow the order of the passenger's overall itinerary.

CBP, the problem from which the current OOBP stems, has been intensively studied since the 1970s, both for its wide applicability and for its theoretical richness. The reader is referred to the survey by Coffman et al. (1996) and the references therein for a firmer grasp of the problem. An even more closely related problem is the plain open-end bin-packing (POBP) problem, whose only departure from the current problem is its lack of the "ordered" requirement. This problem was studied by Leung et al. (2001). There, the problem was found to be NP-hard, the next-fit (NF) heuristic was found to be asymptotically the best online algorithm with a worst-case performance ratio of 2, and a polynomial-time optimization scheme was found.

We shall conduct both worst-case and average-case analyses of algorithms for OOBP. Let us first briefly introduce the measures for gauging the performances of the algorithms. For a given list L and an algorithm A, let A(L) be the number of bins used when algorithm A is applied to list L, and let OPT(L) denote the optimum number of bins for a packing of L. Define R_A(L) = A(L)/OPT(L). The absolute worst-case ratio R_A for algorithm A is defined as

R_A = inf{r : R_A(L) ≤ r for all lists L}.

The asymptotic worst-case ratio R_A^∞ is defined as

R_A^∞ = inf{r : for some N > 0, R_A(L) ≤ r for any L with OPT(L) ≥ N}.

When the list L_n consists of n pieces independently drawn from a random population with distribution F, the average-case ratio \bar{R}_A^n(F) for lists of length n over distribution F is defined as

\bar{R}_A^n(F) = E[R_A(L_n)] = E[A(L_n)/OPT(L_n)].

The asymptotic average-case ratio \bar{R}_A^∞(F) is defined as

\bar{R}_A^∞(F) = limsup_{n→∞} \bar{R}_A^n(F).

Here, for any algorithm A, we are only interested in R_A^∞, its asymptotic worst-case ratio, and \bar{R}_A^∞(U(0,1)), its asymptotic average-case ratio when the distribution is U(0,1). Because in this paper U(0,1) is the only distribution and the asymptotic performances are the only performances that we care about, we shall omit mentioning both U(0,1) and "asymptotic" later on.

For the average-case analysis, we use \bar{A}^n to denote E[A(L_n)]/n and \bar{A} to denote limsup_{n→∞} \bar{A}^n. Also, we use \bar{OPT}^n to denote E[OPT(L_n)]/n and \bar{OPT} to denote limsup_{n→∞} \bar{OPT}^n. For OOBP, as for many other problems, obtaining \bar{R}_A by its definition seems to be more difficult than obtaining \bar{A} and \bar{OPT} individually and applying the identity

\bar{R}_A = \bar{A} / \bar{OPT}.    (1)

Later, we will verify that (1) indeed works for OOBP, which justifies finding \bar{A} and \bar{OPT} separately.

We will propose three online algorithms. An online algorithm must pack pieces in the order they arrive, without later repacking. A key property of an online algorithm is its nonanticipativity: when packing pieces for lists L L_1 and L L_2 separately, it should pack the pieces in the L portions of both lists in the same manner. The three online algorithms are mixed fit (MXF), next fit (NF), and modified best fit with parameter a ∈ (0, 1) (MBF[a]). We will also propose two offline algorithms, namely, divide-and-pack (DP) and greedy look-ahead next fit (GLANF). We will elaborate on the details of all the above algorithms later. The major theoretical results are as follows.
• For any online algorithm A, R_A^∞ ≥ 1.630297 with the 1-pieces, and R_A^∞ ≥ 1.415715 without the 1-pieces;
• R_MXF^∞ ≤ 35/18 without the 1-pieces, and R_MXF^∞ ≥ 25/13 regardless of the presence of the 1-pieces;
• 27/20 ≤ R_GLANF^∞ ≤ 3/2 without the 1-pieces, and R_GLANF^∞ ≥ 3/2 with the 1-pieces;
• \bar{OPT} = 2 − √3;
• \bar{R}_DP = 1.
Through computer simulation, we found that MBF[a] at certain a's, and GLANF, have very good average-case performances.

The rest of the paper is organized as follows. In §2, we carry out the worst-case analysis and introduce the MXF and GLANF algorithms. In §3, we carry out the average-case analysis and introduce the DP algorithm. In §4, we introduce the NF and MBF[a] algorithms and present simulation results for all the algorithms. In §5, we point out gaps remaining in the current paper and possible directions for future research.
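To make the OOBP bin-feasibility rule concrete, here is a tiny self-contained check. It is an illustration of the definition above, not code from the paper: a set of pieces assigned to a bin, listed in their original list order, is feasible exactly when everything before the designated last piece sums to strictly less than 1.

```python
# Illustration (not from the paper) of the OOBP feasibility rule for one bin:
# the last-listed piece is the designated last piece, so the bin is feasible
# iff the pieces before it sum to strictly less than 1 (the bin's final level
# itself is allowed to exceed 1).
def feasible_bin(pieces_in_list_order):
    if not pieces_in_list_order:
        return True
    return sum(pieces_in_list_order[:-1]) < 1

print(feasible_bin([0.6, 0.3, 0.9]))   # True: 0.6 + 0.3 < 1, final level 1.8 is allowed
print(feasible_bin([0.6, 0.5, 0.2]))   # False: 0.6 + 0.5 >= 1 before the last piece
```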

2. WORST-CASE ANALYSIS

2.1. Lower Bounds for Arbitrary Online Algorithms

In Leung et al. (2001), the authors were able to use a single-adversary argument to prove that no R_A^∞ of any online algorithm A for POBP can be lower than 2. For OOBP, as for CBP, the problem has a property not possessed by POBP: given any problem instance, there exists an online algorithm that exactly replicates what an offline optimal algorithm would do on this instance. For a problem with this property, a proof of a lower bound on the worst-case ratio of online algorithms has to exploit the nonanticipativity property of online algorithms, and hence has to involve multiple adversaries. The reader may refer to Liang (1980), Van Vliet (1995, 1996), and Yao (1980) on CBP to confirm this. Also, as evidenced in CBP (Coffman et al. 1996), the gap in worst-case ratio between the best-known online algorithm and the overall lower bound is hard to close. For OOBP, we are able to achieve a lower bound of 1.630297 for the worst-case ratio of any online algorithm, using an argument involving the 1-pieces.

Theorem 1. With the 1-pieces, R_A^∞ ≥ 1.630297 for any online algorithm A for OOBP.

Proof. In general, we use a K-adversary argument. For every n, consider K lists L_1^n, L_2^n, ..., L_K^n, where for each k = 1, ..., K, L_k^n consists of c_k n identical pieces with size p_k. We call a K-dimensional nonnegative integer vector (a_1, ..., a_K) a packing pattern when there could be a resulting bin containing exactly a_k p_k-pieces when an algorithm A is applied to the long list L_1^n L_2^n ··· L_K^n. Actually, we may categorize all the packing patterns into K types: those with a_1 ≥ 1, which we let belong to a set M_1; those with a_1 = 0, a_2 ≥ 1, which we let belong to a set M_2; those with a_1 = a_2 = 0, a_3 ≥ 1, which we let belong to a set M_3; and so forth. We can name the patterns 1, 2, ..., |M_1 ∪ ··· ∪ M_K|. For convenience, denote |M_1 ∪ ··· ∪ M_k| by Q_k. For any q and k, let pattern q contain a_{qk} p_k-pieces.

For an online algorithm A, for any q, suppose A produces x_q n + o(n) pattern-q bins when applied to the long list L_1^n L_2^n ··· L_K^n. By the conservation of the number of pieces, we must have

Σ_{q=1}^{Q_k} a_{qk} x_q ≥ c_k,   for all k = 1, 2, ..., K.   (2)

By the nonanticipativity of A, we have

A(L_1^n L_2^n ··· L_k^n) = Σ_{q=1}^{Q_k} x_q n + o(n).

For any k, suppose there is a z_k such that OPT(L_1^n L_2^n ··· L_k^n) = z_k n + o(n). Then,

r_k ≡ limsup_{n→+∞} A(L_1^n L_2^n ··· L_k^n)/OPT(L_1^n L_2^n ··· L_k^n)

must be equal to Σ_{q=1}^{Q_k} x_q / z_k. For r = max_{k=1}^{K} r_k, we then have

z_k r ≥ Σ_{q=1}^{Q_k} x_q,   for all k = 1, 2, ..., K.   (3)

Therefore, the linear program min{r : (2), (3), x_q ≥ 0 for q = 1, 2, ..., Q_K} provides a lower bound for R_A^∞. On the other hand, we say that pattern q is totally dominated by pattern q' if they belong to the same M_k for some k, and for every k' = k, ..., K, a_{qk'} ≤ a_{q'k'}. In the above linear program, we apparently do not have to consider any pattern that is totally dominated by another pattern.

Consider the following sequence of numbers: b_1 = 1, b_2 = 2, b_3 = b_1 b_2 + 1 = 3, b_4 = b_1 b_2 b_3 + 1 = 7, ..., b_k = b_1 b_2 ··· b_{k−1} + 1, .... For a given K, we let p_1 = 1/b_K, p_2 = 1/b_{K−1}, ..., p_{K−1} = 1/b_2 = 1/2, p_K = 1/b_1 = 1; and c_1 = c_2 = ··· = c_K = 1. Using a C program, we can identify all undominated patterns. We can then find the z_k's by solving certain simple linear programs using CPLEX. Finally, we can solve the linear program mentioned in the preceding paragraph by again using CPLEX. So far, we have the results for K up to 6. When K = 7, b_7 is already as large as 3,263,443 and the number of undominated patterns is simply unmanageable by our personal computer. Table 1 shows the results, where Q is the number of undominated patterns. Hence, we obtain a lower bound of 1.630297 for R_A^∞. □

Table 1. Results for Theorem 1.

K | b_K  | Q    | z_1      | z_2      | z_3      | z_4     | z_5      | z_6 | r
--|------|------|----------|----------|----------|---------|----------|-----|---------
1 | 1    | 1    | 1        |          |          |         |          |     | 1
2 | 2    | 3    | 0.5      | 1        |          |         |          |     | 1.3333
3 | 3    | 8    | 0.3333   | 0.6667   | 1        |         |          |     | 1.5
4 | 7    | 22   | 0.142857 | 0.375    | 0.6667   | 1       |          |     | 1.62018
5 | 43   | 87   | 0.023256 | 0.145833 | 0.375    | 0.66667 | 1        |     | 1.629434
6 | 1807 | 1291 | 0.000553 | 0.023268 | 0.145833 | 0.375   | 0.666667 | 1   | 1.630297

Without the presence of the 1-pieces, we have to settle for a slightly lower lower bound.

Theorem 2. Without the 1-pieces, R_A^∞ ≥ 1.415715 for any online algorithm A for OOBP.

Proof. We use the same idea and notation here as in the proof of Theorem 1. The only difference is that, for a given K, we let p_1 = 1/b_{K+1}, p_2 = 1/b_K, ..., p_{K−1} = 1/b_3 = 1/3, p_K = 1/b_2 = 1/2; and c_1 = c_2 = ··· = c_{K−1} = 1 and c_K = 2. Table 2 shows the achievable results for the current setting. Hence, we obtain a lower bound of 1.415715 for R_A^∞. □
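The lower-bound linear program in the proofs of Theorems 1 and 2 is small enough to solve with off-the-shelf tools. The sketch below is illustrative only: it assumes SciPy is available and that the undominated packing patterns, the multiplicities c_k, and the offline rates z_k have already been computed (the paper obtains these with a C program and CPLEX).

```python
# Minimal sketch (assumptions ours) of the LP  min{ r : (2), (3), x >= 0 }
# from the proofs of Theorems 1 and 2, using SciPy's linprog.
from scipy.optimize import linprog

def online_lower_bound(patterns, c, z):
    """patterns[q] = (a_q1, ..., a_qK), listed so that all patterns in M_1 come
    first, then M_2, and so on; c and z are the piece multiplicities and the
    offline rates z_k, both 0-indexed over k."""
    K, Q = len(c), len(patterns)
    # Q_k = number of patterns whose smallest nonzero coordinate is at most k.
    first_nonzero = [min(i for i, a in enumerate(p) if a > 0) for p in patterns]
    Qk = [sum(1 for f in first_nonzero if f <= k) for k in range(K)]

    A_ub, b_ub = [], []
    # Constraint (2): sum_q a_{qk} x_q >= c_k, rewritten as <=.
    for k in range(K):
        A_ub.append([-patterns[q][k] for q in range(Q)] + [0.0])
        b_ub.append(-c[k])
    # Constraint (3): sum_{q <= Q_k} x_q <= z_k * r.
    for k in range(K):
        A_ub.append([1.0 if q < Qk[k] else 0.0 for q in range(Q)] + [-z[k]])
        b_ub.append(0.0)
    # Variables are (x_1, ..., x_Q, r); minimize r.
    res = linprog([0.0] * Q + [1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (Q + 1))
    return res.fun

# K = 2 with pieces 1/2 and 1 (the 1-piece case, row K = 2 of Table 1):
# patterns (2,0), (1,1), (1,0) are in M_1 and (0,1) is in M_2, z = (0.5, 1),
# and the LP value should come out as 4/3 = 1.3333.
print(online_lower_bound([(2, 0), (1, 1), (1, 0), (0, 1)], [1, 1], [0.5, 1.0]))
```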

2.2. The Mixed-Fit (MXF) Algorithm

A reasonable algorithm will have the majority of its bins filled to levels no less than 1, while under any algorithm, no bin can be filled to a level of more than 2. Hence, any reasonable algorithm will have a worst-case performance ratio of no more than 2. Nevertheless, it is not an easy task to find an online algorithm with a provable less-than-2 worst-case performance ratio. Even without the 1-pieces, the mixed-fit (MXF) algorithm is the only such algorithm we have found so far.

The mixed-fit (MXF) algorithm: Bins consisting of small pieces are only closed by the arrivals of large pieces; this prevents the premature closing of bins. In this algorithm, we divide the pieces into four types according to their sizes p, with a type-1 piece having 0 < p < 1/3, a type-2 piece having 1/3 ≤ p < 1/2, a type-3 piece having 1/2 ≤ p < 1, and a type-4 piece having p = 1. Accordingly, we define a type-1 open bin as an open bin containing a number of type-1 pieces, a type-2 open bin as a bin containing one or two type-2 pieces, and a type-3 open bin as a bin containing one type-3 piece. We pack pieces in the order of their arrivals. When the current piece is of type 1, we pack it into the type-1 open bins in the first-fit (FF) manner for CBP, without closing any bins. When the current piece is of type 2, we pack it into the type-2 open bins, again in the FF manner for CBP, without closing any bins. Suppose the current piece is of type 3 or type 4. We pack this piece into the first type-1 or type-2 open bin whose level is at least 2/3 and close that bin. If there is no such bin, we pack the piece into a type-3 open bin and close it. If again there is no such bin, we pack the piece into a new bin, and close it if the piece is a type-4 piece. The running time of MXF is O(n log n).

The following theorem, whose proof is in the Appendix owing to its length, offers an upper bound for R_MXF^∞ when there is no 1-piece.

Theorem 3. Without the 1-pieces, R_MXF^∞ ≤ 35/18 ≈ 1.9444.
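The following is a compact sketch of our reading of the MXF rule just described, not the authors' code. The first-fit tests and the "one or two type-2 pieces" restriction are taken literally from the description above; boundary cases (for example, levels landing exactly on 1) have probability zero under the continuous distributions considered later.

```python
# Sketch of the mixed-fit (MXF) rule, as described in Section 2.2 (our reading).
def mxf(pieces):
    open1 = []        # levels of type-1 open bins (only pieces < 1/3)
    open2 = []        # (level, count) of type-2 open bins (one or two pieces in [1/3, 1/2))
    open3 = []        # levels of type-3 open bins (a single piece in [1/2, 1))
    closed = 0

    def piece_type(p):
        if p < 1/3:
            return 1
        if p < 1/2:
            return 2
        return 3 if p < 1 else 4

    for p in pieces:
        t = piece_type(p)
        if t == 1:
            for i, lev in enumerate(open1):          # CBP-style first fit, never closing
                if lev + p <= 1:
                    open1[i] = lev + p
                    break
            else:
                open1.append(p)
        elif t == 2:
            for i, (lev, cnt) in enumerate(open2):   # first fit among one-piece type-2 bins
                if cnt == 1 and lev + p <= 1:
                    open2[i] = (lev + p, 2)
                    break
            else:
                open2.append((p, 1))
        else:
            # a type-3 or type-4 piece closes a bin
            for i, lev in enumerate(open1):
                if lev >= 2/3:
                    open1.pop(i); closed += 1
                    break
            else:
                for i, (lev, cnt) in enumerate(open2):
                    if lev >= 2/3:
                        open2.pop(i); closed += 1
                        break
                else:
                    if open3:
                        open3.pop(0); closed += 1
                    elif t == 4:
                        closed += 1               # a 1-piece alone fills and closes a new bin
                    else:
                        open3.append(p)           # the type-3 piece opens a type-3 bin
    return closed + len(open1) + len(open2) + len(open3)
```

Running this sketch on long uniform lists can be used as a rough cross-check against the empirical estimate for MXF reported in §4, although we have not tuned it beyond the literal rule above.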

Table 2. Results for Theorem 2.

K | b_{K+1} | Q   | z_1      | z_2       | z_3       | z_4   | z_5 | r
--|---------|-----|----------|-----------|-----------|-------|-----|---------
1 | 2       | 1   | 1        |           |           |       |     | 1
2 | 3       | 4   | 0.3333   | 1         |           |       |     | 1.28571
3 | 7       | 13  | 0.142857 | 0.375     | 1         |       |     | 1.39007
4 | 43      | 57  | 0.023256 | 0.1458333 | 0.375     | 1     |     | 1.41496
5 | 1807    | 912 | 0.000553 | 0.023268  | 0.1458333 | 0.375 | 1   | 1.415715


The following result, however, does not involve the 1-pieces.

Theorem 4. Regardless of the presence of the 1-pieces, R_MXF^∞ ≥ 25/13 ≈ 1.9231.

Proof. Consider the following sequence of numbers: b_1 = 3, b_2 = 4, b_3 = b_1 b_2 + 1 = 13, ..., b_k = b_1 b_2 ··· b_{k−1} + 1, .... For a given K, let L^K be a list with K(K + 2)(b_K − 1) pieces, which contains first K(b_K − 1) pieces of size 1/b_K, then K(b_K − 1) pieces of size 1/b_{K−1}, ..., then K(b_K − 1) pieces of size 1/b_2 = 1/4, and finally 3K(b_K − 1) pieces of size 1/b_1 = 1/3. Apply MXF to L^K and we will get K + 3K(b_K − 1)/2 + K Σ_{k=2}^{K−1} Π_{k'=k}^{K−1} b_{k'} bins: K bins each containing b_K − 1 pieces of size 1/b_K; K b_{K−1} bins each containing b_{K−1} − 1 pieces of size 1/b_{K−1}; K b_{K−1} b_{K−2} bins each containing b_{K−2} − 1 pieces of size 1/b_{K−2}; ...; K b_{K−1} b_{K−2} ··· b_2 bins each containing b_2 − 1 = 3 pieces of size 1/b_2 = 1/4; and 3K(b_K − 1)/2 bins each containing b_1 − 1 = 2 pieces of size 1/b_1 = 1/3. On the other hand, the optimal solution occupies only K(b_K − 1) bins, each containing one piece of size 1/b_K, one piece of size 1/b_{K−1}, ..., one piece of size 1/b_2 = 1/4, and three pieces of size 1/b_1 = 1/3. Hence,

R_MXF^∞ ≥ 3/2 + 1/(b_K − 1) + 1/(b_{K−1} − 1) + ··· + 1/(b_2 − 1) ≥ 3/2 + 1/3 + 1/12 + 1/156 = 25/13 ≈ 1.9231.

Because b_5 is already 24,493 and the sequence grows faster than exponentially, the improvement we can gain from going to K's higher than 4 is negligible. □

2.3. The Greedy Look-Ahead Next-Fit (GLANF) Algorithm

The greedy look-ahead next-fit (GLANF) algorithm: There is one open bin at any moment. GLANF keeps on filling the current bin with pieces in their original order, unless the first piece is 1 or the addition of the next piece would bring the bin's level to at least 1. In the latter situation, GLANF makes a greedy effort to fill the current open bin to the highest possible level. We describe this effort in more detail in the following.

When an empty bin is just opened, we can always say that p_{i_1}, ..., p_{i_w} are the pieces in the targeted list L that are not yet packed, where i_1 < ··· < i_w. To ease our notational burden, we let each i_k be k. There must be a nonnegative integer r and 2r + 2 values s_0, t_0, ..., s_r, t_r with 1 = s_0 ≤ t_0 < s_1 < t_1 < ··· < s_{r−1} < t_{r−1} < s_r ≤ t_r = w + 1 such that:

(1) for any m between 0 and r,

Σ_{m'=0}^{m} Σ_{j=s_{m'}}^{t_{m'}−1} p_j < 1;

(2) for any m between 0 and r − 1 and any u between t_m and s_{m+1} − 1,

Σ_{m'=0}^{m} Σ_{j=s_{m'}}^{t_{m'}−1} p_j + p_u ≥ 1.

Note that the above s0      tr are unique. For any m between 0 and r − 1, let lm = arg max!pj  tm  j  sm+1 − 1". The current open bin will be filled to the m tm −1 level maxr−1 j=sm pj + plm with the correspondm=0  m =1 ing pieces. When running GLANF, every time a new bin is to be packed the selection of the remaining pieces to be packed into it can be done by one sweep of all the pieces. Thus, every bin takes On time to be packed, and hence the running time of GLANF is On2 . We can prove an upper bound for R GLANF when there is no 1-piece. Theorem 5. Without the 1-pieces, R GLANF  3/2. Proof. We assign weights W · to pieces. For a small piece p ∈ 0 1/2 , we let W p = p; and for a large piece p ∈ 1/2 1, we let W p = 1/2. For any bin B, we let the weight of the bin W B be the total weight of all pieces in it. It is easy to see that W B < 3/2 for any bin in any packing. Thus, W L < 3 · OPT L/2, where W L is the total weight of all pieces in L. We shall show that W L  GLANFL − O1. Combining the two, we get the desired result. It is clear that GLANF fills every bin to a level of at least 1, except the last one. From now on, let us ignore this last bin. If a bin in the GLANF packing either has no large piece or has two large pieces, then the bin has a weight of at least 1. Thus, if a bin has a weight less than 1, it must have exactly one large piece. We now show that after deleting O1 bins, the average weight of bins in the GLANF packing is at least 1. Let Bk be the first bin in the GLANF packing with a weight less than 1. Let the total size of all small pieces in Bk be x and the size of the large piece be a. Because Bk has a weight less than 1, we have x < 1/2. We first assume that the large piece in Bk is not the top piece in the bin. In this case we assert that there is no large piece appearing in any bins following Bk , and hence each of the subsequent bins has a weight of at least 1 and the theorem is proved. Suppose not. Let b be a large piece appearing in bin Bk+j . It is easy to see that b must appear in the list before the top piece in Bk ; otherwise, b would have been packed into Bk by the GLANF algorithm because b is a large piece and the top piece in Bk is a small piece. However, this means that b was skipped over when the GLANF algorithm considered packing b into Bk , which implies that the total size of all small pieces after b in the list that were packed into Bk is larger than or equal to the size of b. Because the size of b is larger than 1/2, the total size of all small pieces in Bk is larger than 1/2, contradicting our assumption that x < 1/2.

Yang and Leung From the above argument, we may assume that the large piece in Bk is the top piece in the bin. Let Bk+j be the first bin after Bk such that (1) Bk+j has a weight less than 1, and (2) the sum of the weights of the bins Bk  Bk+1      Bk+j−1 is less than j. By the previous argument, we may assume that the large piece in Bk+j is the top piece in the bin. Let the size of the large piece in Bk+j be b and the total size of all small pieces in Bk+j be y. By our assumption, we have y < 1/2. We first show that j cannot be 1. Suppose j = 1. The small pieces in Bk+1 must appear in the list after a; otherwise, they would have been packed into Bk by the GLANF algorithm. Because y < 1/2 and the total size of all the pieces in Bk+1 b + y  1 is larger than the size of a, the GLANF algorithm would have packed all the pieces in Bk+1 , instead of a, into Bk . This is the contradiction we sought. Thus, j  2. Assume for the moment that there is no bin among Bk+1  Bk+2      Bk+j−1 , that contains only small pieces. In other words, each bin has one or two large pieces. First, assume that every bin has two large pieces. In this case the total size of all the small pieces in Bk  Bk+1      Bk+j is less than 1, by definition of Bk+j . The GLANF algorithm would have packed all these small pieces along with b, instead of a, into Bk , because the total size of all the former pieces is at least 1 and a is less than 1. Thus, there must be a bin Bk+i such that Bk+i contains one large piece, and each of the bins Bk+1  Bk+2      Bk+i−1 contains two large pieces. Suppose the large piece in Bk+i is the top piece. Then, the total size of all the small pieces in Bk  Bk+1      Bk+i is less than 1, by definition of Bk+j . The GLANF algorithm would have packed all the small pieces in Bk+1      Bk+i−1 , and all the pieces in Bk+i into Bk , because the total size of all these pieces is larger than a. Thus, the large piece in Bk+i must not be the top piece. In other words, the top piece in Bk+i is a small piece. As before, b must appear in the list before the top piece in Bk+i . Let z1 be the total size of all small pieces in Bk+i that appear before b in the list, and let z2 be the total size of all small pieces in Bk+i that appear after b in the list. By the nature of the GLANF algorithm, we have z2  b > 1/2. A moment of reflection shows that the GLANF algorithm would have packed all the small pieces in Bk+1  Bk+2      Bk+i−1 , all the small pieces in Bk+i that appear before b in the list, and all the pieces in Bk+j , into Bk . This is because the total size of all small pieces in Bk  Bk+1      Bk+i is less than 1. Because z2 > 1/2 and y < 1/2, the small pieces can all be packed into Bk without causing it to be closed. Thus, this possibility does not exist either. Finally, we consider the possibility where there is a bin, say Bk+i , among Bk+1  Bk+2      Bk+j−1 , such that Bk+i has only small pieces. We want to show that the weight of Bk+i plus the weight of Bk+j is greater than 2. (In this case the argument continues for subsequent bins, looking for the first bin Bk+l  l > j, such that Bk+l has a weight less than 1.) Observe that b appears in the list before the top piece in Bk+i ; otherwise, b would have been packed into Bk+i as the


top piece. The small pieces in Bk+j all appear in the list before b. Let z1 be the total size of all pieces in Bk+i that appear before b in the list, and z2 be the total size of all pieces that appear after b in the list. From the nature of the GLANF algorithm, we have z2  b > 1/2 and z1 + y  1. Thus, the sum of the weights of the two bins, Bk+i and Bk+j , is greater than 2.  Borrowing ideas from the worst-case example for the first fit (FF) algorithm for CBP (Johnson et al. 1974), a lower bound for R GLANF when there is no 1-piece can be found in the following theorem, whose proof is in the Appendix. Theorem 6. Without the 1-pieces, R GLANF  27/20 = 135. However, when 1-pieces are allowed in the list, we can “improve” the lower bound for R GLANF to 3/2. Theorem 7. With the 1-pieces, R GLANF  3/2. Proof. Given an arbitrary positive integer K, consider a list LK with K4K + 2 pieces, where the first K pieces have the same size of 1 − 1/2K, the next K pieces have the same size of 1, the next K4K − 1 pieces have the same size of 1/4K, and the last K pieces have the same size of 1. Apply GLANF to LK and we will get 3K bins: K bins each containing one piece of size 1 − 1/2K, one piece of size 1/4K, and one piece of size 1; K bins each containing one piece of size 1; K − 1 bins each containing 4K pieces of size 1/4K; and 1 last bin containing 2K pieces of size 1/4K. On the other hand, the optimal solution occupies only 2K bins: K bins each containing one piece of size 1−1/2K and one piece of size 1, and K bins each containing 4K − 1 pieces of size 1/4K and one piece of size 1.  3. AVERAGE-CASE ANALYSIS For OOBP, we can easily verify that we are able to use (1) in the Introduction. First, OPT · is a subadditive function. (Unlike those for CBP and POBP, the OPT · is a function of lists rather than a function of sets in that not only what the pieces are, but also in what order the pieces appear, affects its value. However, our arguments follow with this fact.) That is, given two lists L1 and L2 , we have OPT L1 L2   OPT L1  + OPT L2  The reason is simple: Combining the optimal solutions for L1 and L2 , we get at least a feasible solution for L1 L2 . Next, when two lists L and L differ only in one piece, the resulting OPT L and OPT L  apparently differ by at most 1. So, using the standard techniques revolving around Azuma’s Inequality (Steele 1997), we have PrOPT Ln  − EOPT Ln    t  2e−t

²/(2n).

Finally, it is also obvious that for every algorithm A we have considered, A(L_n) < n · OPT(L_n).


As was pointed out by Coffman et al. (1996), these three properties guarantee the validity of (1) in the Introduction.

3.1. A Lower Bound for Arbitrary Algorithms

Obviously, we first need to find \bar{OPT} for OOBP. A trivial lower bound for \bar{OPT} is 1/4, due to the facts that no bin can have a level higher than 2 and that the average size of the pieces is 1/2. On the other hand, we do find a lower bound for \bar{OPT} that is significantly higher than the trivial one.

Theorem 8. \bar{OPT} ≥ 2 − √3.

Proof. Let {(1), ..., (n)} be a permutation of {1, ..., n} such that p_(1) ≤ ··· ≤ p_(n). Then, we may first prove that

OPT(L_n) ≥ K_0(L_n) + 1 ≡ max{k = 0, 1, ..., n : p_(1) + ··· + p_(n−k) ≥ k} + 1.

Let OPT'(L_n) be the number of bins used by POBP when applied to L_n. Because POBP is a relaxation of OOBP, it follows that OPT(L_n) ≥ OPT'(L_n). For any optimal solution of POBP, if there are two pieces p_{i_1} and p_{i_2} such that p_{i_1} < p_{i_2} and p_{i_1} is the last piece of a bin, then swapping the places of these two pieces will produce another solution with no more bins. Keep on swapping and we will get an optimal solution for POBP in which the pieces on top of the bins are the largest pieces. Hence, the total size of the remaining smaller pieces does not exceed the optimal number of bins:

p_(1) + ··· + p_(n−OPT'(L_n)) < OPT'(L_n).

Now, p_(1) + ··· + p_(n−k) is a decreasing function of k. So, for any k ≥ OPT'(L_n), it follows that

p_(1) + ··· + p_(n−k) < k.

Therefore,

OPT(L_n) ≥ OPT'(L_n) ≥ K_0(L_n) + 1 ≡ max{k = 0, 1, ..., n : p_(1) + ··· + p_(n−k) ≥ k} + 1.

Given two lists L_1 and L_2, by the definition of K_0(·), the smallest |L_1| − K_0(L_1) − 1 pieces in L_1 add up to less than K_0(L_1) + 1, and the smallest |L_2| − K_0(L_2) − 1 pieces add up to less than K_0(L_2) + 1. Therefore, the smallest |L_1| + |L_2| − K_0(L_1) − K_0(L_2) − 2 pieces in L_1 L_2 add up to less than K_0(L_1) + K_0(L_2) + 2. Hence,

K_0(L_1 L_2) < K_0(L_1) + K_0(L_2) + 2.

That is, K_0(·) + 1 is a subadditive function. By Fekete's Lemma (Steele 1997), lim_{n→∞} E[K_0(L_n)]/n exists and we denote it by \bar{K}_0. Also, by definition,

p_(1) + ··· + p_(n−K_0(L_n)−1) + p_(n−K_0(L_n)) ≥ K_0(L_n)

and

p_(1) + ··· + p_(n−K_0(L_n)−1) < K_0(L_n) + 1.

So, by the boundedness of the E[p_(i)]'s, we have

lim_{n→∞} E[p_(1) + ··· + p_(n−K_0(L_n))]/n = lim_{n→∞} E[K_0(L_n)]/n = \bar{K}_0.

However, due to the nature of the order statistics, we have

lim_{n→∞} E[p_(1) + ··· + p_(n−K_0(L_n))]/n = lim_{n→∞} (1/n)·[1/(n+1) + ··· + (n − K_0(L_n))/(n+1)] = (1 − \bar{K}_0)²/2.

Therefore, \bar{K}_0 = 2 − √3. At last, we get

lim_{n→∞} E[OPT(L_n)]/n ≥ lim_{n→∞} E[K_0(L_n) + 1]/n = \bar{K}_0 = 2 − √3. □
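As a quick numerical sanity check on the quantity K_0(L_n) used in the proof above (our illustration, not part of the paper), one can draw a long uniform list and verify that K_0(L_n)/n indeed approaches 2 − √3 ≈ 0.268:

```python
# Monte Carlo check (not from the paper): K_0(L_n)/n -> 2 - sqrt(3) for uniform pieces.
import random, math

def k0(pieces):
    q = sorted(pieces)                      # p_(1) <= ... <= p_(n)
    n = len(q)
    prefix = [0.0]
    for p in q:
        prefix.append(prefix[-1] + p)
    best = 0
    for k in range(n + 1):                  # largest k with p_(1)+...+p_(n-k) >= k
        if prefix[n - k] >= k:
            best = k
    return best

n = 200_000
pieces = [random.random() for _ in range(n)]
print(k0(pieces) / n, 2 - math.sqrt(3))     # both values should be close to 0.268
```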

3.2. The Divide-and-Pack (DP) Algorithm

The divide-and-pack (DP) algorithm: For convenience, denote ⌈n^{2/3}⌉ by m_1, the quotient of n divided by m_1 by m_2, and the remainder of n divided by m_1 by r. Given a list L_n, DP first divides it into m_2 + 1 smaller lists L_n^1, L_n^2, ..., L_n^{m_2}, and L_n^{m_2+1}, with L_n^1 containing the first batch of m_1 pieces in L_n, L_n^2 containing the second batch of m_1 pieces in L_n, and so forth, and L_n^{m_2+1} containing the remaining pieces in L_n. It will not matter whether L_n^{m_2+1} is empty or not. Now, for i = 1, ..., m_2, let

L_n^{iS} = {p ∈ L_n^i : p ≤ √3 − 1}

and L_n^{iL} = L_n^i \ L_n^{iS}. Then, for i = 1, ..., m_2 − 1, DP applies algorithm DP' for POBP to L_n^{iS} L_n^{(i+1)L}. Finally, DP packs the remaining pieces in L_n^{1L} L_n^{m_2 S} L_n^{m_2+1} in the next-fit (NF) fashion, on whose description we feel no need to elaborate right here. The tricks we played in the above guarantee that the "ordered" requirement is satisfied even when the pieces are rearranged by DP.

The description of DP' for POBP is as follows: For a given list L, let

L^S = {p ∈ L : p ≤ √3 − 1}

and L^L = L \ L^S. Use the asymptotically average-sense optimal algorithm for CBP (there exists such a perfect packing) to pack the pieces in L^S into OPT^C(L^S) bins. For convenience, denote |L^S| by T, |L| − T by R, and OPT^C(L^S) by Q. Then, put min{R, Q} pieces of L^L onto the existing bins which are not of exactly size 1 (almost surely there are Q of them), one on top of every existing bin. Finally, pack the remaining (R − Q)^+ pieces of L^L into (R − Q)^+ individual bins. In total, algorithm DP' consumes DP'(L) = max{Q, R} bins. Note that, for those (Q − R)^+ bins consisting of pieces solely from L^S, because their levels are all below 1, one can always rearrange the pieces in them so that the piece on top is the largest-indexed one.

Strictly speaking, the asymptotically average-sense optimal algorithm for CBP is actually not one algorithm, but a sequence of algorithms. In real implementations, we will have to select the most appropriate one from the sequence, depending on the lists to be faced. We will discuss the implementation of DP later on.

Theorem 9. \bar{DP} ≤ 2 − √3.
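Before the proof, the following rough sketch shows the shape of the DP' subroutine just described. It is an illustration under stated assumptions, not the authors' implementation: ordinary first-fit decreasing stands in for the asymptotically perfect CBP packing scheme, so this sketch does not attain the theoretical guarantee; it only mirrors the max{Q, R} accounting.

```python
# Sketch (assumptions ours) of DP' for POBP: small pieces (<= sqrt(3)-1) are
# packed by a CBP heuristic, then one large piece goes on top of each bin,
# and leftover large pieces open individual bins.
import math

def dp_prime(pieces):
    thr = math.sqrt(3) - 1
    small = sorted((p for p in pieces if p <= thr), reverse=True)
    large = [p for p in pieces if p > thr]

    bins = []                                 # first-fit decreasing for the small pieces
    for p in small:
        for i, lev in enumerate(bins):
            if lev + p <= 1:
                bins[i] = lev + p
                break
        else:
            bins.append(p)

    q, r = len(bins), len(large)              # Q bins of small pieces, R large pieces
    return max(q, r)                          # DP'(L) = max{Q, R}
```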

and also define C6iS and C6iL as the complementary events of C6iS and C6iL , respectively. T√ i is a binomial random variable with parameters m1 and 3 − 1 and Ri is √ a binomial random variable with parameters m1 and 2 − 3. Random variable Ri is independent of random variables Ti and Qi . According to Coffman and Lueker (1991, p. 14), we have     1 1 iS iL PrTi ∈ C6 = O and PrRi ∈ C6 = O  2 m1 m21 So, i+1L EDP LiS  n Ln



 E max!Qi Ri "  Ti ∈ C6iS and Ri ∈ C6iL +O

765

By the central limit theorem, √ √ √ 12 5i  Ti = t −  3 − 1t/2 3 − 3 t approaches the standard normal distribution N 0 1 as t tends to +. The convergence rate can be bounded by the Berry-Essen Theorem (Patel and Read 1982), from which we can derive √ √   3−1 3− 3 Pr 5i < t log t  Ti = t t− 2 12   1  + −y2 /2 1 0, we define event sets √   C6iS = Ti = 0 1     m1  Ti −  3 − 1m1   m1/2+6  1 √   1/2+6  C6iL = Ri = 0 1     m1  Ri − 2 − 3m1   m1

= Emax!Qi Ri "

 E max!Qi Ri "  Ti ∈ C6iS and Ri ∈ C6iL

+E max!Qi Ri "  Ti ∈ C6iS or C6iL

×Pr Ti ∈ C6iS or Ri ∈ C6iL

/

  √

1 iS Pr 5i < 2 − 3m1 − m1/2+6  = O  T ∈ C √ i 1 6 m1 Because Qi  5i for sure, we get

  √

1 iS Pr Qi < 2− 3m1 −m1/2+6 = O   T ∈ C √ i 1 6 m1

(4)

iS When Ti = t, the t pieces in L√ n are i.i.d. random variables uniformly distributed in 0 3 − 1 . Hence, the distribution function for each of these pieces is nonincreasing. According to Karmarkar (1982), Knodel (1981), and Loulou (1984), perfect packing in the CBP sense exists for these pieces: √ √ √

3−1 EQi  Ti = t = E 5i  Ti = t +O t = t +O t 2

Then, again by the definition of C6iS , we have √ EQi  Ti ∈ C6iS  2 − 3m1 + Om1/2+6  1

(5)

For nonnegative random variable a, and nonnegative constants A and B where A  B, because max!a B"  B × 1a < A + a + B − A × 1a  A  we have 

1 m1



 E max!Qi 2− 3m1 +m1/2+6 "  Ti ∈ C6iS +O 1

 

 1  m1

Our remaining job is to show √

E max!Qi  2 − 3m1 + m1/2+6 "  Ti ∈ C6iS 1 √ < 2 − 3m1 + Om1/2+6  1 i+1L and therefore EDP LiS  has the same bound. n Ln

Emax!a B"  B × Pra < A + Ea + B − A √ Treating C6iS  as a, 2 − 3m1 −m1/2+6 as A, and 1 √ Qi  Ti ∈ 2 − 3m1 + m1/2+6 as B, and combining inequalities (4) 1 and (5), we get √

E max!Qi  2 − 3m1 + m1/2+6 "  Ti ∈ C6iS 1 √

 E Qi  Ti ∈ C6iS + 2m1/2+6 + 2 − 3m1 + m1/2+6 1 1 √

× Pr Qi < 2 − 3m1 − m1/2+6  Ti ∈ C6iS 1 √  2 − 3m1 + Om1/2+6  1

766

/ Yang and Leung

Now, we have EDP Ln  =

m2 −1

 i=1





i+1L m2 S m2 +1 E DP LiS  + E NF L1L  n Ln n Ln Ln



2L 2/3  = m2 − 1E DP L1S n Ln  + On √  m2 − 1 2 − 3m1 + Om1/2+6  + On2/3  1 √  2 − 3n + On21+6/3   Indeed, we have succeeded in finding the optimal average-case performances for both OOBP and POBP. √   Namely, we have found that OPT = OPT = 2 − 3  0268. 3.3. Implementation Issues of the DP Algorithm Algorithm DP involves the perfect packing in the CBP sense√ for pieces independently drawn from the distribution U 0 3 − 1 . As described in Coffman and Lueker (1991, pp. 105–106), the perfect packing in turn involves partitioning the above distribution into an infinite sequence of symmetric uniform distributions centered around 2−k s for k = 1 2    In real implementations, we have to make a truncation at some K and treat pieces that fall in the intervals of the first K distributions and pieces that do not differently. For convenience, we further let DPK refer to the truncated version of DP at a particular K. We implement DPK as in the following. Given a list Ln , 1L m2 S m2 L m1 +1 we first partition it into L1S n  Ln      Ln  Ln , and Ln iS as in the description of DP. For each Ln for i = 1   , m2 − 1, we apply to it a truncated version of the √ perfect packing method. To do so, we first partition U 0 3−1 as √

U 0 3 − 1 K  b − ak b × U ak  bk + √ K+1 × U 0 bK+1  √k 3 − 1 3−1 k=1 √ √ where b1 = 3 − 1, a1 = 2 − 3; for k = 2     K, bk = ak−1 , ak = 21−k − bk when 2−k < ak−1 and ak = bk otherwise; and bK+1 = aK . Furthermore, we partition LiS n into 1 iS K+1 iS k LiS      L so that each L contains all pieces n n n that fall in the interval of the kth uniform distribution in the above. Then, we apply the MATCH method described in Coffman and Lueker (1991, p. 100) to each LiS k for k = 1     K so that the pieces there are packed into subbins with size 21−k . MATCH keeps on putting the largest currently unpacked piece into a new sub-bin and trying to add to the bin, if possible, another currently unpacked piece that is the largest possible. Afterwards, we pack the resultK+1 ing sub-bins along with pieces in LiS into bins of size n 1 using the next fit decreasing (NFD) algorithm for CBP. After all pieces in every LiS n for i = 1     m2 −1 have been packed, the rest of the implementation can just follow the description of DP.

=

2/3 . The Each LiS n  specified in DPK is in the order of n chance of there being at least one piece that does not fall in the intervals of the first 2 log2 n/3 distributions is significantly less than 1. These pieces thus have only an O1 contribution to the packing result of LiS n , and hence we can treat them arbitrarily in our implementation of DPK . So, it will be a good choice to let K be around Kn ≡ 2 log2 n/3. The running time for DPKn is On log n. We tested the DPK1000000 , that is, DP14 , using 20 independent 1,000,000-long lists and obtained an estimate 1000000 DP14  0354. Hence, we obtain the estimate 1000000 DP  132. This result is far away from the ideal R 14 ratio 1. This is what the On2/3  bound would foretell.

4. OTHER ALGORITHMS AND SIMULATION RESULTS In this section, we introduce two more online algorithms, NF and MBFa . Unlike MXF, GLANF, and DP, which we introduced earlier, these algorithms do not possess good worst-case or provably good average-case performance ratios. However, NF is very simple and MBFa empirically shows very promising average-case performances at suitable as. Also in this section, we present simulation results on the average-case performances of the various algorithms. Every time the average-case performance for any algorithm A is estimated empirically, we apply A to 20 independent random lists of 1,000,000 i.i.d. uniformly distributed pieces. These 1,000,000-long lists result in sample means less than 0.1%. The next-fit (NF) algorithm: It is a natural online algorithm for OOBP which operates by keeping only one bin open and packing the pieces in the order they arrive. When the open bin is filled to level 1 or above, the bin will be closed and a new open bin will be started. The running time for NF is clearly On. It is easy to obtain that R NF = 2. The argument for  RNF  2 is already stated in §2. To show R NF  2, we construct a bad list of 4K pieces with (1 − 6)-pieces and 26-pieces appearing alternately. NF will produce 2K bins, while the optimal solution will only produce K + 1 bins. The analysis for the average-case performance of NF has already been conducted in the context of the bin-covering problem. Using the result of Csirik et al. (1991), we can  easily obtain NF = 1/e  0368. The proof idea is basically as follows. Let !pi  i = 1 2   " be an infinitely long list independently drawn from U 0 1 . Let M denote the number of pieces contained in an arbitrary closed bin produced by NF when being applied to this list. For any k = 1 2    and any y ∈ 0 1 , we have Prp1 + p2 + · · · + pk  y =

yk  k!

Hence, PrM = 0 = 0

PrM = 1 = Prp1 = 1 = 0




and for m = 2 3   ,

Table 3.

PrM = m  1

Pr x  p1 + · · · + pm−1  x + dx Prpm  1 − x =

Algorithm A

MXF

NF

MBF[0.71]

DP14

GLANF

 Ratio R A

1.22

1.37

1.01

1.32

1.04

=



0

1 0

xm−1 1 dx =  m − 2! mm − 2!



So, EM =

 



m=2

  1 1 = = e mm − 2! m=0 m! 

Our simulation confirms that NF  0368. The modified best-fit (MBFa ) algorithm parameterized by a where 0 < a < 1: By its name, it is modified from the best-fit (BF) algorithm for CBP. Let the open bins at any moment be 1 2     k, with their levels being x1      xk . Assume without loss of generality that x1  · · ·  xk . Let y be the new piece to be considered. The destiny of y is determined as follows: (1) If x1 + y  1 + a, put y into bin 1 and close that bin; (2) If 1  x1 + y < 1 + a and xk + y  1, open a new bin and put y into it; (3) If there is an i between 1 and k such that 1 + a > x1 + y  · · ·  xi−1 + y  1 > xi + y, put y into bin i. If a can be 0, MBFa at that a is just NF. If we define a piece y being fit into a bin with level x to be that x + y  1 + a or x + y < 1, MBFa can be described as operating in exactly the same manner as BF for CBP. The only difference is that in the latter, piece y being fit into a bin with level x is that x + y  1, and no bin will ever be closed. MBFa runs in On log n time. We have only been able to show that R MBF a  2 for any a: For any positive K, let LK be a list consisting of 2K pieces with each being 2 + a/4. Then, running MBFa results in 2K bins with every bin containing one piece, while running the optimal offline algorithm results in K bins with every bin containing two pieces. Based on the simulation results, we may draw a plot of  MBFa vs. a. The plot is presented in Figure 1. From the Figure 1.

Average-case performances of MBF[a] vs. a.

Bin Number/Piece Number

0.4 MBF OPT

0.38

Average-case performance ratios of the algorithms.

results, we know that MBFa achieves its lowest level of about 0.271 at a  071. Also, MBFa for a ∈ 1/2 3/4 is probably worth more investigating.  From the simulation studies, our estimate for MXF is  about 0.326 and GLANF is about 0.279. For the ease of comparison, we put the performances in Table 3. 5. CONCLUSIONS We introduced the NP-hard OOBP and proposed for it five algorithms: MXF, NF, MBFa , DP, and GLANF. Our worst-case analyses showed that no online algorithm could do better than 1.630297 or 1.415715 in performance ratio, depending on whether or not the 1-pieces are allowed in the list. Of all the online algorithms, only MXF has a provable worst-case performance ratio of less than 2 even when there is no 1-piece. For the offline algorithm GLANF, we showed that its worst-case performance is very good when there √is no 1-piece. In addition, we were able to show that 2− 3 is the optimal asymptotic average ratio of number of bins used over number of pieces, that DP achieves this performance in the asymptotic sense, and that this ratio is 1/e for NF. Our simulation results over the uniform distribution demonstrated that MBFa at certain as, and GLANF, possess performance ratios lower than 1.05, and they are much better than DP when the list length is in the millions. For future research, it will be worthwhile to investigate  the upper bounds for R MXF and RGLANF when the 1-pieces are present. Indeed, we conjecture that the bounds should be as tight as what we have achieved for when the 1-pieces are not present. Also, tighter bounds for the worst-case performances of arbitrary online algorithms, MXF, and GLANF in both cases, are certainly needed. Moreover, better online and offline algorithms are still needed. In terms of average-case analysis, just changing the distribution from U 0 1 to U 0 b from some b ∈ 0 1 will render most of our techniques ineffective. Therefore, more work is needed on the average-case analysis of algorithms for more general distributions.

APPENDIX






1

Proof of Theorem 3. Hoping that some of the ideas here might be used to tackle the case with the 1-pieces, we build up the framework of the proof with 1-pieces being considered as well. Let !Ln  n = 1 2   " be a series of lists. For any n, Ln is just an arbitrary list with OPT Ln  = n. Using the last type-3 or type-4 piece in Ln as a dividing point, we call the type-1 and type-2 pieces that appear before this piece in the list the early pieces, and the type-1 and type-2 pieces that appear after this piece in the list the late pieces.

768

/ Yang and Leung

After applying MXF to the list, all type-1 and type-2 pieces in closed bins are early; some type-1 open bins have some early pieces and some have purely late pieces; and some type-2 open bins have two early pieces, some have two late pieces, and only O1 type-2 open bins are of different kinds. We let: x1n be the number of closed bins containing two type-3 pieces; x2n be the number of closed bins containing one type-3 and one type-4 pieces; x3n be the number of closed bins containing just one type-4 piece; x4n be the number of closed bins containing a number of early type-1 pieces and one type-3 piece; x5n be the number of closed bins containing two early type-2 pieces and one type-3 piece; x6n be the number of closed bins containing a number of early type-1 pieces and one type-4 piece; x7n be the number of closed bins containing two early type-2 pieces and one type-4 piece; x8n be the number of type-1 open bins having some early pieces; x9n be the number of type-1 open bins having purely late pieces; n be the number of type-2 open bins having two early x10 pieces; and n be the number of type-2 open bins having two late x11 pieces. For i = 1 2     11 and j = 1 2, let anj iEL be the average total size of early (late) type-j pieces in the above ith type of bins. When we apply OPT to Ln , there can be 28 major types of closed bins and O1 other bins. We may describe each type using a 6-tuple nE1  nE2  n3  n4  nL1  nL2 , where nE1 , always taking the value A, which means a number of and possibly none, describes the number of early type-1 pieces in this type of bins; nL1 , taking the values of A and 0, describes the number of late type-1 pieces in this type of bins; and nE2 , n3 , n4 , and nL2 , taking integer values, describe the numbers of early type-2, type-3, type-4, and late type-2 pieces in this type of bins, respectively. The following list the 6-tuples of all the types: Type Type Type Type Type Type Type Type Type Type Type Type Type Type

1: A 0 2 0 0 0; Type 2: A 1 2 0 0 0; 3: A 0 1 1 0 0; Type 4: A 1 1 1 0 0; 5: A 0 1 0 0 0; Type 6: A 1 1 0 0 0; 7: A 2 1 0 0 0; Type 8: A 0 0 1 0 0; 9: A 1 0 1 0 0; Type 10: A 2 0 1 0 0; 11: A 0 0 0 0 0; Type 12: A 1 0 0 0 0; 13: A 2 0 0 0 0; Type 14: A 3 0 0 0 0; 15: A 0 1 0 A 0; Type 16: A 1 1 0 A 0; 17: A 0 1 0 A 1; Type 18: A 1 1 0 A 1; 19: A 0 1 0 A 2; Type 20: A 0 0 0 A 0; 21: A 1 0 0 A 0; Type 22: A 2 0 0 A 0; 23: A 0 0 0 A 1; Type 24: A 1 0 0 A 1; 25: A 2 0 0 A 1; Type 26: A 0 0 0 A 2; 27: A 1 0 0 A 2; Type 28: A 0 0 0 A 3.

For i = 1 2     28, let yin be the number of the above nj be the average ith type of bins and for j = 1 2, let biEL total size of early (late) type-j pieces in the above ith type of bins. By the conservation of the number of early type-2 pieces, we have n n n = y2n + y4n + y6n + 2y7n + y9n + 2y10 + y12 2x5n + 2x7n + 2x10 n n n n n n + 2y13 + 3y14 + y16 + y18 + y21 + 2y22 n n n + y24 + 2y25 + y27 + O1

(A1)

By the conservation of the number of type-3 pieces, we have n 2x1n +x2n +x4n +x5n = 2y1n +2y2n +y3n +y4n +y5n +y6n +y7n +y15 n n n n +y16 +y17 +y18 +y19 +O1

(A2)

By the conservation of the number of type-4 pieces, we have n + O1 x2n + x3n + x6n + x7n = y3n + y4n + y8n + y9n + y10

(A3)

By the conservation of the number of late type-2 pieces, we have n n n n n n n n = y17 + y18 + 2y19 + y23 + y24 + y25 + 2y26 2x11 n n + 2y27 + 3y28 + O1

(A4)

By the various conservations of piece sizes, we have 11  i=1 11  i=1 11  i=1 11  i=1 11  i=1

n an1 iE xi =

28  i=1

n1 n biE yi +O1

n2 n an1 iE +aiE xi =

n1 n an1 iE +aiL xi =

28 

(A5)

n1 n2 n biE +biE yi +O1

(A6)

28  n1 n1 n biE +biL yi +O1

(A7)

i=1

i=1

n2 n1 n an1 iE +aiE +aiL xi =

n1 n2 n an1 iE +aiL +aiL xi =

28  i=1 28  i=1

n1 n2 n1 n biE +biE +biL yi +O1 (A8)

n1 n1 n2 n biE +biL +biL yi +O1 (A9)

From the description of the algorithm, we know an1 4E  n1 n2 n1 n2 2/3 an2 5E  2/3 a6E  2/3 a7E  2/3 a8L  0 a10E  2/3, and an2 11L  2/3. From the well-known result of FF for CBP n n with piece sizes in 0 1/3, we know that an1 8E x8  3x8 /4 n1 n n + O1 and a9L x9  3x9 /4 + O1 (Johnson et al. 1974). All other unmentioned anj iEL s are 0. We can also easily obtain that n1 < 1/2 b1E

n2 b1E = 0

n1 n2 + b2E < 1/2 b2E n1 < 1/2 b3E

n1 b1L = 0

n1 b2E < 1/6

n2 b3E = 0

n2 b1L = 0*

n1 b2L = 0

n1 b3L = 0

n2 b2L = 0*

n2 b3L = 0*

Yang and Leung n1 n2 b4E + b4E < 1/2 n1 b5E < 1

n1 b4E < 1/6

n2 b5E = 0

n1 b4L = 0

n1 b5L = 0

n n n +2y26 /3 + y27 /3 + y28 /3 + O1

n2 b5L = 0*

n1 b6E < 2/3

n1 b6L = 0

n2 b6L = 0*

n1 n2 b7E + b7E < 1

n1 b7E < 1/3

n1 b7L = 0

n2 b7L = 0*

n2 b8E = 0

n1 n2 b9E + b9E < 1

n1 b9E < 2/3

n1 n2 b10E + b10E < 1 n1 b11E

< 4/3

n1 b8L = 0

n2 b11E

n1 b10L = 0

n1 b11L

n2 b11L

= 0

 y1n /2 + y2n /2 + y3n /2 + y4n /2 + y5n + y6n + y7n + y8n + y9n n n n n n n + y10 + 4y11 /3 + 3y12 /2 + 3y13 /2 + 3y14 /2 + y15 /2

= 0

n n n n n n n n + y16 /2 + y17 /2 + y18 /2 + y19 /6 + y20 + y21 + y22 + y23

n2 b9L = 0*

n1 b10E < 1/3

(A10)

n 2x4n /3 + 2x5n /3 + 2x6n /3 + 2x7n /3 + 3x8n /4 + 2x10 /3

n2 b8L = 0*

n1 b9L = 0

769

n n n n n + 2y21 /3 + y22 /3 + y23 + 2y24 /3 + y25 /3

n2 b4L = 0*

n1 n2 b6E + b6E < 1

n1 b8E < 1

/

n n n n n + y24 + y25 + 2y26 /3 + 2y27 /3 + y28 /3 + O1

n2 b10L = 0*

= 0*

n1 b12E < 1

n1 n2 b13E + b13E < 3/2

n1 b13E < 2/3

n1 b13L = 0

n2 b13L = 0*

n n n n n + 2y9n /3 + y10 /3 + 4y11 /3 + y12 + 2y13 /3 + y14 /3

n1 n2 b14E + b14E < 3/2

n1 b14E < 1/3

n1 b14L = 0

n2 b14L = 0*

n n n n n n + 5y15 /6 + y16 /2 + y17 /2 + y18 /6 + y19 /6 + 4y20 /3

n1 n1 b15E + b15L < 5/6

n1 b15E < 1/2

n2 b15E = 0

n2 b15L = 0*

n n n n n + y21 + 2y22 /3 + y23 + 2y24 /3 + y25 /3

n1 n1 b16E + b16L < 1/2

n2 b17E = 0*

 y1n /2 + y2n /2 + y3n /2 + y4n /2 + y5n + y6n + y7n + y8n + y9n n n n n n n + y10 + 4y11 /3 + 3y12 /2 + 3y13 /2 + 3y14 /2 + 5y15 /6

n1 n1 n2 b18E + b18L + b18L < 2/3

n n n n n n + 5y16 /6 + y17 /2 + y18 /2 + y19 /6 + 4y20 /3 + 4y21 /3

n1 n1 b18E + b18L < 1/6* n1 n1 n2 b19E + b19L + b19L < 1 n1 n1 b20E + b20L < 4/3

n1 n1 b19E + b19L < 1/6

n1 b20E < 1

n1 n2 n1 b21E + b21E + b21L < 4/3 n1 n1 b21E + b21L < 1

n2 b20E = 0

n1 n1 b22E + b22L < 2/3

n1 n2 n1 b24E + b24E + b24L < 1

 y1n /2 + y2n /6 + y3n /2 + y4n /6 + y5n + 2y6n /3 + y7n /3 n n n n n + y8n + 2y9n /3 + y10 /3 + 4y11 /3 + y12 + 2y13 /3 + y14 /3

n1 n2 b22E + b22E < 1

n n n n n n n + 5y15 /6 + y16 /2 + y17 + 2y18 /3 + y19 + 4y20 /3 + y21

n2 b22L = 0*

n1 n1 b23E + b23L < 1

n1 + b25L

< 1

n n n n n + 2y22 /3 + 3y23 /2 + 7y24 /6 + 5y25 /6 + 3y26 /2

n2 b23E = 0*

n n + 7y27 /6 + 3y28 /2 + O1

n1 n1 n2 b24E + b24L + b24L < 7/6

n1 n2 + b25L + b25L

< 5/6

28  i=1

n1 n2 n1 b27E + b27E + b27L < 2/3

n1 n1 n2 b27E + b27L + b27L < 7/6

n x2n + x3n + x6n + x7n + y3n + y4n + y8n + y9n + y10 = 0

n1 b28E

Hence, 11 n i=1 xi 28 n i=1 yi

< 1/3*

n1 n2 + b28L + b28L

< 3/2

n1 + b28L

< 1/3

n2 b28E

= 0

Combining these with Equations (A5)–(A9), we obtain 2x4n /3 + 2x6n /3 + 3x8n /4  y1n /2 + y2n /6 + y3n /2 + y4n /6 + y5n n + 2y6n /3 + y7n /3 + y8n + 2y9n /3 + y10 /3 n n n n n + 4y11 /3 + y12 + 2y13 /3 + y14 /3 + y15 /2 n n n n n + y16 /6 + y17 /2 + y18 /6 + y19 /6 + y20

(A15)

When the 1-pieces are not present, we further have

n1 n1 b26E + b26L < 2/3

n1 b28E

n2 b26E = 0*

yin = n + O1

n1 n1 n2 b26E + b26L + b26L < 3/2

n1 + b27L

(A14)

By the definition in the beginning, we also have n1 b25E

n1 n1 b25E + b25L < 1/3*

n1 b27E

(A13)

n 2x4n /3 + 2x6n /3 + 3x8n /4 + 3x9n /4 + 2x11 /3

n1 n1 b24E + b24L < 2/3* n2 + b25E

n + y28 /3 + O1

n2 b20L = 0*

n2 b21L = 0*

n1 b22E < 1/3

n1 n1 n2 b23E + b23L + b23L < 3/2

n n n n n n + 4y22 /3 + y23 + y24 + y25 + 2y26 /3 + 2y27 /3

n2 b19E = 0*

n1 n2 b21E + b21E < 1

n1 b21E < 2/3

n1 n2 n1 b22E + b22E + b22L < 4/3

n1 b25E

(A12)

n 2x4n /3 + 2x5n /3 + 2x6n /3 + 2x7n /3 + 3x8n /4 + 3x9n /4 + 2x10 /3

n2 b16L = 0*

n1 n1 b17E + b17L < 1/2

n1 n2 n1 b18E + b18E + b18L < 1/2

n n n + 2y26 /3 + y27 /3 + y28 /3 + O1

n1 n2 b16E + b16E < 1/2

n1 b16E < 1/6

n1 n1 n2 b17E + b17L + b17L < 1

n2 b12L = 0*

 y1n /2 + y2n /6 + y3n /2 + y4n /6 + y5n + 2y6n /3 + y7n /3 + y8n

n1 n2 b12E + b12E < 3/2

n1 n2 n1 b16E + b16E + b16L < 5/6

n1 b12L = 0

(A11)

2x4n /3 + 2x6n /3 + 3x8n /4 + 3x9n /4

(A16)

 n n n is upper-bounded by ! 11 i=1 xi  xi  0 i = 1     11 yi  0 i = 1     28 (A1)–(A4), (A10)–(A11), and (A13)– (A16)"/n The limit of this upper bound is another linear program of the apparent form. By the simplex method, we find its solution value to be 35/18.  We believe R MXF  35/18 even when there are 1-pieces in the list. However, without being enforced by (A16), the

770

/ Yang and Leung

linear program gives the result of 22/9  24444 and a solution vector indicating that the valid constraints we have come up with so far have not captured the delicate timing of the appearances of the 1-pieces in both the MXF and OPT packings. Therefore, it calls for more subtle arguments than our existing ones to prove the result for the case with 1-pieces. Also, letting the underlying CBP algorithm be FF instead of NF is important because only the former allows n1 the lower bound of 3/4 relevant to an1 8E and a9L . When FF is replaced by NF, we can only achieve a lower bound of 2 using similar arguments. Proof of Theorem 6. The idea here is to have 20K pieces around 1/6, 20K pieces around 1/3, 20K pieces around 1/2, and 20K pieces of size 1 − 6. GLANF will result in 27K bins where there are 4K bins each containing five 1/6-pieces and one 1 − 6-piece, 10K bins each containing two 1/3-pieces and one 1 − 6-piece, 6K bins each containing one 1/2-piece and one 1 − 6-piece, and 7K bins each containing two 1/2-pieces. On the other hand, the optimal algorithm will result in 20K + 1 bins where virtually every bin contains one 1/6-piece, one 1/3-piece, one 1/2-piece, and one 1 − 6-piece. Our choice of the fractional pieces following Johnson et al. (1974) guarantees these outcomes. Let LK be a list consisting of 80K pieces. For k = 1     2K, p30k−29 = 1/6 + 33/18K+k 

p30k−28 = 1/6 − 3/18K+k 

p30k−27 = p30k−26 = 1/6 − 7/18K+k  p30k−25 = 1/6 − 13/18K+k  p30k−24 = 1/6 + 9/18K+k  p30k−23 = p30k−22 = p30k−21 = p30k−20 = 1/6 − 2/18K+k  p30k−19 = 1/3 + 46/18K+k  p30k−18 = 1/3 − 34/18K+k  p30k−17 = p30k−16 = 1/3 + 6/18K+k  p30k−15 = 1/3 + 12/18K+k  p30k−14 = 1/3 − 10/18K+k  p30k−13 = p30k−12 = p30k−11 = p30k−10 = 1/3 + 1/18K+k  and for i = 1     10, p30k−10+i = 1/2 + 1/183K  For i = 60K + 1     80K pi = 1 − 6. Applying GLANF to LK results in 27K bins. For k = 1     2K, bin 10k − 9 contains p30k−29  p30k−28  p30k−27  p30k−26 , p30k−25 , and p60K+10k−9 ; bin 10k − 8 contains p30k−24  p30k−23  p30k−22  p30k−21 , p30k−20 , and p60K+10k−8 ; for i = 1     5, bin 10k − 8 + i contains p30k−21+2i  p30k−20+2i , and p60K+10k−8+i ; and for i = 1 2 3, bin 10k − 3 + i contains p30k−10+i and p60K+10k−3+i . And, for k = 1     7K, bin 20K + k contains p23+k and p53+k . On the other hand, the optimal algorithm results in only 20K + 1 bins. In this solution, for k = 1     2K, bin 10k − 9 contains p30k−29  p30k−18  p30k−9 , and p60K+10k−9 ; for k = 1     2K and i = 2     9, bin 10k − 10 + i contains p30k−30+i+1  p30k−20+i+1  p30k−10+i , and p60K+10k−10+i ; for k = 1     2K − 1, bin 10k contains p30k−28  p30k+11  p30k , and p60K+10k , bin 20K contains p11 ,

p_{60K−28}, and p_{60K}, and bin 20K + 1 contains p_{80K}. Therefore, R_GLANF^∞ ≥ 27/20. □

ACKNOWLEDGMENTS

The authors thank Area Editor Rolf Moehring, the associate editor, and two anonymous referees for their comments, which greatly improved the paper. They also acknowledge two referees for pointing out directions for further investigation and errors in early versions of the paper. The first author is supported, in part, by New Jersey Institute of Technology under Grant No. 4-21830 and the National Center for Transportation and Industrial Productivity under Grant No. 9-92518.

REFERENCES

Coffman, E. G., Jr., G. S. Lueker. 1991. Probabilistic Analysis of Packing and Partitioning Algorithms. John Wiley and Sons, New York.
Coffman, E. G., Jr., M. R. Garey, D. S. Johnson. 1996. Approximation algorithms for bin packing: A survey. D. Hochbaum, ed. Approximation Algorithms for NP-Hard Problems. PWS Publishing Company, Boston, MA.
Csirik, J., J. B. G. Frenk, G. Galambos, A. H. G. Rinnooy Kan. 1991. Probabilistic analysis of algorithms for dual bin packing problems. J. Algorithms 12 189–203.
Johnson, D. S., A. Demers, J. D. Ullman, M. R. Garey, R. L. Graham. 1974. Worst-case performance bounds for simple one-dimensional packing algorithms. SIAM J. Comput. 3(4) 299–325.
Karmarkar, N. 1982. Probabilistic analysis of some bin-packing algorithms. Proc. 23rd Annual Sympos. on Foundations of Comput. Sci. IEEE Computer Society Press, Los Alamitos, CA, 107–111.
Knodel, W. 1981. A bin packing algorithm with complexity O(n log n) and performance 1 in the stochastic limit. Lecture Notes Comput. Sci. 118 369–378. http://www.springer.de/comp/lncs/.
Leung, J. Y.-T., M. Dror, G. H. Young. 2001. A note on an open-end bin packing problem. J. Scheduling 4 201–207.
Liang, F. M. 1980. A lower bound for on-line bin packing. Inform. Processing Lett. 10 76–79.
Loulou, R. 1984. Probabilistic behavior of optimal bin-packing solutions. Oper. Res. Lett. 3(3) 129–135.
Patel, J. K., C. B. Read. 1982. Handbook of the Normal Distribution. Marcel Dekker, New York.
Steele, J. M. 1997. Probability Theory and Combinatorial Optimization. Society for Industrial and Applied Mathematics, Philadelphia, PA.
Van Vliet, A. 1995. Lower and upper bounds for on-line bin packing and scheduling heuristics. Ph.D. thesis, Erasmus University, Rotterdam, The Netherlands.
Van Vliet, A. 1996. On the asymptotic worst case behavior of harmonic fit. J. Algorithms 20 113–136.
Yao, A. C. 1980. New algorithms for bin packing. J. ACM 27 207–227.
