2010 Seventh International Conference on Information Technology
Efficient Data Centers, Cloud Computing in the Future of Distributed Computing
Doina Bein
Wolfgang Bein
Shashi Phoha
Applied Research Laboratory The Pennsylvania State University University Park, PA 16802, USA
[email protected]
School of Computer Science University of Nevada, Las Vegas Las Vegas, NV 89154, USA
[email protected]
Applied Research Laboratory The Pennsylvania State University University Park, PA 16802, USA
[email protected]
Cloud infrastructures are built on servers with different levels of virtualization. Special software tools allow a single physical machine to be seen as multiple independent machines. In this way, a server can increase its utilization up to 80%, compared with the average of 16% obtained currently. Self-management of such servers will involve controlling power usage, distributed data sharing, and failure detection and correction. Mobile data is expected to increase 14-fold by 2014, due to audio and video streaming [6] and increased demand for maps, sales, data sharing, social networking, and online gaming. Social networks are a very recent venue for cloud computing. Facebook, with over 140 million users (over 70% of current users residing outside the US) and 600,000 new users daily, recently added a new data center in Ashburn, Virginia, to its current facility in Santa Clara, California [3]. Gaming is another venue for cloud computing. Advanced Micro Devices (AMD) is planning to build in Burbank, California, the world’s fastest commercial supercomputer, with a projected one million compute threads across over 1000 graphics processing units (GPUs), which translates into one petaflop [4]. The supercomputer, dubbed “Fusion Render Cloud”, will run the graphics rendering software produced by OTOY, a California-based software company that produces technology for delivering real-time 3D rendering and animation through the browser [5]. The game graphics will be computed and preprocessed on the servers, and the compressed result will be sent over the Internet, so online gamers will not need powerful graphics cards in their computers yet will receive as much graphical detail as the available bandwidth can handle. One will even be able to play a game on a mobile phone by receiving less data per frame, but still at a rate of 30-50 frames per second. Improving the energy efficiency of such data centers is of critical importance today.
We study the cost of storing vast amounts of data on the servers in a data center, and we propose a cost measure together with an algorithm that minimizes this cost. The memory of a server is of fixed size, so we can consider it to be of unit size 1. The data center receives requests online (one request at a time) to store large chunks of data on its servers. We assume that the size of a chunk is strictly greater than 0. The memory of a server in a data center can thus be considered a bin of size 1, and the server stores various items such that the total size of the stored items cannot exceed the memory size of 1. For serving an online sequence of requests, we consider two algorithms: the algorithm HARMONICM of [7] and the algorithm CCHk (CARDINALITY CONSTRAINED HARMONICk) of [8].
Abstract—Large corporations such as Amazon, Google, Microsoft, and Yahoo use data centers to keep up with the growing demand for communication-hungry Internet services like image and video sharing, social networking, and searching. We study the problem of allocating the memory of servers in a data center based on online requests for storage. Given an online sequence of storage requests and a cost associated with serving each request by allocating space on a certain server, we use two efficient algorithms for selecting the minimum number of servers and the minimum total cost. We show that both algorithms perform almost optimally when the requests have totally random values.
Keywords- data center, server, storage request, bin packing, online algorithm
I. INTRODUCTION
Our society today experiences an explosive growth of Web-based applications based on cloud computing. Echoing the era when large and expensive computers allowed users to rent computation power for their terminals, and in accordance with Moore’s Law, processors and memory are very cheap when they are bought, run, and maintained by the thousands. Carr’s prediction about “computing that’s turning into a utility” [1] parallels the way electricity re-shaped our society and economy during the industrial revolution. This shift in the way business is conducted is motivated by the reduction of latency - the time for information to be exchanged between the user’s browser and the URL. By outsourcing database software to Internet companies that sell subscriptions to services, and moving into the data centers of the Internet service providers themselves, the latency is down to the order of a millionth of a second [2]. The majority of cloud computing infrastructures as of 2009 consist of reliable services - such as storage services, spam filtering, and running any application as long as one can specify it in Python and use Google's database - delivered through data centers and built on servers with different levels of virtualization.
978-0-7695-3984-3/10 $26.00 © 2010 IEEE DOI 10.1109/ITNG.2010.31
For a given input sequence R of n items, let A(R) (or simply A) be the number of bins used by algorithm A on the given request sequence R. Let OPT(R) (or simply OPT) be the number of bins used by an optimal offline algorithm, which knows the complete sequence R in advance. The asymptotic performance ratio of an algorithm A is defined as
R∞(A) = lim supn→∞ supR { A(R)/OPT(R) : OPT(R) = n }.
There are numerous papers regarding online bin packing under competitive analysis. Johnson [13] showed that the NEXT FIT algorithm has performance ratio 2 and Johnson et al. [14] showed that the FIRST FIT algorithm has performance ratio 17/10, whereas Yao [15] showed that REVISED FIRST FIT has performance ratio 5/3. Later, Lee and Lee [7] presented more complex online algorithms. One algorithm, called HARMONICM, partitions the items into M > 1 classes and uses bounded space of at most M − 1 open bins. Given a positive integer M > 1, the interval (0,1] can be partitioned into M subintervals as follows:
• For each k, 1 ≤ k < M, let Ik = (1/(k+1), 1/k]
• Let IM = (0, 1/M]
For example, when M = 5 the interval (0,1] is partitioned into 5 intervals as follows:
(0,1] = (0, 1/5] ∪ (1/5, 1/4] ∪ (1/4, 1/3] ∪ (1/3, 1/2] ∪ (1/2, 1]
An item ai is called an Ik-piece if ai ∈ Ik, 1 ≤ k ≤ M. If a bin contains only Ik-pieces then it is called an Ik-bin, for any 1 ≤ k ≤ M. Thus we can classify the bins into M categories (types of bins). We note that an Ik-bin can pack at most k Ik-pieces, for any 1 ≤ k < M; an IM-bin can pack any number of IM-pieces. An Ik-bin is considered filled if it has exactly k Ik-pieces, for any k, 1 ≤ k < M; otherwise it is considered unfilled. HARMONICM uses O(1) space and has O(n) time complexity. Lee and Lee [7] showed that there is no bounded-space algorithm with a performance ratio below Π∞.
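The interval classification above can be sketched as follows (a hypothetical helper, not part of the original algorithm; floating-point behavior exactly at interval boundaries is glossed over):

```python
def interval_class(a: float, M: int) -> int:
    """Return k such that item size a is an I_k-piece: a in (1/(k+1), 1/k]
    for 1 <= k < M, and a in (0, 1/M] for k = M.  Since a in (1/(k+1), 1/k]
    implies 1/a in [k, k+1), k is simply floor(1/a), capped at M."""
    if not 0 < a <= 1:
        raise ValueError("item size must be in (0, 1]")
    return min(int(1 / a), M)
```

For instance, with M = 5 an item of size 0.3 lies in (1/4, 1/3] and is therefore an I3-piece.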
For any ε > 0, there is a number M such that the algorithm HARMONICM with M categories of bins has a performance ratio of at most (1 + ε)Π∞ [7], where Π∞ ≈ 1.69103.
These two algorithms, HARMONICM of [7] and CCHk (CARDINALITY CONSTRAINED HARMONICk) of [8], aim to select the minimum number of servers and the minimum total cost for a given sequence of storage requests. These algorithms are online, i.e., they receive the requests one at a time and they have no knowledge of future requests or of the distribution of values of future requests. An offline algorithm knows the entire sequence R a priori. We modify both algorithms to handle storage requests that are larger than the memory of a single server. A request for a bin of size larger than 1 is treated as a sequence of requests in which the first is the fractional part of the request and the rest are requests for bins of size 1. For example, if a request for a bin of size 5.4 arrives, then the sequence of requests generated is for bins of size 0.4, 1, 1, 1, 1, 1. Our simple extension of the algorithm HARMONICM, called HLR (HARMONICM with Large Requests), has the same approximation ratio as HARMONICM. Our simple extension of the algorithm CARDINALITY CONSTRAINED HARMONICk, called CCHLR (CARDINALITY CONSTRAINED HARMONICk with Large Requests), has the same approximation ratio as CARDINALITY CONSTRAINED HARMONICk. The paper is organized as follows. In Section 2 we present the classical bin packing problem and some extensions used in the paper. In Section 3 we present the two algorithms HLR and CCHLR for serving online storage requests for the servers in a data center. In Section 4 we compare the performance of the two algorithms when the number of requests and the maximum number of requests per server memory vary. We conclude in Section 5.
II. BIN PACKING
Classic bin packing (see e.g. [9,10,11,12]) is a well-studied problem, similar to the knapsack problem, with numerous applications in memory paging and multiprocessor systems. In the online bin packing problem, we are given a sequence R = { a1, a2, …, an } of n items to be stored, arriving one by one. The values ai represent the exact sizes of the items and are in the interval (0,1]. We have no a priori knowledge of the size of an item before receiving it. We have an infinite supply of bins, each of unit size. Upon arrival, an item is assigned to a bin, with the constraint that the sum of the items in a bin must not exceed the bin’s unit capacity. A bin is empty if it stores no item; otherwise it is considered used. A bin is considered full if the total size of the items stored in it is 1. The goal is to minimize the number of bins used. In the cardinality constrained bin packing problem, the number of items that can be stored in a bin must not exceed a value M. By restricting the number of items stored in a bin, we might end up with used bins that are not full. Many online algorithms have been proposed for the classical bin packing problem and the cardinality constrained bin packing problem. The efficiency of such algorithms is measured in terms of the so-called asymptotic competitive ratio.
Π∞ ≈ 1.69103 is the sum of the series Π∞ = Σi=1..∞ 1/(πi − 1), where the series πi, i ≥ 1, is defined as π1 = 2 and πi+1 = πi(πi − 1) + 1, for any i ≥ 1. Thus π2 = 3, π3 = 7, π4 = 43, etc. The second algorithm of Lee and Lee [7], called REFINED-HARMONIC, has a ratio below 1.636 but its space complexity is O(n). Seiden [16] further improves upon the work of Lee and Lee and gives an algorithm HARMONIC++ with asymptotic performance ratio at most 1.58889.
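Because πi grows doubly exponentially, the series converges after only a handful of terms; a short numerical check (our own sketch, not part of the paper) illustrates this:

```python
def pi_infinity(terms: int = 8) -> float:
    """Sum the first `terms` terms of the series Π∞ = Σ 1/(π_i − 1),
    with π_1 = 2 and π_{i+1} = π_i(π_i − 1) + 1.
    The sequence π_i runs 2, 3, 7, 43, 1807, ... so later terms are tiny."""
    total, p = 0.0, 2
    for _ in range(terms):
        total += 1 / (p - 1)
        p = p * (p - 1) + 1
    return total
```

Already the first five terms, 1 + 1/2 + 1/6 + 1/42 + 1/1806, give the quoted value Π∞ ≈ 1.69103 to five decimal places.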
Recall that for a large item the second, third, etc. requests are for bins of size 1, and the first request is for the rest (see Section 1).
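This splitting rule can be sketched as follows (a minimal helper under our reading of the text; the pass-through of requests of size at most 1 and the handling of exactly integral sizes are our assumptions):

```python
import math

def split_request(size: float) -> list:
    """Split a request of size > 1 into the sequence described in the text:
    the fractional 'rest' piece first, then B - 1 unit requests, where
    B = ceil(size).  E.g. 5.4 -> [0.4, 1, 1, 1, 1, 1].
    Requests of size <= 1 are returned unchanged."""
    if size <= 1:
        return [size]
    B = math.ceil(size)
    first = size - (B - 1)   # the 'rest' piece; equals 1.0 if size is integral
    return [first] + [1.0] * (B - 1)
```

The pieces always sum back to the original request size, and exactly B bins are touched, matching the ⌈ai⌉ = B bookkeeping in the pseudocode.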
In cardinality constrained bin packing, the number of items in any bin is bounded. Since any Ik-bin can pack at most k pieces for any 1 ≤ k < M, this restriction only adds the requirement that the IM-bins pack at most M pieces.
Algorithm HARMONICM with Large Requests (HLR)
Epstein [8] proposed an online algorithm called CARDINALITY CONSTRAINED HARMONICk (CCHk) for cardinality constrained bin packing; this algorithm is an adaptation of the algorithm HARMONICM of Lee and Lee [7]. The algorithm CCHk has O(1) space complexity and its competitive ratio is a strictly increasing function of k. When k is large, the competitive ratio approaches 1 + Π∞ ≈ 2.69103. We recall that Π∞ ≈ 1.69103 is the best competitive ratio shown by Lee and Lee [7] for bin packing with bounded (O(1)) space. We note that there are also a number of results regarding lower bounds on the performance ratio of any online algorithm for bin packing. Van Vliet [17] shows that no online algorithm for bin packing can have an asymptotic performance ratio better than 1.54014 - the best bound known to date. More recently, Epstein [18,19] has considered online bin packing with rejection; in this version of the problem items can be rejected for a cost. Offline versions of this problem have recently received much attention due to its practical applicability (see Bein et al. [20,21]). Offline algorithms for cardinality constrained bin packing were given by Krause et al. [22,23]. Babel et al. [24] designed a simple online algorithm with competitive ratio 2 for any value of k, improved algorithms for k=2 and k=3 with competitive ratios 1+√5/5 ≈ 1.44721 and 1.8, respectively, and also proved an almost matching lower bound of √2 ≈ 1.41421 for k=2.
Initialize: For k = 1 to M+1 do mk = 0.
Main algorithm: When an item ai is received do
1. If ai is an Ik-piece, 1 ≤ k < M, then ai will be placed into an Ik-bin:
1.1 If the current Ik-bin is unfilled, then ai is placed into the current Ik-bin.
1.2 Else increment mk and get a new Ik-bin.
2. If ai is an IM-piece, then ai will be placed into an IM-bin:
2.1 If ai fits into the current IM-bin, then ai is placed into the current IM-bin.
2.2 Else increment mM and get a new IM-bin.
3. If ai is a large item such that ⌈ai⌉ = B, then:
3.1 Break ai into a sequence of requests ai = {ai1, ai2, …, aiB}.
3.2 The first item of this sequence, ai1, is packed according to Step 1 or Step 2, and the rest of the items of this sequence are placed into IM+1-bins, setting mM+1 = mM+1 + B − 1.
For the cardinality constrained bin packing problem, the input consists of the sequence R and a positive integer M that represents the maximum number of items that fit in a bin. The goal is to partition R into a number of bins S1, …, Sp such that p is minimized and, for any 1 ≤ i ≤ p, Σj∈Si aj ≤ 1 and |Si| ≤ M.
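A runnable sketch of HLR follows (our own rendering of the pseudocode above; the class name, the open-bin bookkeeping, and the simplified boundary handling in the classification are assumptions, not the authors' code):

```python
import math

class HLR:
    """Sketch of HARMONIC_M with Large Requests.
    m[k] counts closed I_k-bins for 1 <= k <= M; m[M+1] counts the unit
    bins used by the tails of large requests."""

    def __init__(self, M: int):
        self.M = M
        self.m = [0] * (M + 2)            # closed-bin counters m_1 .. m_{M+1}
        self.open_count = [0] * (M + 1)   # items in the open I_k-bin, k < M
        self.open_load = 0.0              # load of the open I_M-bin

    def _place_small(self, a: float) -> None:
        """Place an item of size in (0, 1] (Steps 1 and 2)."""
        k = min(int(1 / a), self.M)       # interval class of the item
        if k < self.M:
            if self.open_count[k] == k:   # current I_k-bin is filled
                self.m[k] += 1
                self.open_count[k] = 0
            self.open_count[k] += 1
        else:
            if self.open_load + a > 1:    # item does not fit the open I_M-bin
                self.m[self.M] += 1
                self.open_load = 0.0
            self.open_load += a

    def place(self, a: float) -> None:
        if a <= 1:
            self._place_small(a)
        else:                             # Step 3: large item, B = ceil(a)
            B = math.ceil(a)
            self._place_small(a - (B - 1))   # the fractional 'rest' piece
            self.m[self.M + 1] += B - 1      # B - 1 unit bins

    def bins_used(self) -> int:
        open_bins = sum(1 for c in self.open_count[1:self.M] if c > 0)
        open_bins += 1 if self.open_load > 0 else 0
        return sum(self.m[1:]) + open_bins
```

For example, three items of size 0.6 are I1-pieces and require three bins, while three items of size 0.3 are I3-pieces and share one bin.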
III. ALGORITHMS HLR AND CCHLR
In the classical bin packing problem, we are given n items of size strictly greater than 0 and at most 1, which need to be assigned to bins of size 1. Each bin contains items of total size at most 1. The goal is to minimize the number of bins used. The problem is defined formally as follows: the input sequence R = {a1, a2, …, an} of n items needs to be partitioned into a number of bins S1, …, Sp such that Σj∈Si aj ≤ 1 for any 1 ≤ i ≤ p, and p is minimized.
Theorem 1. Algorithm HLR has the asymptotic approximation ratio Π∞ ≈ 1.69103… .
B. Algorithm CCHLR
The algorithm CCHLR uses the same notation as the algorithm HLR, with some additions. The weight of an item x, denoted by w(x), is defined below.
A. Algorithm HLR We use the following notations. Let mk be the number of Ik-bins, 1 ≤ k ≤ M, and let mM+1 be the number of bins of unit size needed for requests of size larger than 1. Initially the mk are all 0, and each is incremented every time a bin is filled. The algorithm HARMONICM puts an item into the corresponding bin if the size of the item is no larger than 1, and keeps at all times an unfilled Ik-bin for each 1 ≤ k ≤ M. In the algorithm HLR, we do the same for items of size no larger than 1; if an item is large (it has size greater than 1), then we break it into a sequence of requests in which the second, third, and subsequent requests are for bins of size 1, and the first request is for the remaining fractional part.
w(x) = 1/k if x is an Ik-piece (1 ≤ k < M); 1/M if x is an IM-piece; 1 if x is a large item.
If an Ik-bin contains k items then it is considered closed; otherwise it is considered open. Let sk count the number of items in the current Ik-bin, 1 ≤ k < M.
The algorithm CCHLR is presented next.
The second half of the sequence, the items { aN/2+1, …, aN }, is constructed from the first half as follows: ai+N/2 = ⌊ai⌋ + 1 − ai, for each i, 1 ≤ i ≤ N/2 (the items ai+N/2 and ai together fill exactly ⌊ai⌋ + 1 bins). For example, when N=20, in the sequence of requests {4.45, 4.25, 3.7, 3.85, 0.1, 2.6, 4.85, 3.4, 4.55, 1.35, 0.55, 0.75, 0.3, 0.15, 0.9, 0.4, 0.15, 0.6, 0.45, 0.65}, the first half {4.45, 4.25, 3.7, 3.85, 0.1, 2.6, 4.85, 3.4, 4.55, 1.35} is generated randomly and the second half {0.55, 0.75, 0.3, 0.15, 0.9, 0.4, 0.15, 0.6, 0.45, 0.65} is obtained from the first half in such a way that the items a1 and a11 fit exactly in ⌈a1⌉ bins, the items a2 and a12 fit exactly in ⌈a2⌉ bins, etc. For a given sequence R (of length N) and a given M, let BHLR be the total number of bins obtained by applying the algorithm HLR, let BCCHLR be the total number of bins obtained by applying the algorithm CCHLR, and let BOPT be the minimum number of bins in which the items can be packed. To measure the performance of the algorithms HLR and CCHLR we compute the ratios BHLR/BOPT and BCCHLR/BOPT; the smaller the ratio, the better the performance of that algorithm. We compute the ratios BHLR/BOPT and BCCHLR/BOPT for M=5 (see Fig. 1).
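This construction can be sketched as follows (our own sketch; the function names and the exact uniform sampling on (0, 5] are assumptions based on the description above):

```python
import math
import random

def make_sequence(N: int) -> list:
    """Build a request sequence of length N with a known optimum:
    the first half is drawn uniformly from (0, 5]; each second-half item
    complements its partner so that a_i + a_{i+N/2} fills exactly
    floor(a_i) + 1 unit bins."""
    first = [random.uniform(1e-9, 5.0) for _ in range(N // 2)]
    second = [math.floor(a) + 1 - a for a in first]
    return first + second

def optimum_bins(seq: list) -> int:
    """B_OPT for such a paired sequence: each pair (a_i, a_{i+N/2})
    occupies exactly floor(a_i) + 1 bins."""
    half = len(seq) // 2
    return sum(math.floor(a) + 1 for a in seq[:half])
```

For instance, the pair 4.45 and 0.55 from the example above sums to 5 and fills exactly ⌊4.45⌋ + 1 = 5 unit bins.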
Algorithm CARDINALITY CONSTRAINED HARMONICM with Large Requests (CCHLR)
Initialize: For k = 1 to M+1 do mk = 0 and sk = 0.
Main algorithm: When an item ai is received do
1. If ai is an Ik-piece, 1 ≤ k < M, then ai will be placed into an Ik-bin:
1.1 If the current Ik-bin is not closed, then ai is placed into the current Ik-bin and sk is incremented.
1.2 Else increment mk, set sk to 0, and get a new Ik-bin.
2. If ai is an IM-piece, then ai will be placed into an IM-bin:
2.1 If ai fits into the current IM-bin and the bin holds fewer than M items, then ai is placed into the current IM-bin and sM is incremented.
2.2 Else increment mM, set sM to 0, and get a new IM-bin.
3. If ai is a large item such that ⌈ai⌉ = B, then:
3.1 Break ai into a sequence of requests ai = {ai1, ai2, …, aiB}.
3.2 The first item of this sequence, ai1, is packed according to Step 1 or Step 2, and the rest of the items of this sequence are placed into IM+1-bins, setting mM+1 = mM+1 + B − 1.
Theorem 2. Algorithm CCHLR has the asymptotic approximation ratio 1 + Π∞ ≈ 2.69103… .
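The essential difference from HLR is the cardinality constraint on the IM-bins: a new bin is opened not only when an item no longer fits, but also when the open bin already holds M items. A self-contained sketch of just this part (our own illustration; the function name is hypothetical):

```python
def cchlr_im_bins(items: list, M: int) -> int:
    """Pack I_M-pieces (sizes in (0, 1/M]) the CCHLR way: close the open bin
    when the next item would overflow it OR it already holds M items.
    Returns the total number of bins used."""
    bins = 0                 # closed I_M-bins
    load, count = 0.0, 0     # state of the open bin
    for a in items:
        assert 0 < a <= 1 / M, "not an I_M-piece"
        if count == M or load + a > 1:   # cardinality or capacity exceeded
            bins += 1
            load, count = 0.0, 0
        load += a
        count += 1
    return bins + (1 if count > 0 else 0)
```

With M = 5, twelve items of size 0.1 need three bins under the cardinality constraint (5 + 5 + 2 items), even though by volume alone two bins would suffice.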
IV. SIMULATION RESULTS
We compare the performance of the two algorithms using various values for the number of requests N, the maximum number of items that can go into a bin M, and various sets of requests. We first consider N to take values in the set {20, 100, 200, 300, 400, 500, 600, 700, 800, 900, 1000} and M to take values in the set {5, 10, 15}. For each value of N and each value of M we generate 100 sequences of requests with random real values in the interval (0,5]. Since determining the optimal bin packing is NP-hard, we construct these sequences in a special way that allows us to compute the optimum (minimum) number of bins. Each sequence of length N is constructed as follows: the first half of the sequence, the items { a1, a2, …, aN/2 }, takes random real values in the range (0,5].
Figure 1. M=5 and N takes values between 20 and 1000
In Fig. 2 the ratios BHLR/BOPT and BCCHLR/BOPT are shown for M=10, and in Fig. 3 they are shown for M=15. We note that HLR has a better approximation ratio than CCHLR, but the values of these ratios decrease slowly to 1.09. This shows that, for this type of sequence, in which the first half is generated at random and the second half is computed from the first, both algorithms perform very well
in practice for small sequences of requests, when the maximum number of items per bin is small. We can conclude that, for small sequences of requests and a small number of items allowed per server memory, both algorithms perform very well.
Secondly, we consider N to take larger values, in increments of 1000 in the range [1000, 20000], and M to take values in the set {100, 500, 1000}. We note that BHLR/BOPT and BCCHLR/BOPT decrease very slowly from the value 1.089… to the value 1.088… . Due to space restrictions, only the plots for N=1000 have been included (see Fig. 4). HLR has a very slightly better approximation ratio than CCHLR (the difference between the two is on the order of 10−3) and the values of these ratios decrease slowly to 1.088, which shows that for this type of sequence both algorithms perform very well in practice for large sequences of requests, when the maximum number of items per bin is also large. Thus, for large sequences of requests and many items allowed per server memory, both algorithms perform very well.
Figure 2. M=10 and N takes values between 20 and 1000
Figure 4. N=1000 and M takes values between 5 and 100
For a given value of N, we let M vary to measure how the ratios BHLR/BOPT and BCCHLR/BOPT change as M increases, i.e., as more items are allowed in a bin. Fig. 5 depicts the case when N=11000.
Figure 3. M=15 and N takes values between 20 and 1000
We will see next that these conclusions also apply when the number of requests is large (thousands of requests). For a given N, we let M vary in order to measure how the ratios BHLR/BOPT and BCCHLR/BOPT change as M increases, i.e., as more items are allowed in a bin. Fig. 4 depicts the case when N=1000. We note that BHLR/BOPT varies slightly around the value 1.09, while BCCHLR/BOPT varies slightly around the value 1.089. This shows that CCHLR slightly outperforms HLR in this setting.
Figure 5. N=11000 and M takes values between 100 and 1000
V. CONCLUSIONS
We modified the algorithms HARMONICM [7] and CCHk (CARDINALITY CONSTRAINED HARMONICk) [8] to handle storage requests that are larger than the memory of a single server. Our simple extensions, the algorithms HLR (HARMONICM with Large Requests) and CCHLR (CARDINALITY CONSTRAINED HARMONICk with Large Requests), have the same approximation ratios as the original algorithms and are shown to perform very well in practice for sequences in which the first half is generated at random and the second half is computed from the first. As future work we propose to investigate the case where the online requests are served by two or more servers that share the bins. Another avenue of research is to make the cost of storing a chunk on a server depend on the number of items already stored on that server. Recently [25], the cost of a bin has been considered as a non-decreasing, concave function of the number of items stored in that bin.
ACKNOWLEDGMENT
This material is based upon work supported by the U. S. Army Research Laboratory and the U. S. Army Research Office under the eSensIF MURI Award No. W911NF-07-10376. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the sponsor.
REFERENCES
[1] N. Carr, “The Big Switch: Our New Digital Destiny”, Norton, February 2008.
[2] R. Avent, “The Geography of Cloud Computing”, Seeking Alpha, June 15, 2009; http://seekingalpha.com/article/143305-the-geography-of-cloud-computing
[3] R. Miller, “Facebook Expands Data Center Space”, October 18, 2007; http://www.datacenterknowledge.com/archives/2007/10/18/facebook-expands-data-center-space/
[4] P. Ross, “Cloud Computing’s Killer App: Gaming”, IEEE Spectrum Magazine, March 2009, p. 14.
[5] M. Hendrickson, “AMD and OTOY working together on fastest supercomputer ever”, TechCrunch, Jan. 8, 2009; http://www.techcrunch.com/2009/01/08/amd-and-otoy-working-together-on-fastest-supercomputer-ever/
[6] S. Cherry, “Forecast for cloud computing: up, up and away”, IEEE Spectrum, vol. 46, no. 10, p. 68, 2009.
[7] C.C. Lee and D.T. Lee, “A simple on-line bin-packing algorithm”, Journal of the ACM, vol. 32, no. 3, pp. 562-572, 1985.
[8] L. Epstein, “Online bin packing with cardinality constraints”, SIAM Journal on Discrete Mathematics, vol. 20, no. 4, pp. 1015-1030, 2006.
[9] J. D. Ullman, “The performance of a memory allocation algorithm”, Technical Report 100, Princeton University, Princeton, NJ, 1971.
[10] E. G. Coffman, M. R. Garey, and D. S. Johnson, “Approximation algorithms for bin packing: a survey”, in D. Hochbaum (ed.), Approximation Algorithms, PWS Publishing Company, 1997.
[11] E. G. Coffman Jr. and J. Csirik, “Performance guarantees for one-dimensional bin packing”, in T. F. Gonzalez (ed.), Handbook of Approximation Algorithms and Metaheuristics, chapter 32, Chapman & Hall/CRC, 2007.
[12] J. Csirik and G. J. Woeginger, “On-line packing and covering problems”, in A. Fiat and G. J. Woeginger (eds.), Online Algorithms: The State of the Art, chapter 7, pp. 147-177, Springer, 1998.
[13] D. S. Johnson, “Fast algorithms for bin packing”, Journal of Computer and System Sciences, vol. 8, pp. 272-314, 1974.
[14] D. S. Johnson, A. Demers, J. D. Ullman, M. R. Garey, and R. L. Graham, “Worst-case performance bounds for simple one-dimensional packing algorithms”, SIAM Journal on Computing, vol. 3, pp. 256-278, 1974.
[15] A. C. C. Yao, “New algorithms for bin packing”, Journal of the ACM, vol. 27, pp. 207-227, 1980.
[16] S. Seiden, “On the online bin packing problem”, Journal of the ACM, vol. 49, no. 5, pp. 640-671, 2002.
[17] A. van Vliet, “An improved lower bound for online bin packing algorithms”, Information Processing Letters, vol. 43, pp. 277-284, 1992.
[18] L. Epstein, “Bin packing with rejection revisited”, 4th Workshop on Approximation and Online Algorithms (WAOA), pp. 146-159, 2006.
[19] L. Epstein, “Bin packing with rejection revisited”, Algorithmica, DOI 10.1007/s00453-008-9188-9.
[20] W. Bein, J. Correa, and X. Han, “A fast asymptotic approximation scheme for bin packing with rejection”, 1st International Symposium on Combinatorics, Algorithms, Probabilistic and Experimental Methodologies (ESCAPE 07), Lecture Notes in Computer Science, vol. 4614, Springer, pp. 209-218, 2007.
[21] W. Bein, J. Correa, and X. Han, “A fast asymptotic approximation scheme for bin packing with rejection”, Theoretical Computer Science, vol. 393, nos. 1-3, pp. 14-22, 2008.
[22] K. L. Krause, V. Y. Shen, and H. D. Schwetman, “Analysis of several task-scheduling algorithms for a model of multiprogramming computer systems”, Journal of the ACM, vol. 22, no. 4, pp. 522-550, 1975.
[23] K. L. Krause, V. Y. Shen, and H. D. Schwetman, Errata: “Analysis of several task-scheduling algorithms for a model of multiprogramming computer systems”, Journal of the ACM, vol. 24, no. 3, p. 527, 1977.
[24] L. Babel, B. Chen, H. Kellerer, and V. Kotov, “Algorithms for on-line bin-packing problems with cardinality constraints”, Discrete Applied Mathematics, vol. 143, nos. 1-3, pp. 238-251, 2004.
[25] L. Epstein and A. Levi, “Bin packing with general cost structures”; http://adsabs.harvard.edu/abs/2009arXiv0906.5051E