International Conference on Computing, Networking and Communications, Internet Services and Applications Symposium
On Zap Time Minimization in IPTV Networks

Matthew Long, Sridhar Radhakrishnan, Suleyman Karabuk, John Antonio

School of Computer Science, University of Oklahoma, Norman, Oklahoma 73019
E-mail: {mglong, sridhar}@ou.edu, [email protected]
School of Industrial Engineering, University of Oklahoma, Norman, Oklahoma 73019
E-mail: [email protected]
Abstract—Digital television systems have a clear disadvantage relative to analog systems in users’ quality of experience, most notably in the time required to change channels, or zap time. The goal of this research is to improve the performance of a multicasting IPTV network, both in user experience and in resource consumption. We formulate the problem of assigning IPTV clients to servers as an integer programming model, in variants which minimize channel-change times, overall network capacity consumption, or both. This problem is shown to be computationally hard, and the performance of the models is tested on problems of different sizes. Polynomial-time heuristics are presented which address a relaxed version of the problem, and the performance of these heuristics is measured.

Index Terms—IPTV Networks, Multicast, Integer Programming, NP-Hardness
I. INTRODUCTION

There has been explosive growth in video traffic on the Internet due to a combination of an increase in the number of video content providers, the ubiquity of devices capable of viewing video content, and keen interest among the viewing public. Video on the Internet is either broadcast (e.g., NBC’s streaming of the Olympic Games) or provided on demand (e.g., Netflix’s and Hulu’s services). Cable companies deliver all broadcast channels to the end system (e.g., a set-top box, or STB) at home and, depending upon the subscription criteria, a subset of these channels is viewable by the end user. In this scheme the cable company streams as many channels as can be accommodated on the cable. The same is true of satellite television, which also requires a specialized STB to deliver content and delivers all broadcast channels to the end system. The significant advantage of systems that broadcast all channels is that the user experiences almost no delay when changing channels, as the content is already being delivered to the user’s STB.

The time interval between the user keying in the change-channel request and the display of the contents of the new channel on the user’s screen is termed the zap time. For analog cable and satellite television viewers, the zap time is insignificant. Digital broadcast systems require additional time to buffer and begin decoding the compressed channel, and therefore have somewhat longer zap times.

There has been growing interest in delivering broadcast video channels through the Internet (or other IP networks). Broadcast channels delivered via an IP network are termed IPTV. IPTV offers significant advantages over the current
978-1-4673-0009-4/12/$26.00 ©2012 IEEE
state-of-the-art. These advantages include: a) broadcast channels can be delivered wherever Internet access is available, eliminating the need for additional infrastructure (such as cable or satellite); b) the broadcast content can be viewed on any device that has an Internet connection and an appropriate video display, without the need for specialized STB hardware; and c) bandwidth can be conserved, as only the users watching a particular channel need to receive it.

Several organizations have set standards for implementing IPTV networks, including ETSI TISPAN, the DVB Project, and the Open IPTV Forum [2], [13], [1]. These standards define functionality, protocols, and architectures for IPTV systems. IPTV standards typically include Video on Demand (VoD) functionality in addition to “broadcast” channels, but VoD is outside the scope of this paper.

While there are significant advantages to IPTV, it suffers from long zap times. Imagine a user on a laptop computer switching between broadcast channels. Each time the user switches to a different channel, a program on the laptop will establish a connection to a server in the IPTV network and wait for the new channel to begin streaming from it. This delay is proportional to the network latency between the laptop and the contact server. Depending on the location of the server, this delay will vary, and the greater the delay, the lower the Quality of Experience (QoE) for the end user.

The goal of this research is to improve QoE in an IPTV network by minimizing zap time and bandwidth consumption. Toward this goal, we consider a network of servers connected to form a rooted tree (a multicasting tree) with network delay and bandwidth specified on each link, and a set of clients, each of which requires a subset of the channels carried by the tree. (Each client has a network delay defined between it and each server.)
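As a small, purely illustrative sketch (all names are ours, not part of the model), the tree of servers and the propagation rule described later, that a server receiving a channel implies every server on its path to the root carries it, can be expressed over a parent-pointer tree:

```python
# Illustrative sketch (our names, not the paper's implementation) of the
# server tree: parent[s] is the parent of server s, server 0 is the root.
# Requesting a channel at a server pulls it along the whole path to the
# root, which is the multicast propagation rule of the model.

def subscribe(parent, carries, server, channel):
    """Mark `channel` as carried by `server` and every ancestor of it."""
    node = server
    while True:
        carries.setdefault(node, set()).add(channel)
        if node == 0:       # the root sources all channels
            break
        node = parent[node]

parent = {1: 0, 2: 0, 3: 1}   # root 0; children 1, 2; server 3 under 1
carries = {}
subscribe(parent, carries, 3, "news")
# servers 3, 1 and 0 now carry "news"; server 2 does not
```

Because a channel is pulled along a path only once, any second subscriber below an already-carrying server adds no further link cost, which is the bandwidth saving multicasting provides.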
One can imagine these intermediate servers to be connected to end offices of Internet Service Providers and hence be well-provisioned, with minimal server-to-server link delays. The root of the tree is the server, r, that is the source of all channels. Each intermediate node may duplicate the content it receives from its parent and send it to a subset of its children in the tree. To conserve bandwidth, a node only redistributes channels to those of its children who request them. If a server k is receiving a particular channel c, then all servers in the path from k to r have channel c, and can immediately satisfy any request for it. A client is an end system that has requests for a set of
channels. (For example, each client may have a set of favorite channels collected over time or a set of channels to which it subscribes.) Each client has to connect to the appropriate server (introducing network delay) for each of its channel requests. Now, given a multicasting tree as above and a set of clients with channel requests, our goal is to find the appropriate server for each client to connect to for each of its channel requests. (Because bandwidth is a limited, potentially shared resource, specifying that every server carries every channel is not an option.)

We first develop a series of mixed integer programming (MIP) models of our problem. We then show that the problem is NP-hard. We show the results of solving many problem instances. Toward a scalable solution to these problems, we provide polynomial-time heuristic algorithms for a relaxed version of the problem, which take into consideration various tradeoffs between link capacity and zap time. These heuristics have been evaluated empirically using many problem instances.

A. Related Work

Zap time is the channel change time as perceived by the viewer. Its primary components are multicast join time, network latency, buffering and decoding, error correction, and access control [10], [17]. Zap time is a major factor in the viewer’s Quality of Experience (QoE): both shorter zap times and consistency in zapping are preferred.

A client needs to buffer a certain amount of data to decode the channel stream; this buffer must begin with a random access point (RAP). A RAP is a block of the stream that is independent of the rest of the stream; e.g., an I-frame. (Most video frames are not transmitted in full; displaying such frames requires reference to previous or subsequent frames.) There are two ways to reduce the time to fill the decoder’s buffer: the encoding of the stream could be changed such that less buffering is required, or the buffer could be filled faster.
Video encodings can be adjusted to balance bandwidth consumption and buffering requirements by varying the frequency of RAPs within the stream. Waiting for a RAP is a significant source of delay when changing channels; Begen et al. [6], [5] and Sasaki et al. [16] have addressed this issue with retransmission schemes. In both works, each channel stream is buffered, from the most recent RAP, by retransmission servers, and is retransmitted to a client just joining the channel. Begen’s scheme retransmits with a unicast burst, deferring the multicast join until the burst is underway. Sasaki’s scheme sets up additional multicast streams for retransmission, delayed by some constant factor relative to the main stream. A client joining the main stream temporarily joins the retransmission stream, so its buffer fills more quickly than it otherwise would (any redundant data, received from both streams, is discarded). Sasaki’s scheme was tested in a production IPTV system, which showed an average 0.5-second reduction in zap time. Begen’s tests, though not real-world, showed a 65% decrease in mean zap time and a 60% decrease in the variation in zap time [6].
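The benefit of retransmitting from the most recent RAP can be seen with a back-of-the-envelope calculation. The sketch below assumes a fixed RAP interval and uniformly spaced join times; the 500 ms spacing is our assumption, not a figure from [6] or [16]:

```python
# Back-of-the-envelope model of the RAP-waiting component of zap time.
# Assumptions (ours, not measurements): RAPs arrive every G milliseconds
# and clients join the stream at uniformly spaced instants.

def mean_rap_wait_ms(rap_interval_ms, samples=10_000):
    """Average time until the next RAP over evenly spaced join times."""
    waits = (rap_interval_ms - i * rap_interval_ms / samples
             for i in range(samples))
    return sum(waits) / samples

G = 500                        # assumed RAP spacing (ms)
no_retx = mean_rap_wait_ms(G)  # roughly G/2 = 250 ms spent waiting
with_retx = 0.0                # a burst from the most recent RAP starts at once
```

With retransmission, the RAP wait essentially disappears at the cost of the burst's extra bandwidth; the other zap-time components (join latency, buffering, decoding) are unaffected, which is consistent with the partial reductions reported above.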
Cha et al. [7] and Qiu et al. [14] have investigated channel popularity dynamics in IPTV systems. Both papers used large channel-change datasets from actual IPTV networks. Cha found that more than 60% of channel changes are linear; i.e., to the next- or previous-numbered channel. Both concluded that channel popularity distributions resemble a Zipf distribution, with the few most popular channels accounting for most of the viewing. Qiu’s main contributions were improved mathematical models of user behavior, which have the potential to improve simulations of IPTV systems in further research.

II. PRELIMINARIES

The Client Assignment Problem (CAP) is defined as the satisfaction of a set of demands for channels (or, equivalently, channel requests) by assigning each demand to be fulfilled by a node of a multicasting tree (i.e., an IPTV network). A client’s channel requests need not be satisfied by the same server; fulfilling requests from a client with different servers can use fewer network resources or provide better zap time than fulfilling them with a single server. (A client could request multiple channels for a variety of reasons; e.g., picture-in-picture or multi-channel recording.) Channels are assumed to require equal bandwidth, so server-server link capacities (bandwidths) are given in units of channels. Additionally, all components of zap time other than round-trip time are ignored (e.g., buffering and decoding delays). The CAP is an offline problem; demands are unordered and are known in advance.

CAP shares some characteristics with facility location and network flow problems, but differs from them in several ways. Facility location (FL) problems involve placing facilities to serve clients. The locations of facilities (once placed) and clients are fixed. The cost of serving a client is a function of the distance between the facility and the client.
There are multicommodity variants of FL, in which clients may demand different kinds of goods or services, and capacitated variants, in which a facility’s ability to produce is limited. The critical difference between (multicommodity, capacitated) FL and a multicasting IPTV network is that FL features no relationship between facilities, while multicasting has dependencies between servers (i.e., a tree hierarchy). Additionally, facility location is performed in a metric space, but network delay does not obey the triangle inequality and is therefore not a metric.

Network flow problems involve defining paths between source and sink nodes in a graph, usually such that the total cost of the paths (flows) is minimized. Variants include multicommodity flow (MCF), in which different commodities share link capacity (different commodities may consume different amounts of link capacity). Typical MCF problems rely on mass balance constraints, which require that an intermediate node’s total input must equal its total output; however, this kind of constraint does not apply in computer networking, an environment in which the “commodities” are data, which may be trivially duplicated.

Applegate et al. have taken an approach similar to ours in placing content in VoD content distribution networks to
minimize bandwidth consumption and storage consumption [4]. However, unlike their model, the servers in our problem are strictly hierarchical, and channels are distributed by multicasting, rather than unicasting.

There are four variants of CAP, which are discussed in Section III. These variants minimize maximum zap time, total bandwidth consumption, or a combination of the two objectives.

III. IP MODELS

Let C be the set of clients, where |C| = m. Let S be the set of servers, where |S| = n and n ≤ m. Let K be the set of channels, where |K| = p. Let M = mnp. Let T_ij be the link capacity between two servers i and j, in units of channels. Let server 0 be the root server. Let p : S\{0} → S yield the parent of a server, and let s : S → 2^S yield the successors of a server. Let c_i be the capacity of server i. Let Z_ijk be the zap time for client i to receive channel k from server j. (These values are calculated from server-server and server-client latencies.) Let d_ik = 1 if client i demands channel k, and 0 otherwise. Let y_ijk = 1 if server i sends channel k to server j, and 0 otherwise. Let x_ijk = 1 if client i receives channel k from server j, and 0 otherwise. The IP formulation of CAP is then as follows:

Minimize Σ_{i∈S} Σ_{j∈S} Σ_{k∈K} y_ijk

Subject to

∀i ∈ C, k ∈ K : d_ik = Σ_{j∈S} x_ijk   (1)

∀j ∈ S\{0}, k ∈ K : Σ_{l∈s(j)} y_jlk ≤ M · y_{p(j),j,k}   (2)

∀j ∈ S\{0}, k ∈ K : Σ_{i∈C} x_ijk ≤ M · y_{p(j),j,k}   (3)

∀j ∈ S\{0} : Σ_{k∈K} y_{p(j),j,k} ≤ T_{p(j),j}   (4)

∀j ∈ S : Σ_{i∈C} Σ_{k∈K} x_ijk ≤ c_j   (5)

Constraint 1 ensures that each channel demand is satisfied by one server. Constraints 2 and 3 control channel propagation (i.e., they ensure a path to the root for each server for every channel it carries). The root server, as the source of all channels, is excluded from both of these constraints. (M is an upper bound on the sums in constraints 2 and 3.) Constraint 2 requires that a server must be receiving a channel in order to send it to another server. Constraint 3 requires that a server sending a channel to a client must be receiving that channel from its parent server. Constraint 4 ensures that server-to-server channel capacities are respected. Constraint 5 ensures that servers’ outgoing channel capacities are respected.

The x- and y-variables of a solved problem instance encode an optimal propagation of channels from the root server to other servers, and from servers to clients. The z variable, present in MZ, MZCB, and MBCZ, is an upper bound on the zap time for satisfied demands. From the input data and the values of these variables, the zap time for each demand and the overall bandwidth consumption can be calculated for a given solution.

A. Variant MB: Minimize bandwidth consumption

The first variant of the CAP minimizes the overall bandwidth consumption of the network. This variant is the model as stated above.

B. Variant MZ: Minimize maximum zap time

The second variant of the CAP minimizes the maximum zap time, over all clients and all channels. This variant assigns clients so as to achieve the best overall performance/QoE, regardless of cost (i.e., network resource consumption). This variant has an additional integer variable, z, and the additional constraint (6) that the zap time for any demand does not exceed z.

∀i ∈ C, j ∈ S, k ∈ K : x_ijk Z_ijk ≤ z   (6)

The objective function of this variant is z; i.e., the upper bound on zap time over all demands.

C. Variant MZCB: Minimize maximum zap time, controlling bandwidth consumption

The third variant of the CAP combines the concerns of cost and performance, giving priority to cost (bandwidth consumption). This variant is equivalent to MZ, with the additional constraint (7) that the network’s bandwidth consumption must remain “close” to the optimal value as determined by MB. This is accomplished by adding a constraint to MZ that limits bandwidth consumption to less than or equal to MB’s optimal bandwidth, B, plus a tolerance percentage t.

Σ_{i∈S} Σ_{j∈S} Σ_{k∈K} y_ijk ≤ (1 + t) B   (7)

As in MZ, the objective function of MZCB is z, the upper bound on zap time over all demands.

D. Variant MBCZ: Minimize bandwidth consumption, controlling zap time

The fourth variant of the CAP, like the third, addresses both the performance and the cost of the solution. This variant prioritizes zap time (QoE) over bandwidth consumption. MBCZ has constraint 6, and an additional constraint (8), which limits maximum zap time to less than or equal to MZ’s optimal maximum zap time, Z, plus a tolerance percentage t.

z ≤ (1 + t) Z   (8)

MBCZ’s objective function is Σ_{i∈S} Σ_{j∈S} Σ_{k∈K} y_ijk (the total bandwidth consumed by sending channels between servers), as in MB.
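To make the bandwidth objective concrete, the following sketch brute-forces variant MB on a toy instance with one channel, two leaf servers under the root, and two clients. The instance and names are illustrative; this is not the Gurobi code used in the evaluation:

```python
from itertools import product

# Toy instance of variant MB (illustrative, not the paper's Gurobi code):
# server 0 is the root with two leaf servers 1 and 2; two clients each
# demand the single channel "k". We enumerate the x-variables; the
# y-variables are then implied, since the root must send "k" down a
# link exactly when the leaf at the end of it serves "k" to someone.
servers = [1, 2]
clients = ["A", "B"]
cap = {1: 2, 2: 2}   # outgoing capacity per leaf server (constraint 5)

best = None
for assign in product(servers, repeat=len(clients)):  # x: client -> server
    if any(assign.count(j) > cap[j] for j in servers):
        continue  # constraint 5 violated
    # bandwidth objective: one unit of y per (link, channel) in use
    bandwidth = sum(1 for j in servers if j in assign)
    best = bandwidth if best is None else min(best, bandwidth)

# Serving both clients from one leaf multicasts "k" over a single link,
# so the optimal bandwidth is 1.
```

The example shows why MB concentrates demand: a second client on an already-carrying server costs no additional server-to-server bandwidth, exactly the multicast saving the y-variables capture.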
IV. NP-HARDNESS OF CAP

The NP-hardness of CAP is proved by reduction from the optimization version of minimum set cover, along the lines of the argument used in [15]. Set cover is the problem of, given a finite set X and a family F of subsets of X such that every element of X belongs to at least one subset in F, finding a minimum-cardinality cover C ⊆ F such that X = ⋃_{s∈C} s
[8]. The optimization version of minimum set cover is known to be NP-hard [11].

The reduction is accomplished by transforming set cover to a special case of CAP Variant 4 (MBCZ). The set X is mapped to the set of clients in CAP. The set F of subsets of X is mapped to the set of servers in CAP; there is an additional server, the root, which is not mapped to any member of F. The root is the parent of every other server. The root has zero capacity (i.e., it cannot serve clients), and each other server has capacity equal to the cardinality of its corresponding member of F. Each link between servers has one unit of capacity and zero zap time. The zap time between the client mapped to x ∈ X and the server mapped to f ∈ F is zero if x ∈ f, and is one otherwise. There is one channel, which is demanded by all clients.

CAP Variant MBCZ minimizes the total bandwidth consumed by satisfying all demands, while maintaining a maximum zap time within a certain tolerance of its optimal value. For this reduction, the tolerance is 0%; i.e., the maximum zap time must be equal to its optimal value. Minimizing maximum zap time, for this special case, means maintaining zero zap time for all clients; this means that the client mapped to x ∈ X can only connect to the server mapped to f ∈ F if x ∈ f. Minimizing the total bandwidth consumed, in this case, is equivalent to minimizing the size of the cover, because each server corresponds to a set that can contribute to the cover, and each server will consume either zero bandwidth, if not used, or one unit of bandwidth, if it is used in the cover (because there is only one channel, and a channel is transmitted at most once between any pair of servers). Because any instance of set cover can be represented as an instance of CAP (MBCZ), it is concluded that CAP (MBCZ) is at least as hard as minimum set cover; i.e., it is NP-hard.

V. THE RELAXED CAP

Toward a fast approximation algorithm for CAP, we relax the problem by removing the link and node capacity constraints. This makes the problem significantly easier, primarily due to the removal of the link capacity constraints, which, as bundling constraints, are known to increase the difficulty of network flow problems [3]. We present three polynomial-time heuristic algorithms for the relaxed CAP (RCAP): BZ, BZP, and GRC. BZ addresses zap time, GRC addresses bandwidth consumption, and BZP addresses both zap time and bandwidth consumption.

A. Best Zap Time (BZ)

The Best Zap Time heuristic assigns each channel request to the server which gives the least zap time for that client and
Require: tree, demands, latencySS, latencyCS, outcap
Ensure: xs, ys
1: initialize xs, ys {initialized to zero}
2: demandList ← demands grouped by channel
3: sort demandList in ascending order of size {size is measured in number of clients}
4: calculateZapTimes(tree, latencySS, latencyCS, outcap)
5: while |demandList| > 0 do
6:   d ← demandList.pop() {get last (largest) demand}
7:   s ← bestServer() {server with greatest remaining capacity, ignoring capacity constraints}
8:   makeAssignment(d, s)
9: end while

Figure 1. Greatest Residual Capacity (GRC) algorithm
channel. BZ is an online algorithm, which processes clients in arbitrary order, recalculating zap times after each client’s channel requests have been fulfilled. (Satisfying a client’s requests is assumed to be an atomic operation.)

B. Best Zap Time in Path (BZP)

The goal of BZP is to compromise between minimizing bandwidth consumption and reducing zap times. It takes advantage of the digital nature of the channels being distributed: if server i is sending channel k to a client, channel k is available to every server on the path from i to the root of the server tree, at no additional bandwidth cost. For each group of clients who want channel k, BZP finds the server s that gives the least zap time for that set of clients. Then, each client requesting k is assigned to receive k from the server in the path from s to the root which gives the best zap time.

C. Greatest Residual Capacity (GRC)

The Greatest Residual Capacity (GRC) heuristic is a straightforward adaptation of the Longest Processing Time (LPT) heuristic for multiprocessor job scheduling. LPT assigns each job, in non-increasing order of processing time, to the processor with the greatest remaining capacity. GRC addresses bandwidth (link capacity) consumption by grouping demands by channel and assigning each such group (from largest to smallest) to the server with the most remaining capacity. Figure 1 shows the GRC algorithm.

RCAP is equivalent to Extendible Bin Packing (EBP) if link capacities are ignored. An item is equivalent to a group of demands for the same channel, as long as the group is assigned to a server as a unit. A bin with a capacity is equivalent to a server node which can serve a maximum number of demands. Exceeding the capacity of a bin is equivalent to assigning more clients to a server than that server can serve.
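A minimal executable version of the GRC procedure in Figure 1, with the zap-time bookkeeping omitted and the input names assumed:

```python
# Executable sketch of GRC (Figure 1), zap-time bookkeeping omitted.
# Input shapes (demands, outcap) mirror the pseudocode but are assumptions.

def grc(demands, outcap):
    """demands: {channel: [clients]}, outcap: {server: capacity}.
    Assign each channel's demand group, largest first, to the server
    with the greatest residual capacity. RCAP has no hard capacity
    constraints, so residual capacity may go negative."""
    residual = dict(outcap)
    assignment = {}
    # process demand groups from largest to smallest, as in LPT
    for channel, group in sorted(demands.items(),
                                 key=lambda item: len(item[1]),
                                 reverse=True):
        server = max(residual, key=residual.get)  # bestServer()
        assignment[channel] = server              # makeAssignment()
        residual[server] -= len(group)
    return assignment

# Toy input: three channels with demand groups of sizes 3, 1 and 2.
demands = {"c1": ["A", "B", "C"], "c2": ["D"], "c3": ["E", "F"]}
outcap = {1: 3, 2: 3}
assignment = grc(demands, outcap)  # c1 -> server 1, then c3, c2 -> server 2
```

Assigning each channel's whole demand group to one server is what makes the bin-packing analogy work: a group is an indivisible item whose size is its client count.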
Dell’Olmo and Speranza show that the Longest Processing Time (LPT) heuristic approximates EBP to within a performance ratio of 2√2 − 2 [9]; therefore, GRC achieves the same performance on CAP if only node capacity is considered.

VI. PERFORMANCE EVALUATION

The CAP models’ performance was tested for increasing problem sizes. For these tests, the server and client counts were set at 5 and 20, respectively, then scaled by 1.25 after each
size step. The networks used were random trees. Ten channels were used. Each link in a server tree had [1, 10] units of capacity. Server-server and server-client links had [16, 32] ms and [32, 64] ms latency, respectively. Each client requested two of the ten channels, at random. Each server had [d/n, 2d/n] units of capacity, where d is the total number of demands in the problem instance. The tolerance used for variants MBCZ and MZCB was 10%. At each size step, 50 CAP instances were generated, and each was solved using the Gurobi Optimizer [12]. Problem generation, optimization, and result aggregation were performed by Python code, which interacted with Gurobi via its Python API.

Figures 2-4 show the mean runtime, mean maximum zap time, and mean bandwidth consumption at each size step, with 95% confidence intervals. The time required to optimize CAP instances grows exponentially as problem size increases (Figure 2), with variant MZCB’s runtime growing the most quickly. The difference between MZ’s bandwidth consumption and MBCZ’s (Figure 4) shows that allowing maximum zap time to exceed its optimal value by a small amount (10% in this case) can reduce network resource consumption by a large amount (approximately 50% in this case). There is a similar difference between MB’s and MZCB’s bandwidth consumption, but it is not as pronounced.

Figure 2. Mean runtime by CAP variant (95% confidence intervals shown)

Figure 3. Mean maximum zap time by CAP variant (95% confidence intervals shown)

Figure 4. Mean bandwidth consumption by CAP variant (95% confidence intervals shown)

Heuristics BZ, BZP, and GRC were used to solve 50 CAP instances (ignoring capacity constraints) with 10 servers and 40 clients, and two random channel requests per client. Figures 5 and 6 show the maximum zap time and bandwidth consumption distributions, respectively. BZ gives the best zap times and the worst bandwidth consumption, while GRC gives the worst zap times. While BZP consumes less bandwidth than GRC, it violates server capacity constraints to a greater degree than does GRC; it may therefore be less useful as a basis for approximating optimal solutions to CAP.

(Though not discussed in detail in this paper, we have found that solutions to RCAP by these algorithms, used as initial states, in some cases significantly improve the quality or speed of the optimizer’s solutions. BZP, however, was shown to have no significant effect on the final solution’s maximum zap time, bandwidth consumption, or optimizer runtime.)

Figure 5. Maximum zap time distributions for BZ, BZP, and GRC

Figure 6. Bandwidth consumption distributions for BZ, BZP, and GRC

Finally, the runtime of the optimizer on all four CAP variants was compared to the runtime of BZ, BZP, and GRC for problems of larger sizes. (Again, there are four clients for each server.) As seen in Figure 7, optimizing CAP is computationally expensive, while solving RCAP is much less so. Included in this plot are runtimes for optimizing RCAP, to illustrate the difficulty added to the problem by the link and server capacity constraints. BZ’s (online) runtime grows more quickly than that of the other heuristics, though more slowly than that of the optimizer; this is caused by the recalculation of zap times after satisfying each client’s demands.

Figure 7. Mean runtime (per problem) for CAP model variants and RCAP heuristics

VII. CONCLUSIONS

We have formulated the Client Assignment Problem (CAP) as an integer programming model (in four variants) in Section III, and in Section IV, we have shown CAP to be NP-hard. In Section V, we introduced heuristic algorithms for a relaxed CAP. Finally, in Section VI, we showed the performance of the CAP models and of the heuristics for the relaxed CAP.

In an actual IPTV network, channels may be introduced at multiple levels of the network (for example, local news channels will be introduced at the local level, not at the national level). Representing this capability in an updated CAP would increase the verisimilitude of our model. Removing the assumption that all channels require equal bandwidth would also increase the usefulness of the CAP.

CAP as formulated in Section III is an offline problem; it assumes that the network is initially idle, and that demands are unordered and are known ahead of time. This is a simplification, as actual television channel changes are spontaneous and unpredictable. Zap times are precalculated for each combination of client, server, and channel. (The channel component is not needed unless there is some initial state.) Adding initial state (i.e., starting with a non-idle network) is easily accomplished by adding constraints to fix some x- and y-variables. This approach could be iterated in order to apply the optimizer to an online CAP. However, optimization is too time-consuming to be used in practice in this kind of system. A fast approximation algorithm for the CAP would allow much larger simulations to be performed, and could potentially be applied to improve the performance of an actual IPTV system.
REFERENCES

[1] Open IPTV Forum. Service and platform requirements, v2.0, December 2008.
[2] ETSI TS 182 027 v3.5.1. IPTV architecture: IPTV functions supported by the IMS subsystem, March 2011.
[3] Ravindra K. Ahuja, Thomas L. Magnanti, and James B. Orlin. Network Flows. Prentice Hall, 1993.
[4] David Applegate, Aaron Archer, Vijay Gopalakrishnan, Seungjoon Lee, and K. K. Ramakrishnan. Optimal content placement for a large-scale VoD system. In Proceedings of the 6th International Conference, CoNEXT '10, pages 4:1–4:12, New York, NY, USA, 2010. ACM.
[5] A. C. Begen, N. Glazebrook, and W. Ver Steeg. A unified approach for repairing packet loss and accelerating channel changes in multicast IPTV. In Consumer Communications and Networking Conference (CCNC 2009), 6th IEEE, pages 1–6, January 2009.
[6] Ali Begen, Neil Glazebrook, and William Ver Steeg. Reducing channel-change times with the Real-time Transport Protocol. IEEE Internet Computing, 13(3):40–47, May–June 2009.
[7] Meeyoung Cha, Pablo Rodriguez, Jon Crowcroft, Sue Moon, and Xavier Amatriain. Watching television over an IP network. In IMC '08: Proceedings of the 8th ACM SIGCOMM Conference on Internet Measurement, pages 71–84, New York, NY, USA, 2008. ACM.
[8] Thomas Cormen, Charles Leiserson, Ronald Rivest, and Clifford Stein. Introduction to Algorithms. McGraw-Hill, 2nd edition, 2001.
[9] P. Dell'Olmo and M. G. Speranza. Approximation algorithms for partitioning small items in unequal bins to minimize the total size. Discrete Applied Mathematics, 94(1-3):181–191, 1999.
[10] H. Fuchs and N. Farber. Optimizing channel change time in IPTV applications. In 2008 IEEE International Symposium on Broadband Multimedia Systems and Broadcasting, pages 1–8, March–April 2008.
[11] Michael R. Garey and David S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman and Company, 1979.
[12] Gurobi Optimization. The Gurobi Optimizer. http://www.gurobi.com/.
[13] DVB Project. DVB-IPTV standards. http://www.dvb.org/technology/standards/.
[14] Tongqing Qiu, Zihui Ge, Seungjoon Lee, Jia Wang, Qi Zhao, and Jun Xu. Modeling channel popularity dynamics in a large IPTV system. In Proceedings of the Eleventh International Joint Conference on Measurement and Modeling of Computer Systems, SIGMETRICS '09, pages 275–286, New York, NY, USA, 2009. ACM.
[15] R. Ravi and A. Sinha. Multicommodity facility location. In Proceedings of the Fifteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA '04, pages 342–349, Philadelphia, PA, USA, 2004. Society for Industrial and Applied Mathematics.
[16] C. Sasaki, A. Tagami, T. Hasegawa, and S. Ano. Rapid channel zapping for IPTV broadcasting with additional multicast stream. In IEEE International Conference on Communications (ICC '08), pages 1760–1766, May 2008.
[17] Peter Siebert, Tom N. M. van Caenegem, and Marcel Wagner. Analysis and improvements of zapping times in IPTV systems. IEEE Transactions on Broadcasting, 55:407–418, 2009.