Hierarchically Clustered P2P Streaming System

Chao Liang
ECE Dept., Polytechnic University, Brooklyn, NY 11201
Email: [email protected]

Yang Guo
Thomson Lab, 2 Independence Way, Princeton, NJ 08540
Email: [email protected]

Yong Liu
ECE Dept., Polytechnic University, Brooklyn, NY 11201
Email: [email protected]

Abstract—Peer-to-peer video streaming has been gaining popularity. However, it is still challenging to manage a P2P system efficiently to support a high video playback rate. In this paper, we propose HCPS, a Hierarchically Clustered P2P Streaming system that can support a streaming rate approaching the optimal upper bound with short delay, yet is simple enough to be implemented in practice. In HCPS, peers are grouped into clusters, and a hierarchy is formed among clusters to retrieve video data from the source server. By actively balancing the uploading capacities among clusters and executing the perfect scheduling algorithm [1] within each cluster, the system resources can be fully utilized. Simulation experiments driven by traces collected from a real P2P streaming system demonstrate the effectiveness of HCPS.

I. INTRODUCTION

Video-over-IP applications, e.g., YouTube [2], have recently attracted a large number of users on the Internet. In addition to traditional client-server based or content-delivery-network based solutions, Peer-to-Peer (P2P) video streaming utilizes the uploading bandwidth of end users to distribute video content at low infrastructure cost. Several P2P based systems have been deployed to provide on-demand or realtime video streaming over the Internet [3], [4], [5], [6]. Our recent measurement study [7], [8] of one popular P2P streaming system recorded over 200,000 simultaneous users watching a live broadcast of an event during the 2006 Chinese New Year Eve. We expect to see more P2P streaming systems and applications on the Internet in the near future.

According to the overlay structure, P2P streaming systems can be broadly classified into two categories: tree-based and mesh-based. Tree-based systems, such as ESM [3], have well-organized overlay structures and typically distribute video by actively pushing data from a peer to its children peers. One major drawback of tree-based streaming systems is their vulnerability to peer churn: a peer departure interrupts video delivery to all peers in the subtree rooted at the departed peer. In a mesh-based P2P streaming system, peers are not confined to a static topology. Instead, peering relationships are established and terminated based on the content availability and bandwidth availability of peers. A peer dynamically connects to a subset of random peers in the system. Peers periodically exchange information about their data availability, and a video chunk is pulled by a peer from a neighbor that has already obtained it. Many mesh-based P2P streaming systems have been deployed recently, such as CoolStreaming [4] and PPLive [7]. Those systems are very robust to peer churn. However, the dynamic peering relationships make the video distribution efficiency unpredictable. Consequently, users suffer from video playback quality degradation, such as low video bit rates, long startup delays, and frequent freezing. Our measurement results on PPLive [8] showed that most programs have bit rates around 400 kbps and that the startup delays for a channel range from a few seconds to a few minutes. This makes such systems unsuitable for programs with high video bit rates and realtime playback requirements.

In this paper, we propose a novel P2P streaming architecture that can support high-rate video streaming with short delays. The key to achieving this goal is to form a structured mesh topology that organizes peers into clusters, so that the uploading bandwidth of peers can be efficiently utilized to quickly redistribute video to all peers. Due to bandwidth and content bottlenecks [9], it is quite challenging to sustain a high streaming rate for all peers in a P2P streaming system. As studied in [1], the maximum video streaming rate in a P2P streaming system is determined by the video source server's capacity, the number of peers in the system, and the aggregate uploading capacity of all peers. A perfect scheduling algorithm was proposed to achieve the maximum streaming rate. In this scheduling algorithm, each peer uploads the video content obtained directly from the server to all other peers in the system. To guarantee 100% uploading capacity utilization on all peers, different peers download different content from the server, and the rate at which a peer downloads content from the server is proportional to its uploading capacity.

Figure 1 shows an example of how different portions of the video data are scheduled among three heterogeneous peers with the perfect scheduling algorithm. The server has capacity 6, and the upload capacities of peers a, b and c are 2, 4 and 6, respectively. Assuming all peers have enough downloading capacity, the maximum video rate that can be supported in the system is 6. To achieve that rate, the server divides the video chunks into groups of 6. Peer a is responsible for uploading 1 chunk out of each group, while b and c are responsible for uploading 2 and 3 chunks within each group, respectively. This way, all peers can download video at the maximum rate of 6.

To implement the perfect scheduling algorithm, each peer needs to maintain a connection and exchange video content with every other peer in the system. In addition, the server needs to keep track of the uploading bandwidth of all peers and push different chunks to them at rates proportional to their capacities. A real P2P streaming system can easily have a few thousand peers. At present, it is unrealistic for a regular peer to maintain thousands of concurrent connections, and it is also challenging for a server to push video chunks to thousands of peers at different rates in realtime.

Fig. 1. Perfect Scheduling Example (server S and peers a, b, c).
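As a sanity check on the Figure 1 numbers, the following sketch computes the maximum streaming rate (the formula is stated as equation (1) in Section II) and the proportional chunk assignment of the perfect scheduling algorithm. It is our illustrative reconstruction, not the authors' implementation; function names are our own.

```python
# Sketch of the perfect scheduling computation for the Figure 1 example.
# Illustrative only: function names and structure are ours, not from the paper.

def max_streaming_rate(server_upload, peer_uploads):
    """Equation (1): r_max = min{u_s, (u_s + sum(u_i)) / n}."""
    n = len(peer_uploads)
    return min(server_upload, (server_upload + sum(peer_uploads)) / n)

def chunk_assignment(peer_uploads, group_size):
    """Assign the chunks of each group to peers in proportion to upload capacity.
    Naive rounding; a real scheduler must ensure shares sum to group_size."""
    total = sum(peer_uploads)
    return [round(group_size * u / total) for u in peer_uploads]

if __name__ == "__main__":
    us = 6                  # server upload capacity
    peers = [2, 4, 6]       # upload capacities of peers a, b, c
    rate = max_streaming_rate(us, peers)            # -> 6
    shares = chunk_assignment(peers, group_size=6)  # -> [1, 2, 3]
    print(rate, shares)     # peers a, b, c relay 1, 2, 3 chunks per group
```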

To address this problem, we propose HCPS, a Hierarchically Clustered P2P Streaming system, to approach the theoretical maximum video streaming rate with realistic peer settings. The proposed system consists of two levels. At the lower level, all peers are organized into bandwidth-balanced clusters. Each cluster consists of a cluster head and a small number, e.g. 20~40, of normal peers. The cluster heads of all clusters form a super cluster at the upper level and retrieve video from the video source server in P2P fashion. After obtaining video at the upper level, a cluster head acts as a local video proxy server for the normal peers in its cluster at the lower level. Normal peers within the same cluster collaborate according to the perfect scheduling algorithm to retrieve video from their cluster head. In this new architecture, the server only distributes video to cluster heads; a normal peer only maintains connections to its neighbors in the same cluster; and a cluster head connects to both other cluster heads and the normal peers in its own cluster.

Two-level HCPS already has the ability to support a large number of peers with mild requirements on the number of connections at the server, cluster heads, and normal peers. By employing perfect scheduling at both the upper and lower levels, HCPS can approach the theoretical streaming rate upper bound. HCPS has mesh topologies at both levels, which makes it robust to peer churn. Cluster heads play an important role in bridging the two levels. The departure of a cluster head only affects a small number of peers in its cluster, and as will be shown in Section II-B, the HCPS cluster management mechanism ensures quick recovery from cluster head departures. In addition, in a basic configuration of HCPS, video data can reach any peer within two hops. This largely improves the delay performance over normal mesh-based solutions.

The paper is organized as follows. In Section II, we first formulate the maximum supportable streaming rate problem and derive the peer clustering strategy. We then describe the system architecture and the peer management scheme in detail. In Section III, simulation results are presented. The paper is concluded in Section IV.

II. HCPS: HIERARCHICALLY CLUSTERED P2P STREAMING SYSTEM

As described in Section I, perfect scheduling achieves the maximum streaming rate by constructing a fully connected mesh among all participating peers and the source. A peer then relays the content received from the source to the other peers in the system. The streaming rate from the source to peers is calculated as

$$r_{max} = \min\left\{u_s, \; \frac{u_s + \sum_{i=1}^{n} u_i}{n}\right\} \quad (1)$$

where $u_s$ refers to the upload bandwidth of the server and $u_i$ refers to the upload bandwidth of the $i$-th of $n$ nodes. For a P2P streaming system with a large number of peers, the out-degree of peers becomes prohibitively large. Hence the perfect scheduling algorithm does not scale well as the number of peers increases.

In this paper, we propose the Hierarchically Clustered P2P Streaming (HCPS) scheme to address the scalability issue faced by the perfect scheduling algorithm. We mainly focus on the HCPS architecture design; an HCPS implementation in a real network environment can be found in a recent follow-up paper [10]. Instead of forming a single large mesh, HCPS groups the peers into clusters. The number of peers in a cluster is relatively small, so that perfect scheduling can be successfully applied within each cluster. One peer in each cluster is elected as the cluster head and functions as the video proxy for its cluster. A cluster head receives the streaming content by joining another cluster one level above its own in the system hierarchy. Clusters form a tree-like topology with clusters as vertices; the cluster heads serve as the "links" that connect two clusters. The streaming content is pushed from the upper-level clusters to the lower-level clusters.

Fig. 2. Hierarchically Clustered P2P Streaming System (top level: source S and cluster heads a1-a3, b1-b3 forming clusters 1 and 2; base level: clusters 3-8, each served at rate r by one head; the head mapping links the two levels).

Figure 2 illustrates a simple example of a two-level HCPS hierarchy. At the base level, peers are grouped into small clusters, and peers are fully connected within a cluster. Each cluster has one cluster head. At the top level, all cluster heads and the video server form two clusters. The video server (source) distributes the content to all cluster heads using the perfect scheduling algorithm at the top level. At the base level, each cluster head acts as the video server in its cluster and distributes the downloaded video to the other peers in the same cluster, again using the perfect scheduling algorithm. The number of connections at each normal peer is bounded by the size of its cluster. Cluster heads additionally maintain connections in their upper-level cluster, i.e., the number of connections for a cluster head can be doubled.

While decreasing the number of connections per peer, HCPS scales well. Suppose that the cluster size is bounded by $N^{max}$ and the source can support up to $N^s$ top-layer clusters. The two-layer HCPS system, as shown in Fig. 2, can accommodate up to $N^s (N^{max})^2$ peers. With $N^s = 10$ and $N^{max} = 20$, HCPS supports up to around 4,000 peers. The maximum number of connections a peer needs to maintain is 40 for a cluster head and 20 for a normal peer, which is quite manageable. More peers can be accommodated by adding more levels to the hierarchy [11]. Assume each cluster at the base level can spare bandwidth equal to the playback rate; then two additional levels with $(N^{max})^2$ peers can be introduced under it, with one node in the cluster elected as the server for those two levels. In this way, the system can sustain at least $N^s N^{max} (N^{max})^2$ more peers, i.e., 80K peers, with only two more levels.

The hierarchy of HCPS shares some similarities with NICE [12]. NICE utilizes a hierarchical architecture to minimize the control overhead of low-bandwidth application layer multicast, whereas HCPS is targeted at high-bandwidth P2P video streaming. In NICE, a node at one layer appears at all layers below; in HCPS, a node never appears at more than two layers.

A. Peer Clustering Strategy

The key question in designing HCPS is how to cluster peers so that the supportable streaming rate is maximized. The maximum streaming rate, $r_{max}$, is determined by Equation (1) and can be achieved using the perfect scheduling algorithm in a fully connected mesh. The mesh constructed in HCPS is not fully connected, which may reduce the maximum supportable streaming rate. In the following, we investigate peer clustering strategies that allow HCPS to support a streaming rate close to $r_{max}$. We first formulate an optimization problem to find the maximum supportable streaming rate for a given HCPS mesh topology. We then describe a heuristic peer clustering strategy that allows HCPS to support a rate close to the maximum streaming rate.

1) Maximum supportable streaming rate for a given mesh topology: Assume there are $C$ clusters, $N$ peers, and one source in the HCPS mesh. Cluster $c$ has $V_c$ peers, $c \in [1, C]$. Denote by $u_i$ peer $i$'s upload capacity. A peer can participate in an HCPS cluster either as a normal peer or as a cluster head. For instance, in Fig. 2, peer $a_1$ joins two clusters: it is the cluster head in cluster 3 and a normal peer in cluster 1. Denote by $u_{ic}$ the amount of upload capacity that peer $i$ contributes to cluster $c$ as a normal peer, and by $h_{ic}$ the amount of upload capacity that peer $i$ contributes to cluster $c$ as a cluster head. Hence $u_{a_1,3} = 0$ and $h_{a_1,3} \geq 0$, since peer $a_1$ is the head of cluster 3; $u_{a_1,1} \geq 0$ and $h_{a_1,1} = 0$, since peer $a_1$ is a normal peer in cluster 1; and $u_{a_1,c} = h_{a_1,c} = 0$ for all other clusters, since peer $a_1$ is not a member of them. Further denote by $u_s$ the total source upload capacity, and by $u_{sc}$ the amount of source capacity used for cluster $c$; $u_{sc} = 0$ if the source does not participate in cluster $c$. If $r_c^{max}$ represents the maximum streaming rate for cluster $c$ using the perfect scheduling algorithm, the maximum supportable streaming rate for a given cluster-based HCPS mesh, $r_{HCPS}$,

can be formulated as the following optimization problem:

$$r_{HCPS} = \max_{\{u_{ic}, h_{ic}, u_{sc}\}} \; \min\left[\, r_c^{max} \;|\; c = 1, 2, \ldots, C \,\right] \quad (2)$$

subject to:

$$r_c^{max} = \min\left\{ \frac{\sum_{i=1}^{N} (u_{ic} + h_{ic})}{V_c}, \;\; \sum_{i=1}^{N} h_{ic} + u_{sc} \right\} \quad (3)$$

$$\sum_{c=1}^{C} (u_{ic} + h_{ic}) \leq u_i \quad (4)$$

$$\sum_{c=1}^{C} u_{sc} \leq u_s \quad (5)$$

where Eqn. (3) holds for all $c \in [1, C]$ and Eqn. (4) holds for all $i \in [1, N]$. The values of $u_{ic}$, $h_{ic}$, and $u_{sc}$ are determined by the HCPS topology. The maximum supportable streaming rate for a given mesh topology is the streaming rate that can be supported by all clusters. Since a cluster head participates in both an upper-layer and a lower-layer cluster, and the source's upload capacity is shared by several top-layer clusters, the supportable streaming rate of HCPS can be maximized by adjusting the allocation of the peers' and the source's upload capacities; this explains Equation (2). The first term in Equation (3) represents the average upload capacity per peer in cluster $c$, and the second term represents the cluster head's upload capacity (note that the cluster head can be either the source server or a peer). Since the maximum streaming rate of cluster $c$, $r_c^{max}$, is governed by the theoretical upper bound (see Equation (1)), this leads to Equation (3). Further, the amount of bandwidth a peer allocates to its upper-layer and lower-layer clusters must not surpass its total upload capacity, which explains Equation (4). Finally, the total upload capacity the source allocates to all clusters must not surpass the source's total upload capacity, which explains Equation (5).

2) Peer clustering heuristics: The goal is to construct an HCPS mesh topology that supports a streaming rate close to the optimal rate $r_{max}$. In the following, we use numerical analysis to examine the major factors that affect HCPS's performance. Assume there are 400 peers and one source node. The cluster size is set to 20. The peers are grouped into 20 base-layer clusters and one top-layer cluster of cluster heads. The maximum supportable streaming rate for HCPS is computed according to the optimization problem formulated in Equation (2). The peers' upload capacities obey the distribution described in Table I, which is taken from the measurement study conducted in [13].

TABLE I
BANDWIDTH DISTRIBUTION

Uplink (kbps)    Fraction of nodes
128              0.2
384              0.4
1000             0.25
5000             0.15
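The optimization in Equations (2)-(5) becomes a linear program once the max-min is rewritten with an auxiliary rate variable. The sketch below solves it with scipy for a toy topology (one top cluster with two heads, two base clusters of two normal peers each). This is our illustrative reconstruction; the paper does not prescribe a solver, and the topology, capacities, and variable names are assumptions.

```python
# Toy LP for Equations (2)-(5): maximize r subject to the per-cluster rate
# bounds of Eq. (3) and the capacity budgets of Eqs. (4)-(5).
import numpy as np
from scipy.optimize import linprog

# Variables: streaming rate r, plus one allocation per cluster membership.
idx = {name: k for k, name in enumerate(
    ["r", "u_h1_0", "u_h2_0", "h_h1_1", "h_h2_2",
     "u_p1_1", "u_p2_1", "u_p3_2", "u_p4_2", "u_s0"])}
nvar = len(idx)

def row(terms):
    """Build one inequality row: sum(coef * var) <= bound."""
    a = np.zeros(nvar)
    for name, coef in terms.items():
        a[idx[name]] = coef
    return a

A, b = [], []
# Eq (3), top cluster 0 (members: heads h1, h2; the source acts as its head):
A.append(row({"r": 1, "u_h1_0": -0.5, "u_h2_0": -0.5})); b.append(0)  # average bound
A.append(row({"r": 1, "u_s0": -1})); b.append(0)                      # head bound
# Eq (3), base cluster 1 (head h1; normal peers p1, p2):
A.append(row({"r": 1, "h_h1_1": -0.5, "u_p1_1": -0.5, "u_p2_1": -0.5})); b.append(0)
A.append(row({"r": 1, "h_h1_1": -1})); b.append(0)
# Eq (3), base cluster 2 (head h2; normal peers p3, p4):
A.append(row({"r": 1, "h_h2_2": -0.5, "u_p3_2": -0.5, "u_p4_2": -0.5})); b.append(0)
A.append(row({"r": 1, "h_h2_2": -1})); b.append(0)
# Eq (4): each peer's allocations fit its capacity (heads: 4, normal peers: 2).
A.append(row({"u_h1_0": 1, "h_h1_1": 1})); b.append(4)
A.append(row({"u_h2_0": 1, "h_h2_2": 1})); b.append(4)
for p in ["u_p1_1", "u_p2_1", "u_p3_2", "u_p4_2"]:
    A.append(row({p: 1})); b.append(2)
# Eq (5): source budget.
A.append(row({"u_s0": 1})); b.append(6)

# Maximize r <=> minimize -r; variables are nonnegative by default.
c = np.zeros(nvar); c[idx["r"]] = -1
res = linprog(c, A_ub=np.array(A), b_ub=np.array(b))
print("r_HCPS =", res.x[idx["r"]])   # ~2.0 for this topology
```

For this topology the optimum splits each head's capacity evenly between its two clusters, yielding $r_{HCPS} = 2$; the same reformulation scales to the 400-peer setting used in the numerical analysis above.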

We first assign bandwidth randomly to the 20 nodes of one cluster according to the bandwidth distribution, and then copy this allocation to the other clusters. Hence all clusters have homogeneous bandwidth allocations, and in each cluster the node with the largest upload bandwidth is selected as the cluster head. Solving the resulting linear programming problem shows that the value of $r_{HCPS}$ is very close to the theoretical upper bound (> 99%). In contrast, when each node independently draws its upload bandwidth from the distribution in Table I, $r_{HCPS}$ achieves only roughly 60% of the theoretical upper bound. In the first scenario the clusters are perfectly balanced (they are in fact identical), while in the second scenario they are not. According to Equation (2), the maximum supportable streaming rate, $r_{HCPS}$, is the minimum cluster streaming rate among all clusters, and the cluster streaming rate (Equation (3)) is the minimum of the cluster's average upload capacity and the cluster head's rate. Intuitively, the peers should therefore be divided into clusters with equal (or similar) average upload capacity to avoid wasting resources. Based on the above discussion, we propose the following heuristics, sketched in code after this list:

• The discrepancy among individual clusters' average upload capacity per peer should be minimized.
• The cluster head's upload capacity should be as large as possible. The capacity a head allocates to its base-layer cluster has to be larger than the cluster's average upload capacity to avoid becoming the bottleneck. Furthermore, the cluster head also joins the upper-layer cluster; ideally, the cluster head's rate should be $\geq 2 r_{HCPS}$.
• The number of peers in a cluster should be bounded from above by a relatively small number, since the cluster size determines the out-degree of peers. An overly large cluster prevents perfect scheduling from being performed properly.
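A minimal sketch of these heuristics, under the assumption of a static peer set: assign peers in descending capacity order to the cluster with the smallest running total, subject to a maximum cluster size. The function name and batch formulation are ours; the paper's bootstrap node applies the same ideas dynamically.

```python
# Sketch of the balancing heuristics: LPT-style greedy assignment keeps the
# per-cluster average capacity close (heuristic 1), makes the first, i.e.
# largest, peer of each cluster its head (heuristic 2), and caps the cluster
# size (heuristic 3). Our own construction, not the paper's protocol.
import math

def balanced_clusters(uploads, num_clusters):
    max_size = math.ceil(len(uploads) / num_clusters)
    clusters = [[] for _ in range(num_clusters)]
    totals = [0.0] * num_clusters
    for u in sorted(uploads, reverse=True):
        open_ids = [j for j in range(num_clusters) if len(clusters[j]) < max_size]
        k = min(open_ids, key=lambda j: totals[j])
        clusters[k].append(u)     # first peer placed in a cluster is its head
        totals[k] += u
    return clusters

# 20 peers drawn in the Table I proportions (0.2 / 0.4 / 0.25 / 0.15).
caps = [128] * 4 + [384] * 8 + [1000] * 5 + [5000] * 3
for c in balanced_clusters(caps, num_clusters=4):
    print("head:", c[0], " cluster average:", sum(c) / len(c))
```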

Due to peer dynamics, i.e., peers joining and leaving the system all the time, the HCPS mesh needs to be dynamically adjusted to maintain a consistently high supportable streaming rate. Below we describe the dynamic peer management in HCPS that implements the above heuristics.

B. Dynamic Peer Management

HCPS has a bootstrap node whose task is to manage the HCPS topology and balance the resources among clusters. Meanwhile, each cluster head manages the peers in its cluster at the base level. Due to peer churn, the average upload capacity per peer within each cluster varies over time. Hence the supportable streaming rates of clusters change, and the system's supportable streaming rate, $r_{HCPS}$, may decrease. The bootstrap node's task is to handle peer arrivals and departures in such a way that $r_{HCPS}$ is maintained at the highest possible level. To deal with unpredictable peer churn, the bootstrap node also periodically moves peers among clusters to keep the resources balanced. Below we describe the three key operations in HCPS that handle peer dynamics: peer join, peer departure, and cluster re-balancing. Detailed peer management is described in [11].

1) Peer join: Depending on the new arrival's upload capacity $u$ relative to the current $r_{HCPS}$, the new peer is classified into one of three categories: HPeer (resource-rich peer) if $u \geq r_{HCPS} + \delta$; MPeer (resource-medium peer) if $r_{HCPS} - \delta < u < r_{HCPS} + \delta$; and LPeer (resource-poor peer) otherwise, where $\delta$ is a configuration parameter. Denote by $N^{max}$ the maximum number of nodes allowed in a cluster. All clusters with fewer than $N^{max}$ peers are eligible to accept the new peer. If the upload capacity of the new peer is greater than some eligible cluster head's upload capacity by a margin, the peer is assigned to the cluster whose head has the smallest upload capacity; the new peer replaces the original cluster head, and the original head becomes a normal peer and stays in the cluster. If the new peer does not replace any cluster head, it is assigned to a cluster according to its type: an HPeer is assigned to the cluster with the minimum average upload capacity; an MPeer is assigned to the cluster with the smallest number of peers; and an LPeer is assigned to the cluster with the maximum average upload capacity. The idea is to balance the upload resources among clusters (a sketch of this rule follows below). The new peer is redirected to the corresponding cluster head, and the bootstrap node asks that cluster head to admit the new peer. In case the cluster is full after the new peer joins, the cluster splits itself into two clusters and then informs the bootstrap node. The cluster split process is described in [11].

2) Peer departure: The handling of peer departure is straightforward. If the departing peer is a normal peer, it informs its cluster head of the departure. The cluster head removes the peer from its cluster member list and informs the other peers in the cluster of the departure. If the departing peer is a cluster head, it informs the bootstrap node of its departure, and the bootstrap node selects a backup peer from the existing peers in the cluster as the new cluster head. This promotion requires the remaining peers of the cluster and the other cluster heads to update their member lists, since both clusters linked by the head are directly affected; the new cluster head also takes over the original head's management tasks. If a peer crashes, the handling is the same once the peer is sensed to be inactive.
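A compact sketch of the join rule, assuming the bootstrap node tracks per-cluster size and average upload capacity (the class layout and names are our assumptions):

```python
# Sketch of the peer-join assignment rule described above. Our own
# reconstruction; cluster bookkeeping is assumed to live at the bootstrap node.
from dataclasses import dataclass, field

@dataclass
class Cluster:
    head_capacity: float
    member_capacities: list = field(default_factory=list)

    def size(self):
        return 1 + len(self.member_capacities)

    def avg_upload(self):
        return (self.head_capacity + sum(self.member_capacities)) / self.size()

def assign_new_peer(u, clusters, r_hcps, delta, n_max, margin):
    eligible = [c for c in clusters if c.size() < n_max]
    # Head replacement: a newcomer much stronger than the weakest head takes over.
    weakest = min(eligible, key=lambda c: c.head_capacity)
    if u > weakest.head_capacity + margin:
        weakest.member_capacities.append(weakest.head_capacity)  # old head demoted
        weakest.head_capacity = u
        return weakest
    # Otherwise classify the newcomer and place it to balance resources.
    if u >= r_hcps + delta:                    # HPeer -> poorest cluster
        target = min(eligible, key=lambda c: c.avg_upload())
    elif u > r_hcps - delta:                   # MPeer -> smallest cluster
        target = min(eligible, key=lambda c: c.size())
    else:                                      # LPeer -> richest cluster
        target = max(eligible, key=lambda c: c.avg_upload())
    target.member_capacities.append(u)
    return target
```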

3) Cluster re-balancing: Clusters may lose balance in terms of the number of peers and the amount of resources per cluster as a result of peer churn. In HCPS, the bootstrap node periodically attempts to re-balance the clusters. At the end of an epoch, the bootstrap node first attempts to balance the cluster sizes. The clusters are sorted in descending order of cluster size. If the gap between the clusters with the largest and smallest numbers of peers is greater than the threshold $\max\{\alpha N^{max}, \beta \bar{N}\}$, where $\bar{N}$ is the average cluster size, these two clusters are merged and then split into two balanced clusters. The process continues until no pair of clusters violates the condition. In the second phase of cluster re-balancing, the bootstrap node tries to balance the resources. The clusters are sorted in descending order of average upload capacity per peer. If the difference in average upload capacity between the clusters with the highest and lowest values is greater than the threshold $\theta \bar{u}$, where $\bar{u}$ is the average upload capacity, these two clusters are merged and then split into two balanced clusters.

C. Peer Chunk Retrieval Delay Analysis

The above dynamic peer management scheme allows HCPS to adapt to peer arrivals and departures and achieve a high streaming rate. In addition, peers in HCPS can retrieve video with much shorter delays than peers in other P2P streaming systems. For clarity of illustration, assume the propagation delay between any two machines is $t_p$ and the transmission delay of a chunk is $t_s$. After the server generates a new chunk at time $t_0 = 0$, it takes $t_1 = t_p + t_s$ for the chunk to arrive at one cluster head $C_0$ at the upper level. $C_0$ then forwards that chunk to all other cluster heads at the upper level according to the perfect scheduling algorithm; by time $t_2 = 2t_p + N^{max} t_s$, all cluster heads have received the chunk. Each cluster head then forwards the chunk to all peers in its cluster. Following a similar argument, all peers receive the chunk by $t_3 = t_2 + 2t_p + N^{max} t_s = 4t_p + 2N^{max} t_s$. Given the small $N^{max}$ by design, small chunk transmission delays, and normal propagation delays, the peer chunk retrieval delay can be made very small. For example, if $t_p = 50$ms, $N^{max} = 20$ and $t_s = 10$ms, the worst-case peer chunk retrieval delay is only 600ms. In contrast, the measured chunk retrieval delays in simple mesh-based P2P streaming systems vary from tens of seconds to minutes [8].
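The worst-case bound is straightforward to evaluate; a minimal check under the parameter values stated above:

```python
# Worst-case chunk retrieval delay from Section II-C: t3 = 4*tp + 2*N_max*ts.
def worst_case_delay_ms(tp_ms, ts_ms, n_max):
    return 4 * tp_ms + 2 * n_max * ts_ms

print(worst_case_delay_ms(tp_ms=50, ts_ms=10, n_max=20))  # -> 600
```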

III. SIMULATION RESULTS

In this section, we use trace-driven simulation to demonstrate that HCPS is able to achieve a high supportable streaming rate. A flow-level simulator was developed to evaluate the system performance. The simulator reproduces the dynamic peer management and periodic cluster re-balancing exactly; the data traffic, however, is simulated at the flow level. Such abstraction speeds up the simulation while still yielding an accurate streaming rate for the system. The simulation is driven by a trace collected from the measurement study of a large-scale P2P live streaming system, PPLive [7]. We extract peer arrivals and life times from a one-day trace of a PPLive channel collected on April 3, 2006. The upload capacities of peers are assigned randomly according to the distribution in Table I. We set the period of cluster re-balancing to one minute and the server upload capacity to 2 Mbps. Fig. 3 depicts the number of peers in the system over a period of 24 hours. Peer churn is evident in this figure. The number of peers peaks around 9:00AM-12:00PM, when the system has more than 200 concurrent users.

Fig. 3. One-day Degree Evolution of Popular Channel (y-axis: Node Degree; x-axis: Time (h)).

A. HCPS Supportable Streaming Rate

Fig. 4 plots the HCPS streaming rate against the theoretical upper bound achieved by the perfect scheduling algorithm during the peak hours (9:00AM-12:00PM), under two re-balancing threshold settings. Both curves of the HCPS streaming rate closely follow the trend of the perfect scheduling streaming rate. The oscillation of the perfect scheduling streaming rate is due to upload capacity changes caused by peer churn. The oscillation of the HCPS streaming rate has two causes: (i) the upload capacity changes; and (ii) resource imbalance among HCPS clusters. While the first factor is inherent to P2P systems, the second can be mitigated by adjusting the re-balancing configuration parameters. For instance, in Fig. 4(a) the re-balancing parameters (see Section II-B.3) are set to α = 0.2, β = 0.4 and θ = 0.4, while the parameters used in Fig. 4(b) are α = 0.15, β = 0.3 and θ = 0.3. The tighter thresholds help HCPS improve the streaming rate a little, as shown in Fig. 4(b) compared with Fig. 4(a). However, tighter thresholds also increase the re-balancing overhead. In practice, the parameters have to be carefully tuned to strike the right balance between management overhead and system performance.

Fig. 4. Rate Evolution in Peak Time: (a) α = 0.2, β = 0.4, θ = 0.4; (b) α = 0.15, β = 0.3, θ = 0.3 (x-axes: Time (m); y-axes: Playback Rate in kbps; curves: HCPS and Perfect).

To better understand the persistent performance, Table II reports the average streaming rate over four-hour periods of the day. In the table, the re-balancing threshold setting of HCPS is the same as in Fig. 4(a), as it is for the following simulation results. The HCPS streaming rate is within 90% of the theoretical upper bound at all times.

TABLE II
AVERAGE RATE PER TIME ZONE

Time Zone (h)     0-4    4-8    8-12   12-16  16-20  20-24
Perfect (kbps)    1152   1206   1175   1214   1009   1144
HCPS (kbps)       1068   1105   1043   1124   899    1059

B. Performance in Supporting Fixed Rate Streaming

Constant-bit-rate (CBR) coding is typically used in online streaming services. In this section, we examine how HCPS performs in supporting CBR content. Fig. 5 depicts the fraction of users that can watch the content at a given rate. Almost all users in HCPS can stream video at up to 800 kbps. With current encoding technology, online content uses streaming rates in the range of 400 kbps to 800 kbps; hence HCPS is able to support most users in this range.

Fig. 5. Percentage of supportable nodes in HCPS (y-axis: Percentage of Supportable Nodes; x-axis: Time (m); curves for fixed rates of 800, 900, and 1000 kbps).

As the streaming rate increases further, fewer users can sustain the full rate. The fraction of full-rate users decreases quickly, as shown in Fig. 5. This is because HCPS maximizes the streaming rate by balancing the supportable streaming rates of individual clusters: once the fixed streaming rate surpasses the HCPS supportable rate, most clusters' streaming rates fall below the fixed rate, which explains the rapid decrease in the number of users that can sustain the full rate. However, the fraction of content received by users decreases more gradually, as shown in Fig. 6. Table III further gives the average fraction of received content at different CBR streaming rates using HCPS and perfect scheduling. Again, HCPS approaches the performance of the perfect scheduling algorithm. The above results suggest that advanced coding techniques, such as layered coding or multiple-description coding, may be a good way to combat a temporary lack of upload resources in the system.

Fig. 6. Average fraction of received content in HCPS (y-axis: Fraction of Received Content; x-axis: Time (m); curves for fixed rates of 800, 1000, 1200, and 1400 kbps).

TABLE III
AVERAGE RATIO UNDER DIFFERENT PLAYBACK RATE

Fixed Playrate (kbps)   400    600    800    1000   1200   1400
Perfect (%)             100    100    100    99.4   93.7   82.1
HCPS (%)                100    99.9   99.6   97.5   89.2   77.2

IV. CONCLUSIONS

In this paper, we proposed a hierarchically clustered P2P streaming (HCPS) framework that is capable of efficiently utilizing users' upload capacity and approaching the maximum achievable streaming rate of a P2P streaming system. The small cluster size in HCPS enables the adoption of the perfect scheduling algorithm to achieve high peer bandwidth utilization, while the hierarchical architecture makes it easy for HCPS to accommodate a large number of peers. We formulated an optimization problem to characterize the maximum streaming rate of HCPS and proposed a distributed protocol to approach it. Results obtained from simulations driven by traces collected from a real P2P streaming system demonstrate that the average streaming rate achieved by HCPS stays consistently within 10% of the maximum streaming rate.


REFERENCES

[1] R. Kumar, Y. Liu, and K. Ross, "Stochastic Fluid Theory for P2P Streaming Systems," in Proceedings of IEEE INFOCOM, 2007.
[2] YouTube, "YouTube Homepage," http://www.youtube.com.
[3] Y.-H. Chu, S. G. Rao, and H. Zhang, "A Case for End System Multicast," in Proceedings of ACM SIGMETRICS, 2000.
[4] X. Zhang, J. Liu, B. Li, and T.-S. P. Yum, "DONet/CoolStreaming: A Data-Driven Overlay Network for Live Media Streaming," in Proceedings of IEEE INFOCOM, 2005.
[5] PPLive, "PPLive Homepage," http://www.pplive.com.
[6] PPStream, "PPStream Homepage," http://www.ppstream.com.
[7] X. Hei, C. Liang, J. Liang, Y. Liu, and K. W. Ross, "Insights into PPLive: A Measurement Study of a Large-Scale P2P IPTV System," in Proc. Workshop on Internet Protocol TV (IPTV) Services over World Wide Web, in conjunction with WWW2006, 2006.
[8] X. Hei, C. Liang, J. Liang, Y. Liu, and K. Ross, "A Measurement Study of a Large-Scale P2P IPTV System," IEEE Transactions on Multimedia, 2007.
[9] C. Gkantsidis and P. R. Rodriguez, "Network Coding for Large Scale Content Distribution," in Proceedings of IEEE INFOCOM, 2005.
[10] Y. Guo, C. Liang, and Y. Liu, "Adaptive Queue-based Chunk Scheduling for P2P Live Streaming," Polytechnic University, Tech. Rep., 2007. Available: http://eeweb.poly.edu/faculty/yongliu/docs/aqcs.pdf
[11] C. Liang, Y. Guo, and Y. Liu, "Hierarchically Clustered P2P Streaming System," Polytechnic University, Tech. Rep., 2007. Available: http://wan.poly.edu/docs/p2pclustertech.pdf
[12] S. Banerjee, B. Bhattacharjee, and C. Kommareddy, "Scalable Application Layer Multicast," in Proceedings of ACM SIGCOMM, 2002.
[13] A. R. Bharambe, C. Herley, and V. N. Padmanabhan, "Analyzing and Improving a BitTorrent Network's Performance Mechanisms," in Proceedings of IEEE INFOCOM, 2006.