Scalable Video-on-Demand Streaming in Mobile Wireless Hybrid Networks

Tai T. Do, Kien A. Hua, Alex Aved, Fuyu Liu
School of Electrical Engineering and Computer Science
University of Central Florida, Orlando, FL 32826, USA
Email: tdo, khua, aaved, [email protected]

Ning Jiang
Microsoft Corporation
1 Microsoft Way, Redmond, WA 98052, USA
Email: [email protected]

Abstract—Video-on-demand service in wireless networks is an important step toward the goal of providing video services anywhere, anytime. Typically, carrier mobile networks are used to deliver videos wirelessly. Since every video stream comes from the base station, regardless of the bandwidth-sharing technique in use, the media streaming system is still limited by the network capacity of the base station. The key to overcoming this scalability issue is to exploit resources available at mobile clients in a peer-to-peer setting. We observe that it is common for a carrier mobile network and a mobile peer-to-peer network to co-exist in a wireless environment. A feature of such a hybrid environment is that the former offers high availability assurance, while the latter presents an opportunistic use of resources available at mobile clients. Our proposed video-on-demand technique, PatchPeer, leverages this network characteristic to allow the video-on-demand system to scale beyond the bandwidth capacity of the server. Mobile clients in PatchPeer are no longer passive receivers, but also active senders of video streams to other mobile clients. Our extensive performance study shows that PatchPeer can accept more clients than the current state-of-the-art technique, while maintaining the same Quality-of-Service to clients.

I. INTRODUCTION

We are interested in providing on-demand service for archived videos in wireless networks. On-demand video service in wireless networks has a number of potential applications. In Advanced Traveler Information Systems (ATIS), the service can be used to disseminate captured scenes of an accident, serving as a collision warning or an indication of possible road congestion ahead. In a university campus setting, students can watch archived lectures on their portable video playback devices from anywhere on campus. In a sports stadium, fans can watch replays or past interviews related to their teams.

We share the argument of the authors in [1] that the vast majority of commercial wireless applications need high availability assurance. These applications are best supported through the managed infrastructure of the carrier mobile network. The simplest approach to stream video data to users (user, node, client, and peer are used interchangeably in this paper) is to create a unicast connection from the server to each requesting user, where the video server is situated behind the base station. Given the high-bandwidth requirement of video data,

this approach will quickly consume all available downlink bandwidth when multiple users request the video service. Multicast, as a stream-sharing mechanism, can improve the spectrum utilization of wireless communication in multipoint services. The Multimedia Broadcast Multicast Service (MBMS) has been introduced to 3G cellular networks [2] as one possible multicast technique. Our goal in this paper is not to propose a new multicast service, but to show how an available multicast service can be utilized to efficiently support video-on-demand in a wireless network.

Patching [3] is a technique developed for the Internet that enables video-on-demand service to utilize the multicast service at the network layer. The basic idea of Patching is to allow a client to join an existing regular multicast for the remainder of the video, and to download the missed portion over a dedicated patching stream. A straightforward application of Patching in a wireless environment is to implement the technique at the media server. This direct application still has limited scalability because every media stream has to come from the base station.

To overcome the scalability issue of Patching in a wireless environment, our approach is to exploit the wireless communication architecture. We observe that a carrier mobile network, also known as a wireless wide area network (WWAN), and a mobile peer-to-peer network, also known as a wireless local area network (WLAN), can co-exist in a typical wireless environment. Each mobile device has two radio interfaces, one for the WLAN and one for the WWAN. In the WLAN, mobile devices communicate with each other using the IEEE 802.11b protocol in its ad-hoc mode. In the WWAN, the base station communicates with mobile devices using the 1xEV-DO (Evolution-Data Only) protocol, also known as HDR (High Data Rate). A feature of this hybrid wireless environment is that the WWAN offers high availability assurance, while the WLAN presents an opportunistic use of resources available at mobile clients. Our proposed video-on-demand technique, called PatchPeer, takes advantage of this distinct feature of the hybrid wireless network to overcome the scalability issue of the original Patching technique in a traditional wireless network. A client just joining PatchPeer can receive the regular stream of an existing regular multicast from the base station for the remainder of the video, and the patching stream from a neighboring peer for the missed portion of the video.
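For concreteness, the following sketch shows the stream schedule that a joining client ends up with under the Patching idea described above: the patching stream covers the portion of the video missed since the last regular multicast started, and the ongoing regular multicast covers the rest. The type and function names are ours and are only illustrative; times are assumed to be in seconds.

```cpp
// Sketch: which stream covers which part of the video under Patching.
// All names here are illustrative, not part of PatchPeer itself.
#include <cstdio>

struct PatchSchedule {
    double patch_start, patch_end;  // played immediately from the patching stream
    double regular_start;           // from here on, the buffered regular multicast is used
};

// requestTime and lastRegularTime are absolute clock times; their difference
// (the skew) is the portion of the video the client has already missed.
PatchSchedule planPlayback(double requestTime, double lastRegularTime) {
    double skew = requestTime - lastRegularTime;
    return PatchSchedule{0.0, skew, skew};
}

int main() {
    // A client arriving 90 s after the last regular multicast started.
    PatchSchedule s = planPlayback(1090.0, 1000.0);
    std::printf("patch stream covers [%.0f, %.0f) s; the regular multicast covers the rest\n",
                s.patch_start, s.patch_end);
    return 0;
}
```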

The main contribution of this paper is the design of PatchPeer, which to the best of our knowledge is the first design to provide on-demand video service using a multicast technique in a heterogeneous network of WWAN and WLAN. Furthermore, this paper can serve as a case study illustrating the design principles for supporting a specific application in an integrated wireless network. We also conduct an extensive performance study to evaluate PatchPeer under various system settings. Our performance study shows that PatchPeer performs better than the original Patching technique in all settings.

The two systems most closely related to PatchPeer are [4] and [5]. In [4], the WLAN is used in two modes, access point and ad hoc. The base layer of the media stream is delivered through the access point, while the enhancement layers are delivered through multiple paths in ad hoc mode. The hybrid architecture in [4] only utilizes the WLAN, while PatchPeer uses both the WWAN and the WLAN. The work in [5] focuses on finding an energy-efficient solution for supporting media streaming applications in wireless hybrid peer-to-peer systems. Since the WWAN interface consumes significantly more energy than the WLAN interface, the paper proposes two collaboration modes, master-slave and peer-to-peer, in which clients collaborate to stream the media content from the server. Both of these previous works focus on improving the unicast connection between the media server and mobile clients, while PatchPeer considers multicast connections.

The rest of the paper is organized as follows. Section II describes the detailed design of PatchPeer. Section III presents the results of our performance study. Finally, Section IV concludes the paper.

II. PROPOSED PATCHPEER TECHNIQUE

A. System Overview

Fig. 1 shows the typical interactions between a requesting peer, its neighboring peers, and the server in PatchPeer. The requesting peer first sends the server the ID of the requested video, and receives from the server the starting time of the latest regular channel currently streaming that video. The requesting peer then requests a patching stream from a neighboring peer to compensate for the initial missing part of the video. If a neighbor can provide the patching stream, the requesting peer receives the patching stream from this neighbor and the regular stream from the base station. Otherwise, the requesting peer receives both the patching stream and the regular stream from the base station. When searching for a patching stream provider, the requesting peer only searches within its one-hop neighborhood; the one-hop scope is used because one-hop communication is simple and makes efficient use of network resources.

Fig. 1. Collaboration Diagram of PatchPeer. The requesting peer, a neighboring peer, and the base station exchange the following messages: (1) Request Video VID; (2) Send LastRegularTime; (3) Request a patch; (4a) Send the patch stream, or (4b) No patch stream; (5a) Send the regular stream, or (5b) Send the regular and patch streams.
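As a rough sketch of the control messages in Fig. 1, the structures below name one plausible set of fields. Only the message names, and the VideoRequest token (pID, vID, |pb|) described in Section II-C, come from the paper; the remaining fields are assumptions made for illustration.

```cpp
// Sketch of the control messages shown in Fig. 1 as plain C++ structs.
// Field choices beyond the VideoRequest token are illustrative assumptions.
#include <cstdint>

struct VideoRequest         { uint32_t peerId; uint32_t videoId; double playbackBufferSize; }; // (pID, vID, |pb|), sent to the server
struct LastRegularTimeMsg   { double lastRegularTime; };           // server reply: start time of the latest regular multicast
struct PatchRequest         { uint32_t peerId; uint32_t videoId; double skew; };               // broadcast to one-hop neighbors
struct PatchRequestAck      { uint32_t neighborId; bool status; }; // status == true iff the neighbor can serve the patch
struct JoinRegularMulticast { uint32_t peerId; uint32_t videoId; };// asks the server to add the peer to the ongoing multicast
```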

B. Patching Stream Provider from One-hop Neighbors

Each peer in PatchPeer caches the prefix of the video it is watching in a forwarding buffer (fb). The idea is that early peers can use the video content in their fb to provide the patching stream to neighboring peers that request the same video at a later time. Suppose a peer p wants to request a video v. By contacting the server, p learns the time at which the last regular multicast of the video started. If no regular multicast of the video is available, p issues a VideoRequest message to the server and is served the entire video by the server, as described in Section II-C. Otherwise, p starts looking for a patching stream provider in its neighborhood by broadcasting a PatchRequest message to its one-hop neighbors. To be a patching stream provider for p, a one-hop neighbor p′ has to satisfy two conditions (a short sketch of both checks follows the list):

• Data Condition: the fb of p′ must hold enough data of the requested video to compensate for the skew, defined as currentTime − LastRegularTime, where currentTime is the current time and LastRegularTime is the time the last regular multicast of the requested video started.

• Network Condition: there must be enough bandwidth in the WLAN to support the new data flow from p′ to p without compromising the QoS of existing data flows.
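The following minimal sketch shows the two checks as they might run at a candidate neighbor p′, assuming the neighborhood available-bandwidth estimate of Section II-B1 has already been obtained. The struct and function names are ours, not part of the PatchPeer specification.

```cpp
// Sketch of the admission check a neighboring peer p' runs on a PatchRequest.
// The Data Condition uses only local state; the Network Condition takes the
// bandwidth estimate of Section II-B1 as an input. Names are illustrative.
struct ForwardingBuffer {
    int videoId;            // which video's prefix is cached
    double prefixSeconds;   // how many seconds of that prefix are cached
};

bool dataCondition(const ForwardingBuffer& fb, int requestedVideo, double skew) {
    // fb must hold at least 'skew' seconds of the requested video,
    // where skew = currentTime - LastRegularTime.
    return fb.videoId == requestedVideo && fb.prefixSeconds >= skew;
}

bool networkCondition(double neighborhoodAvailableBw, double streamRate) {
    // There must be enough residual WLAN bandwidth for one more flow at the
    // video playback rate without hurting existing reserved flows.
    return neighborhoodAvailableBw > streamRate;
}
```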

While the first condition is easy to check, the second warrants a separate research topic of its own [6]. In [6], the authors point out that existing approaches supporting QoS in wired networks or multichannel wireless networks are not applicable to single-channel wireless networks, such as IEEE 802.11 networks. The chief problem comes from the shared nature of the wireless medium in single-channel networks: any node with a wireless transmitter can transmit and contend for the wireless channel. To cope with this problem, the authors in [6] propose CACP, a distributed contention-aware admission control for new flows in ad hoc networks. We adapt the solutions proposed in CACP to PatchPeer as follows.

1) Prediction of Available Bandwidth: We estimate the neighborhood available bandwidth for a mobile node. The neighborhood available bandwidth is the maximum amount of bandwidth that a node can use for transmitting without depriving any existing flow in its carrier-sensing range of its reserved bandwidth. It is computed as a weighted function of the previous estimate and the amount of unused bandwidth observed during the period since that estimate. Specifically, each mobile node uses the following formula to estimate the neighborhood available bandwidth:

B_neighbor = α · B_neighbor + (1 − α) · (T_idle / T_p) · B_channel

where the weight α ∈ [0, 1], B_channel is the channel capacity, and T_idle is the amount of channel idle time the node observes during each period of length T_p.
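The sketch below implements this estimator as an exponentially weighted moving average. The member names mirror the symbols in the formula; the class itself and its interface are our own illustration, not code from CACP or from the PatchPeer implementation.

```cpp
// Minimal sketch of the neighborhood available-bandwidth estimator adapted
// from CACP [6]: a weighted average of the previous estimate and the
// bandwidth left idle during the last measurement period T_p.
class BandwidthEstimator {
public:
    BandwidthEstimator(double channelCapacityKbps, double alpha)
        : bChannel_(channelCapacityKbps), alpha_(alpha), bNeighbor_(channelCapacityKbps) {}

    // Called once per period T_p with the channel idle time measured in that period.
    double update(double idleTime, double periodTp) {
        double idleBandwidth = (idleTime / periodTp) * bChannel_;
        bNeighbor_ = alpha_ * bNeighbor_ + (1.0 - alpha_) * idleBandwidth;
        return bNeighbor_;
    }

    double available() const { return bNeighbor_; }

private:
    double bChannel_;   // channel capacity B_channel
    double alpha_;      // smoothing weight alpha, 0 <= alpha <= 1
    double bNeighbor_;  // current estimate B_neighbor
};
```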

2) Prediction of Flow Bandwidth Consumption: The bandwidth B_c consumed by the connection between p′ and p is simply the video streaming rate, which equals the video playback rate.

3) Selection of a Patch Provider: Upon receipt of the PatchRequest message from the requesting peer p, a neighboring peer p′ determines whether it satisfies the Data Condition, which is a simple check. Next, p′ determines whether the Network Condition can be met as follows. p′ first computes its neighborhood available bandwidth B_neighbor as shown in Section II-B1. If B_neighbor > B_c, p′ passes the Network Condition; otherwise, p′ fails it. If both conditions are met, p′ issues a PatchRequestAck message to p with the status flag set to TRUE. After a timeout period, p checks all replies from its neighbors. If there is no positive response, i.e., no response with the status flag set to TRUE, p sends a VideoRequest message to the server and is served by the server according to the procedure described in Section II-C. Otherwise, p selects a candidate from the set of responding neighbors to be its patching stream provider and sends a JoinRegularMulticast message to the server. When the server receives the JoinRegularMulticast message from the client, it simply adds the client to the list of clients sharing the ongoing regular multicast. To select a patching stream provider from the set of candidate peers, p can use one of several heuristic selection methods. In this paper, we use the Closest Peer selection method, which selects the candidate closest to p.
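The requesting peer's side of this selection might look like the sketch below, under the assumption that each positive PatchRequestAck carries, or allows p to derive, a distance estimate to the responding neighbor. The field and function names are ours.

```cpp
// Sketch of provider selection at the requesting peer: after the timeout it
// keeps only positive PatchRequestAck replies and, per the Closest Peer
// heuristic, picks the neighbor with the smallest distance.
#include <vector>
#include <optional>
#include <limits>

struct PatchRequestAck {
    int neighborId;
    bool status;       // TRUE iff the Data and Network Conditions both hold
    double distance;   // assumed distance estimate to the requesting peer
};

std::optional<int> selectClosestProvider(const std::vector<PatchRequestAck>& replies) {
    std::optional<int> best;
    double bestDist = std::numeric_limits<double>::infinity();
    for (const auto& ack : replies) {
        if (ack.status && ack.distance < bestDist) {
            bestDist = ack.distance;
            best = ack.neighborId;
        }
    }
    return best;  // std::nullopt => fall back to a VideoRequest to the server
}
```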

C. Server Main Routine

A token of the VideoRequest message contains (pID, vID, |pb|), where pID and vID are the identifiers of the requesting peer and the requested video, and |pb| is the size of the playback buffer employed at the requesting peer. There is a subtle difference between the two buffers, the forwarding buffer and the playback buffer, at a peer. The forwarding buffer fb is used by each peer to cache the prefix of the video so that the peer can later provide a patching stream to another peer. The playback buffer pb, on the other hand, is used by each peer to cache the regular stream of the video while the peer is receiving the patching stream from either the video server or another peer.

When the server receives the VideoRequest message, if the server does not have any free channel, the request is rejected. Otherwise, the server executes Algorithm 1, whose main components are as follows. If there is no regular channel currently serving video v, the free channel is used to start a new regular multicast for v (steps 2-5). Otherwise, skew and workload are computed (steps 6, 7-11). If skew exceeds the size of the optimal patch window, the free channel is used for a new regular multicast (steps 12-13). Otherwise, the free channel is used for a patching multicast of duration workload (steps 14-17). Finally, the server's data structures are updated (step 18), and the requesting peer is notified of the IDs of the downloading channels (step 19).

Algorithm 1 Server Routine
Input:
  v: the requested video
  W(v): size of the optimal patch window for v
  |pb|: size of the playback buffer
  LastRegularID, LastRegularTime: channel ID and start time of the last regular multicast of v
  FreeID: ID of the free channel
Output: Send IDs of the downloading channels to the requesting peer
 1: Set RID = NULL, PID = NULL, and Workload = 0
 2: if no existing regular multicast is currently serving v then
 3:   RID = FreeID, PID = NULL
 4:   Jump to step 18
 5: end if
 6: skew = currentTime − LastRegularTime
 7: if skew < |pb| then
 8:   tempWL = skew
 9: else
10:   tempWL = |v| − min{|pb|, |v| − skew}
11: end if
12: if skew > W(v) then
13:   RID = FreeID, PID = NULL
14: else
15:   RID = LastRegularID, PID = FreeID
16:   Duration of the patch stream: Workload = tempWL
17: end if
18: Update server data structures
19: Issue a message containing (RID, PID) to the requesting peer
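For readers who prefer code, the channel-assignment decision of Algorithm 1 can be written as the sketch below. It is a direct transcription of the listing; the choice of -1 to stand in for NULL channel IDs, the parameter types, and the omission of the bookkeeping in steps 18-19 are ours.

```cpp
// Sketch transcribing the decision logic of Algorithm 1 (steps 1-17).
#include <algorithm>

struct ChannelAssignment {
    int regularId;    // RID
    int patchId;      // PID (-1 means NULL: no patching multicast)
    double workload;  // duration of the patching multicast, 0 if none
};

ChannelAssignment serverRoutine(bool regularExistsForV,
                                double currentTime, double lastRegularTime,
                                int lastRegularId, int freeId,
                                double patchWindowW, double pbSize, double videoLen) {
    if (!regularExistsForV)                        // steps 2-5: start a new regular multicast
        return {freeId, -1, 0.0};

    double skew = currentTime - lastRegularTime;   // step 6
    double tempWL = (skew < pbSize)                // steps 7-11
                        ? skew
                        : videoLen - std::min(pbSize, videoLen - skew);

    if (skew > patchWindowW)                       // steps 12-13: too late to patch
        return {freeId, -1, 0.0};

    return {lastRegularId, freeId, tempWL};        // steps 14-17: share the ongoing regular
                                                   // multicast and patch for tempWL
}
```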

D. Loading and Playing Video at Peers

A peer p executes the following steps to download and play the video. The peer uses two loaders, Lp and Lr, for loading the patching stream and the regular stream, respectively. The VideoPlayer then plays back the data as follows. Data received by loader Lp from the patching stream is rendered immediately by the VideoPlayer. At the same time, data received by loader Lr from the regular stream is stored in the playback buffer pb. When no more data arrive from the patching stream, the VideoPlayer renders data from pb, and it stops when pb is exhausted. The prefix of the video is also saved in the forwarding buffer fb.
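The following single-threaded sketch approximates this hand-off between the two loaders. In a real client Lp and Lr run concurrently; here queues stand in for the loaders and the playback buffer, and one regular-stream frame is buffered per patch frame rendered as a crude stand-in for the two loaders progressing at the same rate.

```cpp
// Sketch of the playback logic of Section II-D: data from the patching loader
// Lp is rendered immediately while the regular loader Lr fills pb; once the
// patch is exhausted the player drains pb. All structures are illustrative.
#include <queue>
#include <string>
#include <iostream>

void playVideo(std::queue<std::string>& patchStream,    // frames arriving on Lp
               std::queue<std::string>& regularStream,  // frames arriving on Lr
               std::queue<std::string>& playbackBuffer  /* pb */) {
    // Phase 1: render the patching stream while buffering the regular stream.
    while (!patchStream.empty()) {
        std::cout << "render " << patchStream.front() << '\n';
        patchStream.pop();
        if (!regularStream.empty()) {                    // Lr stores into pb in parallel
            playbackBuffer.push(regularStream.front());
            regularStream.pop();
        }
    }
    // Phase 2: no more patch data, so drain pb (and anything still arriving on
    // the regular stream) until the video ends.
    while (!playbackBuffer.empty() || !regularStream.empty()) {
        if (playbackBuffer.empty()) {
            playbackBuffer.push(regularStream.front());
            regularStream.pop();
        }
        std::cout << "render " << playbackBuffer.front() << '\n';
        playbackBuffer.pop();
    }
}
```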

E. Caching at Forwarding Buffers

The forwarding buffer at each peer can store a number of video prefixes. When the buffer is full, a peer needs to evict an existing video prefix to create space for an incoming one. We divide the available space of the forwarding buffer into a number of equal-sized chunks; each chunk can store one video prefix. Peers use one of the following common replacement policies to decide which video prefix to evict: Least Frequently Used (LFU), Least Recently Used (LRU), First In First Out (FIFO), and Random.

F. Recovery Procedures

While communication between mobile clients and the base station in the WWAN is fairly stable, communication disruptions are expected to happen quite often in the WLAN. The two main sources of disruption are node mobility and the transient nature of nodes. A receiving peer can detect a disconnection from its patching stream provider by monitoring the data rate of the incoming stream. After detecting the disconnection, the peer initiates a new search among its one-hop neighbors for another patching stream provider to supply the remaining patching stream. If there is no alternative patching stream provider, the disconnected peer contacts the server. If the server does not have a free channel to serve the peer, the video service for this peer is deemed unrecoverable.

G. An Example

We use a simple scenario to illustrate the highlights of the PatchPeer system. At time 0, peer p1 joins the system. Since p1 is the first peer requesting the video, p1 receives the entire video over a regular stream R1 from the video server. At time 6, peer p2 joins the system. Because there is no peer in the neighborhood of p2 that can provide a patch, and the patch window size for this video is W = 5 (less than the skew of 6), p2 also receives the entire video over a regular stream R2 from the video server. At time 9, peer p3 joins the system. This time, p3 receives the patching stream from p2 and the regular stream from R2. Fig. 2(a) and Fig. 2(b) show the relative locations of the peers and the different multicast streams at time 9. At time 11, p2 moves out of the communication range of p3, but p3 has not yet finished downloading the patching stream. p3 first searches its neighborhood for a replacement for p2 and finds p1; it then resumes downloading the remainder of the patching stream from p1. Fig. 2(c) and Fig. 2(d) show the locations and multicast streams of PatchPeer at time 11.

III. PERFORMANCE STUDY OF PATCHPEER

We use the number of accepted requests as the performance metric in our simulation study. A request is considered accepted if the requesting peer receives and plays the entire video without frame loss (jitter) or long delays (glitches). We carry out a sensitivity analysis with respect to workload factors and system design parameters such as caching parameters, video characteristics, and user characteristics. Table I summarizes the parameters used in our simulation. Unless noted otherwise, the simulation uses the default values reported in Table I.

TABLE I
PARAMETERS USED FOR PERFORMANCE EVALUATION

Parameter                                        Default        Variation
Simulation Time (hours)                          10             N/A
Operating Area (m x m)                           1000x1000      N/A
Mean Request Arrival Rate λ (1/second)           0.2            0.05-0.3
Number of Mobile Nodes                           100            N/A
WLAN Transmission Range (m)                      250            N/A
WLAN Bandwidth (kbps)                            6000           N/A
Cache Replacement Policy                         LFU            N/A
Peer Forwarding Buffer Size (playback minutes)   60             20-60
Number of Chunks in a Forwarding Buffer          3              N/A
Forwarding Peer Selection                        Closest Peer   N/A
WWAN Bandwidth (kbps)                            2400           N/A
Number of Videos                                 30             1-100
Normal Playback Rate (kbps)                      300            N/A
Video Length (minutes)                           45             10-60
Zipf Skew Factor (video access frequency)        0.271          N/A
Mean Speed (m/s)                                 5              0-15
Speed Delta (m/s)                                0.5            0-1.5
Mean Pause Time (seconds)                        5              N/A
Pause Time Delta (seconds)                       1              N/A

Nodes move according to the Random Waypoint model. Video access frequency is modeled with a Zipf distribution, and video length follows a log-normal distribution. We developed a discrete event simulator in C++ for PatchPeer and Patching [3]. Our simulator only simulates the media stream level and does not simulate the network protocol stack. While a detailed simulation of the network protocol stack would model realistic scenarios more closely, it is unnecessary for our simulation objectives and too time-consuming for experiments with a large number of nodes and long simulated times. Each simulation setting is run 100 times with different seeds, and the reported result is the average over these 100 runs. Each simulation run lasts for 10 simulated hours.

A. PatchPeer vs. Patching

In this section, we compare PatchPeer against Patching [3]. The performance of Patching can be affected by the client playback buffer size, the inter-arrival time of requests, and the video length [3]. We use the optimal patching window from [3] to determine the size of the client playback buffer; hence, we only compare PatchPeer against Patching under varying request inter-arrival time, video length, and number of videos. Since Patching is not affected by node mobility, we dedicate a separate section, III-B, to investigating the effect of mobility on PatchPeer; in this section, nodes do not move. For all figures, the x-axis is the simulation time and the y-axis is the total number of requests accepted so far in the simulation.

Figures 3, 4, and 5 show that PatchPeer consistently outperforms Patching in all conditions, confirming that the additional resources contributed by peers indeed improve the scalability of the overall system.

Fig. 2. Peer Locations and Multicast Streams at two different times: (a) and (b) at time 9; (c) and (d) at time 11. The panels show the base station's cell coverage, the ad-hoc ranges of peers p1, p2, and p3, the regular streams R1 and R2, the patching stream P1, and, for each peer, its loaders Lp and Lr, playback buffer (PB), forwarding buffer (FB), and player.

Fig. 3 shows that when the number of videos increases from 1 to 100, the number of requests accepted by PatchPeer after 10 simulated hours drops from 7,000 to around 450, while the number accepted by Patching drops from 225 to 117. When there are more videos, Patching has to split the server bandwidth across the videos. As a result, it is less likely that a regular channel is already multicasting the video a user requests. Because fewer requests can share a regular multicast, the server bandwidth is depleted more quickly. PatchPeer can mitigate this problem. When there is no regular channel multicasting the requested video, PatchPeer cannot help; but if there is an ongoing regular multicast of the requested video and the server has already used up its network bandwidth, the requesting peer can still get a patching stream from a neighboring peer. Recall from Section II-B that a neighboring peer in PatchPeer has to satisfy two conditions to serve a patching stream, namely the Network Condition and the Data Condition. When there are more videos, the probability that a peer has a given video prefix in its forwarding buffer is smaller. In other words, many neighboring peers satisfy the Network Condition, but fewer meet the Data Condition. The effect is that PatchPeer alleviates the server bottleneck in Patching more effectively when there are fewer videos.

We fix the number of videos to 50 for Fig. 4 and Fig. 5. Fig. 4 compares Patching and PatchPeer under different workloads. Even when the workload is small (λ = 0.05), the performance of Patching is already saturated; hence, the total number of requests accepted by Patching at each simulation hour is the same for different request rates. While Patching's performance is limited by the network bandwidth at the server, PatchPeer can scale the system beyond the server's network capacity by leveraging resources available at peers. As a result, PatchPeer accepts an increasing number of requests as the workload grows heavier.

In Fig. 5, both Patching and PatchPeer accept more requests when the average video length is shorter. In the case of Patching, with shorter videos, each regular and patching multicast occupies a server channel for less time, so server resources are released faster. In PatchPeer, shorter videos also mean shorter patching streams between peers; hence, the WLAN can serve more patching streams when the video length is short.

The combined effect of quicker release of server channels and peer channels explains why PatchPeer can accept more requests when the video length is short.

B. PatchPeer with Moving Nodes

We carried out a number of experiments to evaluate the effect of node mobility on PatchPeer with different forwarding buffer sizes, video lengths, and node densities. Due to space constraints, we only present the results for different mean speeds and forwarding buffer sizes. Each graph in Fig. 6 shows the result for a given forwarding buffer size, and each line in a graph represents the result for a given mean speed. When the forwarding buffer size is increased, the performance gain in PatchPeer is most obvious when node mobility is not too high. On the other hand, high node mobility causes frequent disconnections of patching streams between peers; as a result, the buffer size does not have a significant effect on the performance of PatchPeer under high node mobility.

IV. CONCLUSION

PatchPeer is a novel video-on-demand delivery technique for wireless environments that overcomes the limitations of the Patching technique in a traditional wireless network. By utilizing the available resources of mobile clients, PatchPeer allows the video-on-demand system to scale beyond the network capacity of the server. We verified the performance of the two techniques in our simulation study with different workload factors and system design parameters.

REFERENCES

[1] H. Luo, R. Ramjee, P. Sinha, L. E. Li, and S. Lu, "UCAN: A unified cellular and ad-hoc network architecture," in ACM MobiCom, 2003.
[2] L. (ed.), "Broadcast and multicast: A vision on their role in future broadband access networks," http://cordis.europa.eu/ist/ct/proclu/c/broadcast.htm, 2005.
[3] Y. Cai and K. A. Hua, "Sharing multicast videos using patching streams," Multimedia Tools and Applications, vol. 22, no. 2, pp. 125-146, 2003.
[4] Y. He, I. Lee, X. Gu, and L. Guan, "Centralized peer-to-peer video streaming over hybrid wireless network," in ICME, 2005, pp. 550-553.
[5] M. K. H. Yeung and Y.-K. Kwok, "Energy efficient media streaming in wireless hybrid peer-to-peer systems," in IPDPS, 2008.
[6] Y. Yang and R. Kravets, "Contention-aware admission control for ad hoc networks," IEEE Transactions on Mobile Computing, vol. 4, no. 4, pp. 363-377, 2005.

Fig. 3. Effect of number of videos on PatchPeer and Patching: (a) 1 video, (b) 30 videos, (c) 100 videos. Each plot shows the number of accepted requests versus simulation time (hours) for PatchPeer and Patching.

Fig. 4. Effect of request arrival rate on PatchPeer and Patching: (a) λ = 0.05, (b) λ = 0.1, (c) λ = 0.3.

Fig. 5. Effect of video length on PatchPeer and Patching: (a) average length = 10 minutes, (b) average length = 30 minutes, (c) average length = 60 minutes.

Fig. 6. Effect of mobility on PatchPeer with different buffer sizes: (a) buffer size = 20 minutes, (b) buffer size = 40 minutes, (c) buffer size = 60 minutes. Each graph plots the number of accepted requests versus simulation time (hours) for mean speeds of 0, 1, 5, 10, and 15 m/s.
