Admission Control and QoS for Continuous Media Displays in MANETs

Shahram Ghandeharizadeh
Department of Computer Science
University of Southern California
Los Angeles, CA 90089, USA
Email: [email protected]

Shyam Kapadia
Department of Computer Science
University of Southern California
Los Angeles, CA 90089, USA
Email: [email protected]

Bhaskar Krishnamachari
Department of Computer Science, Department of Electrical Engineering
University of Southern California
Los Angeles, CA 90089, USA
Email: [email protected]

Abstract— We consider continuous media delivery over a mobile ad-hoc network of vehicles equipped with car-to-car peer-to-peer (C2P2) devices. While the provision of high-bandwidth continuous media content with tight QoS requirements is challenging even in static networks, it is considerably more challenging when the network topology is dynamic due to node mobility. In this paper, we develop a unified client-centric, distributed admission control framework for such a C2P2 network. Under this framework, we develop several admission control strategies, namely, server count-based admission (SC), server bandwidth-based admission (SB), path bandwidth-based admission (PB), and finally a mobility-based admission (MAD [4]) policy, which can be further enhanced using a second-level sampling-based admission policy. We also develop QoS utility models to quantify the performance of these policies. These policies are then evaluated through extensive experimental simulations. Our results show conclusively that traditional admission control strategies do not work, and that MAD can provide orders of magnitude improvement when compared with no admission control.

I. INTRODUCTION

During the past few years, automobile manufacturers have been marketing and selling vehicles equipped with entertainment systems. These systems typically consist of a DVD player, a fold-down screen, a video game console, and wireless headphones. In their present form, storage and content are tied together. This limits the number of available titles to those DVDs and CDs in the vehicle. We envision a separation of storage and content where content is staged on demand across the available storage for previewing. This would provide passengers access to a large repository of titles. In this vision, vehicles are equipped with car-to-car, peer-to-peer (C2P2) devices which form a mobile ad-hoc network (MANET). Each C2P2 is equipped with an abundant amount of storage, a processor, and a wireless networking card. A C2P2 might integrate into the navigation system of a vehicle and its existing network for delivery of data to the on-board fold-down screen or wireless headphones. Different C2P2 devices might store different clips and exchange these clips with one another to support on-demand delivery of continuous media.

The principal characteristic of continuous media such as video and audio is their high sustained bit rate requirement. If a system delivers a clip at a rate lower than its pre-specified rate without special precautions (e.g., pre-fetching), the user might observe frequent disruptions and delays with video and

random noises with audio. These artifacts are collectively termed hiccups. A hiccup-free display is the ideal quality of service (QoS) provided to an end user. A second QoS criterion is the observed startup latency, defined as the delay from when a user references a clip to the onset of its display.

All C2P2 devices may replicate popular titles, but they must collaborate in order to provide users with a large choice of content. Each C2P2 may contribute a fraction of its storage in a peer-to-peer manner to be occupied by the system-assigned clips. These clips might be referenced by users of those mobile C2P2 devices that are network reachable (including the local user). This study focuses on the QoS metrics when a user references a clip that is available on one or more remote C2P2s. A C2P2 must offer its passengers only those clips that it can download and play in a hiccup-free manner within a reasonable amount of time (e.g., by satisfying a maximum tolerable startup latency constraint). Techniques that address these challenges are impacted by the characteristics of the MANET, placement and delivery scheduling of data, and admission control policies in support of a hiccup-free display. Let us consider each in turn.

Mobility, a key MANET characteristic, is the primary challenge because it dictates the lifetime of paths between a producer and a consumer of data. When the display of data is overlapped with its delivery, a path repair may incur delays that result in data starvation and hiccups. In addition, topology changes impact the availability of both data [6] and bandwidth. The availability of data is also impacted by its placement across the nodes and its degree of replication [7]. One may replicate data at the granularity of either a clip [3] or a block [5]. The system may replicate the first few blocks of a clip more frequently. This enables the display of a remote clip to consume the clip's first few blocks from either local storage or C2P2 devices that are a few hops away, minimizing startup latency. Techniques to schedule delivery of data impact the QoS observed by active displays. As we shall demonstrate (see Appendix), content replication across multiple servers and block-switching techniques provide a significant degree of protection against the frequent disruptions and topological changes caused by the mobile environment, by dynamically providing connections to proximate servers.

Finally, given the high data-rate requirements of continuous

media displays and limited bandwidth resources, the system must be configured with an admission control policy to handle multiple simultaneous clip downloads. In particular, the admission control policy must strive to do a "good job" of rejecting requests for downloads that are unlikely to be satisfied within a reasonable amount of time. Section III formalizes and quantifies this qualitative notion by providing utility models for QoS that take into account the number of rejected requests, the number of admitted requests that are successful, and the number of admitted requests that fail to meet a specified startup latency constraint. Compared to traditional admission control in static environments, there are some unique challenges in the C2P2 MANET environment, where the network topology is variable. For one, the admission control must be performed in a distributed manner because there is no central coordination point for the network. Further, it is hard to estimate the resources that will be available for a clip download, since the topology will change during the duration of the download.

The key contribution of this paper is the design and analysis of such simple admission control policies using C2P2/MANET simulation studies. Even though these techniques are simple, they capture significant components that would be part of making an appropriate admission control decision. The policies consider components such as the number of available servers, the available bandwidth at each server, the average time for which a server is in the range of the client, and the bottleneck bandwidth along the path (the traditional approach used in wired networks). A more powerful and efficient admission control scheme can be designed by appropriately weighing each of these components and then combining them. This study serves as a first step toward this goal by evaluating policies that consider each of the components individually to make the admission control decision and then observing the performance in terms of several utility models.

Recent studies addressing QoS in MANETs are as follows. A survey of QoS in MANETs is provided by [14], [11], [17]. These studies either build on top of existing routing protocols [18], [12] such as DSR and AODV, or integrate the QoS metrics as part of the routing protocol [10], [16], [19]. There have been studies that support both QoS and multicast routing [1], [13]. Admission control is a well-known problem in wired networks [9]. However, little work has been done for wireless ad hoc networks. In this paper we use the conventional definition of admission control with simple policies to study the number of rejected, satisfied, and unsatisfied requests in a MANET (C2P2 network).

The rest of this paper is organized as follows. The importance of data placement and retrieval scheduling policies for single-client display scenarios is demonstrated in the Appendix. Next, Section II formulates the admission control problem, and presents a general framework and several specific policies to address this challenge. Section III evaluates these policies using a simulation study on the basis of utility-based QoS models. Our key findings are that the use of intelligent admission control policies can significantly reduce

the number of admitted unsatisfied requests. Moreover, a mobility-prediction based admission policy that we propose, termed MAD, can provide an order of magnitude improvement over a naive approach that admits all requests. Our conclusions and future research directions are contained in Section IV.

II. ADMISSION CONTROL

Section II-A formalizes admission control as the process of admitting those requests that are able to download a fixed-size file within a specified duration of time. Section II-B develops a framework for the admission control policies. Finally, Section II-C presents four alternative policies for this framework.

A. Introduction and Problem Statement

While it is possible to overlap the delivery and display of continuous media clips using buffering, this strategy runs the risk of incurring hiccups if network resources become temporarily unavailable during delivery. In this work, we focus our attention on the delivery of clips that will be played only after the entire clip is downloaded. We assume that each request specifies the file size of the clip to be downloaded and a maximum download time (i.e., startup latency). We formulate the admission control problem as follows: A request to display a clip X comes with a specification that all blocks of X be downloaded to the requesting C2P2 (denoted as C2P2_c) within δ time units. Note that δ (the download time or the startup latency) can be significantly smaller than the playback time of the clip¹. For example, an audio clip with a display time of 12 minutes might specify δ as 1 minute. This means that all blocks of this clip must be materialized in one minute in order for a request referencing X to be admitted to the system.

We shall consider two flavors of admission control: instantaneous and delayed. In instantaneous admission control, the decision to either reject or admit a request must be made almost immediately after the request is issued. In delayed admission control, the admission decision must be made within ε time units, where ε < δ. An admitted request is termed a failed request if the system fails to materialize all blocks of clip X in δ time units. Thus, assuming N requests are issued to a system, an admission control policy yields three classes of requests. First, a fixed number of admitted and rejected requests, termed N_a and N_r, respectively. Note N = N_a + N_r. Second, some of the admitted requests might fail, denoted as N_f, N_f ≤ N_a. And, finally, some of the admitted requests are serviced successfully, denoted as N_s, where N_a = N_f + N_s. An admission control policy should strive to maximize both N_a and N_s (ideally, to a maximum of N). By maximizing the total number of admitted requests (N_a), a larger percentage of requests are processed by the C2P2 MANET. By maximizing N_s, a larger percentage of admitted requests are processed successfully. In Section III, we will develop QoS models that define a policy's utility as a weighted combination of these raw metrics.

¹The terms playback and display are used interchangeably in this paper.



B. An Admission Control Framework

We now propose a framework for solving the admission control problem assuming an ad-hoc network (i.e., no centralized base stations as in a cellular wireless network). We focus on distributed, client-centric admission control policies that enable the client C2P2 device to either admit or reject a request. For the rest of this paper, C2P2_c denotes a device whose user has requested the display of a title. In order to make an intelligent admission decision, the admission control component of C2P2_c must first obtain information about the state of the network and the availability of resources. This information is then weighed against the specifications of the request in order to determine if the request can be admitted. The admission control policy may consider several factors such as the number of servers, the availability of residual bandwidth at the servers, the bottleneck bandwidth on the network path to each server, and whether it is possible to predict from the mobility pattern of the cars that they might be able to collaborate to deliver the requested clip. In this paper, we design and evaluate policies that consider each of the above mentioned factors individually. A meta-policy may combine these policies by assigning each an appropriate weight.

In a mobile ad-hoc network such as C2P2, knowledge of the current bandwidth on the path between a client and a server is insufficient to determine if a given request can be satisfied. This is because the available bandwidth changes dynamically and might be either greater or lower in the future (due to unpredictable mobility-induced changes in the network structure and network traffic). In order to contend with the dynamic nature of a C2P2 network, flexible but simple admission control policies must be designed that consider different kinds of information about the current state of the network, and try to predict/estimate based on this information whether the required resources for a given request can be provided.

In our admission control framework, we develop two kinds of metrics: a Request Metric (RM) and an Information Metric (IM). The request metric RM (which is normalized to be a number between 0 and 100) is a measure of the amount of resources required for a given request. Hence it depends on the QoS parameter of interest. In particular, we define RM as the ratio of the requested bandwidth over the nominal maximum wireless link bandwidth BW_max as follows:

RM = \frac{size(X)/\delta}{BW_{max}} \times 100    (1)

where size(X) is the size of the clip in megabits and BW_max is the nominal maximum link bandwidth in Mbps. While RM is not policy-dependent, the information metric IM is, and it represents quantitatively the information obtained by the client about the current and future resources available in the network. The information metric is also normalized to be a number between 0 and 100. IM is monotonic in the estimated resource availability: the higher the value of IM, the more likely it should be that a given request can be satisfied. Both RM and IM have no units. Now the admission control decision

is simply a matter of comparing the two metrics, RM and IM, and deciding whether the given IM predicts sufficient availability of resources for the request with metric RM. A simple yet flexible form for the admission control policy is simply to take the difference between IM and RM and compare it with a threshold Θ. When IM − RM ≥ Θ, a policy admits the corresponding request. Otherwise the request is rejected. Note that, based on our definitions, the threshold can potentially take on values from −100 (which corresponds to allowing all requests) to 100 (which corresponds to rejecting all requests). Thus, if Θ is too low, there is a danger of a greater number of unsatisfied requests due to bandwidth contention. On the other hand, if it is too high, the number of satisfied requests will be low because too few requests are accepted. The optimal choice of threshold may be an intermediate value and could be scenario-dependent, as we shall see in our experiments. Of course, the optimal choice of threshold is also policy-dependent, since IM is defined differently for each policy.

When a request is issued by a client, in instantaneous mode, a policy must either accept or reject the request using the threshold computation. With the delayed approach, the admission control policy may analyze the rate of data flow for ε time units, where ε < δ. A request is admitted if: (1) a policy admits this request initially, and (2) the average bandwidth observed during ε is at least size(X)/δ. This approach is advantageous because it recognizes that dynamic environmental changes may affect delivery of X in either a positive or a negative manner. A positive change is one where a server becomes network reachable due to mobility. The reverse of this is a negative change, where a server is no longer network reachable because of mobility. Obviously, as ε approaches δ, the admission control is provided with more time to make a decision, reducing the number of admitted requests that fail (minimizing N_f). However, a large ε may be quite undesirable for several reasons. First, it might delay a user longer than necessary prior to rejecting this user. Second, it might waste network bandwidth by allocating resources to a request that cannot be satisfied.

In the above scenario, a request rejected by an initial instantaneous decision might also be delayed ε time units to enable the system to see if changes in the environment during those ε time units may facilitate admission of this request. In this case, the client tries to service the rejected request and observes the amount of available bandwidth during ε time units. Assume β units of data arrive during this period, 0 ≤ β ≤ size(X). After ε time units, it invokes the admission control policy once again with size(X) − β. If the request is admitted, then it proceeds to service this request. Otherwise, the request is rejected. This approach suffers from the following limitation: when bandwidth is scarce, a request that should be rejected might compete for the available bandwidth with other active displays, causing admitted requests to fail (reducing N_s). We do not evaluate this approach in our simulations.

Content replication and switched proximate server selection are important for robust delivery of high-rate content (see Appendix). In all our admission control policies, the client always chooses the closest server from the candidate servers. When C2P2_c's path to a server breaks at time t, C2P2_c might have downloaded β bytes of a clip X, 0 ≤ β ≤ size(X). The bandwidth required to download the remainder of the clip is (size(X) − β) / (δ − t). If this bandwidth exceeds the wireless network bandwidth, the request is discarded as a failed request. Otherwise, C2P2_c identifies another reachable server and begins downloading the remainder of the clip from that server.
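To make the framework concrete, the following sketch illustrates the threshold test described above, together with the residual-bandwidth check applied when the path to the current server breaks. It is a minimal illustration in Python, not code from the paper; the function names are ours, and the policy-specific computation of IM is left as an input.

def request_metric(clip_megabits, delta_s, bw_max_mbps):
    # RM (Eq. 1): requested bandwidth as a percentage of the maximum link bandwidth.
    requested_mbps = clip_megabits / delta_s
    return 100.0 * requested_mbps / bw_max_mbps

def admit(im, rm, theta):
    # Generic admission rule: admit when IM - RM >= Theta.
    return (im - rm) >= theta

def delayed_admit(im, rm, theta, observed_mbps, clip_megabits, delta_s):
    # Second-level (delayed) check: the bandwidth sampled during epsilon must be
    # able to sustain the required download rate size(X)/delta.
    return admit(im, rm, theta) and observed_mbps >= clip_megabits / delta_s

def remaining_feasible(clip_megabits, downloaded_megabits, t_elapsed_s, delta_s, bw_max_mbps):
    # When the path to the current server breaks at time t, the residual bandwidth
    # needed is (size(X) - beta) / (delta - t); if it exceeds the wireless network
    # bandwidth, the request is discarded as a failed request.
    if t_elapsed_s >= delta_s:
        return False
    needed_mbps = (clip_megabits - downloaded_megabits) / (delta_s - t_elapsed_s)
    return needed_mbps <= bw_max_mbps

rm = request_metric(clip_megabits=240.0, delta_s=60.0, bw_max_mbps=10.0)   # RM = 40
print(admit(im=25.0, rm=rm, theta=-20))                                    # True: 25 - 40 >= -20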

C. Admission Control Policies

This section details four policies for our proposed framework.

1. Server Count (SC): The server count-based admission control policy counts the number of C2P2 devices that are within h hops of C2P2_c, denoted as N_t. Next, it identifies and counts the number of these devices that contain the referenced clip X, denoted by N_X. The IM value for SC is the density of these servers:

IM_{SC} = \frac{N_X}{N_t} \times 100    (2)

The main strength of this policy is its simplicity: it might be implemented using the following probe-based approach. C2P2_c sends a controlled flood within h hops. When a node receives a probe, it replies to C2P2_c with its identity and whether it contains clip X. Based on these replies, C2P2_c evaluates N_X and N_t. C2P2_c may wait for τ time units prior to rendering a decision. The value of τ might be dictated by the duration of time required for 2h network transmissions: h transmissions to reach nodes that are h hops away, and another h transmissions for their replies to arrive at C2P2_c. This policy suffers from several weaknesses. First, in instantaneous mode, this policy ignores bandwidth limitations and fails to detect scenarios where all servers might be busy processing other requests. In delayed mode, this policy is provided with some measure of available bandwidth. Second, this policy ignores mobility, where the servers might be moving away from C2P2_c. Once again, the delayed mode of admission control provides this policy with some measure of this parameter.

2. Server Bandwidth (SB): This policy considers the available bandwidth of each server, requiring each probe to return the available bandwidth BW(s_i) of each server s_i containing the referenced clip. Note that the server of choice for C2P2_c may change over the duration of the clip download (δ). Since each server s_i is a potential candidate and has some probability of being chosen over the others, the IM metric considers the average available bandwidth across all the servers s_i. SB defines IM as:

IM_{SB} = \frac{\sum_{i=1}^{N_X} BW(s_i)/BW_{max}}{N_t} \times 100    (3)

where BW_max is the maximum network bandwidth available to a C2P2 device. By using N_t in the denominator, this policy considers the fraction of nodes that may act as servers. Similar to SC, SB is also relatively simple to implement using a local probe.
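As an illustration, the following sketch computes the SC and SB information metrics from a list of probe replies. It is a minimal sketch, not code from the paper; the reply structure (node identity, whether the node stores clip X, and its reported available bandwidth) is assumed from the probe description above.

from dataclasses import dataclass

@dataclass
class ProbeReply:
    node_id: int
    has_clip: bool          # whether this node stores the referenced clip X
    avail_bw_mbps: float    # available bandwidth reported by the node

def im_server_count(replies):
    # SC (Eq. 2): density of servers among all nodes within h hops.
    n_t = len(replies)
    n_x = sum(1 for r in replies if r.has_clip)
    return 100.0 * n_x / n_t if n_t else 0.0

def im_server_bandwidth(replies, bw_max_mbps):
    # SB (Eq. 3): normalized available server bandwidth, averaged over all probed nodes.
    n_t = len(replies)
    total = sum(r.avail_bw_mbps / bw_max_mbps for r in replies if r.has_clip)
    return 100.0 * total / n_t if n_t else 0.0

replies = [ProbeReply(1, True, 6.0), ProbeReply(2, False, 9.0), ProbeReply(3, True, 4.0)]
print(im_server_count(replies))              # 66.7: 2 of the 3 probed nodes hold the clip
print(im_server_bandwidth(replies, 10.0))    # 33.3: (0.6 + 0.4) / 3 * 100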

3. Path Bandwidth (PB): This policy is similar to the traditional admission control and bandwidth reservation approach used in wired networks, and considers the bottleneck bandwidth on the path between the client and each server within h hops. This policy only considers the instantaneous bottleneck bandwidth along the shortest path from the client to each server s_i, averaged over the servers. These estimates will rapidly become stale as the nodes move and the paths between C2P2_c and the servers change. Formally, this policy considers the available bandwidth of the most utilized node, BW(b_i), that lies on the multi-hop path between each server s_i and the client C2P2_c:

IM_{PB} = \frac{\sum_{i=1}^{N_X} BW(b_i)/BW_{max}}{N_t} \times 100    (4)

To implement this policy, a probe accumulates the available bandwidth of the nodes on its path back from a server to C2P2_c. C2P2_c identifies the intermediate node with the least available bandwidth as b_i. This policy does not consider the dynamic evolution of the network, where b_i changes over time. In static scenarios, PB improves upon SB because it considers scenarios where the servers have sufficient bandwidth but the intermediate hops are data-starved. However, as we shall see, this is not necessarily a good strategy in MANETs. Note that in all the policies SC, SB, and PB, the probes constitute the control traffic, which is negligible in comparison to the amount of data that is exchanged between the client and the servers.

4. Mobility Prediction (MAD): The mobility-based admission control (MAD) policy tries to leverage the mobility pattern in the network by estimating the duration of time a server s_i will be in the radio range of the requesting client C2P2_c. This depends on both the direction and speed of s_i and C2P2_c. C2P2_c sends out a probe message with a certain time to live to obtain the list of proximate servers s_i. Assuming T(s_i, C2P2_c) denotes the duration of time s_i and C2P2_c are expected to be in radio range, MAD defines IM as:

IM_{MAD} = \frac{\sum_{i=1}^{N_X} T(s_i, C2P2_c)}{N_t \cdot \delta} \times 100    (5)

Here, N_t is the total number of nodes reachable from the client via the probe. Note that both the numerator and denominator are in units of time. This policy only considers the time that a server is within radio range of C2P2_c. This is conservative because it does not consider the time the servers are reachable via multiple hops. Also, this policy does not consider the available bandwidth at each server. We considered a hybrid policy that combines MAD with SB, taking into account both the mobility of the servers and the available bandwidth at each server. However, we have not presented the results for such a policy here.
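The following sketch illustrates one way a client could estimate T(s_i, C2P2_c) and combine the estimates into IM_MAD. It is a minimal illustration of the idea, not the paper's implementation: it assumes each probe reply carries the server's position and velocity, that both vehicles keep a straight-line course at constant speed, and that each in-range estimate is capped at δ.

import math

def time_in_range(pos_c, vel_c, pos_s, vel_s, radio_range):
    # Estimate how long the server stays within radio_range of the client, assuming
    # straight-line, constant-velocity motion (positions in meters, velocities in m/s).
    px, py = pos_s[0] - pos_c[0], pos_s[1] - pos_c[1]   # relative position
    vx, vy = vel_s[0] - vel_c[0], vel_s[1] - vel_c[1]   # relative velocity
    a = vx * vx + vy * vy
    b = 2 * (px * vx + py * vy)
    c = px * px + py * py - radio_range ** 2
    if c > 0:
        return 0.0                      # already out of range
    if a == 0:
        return float("inf")             # no relative motion: stays in range
    disc = b * b - 4 * a * c
    return (-b + math.sqrt(disc)) / (2 * a)   # positive root of |p + v*t| = R

def im_mad(times_in_range, n_total, delta):
    # IM_MAD (Eq. 5): in-range times of the servers, normalized by N_t * delta.
    capped = [min(t, delta) for t in times_in_range]    # assumption: cap each estimate at delta
    return 100.0 * sum(capped) / (n_total * delta)

# A client moving right at 5 m/s and a server moving left at 5 m/s, 200 m apart, 300 m range:
t = time_in_range((0, 0), (5, 0), (200, 0), (-5, 0), 300)   # leaves range after 50 s
print(t, im_mad([t], n_total=4, delta=60))                  # 50.0  20.83...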

III. PERFORMANCE EVALUATION

In this section, we quantify the performance trade-offs associated with the alternative policies. We conducted many experiments with different parameter settings. In order to summarize the lessons learnt, we developed utility models that summarize the QoS observed with each policy in a single number.

These models are presented in Section III-A. Section III-B presents our experimental environment, which considers both stationary and mobile servers. Section III-C demonstrates the superiority of MAD over an environment without admission control. Finally, Section III-D demonstrates that MAD typically outperforms SC, SB, and PB. This section also includes an in-depth analysis of MAD with different thresholds and system loads.

A. Utility models for QoS

This section describes three utility models used to quantify the QoS provided by the different admission control policies. Recall that an admission control policy divides the total number of requests into three groups: N_r rejected requests, N_f requests that are admitted but fail (due to unsatisfied requirements), and N_s requests that are admitted and succeed in providing the clip within the specified deadline. Different users may value each of these differently, as per their utility model. For the purpose of evaluating our experimental results we consider three distinct utility models that are representative of different levels of Quality of Service that can be provided in a C2P2 ad hoc network. In each model M, we define the joint utility as a weighted sum:

U_M = w_r(M) \cdot N_r + w_s(M) \cdot N_s + w_f(M) \cdot N_f    (6)

where a weight w_x(M) indicates the value of component x in model M. Table I shows the weights for the three QoS models employed to evaluate the experimental results. The Economy model does not penalize a policy for either rejected or failed requests and only rewards admitted requests that are served successfully. The Standard model is indifferent to rejections, but penalizes failed admitted requests as much as it rewards successfully served admitted requests. The Premium model resembles a high QoS environment. It penalizes rejected requests (albeit slightly), rewards admitted requests that are satisfied, and highly penalizes admitted requests that fail. The high penalty for a failed admitted request reflects our intuition that users are likely to be annoyed if their C2P2 device initiates a download and spends a considerable amount of time (maybe δ time units) only to discover that it cannot display the clip because an incomplete file was downloaded. While these utility models can be applied on a per-user basis and different QoS levels can be integrated within the same system in practice, for the purposes of analyzing our experimental results and comparing the different admission control policies, we shall apply each QoS utility model to the system as a whole for the duration of the experiment.

TABLE I
THREE UTILITY MODELS TO QUANTIFY QUALITY OF SERVICE WITH ALTERNATIVE POLICIES.

Model    | Value of a rejected request, w_r(M) | Value of a successfully admitted request, w_s(M) | Value of a failed admitted request, w_f(M)
Economy  | 0    | 1 | 0
Standard | 0    | 1 | -1
Premium  | -0.1 | 1 | -5
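As a concrete illustration of Equation (6) and Table I, the short sketch below evaluates the three utility models for a hypothetical outcome of 100 requests; the numbers are made up for illustration and are not results from the paper.

# Weights (w_rejected, w_satisfied, w_failed) from Table I.
MODELS = {
    "Economy":  (0.0, 1.0, 0.0),
    "Standard": (0.0, 1.0, -1.0),
    "Premium":  (-0.1, 1.0, -5.0),
}

def utility(model, n_rejected, n_satisfied, n_failed):
    # Joint utility of Equation (6): a weighted sum of the three request classes.
    w_r, w_s, w_f = MODELS[model]
    return w_r * n_rejected + w_s * n_satisfied + w_f * n_failed

# Hypothetical outcome: 100 requests split into 30 rejected, 60 satisfied, 10 failed.
for m in MODELS:
    print(m, utility(m, n_rejected=30, n_satisfied=60, n_failed=10))
# Economy 60.0, Standard 50.0, Premium 7.0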

B. Experimental Setup

The results presented in this section are based on a new simulator for C2P2 networks written in the C# programming language². The simulator models road stretches and cars that navigate these stretches. A car might be configured with a C2P2 device that provides a fixed amount of network bandwidth. Each C2P2 device implements the DSR [8] routing policy. However, we believe that the choice of routing protocol does not impact the trends in the observed results; a proactive protocol such as DSDV would have shown similar results. We analyzed different scenarios consisting of different numbers of road-stretches, different numbers of C2P2-equipped cars, mobility patterns, and request specifications.

Our basic experiment consists of thirteen bi-directional road-stretches, numbered from 0 to 12; see Figure 1. Three stationary servers (these could be parked vehicles, access points, or base stations) are located on road-stretches 1, 6, and 11, and are numbered s0, s1, and s2, respectively. The radio range of each C2P2 spans 3 road-stretches: its current road stretch and its two adjacent road stretches. Thus, a C2P2 on either road-stretch 0 or 2 is in the radio range of s0, which is on road-stretch 1. Note that there exist four road stretches that are dark because they are not 1-hop reachable by a server. However, given a sufficient number of clients and transitive routing of packets across these clients, these dark road stretches may light up as they become network reachable to either one or two servers. We analyzed configurations consisting of 13 possible client C2P2 nodes (in addition to the 3 server nodes), with the number of simultaneous active displays ranging from 1 to 13 in our experiments. Initially, all cars are assigned to the road-stretches such that they are evenly spaced across the road-stretches. For example, with 13 cars, one car is assigned to each road-stretch. The initial direction of each car is chosen randomly (i.e., to move left or right). The speed of each car is fixed at 5 meters per second. Once a car reaches the end of either road stretch 0 or 12, it switches directions and moves toward the opposite end.

²We initially considered using the ns-2 simulator that is widely used in the MANET community for these simulation experiments, but found that significant programming extensions to ns-2 would be required to implement the kinds of online per-client dynamic admission-control policies we are interested in evaluating. The simulator we have written is designed to incorporate such policies with ease and provides for much faster simulations. We plan to make the simulator available in the near future for use by others investigating admission control.

Fig. 1. Thirteen road stretches with 3 stationary servers.

Within each configuration, the cars that are requesting clips (corresponding to the number of active displays) are chosen randomly, and they re-issue requests periodically every 60 seconds. A total of 100 requests are issued in each experiment. This means that with 10 active displays, each client issues 10 requests on average. All three servers are assumed to contain a replica of the clips being requested. Rejected requests do not consume any network bandwidth and are terminated immediately. Once a request is admitted to the system, it downloads a clip for 60 seconds. The maximum network bandwidth of a C2P2 device is assumed to be 10 Mbps. The referenced clip is a media clip with a bandwidth requirement of 340 Kbps. The display time of each clip is fixed at 12 minutes. We require the clip (size 30 MB) to be downloaded in 60 seconds, resulting in a download bandwidth requirement of 4 Mbps.

To analyze the impact of server mobility, we modify this basic experimental setup to include scenarios where all servers are also mobile. We also analyzed the impact of load in two ways: first, by varying the number of simultaneous clients issuing requests, termed active displays; second, by changing the download bandwidth requirement to 8 Mbps, corresponding to a clip size of 60 MB and a media playback time of 24 minutes at a 340 Kbps rate.
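The sketch below reproduces the arithmetic behind this workload: the clip size follows from the playback rate and display time, and the download bandwidth requirement and request metric RM follow from the 60-second deadline. It is a small illustrative calculation, not part of the simulator.

# Basic workload of the experimental setup.
playback_kbps   = 340          # clip playback rate
display_minutes = 12           # clip display time
deadline_s      = 60           # required download time (delta)
bw_max_mbps     = 10           # maximum bandwidth of a C2P2 device

clip_megabits = playback_kbps * display_minutes * 60 / 1000    # ~244.8 Mb (about 30 MB)
download_mbps = clip_megabits / deadline_s                     # ~4.1 Mbps, quoted as 4 Mbps
rm = 100.0 * download_mbps / bw_max_mbps                       # request metric RM ~ 40

print(round(clip_megabits / 8, 1), round(download_mbps, 1), round(rm, 1))
# 30.6 (MB)  4.1 (Mbps)  40.8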

C. A Case for Admission Control

We start with experimental results that justify the use of intelligent admission control policies in an ad-hoc network of C2P2 devices. Figure 2 shows the utility of MAD (in instantaneous mode of operation) with the alternative models when compared with an environment that does not utilize an admission control policy. Figure 3 shows the number of rejected, satisfied, and unsatisfied requests for both MAD and the no-admission-control case. (As we shall show in Section III-D, MAD is generally a superior admission control policy.) Without admission control, the simulator admits a request upon its arrival, independent of server availability. Figure 2 shows the performance of MAD for two different thresholds: -20 and -40. The x-axis of this figure shows the number of concurrent clients that initiate the display of a clip, termed the number of active displays. This controls the load in the environment and is increased from 1 to 13. The y-axis of this figure is the utility of each model. We present results for all models shown in Table I. Each presented data point is an average of results obtained from 10 experiments utilizing different seeds. (The random seed impacts the order and identity of clients issuing requests.)

With 3 stationary servers and a maximum wireless bandwidth of 10 Mbps per C2P2 device, our experimental environment supports a total bandwidth of 30 Mbps. All active clients issue their requests at the same time. A total of 100 requests are issued by all clients in this environment. This implies that with two or fewer active displays, if there were no dark regions and no unpredictability due to mobility, all 100 requests would be satisfied even without admission control. Figure 3.b shows that only one half of the requests are served successfully with 2 or fewer active displays. This is because of mobility and dark regions, which cause an active client to starve for data, failing to download a clip in δ time units. This is reflected in the number of unsatisfied requests admitted to the system; see Figure 3.c.

A primary observation from Figures 2 and 3 is that an environment with no admission control is clearly inferior to Instantaneous-MAD with Θ = -40. With the Premium model, MAD shows a much better performance compared to the no-admission-control environment. With the Standard utility model, MAD remains superior. With the Economy model, however, no admission control outperforms MAD when Θ = -20. This threshold renders MAD too conservative, forcing it to admit too few requests. While all these requests are processed successfully (see Figure 3.c, where N_f = 0 with Θ = -20), MAD rejects requests that could have been processed successfully. With a more relaxed threshold (-40), these requests are admitted into the system, enabling MAD to outperform the environment with no admission control when using the Economy model. Note that the utility of MAD with Θ = -20 is a constant positive value with the Economy, Standard, and Premium models. When Θ = -40, MAD's utility drops as a function of load because the number of unsatisfied requests increases with this threshold; see Figure 3.c.

D. Detailed Experimental Results

Tables II, III, and IV show the utility of each model in both Instantaneous and Delayed mode. Each table summarizes results for a specific model: Economy, Standard, and Premium. A table reports the maximum observed utility for each policy and the corresponding threshold at which this maximum is realized. We employ a set notation to show threshold values because the same maximum might have been observed for several Θ values. Two different system loads are presented: first, a light system load where 2 random clients issue requests at a time. Even if both reference different clips from the same server, the available network bandwidth (10 Mbps) accommodates both requests, allocating a total of 8 Mbps. Second, a high system load where 10 random clients issue requests simultaneously, requiring 40 Mbps and exhausting the available bandwidth. (With 3 servers, a maximum bandwidth of 30 Mbps is supported by the system, assuming an even distribution of active displays across the clients.) With each load, a total of 100 requests are issued to the system.

TABLE II
A COMPARISON OF ALTERNATIVE ADMISSION CONTROL POLICIES WITH THE ECONOMY MODEL, A DOWNLOAD BANDWIDTH REQUIREMENT OF 4 MBPS, AND 16 C2P2 DEVICES: 3 STATIONARY SERVERS AND 13 MOBILE CANDIDATE CLIENTS. ENTRIES SHOW THE MAXIMUM OBSERVED UTILITY (MAXIMIZING THRESHOLD IN PARENTHESES).

Policy     | 2 Active, Instantaneous | 2 Active, Delayed (ε = 15 Sec) | 10 Active, Instantaneous | 10 Active, Delayed (ε = 15 Sec)
Admit-All  | 43.00 (N/A)             | 42.00 (N/A)                    | 22.00 (N/A)              | 18.00 (N/A)
Reject-All | 0 (N/A)                 | 0 (N/A)                        | 0 (N/A)                  | 0 (N/A)
SC         | 45.00                   | 43.00                          | 30.00                    | 22.00
SB         | 45.00                   | 43.00                          | 27.00                    | 19.00
PB         | 45.00                   | 43.00                          | 27.00                    | 19.00
MAD        | 45.00                   | 43.00                          | 29.00                    | 22.00

TABLE III
A COMPARISON OF ALTERNATIVE ADMISSION CONTROL POLICIES WITH THE STANDARD MODEL, A DOWNLOAD BANDWIDTH REQUIREMENT OF 4 MBPS, AND 16 C2P2 DEVICES: 3 STATIONARY SERVERS AND 13 MOBILE CANDIDATE CLIENTS. ENTRIES SHOW THE MAXIMUM OBSERVED UTILITY (MAXIMIZING THRESHOLD IN PARENTHESES).

Policy     | 2 Active, Instantaneous | 2 Active, Delayed (ε = 15 Sec) | 10 Active, Instantaneous | 10 Active, Delayed (ε = 15 Sec)
Admit-All  | -14.00 (N/A)            | 32.00 (N/A)                    | -56.00 (N/A)             | 13.00 (N/A)
Reject-All | 0 (N/A)                 | 0 (N/A)                        | 0 (N/A)                  | 0 (N/A)
SC         | 11.00                   | 32.00                          | 1.00                     | 18.00
SB         | 13.00                   | 32.00 (-101, -80, -60, -40)    | 1.00                     | 13.00 (-101, -80, -60)
PB         | 12.00                   | 32.00 (-101, -80, -60, -40)    | 1.00                     | 13.00 (-101, -80, -60)
MAD        | 18.00                   | 33.00                          | 21.00                    | 21.00

TABLE IV
A COMPARISON OF ALTERNATIVE ADMISSION CONTROL POLICIES WITH THE PREMIUM MODEL, A DOWNLOAD BANDWIDTH REQUIREMENT OF 4 MBPS, AND 16 C2P2 DEVICES: 3 STATIONARY SERVERS AND 13 MOBILE CANDIDATE CLIENTS. ENTRIES SHOW THE MAXIMUM OBSERVED UTILITY (MAXIMIZING THRESHOLD IN PARENTHESES).

Policy     | 2 Active, Instantaneous | 2 Active, Delayed (ε = 15 Sec) | 10 Active, Instantaneous | 10 Active, Delayed (ε = 15 Sec)
Admit-All  | -242.00 (N/A)           | -12.80 (N/A)                   | -368.00 (N/A)            | -14.70 (N/A)
Reject-All | -10 (N/A)               | -10 (N/A)                      | -10 (N/A)                | -10 (N/A)
SC         | -8.90                   | -8.90                          | -8.90                    | -5.40
SB         | -8.90                   | -8.90                          | -8.90                    | -8.90
PB         | -8.90                   | -8.90                          | -8.90                    | -8.90
MAD        | 9.80                    | 9.80                           | 9.30                     | 9.30

The utility shown with the Economy model (see Table II) is the number of satisfied requests. A low system load results in a higher number of satisfied requests with all policies when compared with a high load. This value is maximized by increasing the number of admitted requests, explaining why the Instantaneous mode outperforms the Delayed mode. With the Standard utility model assigning a -1 to each failed admitted request, the Delayed mode of operation becomes superior to Instantaneous; see Table III. This holds true across all policies. Note that SB and

PB are no different from Admit-All in this case. This is reflected by the fact that a threshold of -101 (which admits all requests) is one of the thresholds that maximizes the utility of SB and PB with the Standard model. With the Premium model assigning a -5 to each failed admitted request and a -0.1 to each rejected request, MAD outperforms all other policies; see Table IV. Note that the Delayed mode continues to improve the Admit-All policy. Moreover, a policy that rejects all requests provides a utility comparable to SC, SB, and PB.

Tables II, III, and IV show the superiority of MAD with the Premium and Standard models. With these models, a Delayed mode of operation that samples the available bandwidth for 15 seconds further enhances all policies (including the naive Admit-All policy). In the rest of this section, we analyze the characteristics of MAD as a function of different thresholds with mobile servers, Instantaneous versus Delayed mode of operation, and different loads (both the number of active displays and the bandwidth required during δ).

Fig. 2. A comparison of Instantaneous-MAD with an environment that employs no admission control. The environment consists of 16 C2P2 devices: 3 stationary servers and 13 mobile clients. (2.a: Premium utility; 2.b: Standard utility; 2.c: Economy utility; the x-axis is the number of active displays.)

Fig. 3. Rejected (N_r), successfully admitted (N_s), and failed admitted (N_f) requests with different numbers of active displays and 16 C2P2 devices: 3 stationary servers and 13 mobile clients. (3.a: rejected requests; 3.b: successfully admitted requests; 3.c: failed admitted requests; the x-axis is the number of active displays.)

Figure 4 shows the utility of MAD with the alternative models for both stationary and mobile servers as a function of alternative threshold (Θ) settings. This figure shows the following. First, the utility of all the models improves when the servers are mobile. Second, the utility of all models becomes a fixed constant once Θ exceeds a certain value. Third, the observed trends when comparing the Delayed mode with the Instantaneous mode in a mobile-server environment are similar to those discussed for stationary servers. We now describe the first two observations in turn. (See the discussion of Tables II, III, and IV for an explanation of the third observation.) To explain the first, note that there are no permanent dark regions with mobile servers.

With permanent dark regions (stationary servers), an actively displaying client might travel into one of these regions, where no bandwidth is available for a sustained period of time. Upon exiting such a region, this client attempts to download the remaining portion of the clip at a higher rate. When this rate exceeds 10 Mbps (the maximum network bandwidth), the request is discarded as a failed request because it is impossible to download the remaining portion of the clip in the remaining time. With transient dark regions, as is the case with mobile servers, the likelihood of such scenarios is minimized. In Figure 4 (and Figure 5), MAD rejects all requests for sufficiently large values of Θ. This causes the Standard and Economy models to converge to zero, while the Premium model converges to -10, because each rejected request has a value of -0.1 (see Table I) and there are 100 requests. This explains the second observation.





Fig. 4. Analysis of Instantaneous and Delayed MAD with mobile and stationary servers. The environment consists of 16 C2P2 devices: 3 servers and 13 mobile clients. With mobile servers, the servers and clients move at the same speed. With stationary servers, the servers are parked on road stretches 1, 6, and 11. (4.a: Instantaneous; 4.b: Delayed; the x-axis is the threshold and the y-axis is the utility, with curves for the Premium, Standard, and Economy models under stationary and mobile servers.)

Fig. 5. A comparison of Delayed MAD (ε = 15 Sec) with different load parameters and 16 C2P2 devices: 3 mobile servers and 13 mobile clients. (5.a: 4 Mbps media with 2 and 10 active displays; 5.b: 2 active displays with 4 Mbps and 8 Mbps media; the x-axis is the threshold and the y-axis is the utility.)

Figure 5 compares Delayed MAD with different system loads. While Figure 5.a shows two different numbers of simultaneous displays (2 and 10), Figure 5.b shows two different bandwidth requirements during δ (4 and 8 Mbps). Both figures show a reduced utility for all models with a higher system load. Moreover, the threshold that maximizes the utility of a model changes with the system load. For example, the maximum utility with the Premium model is observed at a different threshold: at -60 with 8 Mbps and at -20 with 4 Mbps; see Figure 5.b. This is also observed with the Standard and Economy models. This again emphasizes that for a given utility model, each policy has an ideal threshold that provides the best performance.

IV. CONCLUSIONS AND FUTURE RESEARCH DIRECTIONS

The primary contribution of this paper is a mobility-based admission control policy (MAD) for delivery of continuous

media in an ad-hoc network of C2P2 devices. Our experimental results demonstrate the following. First, with continuous media, an environment that employs admission control is superior to one without admission control. Second, MAD outperforms traditional admission control policies (designed for wired networks) when the environment is penalized for failing to service admitted requests. Third, with a delayed approach, under heavy load conditions, the performance of all policies, including the naive admit-all approach, improves considerably.

We are currently extending this study in several ways. First, we are developing techniques that enable our framework to determine the appropriate threshold value (Θ) for use with MAD. Based on the experimental results of Section III-D, these techniques estimate the system load and the expected amount of work imposed by a display to determine Θ. Second, we are investigating the use of a deadline-driven data delivery technique as an extension of DSR to further maximize the number of admitted requests that are satisfied with MAD. The discussions of Section III assumed data is delivered from a

server to a client at a rate of size(X)/δ. With the deadline-driven approach, each block is tagged with a lifetime. This enables a server to push data to a client at a rate faster than size(X)/δ when the ad-hoc network is idle. Moreover, deadlines might expedite the delivery of those blocks that are needed more urgently. Finally, we are investigating the design and evaluation of a meta-policy that combines SC, SB, and PB with MAD. This policy assigns a weight to each component in order to consider both available bandwidth and mobility.

V. ACKNOWLEDGMENTS

This research was supported in part by NSF grant IIS-0307908, National Library of Medicine LM07061-01, and an unrestricted cash gift from Microsoft Research.

REFERENCES

[1] B. Bellur, R. Ogier, and F. Templin. Topology broadcast based on reverse-path forwarding. Internet-Draft Version 01, IETF, March 2001. Work in progress.
[2] E. Cohen and S. Shenker. Replication Strategies in Unstructured Peer-to-Peer Networks. In Proceedings of ACM SIGCOMM, August 2002.
[3] S. Ghandeharizadeh and T. Helmi. An Evaluation of Alternative Continuous Media Replication Techniques in Wireless Peer-to-Peer Networks. In Third International ACM Workshop on Data Engineering for Wireless and Mobile Access (MobiDE, in conjunction with MobiCom'03), September 2003.
[4] S. Ghandeharizadeh, T. Helmi, S. Kapadia, and B. Krishnamachari. A Case for a Mobility Based Admission Control Policy. In Proceedings of the 10th International Conference on Distributed Multimedia Systems, September 2004.
[5] S. Ghandeharizadeh, B. Krishnamachari, and S. Song. Placement of Continuous Media in Wireless Peer-to-Peer Networks. IEEE Transactions on Multimedia, April 2004.
[6] Shahram Ghandeharizadeh, Shyam Kapadia, and Bhaskar Krishnamachari. PAVAN: a policy framework for content availability in vehicular ad-hoc networks. In VANET '04: Proceedings of the First ACM Workshop on Vehicular Ad Hoc Networks, pages 57-65. ACM Press, 2004.
[7] Shahram Ghandeharizadeh, Shyam Kapadia, and Bhaskar Krishnamachari. Comparison of Replication Strategies for Content Availability in C2P2 Networks. To appear in MDM '05: Proceedings of the 6th International Conference on Mobile Data Management. ACM Press, 2005.
[8] D. B. Johnson and D. A. Maltz. Dynamic source routing in ad hoc wireless networks. In Imielinski and Korth, editors, Mobile Computing, volume 353. Kluwer Academic Publishers, 1996.
[9] E. Knightly and N. Shroff. Admission control for statistical QoS: theory and practice. IEEE Network, vol. 13, no. 2, pp. 20-29, 1999.
[10] Seoung-Bum Lee, Gahng-Seop Ahn, Xiaowei Zhang, and Andrew T. Campbell. INSIGNIA: An IP-based quality of service framework for mobile ad hoc networks. Journal of Parallel and Distributed Computing, 60(4):374-406, 2000.
[11] P. Mohapatra, J. Li, and C. Gui. QoS in mobile ad hoc networks. IEEE Wireless Communications, pp. 44-52, June 2003.
[12] Elena Pagani and Gian Paolo Rossi. A framework for the admission control of QoS multicast traffic in mobile ad hoc networks. In Proceedings of the 4th ACM International Workshop on Wireless Mobile Multimedia, pages 2-11. ACM Press, 2001.
[13] C. Perkins, E. Royer, and S. Das. Ad hoc on-demand distance vector (AODV) routing. Internet-Draft Version 07, IETF, November 2000. Work in progress.
[14] D. Perkins and H. Hughes. A survey on quality of service support in wireless ad hoc networks. Journal of Wireless Communications and Mobile Computing (WCMC), Special Issue on Mobile Ad Hoc Networking: Research, Trends and Applications, 2(5), pp. 503-513, 2002.
[15] J. D. Salehi, Z. Zhang, J. F. Kurose, and D. Towsley.
Supporting Stored Video: Reducing Rate Variability and End-to-End Resource Requirements through Optimal Smoothing. In Proceedings of the 1996 ACM Sigmetrics Conference, May 1996.
[16] Prasun Sinha, Raghupathy Sivakumar, and Vaduvur Bharghavan. CEDAR: a core-extraction distributed ad hoc routing algorithm. In INFOCOM (1), pages 202-209, 1999.

[17] K. Wu and J. Harms. QoS support in mobile ad hoc networks. Crossing Boundaries - an interdisciplinary journal, Vol 1, No 1, 2001.
[18] H. Xiao, W. Seah, A. Lo, and K. Chua. A flexible quality of service model for mobile ad-hoc networks. In IEEE VTC, 2002.
[19] C. Zhu and M. Corson. QoS routing for mobile ad hoc networks. In INFOCOM, June 2001.

APPENDIX

Before we move on to address the challenges of admission control for multiple simultaneous displays, we first need to understand the basic data placement and delivery scheduling strategies that are needed to provide satisfactory performance even in a single-display scenario. This section focuses on the display of a single clip in a C2P2 network. It shows that content replication and block-level switched retrieval of data from proximate servers provide low-latency delivery of high data-rate content.

The basic principle behind replication and block-switched retrieval is as follows. The clip is replicated on a number of servers and divided into identifiable segments known as blocks. The client is assumed to be able to determine the identity and location of these servers through a resource discovery protocol. (See [2] for a description of techniques to identify nodes with relevant data in an unaddressable wired network.) When the retrieval of a clip is initiated, the client requests the first block from the nearest server. As each successive block is retrieved, this process is repeated, with the client always requesting the information from the nearest server. There are therefore two key parameters of interest: the block size and the number of servers. Intuitively, as the number of servers is increased and the block size is decreased, there is a greater diversity of choices (a higher likelihood that at least one server is always near the client in the network) and greater adaptation to mobility (due to the increased frequency of switching), enabling an improvement in the quality of continuous media retrieval. However, this comes at the cost of greater overhead as the frequency of switching increases; this overhead depends on the block size and the speed of the cars, which together determine the frequency of switching. Note that if the mobility characteristics of the environment lead to frequent changes in the proximate server when a decision is to be made per block, then the benefits of choosing the closest server may become lower than the cost incurred due to the added overhead of extremely frequent switches. In other cases, when the proximate server does not change on a per-block basis, the benefits obtained are substantial and exceed the overhead of the server selection process. (Typically, much less overhead is incurred in probing to find out whether the current server is the "closest" than in switching to a new server, and if these switches are not that frequent then this overhead is also negligible compared to the duration of the data transfer.) We validate this through simulation results but have ignored the switching overhead. Figure 6 shows sample runs of data delivery using block-switched retrieval, tested via an ns-2 simulation of a 20-node MANET with random waypoint mobility.
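The following sketch illustrates the block-switched retrieval loop described above. It is a simplified illustration rather than the paper's ns-2 setup: the per-block hop counts are a hypothetical stand-in for the routing layer, and switching overhead is ignored, as in the simulations.

def block_switched_retrieval(hop_counts_per_block, num_servers):
    # hop_counts_per_block[k][s]: hop distance from the client to server s when block k
    # is requested (None if unreachable). Returns the server chosen for each block.
    schedule = []
    for block_id, hops in enumerate(hop_counts_per_block):
        reachable = [(hops[s], s) for s in range(num_servers) if hops[s] is not None]
        if not reachable:
            schedule.append(None)        # no server reachable: delivery stalls for this block
            continue
        _, nearest = min(reachable)      # switch to the currently closest replica
        schedule.append(nearest)
    return schedule

# Three replica servers; the closest replica changes as nodes move, so the client switches.
hops = [[1, 3, None], [2, 2, 4], [None, 1, 3], [None, 2, 1]]
print(block_switched_retrieval(hops, 3))   # [0, 0, 1, 2]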

Fig. 6. Simulation results of packet arrival times for a sample run involving block-based staggered non-redundant retrieval from 8 servers for two different block sizes (6.a: block size = 1 MB; 6.b: block size = 1 KB); maximum node velocity 30 meters/sec; delivery rate 2 Mbps and playback rate 4 Mbps. Each panel plots packet sequence number against time, marking the packet arrival times, the playback time, server switches, path failures, and the startup latency.

Other simulation parameters are as follows: the area is 200x200 m, the link bandwidth is 10 megabits per second (Mbps), the clip display time is 120 seconds, and the underlying routing protocol is Dynamic Source Routing (DSR) [8]. The vertical lines in Figure 6.a represent times when there is a server switch. There are two sloping curves. The top one shows the packet arrival times, with some small gaps that correspond to path breaks due to mobility. The second curve is a line which represents the rate of data playback as a function of time³. For illustration, we have chosen a playback rate of 4 Mbps in this figure, while the delivery rate is 2 Mbps. The point where the playback time curve intersects the x-axis represents the minimum startup latency for hiccup-free display. When there are greater numbers of path failures, the startup latency deteriorates. When a smaller block size (such as 1 KB) is used, the client refreshes its server connection frequently, there are fewer intra-block path failures, and as a result the corresponding startup latency can be reduced⁴.

³We focus on clips with a Constant Bit Rate (CBR) display requirement. Hiccup-free delivery of Variable Bit Rate (VBR) encoded clips can be conceptualized as a sequence of CBR delivery schedules [15].

⁴Naturally, choosing a smaller block size also increases the number of route setup messages required by the underlying DSR protocol, but this additional overhead is relatively insignificant compared to the large bandwidth of the application data.
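To make the startup-latency notion precise, the sketch below computes the minimum hiccup-free startup latency from a trace of packet arrival times. It is our illustrative formulation, not code from the simulation, and it assumes each packet's bytes become playable at the packet's arrival time.

def min_startup_latency(arrivals, playback_bytes_per_s):
    # arrivals: list of (arrival_time_s, cumulative_bytes_delivered) in arrival order.
    # Playback started at latency L consumes cumulative byte c at time L + c / rate, so
    # hiccup-free display requires L >= a_k - c_k / rate for every packet k.
    rate = float(playback_bytes_per_s)
    return max(0.0, max(a - c / rate for a, c in arrivals))

# Delivery at 2 Mbps (0.25 MB/s) played back at 4 Mbps (0.5 MB/s): after the whole
# 10 MB clip (40 s of delivery), the 20 s playback must start no earlier than t = 20 s.
trace = [(t, 250_000 * t) for t in range(1, 41)]   # 0.25 MB arrives each second
print(min_startup_latency(trace, 500_000))         # 20.0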

Fig. 7. Startup latency and total bandwidth usage (in terms of packet hops) for hiccup-free delivery as a function of the number of servers for low and high block sizes (7.a: startup latency in seconds; 7.b: total number of packet transmissions; max. velocity 30 m/s, delivery and playback rate 4 Mbps) for a single-client scenario.

Figure 7 shows how the startup latency and the total in-network bandwidth usage vary as a function of the number of servers for different block sizes. The first observation is that the startup latency is reduced significantly as the number of servers (i.e., the degree of

replication) is increased. We can see reductions of more than 20 times in some cases by increasing the number of servers from 1 to 12. Figure 7 also shows that the startup latency can be reduced by up to 12 times with a 1 KB block size when compared with a 1 MB block size. These results show order-of-magnitude QoS improvements in the startup latency metric and a significant reduction in bandwidth usage with sufficient replication and block-switched retrievals.

Having addressed the strategies needed for robust single-display downloads, we now turn to the challenges inherent in the download of multiple simultaneous displays. Given the limited bandwidth of wireless channels, the high data rates for continuous media delivery, and the dynamics of the C2P2 MANET topology, intelligent admission control is a must.
