Simple measurement-based admission control for DiffServ access networks

Jani Lakkakorpi (a)
Nokia Research Center
P.O. Box 407, FIN-00045 NOKIA GROUP, Finland

ABSTRACT

In order to provide good Quality of Service (QoS) in a Differentiated Services (DiffServ) network, a dynamic admission control scheme is definitely needed as an alternative to overprovisioning. In this paper, we present a simple measurement-based admission control (MBAC) mechanism for DiffServ-based access networks. Instead of using active measurements only or doing purely static bookkeeping with parameter-based admission control (PBAC), the admission control decisions are based on bandwidth reservations and periodically measured and exponentially averaged link loads. If any link load on the path between two endpoints is over the applicable threshold, access is denied. Link loads are periodically sent to the Bandwidth Broker (BB) of the routing domain, which makes the admission control decisions. The information needed in calculating the link loads is retrieved from the router statistics. The proposed admission control mechanism is verified through simulations. Our results show that it is possible to achieve very high bottleneck link utilization levels and still maintain good QoS.

Keywords: differentiated services, bandwidth broker, measurement-based connection admission control

1. INTRODUCTION

Since fiber cannot be cost-effectively delivered everywhere, the "last mile" in many access networks may often consist of relatively narrow-bandwidth links (e.g., leased lines). DiffServ1 can improve the utilization of these links, but it does not create new bandwidth. Without dynamic admission control, narrow-bandwidth access networks can become heavily congested or seriously underutilized. The former can happen if there is no admission control at all: all non-adaptive applications that need some sort of delay, jitter or packet loss guarantees (e.g., Voice over IP and UDP-based streaming) could suffer. The latter can happen as a consequence of parameter-based admission control (PBAC), which can be too conservative for traffic sources with "loose" QoS requirements and variable bit rates (e.g., a video streaming source with an average bit rate considerably smaller than its peak bit rate). It is possible to do admission control by probing the path with active measurements2-7 or to use an agent called a Bandwidth Broker8-12 that assists in the decision whether a connection is admitted to the network.

This paper is organized as follows: Sections 2 and 3 give brief overviews of connection admission control using active measurements and the Bandwidth Broker approach, respectively. Section 4 presents our measurement-based admission control scheme, while Sections 5 and 6 validate the proposed scheme through simulations. Section 7 concludes the paper and proposes some future enhancements to the presented scheme.

2. ON CONNECTION ADMISSION CONTROL USING ACTIVE MEASUREMENTS

References 2-7 are some of the most recent contributions in the field of measurement-based connection admission control (MBAC). In Refs. 2 and 3 (Breslau et al.), simulations are used to compare the performance of several measurement-based endpoint admission control algorithms proposed in the literature. This literature includes the following papers: Ref. 4 (Elek et al.) proposes a controlled-load service that provides a network state with bounded and well-known worst-case behavior; the basic idea is that a host must probe the path to the receiver before sending the actual data. Ref. 5 (Bianchi et al.) proposes a scheme in conformance with the DiffServ framework, where routers only need to give higher priority to data packets than to probing packets. Ref. 6 (Cetinkaya et al.) introduces a technique called

(a) [email protected]; phone +358 50 4839493; fax +358 7180 36851

egress admission control, where resource management and admission control are performed only at egress routers, without any coordination among backbone nodes or per-flow management. Ref. 7 (Kelly et al.) describes a framework where the admission control decisions are taken by the end-systems. These decisions are based on the results of probe packets that the end-systems send through the network. In all the above designs, the hosts probe the network to detect the level of congestion. Most of these proposals are targeted at the Integrated Services (IntServ) model. Some of them, however, can be used with the Differentiated Services (DiffServ) model as well.

3. BANDWIDTH BROKER APPROACH

3.1 Background

In "A Two-bit Differentiated Services Architecture for the Internet" (RFC 2638, Ref. 8), Nichols et al. have introduced a Bandwidth Broker agent that has the information of all resources in a specific domain. The Bandwidth Broker could (in addition to its other possible duties) be consulted in admission control decisions. This would remove the need to probe the path before sending the actual data. In addition to RFC 2638, the QBone Bandwidth Broker Advisory Council home page9 provides information on Bandwidth Brokers. O. Schelén's Ph.D. thesis10 and several research papers11-13 (included in his thesis) present an admission control scheme where clients can make reservations through Bandwidth Broker agents. For each routing domain, there is a Bandwidth Broker responsible for admission control. This agent maintains information about the reserved resources on each link in its routing domain. The Bandwidth Broker learns the domain topology by listening to OSPF16 messages and the link bandwidths through the Simple Network Management Protocol (SNMP). The proposed architecture provides scalable resource reservations for unidirectional virtual leased lines. Reservations from different sources to the same destination are aggregated as their paths merge toward the destination. If there are enough resources in the destination domain, prefix aggregation can be performed (all reservations for a given destination domain are aggregated as their paths merge). Bandwidth Brokers are responsible for setting up police points at the edges for checking commitments.


Fig. 1: Bandwidth broker agents and their routing domains.

3.2 Motivation for link load measurements

In our opinion, the use of static reservations (parameter-based admission control, "bookkeeping") can leave the network seriously underutilized. This is due to the fact that average bit rates can be substantially lower than the corresponding (requested) peak rates. Link load measurements are needed for more efficient network utilization. Expedited Forwarding (EF)14 and Best Effort (BE) loads have already been mentioned as "required metrics" for the QBone architecture15. Measurements might also be needed because of mobility: static source-destination reservations may not be sufficient in the case of mobile users. In the bookkeeping-based approach, the bandwidth allocation could be changed during handovers, but this would probably require very fast response times from the Bandwidth Broker. Of course, MBAC involves some gambling. It is possible that all admitted traffic sources start sending data at their peak rates at the same time. However, the probability of such an event is extremely small.

4. SIMPLE MEASUREMENT-BASED ADMISSION CONTROL FOR DIFFSERV ACCESS NETWORKS

We present a simple measurement-based admission control mechanism for DiffServ access networks. We do not probe the path with active measurements but rather extend Schelén's Bandwidth Broker approach10-13. The extra information needed for the admission control decisions is retrieved from the router statistics (using, e.g., SNMP), and it is periodically sent to the Bandwidth Broker agent of the routing domain. This approach results in lower connection setup times than active measurements, since there is no need to probe the path before sending the actual data.

4.1 Combining measurements and reservations

In our approach, we have a Call Admission Control (CAC) agent in all routing domain nodes. One of these agents acts as the Bandwidth Broker, managing resources by storing the information on reservations and measured link loads (received from the other CAC agents) within the routing domain. The Bandwidth Broker knows the routing topology by listening to OSPF16 messages. Link bandwidths within the routing domain are obtained through SNMP10.


Fig. 2: Bandwidth broker & other CAC agents and their routing domains.

In addition to reserved link bandwidths for different traffic classes (e.g., EF and AF4 (Ref. 18)), the CAC decision is based on measured link loads on the path between the endpoints. If there is not enough unreserved or unused bandwidth on the path, access is denied. The idea is that the maximum reservable bandwidth can be larger than the link capacity (overbooking is allowed). If the maximum reservable bandwidth is large enough, only the unused bandwidth matters. The relationship between the maximum reservable bandwidth and the link bandwidth is configurable for each traffic class. This parameter, αclass, defines whether we want to play it safe or trust the measurements. All CAC agents monitor and update their "local link loads" by using exponential averaging (see e.g., Ref. 19) on the statistics obtained from their local router:

load_class,i = (1 − w) × load_class,i−1 + w × max_j(load_class,j),        (1)

where load_class,j = bits_class,j / (s × bandwidth).

The number of dequeued bits during a sampling period (s), bits_class,j, is obtained from the router statistics. A possible value for s could be, e.g., 500 ms. During a measurement period (p), we sample the link loads p/s times, and at the end of each measurement period we select the maximum value to represent the current load. A possible range for the measurement period (p) could be, e.g., from 500 milliseconds to 10 seconds. The exponential averaging weight (w), measurement period (p) and sampling period (s) should be carefully selected. The "optimal" values for w and p depend on how fast we want to adapt to changes in link loads. A small value for s makes the scheme more sensitive to bursts, while a bigger value might give a better estimate of the average load19.
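The estimator of Eq. (1) can be sketched in a few lines; the function and variable names below are illustrative, not taken from the paper's implementation:

```python
def sample_load(bits_dequeued: float, s: float, bandwidth: float) -> float:
    """One load sample: bits dequeued during a sampling period of s seconds,
    divided by the bits the link could have carried in that period."""
    return bits_dequeued / (s * bandwidth)

def update_load(prev_load: float, samples: list, w: float) -> float:
    """End-of-measurement-period update of Eq. (1): exponential averaging
    with weight w applied to the maximum sample of the period."""
    return (1.0 - w) * prev_load + w * max(samples)

bandwidth = 10e6                                  # 10 Mbps link
samples = [sample_load(bits, 0.5, bandwidth)      # s = 500 ms
           for bits in (2.0e6, 2.5e6, 1.5e6)]     # bits dequeued per sample
load = update_load(0.4, samples, w=0.5)           # previous estimate: 40%
```

With these numbers the period's maximum sample is 0.5, so the new estimate moves halfway from 0.40 toward 0.50.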

Whenever a load report arrives at the Bandwidth Broker agent, the database is updated and the applicable unused link bandwidths are re-calculated:

unused_bandwidth_class = (threshold_class − load_class) × bandwidth        (2)

Table 1: Part of the BB database; αEF = 0.8, αAF4 = 0.8, αEF+AF4 = 1.0, thresholdEF = 0.6, thresholdAF4 = 0.6, thresholdEF+AF4 = 0.8.

Link  Bandwidth  Traffic class  Max. reservable bandwidth  Unreserved bandwidth  Measured load  Unused bandwidth
A→B   10 Mbps    EF             8 Mbps                     4 Mbps                40%            2 Mbps
                 AF4            8 Mbps                     2 Mbps                20%            4 Mbps
                 EF+AF4         10 Mbps                    0 Mbps                60%            2 Mbps
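The Table 1 entries can be recomputed directly from the configured parameters: the maximum reservable bandwidth is αclass times the link bandwidth, and the unused bandwidth follows Eq. (2). A small sketch with illustrative names:

```python
bandwidth = 10e6  # link A->B, 10 Mbps

params = {
    # traffic class: (alpha, threshold, measured load)
    "EF":     (0.8, 0.6, 0.40),
    "AF4":    (0.8, 0.6, 0.20),
    "EF+AF4": (1.0, 0.8, 0.60),
}

table = {}
for cls, (alpha, threshold, load) in params.items():
    table[cls] = {
        "max_reservable": alpha * bandwidth,       # 8, 8 and 10 Mbps
        "unused": (threshold - load) * bandwidth,  # Eq. (2): 2, 4 and 2 Mbps
    }
```

An alpha above 1.0 would allow overbooking, in which case the unused-bandwidth check of Eq. (2) is the one that actually limits admissions.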

New connections request resources (peak rate from source to destination) from the Bandwidth Broker of their own routing domain (other Bandwidth Brokers may have to be consulted as well, if the destination is not in the same domain). If there are enough resources (see Fig. 3 - Fig. 4), the requested bandwidth for the admitted connection is subtracted(b) from all applicable unreserved_bandwidth values along the path(c). Otherwise, the connection is rejected. Policing is needed for all admitted flows to keep their bit rates below the agreed ones.

4.2 Call admission control for two traffic classes

As a second enhancement to Schelén's approach10-13, we want to do call admission control for two traffic classes: EF and AF4. The motivation for doing CAC also for AF4 (in fact, it could be any AF class) is that there are many new real-time applications with "loose" QoS requirements. This means that these traffic sources (e.g., video or audio streaming) do not need "virtual wire" treatment, and thus their packets do not have to be marked as EF packets. Some statistical guarantees, however, have to be provided. For EF traffic, CAC is relatively straightforward: we only have to check that the EF and EF+AF4 reservations and loads are below their corresponding limits (Algorithm 1, see Fig. 3). For AF4, however, the CAC decision is more complex. This is due to the weighted scheduling between AF queues. We have (at least) the following alternative ways to do CAC for AF4:

− Algorithm 1: Configure the weights for AF4 in "strict priority" fashion (over other AF classes) and apply similar CAC as for EF: check that the AF4 and EF+AF4 reservations and loads are below their corresponding limits (see Fig. 3).



− Algorithm 2: Check that the EF+AF4 reservations and loads are below their corresponding limits and check that the following equation holds (see Fig. 4):

load_EF + load_AF4 / weight_AF4 ≤ 1,        (3)

where weight_AF4 is the scheduling weight for the AF4 queue (e.g., 0.5), so that the sum of all AF weights is one.

4.3 Data distribution

All CAC agents send their link loads (from the links that are directly attached to their local router) periodically (every p seconds) to the Bandwidth Broker. These packets should be given the best possible treatment in terms of packet delay and loss by marking them with an appropriate DiffServ code point (DSCP).
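The per-link AF4 admissibility condition of Eq. (3) amounts to a one-line check; the helper below is a hypothetical illustration, not the paper's code:

```python
def af4_load_ok(load_ef: float, load_af4: float, weight_af4: float) -> bool:
    """True if load_EF + load_AF4 / weight_AF4 <= 1 (Eq. 3)."""
    return load_ef + load_af4 / weight_af4 <= 1.0

# With weight_AF4 = 0.5, a 30% EF load leaves room for at most a 35% AF4 load:
assert af4_load_ok(0.30, 0.35, 0.5)       # 0.30 + 0.70 <= 1 -> admissible
assert not af4_load_ok(0.30, 0.40, 0.5)   # 0.30 + 0.80 > 1  -> reject
```

Dividing the AF4 load by its scheduling weight converts it into the share of the link that the scheduler must set aside for the AF4 queue, which is why the two terms can be summed against the full link.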

(b) The requested bandwidth is added to the applicable unreserved_bandwidth values once the connection is torn down.

(c) Note that we could also subtract something (e.g., half of the requested_bandwidth) from the unused_bandwidth values and check whether there is enough unused bandwidth left (unused_bandwidth ≥ half of the requested_bandwidth). This, however, is not trivial, and it is left for future studies.

Bandwidth Broker:
    for each admission request:
        classify connection (class = EF or AF4)
        admit = true
        for all links on the source-destination path:
            if ((unused_bandwidth_class ≤ 0) OR
                (unreserved_bandwidth_class < requested_bandwidth) OR
                (unused_bandwidth_EF+AF4 ≤ 0) OR
                (unreserved_bandwidth_EF+AF4 < requested_bandwidth))
                admit = false
        if (admit == true)
            for all links on the source-destination path:
                unreserved_bandwidth_class -= requested_bandwidth
                unreserved_bandwidth_EF+AF4 -= requested_bandwidth

    for each connection tear-down:
        classify connection (class = EF or AF4)
        for all links on the source-destination path:
            unreserved_bandwidth_class += requested_bandwidth
            unreserved_bandwidth_EF+AF4 += requested_bandwidth

    for each load update arrival:
        update database (unused_bandwidth)

All CAC agents (including Bandwidth Broker):
    timer expires:
        update link loads
        send update to Bandwidth Broker
        set timer to expire after p seconds

Fig. 3: CAC Algorithm 1 (for EF and AF4 connections).

Bandwidth Broker:
    for each AF4 admission request:
        classify connection (class = AF4)
        admit = true
        for all links on the source-destination path:
            if (((load_EF + load_AF4 / weight_AF4) > 1) OR
                (unreserved_bandwidth_class < requested_bandwidth) OR
                (unused_bandwidth_EF+AF4 ≤ 0) OR
                (unreserved_bandwidth_EF+AF4 < requested_bandwidth))
                admit = false
        if (admit == true)
            for all links on the source-destination path:
                unreserved_bandwidth_class -= requested_bandwidth
                unreserved_bandwidth_EF+AF4 -= requested_bandwidth

    for each connection tear-down:
        classify connection (class = AF4)
        for all links on the source-destination path:
            unreserved_bandwidth_class += requested_bandwidth
            unreserved_bandwidth_EF+AF4 += requested_bandwidth

    for each load update arrival:
        update database (unused_bandwidth)

All CAC agents (including Bandwidth Broker):
    timer expires:
        update link loads
        send update to Bandwidth Broker
        set timer to expire after p seconds

Fig. 4: CAC Algorithm 2 (for AF4 connections only).
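The per-path checks and bookkeeping of Algorithm 1 can be condensed into a short sketch. The Link record and the path representation below are hypothetical stand-ins for the BB database, assuming the unused values come from Eq. (2):

```python
from dataclasses import dataclass

@dataclass
class Link:
    unused: dict        # traffic class -> unused bandwidth (bps), per Eq. (2)
    unreserved: dict    # traffic class -> unreserved bandwidth (bps)

def admit(path: list, cls: str, requested: float) -> bool:
    """Algorithm 1: admit only if every link on the path has positive
    unused bandwidth and enough unreserved bandwidth, for both the
    connection's class and the EF+AF4 aggregate."""
    for link in path:
        if (link.unused[cls] <= 0
                or link.unreserved[cls] < requested
                or link.unused["EF+AF4"] <= 0
                or link.unreserved["EF+AF4"] < requested):
            return False
    # Admission: reserve along the whole path (restored at tear-down).
    for link in path:
        link.unreserved[cls] -= requested
        link.unreserved["EF+AF4"] -= requested
    return True

link = Link(unused={"EF": 2e6, "EF+AF4": 2e6},
            unreserved={"EF": 4e6, "EF+AF4": 0.5e6})
admitted = admit([link], "EF", 30e3)   # a 30 kbps VoIP-like request
```

Note that the checks are all-or-nothing across the path: nothing is subtracted unless every link passes, which mirrors the two-pass structure of the pseudocode in Fig. 3.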

5. SCHEME VALIDATION AND PERFORMANCE EVALUATION

5.1 Simulation cases and network topology

We chose to validate the proposed admission control scheme through simulations. We also wanted to compare the performance of parameter-based and measurement-based admission control algorithms. The following four cases were identified:

1) No admission control
2) Parameter-based admission control, Algorithm 1 (see Fig. 3)
3) Measurement-based admission control, Algorithm 1 (see Fig. 3)
4) Measurement-based admission control, Algorithm 2 (see Fig. 4)

Since we claimed that measurements were necessary for better network utilization, a performance comparison between parameter-based and measurement-based call admission control was needed. Only the two extremes (bookkeeping only vs. measurements only) were simulated. The PBAC parameters were the following (no overbooking, because we don't know the actual link loads): αEF = 0.8, αAF4 = 0.8, αEF+AF4 = 1.0. The MBAC parameters (for both Algorithm 1 and Algorithm 2) were the following: thresholdEF = 0.6, thresholdAF4 = 0.6 and thresholdEF+AF4 = 0.8. The following exponential averaging parameters were used for the measured link loads: s = 500 ms, p = 500 ms, w = 0.5. CAC was not applied to the TCP-based, adaptive traffic sources (AF1-AF3 in our case).


Fig. 5: Example access network topology.

An example topology for our simulations is illustrated in Fig. 5. Our example access network consists of three fiber links with a bandwidth of 110 Mbps and three "microwave branches" with substantially less bandwidth (first hop from the fiber: 24 Mbps, second hop from the fiber: 6 Mbps).

5.2 Network equipment

All routers in our example topology implemented the standard Per-Hop Behaviors (PHBs): EF14 was realized as a priority queue17, and AF18 & BE with a Deficit Round Robin20 system consisting of five queues. The default weights (quanta) for the AF4, AF3, AF2, AF1 and BE queues were 15, 8, 4, 2 and 1, respectively. Congestion management for EF was implemented with a token bucket rate limiter (rate: 0.9 × link bandwidth, bucket size: 3 × MTU = 4500 bytes). For the AF queues, Weighted Random Early Detection (WRED) was applied; in our case this meant that RED21 was applied to each AF queue separately. All RED queues used an average queue size (AQS) weight of 0.25, and all queues had a maximum size of 50 packets. The other RED parameters (for all AF queues) were the following:

− Drop Precedence = 1: MinThresh = 25 pkts, MaxThresh = 45 pkts, MaxDropPr = 1.0
− Drop Precedence = 2: MinThresh = 15 pkts, MaxThresh = 35 pkts, MaxDropPr = 1.0
− Drop Precedence = 3: MinThresh = 5 pkts, MaxThresh = 25 pkts, MaxDropPr = 1.0
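The parameters above imply one linear RED drop curve per drop precedence. The sketch below shows those curves in isolation; it deliberately omits the exponential averaging of the queue size (weight 0.25 in our setup) that a real RED queue applies before consulting the thresholds:

```python
RED_PARAMS = {
    # drop precedence: (min_thresh, max_thresh, max_drop_pr), in packets
    1: (25, 45, 1.0),
    2: (15, 35, 1.0),
    3: (5, 25, 1.0),
}

def drop_probability(avg_queue: float, precedence: int) -> float:
    """RED drop probability: zero below MinThresh, a linear ramp up to
    MaxDropPr between MinThresh and MaxThresh, and 1.0 above MaxThresh."""
    lo, hi, p_max = RED_PARAMS[precedence]
    if avg_queue < lo:
        return 0.0
    if avg_queue >= hi:
        return 1.0
    return p_max * (avg_queue - lo) / (hi - lo)

# At an average queue of 20 packets, only precedence-2 and -3 packets
# risk being dropped.
assert drop_probability(20, 1) == 0.0
assert drop_probability(20, 2) == 0.25
assert drop_probability(20, 3) == 0.75
```

The staggered thresholds are what make this "weighted" RED: at any given queue length, higher drop precedences (out-of-profile packets) are discarded first.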

5.3 Traffic characteristics

All connections were set up between the "access network gateway" and some edge router. New connections arrived at each edge router with exponentially distributed interarrival times (with a mean of 0.67 seconds). For simplicity, we used exponentially distributed holding times with the same mean (100 seconds) for all connections. Without connection admission control, this would have resulted (on average; based on Little's result(d)) in 150 simultaneous connections per edge router. The star points of our topology were considered "hot spots", where the arrival intensity was doubled. Our traffic mix consisted of Voice over IP (VoIP) calls, videotelephony, video streaming22, web browsing23-24 and e-mail downloading24 (see Appendix 1 for traffic source descriptions). There were three different service levels (Gold, Silver and Bronze) within each AF class. Signaling traffic between the Bandwidth Broker and all other CAC agents was also modeled. The Bandwidth Broker agent was physically located at the "gateway" that connects the access network to the service provider's core network. Service mapping was done according to Table 2:

Table 2: Traffic mix & service mapping.

Service             Service level  PHB   Share of all subscribers  Requested bandwidth (≈ peak rate)
BB messages         -              EF    -                         -
VoIP calls          -              EF    40%                       30 kbps
Videotelephony      -              EF    10%                       60 kbps
Video streaming     Gold           AF41  2.67%                     250 kbps
                    Silver         AF42  2.67%                     250 kbps
                    Bronze         AF43  2.67%                     250 kbps
Fast web browsing   Gold           AF31  2.67%                     -
                    Silver         AF32  2.67%                     -
                    Bronze         AF33  2.67%                     -
Web browsing        Gold           AF21  5.33%                     -
                    Silver         AF22  5.33%                     -
                    Bronze         AF23  5.33%                     -
E-mail downloading  Gold           AF11  6%                        -
                    Silver         AF12  6%                        -
                    Bronze         AF13  6%                        -

5.4 Simulation methodology

We used a modified version of the ns-2 simulator26. Six simulations with different seed values were run in each simulated case. The simulated time was always 600 seconds, of which the first 200 seconds were discarded as a "warm-up" period. We were interested in the tradeoff between connection blocking probability and the following QoS metrics for

(d) N = λT (Little's result).

different traffic aggregates: end-to-end delay, packet loss, achieved bit rates for the TCP-based traffic sources (virtual links of 270 kbps were used to limit the sending rates of all TCP-based sources) and packet losses on individual links. Link utilization levels were naturally observed, since they are the essential input of our admission control scheme. Link utilization and reservation level graphs were drawn for the first run of each simulated case.
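The "150 simultaneous connections per edge router" figure quoted in Section 5.3 follows directly from Little's result N = λT with the stated traffic parameters:

```python
# Little's result N = lambda * T applied to the Sec. 5.3 parameters.
mean_interarrival = 0.67   # s, mean of the exponential interarrival times
mean_holding = 100.0       # s, mean connection holding time

arrival_rate = 1.0 / mean_interarrival        # lambda, connections per second
n_connections = arrival_rate * mean_holding   # ~149.3, i.e., roughly 150
```

At the "hot spot" edge routers, where the arrival intensity is doubled, the same calculation gives roughly 300 offered connections.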

6. SIMULATION RESULTS

6.1 Results without admission control

The reason for including this case (Fig. 6 - Fig. 7, Table 3) is to show what happens when a DiffServ network gets overloaded: the quality of video streaming is most probably bad due to excessive packet drops. EF traffic does not suffer yet, but it would with a higher connection arrival rate.

[Fig. 6 - Fig. 7 (plots): traffic on the bottleneck link (24 Mbps), EF & EF+AF4 loads vs. link capacity; the cumulative distribution of end-to-end delay (6 simulation runs, 95% confidence interval), EF: μ = 3.15 ± 0.04 ms, P95 = 4.77 ± 0.07 ms, P99.9 = 5.40 ± 0.00 ms.]
