A-DRAFT: An Adaptive QoS Mechanism to Support Absolute and Relative Throughput in 802.11 Wireless LANs
Wasan Pattara-Atikom†, Sujata Banerjee∗†, and Prashant Krishnamurthy†
†Telecommunications Program, University of Pittsburgh, Pittsburgh, PA 15260
∗Hewlett-Packard Laboratories, Palo Alto, CA 94304
{wapst7,prashk,sujata}@pitt.edu
ABSTRACT
Several distributed QoS mechanisms have been proposed to augment the existing distributed coordination function of the IEEE 802.11 MAC protocol and provide QoS support for QoS-sensitive applications. These mechanisms use well-known QoS enabling techniques such as priority assignment and fair scheduling within existing 802.11 MAC parameters. Although these mechanisms provide differentiated throughput for different classes of traffic, they cannot provide both relative and absolute throughput support simultaneously. In this paper, we propose a new mechanism called A-DRAFT that supports both absolute and relative throughput in an adaptive and fully distributed manner. The mechanism supports absolute throughput as long as the total demand from this class is below the effective channel capacity. It also provides relative or fair throughput support with low variation and a high degree of fairness, even in a saturated network with a large number of MSs. We make use of deficit round robin scheduling with different levels of quantum rate to provide fairness, and different weights to provide absolute and relative throughput. We evaluate the performance of the proposed mechanism via mathematical analysis and confirm this analysis with simulations.
Categories and Subject Descriptors: C.2.5 [Computer-Communication Networks]: Local and Wide-Area Networks

General Terms: Algorithms, Performance, Design

Keywords: Wireless LANs, Quality of Service, Throughput

1. INTRODUCTION

As wireless local area networks (WLANs) become ubiquitous and an integral part of the telecommunications infrastructure, they will increasingly be used for multimedia applications (an example is the recent interest in running Voice over IP (VoIP) over wireless LANs). Multimedia applications are time sensitive and require specific throughput and delay bounds, creating an urgent need for Quality of Service (QoS) support in WLANs. The current medium access control (MAC) protocol in IEEE 802.11 WLANs, the Distributed Coordination Function (DCF), is not suitable for QoS-sensitive traffic. Several priority-based QoS mechanisms have been proposed as extensions to DCF over the last few years, including an extension to IEEE 802.11 called IEEE 802.11e, to provide service differentiation among different traffic classes. However, it has been reported in the literature [1-4] that priority-based mechanisms suffer from degraded service differentiation for high priority traffic, starvation of low priority traffic, high throughput variation, and a lack of fairness. In order to overcome the unfair apportioning of bandwidth created by binding channel access to the priority of a traffic class, fair scheduling mechanisms have been considered as alternatives for providing QoS in WLANs. Fair scheduling algorithms attempt to partition the network resource fairly amongst flows in proportion to a given flow weight. The idea is to regulate the waiting time of traffic, so that traffic in each class has a fair opportunity (based on its flow weight) to be sent. This differs from schemes that bind channel access to priority, because the bandwidth is fairly apportioned between different traffic classes. By using different weights, mechanisms based on fair scheduling can provide absolute and/or relative throughput support. By relative throughput support, we mean the ability to allocate different bandwidths to different traffic classes in proportion to their requirements; web access, file sharing, and instant messaging are examples of applications that require relative throughput support. By absolute throughput support, we mean the ability to provide a specific throughput; media distribution and voice telephony are examples of applications that require absolute throughput support. The drawbacks of current fair queueing mechanisms include one or more of the following: they cannot recognize different QoS requirements, as they are designed to support either relative or absolute throughput but not both; it is difficult to translate user requirements into an appropriate metric at the MAC layer; and the variability of throughput is high.
In this paper, we propose a new adaptive QoS mechanism called Adaptive Distributed Relative and Absolute Fair Throughput (A-DRAFT). A-DRAFT can provide both absolute throughput and relative throughput support at the same time, in the same mechanism, under reasonable loads. Moreover, it can adaptively throttle the bandwidth available to traffic flows belonging to the relative throughput class and thereby extend its ability to support absolute throughput under overloaded conditions. A-DRAFT is based on fair queue scheduling, but it is fully distributed and simple to implement. It provides high and relatively constant aggregate throughput, a high degree of fairness, and low throughput variation even for a large number of Mobile Stations (MSs). We briefly discuss IEEE 802.11 and its QoS extensions in the next section. In Section 3, we describe related work in this area. We describe the proposed A-DRAFT scheme in Section 4 and the simulation setup and numerical results in Section 5. We conclude the paper in Section 6.
2. IEEE 802.11 AND 802.11E

The DCF mode of IEEE 802.11 is based on Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA). A Mobile Station (MS) first senses the channel and can transmit a packet if the channel is idle for a time equal to the DCF Inter-frame Space (DIFS). If the channel is busy, the MS enters a deferral (backoff) period to reduce the chance of collisions with other contending MSs. Here, the MS selects a backoff interval (BI) that is uniformly distributed between zero and a contention window (CW). CW is initially set to CWmin and is doubled after every consecutive collision until it reaches CWmax. The ongoing standards activity to support QoS in WLANs is called 802.11e. The main idea is to assign different values of IFS, CWmin, and CWmax to different classes of traffic. The distributed mechanism in 802.11e is called Enhanced Distributed Channel Access (EDCA) [5]. EDCA consists of up to 4 prioritized queues, and a new type of IFS named Arbitration IFS (AIFS) is added. Higher priority traffic receives a smaller AIFS, CWmin, and CWmax than lower priority traffic. Although EDCA provides differentiated QoS, it cannot provide a specific QoS service. In ideal situations, some studies report good service differentiation (e.g., improved throughput, access delay, and drop rate for high priority traffic [6]). However, other studies [1-4] have pointed out fundamental shortcomings of priority-based mechanisms in general. In these mechanisms, the priority of a traffic class is statically bound to the right to access the medium; therefore, high priority traffic almost always accesses the medium first. Although this approach is beneficial to high priority traffic, it can lead to other problems ranging from unfair occupation of bandwidth to starvation of low priority traffic. High variation of throughput and delay in saturated conditions is an issue that remains unsolved. Finally, there is no easy mapping scheme to convert a user or network requirement (i.e., throughput or delay) to the EDCA parameters IFS, CWmin, and CWmax; that is, it is not clear what the values of AIFS and CW should be to provide a specific throughput of 64 Kbps with a 50 ms delay.
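To make the contention parameters concrete, the following is a minimal C sketch of how an EDCA-style per-class backoff interval could be drawn; the AIFS and CW numbers are illustrative only, not the values mandated by 802.11e.

/* Illustrative sketch of EDCA-style per-class contention parameters.
 * The numeric values are examples only, not normative 802.11e settings. */
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    int aifs_slots;   /* arbitration inter-frame space, in slot times */
    int cw_min;
    int cw_max;
} AccessCategory;

/* Draw a backoff interval in [0, cw] after `collisions` consecutive collisions,
 * doubling the contention window each time (binary exponential backoff). */
static int backoff_slots(const AccessCategory *ac, int collisions)
{
    int cw = ac->cw_min;
    for (int i = 0; i < collisions && cw < ac->cw_max; i++)
        cw = (cw * 2 < ac->cw_max) ? cw * 2 : ac->cw_max;
    return rand() % (cw + 1);
}

int main(void)
{
    AccessCategory voice = { 2, 7, 15 };        /* higher priority: smaller AIFS/CW */
    AccessCategory best_effort = { 3, 31, 1023 };

    srand(1);
    printf("voice BI after 0 collisions: %d slots\n", backoff_slots(&voice, 0));
    printf("best-effort BI after 2 collisions: %d slots\n",
           backoff_slots(&best_effort, 2));
    return 0;
}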
3. RELATED WORK

Several approaches have been investigated to provide QoS support in IEEE 802.11 WLANs. These approaches can be classified into two categories: 1) priority-based and 2) fair-scheduling-based. The priority-based mechanisms [5, 7, 8] aim at providing service differentiation between low priority and high priority traffic, while the fair-scheduling-based mechanisms [3, 9-12] aim at providing fair throughput among different bandwidth requirements. Among these approaches, QoS is usually provided by tuning one or more of three WLAN parameters: 1) the Inter-Frame Space, 2) the Contention Window, and 3) the Backoff Interval. As in 802.11e, priority-based mechanisms assign smaller values of these parameters to traffic with higher priority, enabling it to access the medium before traffic with lower priority. Several drawbacks of the priority-based mechanisms, including degradation of service differentiation for high priority traffic, unfair apportioning of bandwidth, and starvation of low priority traffic, have been discussed in the earlier sections. In order to overcome the unfair apportioning of bandwidth created by binding channel access to the priority of a traffic class, fair-scheduling-based mechanisms have been considered as alternatives. Fair-scheduling mechanisms attempt to partition the network resource fairly amongst flows in proportion to a given flow weight. The idea is to regulate the waiting time of traffic, so that traffic in each class has a fair opportunity (based on its flow weight) to be sent. With different weights, different classes of requirements can be supported. Distributed Weighted Fair Queuing (DWFQ) [12] and Distributed Fair Scheduling (DFS) [10] consider relative throughput support, i.e., the ability to allocate fair throughput among different traffic classes, while the Assured Rate MAC Extension (ARME) [11] considers absolute throughput support, i.e., the ability to deliver a specific throughput. The drawbacks of these fair-scheduling-based mechanisms include one or more of the following: it is difficult to translate user requirements into an appropriate metric at the MAC layer; the variability of throughput and delay is high; and they cannot recognize different QoS requirements, as they are designed to support only one QoS service. A comprehensive review of QoS support in IEEE 802.11 WLANs can be found in [13].
4. A-DRAFT
The primary advantage of A-DRAFT is that it is an adaptive mechanism that can provide both relative throughput and absolute throughput to different traffic classes in a fully distributed manner, without suffering from the drawbacks of the mechanisms discussed above.
4.1 Protocol Description
The traffic at each MS is categorized into classes with different QoS requirements, i.e., absolute throughput and relative throughput. A traffic class j at MS i (TC_i^j) with throughput requirement λ_i^j is allocated a service quantum of Q bits every t_i^j seconds. For the sake of simplicity, we will assume that there is only one traffic class per MS and omit the superscript in what follows. The rate of service quantum accumulation for TC_i is Qr_i = Q/t_i. With time, each traffic class in each MS independently accumulates service quanta at its respective quantum rate. Traffic classes requiring a higher throughput are given higher service quantum rates. The "Deficit Counter" of TC_i (DC_i) is increased continuously with time at a rate of Qr_i as shown in Eq. 1, and is decreased by the size of the frame (in bits) whenever a frame is transmitted successfully as shown in Eq. 2.

DC_i(t) = DC_i(t_0) + Qr_i · (t − t_0)   (1)
DC_i(t) = DC_i(t) − FrameSize(t)   (2)

The simplicity of this mechanism is that each MS simply and independently sets the quantum rate equal to the desired throughput for the traffic class, that is, Qr_i = Q/t_i = λ_i. The rest of the mechanism makes use of Qr_i, thereby eliminating the problems of weight assignment. Then, DC_i is mapped to IFS_i to provide an advantage for channel access, while Qr_i is mapped to W_i to provide fairness. These parameters are calculated as follows:

IFS_i = DIFS − α · DC_i(t)/Q   (3)
BI_i = (log10(L_i)/W_i) · PF^(−c_i) · μ(0.9, 1.1)   (4)
Wr_i = Qr_i/R   (5)
Wa_i = ω · Qr_i/R   (6)

IFS_i is the calculated inter-frame space and BI_i is the calculated backoff interval. L_i is the length of the frame to be transmitted and PF is the persistence factor. c_i is the number of consecutive collisions, which is reset to zero after a successful transmission. μ(0.9, 1.1) is a random number generated from a uniform distribution between 0.9 and 1.1. In Eq. 5, Wr_i (called the relative throughput weight of MS i) is equal to the fraction of the channel bandwidth demanded by TC_i, where R is the raw data rate of the WLAN. For example, if λ_i = Qr_i = 200 Kbps and R = 2 Mbps, then Wr_i = 0.1. Wa_i (called the absolute throughput weight) is the weight of a traffic class requiring absolute throughput. We will use W_i to represent either Wr_i or Wa_i depending on the requirement of TC_i. The weights and ω are the key parameters for achieving absolute and relative throughput support and will be described later. We discuss these equations and their impact in more detail in the following sections.
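To make Eqs. 1, 2, 5, and 6 concrete, the following is a minimal C sketch of the per-class state an MS could keep; the structure, field names, and event-driven update style are our assumptions, not the authors' implementation.

/* Sketch of per-class A-DRAFT state: deficit counter growth (Eq. 1),
 * decrement on successful transmission (Eq. 2), and weights (Eqs. 5-6).
 * Names and the update style are illustrative assumptions. */
#include <stdio.h>

typedef struct {
    double qr;       /* quantum rate Qr_i = lambda_i, in bits per second */
    double dc;       /* deficit counter DC_i, in bits */
    double last_t;   /* time of the last update, in seconds */
} TrafficClass;

static void accumulate(TrafficClass *tc, double now)            /* Eq. 1 */
{
    tc->dc += tc->qr * (now - tc->last_t);
    tc->last_t = now;
}

static void on_tx_success(TrafficClass *tc, double frame_bits)  /* Eq. 2 */
{
    tc->dc -= frame_bits;
}

static double w_relative(const TrafficClass *tc, double R)      /* Eq. 5 */
{
    return tc->qr / R;
}

static double w_absolute(const TrafficClass *tc, double R, double omega) /* Eq. 6 */
{
    return omega * tc->qr / R;
}

int main(void)
{
    TrafficClass tc = { 200e3, 0.0, 0.0 };  /* 200 Kbps class, as in the text */
    accumulate(&tc, 0.05);                  /* 50 ms later: DC = 10,000 bits */
    on_tx_success(&tc, 8000.0);             /* one 1000-byte frame sent */
    printf("DC = %.0f bits, Wr = %.2f, Wa = %.2f\n",
           tc.dc, w_relative(&tc, 2e6), w_absolute(&tc, 2e6, 5.0));
    return 0;
}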
4.2 IFS Parameter

Rather than having a fixed value of IFS per traffic or priority class as proposed in 802.11e, A-DRAFT calculates an instantaneous value of IFS that depends on the current value of the deficit counter (DC), as given in Eq. 3. This is one of the fundamental differences between A-DRAFT and 802.11e. Unlike priority-based QoS mechanisms, where high priority traffic with a smaller IFS always has the right to access the network earlier than low priority traffic, the right to access the channel in A-DRAFT is determined by the size of the DC, not the quantum rate. Because of this, starvation is alleviated and fairness can be maintained: once a flow uses up its DC, other flows can take turns accessing the medium as their DCs increase. The parameter α is a constant scaling factor that translates the normalized deficit counter DC_i(t)/Q into an appropriate reduction of IFS_i.
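A small sketch of Eq. 3 follows: the larger the current deficit counter, the shorter the computed IFS, so a class that has accumulated service quanta contends earlier. The value of α and the clamp at a minimum IFS are illustrative assumptions.

/* Sketch of the A-DRAFT IFS calculation (Eq. 3).
 * alpha and the lower bound (SIFS here) are illustrative choices. */
#include <stdio.h>

#define DIFS_US 50.0    /* DIFS in microseconds (802.11b-style figure) */
#define SIFS_US 10.0    /* assumed lower bound so IFS never drops below SIFS */

static double ifs_us(double dc_bits, double quantum_bits, double alpha_us)
{
    double ifs = DIFS_US - alpha_us * (dc_bits / quantum_bits);  /* Eq. 3 */
    return (ifs < SIFS_US) ? SIFS_US : ifs;                      /* assumed clamp */
}

int main(void)
{
    double q = 8000.0;                       /* one quantum, in bits */
    printf("empty DC : IFS = %.1f us\n", ifs_us(0.0, q, 10.0));
    printf("DC = 2Q  : IFS = %.1f us\n", ifs_us(2.0 * q, q, 10.0));
    return 0;
}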
4.3 BI Parameter

In 802.11, BI is selected randomly from a range between zero and CW, and the value of CW is doubled after every consecutive collision. We can say that the calculated BI consists of two components: collision avoidance and collision reduction. The collision avoidance component is the uniform randomization of the BI selection between zero and CW. This component is used even before a collision occurs, and it is meant to prevent potential collisions among traffic flows that enter the contention period at the same time. The collision reduction component is the exponential expansion of CW. This component is applied only after collisions have occurred, as it is meant to reduce the probability of further collisions. On the one hand, different values of BI are assigned to avoid collisions; on the other hand, the assignment of BI can be viewed as a distributed transmission scheduling mechanism, because the value of BI determines the time at which a frame will be transmitted. Unfortunately, the calculated BI of 802.11 can vary over a wide range because it is randomly selected from a range that can expand exponentially with each collision. It is therefore difficult to estimate the next transmission time, which is important for QoS-sensitive applications. The way in which BI is calculated has a significant impact on performance in general. According to our simulations, the throughput and delay of mechanisms based on exponential backoff, e.g., DCF, EDCA, DWFQ, and ARME, exhibit high variation, particularly in overloaded situations. We believe that the exponential backoff algorithm is the primary cause of this problem. In this paper, we propose a new way to calculate the value of BI by 1) modifying the collision avoidance component, 2) modifying the collision reduction component to be more efficient, and 3) adding another component, depending on the weight, that provides fairness and QoS for different classes of traffic flows. The collision avoidance component is a simple random number generated from a uniform distribution between 0.9 and 1.1 (Eq. 4). The collision reduction component is a new mechanism called negative exponential backoff, described below. The component depending on the weight involves a deterministic calculation of BI, also described below.

4.3.1 Deterministic Calculation of BI

The term log10(L_i)/W_i in Eq. 4 depends on the weight of a traffic flow. If the packet size is assumed to be constant, the value of BI is calculated deterministically, based solely on the value of W_i (as in DFS [10]). From this equation, the values of the calculated BI_i for different traffic classes (each with a different weight) fall into different ranges, as shown in Fig. 1: the larger the weight, the smaller the value of the range. For example, if the packet size is fixed at 1000 bytes, a traffic flow with a weight of 0.01 will have a BI of 300, while a traffic flow with a weight of 0.1 will have a BI of 30. Because of this, the calculated BIs of different traffic classes (or weights) are distributed over a range and separated from each other, so collisions among different classes are reduced automatically. The logarithmic function is applied to minimize the variations in BI owing to different frame sizes. The term μ(0.9, 1.1) represents the collision avoidance component. As discussed previously, this component is used to prevent collisions among traffic flows within the same traffic class, i.e., with the same L_i and W_i. To reduce collisions among traffic flows of different classes, we propose a negative exponential backoff, coupled with a positive exponential backoff, to reduce the probability of collision between a newly arriving frame and a frame that was previously transmitted but not delivered successfully due to collisions, as described in the following.
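The sketch below evaluates Eq. 4 and reproduces the 300/30 example above (1000-byte frame, weights 0.01 and 0.1, no collisions). The exact form of the persistence-factor term and the value PF = 2 follow our reading of the equation and are marked as assumptions.

/* Sketch of the A-DRAFT backoff calculation, assuming the form
 * BI_i = (log10(L_i) / W_i) * PF^(-c_i) * mu(0.9, 1.1). */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

static double uniform(double lo, double hi)
{
    return lo + (hi - lo) * ((double)rand() / RAND_MAX);
}

/* len_bytes: frame length L_i; w: class weight W_i; pf: persistence factor;
 * collisions: consecutive-collision counter c_i (reset after a success). */
static double backoff_interval(double len_bytes, double w, double pf, int collisions)
{
    return (log10(len_bytes) / w) * pow(pf, -(double)collisions) * uniform(0.9, 1.1);
}

int main(void)
{
    srand(1);
    /* pf = 2.0 is an assumed value; with mu ~= 1 and no collisions:
     * log10(1000)/0.01 = 300 and log10(1000)/0.1 = 30, as in the text. */
    printf("W = 0.01 : BI ~ %.0f\n", backoff_interval(1000, 0.01, 2.0, 0));
    printf("W = 0.10 : BI ~ %.0f\n", backoff_interval(1000, 0.10, 2.0, 0));
    return 0;
}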
[Figure 1 appears here. It compares the ranges from which BI is selected under DCF and A-DRAFT for colliding and newly entering MSs of class 1 and class 2, from smaller to larger BI; the marked ranges indicate the BI values available for collision avoidance.]
Figure 1: Range of BI between DCF and A-DRAFT after collisions
4.3.2 Negative Exponential Backoff

In negative exponential backoff, rather than increasing the range of CW in response to a collision, we propose reducing the range of BI after every consecutive collision, as shown in Eq. 4. At first glance, this method might appear counterintuitive compared to what is commonly used for collision reduction. However, it is important to emphasize that BI is no longer randomly selected from a range between zero and CW. Instead, BI is calculated deterministically and distributed in the same way as the traffic in each MS is distributed within and among all classes, as shown in Fig. 1. As the value of BI is fixed based on W_i, increasing the range of BI is not necessarily the best alternative for decreasing the probability of collision. Depending on the number of traffic flows in each class, either increasing or decreasing the range of BI can be effective in reducing the probability of collision. If there are more traffic flows with larger weights than with smaller weights, increasing the range of BI is the better alternative: the larger weight flows have smaller BI values, and decreasing the range of BI would cluster the colliding flows with those large weight flows, which in turn increases the probability of collision. If there are more traffic flows with smaller weights than with larger weights, decreasing the range of BI is beneficial in reducing the probability of collision. Moreover, the colliding frames have an incentive to decrease BI, because a frame that has undergone collisions will then be less likely to collide with newly arriving frames of the same class. With a small BI, a frame that has undergone collisions will be retransmitted, and potentially successfully received, sooner than newly arriving frames of the same class. As a result, the variation of throughput can be reduced, as shown by our simulation results in Section 5. In the next section, we discuss the details of and rationale for the weight parameters, as well as the concept of fair share of throughput.
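To see the effect of negative exponential backoff, the short sketch below prints how the BI of a frame shrinks with each consecutive collision under the Eq. 4 form assumed above, so a frame that has already collided is retried ahead of newly arriving frames of the same class.

/* Sketch: BI shrinks after each consecutive collision under the assumed
 * form BI = (log10(L)/W) * PF^(-c), ignoring the small uniform jitter. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    const double len = 1000.0, w = 0.05, pf = 2.0;  /* illustrative values */
    for (int c = 0; c <= 3; c++) {
        double bi = (log10(len) / w) * pow(pf, -(double)c);
        printf("after %d collision(s): BI = %.1f\n", c, bi);
    }
    return 0;
}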
4.4 Condition to Support Absolute Throughput and Adaptability

The weights Wr_i and Wa_i, the parameter ω, and the concept of fair share of throughput are key to achieving absolute and relative throughput support in the proposed mechanism. The main idea is that flows from the absolute throughput class receive as much bandwidth as requested, and flows from the relative throughput classes fairly share the rest of the bandwidth according to their weights. The ability to provide absolute throughput support can be maintained as long as the total demand from the absolute throughput class is below the effective channel capacity. We define the fair share of channel capacity as the proportion of throughput that a flow should receive according to its weight. Let λ_x^k denote a traffic flow of class x at MS k with a weight of W_x, and let λ_e denote the effective channel capacity of the WLAN, which we assume to be constant. We once again assume one traffic class per MS and omit the superscript k in what follows. The fair share of λ_x (denoted by λ̄_x) is simply the product of the effective channel capacity and the ratio of the weight of the MS to the sum of the weights of all flows in the network, as shown in Eq. 7.

λ̄_x = λ_e · W_x / Σ_{∀i} W_i   (7)

To achieve absolute throughput support, we introduce a parameter called ω. The parameter ω is a factor that multiplies the normal weight (Wr_i) to escalate the fair share of flows that belong to the absolute throughput class. As a result, the fair share of flows from the absolute throughput class (λ̄_{x∈AT}) is escalated as shown in Eq. 8, because ω multiplies only the weights of flows that belong to the absolute throughput class, not the weights of flows in the relative throughput class. Using Eqs. 5 and 6, we can substitute the weights by the quantum rates in Eq. 8 to show this relationship in a different way. In this equation and in what follows, AT and RT represent all flows in the absolute throughput and relative throughput classes, respectively.

λ̄_{x∈AT} = λ_e · Wa_x / Σ_{∀i} W_i = λ_e · ω · Qr_x / (ω · Σ_{∀i∈AT} Qr_i + Σ_{∀j∈RT} Qr_j)   (8)
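As a numerical sketch of Eqs. 7 and 8, the code below compares the plain fair share of one absolute throughput flow with its ω-escalated share for an invented flow mix; with this mix the escalated share exceeds the requested 200 Kbps, consistent with the condition derived next.

/* Sketch of the fair share (Eq. 7) and the omega-escalated share (Eq. 8)
 * of one absolute throughput flow. The flow mix below is illustrative. */
#include <stdio.h>

int main(void)
{
    const double lambda_e = 1.6e6;              /* effective capacity, bps */
    const double R = 2e6;                       /* raw channel rate, bps */
    const double omega = 5.0;

    const double qr_at[] = { 200e3, 300e3 };    /* absolute throughput flows */
    const double qr_rt[] = { 1e6, 1e6 };        /* relative throughput flows */

    /* Denominators: plain weights (Eq. 5) vs. escalated AT weights (Eq. 6). */
    double w_plain = 0.0, w_escal = 0.0;
    for (int i = 0; i < 2; i++) { w_plain += qr_at[i] / R; w_escal += omega * qr_at[i] / R; }
    for (int j = 0; j < 2; j++) { w_plain += qr_rt[j] / R; w_escal += qr_rt[j] / R; }

    double x = qr_at[0];                        /* flow of interest: wants 200 Kbps */
    double share_eq7 = lambda_e * (x / R) / w_plain;            /* Eq. 7 */
    double share_eq8 = lambda_e * (omega * x / R) / w_escal;    /* Eq. 8 */

    printf("requested        : %.0f Kbps\n", x / 1e3);
    printf("fair share (Eq.7): %.0f Kbps\n", share_eq7 / 1e3);
    printf("escalated  (Eq.8): %.0f Kbps\n", share_eq8 / 1e3);
    return 0;
}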
We now suggest a way to compute an appropriate value for ω. Suppose the value of ω is made large enough that λ̄_{x∈AT} ≥ Qr_x; then the experienced throughput will be equal to the desired throughput (given by the quantum rate), because the fair share of the channel bandwidth is larger than the specified requirement. Based on this observation, we can determine the condition under which the network can provide absolute throughput support. The condition shown in Eq. 9 is the heart of the proposed mechanism.

Qr_x ≤ λ_e · ω · Qr_x / (ω · Σ_{∀i∈AT} Qr_i + Σ_{∀j∈RT} Qr_j)  ⟺  (1/ω) · Σ_{∀j∈RT} Qr_j ≤ λ_e − Σ_{∀i∈AT} Qr_i   (9)

As long as the ratio between the sum of the quantum rates of all flows in the relative throughput class and ω is less than or equal to the available bandwidth (i.e., the difference between the effective bandwidth and the sum of the quantum rates of all absolute throughput flows), absolute throughput can be supported and the flows from the absolute throughput class will receive as much bandwidth as they are promised. In the basic DRAFT (non-adaptive) mechanism, absolute throughput can be supported if the value of ω is chosen appropriately. The appropriate ω can be computed a priori from an estimate of the aggregate throughput requirements of the flows from the relative throughput class and the flows from the absolute throughput class, as shown in Eq. 10.

ω ≥ Σ_{∀j∈RT} Qr_j / (λ_e − Σ_{∀i∈AT} Qr_i)   (10)
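The following sketch computes the smallest ω that satisfies Eq. 10 for a given committed load; with the loads of the worked example in the next paragraph (1 Mbps absolute, 3 Mbps relative, λ_e = 1.6 Mbps) it returns 5.

/* Sketch of Eq. 10: the minimum omega needed to keep supporting the
 * absolute throughput class for a given committed load. */
#include <stdio.h>

static double min_omega(double sum_qr_rt, double sum_qr_at, double lambda_e)
{
    double headroom = lambda_e - sum_qr_at;   /* bandwidth left for the RT class */
    if (headroom <= 0.0)
        return -1.0;                          /* AT demand alone exceeds capacity */
    return sum_qr_rt / headroom;              /* Eq. 10 */
}

int main(void)
{
    /* Numbers from the example in the next paragraph: 1 Mbps AT, 3 Mbps RT,
     * effective capacity 1.6 Mbps in a 2 Mbps WLAN. */
    printf("omega >= %.1f\n", min_omega(3e6, 1e6, 1.6e6));
    return 0;
}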
For example, suppose a WLAN is expected to support flows from the absolute throughput class up to 1 Mbps and flows from the relative throughput class up to 3 Mbps. If we assume λ_e to be 1.6 Mbps in a 2 Mbps WLAN, the appropriate value of ω computed from Eq. 10 is 5. This hypothetical network can support a total offered load of up to 4 Mbps in a 2 Mbps WLAN. In this setup, if the offered load is not overly excessive, absolute throughput will be maintained; otherwise, it will deteriorate gracefully to relative throughput. Two possible ways are suggested to maintain this absolute throughput support. First, we can limit the amount of committed throughput (or total quantum rate) from both the absolute throughput class and the relative throughput class. Usually, this can be achieved via admission control, for example at the AP: the AP can monitor how much bandwidth is committed and then accept or deny a new flow according to the available bandwidth. Without admission control, absolute throughput cannot be guaranteed. In this paper, we propose an adaptive mechanism to maintain the above condition for larger relative throughput loads. We start by multiplying both sides of Eq. 9 by the quantum rate of class k (Qr_k). According to Eq. 11, as long as the experienced throughput of flows in the relative throughput class (the right side of Eq. 11) is equal to or higher than the ratio between the quantum rate and ω (the left side of the equation), the ability to provide absolute throughput will be maintained.

Qr_k · Σ_{∀j∈RT} Qr_j / ω ≤ Qr_k · [λ_e − Σ_{∀i∈AT} Qr_i]  ⟺  Qr_k/ω ≤ (Qr_k / Σ_{∀j∈RT} Qr_j) · [λ_e − Σ_{∀i∈AT} Qr_i]   (11)

Three parameters are important to implement the adaptive feature: 1) the experienced throughput, 2) the quantum rate, and 3) ω. The experienced throughput can be measured independently at each MS, while the quantum rate is a parameter specified by each MS. The parameter ω is assumed to be a standard value for the network. The simplicity of the scheme is that these three parameters are available locally at each MS; therefore, each MS can monitor its own experienced throughput and implement the control in a fully distributed way. We therefore assume that relative throughput traffic classes can be freely added to the network. As the number of flows of the relative throughput class increases, the fair share of each flow will decrease.

To maintain the condition in Eq. 11 under such increasing load, each MS applies a simple feedback control to its quantum rate; this simple mechanism works relatively well, and other, more sophisticated feedback control mechanisms can be substituted to improve the performance. In this simple feedback control mechanism, the adaptive quantum rate is reduced by an amount equal to a sensitivity parameter k_i whenever the network is detected to be violating a critical condition i. A critical condition is declared when the experienced throughput of a relative throughput class falls below a threshold expressed as a multiple of the ratio between the quantum rate and ω. The average experienced throughput is calculated using the standard exponential averaging method to smooth out the instantaneous experienced throughput, and the instantaneous experienced throughput is calculated from the amount of data sent (in bits) and the time required to transmit these data. We arbitrarily set k_i = 10^-i % for i = 1, 2, 3. The sensitivity parameters and the number of levels represent a tradeoff between the complexity and the performance of the mechanism; we discuss this in greater detail in Section 5. For example, if the experienced throughput of TC_k ∈ RT is lower than 120% of Qr_k/ω, critical condition 1 (i = 1) is declared and the adaptive quantum rate is reduced by 0.1% (k_1 = 10^-1 %). If the experienced throughput is lower than 100% of this ratio, critical condition 2 (i = 2) is declared and the adaptive quantum rate is further reduced by 0.01% (k_2 = 10^-2 %). Finally, if the experienced throughput is lower than 80% of the ratio, critical condition 3 is declared and the adaptive quantum rate is further reduced by 0.001% (k_3 = 10^-3 %). When the aggregate throughput of the relative throughput class decreases and/or the critical condition subsides, the adaptive quantum rate is slowly increased back to the original quantum rate value (which equals the desired throughput). A sketch of this control loop is given below.
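What follows is a minimal C sketch of the feedback control described above, not the authors' original listing: the thresholds (120%, 100%, 80% of Qr_k/ω) and the reduction steps k_i = 10^-i % come from the text, while the variable names, the exponential-averaging constant, and the restore step are illustrative assumptions.

/* Sketch of the adaptive quantum rate control for a relative throughput class.
 * Thresholds and reduction steps follow the text; the smoothing constant and
 * the restore step are illustrative assumptions. */
#include <stdio.h>

typedef struct {
    double qr;        /* original quantum rate (desired throughput), bps */
    double qr_adapt;  /* adaptive quantum rate actually used, bps */
    double avg_thr;   /* exponentially averaged experienced throughput, bps */
} RtClass;

static void update(RtClass *c, double inst_thr, double omega)
{
    const double beta = 0.9;                        /* assumed smoothing factor */
    c->avg_thr = beta * c->avg_thr + (1.0 - beta) * inst_thr;

    double ratio = c->qr / omega;                   /* Qr_k / omega, see Eq. 11 */
    if (c->avg_thr < 1.2 * ratio) {                 /* critical condition 1 */
        c->qr_adapt *= 1.0 - 0.001;                 /* reduce by k1 = 0.1% */
        if (c->avg_thr < 1.0 * ratio)               /* critical condition 2 */
            c->qr_adapt *= 1.0 - 0.0001;            /* further reduce by k2 = 0.01% */
        if (c->avg_thr < 0.8 * ratio)               /* critical condition 3 */
            c->qr_adapt *= 1.0 - 0.00001;           /* further reduce by k3 = 0.001% */
    } else if (c->qr_adapt < c->qr) {               /* condition has subsided */
        c->qr_adapt *= 1.0 + 0.001;                 /* slowly restore (assumed step) */
        if (c->qr_adapt > c->qr)
            c->qr_adapt = c->qr;                    /* never exceed the original rate */
    }
}

int main(void)
{
    RtClass c = { 1e6, 1e6, 150e3 };                /* 1 Mbps class seeing ~150 Kbps */
    for (int i = 0; i < 5; i++)
        update(&c, 150e3, 5.0);                     /* measured 150 Kbps, omega = 5 */
    printf("adaptive Qr = %.0f bps\n", c.qr_adapt);
    return 0;
}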