RFC: A Robust and Fast Rate Control Scheme in IEEE 802.11 Wireless Networks Yanxia Rong, Liran Ma, Amin Y. Teymorian, Xiuzhen Cheng, Hyeong-Ah Choi Department of Computer Science George Washington University, Washington DC, 20052, USA. Email: {yxrong,lrma,amin,cheng,hchoi}@gwu.edu
Abstract—In this paper, we study the rate adaptation problem in IEEE 802.11 based wireless networks. The majority of existing rate control schemes are based on a widely held intuition that the data rate should be adjusted according to a comprehensive view of real-time channel conditions. Specifically, the rate is reduced on frame losses, which indicate a degradation in channel condition, and increased upon successful transmissions, which imply an improved channel condition. The first part of this intuition works well because a lower rate employs a more robust modulation, increasing the chance that a frame is successfully recovered. However, we discover that the second part of the intuition is misleading in several important scenarios: it mistakenly directs rate changes such that throughput is heavily degraded. To address this issue, this paper proposes RFC, a robust and fast rate control scheme based on sequential hypothesis testing. It relies on a novel combination of sequential hypothesis testing and short-term loss ratio to determine a better sending rate for a device. Through extensive OPNET simulations, RFC is shown to outperform mainstream rate adaptation solutions and to quickly and automatically adapt to dynamic channel conditions.
I. INTRODUCTION

The proliferation of Wi-Fi network infrastructure has made IEEE 802.11 based networks an important part of present Internet access. Modern commodity 802.11 devices support multiple physical (PHY) layer transmission rates (e.g., 6 Mbps, 36 Mbps, 54 Mbps) as specified in the 802.11 standard [1]. These available data rates can be dynamically switched at a device to match the channel conditions, with the goal of optimizing throughput. This process is known as rate adaptation, and it is a critical factor in improving system performance. Yet no rate adaptation mechanism is specified by the standard. The major challenge of rate adaptation is how to accurately estimate the channel condition so that an appropriate rate can be selected. Obviously, inaccurate estimates result in poor rate selections. Nonetheless, measuring channel quality is a non-trivial problem because the channel condition is dynamically influenced by factors such as signal fading, interference, hidden terminals, and mobility. The patterns and causes of transmission failures are important factors in conducting rate adjustment. Existing metrics include packet loss ratio, signal-to-noise ratio (SNR), and bit error rate (BER).
With the above channel condition measures, a variety of rate adaptation schemes [2]–[7] have been proposed in the literature. For example, OAR [3] employs the physical layer metric SNR to direct rate adjustments, while ARF [2], AARF [4], SampleRate [5], ONOE [6], and RRAA [7] are based on packet losses and successes. A technique shared by these schemes is that rate selection is conducted against pre-defined thresholds: the value of a channel indicator metric is compared against a list of threshold values representing boundaries between the data rates. The differences among the schemes lie in their combinations of metrics and threshold values. Since channel quality is directly represented in physical layer measurements such as SNR and BER, schemes that employ these measurements seem to have the best potential to significantly improve throughput: the larger the SNR, the better the chance of receiving data with a low BER, and thus a higher rate is favorable. Nevertheless, Ref. [8] shows that SNR has little predictive value for loss ratio. In fact, the loss ratio is probably most attributable to multi-path fading rather than attenuation or interference. Additionally, as demonstrated in [7], SNR variations notably degrade the accuracy of rate estimation. Lastly, exchanging such information between the physical layer and the MAC layer violates the IEEE 802.11 standard. For those schemes that rely on observing packet transmissions, the implicit assumption is that the transmission rate should be increased (decreased) upon transmission successes (failures). It is reasonable to believe that severe packet losses indicate deteriorated channel conditions, provided the transmission failures are not caused by a hidden terminal [7]. Thus, the rate should be reduced to a lower rate, where a more robust modulation scheme achieves better noise tolerance.
However, the assumption no longer holds in the case of rate increase, because a reduced packet loss ratio is not necessarily a sign of improved channel quality. We discover that this widely accepted intuition for rate increase incurs unnecessary rate switching oscillation in some scenarios, which greatly deteriorates throughput. Moreover, the existing schemes use a fixed number of observations to make rate adaptation decisions, which makes them incapable of accommodating fast changing channel conditions. Both of the above
observations are described in detail in Subsection IV-A. As a result, none of these packet loss based schemes provides a satisfactory solution for directing rate adaptation. Therefore, the fundamental challenge remains how to accurately estimate, in real time, the channel condition for a candidate rate while sending at a different rate. In addition, an ideal rate control mechanism should be able to capture the dynamic features of channel conditions so as to choose the optimal rate. With these challenges in mind, we design a robust and fast sequential hypothesis testing based rate control scheme, termed RFC, that increases network throughput. It uses sequential hypothesis testing to guide its choice of an optimal transmission rate; the acceptance of a hypothesis is based on observations of data frame transmissions. Powered by sequential hypothesis testing, RFC is able to exploit evanescent possible gains in the wireless channel with a small number of observations. Moreover, it is capable of automatically adapting to different channel scenarios. Another nice feature of RFC is that it requires neither specialized hardware nor modifications to the IEEE 802.11 standard. The rest of this paper is organized as follows. Section II discusses related work. In Section III-A, a brief introduction to hypothesis testing is presented. The proposed rate control scheme is elaborated in Section IV. The performance of RFC is evaluated in Section V. Finally, our conclusions and future research directions appear in Section VI.

II. RELATED WORK

Rate control for 802.11 wireless networks first appeared as the ARF algorithm [2], which has inspired many follow-up schemes. ARF is the most popular rate control algorithm in 802.11b WLAN products. In this scheme, a probe packet is sent upon either 10 consecutive transmission successes or the expiration of a timer. If the probe packet succeeds, ARF increases the transmission rate.
Conversely, a rate drop decision is made upon two consecutive transmission failures. It has been shown that the performance of ARF suffers severely when multiple nodes contend for the wireless channel [9]. On the other hand, if the channel conditions change slowly, its rate increase strategy results in rate switching oscillation, and thus the throughput is degraded. In order to improve ARF's short-term and long-term adaptation, Adaptive ARF (AARF) was proposed in [4]. It adapts the threshold (i.e., the number of consecutive transmission successes) used to decide when to increase the current rate, using binary exponential backoff. Still, these schemes lack the ability to opportunistically capture the short-term dynamics of the channel condition [7]. Different from ARF-type approaches, the key idea behind RBAR [10] is that rate adaptation is carried out by the receiver instead of the sender. Specifically, a receiver estimates channel quality and selects an appropriate rate during the RTS/CTS frame exchange. The motivation is that receiver-based rate selection during the latest RTS/CTS should provide channel quality estimation closer to the actual condition than sender-based approaches such as ARF. Their simulation
results demonstrate that RBAR performs consistently well because of its ability to efficiently estimate channel quality. However, the information on the chosen rate must be carried back to the sender via a CTS frame, thus violating the current 802.11 standard. The OAR protocol [3] attempts to improve throughput by exploiting periods of high-quality channel condition. Relying on channel coherence times that typically exceed multiple packet transmission times, OAR sends multiple back-to-back data packets whenever the channel quality is good. Moreover, over longer time periods, OAR ensures that the time-shares of channel access granted to each device are the same as achieved by a single rate. Theoretically, network throughput can be enhanced using this technique; in reality, however, predicting the channel coherence time is difficult. SampleRate [5] is also a widely used rate control scheme because of its efficiency in static settings. It is implemented in MadWiFi [11], a popular Linux kernel device driver for WLANs. In SampleRate, the expected transmission time for each rate is updated on a per-transmission basis. Additionally, probe packets are periodically transmitted at other data rates in order to examine whether a higher rate can be supported under current channel conditions. A new data frame is then transmitted at the rate that currently has the smallest expected per-packet transmission time. This allows SampleRate to be more aggressive than its peer algorithms because it inherently favors a higher data rate. The price paid for this aggressiveness is frequent transmission rate fluctuations that can lead to significant packet loss. Moreover, as pointed out by [7], SampleRate is too sensitive to the failure of probe packets: a single lost probe packet results in an extended period during which the station cannot increase its rate. Another mechanism included in MadWiFi is ONOE [6].
It is a credit based algorithm in which the credit value is determined by a combination of successful transmissions, erroneous transmissions, and retransmissions. For example, ONOE keeps raising its credit value upon successful transmissions at a particular rate; once a certain threshold is met, the current transmission rate is increased to the next higher rate. Similarly, credit is deducted for failed transmissions/retransmissions, and a rate drop decision is made when the credit falls below a certain threshold. ONOE is therefore more conservative in rate selection than SampleRate and less sensitive to individual packet losses than ARF. However, ONOE samples the channel condition every one to ten seconds (long-term estimation), while it has been shown that if the sampling period exceeds several hundred milliseconds, the station lacks the capability to capture the short-term opportunistic gain of the wireless channel [7]. One of the arguably best rate control algorithms is the Robust Rate Adaptation Algorithm (RRAA) [7]. After a thorough systematic study, its authors discover that the five design guidelines adopted by most existing algorithms can be misleading in a variety of scenarios. RRAA further improves throughput over previous work by utilizing a short-term loss ratio to direct rate changes. Their simulation results validate
their hypothesized throughput increase. Nevertheless, RRAA uses a fixed number of frames to gauge the current channel condition and subsequently guide rate adjustment. A fixed window, by nature, does not respond to some channel dynamics fast enough. Moreover, the intuition adopted by RRAA for rate increase results in severe throughput degradation in some important scenarios, as will be shown in Subsection IV-A. In addition to rate control performed at the MAC layer, physical layer diversity is explored in [12]. Through a combination of experiment, simulation, and analysis, the traits of physical layer diversity are discovered. One of the main findings is that physical layer channel diversity can help reduce the effective level of MAC contention, thus allowing for improved throughput. Furthermore, when designing a rate adaptation scheme, it is very important to differentiate the causes of packet loss. Important observations concerning packet loss in a 38-node urban multi-hop 802.11b network are presented in [8]. Additionally, characteristics and network usage statistics of IEEE 802.11 WLANs in various settings are examined in [13] and [14]. Our proposed solution, RFC, differs from previous work in that it uses sequential hypothesis testing to efficiently and automatically make a better choice of rate. The algorithm is designed to make rate selections under different network scenarios that include various loss patterns and dynamic changes in channel condition. These features enable RFC to outperform the existing mainstream rate adaptation techniques. Further, RFC provides a cost-effective performance enhancement to Wi-Fi networks (i.e., it can be easily implemented in an access point or network card driver) and requires neither specialized hardware nor changes to the underlying IEEE 802.11 standard.

III. PRELIMINARIES

A. Sequential Hypothesis Testing

In this section, we introduce sequential hypothesis testing, which is the core technique used by the RFC algorithm.
Assume that there are two hypotheses, H0 and H1, with corresponding probability distribution functions P(x|H0) and P(x|H1), respectively. In order to decide whether H0 or H1 is true, a sequence of observations x1, x2, ... is made. Given x1, x2, ..., xn, we can compute a likelihood ratio ρ(n):

ρ(n) = P[x1, x2, ..., xn | H1] / P[x1, x2, ..., xn | H0].  (1)

If any two observations are independent of each other, Eq. (1) can be rewritten as

ρ(n) = ρ(n − 1) · P[xn | H1] / P[xn | H0],  (2)

where ρ(1) is

ρ(1) = P[x1 | H1] / P[x1 | H0].  (3)
If ρ(n) is larger than a predetermined threshold, the likelihood that x1, x2, ..., xn occurred under H1 is sufficiently larger than that under H0, and we conclude that H1 is true. Conversely, if ρ(n) is smaller than another predetermined threshold, the likelihood under H0 dominates and H0 is accepted. If ρ(n) is neither large enough nor small enough to make a decision, additional observations are made, and a new likelihood ratio is obtained by accumulating the per-observation likelihood ratios using Eq. (2). For any hypothesis test, a wrong decision is possible, and there are two kinds of error. The first is accepting H1 when H0 is actually true; its probability is denoted by α. The second is accepting H0 when H1 is actually true; its probability is denoted by β. In order to terminate a sequential test with a correct decision, there has to be enough confidence, i.e., α must be small when H1 is accepted, and β must be small when H0 is accepted. The values of α and β are application-specific. Given the required α and β, we can compute two threshold values A and B, chosen such that the two types of error are guaranteed to be less than α and β, respectively. Suppose a sample sequence (x1, x2, ..., xn) leads to the acceptance of H1, i.e., the test terminates with H1 accepted. Then both P[x1, x2, ..., xn | H1] ≥ (1 − β) and P[x1, x2, ..., xn | H0] ≤ α hold. As a result,

ρ(n) = P[x1, x2, ..., xn | H1] / P[x1, x2, ..., xn | H0] ≥ (1 − β)/α.  (4)
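As an aside, the stopping rule above is easy to express in code. The following is a minimal sketch for Bernoulli observations (e.g., per-frame loss/success indicators); the hypothesized loss probabilities and error bounds are illustrative values, and the thresholds are the standard Wald bounds A = (1 − β)/α and B = β/(1 − α) discussed next.

```python
def sprt(observations, p0, p1, alpha, beta):
    """Wald's sequential probability ratio test for Bernoulli data.

    H0: loss probability = p0; H1: loss probability = p1 (p1 > p0).
    Returns ("H0" | "H1" | "undecided", number of samples consumed).
    """
    A = (1 - beta) / alpha      # accept H1 when the ratio reaches A
    B = beta / (1 - alpha)      # accept H0 when the ratio falls to B
    ratio = 1.0
    for n, x in enumerate(observations, start=1):
        # x = 1 means a frame loss, x = 0 a success
        if x:
            ratio *= p1 / p0
        else:
            ratio *= (1 - p1) / (1 - p0)
        if ratio >= A:
            return "H1", n
        if ratio <= B:
            return "H0", n
    return "undecided", len(observations)
```

With p0 = 0.1 and p1 = 0.5 and α = β = 0.05, a run of consecutive losses drives ρ(n) past A after only two samples, while a clean run needs six samples to accept H0; the test stops as soon as the evidence suffices, rather than after a fixed window.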
Hence, we have an upper bound for A: A ≤ (1 − β)/α. A similar derivation gives a lower bound for B: B ≥ β/(1 − α). As shown in [15], a test with A and B set to these bounds provides at least the same level of precision as a test performed with the exact values of A and B. With these threshold values, the sequential hypothesis test can: i) accept H1 and terminate if ρ(n) ≥ A; ii) accept H0 and terminate if ρ(n) ≤ B; iii) conduct additional observations while B < ρ(n) < A.

B. Rates in 802.11 Standards: the Basics

Once a rate is chosen by the MAC layer, the PHY layer needs to be notified. The notification is delivered to the PHY Layer Convergence Protocol (PLCP) sublayer via a primitive, a "fundamental instruction" used for interaction between the PHY and MAC layers. Specifically, the PHY-TXSTART.request primitive includes a parameter named DATARATE that specifies the data rate at which the MAC frame payload will be transmitted. However, the preamble is transmitted at the lowest rate supported by the PHY layer (e.g., 1 Mbps for 802.11b and 6 Mbps for 802.11a). The header of a MAC frame is also transmitted at a low rate of
1 Mbps, 2 Mbps, or 6 Mbps, depending on the preamble that was sent (long, short, or fixed 12-symbol OFDM, respectively). In order to actually spread the signal, an 802.11 transmitter combines the PLCP Protocol Data Unit (PPDU)1 with a spreading sequence through the use of a binary adder. For 1 Mbps and 2 Mbps operation in 802.11b, the spreading code is the 11-chip Barker sequence, 10110111000. For 5.5 Mbps and 11 Mbps operation, complementary code keying (CCK) provides the spreading sequences; this is necessary for the efficient processing of data at these higher rates. The modulator then converts the spread binary signal into an analog waveform using a modulation type that depends on the chosen data rate. For example, 1 Mbps operation uses differential binary phase shift keying (DBPSK), while 2 Mbps transmission adopts differential quadrature phase shift keying (DQPSK). This technique enables the data stream to be sent at 2 Mbps using the same amount of bandwidth as one sent at 1 Mbps. Similar methods are used for higher rates such as 5.5 Mbps and 11 Mbps. Since both signal spreading and modulation are performed in hardware, there is no overhead for a rate switch from the perspective of the MAC layer; consequently, the cost of switching rates need not be considered in a MAC layer rate control design. However, different rates do introduce different energy consumption at the hardware level.

1 PPDU contains the MAC Protocol Data Unit (MPDU) with PLCP preamble (synchronization pattern) and header.

C. Transmission Rates vs. Coverage

In wireless communication, signals may be attenuated exponentially by transmission through a medium, and a transmitted signal's tolerance to attenuation is related to its modulation scheme. As discussed in Subsection III-B, higher rates typically employ denser modulation encodings, which makes them more vulnerable to channel errors. As a result, a lower BER (i.e., a better channel) is required to successfully recover frames sent at a higher rate. Accordingly, a higher rate ends up carrying a shorter transmission range. Taking IEEE 802.11a as an example, detailed results [16] obtained from a field test using the Cisco Aironet 1130AG are listed in Table I.

Data Rate (Mbps):   6    9    12   18   24   36   48   54
Indoor Range (m):   100  91   84   76   69   60   45   24
Outdoor Range (m):  198  190  183  168  152  130  91   30

TABLE I: TRANSMISSION RATES AND THEIR COVERAGE RANGES.

IV. RFC DESIGN

In this section, we first discuss the motivation for developing RFC; we then elaborate its detailed design.

A. Motivation

1) What is a good guideline for increasing rate?: Most current rate adaptation designs follow an intuitive and seemingly effective guideline: the transmission rate should be increased upon transmission successes. However, our study shows that this guideline can be misleading in practice and incur a significant performance penalty. In this section, we critically examine it using OPNET experiments and simple analysis. We choose RRAA as the target scheme because it is arguably the best reported scheme implemented with the above guideline. In the first scenario, as shown in Fig. 1(a), a static station is placed at a position where it is outside the transmission range of the AP at rate R, say 24 Mbps, but inside the AP's transmission range at the rate R−, i.e., 18 Mbps. The AP keeps sending packets to the station with a packet size of 1024 bytes and a packet inter-arrival time of 0.01 seconds. We execute two simulation runs, one with RRAA applied and the other with the data rate fixed at 18 Mbps. The result is shown in Fig. 1(b). As we can see, with RRAA the station's throughput fluctuates dramatically over time, while with no rate control algorithm (the AP fixes its data rate at 18 Mbps) the station's throughput stays constant at around 1.08 Mbps, higher than that achieved by RRAA.

[Fig. 1. A scenario in which a static station suffers rate switching oscillations. (a) A static case: the client is inside the AP's range at rate R− but outside it at rate R. (b) Throughput (Mbps) vs. time (seconds): RRAA fluctuates, while Fixed Rate is a straight line.]
When the AP transmits at 18 Mbps, the station has a nearly perfect channel (packet loss ratio < 10%). According to RRAA, the AP will then switch to 24 Mbps, which leaves the station outside the AP's transmission range, so it barely receives any packets (packet loss ratio > 85%). The AP then has to switch back to 18 Mbps, where the good channel condition leads it to switch to 24 Mbps again. As a result, the AP keeps oscillating between 24 Mbps and 18 Mbps. In RRAA, at 24 Mbps the AP needs at least 11 transmissions (if all of the first 11 transmissions fail) to determine that the loss ratio at 24 Mbps already exceeds the threshold, and at 18 Mbps the AP transmits at most 20 packets before increasing to 24 Mbps.
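The oscillation cycle just described can be checked with back-of-the-envelope arithmetic (a sketch assuming all 11 attempts at 24 Mbps fail and all 20 packets at 18 Mbps succeed):

```python
# One RRAA oscillation cycle: up to 20 packets sent at 18 Mbps (assumed to
# succeed), then at least 11 consecutive failures at 24 Mbps before the
# loss-ratio threshold is crossed and the rate drops back.
failed, succeeded = 11, 20
loss_fraction = failed / (failed + succeeded)
print(round(loss_fraction, 3))  # prints 0.355, i.e., about one third
```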
Thus, overall, about one third of the transmission attempts fail. In addition, it is reported in the MadWifi forum that SampleRate suffers the same problem, i.e., stations keep oscillating between two rates, in similar realistic scenarios [17]. Therefore, a low loss ratio at the current data rate does not necessarily imply a good channel condition at the next higher data rate. It is worth mentioning that some frame probing based schemes, such as ARF and AARF, perform better than RRAA in this scenario. In these schemes, a probe packet is sent out to assess the channel condition at a higher rate, and the transmission rate is increased only if the probe succeeds. Since the channel condition is examined by the probe frame itself, this is more accurate than inferring it from transmissions at the current rate. However, channel condition estimation via probe frames has two flaws: an accidentally successful probe can trigger an incorrect rate increase [7], and an unsuccessful probe may lead to serious throughput degradation. Thus, the existing probing based schemes fail to estimate the channel condition accurately because a small number of probing packets cannot capture the real channel condition at the higher rate.

2) Is a fixed sample size good enough?: Another common technique used by existing schemes (such as ARF, AARF, and RRAA) is to make a fixed number of transmission observations to direct rate changes. However, due to the dynamic nature of the wireless environment, employing a fixed number of observations is not robust: the fixed sample size makes a scheme either too aggressive or too conservative. The side effect of the aggressiveness of a small sample size (e.g., ARF) is demonstrated in [7]. Next, we show how a reasonably large sample size (e.g., RRAA) can also degrade throughput.
In this scenario, a mobile station moves away from the AP at a speed of 10 m/s while downloading files from an FTP server through the AP. The station sends download requests to the server every 2 seconds, and each file is 50000 bytes. We again execute two simulation runs, one with RRAA applied and the other with the transmission rate fixed at 6 Mbps for both the AP and the client. We found that the overall throughput received by the station with the rate fixed at 6 Mbps is about 18% higher than with RRAA applied. The reason that RRAA, an algorithm aiming to adapt to channel variations, fails to outperform a scheme with no adaptation at all is that its fixed sample size is not acute enough. For fixed sample size algorithms, it is always difficult to determine the number of observations needed to achieve both accurateness (the channel estimate is accurate) and acuteness (the station can quickly switch to the rate that benefits its throughput).

B. RFC Algorithm

Design philosophy: As illustrated in Subsection IV-A, to decide whether to increase, decrease, or remain at the current rate, it is desirable to infer the current channel condition both accurately and acutely, which fixed sample size
tests fall short of. The proposed RFC algorithm therefore adopts sequential hypothesis testing. If changing the rate is treated as one hypothesis and remaining at the current rate as the alternative, the sequential test stops and accepts one of the hypotheses as soon as the probabilities of committing errors are bounded, i.e., a hypothesis is accepted with large enough confidence. Thus, by applying sequential hypothesis testing, both accurateness and acuteness can be achieved2. The proposed RFC employs step-wise rate adjustment, i.e., it increases or decreases the current rate by one level at a time. The criterion for increasing (decreasing) the current rate is whether or not the higher (lower) rate can provide more throughput. The throughput is determined by the sending rate and the loss ratio at that rate. However, computing throughput is not as straightforward as directly applying the nominal sending rates specified in the 802.11 standard, because different portions of a MAC frame are sent at different rates: the payload of a MAC frame is sent at the current nominal rate, while the header of a frame is always sent at the lowest rate. As an example, the transmission time Tx(R) required at each rate R in 802.11a to transmit a packet with a payload of 1300 bytes is shown in Table II (the time spent in the backoff procedure is not included). The throughput also depends on the loss ratio at that rate: if at rate R (with transmission time Tx(R)) the loss ratio is p(R), the throughput is proportional to (1 − p(R))/Tx(R).

Data Rate (Mbps): R           6         9         12        18        24        36        48        54
Transmission Time (s): Tx(R)  0.001936  0.001331  0.001011  0.000707  0.000554  0.000402  0.000326  0.000302

TABLE II: RATE AND TRANSMISSION TIME PAIRS.
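The quantity (1 − p(R))/Tx(R) can be computed directly from Table II. A small sketch (the loss ratios below are illustrative, not measured values):

```python
# Per-packet transmission times from Table II (802.11a, 1300-byte payload).
TX_TIME = {6: 0.001936, 9: 0.001331, 12: 0.001011, 18: 0.000707,
           24: 0.000554, 36: 0.000402, 48: 0.000326, 54: 0.000302}

def throughput_score(rate_mbps, loss_ratio):
    """Quantity proportional to throughput: (1 - p(R)) / Tx(R)."""
    return (1 - loss_ratio) / TX_TIME[rate_mbps]

# Illustrative comparison: a lossy higher rate does not always win.
# 24 Mbps with 40% loss delivers fewer packets per second than
# 18 Mbps with 5% loss, even though its nominal rate is higher.
assert throughput_score(24, 0.40) < throughput_score(18, 0.05)
```

Note that the scores are per-packet delivery rates: the header portions sent at the base rate are already folded into Tx(R), which is why the scores are not simply proportional to the nominal rates.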
To keep stations from continuously attempting a higher rate that has bad channel condition, while not impeding stations from opportunistically increasing to a rate that may provide higher throughput, RFC sets penalties when a station begins oscillating, making it harder for the station to attempt the higher rate again soon. Once the station stabilizes at the higher rate, the penalty is revoked.

2 To test between two point hypotheses H0: θ = θ0 and H1: θ = θ1, the Neyman-Pearson test is the most powerful one, i.e., with one error probability fixed, the other error probability is minimized. The sequential hypothesis test instead requires the two error probabilities to be bounded. In addition, the Neyman-Pearson test uses a fixed sample size, while the sample size in a sequential hypothesis test is a random variable. Compared with the Neyman-Pearson test, sequential hypothesis testing requires a smaller sample size on average [15].

1) Rate Decrease: Intuitively, when a station transmitting at the current data rate, say R, is suffering from a large
packet loss, the station should consider switching to the next lower rate, R−. Since at this time R− may also suffer a large packet loss (if the same station transmits using R−), the station only switches to R− if it can achieve a higher throughput at R− than at R. Suppose the station is at rate R with current loss ratio p(R); the next lower rate is R− with loss ratio p(R−), and the transmission times at R and R− are Tx(R) and Tx(R−), respectively. Then the criterion for decreasing the rate is

(1 − p(R−)) / Tx(R−) > (1 − p(R)) / Tx(R).  (5)

Equivalently, when

p(R) > 1 − (1 − p(R−)) Tx(R) / Tx(R−),  (6)

the throughput provided by the current rate R is lower than that provided by the lower rate R−. We define P∗− as the critical value of p(R) for switching the rate from R to R−, as shown in Eq. (7):

P∗− = 1 − (1 − p(R−)) Tx(R) / Tx(R−).  (7)
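Using the transmission times of Table II, the critical value of Eq. (7) is a one-line computation. A sketch (the historical loss ratio p(R−) = 0.05 is an illustrative value, not one from the paper):

```python
TX_TIME = {18: 0.000707, 24: 0.000554}  # seconds per packet, from Table II

def critical_loss(tx_R, tx_lower, p_lower):
    """Eq. (7): P*_- = 1 - (1 - p(R-)) * Tx(R) / Tx(R-)."""
    return 1 - (1 - p_lower) * tx_R / tx_lower

# At R = 24 Mbps with historical p(R-) = 0.05 at R- = 18 Mbps,
# dropping to 18 Mbps pays off once p(24) exceeds roughly 0.26.
p_star = critical_loss(TX_TIME[24], TX_TIME[18], 0.05)
```

In other words, the station tolerates a substantial loss ratio at the higher rate before the lower rate becomes preferable; this critical value is exactly what the sequential tests described next compare the observed loss ratio against.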
Note that p(R−) could be obtained through probing packets transmitted at the rate R−; detailed probing techniques can be found in [18] and [19]. However, without a considerable number of probing packets, p(R−) cannot be estimated accurately. Here, we instead use the historical value of p(R−) perceived by the station the last time it transmitted at rate R−, avoiding the overhead of probing packets. The obsoleteness of the historical values is compensated for by a parameter λ in the test (as will be shown later). When p(R−) is available, and hence the critical value P∗− is available, the task becomes to test whether the current loss ratio p(R) is larger than P∗−, i.e., whether to change the rate (one hypothesis) or stay at the current rate (the alternative hypothesis). The test has to finish as soon as possible while the accepted hypothesis is true with high probability, e.g., larger than 0.95. In RFC, two sequential hypothesis tests, denoted by Treg and Tfast, run in parallel to determine whether to switch to the next lower rate. Each test decides whether to stay at the current rate (hypothesis H0) or switch to the next lower rate (hypothesis H1). In the first test, Treg, if p(R) < P∗−, the current rate is maintained, and if p(R) ≥ λP∗−, the rate is switched to the next lower rate. Here, λ is a design parameter that plays a very important role in the performance of RFC. Its primary functions are to: 1) quickly respond to dynamic changes in channel conditions; and 2) compensate for the inaccuracy (i.e., obsoleteness) of P∗−. Therefore, Treg tests H0: p(R) < P∗− versus H1: p(R) ≥ λP∗−. The Treg test terminates whenever a decision is made (i.e., either H0 or H1 is determined to be true). The outcomes are: (1) If H1 is accepted, i.e., the station should switch to the next lower rate, the station computes the current
loss ratio, p(R), and switches to the rate R− (the other test, Tfast, is forcibly terminated); at R−, both Treg and Tfast are then started to test whether to switch to an even lower rate. (2) If H0 is accepted, the station stays at the current rate without interrupting the ongoing Tfast, the value of p(R) is updated, and a new Treg begins with the next available transmission attempt. There are also cases where the channel condition degrades dramatically, i.e., the loss ratio exceeds a large value, denoted by P∆, in a short period of time. In these situations, a station wants to reduce its rate as soon as possible in order to reduce packet loss. For this reason, the second hypothesis test, Tfast, runs in parallel with Treg. Tfast tests H0: p(R) < P∗− versus H1: p(R) ≥ P∆, and terminates when either H0 or H1 is accepted. Similarly, the outcomes of Tfast are: (1) If H1 is accepted, the loss ratio at rate R is updated, the station decreases its rate to R−, and both Treg and Tfast are started to test whether to switch to the next lower rate. (2) If H0 is accepted, the station updates the loss ratio p(R) and restarts Tfast with the next transmission attempt. Thus, once either Treg or Tfast decides to decrease the rate, the loss ratio is updated, the other test is terminated, and the rate is decreased. If either Treg or Tfast terminates by deciding to stay at the current rate, the loss ratio is updated and that test is restarted without interrupting the other ongoing test.

2) Rate Increase: Recall that the sequential hypothesis tests Treg and Tfast, which test whether to switch to the lower rate, update the current loss ratio whenever either of them terminates. Thus, the station can utilize the latest updated loss ratio to decide whether to switch to the higher rate.
Note that only when Treg or Tfast terminates by accepting H0, i.e., by staying at the current rate instead of decreasing it, does the station consider whether to increase the rate. This is reasonable because accepting H0 implies that the current loss ratio is small. However, as we illustrated in Section IV-A1, a small loss ratio at the current rate is not necessarily a good indication for increasing the rate, because the higher rate may suffer a large packet loss. Although some algorithms such as ARF use probing packets to test the higher rate, a small number of probing packets may fail to estimate the packet loss ratio accurately, as pointed out in [7]. Nevertheless, the station does not want to spend too long testing the higher rate's loss ratio before switching to a higher rate that might enjoy good channel conditions. The station also wants to avoid oscillating between a rate with a low loss ratio and a higher rate with a large loss ratio, e.g., the case shown in Section IV-A1. Bearing these issues in mind, we propose a feedback mechanism for increasing the rate. Specifically, at each rate R, to decide whether to increase the rate, the station compares the loss ratio p(R) with an opportunistic threshold, Pδ(R), which is initially set to a maximum value,
Pmax(R). If the loss ratio p(R), as most recently updated by Treg or Tfast (after either test terminates by accepting H0), is smaller than the threshold Pδ(R), the station increases the rate to the next higher one, R+. Then at R+, if the Treg or Tfast test terminates by accepting H1, i.e., a large loss ratio at R+, the station switches back to R and halves the value of Pδ(R). In other words, whenever the station switches from R+ back to R, the threshold Pδ(R) is halved. When the value of Pδ(R) reaches the minimum value Pmin(R), where Pmin(R) = (1/2^m)Pmax(R) for some integer m, Pδ(R) is no longer halved, and the station compares the loss ratio p(R) with the current value of Pδ(R) to decide whether to increase the rate. If, after the station switches to the higher rate R+, Treg or Tfast terminates by accepting H0, i.e., a small loss ratio at R+, the opportunistic threshold of R, Pδ(R), is reset to Pmax(R). Intuitively, if the station decreases the rate from R+ to R, which implies a bad channel condition at the higher rate R+, the opportunistic threshold Pδ(R) is made smaller so that it is harder for the station to immediately switch back to R+. If the station continuously oscillates between R+ and R, the value of Pδ(R) decreases exponentially in order to prevent the station from attempting R+ again soon. Once the station stabilizes at R+, the value of Pδ(R) is reset so that the station is next able to opportunistically increase its rate from R to R+. Assuming the two error probabilities are α and β, as defined in Section III-A, the two thresholds used by the algorithm are set as A = (1 − β)/α and B = β/(1 − α). Initially, Pδ(R) = Pmax(R) for each rate R. The algorithm RFC appears below.
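The opportunistic-threshold bookkeeping just described (halve on fallback, clamp at Pmin(R), reset once R+ proves stable) can be sketched as follows. The class and method names are hypothetical; Pmax(R) = 0.4 and m = 4 are illustrative values, not prescribed by the scheme.

```python
# Sketch of the opportunistic threshold P_delta(R) used for rate increase.
# The halving, clamping at Pmin(R) = Pmax(R) / 2^m, and reset rules
# follow the text; names and numbers are illustrative.

class OpportunisticThreshold:
    def __init__(self, p_max, m=4):
        self.p_max = p_max
        self.p_min = p_max / (2 ** m)   # Pmin(R) = (1/2^m) Pmax(R)
        self.p_delta = p_max            # initially P_delta(R) = Pmax(R)

    def on_fallback_from_higher_rate(self):
        # The station switched from R+ back to R (H1 accepted at R+):
        # halve the threshold, but never go below Pmin(R).
        self.p_delta = max(self.p_delta / 2, self.p_min)

    def on_stable_at_higher_rate(self):
        # H0 accepted at R+: reset, so R can try R+ opportunistically again.
        self.p_delta = self.p_max

    def should_increase(self, loss_ratio):
        # Increase to R+ only if the latest loss ratio beats the threshold.
        return loss_ratio < self.p_delta

th = OpportunisticThreshold(p_max=0.4, m=4)
for _ in range(6):                      # repeated oscillation back to R...
    th.on_fallback_from_higher_rate()   # ...halves P_delta, clamped at Pmin
print(th.p_delta)                       # clamped at Pmin = 0.4 / 16 = 0.025
```

The exponential decrease makes repeated failed attempts at R+ progressively harder, while the reset keeps the station from being locked out of R+ once the channel recovers.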
Algorithm RFC
1: i = 1, ρr = 1, ρf = 1; R = current rate;
2: while true do
3:   Make the ith observation, xi;
4:   if Treg(xi) or Tfast(xi) accepts H1 then
5:     R = the next lower rate;
6:     ρr = 1, ρf = 1;
7:     Pδ(R) = Pδ(R)/2;
8:     if Pδ(R) < Pmin(R) then Pδ(R) = Pmin(R);
9:   else
10:    if Treg(xi) or Tfast(xi) accepts H0 then
11:      p(R) is updated;
12:      if Treg(xi) accepts H0 then ρr = 1;
13:      if Tfast(xi) accepts H0 then ρf = 1;
14:      for R−, Pδ(R−) = Pmax(R−);
15:      if p(R) < Pδ(R) then
16:        R = the next higher rate;
17:        ρr = 1, ρf = 1;
18:      end if
19:    end if
20:  end if
21:  i++;
22: end while

Algorithm Treg(xi)
1: if xi is a successful transmission then
2:   ρr = ρr(1 − λP−∗)/(1 − P−∗);
3: else
4:   ρr = ρr(λP−∗)/P−∗ = ρr λ;
5: end if
6: if ρr ≥ A, return H1 and terminate Treg;
7: if ρr ≤ B, return H0 and terminate Treg;

Algorithm Tfast(xi)
1: if xi is a successful transmission then
2:   ρf = ρf(1 − P∆)/(1 − P−∗);
3: else
4:   ρf = ρf P∆/P−∗;
5: end if
6: if ρf ≥ A, return H1 and terminate Tfast;
7: if ρf ≤ B, return H0 and terminate Tfast;

Several issues need to be addressed for RFC. First, when Treg (Tfast) terminates, the value of p(R) is updated. RFC updates p(R) simply by dividing the number of observed packet losses by the number of observations used in the latest Treg (Tfast) test. A more general way to update p(R) is pn+1(R) = ε·pn(R) + (1 − ε)·p(R), where 0 < ε < 0.5; that is, p(R) is viewed as a time series in which the latest measurement p(R) receives more weight (1 − ε) than the weight ε given to the historical knowledge pn(R). Second, Treg and Tfast test whether the current loss ratio is smaller than P−∗, whose value is calculated using the historical value of p(R−). If the station has never used R− before, p(R−) is set to zero (in which case P−∗ equals the critical loss ratio defined in [7]). To avoid p(R−) becoming too obsolete, if the historical value of p(R−) persists beyond a certain period of time, it is reset to zero.
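The more general loss-ratio update mentioned above is a one-line exponentially weighted average. In this sketch, ε = 0.25 is an illustrative choice within the stated range 0 < ε < 0.5; the function name is an assumption.

```python
# Sketch of the general loss-ratio update
#   p_{n+1}(R) = eps * p_n(R) + (1 - eps) * p(R),
# where the latest measurement gets the larger weight (1 - eps).

def update_loss_ratio(p_hist, p_latest, eps=0.25):
    assert 0 < eps < 0.5, "latest knowledge must outweigh history"
    return eps * p_hist + (1 - eps) * p_latest

# History says 20% loss; the latest Treg/Tfast window measured 40%.
print(update_loss_ratio(0.2, 0.4))   # 0.25*0.2 + 0.75*0.4, i.e. about 0.35
```

Choosing ε below 0.5 keeps the estimate responsive to the latest test window while still smoothing out single-window noise.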
V. EVALUATION
In this section, we evaluate the performance of RFC against arguably the best rate adaptation algorithm in the literature, RRAA [7]. Both RFC and RRAA are implemented in OPNET Modeler 11.5 [20]. In particular, the MAC layer source code is modified so that either RRAA or RFC can be used as the underlying rate adaptation scheme by the wireless stations. Note that since the hidden terminal case is not our primary focus, the current implementation supports the version of RRAA introduced in [7] as RRAA-BASIC.3 For conciseness, however, we use the acronym "RRAA."
3 The other version of RRAA includes an adaptive RTS module, termed ARTS, that is used for hidden terminal cases. This functionality is unnecessary for our evaluation, and is hence omitted without invalidating the presented results.
A. Simulation Topology and Parameters
The performance of the two algorithms is examined in the network topology shown in Fig. 2. The topology includes a static AP that is connected to a server over a wired connection. There is also a client (denoted 'C' in the figure) with movement capabilities that is connected to the server through the AP via a wireless link. This topology represents simple yet comprehensive scenarios used in other
rate adaptation performance studies mentioned in Section II. The scenarios include: 1) the client is statically located in a position where it is outside the transmission range of one rate but within the range of a lower rate; 2) the client moves away from the AP in order to simulate channel degradation; and 3) a jammer, located between the AP and the client, is activated in order to simulate a noisy channel condition.

Fig. 2. The simulation topology.

To be consistent with previous work (e.g., [7]), we adopt the following simulation parameters:
• The AP and the client use channel 36 (i.e., a frequency range from 5170 MHz to 5190 MHz), and the jammer generates noise on the 5150 MHz to 5210 MHz band.
• The client moves from being in close proximity of the AP to being at the edge of the AP's transmission range, or vice versa. In either case, it can travel at speeds of 1, 2, 4, 8, 10, 12, 15, 20, and 30 m/s.
• The jammer generates noise for periods that are 1 second in length, separated by periods of silence that vary in length between 0.3 seconds and 1.5 seconds.
• The value of P∆ used in Tfast is an application-specific parameter. For environments in which the channel conditions are good and stable, a small value of P∆ may be adopted; a large value of P∆ may be used when the channel conditions change relatively dramatically. In our simulation, P∆ is set to 0.4.
• There is no well-established technique for choosing the values of Pmax(R) and λ. The value of Pmax(R) is set to the value of the opportunistic threshold, PORI(R), defined in [7]. For λ, we tried different values through extensive OPNET simulations; the algorithm provides consistent performance when λ is set to 1.8. The theoretical justification for selecting the value of λ will be studied in future research. The other parameter, m, is set to 4.
• Both α and β are set to 5%.

B. Simulation Results
In this section, we present comparison results between RRAA and RFC in terms of throughput improvement under three different scenarios. To simplify the presentation, throughput improvement is measured as the ratio of the number of useful bits (excluding protocol overhead) received by RFC to that received by RRAA. This ratio is denoted "Relative Throughput" in our simulation result figures, i.e., Relative Throughput = (throughput of RFC)/(throughput of RRAA).
1) Static Case: The client is placed outside the transmission range of 24 Mbps but inside that of 18 Mbps. Figs. 3(a) and 3(b) show the relative throughput of RFC compared to RRAA for TCP and UDP traffic, respectively. It is safe to conclude that RFC outperforms RRAA for both traffic types: the higher the traffic load, the better the throughput gain. The best throughput enhancement is about 35% when the traffic load reaches 10 Mbps. The main reason for this improvement is that RFC no longer suffers the undesirable oscillation between two rates. RRAA, on the other hand, follows the plausible intuition that the rate should be increased if the successful transmission ratio is high enough at the current rate; its performance is thus penalized by the frequent rate switches this intuition induces.

Fig. 3. The client is static: relative throughput vs. traffic load (Mbps). (a) TCP traffic; (b) UDP traffic.
2) Mobile Case: In this case, we examine the ability of both RFC and RRAA to respond to dynamic channel condition degradation. The channel quality degradation is introduced via client movement: the client is initially located in close proximity to the AP and concludes its movement at the edge of the AP's transmission range. The faster the client moves, the quicker the channel degrades. As indicated by Figs. 4(a) and 4(b), the throughput gain obtained by RFC generally grows as the client's speed increases. We can thus conclude that RFC outperforms RRAA in adapting to dynamic changes in channel condition. This improvement reflects RFC's inherent capability of responding to channel degradation faster than fixed-sample-size schemes such as RRAA: powered by sequential hypothesis testing, RFC's rate decrease algorithm is able to make correct decisions with fewer observations than RRAA. It is worth noticing that a higher throughput improvement is achieved for UDP than for TCP, because the congestion control mechanism of TCP limits the sending traffic when a large packet loss occurs.

Fig. 4. The client is moving away from the AP: relative throughput vs. speed (m/s). (a) TCP traffic; (b) UDP traffic.

3) Interference Case: We now compare the performance of RFC and RRAA over an interfered channel, where the interference is caused by an active jammer. The level of interference is controlled by the periods of silence (termed the silence width) between the jammer's noise pulses. Generally speaking, as the silence width becomes smaller, the channel condition gets worse (i.e., it switches more frequently between non-interference and interference). As demonstrated in Figs. 5(a) and 5(b), the throughput of RFC is slightly higher (around 5%) than that of RRAA for both TCP and UDP when the silence width is larger than 0.2 seconds. This can be attributed to the fact that both algorithms are able to make correct decisions when the channel quality changes slowly (i.e., a bigger silence width). Yet, once the frequency of channel condition changes reaches a certain point (a silence width of 0.2 and 0.1 seconds for TCP and UDP, respectively), RRAA lags behind RFC in making rate adaptation decisions because RRAA uses a fixed number of observations; RFC thus achieves about 20% more throughput than RRAA. Note that, for TCP traffic, the throughput improvement of RFC starts to decrease (while still outperforming RRAA) as the silence width passes 0.1 seconds. Again, the reason is that the packet loss is high enough to trigger the TCP congestion control mechanism to reduce the sending rate. For UDP traffic, on the other hand, the throughput improvement keeps increasing because UDP does not regulate the data sending speed.
Fig. 5. The client is stationary with an active jammer in the vicinity: relative throughput vs. silence width (seconds). (a) TCP traffic; (b) UDP traffic.
C. Remarks
In this section, we have evaluated the performance of RFC under various scenarios. Based on the simulation results, we draw the following conclusions:
• In general, RFC offers a consistent throughput enhancement over RRAA for both TCP and UDP traffic. In some cases, the improvement for TCP is lower than that for UDP because TCP's built-in congestion control mechanism restricts the sending rate when a large packet loss occurs.
• Backed by sequential hypothesis testing, RFC is able to accommodate dynamic channel condition changes with
a small number of observations. Moreover, the feedback mechanism helps RFC avoid unnecessary rate-switching oscillations. As a result, RFC greatly improves the system throughput.
VI. CONCLUSION
In this paper, we first examine the rate adaptation problem in IEEE 802.11 wireless networks. We then present RFC, our efficient rate control scheme based on sequential hypothesis testing. RFC does not require any specialized hardware, and it provides a cost-effective performance enhancement to wireless networks. More importantly, RFC improves throughput over other mainstream rate adaptation solutions because it is capable of quickly and automatically adapting to different types of channel conditions. As part of our future work, we plan to explore how to assign values to λ and Pmax in dynamic settings so that the throughput gains of RFC can be further improved.
REFERENCES
[1] "The IEEE 802.11 LAN/MAN wireless LANs standard." [Online]. Available: http://standards.ieee.org/getieee802/802.11.html
[2] A. Kamerman and L. Monteban, "WaveLAN-II: a high-performance wireless LAN for the unlicensed band," Bell Labs Technical Journal, vol. 2, no. 3, pp. 118–133, 1997.
[3] B. Sadeghi, V. Kanodia, A. Sabharwal, and E. Knightly, "Opportunistic media access for multirate ad hoc networks," in MobiCom '02, 2002, pp. 24–35.
[4] M. Lacage, M. H. Manshaei, and T. Turletti, "IEEE 802.11 rate adaptation: a practical approach," in MSWiM '04: Proceedings of the 7th ACM International Symposium on Modeling, Analysis and Simulation of Wireless and Mobile Systems, 2004, pp. 126–134.
[5] J. Bicket, "Bit-rate selection in wireless networks," Master's thesis, Massachusetts Institute of Technology, February 2005.
[6] "Onoe rate control." [Online]. Available: http://madwifi.org/browser/trunk/ath_rate/onoe
[7] S. H. Y. Wong, S. Lu, H. Yang, and V. Bharghavan, "Robust rate adaptation for 802.11 wireless networks," in MobiCom '06, 2006, pp. 146–157.
[8] D. Aguayo, J. Bicket, S. Biswas, G. Judd, and R. Morris, "Link-level measurements from an 802.11b mesh network," in SIGCOMM '04, 2004, pp. 121–132.
[9] N. Kim, "IEEE 802.11 MAC performance with variable transmission rates," IEICE Transactions on Communications, vol. 88, no. 9, pp. 3524–3531, 2005.
[10] G. Holland, N. Vaidya, and P. Bahl, "A rate-adaptive MAC protocol for multi-hop wireless networks," in MobiCom '01, 2001, pp. 236–251.
[11] "MadWifi: a Linux kernel device driver for wireless LAN chipsets." [Online]. Available: http://madwifi.org/
[12] S. Choi, K. Park, and C.-K. Kim, "On the performance characteristics of WLANs: revisited," in SIGMETRICS '05, 2005, pp. 97–108.
[13] A. Balachandran, G. M. Voelker, P. Bahl, and P. V. Rangan, "Characterizing user behavior and network performance in a public wireless LAN," in SIGMETRICS '02, 2002, pp. 195–205.
[14] D. Kotz and K. Essien, "Analysis of a campus-wide wireless network," in MobiCom '02, 2002, pp. 107–118.
[15] A. Wald, Sequential Analysis. New York: J. Wiley & Sons, 1947.
[16] "Cisco Aironet 1130AG series IEEE 802.11a/b/g access point data sheet." [Online]. Available: http://www.cisco.com
[17] "MadWifi ticket tracking system: ticket number 989." [Online]. Available: http://madwifi.org/ticket/989
[18] D. S. J. De Couto, D. Aguayo, J. Bicket, and R. Morris, "A high-throughput path metric for multi-hop wireless routing," in MobiCom '03, 2003, pp. 134–146.
[19] R. Draves, J. Padhye, and B. Zill, "Routing in multi-radio, multi-hop wireless mesh networks," in MobiCom '04, 2004, pp. 114–128.
[20] "OPNET network modeler and simulator." [Online]. Available: http://www.opnet.com/