An Active Queue Management Scheme for Internet Congestion Control and Its Application to Differentiated Services¹
Ling Su and Jennifer C. Hou
Department of Electrical Engineering, The Ohio State University, Columbus, OH 43210-1272
Tel: +1 614 292 7290 (voice), +1 614 292 7596 (fax)
[email protected]
Abstract

In this paper, we propose a new active queue management algorithm, called Average Rate Early Detection (ARED). An ARED gateway measures and uses the average packet enqueue rate as a congestion indicator, and judiciously signals end hosts of incipient congestion, with the objective of reducing the packet loss ratio and improving link utilization. We show (via simulation in ns-2) that the performance of ARED is better than that of RED and comparable to that of BLUE in terms of packet loss rate and link utilization. We also explore the use of ARED in the context of the Differentiated Services architecture. We first show analytically that the widely referenced queue management mechanism, RED with in and out (RIO), cannot achieve throughput assurance and proportional bandwidth sharing. We then extend ARED and propose a new queue management mechanism, called ARED with in and out (AIO), for the assured services architecture. To share surplus bandwidth in a rate-proportional manner, we incorporate into AIO the derived analytic results and propose an enhanced version of AIO, called the differentiated-rate AIO (DAIO) mechanism. Finally, we validate the design of AIO and DAIO by comparing their performance against that of RIO and BLUE with in and out (BIO).

Index Terms: Congestion control, active queue management, differentiated services architecture, assured services, RED, RIO, BLUE.
¹ The work reported in this paper was supported in part by NSF under Grant No. ANI-9804993. Any opinions, findings, and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the NSF.
1 Introduction

A central tenet of the current Internet architecture is that congestion control is performed mainly by TCP transport protocols at end hosts. While numerous studies have been made to improve TCP, TCP connections may still suffer from high loss ratios in the presence of network congestion. On the other hand, as continuous media multicast applications (which usually do not deploy TCP for congestion control) become widely deployed on the Internet, it becomes difficult, if not impossible, to rely exclusively on end hosts to perform end-to-end congestion control. It has been agreed upon that the network itself must now participate in congestion control and resource management. Several approaches have been proposed for congestion control and/or for bandwidth sharing among competing connections, e.g., fair-queuing scheduling mechanisms that make resource reservations and schedule packets from individual flows at routers according to their fair share, active queue management mechanisms that judiciously drop packets at intermediate routers before congestion occurs, mechanisms that deploy concrete incentives for connections to use end-to-end congestion control, and most recently, pricing mechanisms. In particular, the Internet Engineering Task Force (IETF) is advocating deployment of explicit congestion notification (ECN) and active queue management mechanisms at routers as a means of congestion control. By "active queue management," we mean that core routers inside networks are equipped with the capability to detect incipient congestion and to explicitly signal traffic sources before congestion actually occurs. Active queue management mechanisms differ from the traditional drop-tail mechanism in that in a drop-tail queue packets are dropped when the buffer overflows, while in active queue management mechanisms, packets may be dropped early, before congestion occurs [2]. Several active queue management mechanisms have been proposed, e.g., random drop [9], early packet discard [15], early random drop [11], random early detection (RED) [8] and its variations (FRED [13], stabilized RED (SRED) [21], and balanced RED (BRED) [1]), and BLUE [4], among which RED has received perhaps the most attention. A RED gateway detects congestion by monitoring the average queue size and randomly dropping packets if the average queue length exceeds some lower bound, so as to notify connections of incipient congestion. RED has been shown to successfully prevent global synchronization (which results from signaling all TCP connections to reduce their congestion windows at the same time), reduce packet loss ratios, and minimize the bias against bursty sources. The downside of RED is, however, that it still incurs a high packet loss ratio when the buffer size is not large enough. (FRED, BRED, and SRED, on the other hand, are variations of RED with different design objectives. The former two aim mainly to improve the fairness of RED among connections of different RTTs, while the latter intends to keep the queue length stabilized by estimating the number of active connections.) Recently, a fundamentally different active queue management mechanism, called BLUE, was proposed. Different from RED, BLUE uses the instantaneous queue length as the congestion indicator and controls congestion by responding to both packet loss and idle link events. By adjusting the dropping probability based on the history of these events, BLUE is shown to incur a small packet loss ratio even with small buffer sizes.
In spite of the several mechanisms proposed, none of them (perhaps except BLUE) is effective in terms of reducing packet loss ratio, reducing packet delay, and achieving high link utilization [23]. In this paper, we first propose a new active queue management algorithm, called Average Rate Early Detection (ARED). An ARED gateway measures and uses the average packet enqueue rate as a congestion indicator, and judiciously signals end hosts of incipient congestion, with the objective of reducing the packet loss ratio and improving link utilization. As one of the most important components in the Differentiated Services architecture is the queue management mechanism used at core routers, we then explore the use of ARED in the context of the assured services (AS) architecture. We first show analytically that the widely referenced queue management mechanism, RED with in and out (RIO), cannot achieve throughput assurance and proportional bandwidth sharing. We then propose, as an extension of ARED, a new queue management mechanism called ARED with in and out (AIO) to improve the assurance level for AS connections. To share surplus bandwidth in a rate-proportional manner, we incorporate the derived analytic results into AIO, and propose an enhanced version of AIO, called the differentiated-rate AIO (DAIO) mechanism. Finally, we validate the design of AIO and DAIO by comparing their performance against that of RIO and BLUE with in and out (BIO). The rest of this paper is organized as follows. In Section 2, we provide background material on active queue management and outline the requirements for queue management mechanisms in the AS architecture. In Section 3, we present ARED. In Section 4, we validate the design of ARED by comparing its performance against that of RED and BLUE in both simple and complicated topologies. In Section 5, we propose, as an extension of ARED, a new queue management mechanism, AIO, for the AS architecture. In Section 6, we analyze the stationary throughput of TCP under the AS architecture and show that RIO cannot achieve throughput assurance and proportional fairness. Based on the derived analytic results, we then propose an enhanced version of AIO, DAIO. In Section 7, we present the performance results of RIO, BIO, AIO, and DAIO under a wide range of traffic sources and topologies. Finally, we conclude the paper in Section 8 with several avenues for future research.
2 Background Materials
[Figure 1: Gateway under congestion. The aggregate input rate exceeds the link clearing rate; the gateway regulates congestion by dropping or marking packets.]
2.1 Active Queue Management

Congestion occurs when the aggregate traffic volume at an input link is higher than the capacity of the corresponding output link (Fig. 1). One obvious approach to congestion control is to dedicate a separate queue to each flow at a router and/or employ a certain rate-based fair queuing algorithm to regulate packet flows. However, this approach does not scale well, because queue management on a per-flow basis is usually of non-negligible time complexity. Among all the queue management mechanisms that do not require per-flow buffer management, the drop-tail queue is the most widely deployed and simplest one. Since a drop-tail queue drops packets and conveys congestion signals only at the time of congestion (i.e., buffer overflow), there is a significantly long time period between the instant when congestion occurs and the instant when end hosts reduce their sending rates. During this time period, packets may be sent and eventually dropped. Also, because TCP connections exhibit bursty behavior, a drop-tail queue is prone to drop multiple packets of one connection. This may undesirably shut down the TCP congestion control window of the connection. Moreover, drop-tail may result in global synchronization (which results from signaling all TCP connections to reduce their congestion windows at the same time), which is usually followed by a sustained period of link underutilization. To remedy the aforementioned drawbacks, several active queue management algorithms have been introduced, e.g., RED and its variations FRED, SRED, and BRED, and BLUE, just to name a few. By "active," we mean that a router that employs such a mechanism is responsible for detecting congestion and notifying end hosts of (incipient) congestion so that the latter can adapt their sending rates before congestion actually occurs. These algorithms differ in
(1) the parameter used as an index of traffic load (and congestion); (2) the policy used to detect congestion (or the likelihood of congestion); and (3) the policy used to adjust the packet dropping probability in response to (an increased likelihood of) congestion.
Because of the different policies used, queue management mechanisms also differ in terms of time complexity and scalability. In what follows, we summarize the two mechanisms, RED and BLUE, that are most relevant to our proposed mechanisms.
Random Early Detection (RED): RED starts to drop packets long before the buffer is full, providing early congestion indication to connections before the buffer overflows. RED operates by calculating the average queue length, avg_queue, upon packet arrival, using a low-pass filter with an exponentially weighted moving average. The parameter avg_queue is used to measure traffic load. The policy used to detect the likelihood of congestion is characterized by two thresholds, min_th and max_th. If avg_queue is less than min_th, the router is considered congestion free and no arriving packets are dropped. When avg_queue is greater than max_th, the router is likely to incur congestion, and all arriving packets are
dropped. When avg_queue is between the two thresholds, the probability of dropping a packet increases linearly with avg_queue from 0 to p_max, where p_max is the maximum packet dropping probability. As one of the first active queue management algorithms proposed, RED was shown to prevent global synchronization, accommodate bursty traffic, incur little overhead, and coordinate well with TCP under serious congestion conditions. The performance of RED, however, depends heavily on whether or not the two thresholds are properly selected. If the thresholds are set too small, buffer overflow is avoided but at the expense of link underutilization. On the other hand, if the thresholds are set too large, congestion occurs before end hosts are notified to reduce their sending rates, and a large amount of bandwidth is wasted on transporting packets that will eventually be dropped. On heavily congested links, it may be difficult to keep both high link utilization and a low packet loss ratio simultaneously. Another problem is whether the average queue length is a good index for early congestion detection. The effectiveness of an active queue management mechanism lies in how early it can detect congestion, so that congestion notification can be delivered to end hosts before buffer overflow actually occurs. To accommodate bursty traffic, RED uses the average queue length as an index of traffic load, and uses the buffer space between the min_th and max_th thresholds to accommodate the delay between (early) congestion detection and the end hosts' response to congestion. However, because of the inherently bursty nature of network traffic, the (instantaneous) queue length varies rapidly and may be very different from the value of avg_queue. As a result, buffer overflow may occur while avg_queue is still less than max_th, especially when the buffer size is not large enough.
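The RED drop decision just described can be summarized by the following minimal sketch (our illustration, not the code of [8]; the EWMA weight w_q and all parameter values are placeholders):

    import random

    def update_avg_queue(avg_queue, q_len, w_q=0.002):
        # exponentially weighted moving average of the instantaneous queue length
        return (1.0 - w_q) * avg_queue + w_q * q_len

    def red_drop(avg_queue, min_th, max_th, p_max):
        # RED drop/mark decision for an arriving packet
        if avg_queue < min_th:       # congestion free: never drop
            return False
        if avg_queue >= max_th:      # likely congestion: always drop
            return True
        # between the thresholds: probability grows linearly from 0 to p_max
        p = p_max * (avg_queue - min_th) / (max_th - min_th)
        return random.random() < p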
BLUE: In BLUE, the instantaneous queue length and the link utilization are used as the indices of traffic load, and a single dropping probability p is maintained and used to mark or drop packets upon arrival. If the instantaneous queue length exceeds a pre-determined threshold, L, a BLUE gateway increases p by an amount delta (a system parameter). To avoid dropping packets too aggressively, BLUE keeps a minimum interval, freeze_time, between two successive updates of p. Conversely, if the link is idle (i.e., the queue is empty), the BLUE gateway decreases p by an amount delta periodically (once every freeze_time). By adjusting p with respect to the instantaneous queue length and the link utilization (idle events), BLUE is shown to make the instantaneous queue length converge to the pre-defined threshold.
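The BLUE update rule can likewise be sketched as follows (an illustration of the description above; the class name and default parameter values are ours):

    class Blue:
        def __init__(self, L, delta=0.01, freeze_time=0.02):
            self.L = L                      # queue length threshold (packets)
            self.delta = delta              # probability increment/decrement
            self.freeze_time = freeze_time  # minimum interval between updates (s)
            self.p = 0.0                    # marking/dropping probability
            self.last_update = 0.0

        def on_enqueue(self, q_len, now):
            # queue above the threshold: become more aggressive, at most once per freeze_time
            if q_len > self.L and now - self.last_update > self.freeze_time:
                self.p = min(1.0, self.p + self.delta)
                self.last_update = now

        def on_link_idle(self, now):
            # link idle (empty queue): back off, at most once per freeze_time
            if now - self.last_update > self.freeze_time:
                self.p = max(0.0, self.p - self.delta)
                self.last_update = now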
Comparison between RED and BLUE: The fundamental difference between RED and BLUE is that BLUE uses, instead of the average queue length, the instantaneous queue length and the link utilization history as indices of congestion. As discussed above, RED cannot simultaneously achieve low packet loss and high link utilization. In contrast, BLUE is able to keep the queue length converged at the ideal operational point even with very small buffer sizes, while retaining all the desirable features of RED. BLUE also shows that queue occupancy is not the only parameter by which congestion can be observed and controlled; there are more effective control variables and control policies.
2.2 Overview of Differentiated Services

There has been a major effort these past few years aimed at augmenting the single-class "best-effort" service of the current Internet to include services offering a variety of performance guarantees. Recently, the IETF Diffserv (Differentiated Services) working group has proposed, based on the two proposals by Clark [3] and Jacobson [12], a differentiated services architecture that relies on packet tagging and lightweight router support to provide premium and assured services that extend beyond best effort [20]. In particular, the class of assured services (AS) is intended to give the customer the assurance of a minimum throughput (called the target rate), even during periods of congestion, while allowing it to consume, in some fair manner, the remaining bandwidth when the network load is low. More specifically, the two major requirements of assured services are:
Throughput assurance: Each flow should receive its subscribed target rate on average.
Fairness: The surplus (unsubscribed) bandwidth is either evenly or proportionally shared among AS connections.

By "proportional bandwidth sharing," we mean that the surplus bandwidth is shared among AS connections in proportion to their target rates.
The basic idea behind the differentiated services architecture is to tag packets based on the Service Level Agreement (SLA) at the edge routers, indicating the preferential treatment that the packets expect. Core routers inside a network make packet forwarding or dropping decisions according to the tags. In the AS architecture, there are thus two major components: (i) the packet marking mechanism (which includes meters and markers) at edge routers to classify packets as in-profile or out-of-profile prior to their entering the network, and (ii) the queue management mechanism used at core routers to differentiate and forward packets based on their service class and the marking made by the edge router. In the current AS architecture, the token bucket and the time sliding window (TSW) are the two most commonly used profilers, while RED with IN and OUT (RIO) has received the most attention among all the router mechanisms proposed for assured services. Succinctly, RIO can be viewed as the combination of two RED instances with different dropping probabilities. Two sets of parameters, (min_th_in, max_th_in, p_max_in) and (min_th_out, max_th_out, p_max_out), are used for IN packets and OUT packets, respectively. Usually the parameters for OUT packets are set much more aggressively than those for IN packets, so as to start dropping OUT packets well before any IN packet is discarded. Moreover, the average queue length used for OUT packets is calculated based on the total number of packets in the queue, regardless of their marking, while that for IN packets is calculated based on the number of IN packets present in the queue. As reported in [10, 14, 26], the throughput assurance and fairness requirements of assured services cannot be met in many cases. In particular, RIO tends to favor connections with small target rates and/or small round trip times (RTTs). This is because TCP reacts to packet loss by halving its congestion window. TCP then increases its congestion window by one packet every RTT. Since the congestion window of a connection with a large target rate (RTT) is usually larger than that of a connection with
Upon receiving packet p:
1.  if (link_idle) {                      /* turn off the link idle timer */
2.      reset_link_idle_timer();
3.      link_idle = false;
    }
4.  r = estimate_rate(p);                 /* use Eq. (3.1) */
5.  if (r > μ || (q_len > L && r > β·μ)) {   /* μ is the link bandwidth and β is a constant less than 1 */
6.      if (p_d > unif_rand(0, 1))
7.          drop(p);
8.      else                              /* if enqueued, update the estimated average rate */
9.          r_old = r;
10.     if (current_time - last_update > freeze_time) {
11.         p_d ← p_d + d1;
12.         last_update = current_time;
13.     }
14. }
When the link becomes idle:
15. set_link_idle_timer(freeze_time);
16. link_idle = true;
Upon link idle timer expiration:
17. p_d ← p_d - d2;
18. set_link_idle_timer(freeze_time);
Figure 2: The ARED Algorithm.

a small target rate (RTT), it takes more time for a large-target-rate (large-RTT) connection to regain its original rate after packet loss. Moreover, since a TCP connection always attempts to utilize surplus bandwidth, the connection with a small target rate (RTT) will continue to increase its congestion window after its congestion window is restored, and hence, in the case that multiple TCP connections experience packet loss, the small-target-rate (small-RTT) connection may capture the bandwidth that is originally subscribed to by the other large-target-rate (large-RTT) connections.
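To make the RIO mechanism of Section 2.2 concrete, the following sketch shows its two-instance drop decision; it reuses the red_drop() helper sketched in Section 2.1, and the threshold values are placeholders rather than recommended settings:

    def rio_drop(pkt_is_in, avg_queue_in, avg_queue_total,
                 in_params=(40, 70, 0.02), out_params=(10, 30, 0.2)):
        # RIO = two RED instances: IN packets are judged against the average number of
        # IN packets in the queue, OUT packets against the average total queue length.
        # OUT thresholds are lower (and p_max higher), so OUT packets are dropped
        # well before any IN packet is discarded.
        if pkt_is_in:
            min_th, max_th, p_max = in_params
            return red_drop(avg_queue_in, min_th, max_th, p_max)
        else:
            min_th, max_th, p_max = out_params
            return red_drop(avg_queue_total, min_th, max_th, p_max)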
3 A New Active Queue Management Mechanism - ARED

As discussed in Section 2.1, when a router notifies end hosts of incipient congestion is a critical factor in how effective an active queue management mechanism is. If a router signals congestion prematurely, packet loss can be avoided but at the expense of low link utilization. As high link utilization usually leads to high throughput, maintaining high link utilization while reducing packet loss is an important objective for active queue management mechanisms. In this section, we propose a new active queue management mechanism, called Average Rate Early Detection (ARED), that fulfills the above objective while retaining the many desirable features of RED. Fig. 2 gives the pseudo code of the ARED algorithm. Instead of using queue occupancy as a
congestion index, ARED uses the average packet enqueue rate as the index. Since

    d/dt (instantaneous queue length) = packet enqueue rate - packet clearing rate

and the packet clearing rate is the link bandwidth μ, ARED, in contrast to almost all other queue management mechanisms, controls the rate of change of the queue occupancy (instead of the queue occupancy itself) and intends to keep the queue stabilized at an operational point at which the aggregate packet enqueue rate is approximately equal to, or slightly below, the packet clearing rate (link capacity). Upon arrival of a packet, an ARED gateway estimates the packet enqueue rate using

    r_new = (1 - e^{-T/K}) · (ℓ/T) + e^{-T/K} · r_old,    (3.1)

where T is the packet interarrival time, K is a constant, and ℓ is the length of the arriving packet. (K is usually set to a large value.²) In ARED, a packet is dropped with probability p_d only if (i) the estimated rate r is larger than μ, or (ii) the instantaneous queue length is larger than a pre-determined value L and the ratio of r to the link capacity μ is larger than a pre-determined value β, where L is the average queue length to which the ARED queue asymptotically converges (in our experiments, we set it to 1/3 of the buffer size), and β is set to 0.95 for a wide range of network topologies and traffic loads. We term condition (i) the congestion condition and condition (ii) the congestion alert condition. In addition, p_d is increased by an amount d1 (at most once every freeze_time) under the congestion or congestion alert conditions. On the other hand, p_d is decreased by an amount d2 for every freeze_time time units for which the link has been idle. Both d1 and d2 are tunable system parameters, and are set to 0.01 and 0.001, respectively, in the simulation.
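The per-packet ARED logic (Eq. (3.1) plus the congestion and congestion-alert tests) can be sketched as follows; this is an illustration of the mechanism, not the authors' implementation, and the default parameter values simply mirror the ones quoted above:

    import math
    import random

    class Ared:
        def __init__(self, mu, L, K=0.2, beta=0.95, d1=0.01, d2=0.001, freeze_time=0.02):
            self.mu = mu                # link bandwidth (bits/s)
            self.L = L                  # target queue length (packets)
            self.K = K                  # averaging constant of Eq. (3.1) (s)
            self.beta = beta            # congestion-alert rate threshold (fraction of mu)
            self.d1, self.d2 = d1, d2   # probability increment / decrement
            self.freeze_time = freeze_time
            self.p_d = 0.0              # dropping/marking probability
            self.r = 0.0                # estimated enqueue rate (bits/s)
            self.last_arrival = 0.0
            self.last_update = 0.0

        def estimate_rate(self, pkt_len_bits, now):
            # Eq. (3.1): r_new = (1 - e^{-T/K}) * (l/T) + e^{-T/K} * r_old
            T = max(now - self.last_arrival, 1e-9)
            w = math.exp(-T / self.K)
            self.r = (1.0 - w) * (pkt_len_bits / T) + w * self.r
            self.last_arrival = now
            return self.r

        def on_arrival(self, pkt_len_bits, q_len, now):
            # returns True if the arriving packet should be dropped (or ECN-marked)
            r = self.estimate_rate(pkt_len_bits, now)
            congestion = r > self.mu                               # condition (i)
            alert = q_len > self.L and r > self.beta * self.mu     # condition (ii)
            if congestion or alert:
                if now - self.last_update > self.freeze_time:      # at most once per freeze_time
                    self.p_d = min(1.0, self.p_d + self.d1)
                    self.last_update = now
                return random.random() < self.p_d
            return False

        def on_idle_period(self):
            # to be called once per freeze_time while the link stays idle
            self.p_d = max(0.0, self.p_d - self.d2)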
Parameter selection: L specifies the queue length to which the ARED queue asymptotically converges, and hence Q - L (where Q is the buffer size) is the maximum burst an ARED gateway can accommodate. When an ARED gateway is under the congestion alert condition, it attempts to keep the packet enqueue rate at the operational point β·μ, or equivalently, it clears the queue at an approximate rate of (1 - β)·μ, by increasing p_d by an amount d1 every freeze_time time units. To prevent the queue from overflowing, the following relation should hold:

    (Q - L) = (1 - β) · μ · freeze_time / d1.    (3.2)

If the values of L and d1 are determined, one may determine the value of β by solving Eq. (3.2).
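Assuming the reconstructed form of Eq. (3.2) above, this amounts to choosing

    β = 1 - (Q - L) · d1 / (μ · freeze_time)

for the given buffer size Q, target queue length L, increment d1, and freeze_time.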
[Figure 3: The simulation environment for the first experiment. Two FTP sources, s0 and s1, connect to router r over 100 Mb, 1 ms links; r connects to the sink d over a 33 Mb, 20 ms link.]
4 Evaluation of ARED

In this section, we evaluate ARED by comparing it against drop-tail, RED, and BLUE under a wide range of traffic conditions. Drop-tail serves as a baseline case in which no active queue management mechanism is employed. All traffic sources are equipped with ECN support [6].³ All the simulations were performed in ns-2 [27], which provides accurate packet-level implementations of various network protocols, such as TCP/UDP, and various buffer management and scheduling algorithms, such as drop-tail and RED. All algorithms used in the simulation, except ARED and BLUE, were part of the standard ns-2 distribution.
Throughput versus average queue length at a core router: In the first experiment, we consider a simple topology (shown in Fig. 3) with two TCP connections connecting to the same destination. Each TCP connection has a window size of 200 packets, which is roughly equal to the delay-bandwidth product (each packet is 10K bits). The two connections are started at slightly different times. The buffer size varies from 15 to 140 packets in drop-tail, and is fixed at 100 packets in both RED and ARED. The threshold min_th ranges from 3 to 40 packets, and the threshold max_th is set to 3 times the min_th value in RED. The parameters delta and freeze_time in BLUE are set to 0.01 and 20 ms, respectively. The parameters K, β, d1, d2, and freeze_time in ARED are set to 20 ms, 0.95, 0.01, 0.001, and 20 ms, respectively. The parameter L ranges from 5 to 50 packets in both BLUE and ARED. Twenty sets of parameters are chosen for each mechanism, and ten 10-second simulations were run for each parameter set. Fig. 4 gives the throughput versus average queue length observed at the router r under drop-tail, RED, and ARED, with each mark representing the average result of ten simulation runs for a given parameter set.

² Note that instead of using a constant exponential weight, we use a weight of e^{-T/K}. As reported in [25], if a constant weight were used, the estimated rate would be sensitive to the packet length distribution. In contrast, by using e^{-T/K} the estimated rate asymptotically converges to the actual rate. A larger (smaller) value of K in Eq. (3.1) gives a long-term (short-term) average of the packet enqueue rate. It has been found that the performance of ARED is not sensitive to the value of K. In our experiment, K is set to 200 ms.
³ ECN is implemented using two bits of the TOS field in the IP header and two bits of the currently reserved flags field in the TCP header. When a network router experiences congestion, it explicitly signals the sources by setting the bits in the header, instead of dropping their packets. It has been shown in [6] that ECN successfully prevents packet loss.
[Figure 4: The performance of drop-tail, RED, BLUE, and ARED. The x-axis is the ratio of throughput to link bandwidth, and the y-axis is the average queue length in packets.]

As shown in Fig. 4, ARED outperforms RED and BLUE in terms of the ratio of throughput to average queue length (and hence average packet delay), while drop-tail performs worst among all the mechanisms. This suggests that ARED achieves high throughput with a small buffer size.
Throughput and packet loss ratio for individual connections: In the second experiment, we consider a simple topology (shown in Fig. 5) with 16 TCP hosts connecting to their respective destinations via a common bottleneck link (with the link capacity set to 10 Mb). The buffer size is set to 200 packets at each router. All TCP connections are bulk data transfers (FTP), with their start times arbitrarily chosen. The round trip times vary from 22 ms to 90 ms, and are different for the 16 TCP connections. The parameters L, delta, and freeze_time in BLUE are set to 1/3 of the buffer size, 0.01, and 20 ms, respectively. Finally, the parameters K, L, β, d1, d2, and freeze_time in ARED are set to 20 ms, 1/3 of the buffer size, 0.95, 0.01, 0.001, and 20 ms, respectively.
[Figure 5: A network topology with a single bottleneck link. Sixteen TCP sources (s1-s16) connect to router r0, which connects to router r1 over the 10 Mb bottleneck link; the destinations (d1-d16) attach to r1.]
[Figure 6: Performance comparison among drop-tail, RED, BLUE, and ARED in terms of (a) throughput and (b) packet loss ratio.]

Fig. 6 gives the average throughput and the packet loss ratio of each connection over a 20-second interval. Both BLUE and ARED are more effective in evenly sharing the link bandwidth and reducing packet loss. RED shows the worst performance in terms of bandwidth sharing, while drop-tail shows the worst performance in terms of reducing packet loss. To further study why BLUE and ARED outperform the other mechanisms, we plot the queue occupancy and the average packet enqueue rate under the various mechanisms in Figs. 7 and 8, respectively. As shown in Fig. 7, the sawtooth-like curve of RED suggests that the feedback mechanism of RED may be too "aggressive" and achieves congestion control at the expense of low link utilization. Both BLUE and ARED keep the queue length at its desired operational point (i.e., L). As shown in Fig. 8, the average packet enqueue rate under BLUE and ARED is more stable than under RED and stays approximately at its desirable operational point.
Performance under bursty traffic: In the third experiment, we study whether or not the various mechanisms accommodate bursty traffic well in the simple network topology shown in Fig. 5. VBR video or other forms of interactive traffic like Telnet fall into the category of bursty traffic, and may be modeled by a long-lived TCP connection with a large delay-bandwidth product and a small congestion window. Connection n0 incurs an RTT that is six times that of the other connections and has a maximum congestion window of 8 packets, while the other connections have congestion windows of 12 packets. Connection n0 is considered a connection with bursty traffic. In the simulation, the buffer size varies from 8 to 24 packets in drop-tail. The threshold min_th ranges from 3 to 14 packets, the threshold max_th is set to 3 times the min_th value, and the buffer size is set to 4 times the min_th
value in RED. The parameters delta and freeze_time in BLUE are set to 0.01 and 20 ms, respectively. The parameters K, β, d1, d2, and freeze_time in ARED are set to 20 ms, 0.95, 0.01, 0.001, and 20 ms, respectively. In BLUE and ARED, the threshold L ranges from 3 to 14 packets and the buffer size is set to 4 times the L value. For each given parameter set, the simulation is run for 10 seconds, and the throughput of connection n0 is calculated over each one-second period. Fig. 9 gives all the 1-second measurement results: the x-axis is the parameter varied, and the y-axis is the ratio of the throughput of connection n0 to the bottleneck bandwidth. (Note that for each fixed x value, there are 10 marks representing the ten 1-second measurements over the 10-second interval.) A drop-tail gateway is highly biased against bursty traffic unless the queue length is large, because drop-tail tends to drop multiple packets from the same bursty connection. The RED gateway best protects bursty traffic, because it makes decisions based on the average queue length instead of the instantaneous queue length. ARED does not perform as well as RED, but as long as the queue length is not kept too small (larger than 5 packets), ARED is not biased against bursty traffic and performs better than BLUE. BLUE performs better than drop-tail, but worse than RED and ARED, because it relies too heavily on instantaneous information.

[Figure 7: Queue occupancy under (a) drop-tail, (b) RED, (c) BLUE, and (d) ARED (x-axis: time; y-axis: queue length).]
[Figure 8: The average packet enqueue rate under (a) RED, (b) BLUE, and (c) ARED (x-axis: time in seconds; y-axis: average input rate in b/s).]
5 Extension of ARED for Differentiated Services - AIO

As ARED outperforms RED in (i) providing better service discrimination and fair bandwidth sharing and (ii) achieving low packet loss and high link utilization, it is natural to extend ARED to the AS architecture. Fig. 10 gives the pseudo code of the ARED with In and Out (AIO) mechanism. Succinctly, AIO can be considered a combination of two ARED instances, one for in-profile (IN) packets, and the other for all packets. AIO differs from RIO in the following aspects. First, AIO uses the average packet enqueue rate as the control variable. This control variable is directly related to the bandwidth requirement of assured services and is, in some sense, a more accurate traffic load index than the average queue length in the AS architecture. (AIO starts to drop packets when the average enqueue rate of IN packets is larger than the link bandwidth.) Second, AIO takes into account both the packet arrival and link utilization history to determine the packet dropping probability p: p is updated when the average enqueue rate exceeds a certain threshold or when the link becomes idle.
[Figure 9: Performance of drop-tail, RED, BLUE, and ARED under bursty traffic (panels (a)-(d)). The x-axis is the parameter varied (i.e., the buffer size, min_th, L, and L for drop-tail, RED, BLUE, and ARED, respectively), and the y-axis is the ratio of the throughput of connection n0 to the bottleneck bandwidth.]
6 Enhancement of AIO with Proportional Bandwidth Sharing

In this section, we analyze the stationary behavior of a bulk data transfer TCP connection under the AS architecture and show that RIO cannot achieve throughput assurance and proportional fairness. Based on the derived analytic results, we then propose an enhanced version of AIO, DAIO.
6.1 A TCP Performance Model Under the Differentiated Services Architecture

Several analytic models have been proposed to characterize the stationary behavior of a bulk data transfer TCP flow as a function of the packet loss ratio, round trip time, and/or other parameters. The most
/* Parameters:
   L_in:    threshold for the length of the IN packet queue;
   L:       threshold for the total packet queue length;
   qlen_in: length of the IN packet queue;
   qlen:    total packet queue length;
   pd_in (pd_out): dropping probability for IN (OUT) packets;
   d1_in (d1_out): amount of probability increment for IN (OUT) packets;
   avg_rate_in:    average arrival rate of IN packets;
   avg_rate:       average arrival rate of all packets. */
Upon receiving packet p:
1.  if (p.tag == IN) {
2.      reset_link_idle_timer(); reset_IN_queue_idle_timer();
3.      r_in = estimate_in_rate(p);
4.      r = estimate_rate(p);
5.      if (r_in > link_bandwidth || (qlen_in > L_in && r_in > β·link_bandwidth)) {
6.          if (pd_in > unif_rand(0, 1))
7.              drop(p);
8.          else {                        /* if enqueued, update the estimated average rates */
9.              avg_rate_in = r_in; avg_rate = r;
10.         }
11.         if (current_time - last_in_update > freeze_time) {
12.             pd_in ← pd_in + d1_in; last_in_update = current_time;
13.         }
14.     }
15. }
16. else {                                /* p.tag == OUT */
17.     reset_link_idle_timer();
18.     r = estimate_rate(p);
19.     if (r > link_bandwidth || (qlen > L && r > β·link_bandwidth)) {
20.         if (pd_out > unif_rand(0, 1))
21.             drop(p);
22.         else                          /* if enqueued, update the estimated average rate */
23.             avg_rate = r;
24.         if (current_time - last_update > freeze_time) {
25.             pd_out ← pd_out + d1_out; last_update = current_time;
26.         }
27.     }
28. }                                     /* else */
When the IN packet queue becomes empty:
29. set_IN_queue_idle_timer(freeze_time);
Upon IN queue idle timer expiration:
30. pd_in ← pd_in - d2_in;
31. set_IN_queue_idle_timer(freeze_time);
When the packet queue becomes empty (i.e., the link becomes idle):
32. set_link_idle_timer(freeze_time);
Upon link idle timer expiration:
33. pd_out ← pd_out - d2_out;
34. set_link_idle_timer(freeze_time);
Figure 10: ARED with In and Out (AIO).
well-known result (derived in [5, 7]) is that the steady-state throughput of a long-lived TCP connection is upper bounded by

    attainable throughput < (MSS / t_r) · (C / sqrt(p)),    (6.1)

where MSS is the segment size, t_r is the round trip time, C is a constant, and p is the packet loss probability. Ott et al. [19] derived a stationary distribution of the congestion window size. Mathis et al. [18] gave a more exact definition of the parameter p, namely the number of congestion signals per acknowledged packet. Padhye et al. [22] proposed a model that captures the impact of not only TCP's fast retransmit mechanism but also TCP's timeout mechanism on the throughput. These models are usually derived under the following assumptions:

(A1) The sender always has data to send if permitted by the congestion window;
(A2) The lifetime of the connection under consideration is at least on the order of several round trip times;
(A3) The receiver advertisement window is always large enough that the congestion window is not constrained by it;
(A4) Packet losses are independent events with the same, small probability.

Assumptions (A1)-(A2) are necessary to derive the stationary behavior of bulk transfer TCP connections, while (A3) is made to isolate the effect of the receiver advertisement window. The only assumption that hinders the results from being directly applied to the AS architecture is (A4), because the packet loss probability becomes dependent on the buffer occupancy (and indirectly on the congestion window sizes of competing connections) in RED.
Our model: We follow the stochastic model of TCP congestion control reported in [22]. The model focuses on TCP's congestion avoidance mechanism, in which the congestion window size, W, is increased by 1/W each time an ACK is received, and is halved each time packet loss is detected (i.e., triple duplicate ACKs are received). The model gives a relatively simple analytic expression for the throughput of a saturated TCP sender. Since we do not intend to derive an accurate throughput formula (but instead would like to study the effect of different bandwidth requirements on the throughput), we do not consider timeout effects here. We model the TCP congestion avoidance behavior in terms of "rounds." A round begins with the back-to-back transmission of W packets, where W is the current TCP congestion window size. Once all packets in a congestion window have been sent in a back-to-back manner, no other packets are sent until the first ACK is received for these W packets. Receipt of this ACK marks the end of the current round and the beginning of the next round. The duration of a round is equal to the round trip time and is assumed to be independent of the congestion window size. At the beginning of the next round, W' new packets will be sent, where W' is the new congestion window size. Let b be the number of packets acknowledged by an ACK. Many TCP receiver implementations send one cumulative ACK for two consecutive packets received (i.e., delayed ACK), and hence b is usually equal to 2. If W packets are sent in the first round and are all correctly received and acknowledged, then W/b ACKs will be received. Since each ACK increases the window size by 1/W, the window size becomes W + 1/b. That is, under the congestion avoidance mechanism and in the absence of packet loss, the window size increases linearly in time, with a slope of 1/b packets per round trip time.
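As an informal sanity check on this rounds model (not part of the original analysis), one can simulate the window evolution it describes and compare the resulting goodput against the familiar sqrt(3/(2bp))/t_r scaling; all names and parameter values below are illustrative:

    import random

    def simulate_rounds(p_loss, b=2, rtt=0.03, n_rounds=200000, w0=10.0):
        # Toy simulation of the congestion-avoidance "rounds" model: the window grows
        # by 1/b per loss-free round and is halved whenever a round contains a loss.
        # Timeouts are ignored, as in the text.  Returns goodput in packets per second.
        w, delivered = w0, 0
        for _ in range(n_rounds):
            lost = any(random.random() < p_loss for _ in range(int(w)))
            delivered += int(w)          # count this round's packets (coarse approximation)
            if lost:
                w = max(w / 2.0, 1.0)    # multiplicative decrease on loss detection
            else:
                w += 1.0 / b             # additive increase of 1/b per round
        return delivered / (n_rounds * rtt)

    if __name__ == "__main__":
        for p in (0.001, 0.01, 0.05):
            predicted = ((3.0 / (2 * 2 * p)) ** 0.5) / 0.03
            print(f"p={p}: simulated ~ {simulate_rounds(p):.0f} pkt/s, "
                  f"sqrt(3/(2bp))/rtt ~ {predicted:.0f} pkt/s")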
We assume that if a packet is lost in a round, all remaining packets transmitted until the end of the round are also lost. We therefore define p as the probability that a packet is lost, given that either it is the first packet in its round or the preceding packet is not lost. We define a "cycle" to be the period between two packet losses. Between the two loss instances, the sender is in the congestion avoidance phase, and the congestion window size increases by 1/b per round. Immediately after a packet loss, the window size is reduced by a factor of two. Suppose a TCP sender starts to send data at time t = 0. For any given time t > 0, we define N_t to be the number of packets transmitted and acknowledged in the interval [0, t], and B_t = N_t/t to be the goodput over that interval. Note that B_t is the number of packets sent and acknowledged per unit of time; thus B_t represents the goodput, rather than the throughput, of a connection. We define the long-term steady-state TCP goodput B as

    B = lim_{t→∞} B_t = lim_{t→∞} N_t / t.    (6.2)

To facilitate the analysis, we also define the following terms:

N_i: the number of packets sent in the ith cycle and eventually received.
D_i: the duration of the ith cycle.
r_ij: the duration of the jth round in the ith cycle.
W_i: the congestion window size at the end of the ith cycle.
X_i: the round in which a packet is lost in the ith cycle.
δ_i: the number of packets sent in the last ((X_i + 1)th) round of the ith cycle.
W_r: the ideal IN packet window size for a flow with target rate r.
p_d^in, p_d^out, p_d: the average packet dropping probabilities for IN packets, OUT packets, and all packets, respectively.

We are interested in establishing a relationship between the goodput B(r) of a TCP connection with target rate r and the RIO queue in different traffic congestion states, as well as that between B(r) and the packet dropping probability p. We model {W_i} as a Markov regenerative process with rewards {N_i}, and define

    B = E(N) / E(D).    (6.3)

As shown in Eq. (6.3), we need to derive E(N) and E(D) in order to derive the long-term throughput B. Consider the ith cycle shown in Fig. 11. The ith cycle begins after a triple duplicate ACK is received, at which point the current congestion window size becomes W_{i-1}/2, i.e., half of the window size before the triple duplicate ACK was received. The congestion window size then increases by 1/b in each round, and the number of packets in a congestion window increases by 1 every b rounds.
[Figure 11: Congestion window size evolution during a cycle (x-axis: number of rounds; y-axis: packets sent, distinguishing acknowledged IN packets, acknowledged OUT packets, and lost packets).]

Suppose that the first packet that is lost is the γ_i-th packet in this cycle and that the packet loss occurs in the X_i-th round. In this case, N_i = γ_i - 1. (Note that although an additional W_i - 1 packets are sent in the following round before the packet loss is detected, they will not be received and hence are not figured into N_i.) Thus,

    E(N) = E(γ) - 1.    (6.4)

Next, we derive E(D) and E(W). The derivation of E(γ) depends on the packet dropping probability (which is related to the current queue state and the class tag an arriving packet carries), and is deferred to the next subsection. In the ith cycle, the duration D_i = Σ_{j=1}^{X_i+1} r_ij. Under the assumption that the round trip time r_ij is a random variable that is independent of the congestion window size, we have
    E(D) = (E(X) + 1) · E(t_r),    (6.5)

where E(t_r) is the average round trip time. To derive E(X), we first observe the following relationship between W_i and X_i:

    W_i = W_{i-1}/2 + X_i/b.    (6.6)

To ease the analysis, we assume that X_i and W_i are mutually independent. It then follows that

    E(X) = b · E(W) / 2.    (6.7)

To derive E(W), we first express the total number N_i of packets sent in the ith cycle as

    N_i = Σ_{k=0}^{X_i/b - 1} (W_{i-1}/2 + k) · b + δ_i
        = (W_{i-1}/2) · X_i + (X_i/2)(X_i/b - 1) + δ_i
        = (X_i/2)(W_{i-1}/2 + W_i - 1) + δ_i,    (6.8)

where δ_i is the number of packets sent in the last round and is assumed to be uniformly distributed between 1 and W_i, i.e., E(δ) = E(W)/2, and the third equality results from the use of Eq. (6.6). Equating Eq. (6.4) with the expectation of Eq. (6.8), we have

    E(N) = E(γ) - 1 = (E(X)/2) · (3E(W)/2 - 1) + E(W)/2.    (6.9)

Substituting E(X) = b·E(W)/2 and solving for E(W) in Eq. (6.9), we then have
    E(W) = [ (b - 2)/4 + sqrt( (2 - b)²/16 + (3b/2)(E(γ) - 1) ) ] / (3b/4)    (6.10)
         ≈ sqrt( 24b(E(γ) - 1) ) / (3b),   if E(γ) is large enough.    (6.11)

Finally, from Eqs. (6.3), (6.5), (6.9), and (6.11), we have

    B = E(N)/E(D) = (E(γ) - 1) / ( (E(X) + 1) · E(t_r) )
      = 6(E(γ) - 1) / ( ( sqrt(24b(E(γ) - 1)) + 6 ) · E(t_r) )
      ≈ sqrt( 3E(γ)/(2b) ) / E(t_r),    (6.12)

and hence

    B ∝ sqrt(E(γ)) / E(t_r).
Derivation of E(γ) under the assured services architecture: In the ith cycle, there are (approximately) X_i rounds. Suppose the target rate is r. The window size designated for IN packets is then W_r = r · t_r, and the ratio of IN packets to all packets in a cycle can then be expressed as

    α_i = (W_r · X_i) / N_i = W_r / ( W_{i-1}/4 + W_i/2 + δ_i/X_i - 1/2 ) ≈ W_r / ( W_{i-1}/4 + W_i/2 ),    (6.13)

and the average ratio α can be approximated as

    α = E( W_r X_i / N_i ) ≈ 4W_r / (3 E(W_i)).    (6.14)

The probability that the lost packet in the ith cycle is the kth packet (i.e., γ_i = k) can be approximated as

    Pr(γ_i = k) = (1 - p_d^in)^{αk} · (1 - p_d^out)^{(1-α)k} · ( α p_d^in + (1 - α) p_d^out ).    (6.15)
Finally, E(γ) can be approximated as

    E(γ) = Σ_{k=1}^{∞} k · Pr(γ_i = k)
         ≈ ( α p_d^in + (1 - α) p_d^out ) / ( 1 - (1 - p_d^in)^α (1 - p_d^out)^{1-α} )²
         ≈ ( (4W_r/(3E(W))) p_d^in + (1 - 4W_r/(3E(W))) p_d^out ) / ( (4W_r/(3E(W))) p_d^in + (1 - 4W_r/(3E(W))) p_d^out )²
           (by approximating 1 - (1 - p_d^in)^α (1 - p_d^out)^{1-α} ≈ α p_d^in + (1 - α) p_d^out)
         = 1 / ( (4W_r/(3E(W))) p_d^in + (1 - 4W_r/(3E(W))) p_d^out ).    (6.16)

For two TCP connections with target rates r_1 and r_2, the ratio of their bandwidth shares can be approximated as

    B_1/B_2 ≈ [ E(t_r2) · sqrt( p_2^out - (p_2^out - p_2^in) · 4W_{r2}/(3E(W_2)) ) ] / [ E(t_r1) · sqrt( p_1^out - (p_1^out - p_1^in) · 4W_{r1}/(3E(W_1)) ) ].    (6.17)
In the under-subscribed case, p^in is much smaller than p^out and E(W) is much larger than W_ri, so Eq. (6.17) can be further simplified to

    B_1/B_2 ≈ [ E(t_r2) / E(t_r1) ] · sqrt( p_2^out (1 - 4W_{r2}/(3E(W_2))) / ( p_1^out (1 - 4W_{r1}/(3E(W_1))) ) )
            ≈ [ E(t_r2) / E(t_r1) ] · sqrt( p_2^out / p_1^out ).    (6.18)

If the packet dropping probabilities for OUT packets are set as follows:

    p_1^out / p_2^out = ( r_2 E(t_r2) )² / ( r_1 E(t_r1) )²,    (6.19)

then the ratio of the long-term goodputs is approximately

    B_1/B_2 ≈ r_1 / r_2.    (6.20)
6.2 An Extension of AIO - DAIO

Based on the derivation in Section 6.1, we extend AIO to provide proportional bandwidth sharing as follows. We call the extension the differentiated-rate AIO (DAIO) algorithm. The pseudo code that highlights the modification made to AIO to give DAIO is shown in Fig. 12. The basic idea behind DAIO is to treat flows with different target rates with different packet dropping probabilities. This is achieved by having each router keep a list of active flow information. Each packet of flow i carries the round trip time estimated by the end host in its header. When an IN packet of flow i arrives, the router uses Eq. (3.1) to calculate the average enqueue rate, r_i, of IN packets and stores it (along with the flow id and the round trip time t_ri) in the list. The entry corresponding to flow i in the list of active flow information is purged if no packet from flow i has been received within a short window (of length 500 ms). When an OUT packet of flow i arrives, the router calculates the packet dropping probability for OUT packets as p_i^out ∝ p^out / (r_i · t_ri)².
Lines inserted between line 3 and line 4 in Fig. 10:
1. if (p.flow_id exists in the flow information list)
2.     update_flow_info(p.flow_id, r_in, p.rtt);
3. else
4.     insert_flow_info(p.flow_id, r_in, p.rtt);
5. if (current_time - last_info_update > entry_purge_time) {   /* scan the flow information list and delete entries that have not been updated since the last scan */
6.     purge_flow_info_list();
7.     last_info_update = current_time;
8. }
Lines inserted between line 17 and line 18 in Fig. 10:
9. pd_out = calculate_dropping_probability(p.flow_id);
/* The other parts are the same as in AIO. */
Figure 12: Extension of AIO.
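The per-flow bookkeeping and the rate/RTT-dependent OUT dropping probability of DAIO can be illustrated with the following sketch (our illustration of the description above; the table structure, the reference flow used for normalization, and all default values are assumptions, not the paper's settings):

    class DaioFlowTable:
        def __init__(self, base_p_out=0.05, purge_window=0.5):
            self.base_p_out = base_p_out      # baseline OUT dropping probability p^out
            self.purge_window = purge_window  # entries idle longer than this are purged (s)
            self.flows = {}                   # flow_id -> (r_in, rtt, last_seen)

        def on_in_packet(self, flow_id, r_in, rtt, now):
            # record/refresh the estimated IN enqueue rate and the RTT carried in the packet
            self.flows[flow_id] = (r_in, rtt, now)

        def purge(self, now):
            # drop entries for flows that have been silent for more than purge_window
            self.flows = {f: v for f, v in self.flows.items()
                          if now - v[2] <= self.purge_window}

        def p_out(self, flow_id, ref_rate=1e5, ref_rtt=0.03):
            # OUT drop probability scaled as p^out / (r_i * t_ri)^2, here normalized against
            # an (assumed) reference flow, so that flows with larger target rates and RTTs
            # see gentler OUT dropping, as required by Eq. (6.19)
            r_in, rtt, _ = self.flows.get(flow_id, (ref_rate, ref_rtt, 0.0))
            scale = (ref_rate * ref_rtt) ** 2 / (r_in * rtt) ** 2
            return min(1.0, self.base_p_out * scale)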
7 Simulation Results

As analytically shown in Section 6 (and studied via simulation in [10, 14, 26]), RIO sometimes cannot fulfill the throughput assurance and fairness requirements of assured services, and usually favors connections with small target rates or small round trip times. In this section, we evaluate AIO and its extension DAIO, and compare their performance against that of (i) RIO and (ii) BLUE with in and out (BIO), which is the combination of two BLUE instances in the AS architecture.
Performance comparison in the single bottleneck topology: In the first experiment, we compare the performance of RIO, BIO, AIO, and DAIO under the single bottleneck topology (similar to Fig. 5).
[Figure 13: The performance of RIO, BIO, AIO, and DAIO in terms of (a) packet loss ratio and (b) average throughput under the well-subscribed case in the simple bottleneck topology.]
[Figure 14: The performance of RIO, BIO, AIO, and DAIO in terms of (a) packet loss ratio and (b) average throughput under the over-subscribed case in the simple bottleneck topology.]

There are in total 16 TCP connections competing for the bandwidth of a single bottleneck link, labeled from 1 to 16. The bandwidth of the bottleneck link is set to 4.0 Mbps, and the round trip time of all connections is set to 30 ms. In the well-subscribed case, the total subscribed rate is close to the bottleneck bandwidth, and the target rate of flow i is set to (i + 2) × 24 kbps. In the under-subscribed case, half of the bottleneck bandwidth is not subscribed, and the target rate of flow i is set to (i + 2) × 12 kbps. Figs. 13 and 14 give the simulation results under the well-subscribed and over-subscribed cases. As shown in both Fig. 13 and Fig. 14, AIO and DAIO perform best in terms of throughput assurance and fairness: the bandwidth allocation to flows with different target rates is very close to their service level agreements. The performance difference between AIO and DAIO is that AIO still favors small-target-rate flows. Among all the mechanisms, DAIO achieves the best bandwidth allocation, while BIO performs worst in terms of bandwidth allocation and packet loss rate. To study why BIO performs worst, we plot one instance of the queue length variation under the well-subscribed case in Fig. 15. The queue length variation is rather stable under RIO, AIO, and DAIO, but is rather irregular under BIO. This suggests that BLUE cannot be straightforwardly extended to the AS architecture.
Performance comparison in the merge topology: The Internet is composed of many networks interconnected by border routers and gateways, and is much more complicated than the single bottleneck link topology. To validate whether or not the queue management mechanisms exhibit similar behaviors in networks with more complicated topologies, we conduct experiments in the topology shown in Fig. 16. Despite its simplicity, this topology allows us to investigate the impact of flow aggregation on the performance. There are in total 8 TCP connections, each with a target rate of 600 kbps and a round trip time of 30 ms. Connections are merged at each aggregation point ni and mj (0 ≤ i ≤ 3 and 0 ≤ j ≤ 1).
[Figure 15: Average queue length variation under (a) RIO, (b) BIO, (c) AIO, and (d) DAIO under the well-subscribed case in the simple bottleneck topology (x-axis: time; y-axis: queue length).]
[Figure 16: Single bottleneck topology with merging points. Sources s0-s7 are aggregated pairwise at points n0-n3 and again at m0-m1 before the bottleneck link between routers r0 and r1; the destinations d0-d7 attach beyond r1.]
[Figure 17: Performance of RIO, BIO, AIO, and DAIO in terms of (a) packet loss ratio and (b) average throughput in the single bottleneck topology with merging points.]

Hence, the profilers have the following target rates: 600 kbps from node si to ni, 1.2 Mbps from node ni to node mj, and 2.4 Mbps from node mj to r0. The bottleneck bandwidth is set to 5 Mbps. This topology represents a hierarchy of decreasing bandwidth ending in a 5 Mbps bottleneck link. As shown in Fig. 17, the bandwidth is fairly shared and the packet loss ratio is kept low under AIO and DAIO. In contrast, the bandwidth is unfairly distributed under RIO and BIO, and the throughput cannot be maintained at the requested target rate for certain flows. This demonstrates the effectiveness of the proposed mechanisms in more complicated merging topologies. From the simulation results, we conclude that DAIO can realize bandwidth sharing in a rate-proportional manner. As compared with AIO, RIO, and BIO, it achieves the best performance in both the well-subscribed and over-subscribed cases. This is because DAIO monitors and controls congestion using the average packet enqueue rate, which we believe is a more adequate congestion index in the AS architecture, and moreover, DAIO figures the target rate and RTT information into the calculation of the OUT packet dropping probability.
8 Concluding Remarks

This paper focuses on the design, analysis, and evaluation of a new active queue management mechanism, called ARED. ARED uses the packet enqueue rate as the congestion index and aims to keep the queue stabilized at an operational point at which the aggregate packet enqueue rate is approximately equal to, or slightly below, the packet clearing rate (link capacity). Via simulation, we have shown that ARED outperforms all of the active queue management mechanisms currently known, perhaps except BLUE, in terms of packet loss ratio and link utilization.
We have also applied ARED in the context of the assured services architecture, and proposed ARED with IN and OUT (AIO). We analyzed the stationary goodput of TCP under the AS architecture and showed that RIO cannot achieve throughput assurance and proportional fairness. To realize proportional bandwidth sharing, we then proposed, based on the derived analytic results, an enhanced version of AIO, called DAIO, that drops OUT packets according to the different contracted flow target rates and RTTs. The simulation results show that proportional bandwidth allocation can be achieved among users with different target rates under DAIO. We have also identified several avenues for future research. First, as part of our ongoing work, we are studying several extensions of ARED to further improve its adaptiveness and its capability to deal with non-responsive flows (e.g., UDP). By measuring and understanding the traffic characteristics of the bottleneck link, we will be able to optimally parameterize ARED and devise strategies for dealing with non-responsive flows. As discussed in Section 2, ECN can effectively decouple congestion notification from packet dropping, and thus effectively reduces the packet loss ratio. This is because ECN reduces the elapsed time between the instant when congestion occurs and the instant the source responds. We believe an alternative approach is to combine active queue management and drop-front, since this approach can also signal congestion early. This is a topic of future research.
References

[1] F. M. Anjum and L. Tassiulas. Fair bandwidth sharing among adaptive and non-adaptive flows in the Internet. Proc. of IEEE INFOCOM'99, March 1999.
[2] R. Braden, D. Clark, J. Crowcroft, B. Davie, S. Deering, D. Estrin, S. Floyd, V. Jacobson, G. Minshall, C. Partridge, L. Peterson, K. Ramakrishnan, S. Shenker, J. Wroclawski, and L. Zhang. Recommendations on queue management and congestion avoidance in the Internet. Work in progress, Internet draft, March 1997.
[3] D. D. Clark and W. Fang. Explicit allocation of best-effort packet delivery service. IEEE/ACM Trans. on Networking, Vol. 6, No. 4, August 1998, pp. 362-373.
[4] W. Feng, D. D. Kandlur, D. Saha, and K. G. Shin. BLUE: A new class of active queue management algorithms. In submission.
[5] S. Floyd. Connections with multiple congested gateways in packet-switched networks part 1: one-way traffic. Computer Communication Review, Vol. 21, No. 5, October 1991, pp. 30-47.
[6] S. Floyd. TCP and explicit congestion notification. Computer Communication Review, Vol. 24, No. 5, October 1994, pp. 10-23.
[7] S. Floyd and K. Fall. Promoting the use of end-to-end congestion control in the Internet. IEEE/ACM Trans. on Networking, August 1999.
[8] S. Floyd and V. Jacobson. Random early detection gateways for congestion avoidance. IEEE/ACM Trans. on Networking, Vol. 1, No. 4, August 1993.
[9] E. Hashem. Analysis of random drop for gateway congestion control. MIT Technical Report, 1990.
[10] J. Ibanez and K. Nichols. Preliminary simulation evaluation of an assured service. Work in progress, IETF draft, August 1998.
[11] V. Jacobson. Presentations to the IETF Performance and Congestion Control Working Group. August 1989.
[12] V. Jacobson. Differentiated services architecture. Talk in the Int-Serv WG at the Munich IETF, August 1997.
[13] D. Lin and R. Morris. Dynamics of random early detection. Proc. of ACM SIGCOMM'97, September 1997.
[14] W. Lin, R. Zheng, and J. C. Hou. How to make assured services more assured. Proc. of the 7th IEEE Int'l Conf. on Network Protocols, October 1999.
[15] A. Mankin. Random drop congestion control. Proc. of ACM SIGCOMM, September 1990, pp. 1-7.
[16] A. Mankin and K. K. Ramakrishnan. Gateway congestion control survey. RFC 1254, August 1991.
[17] M. May, J-C. Bolot, and A. Jean-Marie. Simple performance models of differential services schemes for the Internet. Proc. of IEEE INFOCOM'99, March 1999.
[18] M. Mathis, J. Semke, and J. Mahdavi. The macroscopic behavior of the TCP congestion avoidance algorithm. Proc. of ACM SIGCOMM'97, September 1997.
[19] A. Misra and T. J. Ott. The window distribution of idealized TCP congestion avoidance with variable packet loss. Proc. of IEEE INFOCOM'99, March 1999.
[20] K. Nichols, V. Jacobson, and L. Zhang. A two-bit differentiated services architecture for the Internet. Work in progress, IETF draft, November 1997.
[21] T. J. Ott, T. V. Lakshman, and L. H. Wong. SRED: stabilized RED. Proc. of IEEE INFOCOM'99, March 1999.
[22] J. Padhye, V. Firoiu, D. Towsley, and J. Kurose. Modeling TCP throughput: a simple model and its empirical validation. Proc. of ACM SIGCOMM'98, September 1998.
[23] V. Paxson. Measurements and analysis of end-to-end Internet dynamics. Ph.D. thesis, University of California, Berkeley, April 1997.
[24] A. Romanow and S. Floyd. Dynamics of TCP traffic over ATM networks. Proc. of ACM SIGCOMM, August 1994, pp. 79-88.
[25] I. Stoica, S. Shenker, and H. Zhang. Core-stateless fair queueing: achieving approximately fair bandwidth allocations in high speed networks. Proc. of ACM SIGCOMM'98, October 1998, pp. 118-130.
[26] I. Yeom and A. Reddy. Realizing throughput guarantees in a differentiated services network. Proc. of ICMCS, June 1999.
[27] UCB, LBNL, VINT Network Simulator. http://www-mash.cs.berkeley.edu/ns/.