PUBLICATION INTERNE No 1478
BANDWIDTH SHARING UNDER THE ASSURED FORWARDING PHB
ISSN 1166-8687
OCTAVIO MEDINA, JULIO OROZCO, DAVID ROS
IRISA CAMPUS UNIVERSITAIRE DE BEAULIEU - 35042 RENNES CEDEX - FRANCE
INSTITUT DE RECHERCHE EN INFORMATIQUE ET SYSTÈMES ALÉATOIRES Campus de Beaulieu – 35042 Rennes Cedex – France Tél. : (33) 02 99 84 71 00 – Fax : (33) 02 99 84 71 71 http://www.irisa.fr
Bandwidth Sharing under the Assured Forwarding PHB

Octavio Medina*, Julio Orozco**, David Ros***
Theme 1 | Networks and systems
Projet Armor
Publication interne no 1478 | September 2002 | 29 pages

Abstract: The DiffServ Assured Forwarding (AF) Per-Hop Behavior Group defines a differentiated forwarding of packets in four independent classes, each class having three levels of drop precedence. Specific end-to-end services based on this PHB are still being defined. A particular type of service that could assure a given rate to a traffic aggregate has been outlined elsewhere. In such a service, a fair distribution of bandwidth is one of the main concerns. This paper presents experimental work carried out to evaluate how AF distributes bandwidth among flows under different load conditions and traffic patterns. We focused on the effect that marking mechanisms have on bandwidth sharing among flows within a single AF class. The traffic types we used include UDP flows, individual and aggregated TCP flows, a mix of TCP and UDP, TCP sessions with heterogeneous RTTs, as well as color-blind and color-aware re-marking at the aggregation point for TCP flows. Tests were performed on real and simulated networks. We found certain conditions under which AF distributes bandwidth fairly among non-adaptive UDP flows and TCP aggregates. Finally, we evaluate a basic rule for setting the parameters of the two-rate Three-Color Marker (trTCM) conditioning algorithm in order to achieve a better bandwidth distribution for TCP flows.

Key-words: IP quality of service, DiffServ testing, Assured Forwarding, Assured Rate service.

(Résumé: see overleaf)
[email protected] [email protected] ***
[email protected],
[email protected] *
**
Centre National de la Recherche Scientifique (UPRESSA 6074) Université de Rennes 1 – Insa de Rennes
Institut National de Recherche en Informatique et en Automatique – unité de recherche de Rennes
Bandwidth Sharing under the Assured Forwarding Behavior

Résumé: The Assured Forwarding (AF) per-hop behavior (PHB) group of the DiffServ architecture defines a differentiated treatment of packets in four independent classes, each of which has three drop-precedence levels. Services based on this PHB are still being defined. A particular type of service that could guarantee a given rate to a flow aggregate has been described in the literature. In such a service, a fair distribution of the bandwidth is one of the main problems. This report presents experiments carried out to evaluate how bandwidth is distributed among flows within an AF class, using various traffic profiles and network loads. We were interested in the impact that marking mechanisms have on bandwidth sharing. The traffic types employed include UDP flows, individual and aggregated TCP flows, a mix of TCP and UDP, TCP sessions with heterogeneous RTTs, as well as the re-marking of TCP flows at the aggregation point, in either color-aware or color-blind mode. Finally, we evaluate a simple rule for setting the parameters of the trTCM algorithm (two-rate, three-color marker) that yields a better bandwidth distribution among TCP flows.

Mots clés: IP quality of service, DiffServ testing, Assured Forwarding, Assured Rate service.
1 Introduction

In the context of Assured Forwarding (AF) [1], one of the standardized PHB groups of the DiffServ architecture defined by the IETF, the idea of an enhanced IP service has been discussed, whether implicitly or explicitly, at least in the IETF [2] and in the SEQUIN Project [3]. The basic idea is to provide a committed rate for a particular TCP aggregate using AF. Therefore, names such as "IP+", "Assured Rate" or "Assured Bandwidth" have been used.

The fair sharing of the available bandwidth is a major issue regarding this service. The question to solve is whether AF can be used to get a particular distribution of bandwidth for the traffic forwarded within a single AF class. Since aggregation is a key factor in the DiffServ architecture, even if the Service Level Specification (SLS) is defined at the aggregate level, an improvement of QoS should be seen by individual flows.

In the framework of the research activities conducted for the European TF-NGN Task Force [4], we have followed an experimental approach to study bandwidth sharing in AF. We have performed several tests using commercial technology, as well as simulations for certain significant conditions that cannot be easily generated in the TF-NGN test platform. The results of these tests are the subject of this report. Given that AF is actually implemented in routers with specific mechanisms (conditioners, schedulers and queue management algorithms), we are interested in the effect that different marking schemes have on bandwidth sharing for flows forwarded in a single class.

Several scenarios were studied. The first experiment focused on the tuning of a router to get a particular behavior in terms of AF conformance (section 2). Then, in the second experiment, described in section 3, we tested bandwidth sharing for UDP flows based on differentiated marking. Section 4 presents our study of bandwidth sharing and protection for TCP flows.
The first part of the section describes the tests carried out to verify the effect of marking on bandwidth sharing, using a simple rule we propose for setting the conditioner parameters. Afterwards, the simulation results for scenarios whose experimental testing was not feasible are presented. These scenarios included the effect of heterogeneous RTTs and aggregation with color-blind and color-aware re-marking. Finally, we present the conclusions and perspectives of our work in section 5.
2 Tuning of an Active Queue Management Algorithm

2.1 Overview
The goal of the first test activity was to find the required parameter settings of a commercially available router so that it conformed with the AF specifications regarding drop precedence within a class. From the AF specification [1], an Active Queue Management (AQM) mechanism is required for each class (queue) in the router. The parameters of this mechanism need to be set so that, in the presence of congestion, all red packets shall be dropped before any yellow packet. If congestion persists, all yellow packets shall be dropped before any green one. The router used in the tests is a Cisco 7200 with IOS 12.1. The available AQM mechanism in this router is Weighted RED, a proprietary algorithm in which a set of parameters (max_p, th_min and th_max) can be set for each value of the IP precedence field.
2.2 Weighted RED
Figure 1 shows the general concept of WRED for a three-precedence queue. It is based on the RED algorithm presented in [5] and on RIO, its multi-level derivation [6]. The horizontal axis represents the average queue size, and the vertical axis represents the drop probability for packets. Two thresholds (th_min and th_max) define the drop probability for the packets of each color. If the average queue size is less than th_min, no packets are discarded. If the average queue size is between th_min and th_max, the drop probability grows linearly, in an effort to signal the sources to reduce their transmission rate when they detect packet loss (congestion avoidance). If the average queue size continues to grow, exceeding th_max, it means that congestion persists, and all packets will be discarded (congestion control) [7]. The WRED algorithm, just like RED, uses a low-pass filter (exponentially weighted moving average) to obtain the average queue size:

    avg <- (1 - w_q) * avg + w_q * q

The weight w_q defines whether the instantaneous queue size (q) or the previously computed average (avg) has more effect on the calculation of the current average queue size. The selection of values for w_q, the thresholds and the maximum loss probability for each precedence (color) is the key to differentiation. This is
quite a complex task, considering that, for a single, independent RED algorithm, parameter setting is neither a fully understood nor a closed issue.

[Figure 1: Weighted RED parameters]
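The per-color drop decision described above can be sketched in a few lines. This is a minimal illustrative model of a WRED-like AQM, not Cisco's actual implementation; the class names and structure are our own.

```python
import random

# Minimal sketch of a WRED-like drop decision (illustrative only, not the
# router's actual implementation). Each color has its own thresholds and
# maximum drop probability; the queue average is the EWMA defined above.

class WredColor:
    def __init__(self, th_min, th_max, max_p):
        self.th_min = th_min        # below this average: never drop
        self.th_max = th_max        # at or above this average: always drop
        self.max_p = max_p          # drop probability reached at th_max

    def drop_probability(self, avg):
        if avg < self.th_min:
            return 0.0
        if avg >= self.th_max:
            return 1.0
        # linear ramp between the two thresholds (congestion avoidance)
        return self.max_p * (avg - self.th_min) / (self.th_max - self.th_min)

class Wred:
    def __init__(self, colors, w_q):
        self.colors = colors        # maps color name -> WredColor settings
        self.w_q = w_q              # EWMA weight
        self.avg = 0.0

    def on_packet(self, color, queue_len):
        """Return True if the arriving packet should be dropped."""
        # avg <- (1 - w_q) * avg + w_q * q  (low-pass filter)
        self.avg = (1.0 - self.w_q) * self.avg + self.w_q * queue_len
        return random.random() < self.colors[color].drop_probability(self.avg)
```

With a large w_q, avg tracks the instantaneous queue size closely; this is the high reactiveness that turns out to matter in the results below.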
2.3 Platform
The test was carried out in an ATM WAN (Fig. 2), in which a 220 kB/s (1.8 Mb/s) PVC was created to connect a Cisco 7200 router (R1) located at RedIris (in Madrid, Spain) and an ATM switch (R2) at IRISA (Rennes, France). The DANTE network's ATM switch was the intermediary node, and did not have an effective role in the test.

[Figure 2: Test platform. A 220 kB/s PVC links R1 (Cisco 7200, with WRED) at RedIris to R2 (ATM switch) at IRISA, through DANTE's ATM switch.]
2.4 Specification
In order to find the right settings for WRED, an experiment with the following basic steps was designed and carried out:
- Send a CBR flow at a rate that exceeded the capacity of the link.
- Mark packets at the ingress interface of the router (with different proportions of green, yellow and red packets in several runs).
- Apply WRED at the output of the same router.
- Vary WRED parameters until the green packets are effectively dropped after all the yellow ones, and these, after all the red ones.
The CBR traffic was generated as a UDP stream of 285 kB/s going from a host at RedIris to its peer at IRISA. Packet size is 1 kB. This means that:

- Sending rate: R (285 kB/s).
- Effective link capacity: 0.75R (due to overhead, 215 kB/s is the maximum observed rate).
- Loss rate: 0.25R.
Router R1 marks packets at its ingress interface by means of a two-rate Three-Color Marker (trTCM) [9]. This marker (Fig. 3) is composed of two cascaded token buckets, Tc (committed) and Te (excess), configured with four parameters: the rate rc of the first bucket defines the CIR (Committed Information Rate), while its size bc defines the CBS (Committed Burst Size). The rate re of the second bucket defines the PIR (Peak Information Rate) and its size be defines the PBS (Peak Burst Size). When a packet arrives at the trTCM, its conformance is first checked against the excess bucket. If it does not conform to the PIR, it will be marked "red" and the other bucket will not be used. If it conforms to the PIR, it will then be checked against the committed bucket, where it will be marked either "green", if it conforms to the CIR, or "yellow", if it does not. For these tests, it was decided that green packets would go from 30 to 70% of the total rate (with increments of 10%), while yellow packets went from 15 to 30% of that rate (with 5% increments). Both bucket sizes were set at 4000 bytes; this small size was chosen given the absence of bursts when conditioning CBR sources.
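The marking logic just described (check against the PIR first, then the CIR; green packets consume tokens from both buckets) can be sketched as a color-blind trTCM in the spirit of RFC 2698. This is a simplified sketch, not a reference implementation; rates are in bytes/s and bucket sizes in bytes.

```python
import time

# Minimal color-blind trTCM sketch after RFC 2698: an "excess" bucket
# Te(PIR, PBS) is checked first, then a "committed" bucket Tc(CIR, CBS).

class TrTcm:
    def __init__(self, cir, cbs, pir, pbs):
        self.cir, self.cbs = cir, cbs
        self.pir, self.pbs = pir, pbs
        self.tc, self.te = cbs, pbs      # both buckets start full
        self.last = time.monotonic()

    def _refill(self, now):
        dt = now - self.last
        self.last = now
        self.tc = min(self.cbs, self.tc + self.cir * dt)
        self.te = min(self.pbs, self.te + self.pir * dt)

    def mark(self, size, now=None):
        self._refill(time.monotonic() if now is None else now)
        if self.te < size:               # does not conform to the PIR
            return "red"
        if self.tc < size:               # conforms to the PIR but not the CIR
            self.te -= size
            return "yellow"
        self.tc -= size                  # green packets take tokens from
        self.te -= size                  # both buckets
        return "green"
```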
[Figure 3: Setting of trTCM parameters. rc goes from 0.3 to 0.7 of the link's speed (increments of 0.1); re goes from 0.15 to 0.3 of the link's speed (increments of 0.05); bc = be = 4000 bytes.]

One last traffic pattern was defined so that green packets exceeded the link's capacity. These conditions were attained with an 80-10-10 (green-yellow-red) proportion. Figure 4 shows the color distribution of packets for input flows after marking.
2.5 Results
Settings were changed from their default values until acceptable results, in terms of the stated behavior, were obtained. The process of finding a proper set of values is roughly as follows: first, thresholds were moved to non-overlapping positions; then, max_p was raised for red and yellow packets. The combined effect of these changes did not lead to the desired absolute protection of green packets, a behavior finally observed when changing the weight w_q. Table 1 and Fig. 5 show the final WRED configuration. Threshold values are expressed in number of packets, while max_p = m corresponds to a drop probability of 1/m. The weight parameter w_q is set according to w_q = 1/2^n, with n a positive integer [8].
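Note the indirect encoding used by the router configuration: a configured max_p = m stands for a drop probability of 1/m, and the weight constant n stands for w_q = 1/2^n. A trivial helper (our own, for illustration) decodes the settings of Table 1:

```python
# Decode the router's WRED configuration values: max_p = m means a drop
# probability of 1/m, and weight constant n means w_q = 1 / 2**n.

def decode_wred(m, n):
    return 1.0 / m, 1.0 / 2**n

# Final settings from Table 1 below: green m=100, red m=2, both with n=3
print(decode_wred(100, 3))  # green: p = 0.01, w_q = 0.125
print(decode_wred(2, 3))    # red:   p = 0.5,  w_q = 0.125
```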
[Figure 4: Input traffic distribution (after marking)]

    color    DSCP   th_min   th_max   max_p
    green      2      63       64      100
    yellow     4      25       50       10
    red        6       1       20        2

    weight constant n = 3

Table 1: Final WRED settings

[Figure 5: Final WRED settings]
Table 2 shows how green packets are effectively protected. The dropping probability has to be very aggressive for red and yellow packets, while a plain tail-drop policy is required for green packets. It should be noticed that w_q has to be large (the exponent n is small compared to its default value of 9), so that the mechanism is highly reactive to instantaneous changes in queue size, rather than to the average queue size. Typically, RED seeks to avoid long-term congestion and accepts transient bursts. This is no longer true in the case of absolute protection of higher-priority packets, since if a transient burst of red packets is accepted, incoming green packets will be discarded: hence the higher reactiveness required in this AF scenario.

    Input Packets (%)        Losses (%)
    Green   Yellow     Green   Yellow   Red
     30       15         0        0      43
     40       20         0        0      58
     50       25         0        0      86
     60       30         0       37      98
     70       30         0       69      99
     80       10         2       94      99

Table 2: Results for final WRED settings
3 Bandwidth Sharing for UDP Flows
3.1 Overview
The goal of the next test activity is to observe whether AF can differentially distribute bandwidth among flows based on marking. It is important to know whether different marking parameters may result in a weighted distribution of the available bandwidth. By "weighted distribution of bandwidth", we mean that if two flows with the same total rate, but different assured (green) packet rates, traverse a congested link, the one with more green packets will get more bandwidth than the other. The tests were done for two marking scenarios: using three colors (green, yellow, red), and two colors (green/in, red/out). The three-color test is presented first.
3.2 Platform
This experiment was carried out using the same platform as for the previous test activity (section 2.3).
3.3 Three-Color Marking

3.3.1 Specification
These are the basic actions carried out for this experiment:
- Send several CBR flows at the same rate; the aggregated rate exceeds the available link capacity.
- At the network's ingress interface, mark each flow with different rates of assured (green) packets.
- Measure the arrival rates of each flow.
- Vary the total amount of green packets as a percentage of the available capacity.

This experiment is designed to answer two questions:

- Do flows get their assured rate by means of marking?
- If link capacity exceeds the total assured rate, how is excess capacity shared among flows?

Four sets of tests were executed, varying the total amount of green packets as a percentage of the available link capacity. It was set at 25, 50, 75 and 100%. For each set, four CBR traffic streams, A, B, C and D, at 125 kB/s each (1 Mb/s), were sent from the sources. In order to check the differentiated sharing of bandwidth, it was defined that each one of the four sources would have a different assured (green) bit-rate. These rates were set at 40, 30, 20 and 10% of the total assured rate (not the rate of the flow). For the three-color test, one trTCM was configured at the input interface of router R1 for each one of the sources. The yellow rate (i.e., the second token bucket) was fixed at 1/2 of the corresponding green rate. WRED was configured with the final settings described in the previous section (see Table 1). As an example of the input for the three-color test, Figs. 6(a) and 6(b) show the distribution of packets when the aggregated green packet rate is 50%. The plot in Fig. 6(a) shows the color distribution for each one of the flows, while the plot in Fig. 6(b) shows the aggregated input traffic. In this example, the total input rate is 500 kB/s (4 x 125 kB/s), while link capacity is 215 kB/s (43% of the input rate). If the total assured rate must account for 50% of the link's capacity, then it should be set at 107.5 kB/s. From this total rate, 40% will be sent in flow A (43 kB/s), 30% in
flow B, 20% in flow C and 10% in flow D. In the case of flow A, the assured rate (43 kB/s) accounts for 43/125 = 34.4% of the total flow's rate. Table 3 summarizes the flow rate specifications.
[Figure 6: Input packet distribution (three-color test). (a) Per flow, green packet rate = 0.5 x capacity; (b) Total]

    Green Rate /     Total         Assured Rate (kB/s)
    Link Capacity  Green Rate  A (40%)  B (30%)  C (20%)  D (10%)
        25%           53.75     21.50    16.13    10.75     5.38
        50%          107.50     43.00    32.25    21.50    10.75
        75%          161.25     64.50    48.38    32.25    16.13
       100%          215.00     86.00    64.50    43.00    21.50

Table 3: Rates of assured (green) packets at input (three-color test)

The rates of each one of the four flows were measured at the destination for five runs of each set (25, 50, 75 and 100% of green packets with respect to the link's capacity).
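The arithmetic behind Table 3 can be reproduced in a few lines (our own sketch of the rate computation, using the 215 kB/s effective capacity and the 40/30/20/10 split):

```python
# Reproduce the arithmetic behind Table 3: the total assured (green) rate is
# a fraction of the 215 kB/s effective link capacity, split 40/30/20/10 %
# among flows A-D. Each flow's total sending rate is 125 kB/s.

LINK_CAPACITY = 215.0            # kB/s (effective, after overhead)
WEIGHTS = {"A": 0.40, "B": 0.30, "C": 0.20, "D": 0.10}

def assured_rates(green_fraction):
    total = LINK_CAPACITY * green_fraction
    return {f: round(total * w, 2) for f, w in WEIGHTS.items()}

rates = assured_rates(0.50)
print(rates)                          # {'A': 43.0, 'B': 32.25, 'C': 21.5, 'D': 10.75}
print(round(rates["A"] / 125 * 100, 1))  # flow A's green share of its own rate: 34.4
```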
3.3.2 Results
Table 4 and Fig. 7 show the results of the tests. The figure plots the average output rate of the four flows for the four aggregated assured rates (25, 50, 75 and 100%). The same values appear in the columns labeled "Real" in Table 4, which also shows the desired assured rate for each flow.
    Total Assured Rate /        A                 B                 C                 D
    Link Capacity         Assured   Real    Assured   Real    Assured   Real    Assured   Real
         25%               21.50    63.30    16.13    56.66    10.75    53.20     5.38    43.02
         50%               43.00    71.31    32.25    56.19    21.50    52.37    10.75    34.30
         75%               64.50    86.65    48.38    65.03    32.25    41.63    16.13    21.83
        100%               86.00    82.40    64.50    66.05    43.00    43.19    21.50    20.27

Table 4: UDP flow rates (assured and measured, in kB/s)
[Figure 7: Weighted bandwidth distribution]

The results can be separated into two cases (see Table 5):

- Excess fairness: sources obtain their assured bandwidth, and excess resources are shared equally among all the flows. This behavior is observed in the first two input profiles, where the aggregated assured rate was 25% and 50%.
- Weighted fairness: sources obtain their assured bandwidth, and excess resources are shared differentially, based on the weights specified by the assured rates. This case is observed when the aggregated assured rate was 75%.
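The two sharing models can be captured in a small sketch (our own idealization, useful for comparing against the measured "Real" columns of Table 4):

```python
# Idealized sharing models, for comparison with the measurements. "Excess
# fairness" gives each flow its assured rate plus an equal share of the
# spare capacity; "weighted fairness" splits the whole capacity in
# proportion to the assured rates.

def excess_fair(assured, capacity):
    spare = capacity - sum(assured.values())
    return {f: r + spare / len(assured) for f, r in assured.items()}

def weighted_fair(assured, capacity):
    total = sum(assured.values())
    return {f: r * capacity / total for f, r in assured.items()}

# 50% profile from Table 3 (rates in kB/s, capacity 215 kB/s)
assured = {"A": 43.0, "B": 32.25, "C": 21.5, "D": 10.75}
print(excess_fair(assured, 215))    # e.g. A: 43.0 + 26.875 = 69.875
print(weighted_fair(assured, 215))  # e.g. A: 86.0 (40% of capacity)
```

Under the 50% profile, the measured rate of flow A (71.31 kB/s) is close to the excess-fair prediction (69.875 kB/s), which matches the classification above.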
    Total Assured Rate /     Excess Bandwidth Share (%)
    Link Capacity            A        B        C        D
         25%               25.74    24.95    26.14    23.17
         50%               26.54    22.45    28.94    22.08
         75%               41.12    30.89    17.41    10.58
        100%                0.00    74.66    25.34     0.00

Table 5: UDP excess bandwidth sharing
3.4 Two-Color Marking
3.4.1 Specification
For the two-color test, most of the considerations described in the previous section are kept:

- Four marking schemes, in which the total assured rate is 25, 50, 75 and 100% of the link's capacity.
- In each scheme, four UDP flows (A, B, C, D) are injected into the AF testbed. Each flow has a total rate of 125 kB/s.
- In all four schemes, flow A gets 40% of the assured packets, B gets 30%, C gets 20%, and D gets 10%.

The elements that change are:

- A single token bucket was used.
- A different WRED configuration in router R1 was required, considering only two precedence levels (Table 6).
    color    DSCP   th_min   th_max   max_p
    green      2      63       64      100
    red        6      20       40       20

    weight constant n = 3

Table 6: WRED settings (two-color test)
Figures 8(a) and 8(b) show the color distribution for the two-color test when green packets account for 50% of the link's capacity.

[Figure 8: Input packet distribution (two-color test). (a) Per flow; (b) Total]
3.4.2 Results
Figure 9 shows the summary of results for the two-color test. These results show very significant differences between runs. In general, it can be said that low-priority sources obtain far more than their fair (excess or weighted) share. Only when the aggregated assured packets are near 100% of the link capacity is a weighted distribution observed. The lack of differentiation can be attributed to the low percentage of green (in) packets, which means that there is not enough "information" for performing a good differentiation.
4 Bandwidth Sharing for TCP Flows

In this section, we focus on the study of bandwidth sharing for TCP flows. We first present the tests performed to verify the effect of non-responsive UDP traffic and trTCM parameter settings on bandwidth sharing for aggregated TCP flows. Other elements, such as the presence of heterogeneous round-trip times (RTTs), aggregation and re-marking, were studied by simulation, since the conditions required for experimental testing were not readily available.
[Figure 9: Weighted bandwidth distribution]
4.1 Effect of Marker Parameters on Bandwidth Sharing
4.1.1 Overview
The goal of this test is to study the effect of trTCM parameter settings and UDP traffic on bandwidth sharing among TCP aggregates. Figure 10 shows the diagram of the trTCM. This marker uses two token buckets. One is used to mark low-priority, out-of-profile traffic (red packets), and the other, to mark medium-priority, excess traffic (yellow packets) and high-priority, committed traffic (green packets). Each token bucket is characterized by two parameters: a rate r and a bucket size b. So, the token buckets in a trTCM can be denoted as:

- Tc(rc, bc) for committed traffic.
- Te(re, be) for excess traffic.

The work reported in [10] shows, based on simulations, that for a particular TCP flow, its burst size is directly proportional to its rate. Therefore, it is proposed in [10] that, taking the committed rate rc as an objective parameter, the committed bucket size bc should be proportional to rc, because for a larger rc a larger bc is required to
accommodate larger bursts.

[Figure 10: Two-Rate Three-Color Marker]

Additionally, the trTCM algorithm states that every time a green packet is produced, tokens are taken from both buckets [9]. So, re must be at least equal to rc, and larger if any yellow packets are to be produced. With this condition in mind, and in order to simplify settings, a proportional relation between rates is also proposed. In the current test, this approach is explored using two factors, α > 0 and β ≥ 1, such that:

    α = be/re = bc/rc        β = re/rc

so the token buckets can be denoted as:

- Tc(rc, α·rc) for committed traffic.
- Te(β·rc, α·β·rc) for excess traffic.

Thus, α describes the proportion between rate and bucket size (for both buckets), while β describes the proportion between the committed and excess rates. The sizes of the two buckets are equally proportional to their corresponding rates. Given these relations, this test deals with finding the values of α and β that give the best weighted bandwidth distribution among aggregates.
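This parameterization reduces the four trTCM parameters to a single objective rate rc plus the two factors. A small helper (our own notation, for illustration) makes the mapping explicit:

```python
# Derive the four trTCM parameters from the committed rate r_c and the two
# factors: alpha (bucket-size/rate ratio, i.e. a burst-tolerance time) and
# beta (excess-to-committed rate ratio). Our own helper, for illustration.

def trtcm_params(rc, alpha, beta):
    assert alpha > 0 and beta >= 1
    cir, cbs = rc, alpha * rc                  # Tc(r_c, alpha * r_c)
    pir, pbs = beta * rc, alpha * beta * rc    # Te(beta * r_c, alpha * beta * r_c)
    return cir, cbs, pir, pbs

# Example with the values this section ends up selecting (alpha = 0.1,
# beta = 2.5) and a 4 Mb/s committed rate:
cir, cbs, pir, pbs = trtcm_params(4e6, 0.1, 2.5)
print(cir, cbs, pir, pbs)   # CIR, CBS, PIR, PBS (rates in b/s, sizes in bits)
```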
4.1.2 Specification

The first task is to verify how UDP traffic affects bandwidth sharing among TCP flows. Then, we search for the values of α and β that give the best bandwidth sharing and protection for TCP flows.
[Figure 11: Logical test topology. Four TCP aggregates (E1 to E4, 10 sessions each, with committed rates of 4, 3, 2 and 1 Mb/s) and a background UDP stream feed a 50 Mb/s link with WRED.]

Figure 11 shows the logical topology for the test: four TCP aggregates (each one made of 10 sessions) and a UDP stream are forwarded in a network of two routers. In the ingress router, AF is supported by means of conditioning (ingress interface) and WRED (egress interface). The test procedure is as follows:
- All four aggregates are marked at ingress with different committed rates, to get a weighted sharing of the available bandwidth. UDP traffic is always marked red (i.e., with the lowest priority).
- First, flows are injected with WRED disabled, to see the effect that the non-responsive UDP flow has on bandwidth sharing. The experiment is repeated with different rates for the UDP stream.
- Then, with WRED enabled, the "best" value for α (in terms of weighted bandwidth sharing) is looked for. Several experiments are run for different values of α, a fixed UDP rate and a fixed β.
- Finally, the value for β is found by running the test for different UDP rates, while keeping α fixed to the value found in the previous step.
4.1.3 Platform
The testbed is a local network located at CNAF-INFN, in Italy, consisting of two Cisco 7500 routers (IOS 12.1) connected through a 50 Mb/s circuit, with a single traffic generator at one end and a receiving station at the other (Fig. 12).
[Figure 12: Test platform. 4 x 10 TCP flows and 1 UDP flow are sent through two Cisco 7500 routers (trTCM at ingress, WRED at egress) over a 50 Mb/s link.]

The ttcp traffic generator, running on a Sun workstation, injects test flows into the network (UDP background traffic and TCP flows, identified by port number). In the ingress router, the WRED parameter settings are those found in the test presented in section 2.5.
4.1.4 Results
In the first experiment, bandwidth sharing among TCP aggregates in the presence of a UDP flow with best-effort forwarding (i.e., WRED and trTCM disabled) was analyzed. UDP rates were 0, 15, 30 and 45 Mb/s. Figure 13 shows the corresponding plot of the output. Here we observe a well-known result: TCP is considerably affected by UDP traffic, which gets all the bandwidth associated to its rate. Nevertheless, the capacity not used by UDP, if any, is shared equally among TCP aggregates. To find the best α, AF was enabled (WRED and trTCM), and bandwidth sharing was observed for values of α from 0.1 to 1 (in 0.1 increments) and a UDP stream
at 30 Mb/s.

[Figure 13: Bandwidth sharing for TCP aggregates (no AF)]

β was set at 1.5. A weighted scheme was used, assigning 4, 3, 2 and 1 Mb/s as the committed rates of the aggregates. UDP packets were always marked as red. The results are shown in Fig. 14.

[Figure 14: Bandwidth sharing for TCP aggregates (varying α, UDP at 30 Mb/s)]
Bandwidth is effectively distributed among aggregates in a weighted fashion, and this distribution seems almost independent of the value of α. The plot shows an example of "excess fairness" (as explained in section 3.3.2): there are resources in excess of the total committed rate (10 Mb/s), and those resources are distributed equally among the aggregates. That is why each aggregate gets a different share of the excess bandwidth as a proportion of its assured rate. The "best" β was found by analyzing the output bandwidth distribution for different values of β and a UDP stream at different rates. The tested values for β were 1.5, 2, 2.5, 3, 3.5 and 4. The UDP stream was sent at 0, 15, 30 and 45 Mb/s, and α was fixed at 0.1. Figure 15 shows the bandwidth distributions for β = 2, 2.5, 3, 3.5 and all UDP rates.
[Figure 15: Bandwidth sharing for TCP aggregates (varying β). (a) β = 2; (b) β = 2.5; (c) β = 3; (d) β = 3.5]
When β is smaller (2.0 or 2.5), more red packets are produced, so "red" bandwidth is shared equally between aggregates, and differentiation (weighted distribution) is reduced in the absence of UDP background traffic. When β is bigger (3.0 or 3.5), differentiation is reduced in high-congestion scenarios, when UDP traffic is at high rates. The explanation for this behavior is that, with a second token bucket at a faster rate, more yellow packets are produced. Losses are experienced by all red packets and some yellow packets, resulting in a less precise two-level differentiation. On the other hand, increasing the value of β increases the total TCP throughput. Figure 16 shows a plot of the total bandwidth obtained by TCP aggregates for the different values of β and rates of the UDP stream. This is the result of fewer TCP packets being marked at the same level as UDP (red).

[Figure 16: Total TCP bandwidth for different UDP rates and values of β]

Finally, with the values of α and β fixed to 0.1 and 2.5, respectively, a different marking policy was evaluated. The weights for the aggregates moved from 4, 3, 2 and 1 Mb/s to 6, 1.5, 1.5 and 1 Mb/s. The plot in Fig. 17 shows the bandwidth distribution for different UDP loads. Differentiation is more effective when UDP is active. This is a result of the tested router's implementation, in which WRED is active only under congestion. When there is no UDP traffic in the link, WRED is not applied and, therefore, no differential treatment of packets is enforced. Differentiation is better under congestion but, on the other hand, the high-rate aggregate fails to obtain the desired rate (60%).

[Figure 17: Aggregate TCP bandwidth sharing (alternate marking)]

There are two initial conclusions from this test. The first one is that care should be taken when deciding whether UDP traffic is to be aggregated with TCP traffic and processed in the same AF class. The second one is that bandwidth sharing can be achieved for individual TCP flows in the presence of unresponsive UDP flows by means of weighted marking.
4.2 Effect of Heterogeneous RTTs

4.2.1 Overview

The conditions needed to evaluate the effects of heterogeneous RTTs on bandwidth sharing among TCP flows cannot be easily generated, controlled and measured in a test platform. Therefore, simulation is the more obvious choice.
4.2.2 Simulation Layout

Using the well-known ns-2 Network Simulator [11], we built the layout depicted in Fig. 18 to study this subject. Two routers are connected through a 1250 kB/s link;
the input consists of 32 TCP connections with RTTs that go from 40 to 288 ms (in 8-ms increments). RIO3 (that is, RIO with three drop-precedence levels) is the AQM mechanism available at the ingress router.

[Figure 18: Simulation layout for individual TCP flows with different RTTs. 32 TCP sources (RTTs from 40 to 288 ms), individually marked with a trTCM, share a 1250 kB/s link managed by RIO3.]

Each flow was individually marked with a trTCM, in which different rates and bucket sizes were set for the first and second token buckets:

- For Tc, the first (committed) bucket, the rate was 20 kB/s and the bucket size was 30 kB.
- For Te, the second (excess) bucket, the rate was 40 kB/s and the bucket size was 90 kB.
The simulation was first run with best-effort forwarding; then it was run with AF forwarding, i.e., with RIO3 enabled.
4.2.3 Results
The results with standard best-effort forwarding are shown in Fig. 19(a). Note that the bandwidth share decreases as the RTT increases. After enabling AF, the results were those shown in Fig. 19(b). AF improves the fairness of bandwidth sharing, but penalizes the overall utilization of link capacity (93% in BE vs. 68% in AF). It can be said that marking with the same parameters does not assure a fair bandwidth distribution for TCP connections with different RTTs, since throughput
depends on RTT. Sessions with small RTTs accelerate faster, so they consume green tokens and generate red ones before the others. In general, a small RTT allows sessions to react faster to losses in the network, so that no big bursts of red packets are produced.

[Figure 19: Output for mixed individual TCP flows with heterogeneous RTTs. (a) Best Effort; (b) Assured Forwarding. Mean rate (kB/s) vs. RTT (40 to 280 ms).]
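The RTT dependence seen in Fig. 19(a) is consistent with the classical steady-state TCP throughput model (our addition, not part of the report's analysis), in which throughput scales roughly as 1/RTT for a given loss rate:

```python
import math

# Simplified "inverse square-root of p" TCP throughput model:
# rate ~ MSS / (RTT * sqrt(2p/3)). Used here only to illustrate the
# 1/RTT dependence; the report's measured values come from simulation.

def tcp_throughput(mss_bytes, rtt_s, loss):
    return mss_bytes / (rtt_s * math.sqrt(2 * loss / 3))

for rtt_ms in (40, 120, 288):
    rate = tcp_throughput(1460, rtt_ms / 1000, 0.01)
    print(f"RTT {rtt_ms:>3} ms -> ~{rate / 1000:.0f} kB/s")
```

Doubling the RTT halves the modeled rate, which is the qualitative trend of the best-effort plot.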
4.3 Effect of Aggregation and Re-marking
4.3.1 Overview
In order to evaluate the effect of aggregation and re-marking of TCP on bandwidth sharing, two questions are to be answered: How is bandwidth distributed among aggregates? How is bandwidth distributed among the microflows that compose each aggregate? With these two issues in mind, the goal set for the simulation presented in this section is to see whether an equitable distribution of bandwidth at both levels can be obtained.
4.3.2 Simulation Layout

The layout depicted in Fig. 20 was used. As in the previous layout, two routers are connected through a 1250 kB/s link. Five aggregates, labeled E1 to E5 in Fig. 20, each one composed of 12 individually marked flows, are the input of the network, where they are re-marked with different committed rates, looking for a weighted sharing of the available bandwidth. The committed rate of each aggregate is shown in Table 7. The flows also have different RTTs, chosen randomly from 40 to 300 ms, and their starting times go from 0 to 1 second.

[Figure 20: Simulation layout for aggregated TCP flows. Sources S_1,1 to S_5,12 (TCP 1 to TCP 60) pass through individual markers, then through the aggregate markers E1 (33%), E2 (26.67%), E3 (20%), E4 (13.33%) and E5 (6.67%), a RIO3 router and the 1250 kB/s link, towards the sink.]

Aggregate   Committed rate (kB/s)
E1          300
E2          240
E3          180
E4          120
E5           60

Table 7: Committed rates for aggregated TCP flows
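The percentage labels attached to E1 through E5 in Fig. 20 follow directly from Table 7: each is the aggregate's committed rate as a fraction of the total committed rate (900 kB/s, leaving 350 kB/s of the 1250 kB/s link uncommitted). A quick check:

```python
# Committed rates from Table 7, in kB/s.
committed = {"E1": 300, "E2": 240, "E3": 180, "E4": 120, "E5": 60}
total = sum(committed.values())          # 900 kB/s in total
for agg, rate in committed.items():
    print(f"{agg}: {100 * rate / total:.2f}%")
# -> E1: 33.33%  E2: 26.67%  E3: 20.00%  E4: 13.33%  E5: 6.67%
```

These fractions (E1's rounded to 33% in the figure) are the weighted shares against which the measured aggregate rates in Fig. 21 can be compared.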
4.3.3 Results
The plot in Fig. 21(a) shows the results of the simulation when the aggregate markers operate in color-blind mode. At the aggregate level, AF effectively distributes bandwidth in a weighted fashion, according to the committed rates set in the markers.

[Figure 21: Bandwidth sharing for aggregates: rate (kB/s) per aggregate E1 to E5; (a) color-blind aggregate marking, (b) color-aware aggregate marking.]

But what happened with the bandwidth distribution for microflows inside the aggregates? Figure 22(a) shows the corresponding plot. It is clear that the distribution is very unfair, due to the heterogeneity in RTTs. A possible way of improving bandwidth sharing for microflows inside an aggregate is to use a color-aware mechanism in the second-level (aggregate) marker. The original colors of the individual flows give the aggregate marker more information about the behavior of each flow. Figure 23 shows the basic scheme of this color-aware marking. It is important to note that the individual markers should be configured with the same parameters, and with no relation to the aggregate marker, since in a real scenario sources cannot know the characteristics of this aggregate marker, nor how many flows will traverse it. This proposal was simulated using the same basic layout as the previous one. An individual TCM was used for conditioning each one of the 60 sessions, all with a committed rate of 15 kB/s. At the aggregate level, the weighted marking pattern was kept.
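The essential difference of the color-aware mode is that the pre-color assigned by the individual marker acts as a ceiling: the aggregate marker may demote a packet (green to yellow, yellow to red) but never promote it. A minimal sketch of a single color-aware marking decision, following the rules of RFC 2698 (token refill omitted for brevity):

```python
# Rank of each drop precedence; higher means worse.
RANK = {"green": 0, "yellow": 1, "red": 2}

def color_aware_mark(precolor, tc, tp, size):
    """One color-aware trTCM decision for a packet of `size` bytes.

    `tc` and `tp` are the current committed and excess (peak) token
    counts. Returns (new_color, tc, tp) after the decision.
    """
    if RANK[precolor] == 2 or tp < size:
        return "red", tc, tp                 # demoted/kept red; no tokens used
    if RANK[precolor] == 1 or tc < size:
        return "yellow", tc, tp - size       # only excess tokens debited
    return "green", tc - size, tp - size     # both buckets debited

# A packet pre-marked yellow stays yellow even if committed tokens remain.
print(color_aware_mark("yellow", tc=30_000, tp=90_000, size=1000))
# -> ('yellow', 30000, 89000)
```

This is why the pre-colors carry information about each microflow into the aggregate marker: a flow that already exhausted its individual profile cannot reclaim green tokens at the aggregation point.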
[Figure 22: Bandwidth sharing for individual flows inside aggregates: mean rate (kB/s) per aggregate E1 to E5; (a) color-blind aggregate marking, (b) color-aware aggregate marking.]

[Figure 23: Color-aware marking. Individually marked sources S_1,1 to S_1,12 feed the color-aware aggregate marker E1 (33%).]

The plot in Fig. 21(b) shows the distribution at the aggregate level, while Fig. 22(b) shows the results of bandwidth sharing among the flows in an aggregate. It can be seen that while color-aware marking increases fairness for microflows, the weighted distribution for aggregates, which was close to fairness, is lost. This issue needs further research, considering that aggregation is a major consideration in the DiffServ architecture. Shaping of sources and alternative marking schemes are some of the possible solutions being studied in this matter.
5 Conclusions and Future Work

We found that marking can effectively be used in AF to protect TCP flows from non-adaptive UDP streams. Nevertheless, care should be taken when deciding the admission of UDP and TCP traffic into the same AF class. AF can also be used, by means of marking, to distribute bandwidth differentially among certain flows forwarded in the same class. This capability is observed for UDP streams and TCP aggregates, but it does not work well for TCP flows with different Round-Trip Times, whether processed individually as microflows or in aggregates. This drawback might limit the possible applications of an "Assured Bandwidth" service. Therefore, more research and experience with AF is needed to guarantee, at least in statistical terms, that individual sessions can get better treatment than that provided by plain Best Effort.

Parameter setting is another open question regarding the QoS mechanisms involved in AF. We saw during these tests that performance is definitely sensitive to parameter values, both for the trTCM and for WRED. The latter is an especially complex case, since it requires as many as 10 different parameters to be properly adjusted. In the case of the trTCM, we used a simple proportional rule to limit the number of free parameters and facilitate its tuning, but a better analytical understanding is required for more general applications and requirements.

Future work on this subject includes the design and implementation of alternative marking algorithms that yield a better differential bandwidth distribution for TCP, both at the aggregate and the microflow levels. Also, our study of the effect of heterogeneous Round-Trip Times on bandwidth sharing for TCP sessions was done only with simulations, so a practical test with real implementations should be made in order to validate the simulation results.
References

[1] Heinanen, J. et al. Assured Forwarding PHB Group. Internet Standards Track RFC 2597, IETF, June 1999.

[2] Seddigh, N. et al. An Assured Rate Per-Domain Behavior for Differentiated Services. Internet Draft draft-ietf-diffserv-pdb-ar-01.txt, work in progress, July 2001.
[3] Campanella, M. et al. Quality of Service Definition. Project Sequin Report (Deliverable D 2.1), March 2001. http://www.dante.net/sequin/deliverables/SEQ-01-030.pdf

[4] TF-NGN: Next Generation Networking. http://www.terena.nl/tech/task-forces/tf-ngn/

[5] Floyd, S. and Jacobson, V. Random Early Detection Gateways for Congestion Avoidance. IEEE/ACM Transactions on Networking, Vol. 1, No. 4, pp. 397-413, August 1993.

[6] Clark, D. and Fang, W. Explicit Allocation of Best-Effort Packet Delivery Service. IEEE/ACM Transactions on Networking, Vol. 6, No. 4, pp. 362-373, August 1998.

[7] "Congestion Avoidance Overview", in Cisco IOS Release 12.1 Quality of Service Solutions Configuration Guide. http://www.cisco.com/univercd/cc/td/doc/product/software/ios121/121cgcr/qos_c/qcprt3/qcdconav.htm#11086

[8] "Configuring Weighted Random Early Detection", in Cisco IOS Release 12.1 Quality of Service Solutions Configuration Guide. http://www.cisco.com/univercd/cc/td/doc/product/software/ios121/121cgcr/qos_c/qcprt3/qcdwred.htm

[9] Heinanen, J. and Guerin, R. A Two Rate Three Color Marker. Informational RFC 2698, IETF, September 1999.

[10] Medina, O. Étude des algorithmes d'attribution de priorités dans un Internet à Différenciation de Services. Ph.D. Dissertation, Université de Rennes 1, March 2001.

[11] McCanne, S. and Floyd, S. ns Network Simulator. http://www.isi.edu/nsnam/ns/