Applying Adaptive Virtual Queue to Improve the Performance of the Assured Forwarding Service

X. Chang and Jogesh K. Muppala
Dept. of Computer Science, Hong Kong University of Science and Technology
Clear Water Bay, Kowloon, Hong Kong
Email: [email protected]

Abstract—Recent research studies in over-provisioned networks have shown that the Assured Forwarding (AF) service in the current Differentiated Services (Diffserv) architecture fails to provide bandwidth assurance in some situations. This paper focuses on the situation where adaptive and non-adaptive traffic coexist in the same real queue at routers and the buffer management scheme treats traffic of the same priority in the same AF class indiscriminately. An enhanced RIO is introduced, which can, without excessively penalizing non-adaptive flows, (1) significantly improve the bandwidth assurance of adaptive AF flows and (2) alleviate the starvation imposed on adaptive best-effort flows. These goals are achieved by doing the following when the failure of bandwidth assurance is detected: (1) mapping adaptive OUT traffic and non-adaptive OUT traffic to different virtual queues; (2) adapting the queue length thresholds according to whether the bandwidth assurance is achieved. We validate our design through simulations.

I. INTRODUCTION

The Assured Forwarding (AF) service in the Differentiated Services (Diffserv) architecture is proposed to provide a scalable solution to the problem of service differentiation in the Internet [1]. The Time Sliding Window Two Color Marker (TSW2CM) [2] is one of the packet marking algorithms proposed to work with the AF service. Under TSW2CM, the edge nodes monitor traffic and mark packets as in-profile (IN) or out-of-profile (OUT) according to a service profile; the congested core nodes make decisions about whether to discard arriving packets based on this marking information. Random Early Detection (RED) with IN and OUT (RIO) [2] is the active queue management (AQM) mechanism used in most AF implementations. RIO can be viewed as RED with a separate virtual queue for each priority, where each virtual queue has its own set of RED parameters, that is, minimum/maximum queue length thresholds (minth, maxth) and a maximum drop probability (maxp).
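As a concrete illustration of the marker side, the following is a minimal Python sketch of a time-sliding-window two-color marker in the spirit of TSW2CM [2]; the window length, the exact form of the rate estimator and all names are illustrative assumptions rather than the marker actually used in this paper.

import random
import time

class TSW2CM:
    """Minimal sketch of a time-sliding-window two-color marker (illustrative)."""

    def __init__(self, cir_bps, win_len=1.0):
        self.cir = cir_bps          # committed information rate (bits/s)
        self.win_len = win_len      # averaging window length in seconds (assumed value)
        self.avg_rate = 0.0         # estimated sending rate (bits/s)
        self.t_front = time.time()  # time of the last rate update

    def mark(self, pkt_bits, now=None):
        now = time.time() if now is None else now
        # Time-sliding-window rate estimator (one common formulation).
        bits_in_win = self.avg_rate * self.win_len
        self.avg_rate = (bits_in_win + pkt_bits) / (now - self.t_front + self.win_len)
        self.t_front = now
        # Two-color decision: the excess share of the traffic is marked OUT.
        if self.avg_rate <= self.cir:
            return "IN"
        p_out = (self.avg_rate - self.cir) / self.avg_rate
        return "OUT" if random.random() < p_out else "IN"

An aggregate sending below its CIR has all of its packets marked IN, while a faster aggregate has its excess share marked OUT in proportion to how far the estimated rate exceeds the CIR.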

Recently, several research studies such as [3] and [4] have shown that bandwidth assurance in the AF service is not met in over-provisioned networks when (1) there exist aggressive non-adaptive flows and (2) adaptive and non-adaptive traffic coexist in the same real queue at routers and the buffer management scheme treats traffic of the same priority in the same AF class indiscriminately. The main reasons are (1) the TCP congestion control algorithm and (2) the fact that non-adaptive flows make no response to congestion. Here, an over-provisioned link refers to a link where the sum of the Committed Information Rates (CIRs) of all competing aggregates is less than the link capacity.

This paper aims to design an intelligent AQM that reduces the failure of bandwidth assurance due to adaptive/non-adaptive traffic interactions while protecting the benefit of best-effort traffic. Assuming that a router can distinguish adaptive flows from non-adaptive flows, we propose an enhanced RIO in which all traffic shares the same real queue. When the failure of bandwidth assurance is detected, the arriving adaptive OUT traffic and the arriving non-adaptive OUT traffic are mapped to different virtual queues. This concept is not new. The contribution of this paper is that, in our mechanism, the RED parameters of the virtual queues are not all static: some parameters are adapted according to the observed degree of achievement of bandwidth assurance. We discuss the disadvantages of using static parameters in Section 3. We claim that the enhanced RIO is: (1) Simple and scalable, because all actions are based on local knowledge; that is, no cooperation from other routers or from end hosts is required, no per-aggregate state information is maintained, and there is no per-aggregate processing at core nodes. (2) Fair and robust: over a wide range of network dynamics, it can significantly improve the bandwidth assurance of unsatisfied aggregates and alleviate the starvation imposed on adaptive best-effort flows without excessively penalizing non-adaptive flows. Extensive simulation results show these abilities by comparison with the original RIO [2], with TSW2CM as the marker at the ingress nodes.

In this paper we call a flow/aggregate unsatisfied when its bandwidth assurance is not met; otherwise the flow/aggregate is called satisfied. By dropping, we mean either dropping a packet or setting its ECN bit at the time of congestion. We distinguish a real queue from a virtual queue by whether there are real packets in the queue: a virtual queue is maintained by a few variables, such as the number of packets. Note that the type of traffic contributing to a virtual queue may not be completely different from that of another virtual queue. For example, there are two virtual queues in RIO, one for IN traffic and the other for IN and OUT traffic. In addition, the scheme of a virtual queue may do nothing to some of the traffic contributing to this queue except accepting it; the virtual queue of IN and OUT traffic in RIO is such an example. Each virtual queue may have more than one set of RED parameters; the number of sets depends on the number of drop precedence levels in the virtual queue. FRED [11] is an example in which (1) multiple virtual queues are applied, (2) each virtual queue has only one drop precedence level, and (3) each virtual queue has the same set of RED parameters, so the priority of traffic in different virtual queues is the same.

The work described in this paper has been supported by the Research Grants Council of Hong Kong SAR, China (Project No. DAG02/03.EG20).


WRED [12] is an example of having only one virtual queue with multiple drop precedence levels. RIO is similar to FRED; the difference is that RIO applies multiple virtual queues with each virtual queue belonging to a different level of drop precedence. The rest of this paper is organized as follows. We first discuss related work on improving bandwidth assurance in Section 2. We then present the new mechanism and its design considerations in Section 3. The experimental evaluation is presented in Section 4. Finally, we present the conclusions in Section 5.

II. RELATED WORK

Ever since Clark proposed the AF service framework using the RIO mechanism [2], extensive studies of fairness in this framework have been carried out. One approach to improving bandwidth assurance is to deploy intelligent markers such as [6][7] at the ingress nodes. The essence of such remedies is to increase the allowed maximum rate of high-priority traffic of an unsatisfied flow, which indirectly increases the dropping probability of satisfied flows. Another approach is to use an intelligent AQM at congested nodes to directly increase the dropping probability of satisfied flows. Using simulations, the authors in [5] conclude that the utility of three levels of drop precedence in a traffic class depends on the traffic load, the sum of the target rates and the service rate of the link. This suggests the inefficiency of applying three levels of drop precedence with static AQM configurations. In order to reduce the negative impact of non-adaptive flows on bandwidth assurance, the authors in [9] propose to map adaptive and non-adaptive traffic to different virtual queues of the same AF class, each with a different level of drop precedence, or to put them in different real queues. They also use static RED parameters and assume knowledge of the CIR of each aggregate. The authors in [10] apply different real queues and propose an adaptive method of allocating the link bandwidth among the different real queues. However, using the loss of IN packets as the indication of the failure of bandwidth assurance is too conservative: it is known that, even when there is no IN packet drop, periodically halving cwnd due to OUT packet drops can prevent an aggregate from achieving its bandwidth assurance. In addition, the adaptive method in [10] may cause bias against the AF queue or against the best-effort queue.

III. THE ENHANCED RIO

The main design idea of the enhanced RIO is that, when the core router detects the failure of bandwidth assurance, all arriving non-adaptive OUT packets are demoted to a new traffic level, defined as OUT-Down, which is lower than the OUT level. By dynamically adjusting the RED parameters of the OUT and OUT-Down virtual queues, the goals mentioned above can be achieved over a wide range of network dynamics. The motivation for applying demotion is that it utilizes the link bandwidth efficiently and protects the benefit of adaptive best-effort flows to some degree. Best-effort traffic and FIFO scheduling will also continue to have their place, due to their fundamental simplicity and "good enough" performance, and such simple methods are still widely deployed [13]. In the following, we first describe the enhanced RIO and then give the design considerations.

A. The Mechanism

The enhanced RIO consists of two parts. The first part takes care of dropping packets at times of congestion and is executed when a packet arrives or leaves. The second part is in charge of adjusting the RED parameters of the virtual queues and is executed periodically. Each part is explained in the following.

The first part is the buffer management scheme, which needs three sets of RED parameters, for IN, OUT and OUT-Down packets, respectively. These three sets are [qmax^IN, qmin^IN, pmax^IN], [qmax^OUT, qmin^OUT, pmax^OUT], and [qmax^OUT-Down, qmin^OUT-Down, pmax^OUT-Down]. The corresponding average queue lengths are Qavg^IN (formed only by IN packets), Qavg^OUT (formed by IN and OUT packets), and Qavg^OUT-Down (formed by all packets), respectively. Figure 1 gives the buffer management scheme, where all parameters are pre-defined constants except qmax^OUT-Down, qmin^OUT-Down and pmax^OUT.

Parameters: p: dropping probability

For each packet arrival
    If the packet is OUT && the packet is UDP && qmax^OUT-Down < qmax^OUT
        Mark the packet as OUT-Down
    End
    If the packet is IN
        Compute Qavg^IN, Qavg^OUT, Qavg^OUT-Down
        Use Qavg^IN and [qmax^IN, qmin^IN, pmax^IN] to compute p
    End
    If the packet is OUT
        Compute Qavg^OUT, Qavg^OUT-Down
        Use Qavg^OUT and [qmax^OUT, qmin^OUT, pmax^OUT] to compute p
    End
    If the packet is OUT-Down
        Compute Qavg^OUT-Down
        Use Qavg^OUT-Down and [qmax^OUT-Down, qmin^OUT-Down, pmax^OUT-Down] to compute p
    End
    Drop this packet with probability p

For each packet to leave
    Mark the OUT-Down packet to OUT

Figure 1. The buffer management scheme of the enhanced RIO
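To make the per-level logic of Figure 1 concrete, the following is a minimal Python sketch of the arrival-side drop decision, assuming the classic linear RED drop profile and an exponentially weighted moving average for each virtual queue; the class and helper names, the weight W_Q and the packet-count (rather than byte) accounting are illustrative assumptions, not the paper's implementation.

import random

W_Q = 0.002  # EWMA weight for the average virtual queue lengths (assumed value)

# Virtual queues that an arriving packet of each level contributes to:
# Qavg^IN counts IN packets only, Qavg^OUT counts IN and OUT, Qavg^OUT-Down counts all.
CONTRIBUTES_TO = {
    'IN':       ('IN', 'OUT', 'OUT-Down'),
    'OUT':      ('OUT', 'OUT-Down'),
    'OUT-Down': ('OUT-Down',),
}

def red_prob(avg, qmin, qmax, pmax):
    """Classic linear RED drop probability for one virtual queue."""
    if avg < qmin:
        return 0.0
    if avg >= qmax:
        return 1.0
    return pmax * (avg - qmin) / (qmax - qmin)

class EnhancedRIO:
    def __init__(self, params):
        # params: level -> {'qmin': ..., 'qmax': ..., 'pmax': ...}; the OUT-Down set
        # starts as a copy of the OUT set and is adapted by the periodic part (Figure 2).
        self.params = params
        self.avg = {lvl: 0.0 for lvl in CONTRIBUTES_TO}   # Qavg^IN, Qavg^OUT, Qavg^OUT-Down
        self.qlen = {lvl: 0 for lvl in CONTRIBUTES_TO}    # queued packets of each level

    def on_arrival(self, level, is_udp):
        """Return True if the arriving packet is dropped (or ECN-marked)."""
        # Demote non-adaptive OUT packets once the adjustment part has pulled
        # qmax^OUT-Down below qmax^OUT, i.e. a failure of bandwidth assurance is detected.
        if (level == 'OUT' and is_udp
                and self.params['OUT-Down']['qmax'] < self.params['OUT']['qmax']):
            level = 'OUT-Down'
        # Update the average of every virtual queue this packet contributes to.
        for vq in CONTRIBUTES_TO[level]:
            qlen = sum(self.qlen[l] for l in CONTRIBUTES_TO if vq in CONTRIBUTES_TO[l])
            self.avg[vq] = (1 - W_Q) * self.avg[vq] + W_Q * qlen
        # The drop decision uses the packet's own virtual queue and parameter set.
        p = red_prob(self.avg[level], **self.params[level])
        if random.random() < p:
            return True
        self.qlen[level] += 1
        return False

    def on_departure(self, level):
        # A departing OUT-Down packet is re-marked as an ordinary OUT packet.
        self.qlen[level] -= 1

For example, EnhancedRIO({'IN': {'qmin': 400, 'qmax': 700, 'pmax': 0.02}, 'OUT': {'qmin': 150, 'qmax': 400, 'pmax': 0.05}, 'OUT-Down': {'qmin': 150, 'qmax': 400, 'pmax': 0.05}}) reproduces the initial parameter sets used in Section IV. The point of the sketch is that an arriving packet's drop probability is computed from the average length of its own virtual queue, while IN packets still inflate the OUT and OUT-Down averages, preserving RIO's nesting of precedence levels.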

The second part takes care of adjusting qmax^OUT-Down, qmin^OUT-Down and pmax^OUT according to the status of the observed achievement of bandwidth assurance. The initial values of [qmax^OUT-Down, qmin^OUT-Down, pmax^OUT-Down] are set to [qmax^OUT, qmin^OUT, pmax^OUT]. Figure 2 gives the adjustment algorithm, where each core node monitors the instantaneous real queue length (Qlen) on each packet arrival. We define a threshold (QThresh) and consider that the node may be in danger if Qlen rises above QThresh during the observing interval. When this danger occurs, qmax^OUT-Down and qmin^OUT-Down are both decreased; if Qlen > QThresh does not occur in the observing interval, they are increased step by step. When [qmax^OUT-Down, qmin^OUT-Down] has been decreased to its lower bound [qmax^OUT * γ, qmin^OUT * γ], we begin to increase pmax^OUT in order to prevent excessive penalization of non-adaptive traffic.

Parameters: γ, β, decr_per, incr_per: user-defined positive constants, all in [0, 1]

Initially
    [qmax^OUT-Down, qmin^OUT-Down, pmax^OUT-Down] = [qmax^OUT, qmin^OUT, pmax^OUT]

Periodical Adjustment
    If there is Qlen > QThresh in the last interval && qmax^OUT-Down <= qmax^OUT * γ
        pmax^OUT = min(1.0, max(pmax^OUT * (1 + β), pmax^OUT-Down))
    End
    If qmax^OUT-Down > qmax^OUT * γ
        pmax^OUT = min(1.0, max(pmax^OUT * (1 - β), pmax^OUT-Down))
    End
    If there is Qlen > QThresh in the last interval
        qmax^OUT-Down = min(max(qmax^OUT-Down * (1 - decr_per), qmax^OUT * γ), qmax^OUT)
        qmin^OUT-Down = min(max(qmin^OUT-Down * (1 - decr_per), qmin^OUT * γ), qmin^OUT)
    else
        qmax^OUT-Down = qmax^OUT-Down * (1 + incr_per)
        qmin^OUT-Down = qmin^OUT-Down * (1 + incr_per)
    End

Figure 2. The adjustment mechanism
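The periodic part can be rendered as the following Python sketch of Figure 2, assuming that the Qlen > QThresh observation is reduced to a single boolean collected over the observing interval; the function signature and names are illustrative assumptions.

def periodic_adjust(params, danger, gamma, beta, decr_per, incr_per):
    """One adjustment round per observing interval (illustrative rendering of Figure 2).

    params: dict with 'OUT' and 'OUT-Down' entries, each {'qmin', 'qmax', 'pmax'};
    danger: True if the instantaneous queue length exceeded QThresh in the last interval.
    """
    out, down = params['OUT'], params['OUT-Down']

    # Once the OUT-Down thresholds sit at their floor and the queue is still in
    # danger, shift more of the drops onto adaptive OUT traffic by raising pmax^OUT.
    if danger and down['qmax'] <= out['qmax'] * gamma:
        out['pmax'] = min(1.0, max(out['pmax'] * (1 + beta), down['pmax']))
    if down['qmax'] > out['qmax'] * gamma:
        out['pmax'] = min(1.0, max(out['pmax'] * (1 - beta), down['pmax']))

    if danger:
        # Tighten the OUT-Down thresholds, but never below gamma times the OUT
        # thresholds and never above the OUT thresholds themselves.
        down['qmax'] = min(max(down['qmax'] * (1 - decr_per), out['qmax'] * gamma), out['qmax'])
        down['qmin'] = min(max(down['qmin'] * (1 - decr_per), out['qmin'] * gamma), out['qmin'])
    else:
        # Relax the thresholds step by step while the queue stays below QThresh.
        down['qmax'] *= (1 + incr_per)
        down['qmin'] *= (1 + incr_per)

With the default settings used in Section IV (decr_per = 0.05, incr_per = 0.01, β = 10%, γ = 75%), the OUT-Down thresholds tighten faster than they recover, so a persistently dangerous queue quickly drives the OUT-Down virtual queue to its lower bound.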

B. Design Considerations

The routers' capability of distinguishing adaptive flows from non-adaptive flows is a premise. In our simulations we simply use the TCP/UDP port numbers to distinguish them. For the method of detecting the failure of bandwidth assurance, the reader is referred to [15].

1. Why put OUT and OUT-Down traffic into different virtual queues

In the original RIO, adaptive and non-adaptive OUT traffic are assigned to the same level of drop precedence. The question is whether it is enough to use the same average queue length but different sets of RED parameters to compute the dropping probabilities for OUT traffic and OUT-Down traffic, respectively. Simulation results show that this gives little improvement in bandwidth assurance when aggressive non-adaptive traffic is present. In such a situation, most of the time qmin^OUT-Down fluctuates around QThresh and the buffer is filled with IN traffic and UDP OUT traffic. Thus, we prefer to use different average queue sizes to compute the dropping probability, that is, to map adaptive and non-adaptive OUT traffic to different virtual queues.

2. Why use virtual queues rather than real queues

When adaptive and non-adaptive traffic are mapped to different real queues at congested nodes in an over-provisioned network, the main problem is how to allocate the link bandwidth among the real queues, which affects not only the performance of the hosts but also that of the network nodes. It is possible to design an adaptive method, similar to the enhanced RIO and based only on local knowledge, that guarantees each real queue is allocated bandwidth no less than the sum of the CIRs of all the aggregates passing through it in the over-provisioned network. Note that the bandwidth assurance of each individual aggregate is still not guaranteed. The remaining problem of using different real queues is how to allocate the leftover link bandwidth. It is difficult to do so without degrading the performance of either the adaptive or the non-adaptive traffic, compared to applying the original RIO. In addition, it may lead to poor network performance, since congested queues cannot utilize the spare link bandwidth allocated to lightly loaded queues. This motivates us to apply virtual queues.

3. Why adjust the queue length thresholds of OUT-Down

In this section, we explain why we use dynamic RED parameters and why we adjust the queue length thresholds rather than the maximum probability of the OUT-Down virtual queue. A virtual queue with static RED parameters is simple. However, static parameters are often determined using simple heuristics, and a good static threshold scheme has to strike a compromise between the efficiency and the fairness of buffer sharing. Thus, under varying network conditions, the bandwidth assurance may not always be improved efficiently, or the UDP OUT traffic may be penalized excessively. Using dynamic RED parameters is therefore preferable.

It is possible to speed up the discarding of OUT-Down packets by increasing pmax^OUT-Down. However, non-adaptive sources do not respond to congestion, so randomly discarding these packets cannot effectively and quickly control extremely aggressive UDP traffic. Adjusting the queue length thresholds can provide timely and efficient control. Our simulation results show that when the enhanced RIO is applied, Qavg^OUT-Down fluctuates around qmax^OUT-Down when there is aggressive non-adaptive traffic.

Now we explain why pmax^OUT is adjusted in the algorithm of Figure 2. Both experimental and theoretical analyses have shown that the performance of RED is sensitive to the traffic load [14] and to its parameter settings. That is, Qlen may rise above QThresh periodically due to inappropriate parameter settings or bursts in the traffic load, so using Qlen > QThresh to detect the failure of bandwidth assurance may result in an unnecessary penalty on UDP flows in some situations. Adjusting pmax^OUT allows such penalization to be reduced: by dropping more adaptive traffic, Qavg^OUT-Down is decreased and the dropping of UDP OUT traffic is reduced. In addition, applying the lower bound (qmax^OUT * γ) on qmax^OUT-Down also allows such penalization to be reduced.

IV. SIMULATION RESULTS

This section presents simulation results obtained using the ns-2 simulator [16]. The main goals of the evaluation are to verify that the enhanced RIO can effectively improve bandwidth assurance and protect the benefit of adaptive best-effort traffic. We run simulations in five scenarios to compare the performance of the enhanced RIO with that of the original RIO. In the first four scenarios, the UDP flows' characteristics are unchanged while the TCP flows' characteristics vary; in the fifth scenario, we only vary the sending rates of the UDP flows to investigate the ability of the enhanced RIO.


The network topology for performance evaluation is shown in Figure 3. The link capacities and link delays that are not labeled in the figure are set to 20 Mbps and 5 ms, respectively, by default. There are 10 aggregates (A1…A10), from Si to Di respectively. A1…A8 are adaptive aggregates; A9 and A10 are non-adaptive aggregates. Each aggregate consists of 5 micro-flows, each micro-flow represents a connection, and all the micro-flows in an aggregate have the same flow characteristics. Non-adaptive sources send CBR traffic over UDP; adaptive sources send FTP traffic over TCP/Reno. A1…A7 and A9 are AF aggregates; A8 and A10 are best-effort aggregates. Unless otherwise specified, the target rates of A1…A7 and A9 are all set to 2.0 Mbps, and the sending rates of A9 and A10 are both 5.0 Mbps. C1–E1 is the bottleneck link and the subscription level is 120%. The subscription level of a link is defined as the ratio of the sum of the CIRs of the adaptive flows plus the sending rates of the non-adaptive flows to the link bandwidth.

Figure 3. Network topology
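As a check on the stated default subscription level: the seven adaptive AF aggregates contribute their 2.0 Mbps CIRs and the two non-adaptive aggregates contribute their 5.0 Mbps sending rates, so

    subscription level = (7 × 2.0 Mbps + 5.0 Mbps + 5.0 Mbps) / 20 Mbps = 24/20 = 120%.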

The packet size is 1000 bytes and the window size is 500 packets. The RED parameters [maxth, minth, maxp] for IN and OUT are [700, 400, 0.02] and [400, 150, 0.05], respectively. decr_per = 0.05, incr_per = 0.01, β = 10%, γ = 75%. The observing interval is 1 s. Adaptive hosts and network nodes are ECN-enabled in all simulations. For the ECN-capable routers, when the average queue length is above maxth, all arriving packets are dropped.

In all the following simulations, each simulation lasts 500 s. The average excess goodput is used as the performance metric, defined as (average goodput − CIR). The average goodput of each aggregate is computed by measuring the number of packets received at the receiver over a specified time period, from the 100th to the 500th second. Each simulation is repeated 10 times, and a final average is taken over all the runs. In each of the following figures, a bar below the horizontal line (at 0 Mbps) denotes that the bandwidth assurance of that aggregate is not met.
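For clarity, the metric can be computed along the following lines; this is an illustrative Python sketch in which the per-run byte counts and the function name are assumptions, not part of the measurement scripts used for the paper.

def average_excess_goodput(bytes_received_per_run, cir_bps, t_start=100.0, t_end=500.0):
    """Average excess goodput (in bit/s) of one aggregate over several runs.

    bytes_received_per_run: for each of the repeated runs, the bytes received by
    the aggregate's receivers between t_start and t_end (simulated seconds).
    """
    window = t_end - t_start
    goodputs = [8.0 * b / window for b in bytes_received_per_run]  # bits per second
    avg_goodput = sum(goodputs) / len(goodputs)
    return avg_goodput - cir_bps  # negative => bandwidth assurance not met

For an AF aggregate with a CIR of 2 Mbps, a returned value of -0.3e6 corresponds to a bar 0.3 Mbps below the horizontal line in the figures.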

A. Simulation_1: Different Target Rates

The target rates of A1…A7 are set to 0.5 Mbps, 1 Mbps, 1.5 Mbps, 2 Mbps, 2.5 Mbps, 3.5 Mbps and 4.5 Mbps, respectively, so the subscription level is 127.5%. Other settings are the same as the default values. Figure 4 shows the average excess goodput achieved by A1…A10 for each scheme.

Figure 4. Different target rates (average excess goodput, in Mbps, of A1…A10 under the original and the enhanced RIO)

B. Simulation_2: Different Number of Micro-flows

The number of micro-flows of A1…A8 is set to 5, 10, 15, 20, 25, 30, 35 and 15, respectively. Other settings are the same as the default values. Figure 5 shows the average excess goodput achieved by A1…A10 for the original RIO and the enhanced RIO.

Figure 5. Different numbers of micro-flows

C. Simulation_3: Different Packet Sizes

The packet sizes of A1…A7 are set to 100, 300, 500, 700, 1000, 1200 and 1500 bytes, respectively. Other settings are the same as the default values. Figure 6 shows the average excess goodput achieved by A1…A10 for each scheme.

Figure 6. Different packet sizes

D. Simulation_4: Different RTTs

We give different RTTs to the different aggregates by setting the link delays of E1–Di (i from 1 to 7) to 10 ms, 50 ms, 200 ms, 350 ms, 500 ms, 650 ms and 800 ms, respectively. Other settings are the same as the default values. The results are shown in Figure 7.

E. Simulation_5: Different Subscription Levels

We now examine the performance of the two kinds of RIO under different subscription levels, obtained by changing the sending rates of A9 and A10 from 1 Mbps to 13 Mbps each, so that the subscription level varies from 80% to 200%. Other parameters are set to the default values. Since the performance of A1…A7 is similar, Figure 8 only shows the average excess goodput of A1, A8, A9 and A10 under each subscription level.


Figure 7. Different RTTs (average excess goodput, in Mbps, of A1…A10 under the original and the enhanced RIO)

Figure 8. Different subscription levels: (a) A1, an adaptive AF flow; (b) A8, an adaptive best-effort flow; (c) A9, a non-adaptive AF flow; (d) A10, a non-adaptive best-effort flow (average excess goodput, Mbps)

F. Summary

From the above simulation results, we can see that the demotion method and the dynamic parameter settings significantly improve bandwidth assurance and improve the fair sharing of excess bandwidth. Because no per-aggregate information is maintained, the ability of A1…A7 to grab excess bandwidth remains unchanged; thus, the relative ability to obtain excess goodput among A1…A10 is the same under the original and the enhanced RIO.


V. CONCLUSIONS


In this paper, we propose an enhanced RIO. By applying adaptive virtual queues, this intelligent AQM efficiently alleviates the impact of aggressive non-adaptive flows on fairness in the AF service. Extensive simulation results demonstrate its ability.


There are still some remaining issues for further investigation, including (1) how to prevent UDP flows from being penalized excessively; (2) how to control aggressive adaptive flows based only on local knowledge; and (3) adding on-off traffic to the simulations.

REFERENCES

[1] N. Christin, J. Liebeherr, and T. F. Abdelzaher, "A Quantitative Assured Forwarding Service," in Proc. IEEE INFOCOM'02, vol. 2, pp. 864-873, Jun. 2002.
[2] D. Clark and W. Fang, "Explicit Allocation of Best-Effort Packet Delivery Service," IEEE/ACM Transactions on Networking, vol. 6, no. 4, pp. 362-373, Aug. 1998.
[3] N. Seddigh, B. Nandy, and P. Pieda, "Bandwidth Assurance Issues for TCP Flows in a Differentiated Services Network," in Proc. IEEE GLOBECOM'99, vol. 3, pp. 1792-1798, Dec. 1999.
[4] S. Sahu, P. Nain, D. Towsley, C. Diot, and V. Firoiu, "On Achievable Service Differentiation with Token Bucket Marking for TCP," ACM Performance Evaluation Review, vol. 28, no. 1, pp. 23-33, Jun. 2000.
[5] M. Goyal, A. Durresi, P. Misra, C. Liu, and R. Jain, "Effect of Number of Drop Precedences in Assured Forwarding," in Proc. IEEE GLOBECOM'99, vol. 1a, pp. 188-193, Dec. 1999.
[6] Y. Chait, C. V. Hollot, V. Misra, D. Towsley, H. Zhang, and J. C. S. Lui, "Providing Throughput Differentiation for TCP Flows Using Adaptive Two-Color Marking and Two-Level AQM," in Proc. IEEE INFOCOM'02, vol. 2, pp. 837-844, Jun. 2002.
[7] X. Chang and J. K. Muppala, "A Robust Controller for Improving Performance in the AF-based Differentiated Services Network," in Proc. IEEE IPCCC'04.
[8] P. Pieda, N. Seddigh, and B. Nandy, "The Dynamics of TCP and UDP Interaction in IP-QoS Differentiated Services Networks," in Proc. 3rd Canadian Conference on Broadband Research, Nov. 1999.
[9] B. Nandy, N. Seddigh, P. Pieda, and J. Ethridge, "Intelligent Traffic Conditioners for Assured Forwarding Based Differentiated Services Networks," in Proc. IFIP HPN'00, May 2000.
[10] S. Yi, X. D. Deng, G. Kesidis, and C. R. Das, "Providing Fairness in DiffServ Architecture," in Proc. IEEE GLOBECOM'02.
[11] D. Lin and R. Morris, "Dynamics of Random Early Detection," in Proc. ACM SIGCOMM'97, pp. 127-137, Oct. 1997.
[12] R. Makkar, I. Lambadaris, J. H. Salim, N. Seddigh, B. Nandy, and J. Babiarz, "Empirical Study of Buffer Management Scheme for Diffserv Assured Forwarding PHB," in Proc. IEEE ICCCN'00, Oct. 2000.
[13] S. Floyd, S. Ratnasamy, and S. Shenker, "Modifying TCP's Congestion Control for High Speeds," rough draft, May 2002. http://www.aciri.org/floyd.
[14] W. Feng, K. G. Shin, D. D. Kandlur, and D. Saha, "The BLUE Active Queue Management Algorithms," IEEE/ACM Transactions on Networking, vol. 10, no. 4, pp. 513-528, Aug. 2002.
[15] X. Chang and J. K. Muppala, "Adaptive Marking Threshold for Assured Forwarding Services," in Proc. IEEE ICCCN'03.
[16] UCB/LBNL/VINT Network Simulator – NS (version 2), http://www-mash.cs.berkeley.edu/ns/.
