Adaptive Priority Traffic Management Algorithm for IP-Based Diffserv WAN

Nithyanandan Natchimuthu and Jamil Y. Khan
School of Electrical Engineering and Computer Science, University of Newcastle, Callaghan, 2308 NSW, AUSTRALIA

Abstract - An Adaptive Priority traffic management algorithm for a wide area Diffserv IP network is proposed to support the varying QoS requirements of Internet traffic. The proposal combines a scheduling algorithm with a queue management algorithm. In the scheduling algorithm, a new dynamic fair queuing technique, which uses a continuous exponential function to determine the priority of service for a particular class of traffic, provides service differentiation. The queue management algorithm used with this model is Random Early Detection for a Diffserv network. The performance of the developed algorithm is evaluated using an OPNET simulation model and compared with the Weighted Fair Queuing model incorporated with Random Early Detection.

I. INTRODUCTION

Currently, the Transmission Control Protocol (TCP) congestion control and flow control mechanisms applied at hosts on the edge of the network are the main reason for the stability of the Internet. However, with the recent integration of telephony with IP (VoIP) and the increasing use of the User Datagram Protocol (UDP) and multimedia applications in the Internet, hosts at the edge of the network alone cannot provide sufficient congestion control for the transfer of stream information. There is an increasing need for routers to protect well-behaved flows from flows that are not so well behaved and to provide Quality of Service (QoS) to all flows. Traffic management algorithms using mechanisms for scheduling and queue management facilitate this [1]. Scheduling algorithms primarily manage bandwidth allocation for different flows; Weighted Fair Queuing (WFQ) [2] is one such algorithm. Queue management algorithms manage the length of packet queues by dropping packets whenever necessary or appropriate; Random Early Detection (RED) [3] is one such algorithm.

At present, the Internet uses a best-effort service for all classes of traffic, based on the simple datagram technique. In a move to provide service differentiation among flows, the Intserv paradigm was introduced [4]. Intserv requires packet classifiers and packet schedulers to identify and handle the forwarding of different packets so that the appropriate QoS requirements are met. Because Intserv supports end-to-end per-flow metrics for unicast and multicast flows, routers require considerable resources and computation to maintain per-flow state.

To address the scalability issues of Intserv, the Internet Engineering Task Force introduced another service model, Diffserv [5]. In the Diffserv model, per-aggregate service replaces per-flow service and complex processing is moved away from the core to the edge. Traffic is classified and conditioned at the edge of the network. In the core of the network, packet forwarding is in accordance with the per-hop behaviour associated with the different behaviour aggregates. Core-Stateless Fair Queuing [6] is another proposed model that uses a similar stateless-core architecture.

In this work, we propose a traffic management algorithm, Adaptive Priority (AP), for a Diffserv network that guarantees a minimum QoS for certain classes of aggregate traffic. The technique uses a dynamic approach to schedule packets for transmission based on the maximum delay bound for particular classes of traffic. To provide fairness between the various classes of traffic, the algorithm operates in conjunction with the RED queue management algorithm for the Diffserv network model, known as DiffRED [7]. We compare this work with the WFQ scheduling algorithm incorporated with the DiffRED queue management algorithm. As will be seen later, one of the major advantages of the proposed AP algorithm is that it provides guarantees with fewer computational resources than the WFQ.

The structure of the paper is as follows. Section II describes the QoS-based traffic management technique. The OPNET simulation model is briefly introduced in Section III. Section IV discusses the simulation results of the proposed algorithm. Section V presents the conclusions of the work.

II. QoS-BASED TRAFFIC MANAGEMENT

For the Diffserv network model, different packet-forwarding treatments can be provided at each edge router [8]. These various packet-forwarding treatments are called per-hop behaviours (PHBs). Unlike Intserv, where the edge router allocates some resources for each flow, resource allocation is done on an aggregate basis for each PHB in the Diffserv network. Diffserv supports three types of PHB, i.e. default (DE), Expedited Forwarding (EF) and Assured Forwarding (AF). The DE PHB is the standard best-effort treatment that edge routers perform when forwarding traffic. The EF PHB is a premium service that provides low-loss, low-latency, low-jitter, assured-bandwidth end-to-end services using the Diffserv network model. The AF PHB subscribes to traffic being delivered to its destination with the highest probability (but not necessarily with low jitter and low latency), provided the aggregate traffic does not exceed the traffic profile.

We have defined the DE PHB to support UDP-type traffic, streaming web video and other services that are non-responsive to congestion notification. The EF PHB is divided into three sub-classes, i.e. Gold, Silver and Bronze. EF-Gold is a premium service that supports low loss, low delay, low jitter and guaranteed bandwidth; this can include streaming multimedia and audio. To access this premium service, however, the user must have a contract with the service provider. The EF-Silver and EF-Bronze PHBs support traffic flows of WWW, TELNET, FTP and other applications that are congestion responsive. The AF PHB supports traffic flows of SMTP, POP and other congestion-responsive applications that require the highest probability of reaching the destination but not necessarily low jitter and low latency. We define congestion-responsive flows as flows that reduce their transmission into the Internet when packet drops are detected. In the proposed AP algorithm, the EF and AF PHBs support a maximum delay bound, provided the aggregate traffic of both these PHBs does not exceed the service rate of the edge router.

The reason for placing the UDP and streaming multimedia traffic into the DE PHB is quite simple. It is desirable that packets of the DE type have low latency at routers. On the other hand, when the network faces congestion, we want the EF and AF PHBs to have higher priority than the DE PHB for access to the transmission link, while at the same time not penalising DE-type packets with lower priority for an extended period of time. To achieve this, active queue management algorithms like DiffRED must be applied so that packets can be dropped to signal congestion back to the sources. The congestion-responsive sources will then reduce their transmission rate into the network and prevent excessive tail dropping of DE-type packets.

In the proposed scheduling algorithm, the above attributes are achieved when the load of the network is under unity, where we define load as the ratio of the traffic received by the edge router (in bits/sec) to its service rate (in bits/sec). When the network is not facing any congestion, the DE-type flows have relatively lower queuing delays than the EF- and AF-type flows. In the event of congestion (when the load is close to or beyond unity), the algorithm shifts its preference to the EF and AF flows, giving them relatively lower queuing delays than the DE flows at the edge routers. At the same time, extended penalisation of the DE flow is avoided by the active queue management applied to the other PHBs. With this approach, good QoS can be achieved for all types of traffic during low-congestion periods, and a minimum QoS for selected traffic during congestion, without extended bandwidth penalisation of non-responsive traffic flows.

In this work, we have used five different types of service, namely DE, EF-Gold, EF-Silver, EF-Bronze and AF.

We assume that the host node determines the PHB type of a transmitted packet and that the edge routers differentiate the flows of packets by reading the DS field of the packet. For the EF-Gold premium service, however, the service provider must mark these packets for priority queuing at the edge routers. Packets arriving at an edge router are placed in one of five queues according to the PHB type of the packet. In the core of the network, packets traverse the high-speed core routers using a FIFO discipline. Fig. 1 depicts the structure of an edge router. In this work, we have assumed that the same processor performs the multiplexing and demultiplexing of traffic.

Fig. 1: Architecture of edge router.
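To make the structure in Fig. 1 concrete, the sketch below shows an edge router that keeps one FIFO subqueue per PHB and classifies an arriving packet by its DS field. The DS-field-to-PHB mapping, the packet representation and the field names are illustrative assumptions, not values taken from the paper.

```python
import time
from collections import deque

# Hypothetical DS-field-to-PHB mapping, for illustration only; the paper does
# not specify code points for its five classes.
DS_TO_PHB = {0: "DE", 1: "EF-Gold", 2: "EF-Silver", 3: "EF-Bronze", 4: "AF"}

class EdgeRouter:
    """Edge router with one FIFO subqueue per PHB, as depicted in Fig. 1."""

    def __init__(self):
        self.queues = {phb: deque() for phb in DS_TO_PHB.values()}

    def enqueue(self, packet):
        # Classify by the DS field carried in the packet; unknown values
        # fall back to best effort (an assumption).
        phb = DS_TO_PHB.get(packet["ds_field"], "DE")
        packet["arrival_time"] = time.time()   # time-stamp on arrival
        self.queues[phb].append(packet)
        return phb
```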

Packets arriving at the edge routers are time-stamped. Unlike the WFQ algorithm, which requires a time-stamp for every packet, the AP algorithm can support bulk packet processing per PHB queue, which means that fewer packets need to be time-stamped. Undoubtedly, this leads to coarser results, but on the other hand the number of computations required is reduced. To implement a suitable scheduling algorithm serving the various types of aggregate service mentioned, an exponential equation of the following form is used at the edge routers:

f_i(∆t) = µ_i [1 − exp(−λ_i ∆t)]                (1)

where (λ_i, µ_i) is a unique set of parameters for a particular PHB i and ∆t is the waiting time of the packet in queue i. In the context of this work, f_i(∆t) can be defined as the service level of PHB i and is identified with its priority of service to the transmission link at any one time in the edge router. λ_i represents the bias of PHB i relative to another PHB i′ and determines how fast f_i(∆t) increases as a function of the queuing delay ∆t. The parameter µ_i sets the upper bound on the service level of PHB i. For access to the transmission link, the service level f_i(∆t) is calculated for the packet at the front of each PHB queue i in the edge router. The packet with the highest service level gets access to the transmission link. After serving this packet, the edge router recalculates the service levels of the packets at the front of each PHB queue i and again gives the transmission link to the packet with the highest service level f_i(∆t). This process is repeated continuously.

The service levels f_i(∆t) as a function of queuing delay in the edge router are depicted in Fig. 2 for the various PHBs. The reasons for choosing equation (1) to determine the service level are as follows. In accordance with its PHB type, the packet at the front of the queues with the highest value of f_i(∆t) gains access to the transmission link. Denoting t_int(i, i′) as the intersection time of f_i(∆t) and f_i′(∆t), and taking the example of the EF-Gold and DE PHBs in Fig. 2, it can be seen that

f_EF-Gold(∆t) ≤ f_DE(∆t) if ∆t ≤ t_int(EF-Gold, DE), and
f_EF-Gold(∆t) ≥ f_DE(∆t) if ∆t ≥ t_int(EF-Gold, DE).

If, on average, the queuing delay of the packets at the front of the queues is below the intersection time of f_DE(∆t) and f_EF-Gold(∆t), i.e. ∆t ≤ t_int(EF-Gold, DE), then the service level of DE-type packets is on average greater than the service level of EF-Gold-type packets. This results in packets of the DE type having, in general, a higher priority for access to the transmission link. However, if the queuing delays of EF-Gold-type packets go beyond t_int(EF-Gold, DE), then the service level of EF-Gold-type packets will on average be greater than that of DE-type packets. This results in a higher probability of f_EF-Gold(∆t) being greater than f_DE(∆t), which in turn gives EF-Gold packets a higher probability of accessing the transmission link than DE-type packets. The same interpretation applies to the other classes of traffic.

Another reason for using equation (1) is that it allows different service levels at different congestion levels. As can be seen in Fig. 2, the service level of DE-type packets is higher than that of packets from the other PHBs below t_int(j, DE), where j ∈ {EF-Gold, EF-Silver, EF-Bronze, AF}, but lower above t_int(j, DE). From Little's theorem, it is known that there is a strong correlation between the queuing delays of packets and congestion in the network [8]. When the network load is low, the packets at the front of the queues will generally have a low queuing delay ∆t compared with a network under high load. Thus, DE-type packets have a higher probability of accessing the transmission link, since these packets will in general have higher service levels than the others. However, when packets have a higher queuing delay as a result of congestion, the priority of access to the transmission link shifts automatically to the EF- and AF-type packets, giving these classes of traffic higher probabilities of access to the transmission link. This behaviour of the equation, together with the use of packet queuing delays and suitable parameters (λ_i, µ_i), provides an avenue to support priority queuing for differentiated services at both low and high traffic loads. We also point out that in this work the EF-type PHBs have the same maximum bound, with f_EF-Bronze(∆t) < f_EF-Silver(∆t) < f_EF-Gold(∆t) for any finite value of ∆t. With this sub-classification of the EF PHB, different levels of end-to-end delay can be supported for these sub-classes of traffic. In supporting these various types of PHB in the edge router, the parameters (λ_i, µ_i) used in the simulation are given in Table 1. These parameters reflect the plot in Fig. 2.

Fig. 2: Service levels for different aggregate traffic of the various PHBs in the Diffserv network model.
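As a rough sketch of the scheduling decision just described (not the authors' implementation), the code below evaluates equation (1) for the packet at the head of each PHB queue and serves the one with the highest service level. The (λ_i, µ_i) values are placeholders chosen only to reproduce the qualitative ordering of Fig. 2; they are not the Table 1 parameters.

```python
import math
import time

# Placeholder (lambda_i, mu_i) pairs: DE rises quickly to a low ceiling, while
# the EF sub-classes share one ceiling with lambda_Gold > lambda_Silver > lambda_Bronze.
PARAMS = {"DE": (6.0, 2.0), "EF-Gold": (1.5, 6.0), "EF-Silver": (1.0, 6.0),
          "EF-Bronze": (0.7, 6.0), "AF": (0.5, 5.0)}

def service_level(phb, delay):
    """Equation (1): f_i(dt) = mu_i * (1 - exp(-lambda_i * dt))."""
    lam, mu = PARAMS[phb]
    return mu * (1.0 - math.exp(-lam * delay))

def serve_next(queues):
    """Serve the head-of-line packet with the highest service level."""
    now = time.time()
    best_phb, best_level = None, float("-inf")
    for phb, q in queues.items():
        if not q:
            continue
        delay = now - q[0]["arrival_time"]      # queuing delay of the head packet
        level = service_level(phb, delay)
        if level > best_level:
            best_phb, best_level = phb, level
    return queues[best_phb].popleft() if best_phb else None
```

Calling serve_next again after each transmission reproduces the repeated recalculation of the service levels described above.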

For bulk packet processing, the AP algorithm keeps track of the number of packets in each subqueue at any one time. Given that the algorithm serves a maximum of x packets in a subqueue per computation, an incoming packet is time-stamped only if the current number of packets not being served in the queue, modulo x, is equal to 0. With this, a maximum of x packets per subqueue can be served per packet time-stamp (a short sketch of this rule is given after Fig. 3).

III. SIMULATION MODEL AND RESULTS

The simulation performed in this work is done using OPNET. Fig. 3 shows the network architecture of the simulation model. The network architecture used is symmetric, i.e. each router (core and edge) has similar traffic behaviour. Each edge router is attached to five different types of PHB source and to a core router. Each of the five PHB sources contains 10 bursty sources and simulates one of the DE, EF-Gold, EF-Silver, EF-Bronze and AF PHBs. These sources generate packets that are sent to other nodes through links and routers using a fixed path. The parameters used for the WFQ, AP and DiffRED are given in Table 1.

Fig. 3: Network architecture of the simulation model.
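The bulk time-stamping rule described at the end of Section II can be sketched as follows. Treating the first packet of each batch as a "batch leader" whose stamp the following packets reuse is our reading of the text, not a detail the paper spells out; x = 20 matches the value reported later for the simulation.

```python
import time

X = 20  # packets served per time-stamp/computation, as in the reported simulation

def enqueue_bulk(subqueue, packet, x=X):
    """Time-stamp an arriving packet only when the number of packets already
    waiting in the subqueue is a multiple of x, so at most x packets share one
    time-stamp (and one service-level computation)."""
    if len(subqueue) % x == 0:
        packet["arrival_time"] = time.time()                    # batch leader is stamped
    else:
        packet["arrival_time"] = subqueue[-1]["arrival_time"]   # assumed: reuse the batch stamp
    subqueue.append(packet)
```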


The buffer sizes for the edge routers are set to 2 × maxth of each PHB, in bits. The buffer sizes in the core routers are assumed to be infinite for the purposes of this simulation. The DiffRED model in this work does not use the gentle variant, i.e. the variable β is set to 1 for all PHB queues. More information about the DiffRED model can be found in [7]. In this simulation, the EF-Silver, EF-Bronze and AF flows emulate TCP congestion control [8]. The AP algorithm serves a maximum of 20 packets per PHB queue per computation. The service rates of the edge routers and core routers are set to 10 Mbps and 25 Mbps, respectively. The link speeds are set to 10 Mbps. The simulation run time is 15 minutes.
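For reference, a simplified RED-style drop test per PHB subqueue is sketched below. This is only our approximation of the DiffRED behaviour (the actual model is given in [7]): the maximum drop probability max_p and the averaging weight w_q are assumed defaults, the thresholds are in bits (consistent with the buffer sizing above), and there is no gentle region, matching the β = 1 setting.

```python
import random

class PHBRedQueue:
    """Simplified RED-style early-drop decision for one PHB subqueue."""

    def __init__(self, min_th, max_th, max_p=0.1, w_q=0.002):
        self.min_th, self.max_th = min_th, max_th   # thresholds in bits (Table 1)
        self.max_p, self.w_q = max_p, w_q           # assumed RED defaults
        self.avg_bits = 0.0                         # EWMA of the queue size
        self.queue_bits = 0                         # instantaneous queue size

    def arrival(self, packet_bits):
        """Return True if the arriving packet should be dropped."""
        self.avg_bits = (1 - self.w_q) * self.avg_bits + self.w_q * self.queue_bits
        if self.avg_bits < self.min_th:
            drop = False
        elif self.avg_bits >= self.max_th:
            drop = True                             # beta = 1: no gentle region
        else:
            p = self.max_p * (self.avg_bits - self.min_th) / (self.max_th - self.min_th)
            drop = random.random() < p
        if not drop:
            self.queue_bits += packet_bits          # accept the packet
        return drop
```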

Table 1: Simulation parameters (WFQ weights, AP parameters λ and µ, and DiffRED subqueue thresholds minth and maxth, × 10^6, for the DE, EF-Gold, EF-Silver, EF-Bronze and AF PHBs).
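As a numerical illustration of the crossing behaviour in Fig. 2 (using the placeholder (λ, µ) pairs from the earlier sketch, not the Table 1 values), the DE curve starts above the EF-Gold curve and is overtaken once the queuing delay grows:

```python
import math

def f(mu, lam, dt):
    """Equation (1)."""
    return mu * (1.0 - math.exp(-lam * dt))

MU_DE, LAM_DE = 2.0, 6.0    # DE: fast rise, low ceiling (placeholder values)
MU_G, LAM_G = 6.0, 1.5      # EF-Gold: slow rise, high ceiling (placeholder values)

for dt in (0.1, 0.3):       # queuing delay in seconds
    print(dt, round(f(MU_DE, LAM_DE, dt), 2), round(f(MU_G, LAM_G, dt), 2))
# dt = 0.1 s: f_DE ~ 0.90 > f_EF-Gold ~ 0.84, so the DE head packet is served first.
# dt = 0.3 s: f_EF-Gold ~ 2.17 > f_DE ~ 1.67, so priority has shifted to EF-Gold.
```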

IV. RESULTS AND DISCUSSION

Figs. 4 and 5 depict the time-plot of the end-to-end (ETE) delay of the network using the WFQ and AP scheduling algorithms, respectively, together with the DiffRED queue management algorithm. Generally, the ETE delays of the two algorithms are quite similar, and the AP does provide service differentiation for the various classes of traffic. There is, however, a slight difference: unlike the WFQ, the AP does not provide good ETE delay for the DE flow during high traffic loads (compare the peaks of the average traffic load with the ETE delay of the DE traffic). When the average traffic load falls below unity, the ETE delay of the DE flow drops and becomes comparable to that of the EF-Gold PHB (compare the dips of the average traffic load with the ETE delay of the DE traffic). At points where the load is high, the queuing delay of all traffic flows increases. However, the increase in queuing delay leaves the DE flows with relatively lower priority for access to the transmission link (see Fig. 2). Simultaneously, due to the DiffRED and TCP congestion control mechanisms, packets of the EF-Silver, EF-Bronze and AF flows are dropped in accordance with their queue sizes, resulting in lower transmission rates for these flows. The average queuing delay of the flows then decreases and the relative priority of the DE traffic increases compared with the other flows, giving it a higher probability of access to the transmission link and subsequently lower ETE delays. From this, it can be seen quite clearly that DE traffic is not extensively penalised during high traffic loads. Fig. 6 depicts the frequency of packet time-stamping and calculation invoked by the edge routers for the WFQ and AP algorithms.

Fig. 4: Time-plot of ETE delay using WFQ with DiffRED.

Fig. 5: Time-plot of ETE delay using AP with DiffRED.

Fig. 6: Time-plot of the frequency of packet time-stamping and calculation invoked by edge routers for the WFQ and AP scheduling algorithms.

As can be seen in this figure, the frequency of computation and packet time-stamping invoked by the edge routers using the AP algorithm is about 20% of that required by the WFQ algorithm. This means that the AP algorithm is much more economical in terms of computational resources while at the same time providing QoS guarantees for certain classes of traffic. Fig. 7 depicts the total queue size for an edge router in the network.

In this figure it can be seen that the queue sizes for the edge routers using the WFQ scheduling algorithm have a higher probability of being small or almost empty compared with the edge routers implementing the AP scheduling algorithm. This suggests that the AP-DiffRED algorithm manages the queue length better than the WFQ-DiffRED algorithm, since its queues are not empty as often.

Fig. 7: Histogram of the total queue size of edge router 0 using the WFQ and AP scheduling algorithms with the DiffRED queue management algorithm.

V. CONCLUSION

The AP traffic management algorithm uses queuing delay, in conjunction with two other parameters, to calculate the service levels for priority access to the transmission link, and operates together with the DiffRED active queue management algorithm. In this work, we have shown that service differentiation is provided for certain classes of traffic, with guaranteed bandwidth and maximum delay bounds, using far fewer computational resources. The proposed algorithm serves any ratio of flows with good QoS when the traffic load is under unity. When the traffic load is high, the AP algorithm selects certain flows to be transmitted according to a defined guaranteed bandwidth, and lower-priority classes are penalised, but only for a short term thanks to the queue management algorithm.


In this work, we have proposed a dynamic Adaptive Priority scheduling algorithm used in conjunction with the DiffRED queue management algorithm for a Diffserv network supporting the various PHBs. We have provided time-plot results of the ETE delay. From these results, it can be seen that the AP algorithm provides good service differentiation with less computation (approximately an 80% reduction compared with WFQ). For high-speed routers, the WFQ algorithm is not suitable because it is too demanding computationally. With the AP algorithm, however, good QoS can be achieved for the EF-Gold, EF-Silver, EF-Bronze and AF type flows with much lower computational cost than the WFQ. Only the DE-type flow is not guaranteed by the AP algorithm. However, even when the DE-type flow is penalised with lower priority due to congestion, it does not suffer from the lower priority for long, as seen in Fig. 5. Unlike the WFQ algorithm, whose weights are set for certain classes of traffic and cannot be varied across different stages of traffic load, the AP traffic management algorithm tries to provide good service for DE-type traffic (since most DE traffic does require low ETE delay) and resorts to shifting DE traffic to a relatively lower priority only when the network is facing congestion. Since Internet traffic is quite bursty and self-similar [9], there will be times when the traffic load is high and there is no option but to assign lower priority to certain classes of traffic for access to the transmission link. The AP traffic management algorithm facilitates this shifting of priority. The AP algorithm is also suitable for high-speed routers that need guaranteed bandwidth for certain classes of traffic, owing to the algorithm's ability to provide service according to delay requirements with minimal computational cost. Currently, work is being undertaken to study the effects of heavy-tailed traffic sources on edge routers using the AP algorithm. Future work will also incorporate a modification to DiffRED so that it supports Explicit Congestion Notification at the edge routers [10].

VI. REFERENCES

1. Braden B, Clark D, Crowcroft J, Davie B, Deering S, Estrin D, Floyd S, Jacobson V, Minshall G, Partridge C, Peterson L, Ramakrishnan K, Shenker S, Wroclawski J and Zhang L, "Recommendations on Queue Management and Congestion Avoidance in the Internet", RFC 2309, April 1998.
2. Parekh A K and Gallager R G, "A Generalized Processor Sharing Approach to Flow Control in Integrated Services Networks: The Single-Node Case", IEEE/ACM Transactions on Networking, Vol. 1, No. 3, June 1993.
3. Floyd S and Jacobson V, "Random Early Detection Gateways for Congestion Avoidance", IEEE/ACM Transactions on Networking, Vol. 1, No. 4, August 1993.
4. Braden R, Clark D and Shenker S, "Integrated Services in the Internet Architecture: An Overview", RFC 1633, June 1994.
5. Blake S, Black D, Carlson M, Davies E, Wang Z and Weiss W, "An Architecture for Differentiated Services", RFC 2475, December 1998.
6. Stoica I, Shenker S and Zhang H, "Core-Stateless Fair Queuing: A Scalable Architecture to Approximate Fair Bandwidth Allocations in High Speed Networks", Proc. ACM SIGCOMM, August 1998.
7. Abouzeid A A and Roy S, "Modeling Random Early Detection in a Differentiated Services Network", Computer Networks, Vol. 40, pp. 537-556, 2002.
8. Leon-Garcia A and Widjaja I, Communication Networks, McGraw-Hill, 2000.
9. Leland W, Taqqu M, Willinger W and Wilson D, "On the Self-Similar Nature of Ethernet Traffic (Extended Version)", IEEE/ACM Transactions on Networking, Vol. 2, No. 1, pp. 1-15, February 1994.
10. Ramakrishnan K and Floyd S, "A Proposal to Add Explicit Congestion Notification (ECN) to IP", RFC 2481, January 1999.
