Design and Implement Differentiated Service Routers in OPNET

Jun Wang, Klara Nahrstedt, Yuxin Zhou
Department of Computer Science
University of Illinois at Urbana-Champaign
{junwang3, klara, z-yuxin}@cs.uiuc.edu

Abstract

Differentiated Service Model (Diffserv) is currently a popular research topic as a low-cost method to bring QoS to today's Internet, especially in the backbone. Simulation is the best way to study Diffserv before deploying it to the real Internet. In this paper, we introduce the techniques and methodologies that we used to design and implement Diffserv-enabled routers using OPNET. We have implemented the Token Bucket and Leaky Bucket algorithms, RIO and PS queueing schemes, RED dropping schemes and other components in OPNET IP modules. Based on these Diffserv-enabled routers, we set up a large-scale network to study Diffserv QoS features: priority dropping (discrimination between different service classes), QoS guarantees, token bucket effects, fragmentation/defragmentation effects and so on. Furthermore, we present problems we encountered during our study, and their solutions.

1 Introduction

Internet traffic has increased at an exponential rate recently and shows no signs of slowing down. In the meanwhile, some new classes of applications (e.g., distributed multimedia applications, distributed realtime applications, network management, etc.) raise requirements for the underlying network infrastructure to provide soft or even hard Quality of Service (QoS) guarantees. This poses big challenges to the current Internet, since the current Internet provides only one simple service class to all users with respect to QoS: best-effort datagram delivery, which cannot provide any service quality guarantees. The gap between QoS provisioning and demand keeps widening.

In the early 90's, the Integrated Service Model (IntServ) was proposed, which provides an integrated infrastructure to handle conventional Internet applications and QoS-sensitive applications together [7, 13]. IntServ uses the resource ReSerVation Protocol (RSVP) as its signaling protocol [4, 5, 16]. Although IntServ / RSVP can provide QoS guarantees to applications, it has a scalability problem since each router in the model has to keep track of individual flows. To address the scalability issue, a new core-stateless model, called the Differentiated Service Model (Diffserv), was proposed and has become a popular research topic as a low-cost method to bring QoS to today's Internet, especially in the backbone networks [10, 6].

Intensive research efforts have been devoted to the Diffserv topic. For example, [12] studied the pricing issue in the Diffserv model. [2] implemented a Diffserv router on the Linux platform. [1] introduced a combination of Diffserv and MPLS. [3] talked about multi-field packet classification. [9] analyzed PHB mechanisms for the premium service. [15] studied the packet marking issue in Diffserv. [11, 14] did research on scheduling issues in core-stateless networks.

Although intensive research has been done on this topic, it is still hard to make progress in Diffserv research with respect to the overall impact on the Internet, because it is too expensive and still not possible to deploy real Diffserv-enabled routers into the whole Internet in one shot just for research purposes. Thus simulation is the best way to study Diffserv before deploying it to the real Internet. OPNET is a good simulator which provides complete node and model libraries besides thorough documentation.

In this work, we introduce our design and implementation of DS-enabled routers in the OPNET simulation environment. Intensive simulations are conducted to verify our design and implementation and to study the UDP performance over Diffserv in a large-scale network. We also introduce several problems we have encountered during this work, and their solutions.

The paper is organized as follows. In Section 2, we introduce the Diffserv model and the design issues. In Section 3 we cover the implementation of DS routers in OPNET. Section 4 describes the simulations we conduct in the OPNET environment and their results. In the last section, we conclude our work.

(This work was supported by the National Science Foundation PACI grant under contract number NSF PACI 1 1 13006, and NSF CISE Infrastructure grant under contract number NSF EIA 99-72884. Please address all correspondence to Jun Wang and Klara Nahrstedt at Department of Computer Science, University of Illinois at Urbana-Champaign, Urbana, IL 61801, phone: (217) 333-1515, fax: (217) 244-6869.)
2 Differentiated Service in the Internet

2.1 Diffserv Model

The main purpose of the Diffserv model is to provision end-to-end QoS guarantees by using service differentiation in the Internet. Unlike the IntServ model, it does not keep soft states for individual flows; instead, it achieves QoS guarantees by a low-cost method - aggregating individual flows into several service classes. Therefore, the Diffserv model has good scalability.

The Diffserv model works as follows. Incoming packets are classified and marked into different classes, using the so-called Differentiated Services CodePoint (DSCP) [8] (e.g., the IPv4 TOS bits or the IPv6 Traffic Class bits in an IP header). Complex traffic conditioning such as classification, marking, shaping and policing is pushed to network edge routers or hosts. Therefore, the core routers are relatively simple - they classify packets and forward them using the corresponding Per-Hop Behaviors (PHBs). From the administrative point of view, a Diffserv network could consist of multiple DS domains. To achieve end-to-end QoS guarantees, negotiation and agreement between these DS domains are needed. Although the boundary nodes need to perform complex conditioning like the edge nodes, the interior nodes within DS domains are simple [6, 10].

Three service classes have been proposed: the premium class, the assured class and the best-effort class. Different service classes are suitable for different types of applications. For example, the premium service provides a virtual reliable leased line to customers with desired bandwidth and delay guarantees, while the assured service focuses on statistical provisioning of QoS requirements and can provide soft, statistical guarantees to the users [10].
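To make the classification step concrete, the fragment below sketches how a router might map the DSCP carried in an IP header to one of the three service classes. It is only an illustration: the paper does not list the codepoint values it uses, so the EF and AF11 codepoints and all identifiers here are assumptions, not the authors' OPNET code.

```c
#include <stdint.h>

enum service_class { CLASS_PREMIUM, CLASS_ASSURED, CLASS_BEST_EFFORT };

/* Hypothetical codepoints for the example: EF (101110b) for the premium
 * class and AF11 (001010b) for the assured class. */
#define DSCP_EF   0x2E
#define DSCP_AF11 0x0A

/* Classify a packet from the DSCP in the former IPv4 TOS byte;
 * the DSCP occupies the upper six bits of that byte. */
static enum service_class classify_dscp(uint8_t tos_byte)
{
    uint8_t dscp = tos_byte >> 2;

    if (dscp == DSCP_EF)
        return CLASS_PREMIUM;
    if (dscp == DSCP_AF11)
        return CLASS_ASSURED;
    return CLASS_BEST_EFFORT;   /* everything else: best-effort */
}
```

An edge node would typically combine this with multi-field information (addresses, ports) before marking, as noted in the component list below.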

2.2 Design of DS Routers

The Differentiated Service enabled routers (DS-enabled routers, or DS routers) are the key nodes in the Diffserv model. There are two types of DS-enabled routers: (1) edge routers and (2) core routers. In this work, we focus on the design and implementation of the edge routers, since the core routers are simpler compared to the edge routers. Figure 1 shows the structure of a DS router. In the figure, we note that there are several key components in the DS router structure.

Figure 1: The Structure of a DS Router (Classifier, Meter with Token Bucket / Leaky Bucket, Marker / Re-marker, Dropper, Shaper, and the queueing disciplines PS-queue / RIO-queue)
The Classifier. The classifier classifies packets according to their DSCP in the IP headers. The classifier in an edge node may consider other information, such as source addresses and port numbers. After being classified, packets are put into premium, assured and best-effort classes accordingly.

The Meter. The meter performs in-profile / out-of-profile checking on each incoming packet. It uses the token bucket scheme to monitor the assured traffic, and the leaky bucket scheme to monitor the premium traffic, since the token bucket allows a certain amount of burstiness but the leaky bucket does not. Both the leaky bucket and the token bucket schemes control the output rate through the token generation rate (a sketch of both checks is given after this list).

The Marker/Re-marker. After being classified, packets are marked into premium, assured and best-effort classes accordingly. Re-marking happens when assured packets become out-of-profile, which means they violate the contracted rate limit. Such packets are re-marked as best-effort packets.

The Dropper/Shaper. If premium packets become out-of-profile, they are dropped directly by the dropper. Shaping happens in the edge nodes or boundary nodes, and eliminates jitter.

The Queueing Disciplining Modules. The queueing discipline modules are very important for the DS model; the differentiation is achieved here. We use two separate queues: the Premium Service Queue (PS-queue) for the premium packets and the RIO-queue (Random Early Detection with distinction of In-profile and Out-of-profile packets [2]) for both assured packets and best-effort packets. The PS-queue is a simple FIFO queue, while the RIO-queue is more complicated. Figure 2 illustrates the multi-class Random Early Detection (RED) algorithm which the RIO-queue uses. When the RIO-queue length exceeds the dropping threshold Tmin_b, new best-effort packets are dropped with increasing probability up to Pb. When the RIO-queue length exceeds Tmin_a, new assured packets are dropped with increasing probability up to Pa. When the queue length exceeds Tmax_b, all new best-effort packets are dropped. When the queue length exceeds Tmax_a, all incoming packets are dropped. By tuning the values of Tmin_b, Tmax_b, Tmin_a, Tmax_a, Pb and Pa, we can obtain different dropping behaviors for both best-effort and assured packets (a sketch of this decision follows Figure 2).
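As referenced from the Meter item above, the following sketch shows one way the two conformance checks could look in plain C: a token bucket whose depth bounds the burst allowed to assured traffic, and a leaky-bucket-style check for premium traffic that tolerates essentially no burst. The structure, the names and the one-packet cap on the premium bucket are assumptions for illustration, not the implementation used in the paper's OPNET models.

```c
/* Minimal meter sketch: token bucket (assured) and leaky bucket (premium).
 * Times are in seconds, sizes in bits; rate is the token generation rate
 * in bits per second. All names are illustrative. */
typedef struct {
    double rate;        /* token generation rate (bits/s) */
    double depth;       /* bucket depth: maximum burst size (bits) */
    double tokens;      /* currently available tokens (bits) */
    double last_time;   /* time of the last update (s) */
} bucket_t;

/* Token bucket: bursts up to 'depth' are in-profile.
 * Returns 1 if the packet conforms, 0 if it is out-of-profile. */
int token_bucket_conforms(bucket_t *b, double now, double pkt_bits)
{
    b->tokens += (now - b->last_time) * b->rate;
    if (b->tokens > b->depth)
        b->tokens = b->depth;
    b->last_time = now;

    if (b->tokens >= pkt_bits) {
        b->tokens -= pkt_bits;
        return 1;               /* in-profile */
    }
    return 0;                   /* out-of-profile */
}

/* Leaky bucket for premium traffic: no bursts beyond the reserved peak
 * rate are tolerated, so the effective depth is a single packet. */
int leaky_bucket_conforms(bucket_t *b, double now, double pkt_bits)
{
    b->tokens += (now - b->last_time) * b->rate;
    if (b->tokens > pkt_bits)   /* cap at one packet's worth of tokens */
        b->tokens = pkt_bits;
    b->last_time = now;

    if (b->tokens >= pkt_bits) {
        b->tokens -= pkt_bits;
        return 1;
    }
    return 0;
}
```

Both checks refresh the token count from the elapsed time and the token generation rate, which is how either scheme ties the permitted output rate to that rate.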

Figure 2: RIO Queueing Discipline (dropping probability vs. queue length, with thresholds Tmin_b, Tmax_b, Tmin_a, Tmax_a and maximum drop probabilities Pb, Pa)
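The dropping behavior sketched in Figure 2 can be summarized in a few lines of C. The sketch below uses the instantaneous queue length and a linear ramp between the two thresholds; the parameter handling, the use of the instantaneous rather than an averaged queue length, and all names are simplifying assumptions, not the RIO-queue code from the OPNET model.

```c
#include <stdlib.h>

/* RIO drop decision sketch: per-class thresholds as in Figure 2.
 * qlen is the current RIO-queue length; values are illustrative. */
typedef struct {
    double t_min;   /* start dropping above this length */
    double t_max;   /* drop everything above this length */
    double p_max;   /* maximum early-drop probability (Pb or Pa) */
} rio_params_t;

/* Returns 1 if the arriving packet should be dropped. */
int rio_should_drop(const rio_params_t *p, double qlen)
{
    double drop_prob;

    if (qlen <= p->t_min)
        return 0;                        /* below threshold: accept */
    if (qlen >= p->t_max)
        return 1;                        /* above upper threshold: drop */

    /* Linear ramp from 0 up to p_max between t_min and t_max. */
    drop_prob = p->p_max * (qlen - p->t_min) / (p->t_max - p->t_min);
    return ((double)rand() / RAND_MAX) < drop_prob;
}
```

Best-effort packets would be checked against the tighter (Tmin_b, Tmax_b, Pb) parameters and assured packets against the looser (Tmin_a, Tmax_a, Pa) ones, which yields the priority dropping described above.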

3 Implementation

We implement the Diffserv-enabled router (DS router) using the OPNET simulation environment. Based on the DS routers, we construct a large-scale network environment in which multiple DS routers and traffic senders/receivers are included. The simulations we conduct focus on the verification of the DS routers and the study of their performance. In our configuration we consider multiple DS routers, traffic senders and one receiver (Figure 3). The simulation is implemented in OPNET Modeler 6.0.L running on a Windows NT 4.0 Workstation with dual PentiumPro 200MHz CPUs and 128MB of RAM. Figure 3 also shows the scale of the simulation environment. The "clients" subnet comprises three client nodes, one switch and one DS edge router. The "INET_CLOUD" consists of three DS-enabled / non-DS-enabled routers (it can be expanded to a more complicated topology). The "servers" subnet contains one server and one edge router.

In this section, we first introduce the implementation of the required network nodes, including the DS router, traffic sender and traffic receiver. Then we introduce problems we encountered during the implementation, and their solutions.

Figure 3: Network Topology for the Simulation (the "clients" subnet: client0, client1, client2, switch0, E_router_0; the "INET_CLOUD": router_0, router_1, router_2; the "servers" subnet: E_router_1, server)
3.1 DS router

To implement the DS scheme in a router, we have two options: (1) start the implementation from scratch; or (2) take advantage of an existing router architecture in OPNET.

We choose the latter. Thanks to the complete node library provided by OPNET, we have multiple choices on which to base our DS router. In our implementation, we choose the Cisco 7204 router as our base, which saves a lot of implementation time, since we do not have to handle routing, MAC or TCP/UDP at all.

According to the DS scheme, IP packets are classified with respect to the DSCP in their IP headers so that IP flows are aggregated into different service classes. It is therefore natural to put the DS scheme into the IP module, so we re-write the IP module and put the DS components in it. Figure 4 shows the node model of a DS router. The picture is the same as that of a conventional router, but the "ip" process model (the block just below the "ip_encap" block) has been changed to our DS-enabled "ip" process model.

Figure 4: The Node Model for a DS Router

In this figure, we notice that the overall structure of the router has not changed much, although making the router DS-enabled is a significant enhancement with respect to functionality. The reason is that in OPNET different modules (e.g., MAC, IP, TCP, OSPF, RIP and so on) are implemented as separate objects, which communicate with each other through interfaces. As long as the new module keeps an appropriate interface, the whole model works fine.

Figure 5 illustrates the process model for a DS-enabled IP module. We notice that in the process model there are two different processes. The upper one is the main IP process which implements the main IP and Diffserv functionality (called the diff_ip_rte_v4 model), and the lower one is the child process which implements the priority scheduling scheme for Diffserv (called the diff_pq model).

Figure 5: The Process Model for a DS Enabled IP Module
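As a rough illustration of the decision the diff_pq child process has to make, the fragment below sketches a strict-priority dequeue across the two queues: the PS-queue is always served before the RIO-queue. This is an assumed reading of "priority scheduling" here, with invented names; it is not the actual child process code.

```c
/* Strict-priority dequeue sketch for the two Diffserv queues: the
 * PS-queue (premium, plain FIFO) is always served before the RIO-queue
 * (assured + best-effort). The queue abstraction is illustrative. */
typedef struct pkt pkt_t;            /* opaque packet type */

typedef struct {
    pkt_t *(*dequeue)(void *state);  /* pop head packet, or NULL if empty */
    void   *state;
} queue_t;

/* Pick the next packet to transmit: premium first, then RIO. */
pkt_t *ds_next_packet(queue_t *ps_queue, queue_t *rio_queue)
{
    pkt_t *pk = ps_queue->dequeue(ps_queue->state);
    if (pk != NULL)
        return pk;                                   /* premium waiting */
    return rio_queue->dequeue(rio_queue->state);     /* may be NULL */
}
```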

The IP diff_ip_rte_v4 process model is implemented as follows:
Initializations. All the initializations are done in the "init", "wait", "cmn_rte_tbl" and "init_too" states sequentially, which is the same as what the regular IP process model does.

DS or non-DS. If the node is set to DS-enabled, the transition with the "DIFFSERV" condition occurs. Otherwise, the transition with the "NO_DIFFSERV" condition occurs. The reason why we design the model to handle both DS-enabled and non-DS-enabled cases will be described later (Section 3.3, Problem I).

Packet Classification. The packet classification is done within the "DS_schd" state.

Packet Monitoring and Policing. Packet monitoring and policing are implemented within the "DS_schd" state too. After being classified, an incoming packet is monitored and policed according to the class it belongs to. If the packet is a premium class packet, it is monitored and policed using the leaky bucket model (Section 2.2). If the packet is an assured or a best-effort class packet, it is monitored and policed using the token bucket model (Section 2.2). If the packet is premium and conforming (in-profile), it is processed directly by the next state (the "IP_serv" state). If it is non-conforming (out-of-profile), it is dropped (destroyed) without any further processing. If the packet is an in-profile assured packet, it is processed by the "IP_serv" state; otherwise it is re-marked as a best-effort packet in the "DS_schd" state and processed by the "IP_serv" state later. If the packet is a best-effort packet, it goes ahead into the "IP_serv" state and gets processed there.

Packet Routing and Forwarding. After classification and conformance checking, the packet enters the regular IP forwarding process, which is implemented by the "IP_serv", "svc_start", "svc_compl" and "idle" states. All of these states are almost the same as those in a conventional IP module, except that the "idle" state is diffserv-aware (besides the conditional transitions "ARRIVAL" and "SVC_COMPLETION", a "DS_SCHD" transition is added).

Leaky Bucket and Token Bucket. As we described in Section 2.2, we use the leaky bucket model and the token bucket model to do conformance checking on premium class traffic and assured class traffic, respectively. The reason is that for premium class traffic, the resource reservation is done based on the peak rate, so we do not allow any burst rate which exceeds this reserved rate; for the assured class traffic, the reservation is based on the statistically guaranteed rate, so a certain amount of burstiness is allowed. How much burstiness is allowed in the system is determined by the token bucket depth. Figure 6 shows the implementation. To calculate the token availability, instead of scheduling a self-interrupt for each time unit, we do the calculation only at the time a packet arrives, which is more efficient. For a premium packet, we hold it in the bucket until it gets enough tokens; if the bucket overflows, the packet is discarded directly. For the token bucket, we keep track of two time variables, the current time and the last service time, which are used to calculate the available tokens. When an assured packet comes, the token bucket first updates its available tokens:

available tokens = available tokens + (current time - last service time) * token generation rate
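Putting the pieces together, the sketch below illustrates the per-packet decision described for the "DS_schd" stage: out-of-profile premium packets are dropped, out-of-profile assured packets are re-marked as best-effort, and everything else is passed on to the forwarding states. It reuses the classes and bucket checks from the earlier sketches (declared again here so the fragment stands alone) and is an illustration under those assumptions, not the authors' Proto-C code.

```c
/* Prerequisites from the earlier sketches (Sections 2.1 and 2.2),
 * repeated so this fragment stands alone. bucket_t is the meter state;
 * the *_conforms() helpers update the tokens from the elapsed time and
 * report whether the packet is in-profile. All names are illustrative. */
enum service_class { CLASS_PREMIUM, CLASS_ASSURED, CLASS_BEST_EFFORT };
typedef struct bucket bucket_t;
int token_bucket_conforms(bucket_t *b, double now, double pkt_bits);
int leaky_bucket_conforms(bucket_t *b, double now, double pkt_bits);

typedef enum { ACTION_FORWARD, ACTION_DROP } ds_action_t;

/* Per-packet policing decision of the DS_schd stage. 'now' would come
 * from the simulation clock; the token update happens inside the
 * conformance checks, i.e. only when a packet arrives. */
ds_action_t ds_police(enum service_class *cls, bucket_t *premium_meter,
                      bucket_t *assured_meter, double now, double pkt_bits)
{
    switch (*cls) {
    case CLASS_PREMIUM:
        /* Out-of-profile premium traffic is dropped directly. */
        if (!leaky_bucket_conforms(premium_meter, now, pkt_bits))
            return ACTION_DROP;
        return ACTION_FORWARD;

    case CLASS_ASSURED:
        /* Out-of-profile assured traffic is re-marked as best-effort. */
        if (!token_bucket_conforms(assured_meter, now, pkt_bits))
            *cls = CLASS_BEST_EFFORT;
        return ACTION_FORWARD;

    default:
        /* Best-effort traffic is forwarded as-is (subject to RIO later). */
        return ACTION_FORWARD;
    }
}
```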
