9th IFIP CONFERENCE on PERFORMANCE MODELLING AND EVALUATION OF ATM & IP NETWORKS 2001, Budapest

An Admission Control Approach for Elastic Flows in the Internet

Lluís Fàbrega, Teodor Jové, Antonio Bueno, José L. Marzo 1
Institut d'Informàtica i Aplicacions (IIiA), Universitat de Girona
Lluís Santaló Av., 17071 Girona, SPAIN
e-mail: { fabrega | teo | bueno | marzo }@eia.udg.es

Abstract

Applications such as web or ftp require minimum guarantees of network performance: users expect some minimum response time in their transactions, which translates into a minimum network throughput. These applications are satisfactorily supported by the Assured Service, which provides a minimum throughput and allows the user to obtain extra throughput if network resources are available. In this paper we assume an Internet architecture based on the Differentiated Services framework. We propose an Admission Control (AC) method for the Assured Service that belongs to the family of end-to-end measurement-based Admission Control schemes. In our method, the AC phase lies within the data transfer phase, since the first data packets are used to probe the available network throughput. During the AC phase packets are assigned to one set of classes, and after the AC phase to another. Each class has a different per-hop behaviour that protects allocated flows from flows still in the Admission Control phase.

Keywords: QoS, Differentiated Services, Assured Service, Admission Control.

1 Introduction

Many efforts are being made to turn the Internet into a multiservice network able to satisfactorily support any kind of application. Real-time audio and video applications, ftp and web applications, and future ones have very different requirements regarding the Quality of Service (QoS) provided by the network. Today's Internet offers a single best-effort service that keeps the network simple and robust, but it does not meet these new QoS requirements. A multiservice Internet needs more complex network mechanisms, but the goal is to evolve the existing architecture while maintaining its simplicity. It is usually assumed that traditional Internet applications such as web or ftp have more flexible QoS requirements than real-time applications. However, there is still a minimum QoS that the network should satisfy. Moreover, these applications can achieve a better QoS if network resources are available, by means of rate-adaptive mechanisms (hence they are called elastic). Such applications are satisfactorily supported by the Assured Service proposed in [clar98a], which guarantees a minimum throughput and allows the user to obtain extra throughput if available. Service guarantees can be obtained through network overprovisioning, but we are interested in studying the use of Admission Control mechanisms when "normal" provisioning is used. To achieve scalability, we consider a core-stateless network with the Differentiated Services architecture [rfc2475] to build the Assured Service and our proposed Admission Control method.

1 This study was partially supported by the CICYT (Spanish Education Ministry) contract TEL-98-0408-C02-01 and the Acciones Integradas program ref. HI1998-0032.


The paper is organised as follows. First we review the features and QoS requirements of elastic applications and the QoS provided by the Assured Service. After that, service agreements between users and providers for the Assured Service are presented. Next we review the mechanisms considered in building the Assured Service and discuss the issue of achieving service guarantees. Finally we present our proposed Admission Control method and finish with some conclusions.

2 The nature of elastic applications

In data applications such as web or ftp, users transfer documents of different sizes (typically short for web and longer for ftp). No errors are expected and, depending on the user's requirements, different degrees of interactivity or response times may be expected (usually short for web and longer for ftp). The source fragments the document and sends a packet flow that the network delivers to the destination with some delay and losses. The application detects packet losses and corrects them through retransmission (TCP is the transport protocol usually used), which adds more delay and increases the document transfer time. The decisive network QoS parameter is the average receiving rate, or throughput. Application QoS and network QoS parameters are easily related, because the realised throughput equals the document size divided by the transfer time. The utility curve of these applications, which measures the user's satisfaction as a function of throughput, is modelled in [shen95a] as positive and strictly concave; taken literally, this means users would tolerate arbitrarily large response times. In practice, however, user impatience or higher-layer protocols impose a limit on the response time, and if it is exceeded the transaction is aborted. Moreover, in a commercial Internet, users will pay to obtain some desired performance (and some users may be more demanding than others and willing to pay more). There is therefore a minimum throughput required or desired by users. These applications and flows are usually called elastic because the sending rate can vary up to the capacity of the network input link. This allows the application to take advantage of any available network resources to achieve the maximum possible throughput. Because the available resources change over time, the application uses rate-adaptive mechanisms that increase and decrease the sending rate with the goal of matching these variations and minimising packet loss.
Currently TCP uses an additive-increase/multiplicative-decrease algorithm that reacts to the reception and non-reception of ACK packets [jaco88a].
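The additive-increase/multiplicative-decrease rule can be sketched as follows. This is a minimal illustration of the abstract AIMD principle, not of TCP's full congestion control; the function name and the increase/decrease constants are chosen here for illustration.

```python
def aimd_update(cwnd: float, ack_received: bool,
                increase: float = 1.0, decrease: float = 0.5) -> float:
    """One step of additive-increase/multiplicative-decrease.

    On an ACK the window grows by a fixed increment (additive increase);
    on a loss signal it is multiplied by a factor < 1 (multiplicative
    decrease), as in TCP congestion avoidance [jaco88a].
    """
    if ack_received:
        return cwnd + increase            # additive increase
    return max(1.0, cwnd * decrease)      # multiplicative decrease

# A lossless run grows the window linearly; a single loss halves it.
w = 10.0
for _ in range(4):
    w = aimd_update(w, ack_received=True)
assert w == 14.0
w = aimd_update(w, ack_received=False)
assert w == 7.0
```

The asymmetry (slow linear growth, fast halving) is what lets competing flows converge towards an equal share of a bottleneck link.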

3 The Assured Service

These applications require a minimum throughput and can benefit from extra throughput thanks to their adaptive features. The network service should have a "best-effort with floor" nature [wroc98a], that is, a service that offers a guaranteed performance but provides a better one when possible. This service is called the Assured Service or Allocated-Capacity Service [clar98a], and it can be defined as follows:
− There is a traffic profile that allows the network to classify each packet of a flow as an in-profile packet or an out-profile packet.
− In-profile packets have assured delivery (which provides the minimum throughput).
− Out-profile packets do not have assured delivery; they are delivered if network resources are available (which provides the extra throughput).
Available resources are shared among competing flows according to some defined sharing policy. Possible sharing policies include an equal share for all flows, a share proportional to their assured throughput, a different share per user class, or priority to short flows [robe98a].


4 Service Agreements

A service agreement between users and providers specifies different aspects of the service delivery. Non-technical aspects can cover pricing, penalties if the agreement is not met, verification methods, reporting, and so forth. From the technical point of view, the agreement specifies the QoS that can be requested, the traffic characteristics and the minimum percentage of service requests that should be satisfied. It is important to note that a user can be an end-user, a network or a group of networks. The view is that a set of networks have bilateral agreements in a recursive chain of providers acting as users of other providers. This allows end-to-end services to be provided while stating clearly the responsibilities in case of agreement violations. In our case, the service agreement specifies an Assured Service for an aggregation of any number of flows to any destination. The user can request an individual minimum throughput for each flow, as long as a limit value called the Aggregated Minimum Throughput (AMT) is not exceeded: the sum of the requested minimum throughputs of all simultaneous flows must always be less than or equal to the contracted AMT. Users thus manage the sharing of their contracted AMT among their flows according to their own policy, while the provider verifies that the user's service requests do not exceed the contracted AMT. Finally, the agreement also specifies the minimum percentage of flows that should receive the requested minimum throughput, which is usually expected to be high. Service agreements and traffic statistics are used to carry out the provisioning of network resources.
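The provider-side AMT check reduces to a simple invariant on the set of simultaneous requests. The following sketch, with hypothetical names, shows the rule; the paper does not prescribe any particular implementation.

```python
def can_accept(request_kbps: float, active_requests: list,
               amt_kbps: float) -> bool:
    """Provider-side check: the sum of the minimum throughputs requested
    for all simultaneous flows, plus the new request, must not exceed
    the contracted Aggregated Minimum Throughput (AMT)."""
    return sum(active_requests) + request_kbps <= amt_kbps

# Two flows already request 200 and 300 kb/s against a 1000 kb/s AMT.
active = [200.0, 300.0]
assert can_accept(400.0, active, amt_kbps=1000.0)      # 900 <= 1000: fine
assert not can_accept(600.0, active, amt_kbps=1000.0)  # 1100 > 1000: rejected
```

Note this check enforces only the contract between user and provider; whether the network can actually deliver the throughput is the job of the Admission Control method of Section 7.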

5 Building the service

Integrated Services (Intserv) and Differentiated Services (Diffserv) are the two broad network architectures proposed to turn the Internet into a multiservice network. The fundamental difference between them is the use of per-flow state in the network core. In Intserv [rfc1633] [clar92a] every router (on the edge or in the core) has knowledge of all passing flows. This allows an individual treatment to be applied to each flow and therefore offers a richer service model. However, it does not scale with the number of flows, so it is widely accepted that Intserv is not appropriate in backbone networks, where core routers serve hundreds of thousands of flows simultaneously. In contrast, Diffserv [rfc2475] [rfc2638] uses per-flow state in edge routers but not in core routers. An edge router assigns a flow to a class (aggregate); there is a small set of classes, and core routers apply the same treatment (Per-Hop Behaviour or PHB) to all packets that belong to the same class. Each packet carries a mark in the header that identifies the class or PHB (written in the so-called DS field [rfc2474]) and allows simple classification. Although the Diffserv model might not provide the same services as Intserv, its scalability makes it more appropriate for deployment in backbone networks. Finally, several proposals provide a richer service model using the scalable core-stateless approach (the SCORE network [stoi98b] [stoi99a] and the Corelite network [siva00c]). We consider a core-stateless network like Diffserv to build the Assured Service in order to obtain a simple and scalable solution. The scheme was proposed in [clar98a] and uses two classes, AIN and AOUT:
− A Traffic Meter monitors each packet of the flow and classifies it as an in-profile or out-profile packet, depending on whether the sending rate is smaller or greater than the flow's assured throughput. In-profile packets are assigned to class AIN while out-profile packets are assigned to class AOUT. A Marker writes the corresponding mark in the packet header.
− Core routers in the followed path use the mark to classify packets into class AIN or AOUT. Output queues are scheduled and managed according to packet class.
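One common way to realise the traffic profile of the meter/marker above is a token bucket; the paper only requires that the meter compare the sending rate with the flow's assured throughput, so the token-bucket mechanism, the class names and the parameter values below are illustrative assumptions.

```python
class TokenBucketMeter:
    """Marks each packet AIN (in-profile) or AOUT (out-profile).

    Tokens accumulate at the profile rate up to a burst allowance;
    a packet is in-profile if enough tokens are available to cover it.
    """
    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8.0       # token refill rate, bytes per second
        self.burst = burst_bytes         # bucket depth
        self.tokens = burst_bytes        # start with a full bucket
        self.last = 0.0                  # time of the previous packet

    def mark(self, size_bytes: int, now: float) -> str:
        # Refill tokens for the elapsed time, capped at the bucket depth.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size_bytes:
            self.tokens -= size_bytes
            return "AIN"                 # in-profile: assured delivery
        return "AOUT"                    # out-profile: delivered if resources allow

meter = TokenBucketMeter(rate_bps=8000.0, burst_bytes=1000.0)  # 1 kB/s profile
assert meter.mark(1000, now=0.0) == "AIN"    # the burst allowance covers it
assert meter.mark(1000, now=0.1) == "AOUT"   # only 100 bytes refilled since
assert meter.mark(1000, now=1.1) == "AIN"    # a full second refills the bucket
```

A sender pacing itself at or below the assured throughput thus sees all its packets marked AIN, while any excess traffic is marked AOUT without being dropped at the edge.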


It is desirable to maintain packet ordering, so FIFO scheduling is used. Packet loss is the main concern, so since AOUT packets have a lower assurance level than AIN packets, the queue is managed by applying a higher discarding priority to class AOUT than to class AIN. Therefore, if packets in a queue have to be discarded, AOUT packets are discarded first. This scheme was shown to perform well in [clar98a] [feng99b] using discarding mechanisms based on Random Early Detection (RED) and TCP sources.

[Figure: functions at the edge (flow classifier, traffic meter and marker, driven by the assured throughput) and functions in the core (class classifier, FIFO queue with priority discarding).]
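The combination of a single FIFO queue with class-based discarding can be sketched as below. This is a deliberately simplified stand-in for the RED-based discarding used in [clar98a]: when the queue is full, it evicts a queued packet of the most drop-eligible class rather than computing drop probabilities. The class names and eviction policy beyond the priority order are illustrative; the drop order already includes the RIN/ROUT classes introduced in Section 7.

```python
from collections import deque

# Discarding priority, high to low (ROUT dropped first, AIN last).
DROP_ORDER = ["ROUT", "AOUT", "RIN", "AIN"]

class PriorityDropFifo:
    """A single FIFO queue (so packet order is preserved) that, when
    full, discards a queued packet of a more drop-eligible class before
    sacrificing the arriving packet."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.queue = deque()             # items are (class, packet id)

    def enqueue(self, cls: str, pkt_id: int) -> bool:
        if len(self.queue) < self.capacity:
            self.queue.append((cls, pkt_id))
            return True
        # Queue full: look for a victim of strictly lower assurance.
        for victim_cls in DROP_ORDER:
            if DROP_ORDER.index(victim_cls) >= DROP_ORDER.index(cls):
                break                    # no queued class is more drop-eligible
            for item in self.queue:
                if item[0] == victim_cls:
                    self.queue.remove(item)          # discard the victim
                    self.queue.append((cls, pkt_id))
                    return True
        return False                     # the arriving packet is discarded

    def dequeue(self):
        return self.queue.popleft() if self.queue else None

q = PriorityDropFifo(capacity=2)
q.enqueue("AOUT", 1)
q.enqueue("AIN", 2)
assert q.enqueue("AIN", 3)        # evicts the queued AOUT packet
assert not q.enqueue("AOUT", 4)   # only AIN packets remain, so AOUT is dropped
assert q.dequeue() == ("AIN", 2)  # FIFO order among survivors is preserved
```

Because dequeueing is plain FIFO, packets of a single flow never get reordered; the priority applies only to the discard decision, exactly as the text requires.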

6 Achieving service guarantees

In the above scheme, if AIN packets have to be discarded (because the queue is full and contains only AIN packets), then some flows will not receive their assured throughput. To avoid this situation, network resources should be sufficient to carry the traffic offered by the AIN class. Otherwise, if there is a traffic overload, congestion occurs and the promised QoS is not delivered. If the objective is that congestion never occurs (all service requests are satisfied), the network must be provisioned for the worst-case scenario, that is, when all sources are sending their maximum traffic to the same egress point. This is also called overprovisioning. Since the worst-case scenario is generally unlikely to happen, the result is inefficient resource provisioning. When less pessimistic provisioning is made (for a "normal" scenario), efficiency increases but congestion becomes possible (some service requests might not be satisfied). Congestion can be handled using one of two paradigms, "sharing" or "blocking":
− In the "sharing" option, resources are shared by all competing flows in some way, so that the QoS (i.e., the throughput) decreases for all flows (and as the number of flows increases, the throughput decreases further). The service is said not to offer throughput guarantees.
− In the "blocking" option, resources are assigned to some of the flows so that they receive the desired QoS, while the remaining flows do not. The service is said to offer throughput guarantees. An Admission Control mechanism decides which flows receive the requested QoS and which do not.
Therefore, with the first option not all service requests will be satisfied, while with the second option some service requests will be fully satisfied and the rest refused.
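The numerical difference between the two paradigms can be made concrete with a toy capacity model (a single bottleneck link, flows with identical minimum-throughput needs; all names and figures below are illustrative):

```python
def sharing_throughput(capacity: float, n_flows: int) -> float:
    """'Sharing': every competing flow gets an equal part, with no floor,
    so per-flow throughput shrinks as flows arrive."""
    return capacity / n_flows

def blocking_admitted(capacity: float, minimum: float) -> int:
    """'Blocking': admit only as many flows as can each be given the
    minimum throughput; the rest are refused."""
    return int(capacity // minimum)

# A 10 Mb/s link and 20 flows that each need a 1 Mb/s minimum:
assert sharing_throughput(10.0, 20) == 0.5   # all 20 flows fall below the minimum
assert blocking_admitted(10.0, 1.0) == 10    # 10 flows get 1 Mb/s, 10 are blocked
```

Under overload, sharing leaves every flow below its minimum (zero satisfied requests in utility terms), while blocking satisfies as many requests as the capacity allows, which is the argument from [shen95a] restated in the text.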
According to [shen95a], the goal of the network is to maximise the overall utility of the users (the sum of all utility curves), so if there is a minimum throughput requirement, "blocking" is better than "sharing" when congestion occurs. Moreover, network resources are wasted carrying packets of flows that will be interrupted by the user [robe99c]. On the other hand, if one considers that a minimum throughput is not required, "sharing" would be better (in fact, one cannot then say that a service request is not satisfied; it always is). The latter is the traditional view in the Internet for elastic applications, the right service being the one that offers a fair (or weighted fair) throughput among users. TCP congestion


avoidance mechanisms were designed with the objective of sharing the capacity of a bottleneck link equally among competing flows [jaco88a]. In conclusion, service guarantees can be achieved either through overprovisioning and no admission control (all service requests satisfied), or through "normal" provisioning and admission control (some service requests possibly not satisfied). In this paper we are interested in the use of Admission Control methods.

7 Proposal of an Admission Control method

When a new flow requests some minimum throughput, the Admission Control (AC) mechanism evaluates whether the network is able to provide this minimum throughput while maintaining the minimum throughput already assured to the accepted flows. After the AC phase, the flow is allocated an assured throughput, which can range from 0 (that is, a best-effort service) to the requested minimum throughput. The AC method should be simple and fast. The majority of Internet elastic traffic comes from web page transactions that turn into short-lived flows, so we need to deal with a large number of short flows. For example, a core router could simultaneously handle 100 thousand flows of an average duration of 10 seconds, which means 10 thousand flows starting and ending per second. A traditional per-flow state approach is therefore not appropriate. Setting up a reservation in each router through signalling messages has a high overhead relative to the short life of a flow: processing the large number of signalling messages announcing the start or end of flows, and classifying each incoming packet into one of this great number of active flows, would become a very complex task. An AC method that avoids signalling is proposed in [robe99c]. Each router detects flows implicitly: the start of a flow is detected when its first packet is received, and its end when no packet is received within some defined timeout interval. The AC decision consists in limiting the number of active flows on a link to a value that guarantees a minimum throughput to all of them. However, the method still needs per-flow state in core routers, which makes it non-scalable. We consider a core-stateless network to build an AC method for the Assured Service. Our proposal belongs to the Admission Control schemes based on end-to-end measurement [bian00a].
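The implicit flow detection of [robe99c] can be sketched as follows; the table structure, timeout value and method names are illustrative, and the per-flow dictionary is precisely the state that makes this approach non-scalable in core routers.

```python
TIMEOUT = 2.0  # illustrative: seconds without packets before a flow is deemed ended

class ImplicitFlowTable:
    """Implicit flow detection: a flow starts when its first packet
    arrives and ends when no packet is seen for TIMEOUT seconds.
    Note the per-flow state (one entry per active flow), which is the
    scalability objection raised in the text."""
    def __init__(self):
        self.last_seen = {}              # flow id -> time of last packet

    def on_packet(self, flow_id: tuple, now: float) -> bool:
        """Record a packet; return True if it starts a new flow."""
        self._expire(now)
        is_new = flow_id not in self.last_seen
        self.last_seen[flow_id] = now
        return is_new

    def _expire(self, now: float):
        # Drop flows that have been silent longer than the timeout.
        stale = [f for f, t in self.last_seen.items() if now - t > TIMEOUT]
        for fid in stale:
            del self.last_seen[fid]

    def active_flows(self, now: float) -> int:
        self._expire(now)
        return len(self.last_seen)

table = ImplicitFlowTable()
assert table.on_packet(("10.0.0.1", "10.0.0.2", 80), now=0.0)      # new flow
assert not table.on_packet(("10.0.0.1", "10.0.0.2", 80), now=1.0)  # same flow
assert table.active_flows(now=4.0) == 0                            # timed out
```

The AC decision in [robe99c] would then be a comparison of `active_flows()` against the link's admission limit; our core-stateless proposal avoids maintaining this table altogether.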
In end-to-end measurement-based schemes, the end points measure the QoS over a path during the AC phase, and the AC decision is based on this measurement. We assume there is a way to pin a flow to a route (all flow packets follow the same path through the network), using for example MPLS mechanisms.

[Figure: timeline of a flow, showing the AC phase and the exchange of data packets and error control packets between the end points.]

In our proposal the AC phase lies within the data transfer phase. The first data packets are used to probe the available throughput along the followed path (from this point of view the AC is transparent) and, after that, a throughput is allocated. We add two classes, RIN and ROUT, which are only used during the AC phase:


− A new flow requests some minimum throughput and the source starts to send packets.
− A Traffic Meter monitors each packet of the flow and classifies it as an in-profile or out-profile packet, depending on whether the sending rate is smaller or greater than the flow's requested minimum throughput. In-profile packets are assigned to class RIN while out-profile packets are assigned to class ROUT. A Marker writes the corresponding mark in the packet header.
− Core routers in the followed path use the mark to classify packets into class AIN, AOUT, RIN or ROUT. Output queues are scheduled and managed according to packet class.
− The throughput experienced by the flow is measured. The allocated (assured) throughput is the minimum of the measured throughput and the requested minimum throughput.
− From then on, flow packets are classified and assigned to the AIN and AOUT classes according to the assured throughput.
A basic point is that it is necessary to protect allocated flows from the flows that are in the AC phase. This is done by defining how routers handle each class. To maintain packet ordering we use FIFO scheduling. Because packets from different classes have different assurance levels, a different discarding priority is used for each class. The discarding priority, from high to low, is applied to the classes in the following order: ROUT, AOUT, RIN, AIN. Another basic point for the proper operation of the method is that sources should transmit at a rate equal to or higher than the requested minimum throughput during the AC phase, and after that at a rate equal to or higher than the allocated throughput until the end of the flow. If the sending rate during the AC phase is below the requested minimum throughput, then the measured throughput will certainly be smaller than the minimum.
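The edge-side logic of the steps above can be condensed into two small functions. The function names are hypothetical and the rate comparison stands in for the Traffic Meter of the text; the allocation rule (minimum of measured and requested throughput) is exactly the one in the list above.

```python
def allocate_throughput(requested_min: float, measured: float) -> float:
    """End of the AC phase: the assured throughput allocated to the flow
    is the minimum of the measured throughput and the requested minimum
    (so it ranges from 0 up to the requested minimum)."""
    return min(measured, requested_min)

def mark_packet(sending_rate: float, threshold: float, in_ac_phase: bool) -> str:
    """Edge marking: RIN/ROUT during the AC phase (threshold is the
    requested minimum throughput), AIN/AOUT afterwards (threshold is
    the allocated assured throughput)."""
    if in_ac_phase:
        return "RIN" if sending_rate <= threshold else "ROUT"
    return "AIN" if sending_rate <= threshold else "AOUT"

# A flow requests 500 kb/s; probing measures only 400 kb/s available.
assert mark_packet(450.0, 500.0, in_ac_phase=True) == "RIN"
assert mark_packet(700.0, 500.0, in_ac_phase=True) == "ROUT"
allocated = allocate_throughput(500.0, measured=400.0)
assert allocated == 400.0
assert mark_packet(350.0, allocated, in_ac_phase=False) == "AIN"
assert mark_packet(450.0, allocated, in_ac_phase=False) == "AOUT"
```

Note how the same packet rate (450 kb/s) is in-profile during the AC phase but out-profile afterwards, because the allocation came out below the requested minimum.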
On the other hand, if after the AC phase the source does not use the allocated throughput, other flows in their AC phase could take it, and the throughput subsequently allocated to those flows could be wrong (which could result in future congestion). The throughput could be measured by the destination end point from the received data packets, or by the source end point from the received error control packets. In the first option, either special feedback packets should be defined to carry the measured throughput back to the source, or the error control packets could carry it. Both options need to deal with the problem of losses in the return path. Another issue is the choice of the measurement time, that is, the duration of the AC phase. If it is too short, the measurement could be incomplete. If it is too long, the AC decision could tend to be more "sharing" than "blocking": many flows in their AC phase may coincide on a link, and the measured throughput would then be close to an equal share of the available resources among those flows.
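The measurement itself is a simple average over the AC phase, whichever end point computes it. A minimal sketch (the function name and units are assumptions; the paper leaves the measurement mechanism open):

```python
def measured_throughput(received_bytes: int, interval_s: float) -> float:
    """Average receiving rate over the AC phase, in bits per second.
    At the destination, received_bytes counts data packets; at the
    source, it would be inferred from error control (ACK) packets."""
    return received_bytes * 8 / interval_s

# 250 kB received during a 2-second AC phase -> 1 Mb/s measured.
assert measured_throughput(250_000, 2.0) == 1_000_000.0
```

The choice of `interval_s` is exactly the AC-phase duration trade-off discussed above: a short interval risks an incomplete measurement, a long one drifts towards measuring a shared rate.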

8 Conclusions and future work

In this paper we have proposed an Admission Control method for the Assured Service in a core-stateless network like Differentiated Services. Elastic applications would benefit from the Assured Service, which guarantees a minimum throughput and allows the user to obtain extra throughput if network resources are available. Service guarantees can also be provided without Admission Control if the network is (over)provisioned for the worst-case scenario. However, our interest has been to study an Admission Control method that deals with overload situations when "normal" network provisioning is used. Our proposal belongs to the end-to-end measurement-based Admission Control schemes, in which the end points measure the QoS of a path during the AC phase and then a decision is made. In our proposal the AC phase lies within the data transfer phase, as the first data packets are used to probe the available network throughput. At the edge of the network, packets are classified and assigned to the classes AIN, AOUT, RIN or ROUT. During the AC phase, packets are assigned to classes


RIN and ROUT, depending on whether the sending rate is smaller or greater than the requested minimum throughput. The available throughput is measured, and a throughput between 0 and the requested minimum is assured to the flow. After the AC phase, packets are assigned to classes AIN and AOUT, depending on whether the sending rate is smaller or greater than the assured throughput. Core routers employ FIFO scheduling, and a high-to-low discarding priority is applied to the classes in the order ROUT, AOUT, RIN, AIN. We plan to study how the proposed AC method adapts to TCP dynamics.

9 References

[bian00a] Throughput Analysis of End-to-End Measurement-Based Admission Control in IP, G. Bianchi, A. Capone, C. Petrioli, IEEE INFOCOM 2000, 2000.
[clar92a] Supporting Real-Time Applications in an Integrated Services Packet Network: Architecture and Mechanism, D. D. Clark, S. Shenker, L. Zhang, Conference Proceedings on Communications Architectures and Protocols, 1992.
[clar98a] Explicit Allocation of Best-Effort Packet Delivery Service, D. D. Clark, W. Fang, IEEE/ACM Transactions on Networking, vol. 6, no. 4, 1998.
[feng99b] Understanding and Improving TCP Performance over Networks with Minimum Rate Guarantees, W. Feng, D. D. Kandlur, D. Saha, K. G. Shin, IEEE/ACM Transactions on Networking, vol. 7, no. 2, 1999.
[jaco88a] Congestion Avoidance and Control, V. Jacobson, Symposium Proceedings on Communications Architectures and Protocols, 1988.
[rfc1633] Integrated Services in the Internet Architecture: an Overview, R. Braden, D. Clark, S. Shenker, RFC 1633, 1994.
[rfc2474] Definition of the Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers, K. Nichols, S. Blake, F. Baker, D. Black, RFC 2474, 1998.
[rfc2475] An Architecture for Differentiated Services, S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, W. Weiss, RFC 2475, 1998.
[rfc2638] A Two-bit Differentiated Services Architecture for the Internet, K. Nichols, V. Jacobson, L. Zhang, RFC 2638, 1999.
[robe98a] Bandwidth Sharing and Admission Control for Elastic Traffic, J. W. Roberts, L. Massoulié, ITC Specialist Seminar, Yokohama, 1998.
[robe99c] Arguments in Favour of Admission Control for TCP Flows, J. W. Roberts, L. Massoulié, Proceedings of the 16th International Teletraffic Congress, 1999.
[shen95a] Fundamental Design Issues for the Future Internet, S. Shenker, IEEE Journal on Selected Areas in Communications, 1995.
[siva00c] Achieving Per-Flow Weighted Rate Fairness in a Core Stateless Network, R. Sivakumar, T. Kim, N. Venkitaraman, V. Bharghavan, IEEE Conference on Distributed Computing Systems, Taipei, Taiwan, 2000.
[stoi98b] Core-Stateless Fair Queueing: Achieving Approximately Fair Bandwidth Allocations in High Speed Networks, I. Stoica, S. Shenker, H. Zhang, Proceedings of SIGCOMM'98, Vancouver, Canada, pp. 118-130, 1998.
[stoi99a] Providing Guaranteed Services Without Per Flow Management, I. Stoica, H. Zhang, Proceedings of SIGCOMM'99, 1999.
[wroc98a] Evolution of End-to-End QoS: A Design Philosophy, J. Wroclawski, Proceedings of the First Internet2 Joint Applications/Engineering QoS Workshop, 1998.
