Multi-Class Measurement Based Admission Control for a QoS Framework with Dynamic Resource Management

Thomas Bohnert1, Edmundo Monteiro1

March 21, 2006

1 University of Coimbra, Laboratory of Communications and Telematics, CISUC/DEI, Polo II, Pinhal de Marrocos, 3030-290 Coimbra, Portugal
Phone: +351239790000, Fax: +351239701266
EMail: tbohnert,[email protected]
WWW: http://cisuc.dei.uc.pt

Abstract: In the Laboratory of Communications and Telematics (LCT) of the University of Coimbra, researchers have developed a novel, adaptive class-based QoS framework (LCT-QoS). In contrast to common, static resource allocation, the core concept behind this approach is dynamic resource distribution, based on delay and loss measurements, in response to varying demand. To further complete this framework, research is currently undertaken to develop an Admission Control (AC) algorithm for resource control at QoS domain edges, and the first results are presented in this article. We begin by discussing the general need for AC to manage resources and its inherent features. Upon this, we derive an AC algorithm, starting with an introduction of LCT-QoS, of what distinguishes it from common queueing modules, i.e. continuous loss and delay measurements as load indicators for adaptive resource distribution, and of the general definition of AC. By extensive simulation under various conditions, we scrutinise this algorithm and disclose general as well as specific shortcomings of the initial design, which are then tackled by gradual modifications. Finally, we present a Measurement Based Admission Control (MBAC) algorithm for LCT-QoS which fulfils user and operator demands to a high degree.

Keywords: Measurement Based Admission Control, Adaptive QoS, Delay and Loss Measurements.


1 Introduction

The decision about an appropriate dimensioning of a link between two nodes in a packet switched network can mainly be considered as a trade-off between initial and operational costs, link utilisation and, finally, the Quality of Service (QoS) a provider aims to offer to its customers. Regarding the first two arguments, Statistical Multiplexing (SMUX) has proven to guarantee high link utilisation while minimising the required costs and has therefore become the de facto standard in modern network technology. Although the deployment of SMUX has this advantage, it poses significant challenges regarding the last argument, the provisioning of QoS. While in times of average network load QoS parameters like delay and loss remain below requested thresholds, these parameters can rapidly exceed them when many flows become active simultaneously. Independent of the applied multiplexing strategy, every approach is doomed to fail if the load is simply overwhelming. In order to guarantee bounds on QoS parameters in a network with finite resources, an entity which exerts Admission Control (AC) on flows arriving at a common link is mandatory, and this has become the subject of extensive research in the past years. In the Laboratory of Communications and Telematics (LCT) at the University of Coimbra (UC), researchers have developed a novel, adaptive QoS framework (LCT-QoS) inspired by the IETF (Internet Engineering Task Force) Differentiated Services (DiffServ) model [1]. Since the conceptual need for AC is not limited to any particular QoS architecture, research is undertaken to develop an efficient Measurement Based Admission Control (MBAC) algorithm for LCT-QoS. As the core design of this architecture is adaptive resource management based on continuous delay and loss measurements at packet level, an MBAC approach which leverages these state indicators appears to be the natural choice, and in this article we present the results of our investigations. In the remaining sections we first present the evolution of MBAC and related work, Sec. 2. In Sec. 3 we introduce the LCT-QoS framework and the MBAC algorithm we deployed. The experimental setup and the evaluation results are presented in Sec. 4. Finally, in Sec. 5 we conclude and indicate possible future research.

2 Background and Related Work

Admission Control is applied in various areas of telecommunication, for example as Call Admission Control (CAC) in the POTS (Plain Old Telephone Service). The role of a particular CAC algorithm in this connection-oriented, circuit-switched network is to decide, in accordance with resource availability, whether to switch a new call or not. In this network, with its inherent, static and therefore highly predictable behaviour, AC algorithms can be derived from strong mathematical models with a high level of confidence, typically based on Poisson models and Erlang formulas; see for example [2].


CAC is classified as Parameter Based AC, since the decision is solely based on an a priori characterisation of traffic behaviour and, thus, of resource demands. In connectionless, packet-switched networks like the Internet, however, mathematical models proven valid in telecommunication for almost a century are no longer applicable. While the Poisson process has provided a sound analytical model for the POTS, the same model fails when applied to model Internet traffic behaviour [3] [4]. Various reasons contribute to the failure of Poisson modelling [5] and are theoretically abstracted by Long-Range-Dependence (LRD) and Self-Similarity [6] [7]. In short, the discrepancy lies in the burstiness of the arrival processes over different timescales. While traffic following a Poisson process "smoothes out" over large timescales, Internet traffic exhibits extreme variability [3] [6]. More specifically, LRD traffic is correlated to such an extent that its Autocorrelation Function (ACF) becomes non-summable. In contrast to Poisson traffic with its well-known exponentially decaying ACF, LRD traffic can parsimoniously be described using so-called heavy-tailed distributions, i.e. distributions following a power law [3] [8] [4]. It is evident that finding theoretical models for such a highly stochastic traffic nature is difficult and fraught with limited accuracy. Therefore, Parameter Based AC (PAC), with its obligatory demand for a priori knowledge, cannot be applied with a reasonable degree of efficiency. To overcome this issue, MBAC was introduced. Instead of using source and network models based on probability theory, this approach measures QoS parameters in real time as a substitute for a priori source characterisation. Consequently, MBAC became an integral part of the first standardised Internet QoS architecture, Integrated Services [9]. Naturally, sampling a certain QoS parameter at a single instant in time has little meaning due to the extreme variability of Internet traffic. Thus, the challenge for MBAC is to find techniques for accurate and robust QoS parameter estimation from measured samples. A popular approach based on this concept is known as Equivalent or Effective Bandwidth (EB); it has its roots in CAC but has frequently been adapted to MBAC. The EB refers to the bandwidth a queue has to provide to keep QoS parameters statistically below requested levels. Hence, the EB is a function of the requested QoS and of the sampled, statistical characteristics of the aggregate. Based on this concept, the authors of [10] and [11] developed an MBAC algorithm which samples the packet arrival process at different time scales. The EB is computed from the empirical mean and standard deviation of the maximal arrival rate captured at the Dominant Time Scale (DTS), i.e. the time scale with the highest loss or delay probability [12], and from a demanded overflow probability. If the EB plus the declared peak rate of the new flow is below the available capacity, the flow is admitted; otherwise it is rejected.
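To make an admission test of this kind concrete, the following sketch shows one common way an EB can be formed from rate measurements at the dominant time scale; the Gaussian-quantile weighting of the standard deviation and all parameter names and values are our own illustrative assumptions, not the exact estimator of [10] [11].

```python
import math
import statistics

def effective_bandwidth(rate_samples, overflow_prob):
    """Illustrative EB estimate: empirical mean of the sampled (maximal)
    arrival rates at the dominant time scale plus a safety margin that
    grows as the demanded overflow probability shrinks."""
    mean = statistics.mean(rate_samples)
    std = statistics.stdev(rate_samples)
    # Gaussian-style quantile: larger for smaller overflow probabilities.
    alpha = math.sqrt(-2.0 * math.log(overflow_prob))
    return mean + alpha * std

def admit(rate_samples, peak_rate_new_flow, capacity, overflow_prob=1e-3):
    """Admission test as described in the text: EB of the running aggregate
    plus the declared peak rate of the new flow must fit into the capacity."""
    return effective_bandwidth(rate_samples, overflow_prob) + peak_rate_new_flow <= capacity

# Example: measured aggregate rates (Mbit/s) at the dominant time scale.
samples = [42.0, 47.5, 44.1, 51.3, 45.8, 49.2]
print(admit(samples, peak_rate_new_flow=2.0, capacity=60.0))
```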


A slight improvement of the latter model was presented in [13]; it increases the accuracy, and thus the performance, of the model on small time scales. In [14], Sally Floyd presents an algorithm which is also based on the EB concept but has different mathematical roots. The EB is computed by applying a Chernoff-style bound on the tail of the distribution of the sum of a set of random variables relative to the mean of the sum. Thus, it quantifies the probability that the sum will exceed the mean by a certain amount. With the measured mean of the arrival process, estimated using an exponentially-weighted moving average filter, and a given capacity transgression probability, the EB can be calculated. The admission criterion is the same as in [10] and [11]. The decisive characteristics of traffic aggregates in terms of queueing performance are the distribution of the sampled packet arrival process and its ACF [15] [16]. Regarding the distribution, parametric MBACs can coarsely be classified into models based on Gaussian assumptions and the rest. Gaussian models are motivated by the Central Limit Theorem and receive more and more attention with the availability of broadband access and thus high levels of aggregation in access networks [17] [18]. Due to their well-known foundations, Gaussian models are attractive for MBAC, and several algorithms have been presented recently, e.g. [14] [19] [10] [20] [21]. In brief, if applied to buffer-less models [22] [20] [21], the problem is reduced to estimating the probability that the arrival process exceeds the link capacity. For a rigorous analytical investigation of this model in the MBAC context we point the reader to [22]. Naturally, the issue of parametric probability modelling is its inherent dependence on matched assumptions. For example, the Gaussian assumption does not hold in general, as discussed in [17] [18] [23] [24]. This issue is even more crucial due to the ever-changing heterogeneity of the Internet on every abstraction layer, i.e. technologies, protocols, applications, human behaviour etc. All these aspects influence the characteristics of the resulting network traffic. To cover this uncertainty, a set of non-parametric models for MBAC has been devised in the past. In [25] and [26], the EB is calculated from histograms computed by recording the packet arrival process at different time scales. Similar to [10] [11], the EB is the maximal arrival rate at the DTS. Another histogram-based algorithm is presented in [27]. Herein, the histogram reflects the ratio of experienced to worst-case packet delay under a WFQ [28] discipline. This ratio, together with the theoretical WFQ delay formulae, enables the authors to compute the minimum bandwidth necessary to serve a traffic aggregate under a given, probabilistic delay bound. The authors claim that this is the first MBAC algorithm that exploits the statistical multiplexing (SMUX) gain along the delay dimension. The principle of the last approach is to measure the difference between an analytical result and measurements taken in real time.
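As an illustration of the Chernoff-style bound used in [14] and discussed above, the following sketch shows one common instantiation, a Hoeffding-type bound, in which the measured (EWMA-smoothed) aggregate rate is inflated by a term derived from the admitted flows' peak rates and the tolerated capacity transgression probability; the parameter names and numeric values are our own assumptions, not taken from [14].

```python
import math

def ewma(samples, weight=0.1):
    """Exponentially-weighted moving average of measured arrival rates."""
    estimate = samples[0]
    for s in samples[1:]:
        estimate = (1.0 - weight) * estimate + weight * s
    return estimate

def hoeffding_bandwidth(measured_rate, peak_rates, epsilon):
    """Chernoff/Hoeffding-style effective bandwidth: the measured mean rate
    plus a bound on how far the aggregate may exceed it, given the admitted
    flows' peak rates and a tolerated transgression probability epsilon."""
    margin = math.sqrt(math.log(1.0 / epsilon) * sum(p * p for p in peak_rates) / 2.0)
    return measured_rate + margin

# Example: admit a new flow if the bound plus its peak rate fits the capacity.
rate = ewma([38.2, 41.0, 39.5, 44.3, 40.1])   # measured aggregate rate, Mbit/s
peaks = [2.0] * 25                             # peak rates of admitted flows
new_peak, capacity, epsilon = 2.0, 60.0, 1e-3
print(hoeffding_bandwidth(rate, peaks, epsilon) + new_peak <= capacity)
```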


A somewhat similar approach has recently been published in [29] and is called Experienced Based Admission Control (EBAC). This algorithm uses histograms computed from the ratio between the utilised bandwidth and the registered bandwidth, i.e. the sum of the peak rates, which is simply an expression of the SMUX gain. This metric, in turn, is used to calculate the so-called Over-Booking Factor, a weight factor which determines to what extent the sum of the flows' peak rates may exceed the link capacity and, hence, the number of admissible flows. Finally, we point the interested reader to several simulation based comparisons [30] [31] [32] [16] and to [33] for an implementation based performance comparison of various AC algorithms.
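The over-booking idea behind EBAC can be sketched as follows; the percentile used to turn the observed utilisation ratios into a single factor, as well as all names and numbers, are illustrative assumptions rather than the calibration prescribed in [29].

```python
def overbooking_factor(utilisation_ratios, percentile=0.99):
    """Derive an over-booking factor from observed ratios of utilised to
    registered bandwidth (sum of peak rates): the smaller the utilisation
    relative to the registration, the more over-booking is considered safe."""
    ordered = sorted(utilisation_ratios)
    idx = min(int(percentile * len(ordered)), len(ordered) - 1)
    conservative_utilisation = ordered[idx]   # near-worst observed ratio
    return 1.0 / conservative_utilisation

def admit(sum_peak_rates, new_peak_rate, capacity, obf):
    """Peak-rate admission test relaxed by the over-booking factor."""
    return sum_peak_rates + new_peak_rate <= obf * capacity

# Example: utilised/registered bandwidth ratios observed over time.
ratios = [0.35, 0.42, 0.38, 0.47, 0.40, 0.44]
obf = overbooking_factor(ratios)
print(round(obf, 2), admit(sum_peak_rates=140.0, new_peak_rate=2.0, capacity=100.0, obf=obf))
```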

3 MBAC for the LCT-QoS Framework

3.1 The LCT-QoS Framework

The Laboratory of Communications and Telematics of UC (LCT-UC) currently pursues the investigation of an alternative IP service model whose objective is to bridge the gap between Intrinsic QoS (IQ) and Perceived QoS (PQ). Briefly, IQ means evaluating the provided QoS solely by monitoring associated physical parameters like packet delay or loss values. In contrast, PQ means evaluating QoS parameter values with regard to the way they are perceived by humans [34]. For instance, humans perceive equal packet delay differently when using an e-mail or an Internet telephony application. To implement this idea, the framework is a layered compound of different modules. The lowest layer is an adaptive queueing module implementing the service model referred to before and introduced in [35]. Traffic classification and packet forwarding according to QoS demands is based on the DiffServ notion [1] and is therefore called Per-Hop-Behaviour D3 (PHB-D3), where D3 stands for Dynamic Distribution of Degradation. An intermediate module, LCT-QoSR, is placed on top of the latter and dynamically routes traffic classes within a QoS domain; see [36] for details. Finally, there is an admission control entity (LCT-QoS-AC) as a means for preventive congestion control and resource management. This entity and the queueing module are addressed by this investigation. Fundamental to all layers is a novel QoS metric which maps IQ to PQ by a functional relation. Basically, it defines degradation and superfluity zones which, in turn, define how the impact varies with QoS parameter variations [37]. According to this metric, the PQ level is quantified through a variable named congestion index (CI), where a high CI value means low quality and vice versa. For each class, there is a CI related to each of the physical QoS parameters packet delay and loss. The concept of degradation slope (DSlope) is used by the metric for the definition of the classes' sensitivity to delay and loss degradation.


Figure 1: Congestion Index Calculation for Delay and Three Traffic Classes. With an increasing absolute value of measured delay, the associated CI also increases, at a rate depending on the degradation slope (DSlope). Thus, a class's sensitivity to delay can be expressed through the DSlope concept and, further, the perceived QoS level of a traffic class is expressed through the corresponding CI.

A traffic class's DSlope reflects the sensitivity of the associated applications to QoS degradation, in the sense that more sensitive classes have greater DSlopes and less sensitive classes have smaller DSlopes. For an example see Fig. 1, which illustrates the concept for three classes with different sensitivities to delay degradation (the picture would be the same if we were talking of loss). Classes with lower DSlopes (measured in degrees) are less sensitive to degradation, so their CI, and thus their PQ degradation, grows slowly. Using this metric, one can say that resources are being shared in a fair way as long as the different classes have the same CI values for delay and loss. In fact, in this case, the impact of the degradation on the applications, the PQ, is practically the same for all classes, despite different values of the physical parameters delay and loss, i.e. the network-provided IQ. To integrate this concept for differentiated service provisioning, a monitor, which is part of the queueing module, continuously measures the average packet transit delay and loss for each class and calculates the corresponding congestion indexes (one CI for delay and one for loss). These samples are then reported to the involved modules. Contrary to a static allocation of resources, the scheduler of the queueing module then throttles the forwarding of packets of some classes and speeds up others; analogously, the dropper provides more memory to some queues by removing it from others, both in an independent way. The criterion that rules the dynamic distribution of resources is, as introduced before, the equalisation of CIs, and it implies a fair share of resources evaluated on PQ.
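As a minimal sketch of how a delay measurement might be mapped to a CI under such a metric, the following code scales the measured delay by the tangent of the class's DSlope, so that steeper slopes produce a faster-growing CI; the tangent mapping, the class names and the numeric values are our own illustrative assumptions, not the exact formula of [37].

```python
import math

def congestion_index(measured_delay_ms, dslope_deg):
    """Illustrative delay-to-CI mapping: the measured delay grows into the CI
    at a rate given by the class's degradation slope (in degrees), so more
    sensitive classes (steeper slopes) reach a high CI for smaller delays."""
    return measured_delay_ms * math.tan(math.radians(dslope_deg))

# Three classes with different sensitivities to delay degradation (cf. Fig. 1).
for name, dslope in (("sensitive", 75.0), ("intermediate", 60.0), ("tolerant", 30.0)):
    print(name, round(congestion_index(20.0, dslope), 1))
```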

3.2 Delay and Loss Aware MBAC Algorithm for LCT-QoS

The role of Admission Control in a service-constrained environment can generally be defined as follows: An Admission Control algorithm incorporates a decision logic to determine whether to reject or admit a new resource consumer on a link on which consumers compete for finite, shared resources. The admission is positive if, and only if, the traffic characteristics, respectively the resource demands, of the applying consumer can be accommodated without violating the QoS commitments made to already admitted consumers as well as its own QoS demands, in both cases for their complete time of presence. Based on this general definition we derived a tailored algorithm for LCT-QoS-AC. The motivation of our approach is the existence of delay and loss measurements performed by the monitor module as part of the PHB-D3 implementation, recall Sec. 3.1. Thus, the goal of our investigation is to scrutinise the practicability of these measurements as state indicators of the queueing module and, therefore, of resource availability. Beforehand, certain peculiarities call for discussion.

i.) Intuitively, a first idea is to use the same metric, i.e. the CIs, as performance parameters. Since CIs are abstractions of delay and loss measurements, however, it turned out to be more convenient to use the latter directly, as they are "natural" QoS parameters with a direct physical interpretation.

ii.) LCT-QoS is class-based, i.e. measurements stem from a traffic aggregate, and thus the decision arguments cannot lead to strict per-flow but only to statistical guarantees. As a result, a single flow could experience temporary QoS degradation. This is, however, fully compliant with the DiffServ paradigm and an inherent, general feature of class-based QoS frameworks.

iii.) Another peculiarity is the focus on delay and loss as QoS parameters, motivated by human perception and the growing amount of streaming content in today's Internet. However, delay and loss are only of minor value for the evaluation of TCP traffic because of its adaptive congestion control. A more detailed investigation of this issue is presented in Sec. 2.

Considering the above mentioned points, we present a two-stage algorithm which combines the advantages of both PAC and MBAC. The following paragraphs present the algorithm first informally and thereafter, for the sake of completeness, also formally. In the first stage, called the charging stage, the decision is based solely on bandwidth and a priori knowledge, i.e. the declared peak rates of the traffic sources, since this is for the time being the only parameter standardised by the International Telecommunication Union (ITU) [26].


Moreover, only peak rates can be specified in advance with acceptable accuracy. In summary, in the charging stage the algorithm is a pure PAC algorithm. Assuming a priori known peak rates, it simply calculates the number of currently active flows, plus one for the requesting source, times the peak rate of the associated traffic class. The result is taken as the required total bandwidth. If this value is below or equal to the link capacity assigned to this class, the applying flow is admitted; otherwise it is rejected. Using this simple algorithm allows the network to be loaded (charged) independently of the actual traffic nature and network conditions, and with negligible computational overhead. This algorithm is well known as RateSum or SimpleSum and is a parameter based AC algorithm. The limitations of the SimpleSum algorithm are rooted in the fact that the peak rates of traffic sources are typically higher than their sustained rates (e.g. the transmission rate of a Voice over IP (VoIP) application is capped by the codec but also depends on user interaction). Thus, one can expect that the actual physical load will be lower than the registered rate, i.e. the sum of the peak rates, leaving valuable resources unused. The impact of this difference between assumed and intrinsic resource demand is further compounded by the deployment of SMUX. If the sources' activity patterns interleave favourably, the bandwidth required to serve a traffic aggregate can be significantly lower than the corresponding sum of peak rates, due to a high SMUX gain. As an adaptive response to these random influences, the admission controller shifts into a second stage, the so-called saturation stage, whenever a flow would be rejected in the charging stage. In this stage, it decides about admission based on delay and loss measurements. If the measured delay and loss values of all classes are within predefined boundaries, indicating a high SMUX gain, the admission decision is positive, irrespective of the negative decision in the charging stage, and an additional flow can be admitted. Thus, the error due to mis-specified source descriptors is adaptively corrected. In contrast to the charging stage, the saturation stage is a purely feedback-based decision. Recall the dynamic distribution of resources by the queueing module. The conceptual rationale of our approach is an algorithm which includes state information requested from our alternative queueing module, PHB-D3. This clearly distinguishes our problem from common approaches, where congestion control at the AC level is completely separated from that at the queueing level. In general queueing systems, capacity assignments are static and an MBAC does not have to account for the QoS provided to other classes. Thus, our approach corresponds to a shift from a single-class to a multi-class MBAC algorithm.
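A minimal sketch of the two-stage decision described above follows; the class/flow data structures, threshold names and numeric values are our own illustrative assumptions, not the LCT-QoS-AC implementation.

```python
from dataclasses import dataclass

@dataclass
class ClassState:
    active_flows: int       # flows currently admitted to this class
    peak_rate: float        # declared per-flow peak rate (Mbit/s)
    capacity: float         # link capacity currently assigned to the class
    measured_delay: float   # monitor-reported average transit delay (ms)
    measured_loss: float    # monitor-reported loss ratio
    delay_bound: float      # predefined delay boundary (ms)
    loss_bound: float       # predefined loss boundary

def admit(request_class, classes):
    """Two-stage decision: a SimpleSum peak-rate check (charging stage) and,
    if that fails, a multi-class measurement check (saturation stage)."""
    c = classes[request_class]
    # Charging stage: pure PAC, sum of peak rates including the new flow.
    if (c.active_flows + 1) * c.peak_rate <= c.capacity:
        return True
    # Saturation stage: admit anyway if *all* classes stay within their
    # delay and loss boundaries, indicating a high SMUX gain.
    return all(s.measured_delay <= s.delay_bound and s.measured_loss <= s.loss_bound
               for s in classes.values())

classes = {
    "voice": ClassState(40, 0.1, 3.0, 12.0, 0.001, 20.0, 0.01),
    "video": ClassState(10, 2.0, 15.0, 35.0, 0.004, 50.0, 0.02),
}
print(admit("voice", classes))
```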


Finally, observant readers may have noticed that we accept the loss of packets, albeit in controlled quantities. This is justifiable by the fact that many streaming sources are rather tolerant of occasional packet loss. Eventually, with $N_k$ the number of flows currently admitted to class $k$, $p_k$ the declared peak rate of that class, $C_k$ the capacity assigned to class $k$, and $\hat{d}_j$, $\hat{l}_j$ ($D_j$, $L_j$) the measured delay and loss (and their boundaries) of class $j$, the decision about a flow $m$ applying to class $k$ can formally be expressed as:

$$A_{k,m} = \begin{cases} \text{admit} & \text{if } C_k - (N_k + 1)\,p_k \ge 0,\\ \text{admit} & \text{if } \hat{d}_j \le D_j \,\wedge\, \hat{l}_j \le L_j \quad \forall j,\\ \text{reject} & \text{otherwise.} \end{cases}$$
