Design and Configuration of PCN Based Admission Control in Multimedia Aggregation Networks

Steven Latré, Bart De Vleeschauwer, Wim Van de Meerssche, Filip De Turck and Piet Demeester
Ghent University - IBBT - IBCN - Department of Information Technology
Gaston Crommenlaan 8/201, B-9050 Gent, Belgium
Tel: +3293314981, Fax: +3293314899, e-mail: [email protected]

Koen De Schepper, Christian Hublet, Wouter Rogiest, Stefan Custers and Werner Van Leekwijck
Alcatel-Lucent Bell Labs
Copernicuslaan 50, B-2018 Antwerpen, Belgium

Abstract—DSL aggregation networks are evolving into the standard platform for the delivery of multimedia services such as television and network based personal video recording. These multimedia services introduce large challenges for network operators as they are sensitive to packet loss. Therefore, admission control mechanisms are required to avoid congestion caused by allowing too many sessions. However, as multimedia services are often bursty, it is not possible to reserve a fixed amount of bandwidth in the network, since this policy leads to either over-admittance or under-admittance. Recently, the IETF Pre-Congestion Notification (PCN) Working Group proposed a measurement based admission control mechanism, where the network load is measured at each node and sessions are allowed or blocked at the edge of the network. In this paper, we extend and evaluate the PCN mechanism: we propose a new measurement algorithm for PCN, based on bandwidth metering, and determine configuration guidelines for the parameters of both the original token bucket based approach and the novel algorithm for different network conditions and traffic types. More specifically, we study PCN's applicability to protecting VBR video services, which is currently not studied in the PCN Working Group. Furthermore, we characterise the gain of PCN in comparison to a centralised admission control mechanism.

I. INTRODUCTION

In recent years, multimedia services such as Broadcast TV, Network Based Personal Video Recording and Voice over IP have been introduced over broadband DSL aggregation networks. In terms of bandwidth, these services now have the largest share in the network. However, multimedia services have stringent Quality of Service (QoS) demands: to maximise the quality of these services, no packet loss and only small amounts of jitter and delay can be tolerated. Otherwise, visual or audible artefacts occur, such as blockiness for video or clipping for audio. To characterise the delivery quality of multimedia services, operators are focusing on the Quality of Experience (QoE). For video services, the QoE describes how the end user perceives the video and is commonly characterised through objective video quality metrics, such as the Structural Similarity score [1], or through subjective testing. As broadband DSL access and aggregation networks are able to provide reliable transport, mainly packet loss due to congestion is a problem and not packet loss due to errors

in the underlying channel. To avoid congestion and protect existing sessions, resource admission control mechanisms are employed that limit the number of sessions allowed into the network. Today, mostly centralised approaches are deployed. These are generally employed for protecting narrowband constant bit rate services with a traffic specification known a priori, such as telephony [2], but are sub-optimal for video services, which exhibit bursty behaviour and have a variable bit rate by nature. Traditional admission control mechanisms dimension the amount of resources to reserve based on the peak rate of the video. By reserving that many resources, QoS can be guaranteed. However, in practice, the video's average bit rate is significantly lower than its peak rate, so this over-dimensioning leads to a waste of resources. A more efficient way to utilise resources is to protect only the aggregate of sessions as opposed to protecting every session individually. The main advantage is that the variability of the aggregate is generally lower than that of individual sessions, which makes it possible to carry more video sessions over the same link and enables better resource usage than protection on a per-session basis. If the amount of resources to reserve is based on the aggregate of sessions instead of on every session independently, an admission control mechanism needs to know how many resources the current aggregate consumes in order to determine whether new sessions can be allowed. The IETF Pre-Congestion Notification (PCN) Working Group [3] proposed the PCN architecture [4], a distributed measurement based admission control mechanism. In its initial work, the PCN Working Group made some assumptions on the considered traffic: PCN assumes that the traffic is inelastic and constrained to a known maximum rate. Based on this assumption, it can be applied to protect narrowband inelastic sessions (e.g. telephony in a core network). However, the assumption can be violated if PCN is applied to protect video sessions encoded at a constant quality, which have a variable bit rate. In this paper, we investigate the performance of PCN based admission control mechanisms in protecting both data and video sessions. Through a dedicated NS-2 [5], [6] based


simulator, we emulate the transmission of real video sequences in a PCN based aggregation network. We used a wide range of videos (e.g. news sequences, sports) to introduce a realistic set of content types that can be requested. The major contributions of this paper are as follows: firstly, we propose a new measurement algorithm based on a sliding window bandwidth metering approach, which is easier to configure for bursty traffic. Secondly, we compare the PCN architecture with a centralised admission control mechanism and illustrate the added benefit of applying PCN for typical usage of video services in broadband aggregation networks. Thirdly, we determine the optimal configuration of two PCN measurement algorithms, based on bandwidth metering and on a token bucket, and evaluate their performance through both network and video quality metrics for varying request rates. The remainder of this paper is structured as follows. The next section discusses relevant work concerning admission control and QoS guarantees. In Section III, the PCN architecture is detailed, while an overview of different options to implement the PCN mechanism is presented in Section IV. The evaluation setup is described in Section V and the options are extensively evaluated in Section VI. Finally, the results are discussed in Section VII.

II. RELATED WORK

In today's broadband DSL aggregation networks, mostly centralised admission control mechanisms are used, such as the Resource Admission Control Subsystem in TISPAN [2], the ITU-T Resource Admission Control Function (RACF) [7] or the Bandwidth Broker [8] in Diffserv. These admission architectures are often policy based [9] and use a combination of policies that include service, user and operator requirements, combined with a view on the network resource availability, to decide on incoming requests. To make their decision, these centralised approaches require knowledge of the network topology, the dimensioned resources and the route followed by any flow consuming controlled resources. A first limitation of these approaches is that the knowledge needed by the system can be large and difficult to keep up to date, especially when the network is reconfiguring itself as a consequence of a link or node failure. The alternative of using more distributed standardised admission control systems such as IntServ [10] has the problem of per-flow state maintenance in the routers. In addition to this scalability problem, it also shares a number of problems with the more centralised approaches: it is difficult to have a correct view on the resource usage in the underlying network for variable bit rate services such as video streaming. More specifically, the resource usage of these services cannot be captured in static traffic specifications, such as the RSVP TSPEC [11], but is dynamic on both the level of an individual session and the level of a session aggregate. Furthermore, the actual bandwidth that is used may not be known beforehand. Several alternatives for decentralised admission control have also been proposed that allow or block sessions based on local knowledge. The authors of [12] propose to perform edge-based admission control by no longer using the IntServ model with per-flow state in each router but by delegating

Fig. 1. Overview of the PCN architecture. Streaming servers are transmitting video to various home networks. PCN interior nodes measure the network load and start marking packets if the load becomes too high.

the decision to the edge routers, where the load on a path is estimated using passive measurements. In [12], the abstraction of a statistical envelope is used as a general framework for characterizing a service [13]. In [14] the focus is on a wireless setting, and in [15] the authors discuss an admission control system for large-scale media delivery systems. Recently, the IETF community has been researching a measurement based admission control mechanism through the PCN Working Group [3]. PCN [4] is a measurement based admission control mechanism, where the network load is measured and signalled through the marking of packets. The inner workings of PCN are still being defined, but the PCN architecture document [4] has been published as an RFC. In recent work, different PCN based algorithms have been evaluated through simulation [16], [17]. Our work differs from these studies in several ways. Firstly, we focus on the transmission of video services, while [16], [17] focus on the transmission of narrowband services (e.g. VoIP). Currently, traffic which does not have a known maximum rate (e.g. VBR video services) is not studied in the PCN Working Group. Secondly, we evaluate the performance of PCN based admission control techniques through network and video quality metrics such as bandwidth, packet loss and Structural Similarity, while [16], [17] focus more on the accuracy of the individual algorithms, by evaluating the correctness and behaviour of the output values for different configurations.

III. PRE-CONGESTION NOTIFICATION

The PCN architecture applied to DSL aggregation networks is illustrated in Figure 1. The PCN architecture distinguishes three node types: interior, ingress and egress nodes. Each interior node measures the network load on its links and marks packets once the load exceeds a certain threshold. PCN allows different marking algorithms; in this paper we focus on threshold marking, as this results in a clear transition between no congestion and pre-congestion [16]. The egress and ingress nodes are located at the edges of the PCN controlled network: the egress node collects the information from the interior nodes by calculating a Congestion Level Estimation (CLE) and signals the pre-congestion state to the ingress node, which is responsible for making the admittance decision.
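To make this division of roles concrete, the sketch below models the admittance decision at the ingress node: the egress periodically reports its CLE for the ingress-egress aggregate, and the ingress blocks new sessions while the reported CLE signals pre-congestion. The class name and the admission threshold of 0.5 are our own illustrative choices and are not prescribed by the PCN drafts.

```python
# Illustrative sketch (naming and threshold are ours, not from the PCN drafts)
# of the ingress-node admission decision based on the CLE reported by the egress.

class PcnIngress:
    def __init__(self, cle_limit=0.5):
        # Assumed admission threshold on the CLE, which lies in [0, 1].
        self.cle_limit = cle_limit
        self.last_reported_cle = 0.0

    def on_cle_report(self, cle):
        # Called whenever the egress node signals back a new CLE value.
        self.last_reported_cle = cle

    def admit_new_session(self):
        # Admit a new session only while the aggregate is not in pre-congestion.
        return self.last_reported_cle < self.cle_limit

ingress = PcnIngress()
ingress.on_cle_report(0.1)
print(ingress.admit_new_session())  # True: no pre-congestion reported
ingress.on_cle_report(0.8)
print(ingress.admit_new_session())  # False: block the request
```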


The PCN mechanism tries to protect the Quality of Service (QoS) of established inelastic sessions within a DiffServ domain when congestion is imminent or already present. This protection is achieved by avoiding over-admission. Hence, the goal of a PCN mechanism is twofold: it should avoid packet loss in the network but also maximise the link utilisation, so as to serve as many clients as possible when the network load is high. Packet loss occurs when the bandwidth of the admitted traffic exceeds the link capacity. Queues in the network can cope with bursts for a limited timeframe, but on a larger timeframe the bandwidth of the admitted traffic should always be lower than the link capacity. Once the bandwidth aggregate exceeds the link capacity, the QoS of existing sessions will be affected. At the same time, the link capacity also specifies the utilisation to strive for. If sessions are blocked while the bandwidth aggregate is lower than the link capacity, the network is under-utilised and bandwidth is lost. This under-utilisation can occur because too few sessions are allowed or because the variability of the traffic hinders allowing more sessions.

IV. ALGORITHMIC DETAILS

In this section we explain how the detection of a pre-congestion state can be implemented in the PCN architecture. This consists of the CLE calculation at the egress node and the network load measurement at the interior nodes.

A. CLE Calculation

In the PCN architecture, the CLE value is measured through an Exponential Weighted Moving Average. A CLE value lies between 0 and 1, where 1 denotes pre-congestion and 0 corresponds to no congestion. The CLE value at time n is calculated as:

CLE_n = X * (1 - w) + w * CLE_{n-1}, with w in [0, 1]

where w is the CLE weight and X is 1 if the packet is marked and 0 if it is not. The higher the CLE weight, the more previous measurements contribute to the overall CLE value. Once calculated, the CLE information is signalled back to the ingress node, which is responsible for allowing or blocking sessions.
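As a minimal sketch, the per-packet CLE update can be written directly from the formula above; the weight of 0.9 matches the default used later in the evaluation, and the function and variable names are ours.

```python
# Per-packet CLE update at the egress node: CLE_n = X*(1-w) + w*CLE_{n-1},
# with X = 1 for a PCN-marked packet and X = 0 otherwise.

def update_cle(previous_cle, packet_marked, w=0.9):
    x = 1.0 if packet_marked else 0.0
    return x * (1.0 - w) + w * previous_cle

# Example: a run of marked packets pushes the CLE towards 1.
cle = 0.0
for marked in [False, False, True, True, True, True]:
    cle = update_cle(cle, marked)
print(round(cle, 3))  # ~0.344 after four consecutive marked packets
```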

B. Measurement algorithms

The key factor in the PCN architecture is the measurement algorithm at the interior nodes. This algorithm must provide a timely and accurate detection of a pre-congestion state, on which the admittance decision is based. These measurements are performed by monitoring the traffic running through every interior node. In [4], a token bucket approach is suggested as measurement algorithm. We introduce an alternative approach based on bandwidth metering and discuss its differences with the token bucket approach.

1) Token bucket: The token bucket algorithm is typically used as a policer in Diffserv domains to limit network transmission speed, but it can also be used to monitor bandwidth usage. In essence, a token bucket is a simple bit counter with lower and upper boundaries, where the counter represents the load in the network. At a constant token rate R, tokens are added to the bucket; tokens are removed again when a packet arrives at the interior node. As a result, the number of tokens in the bucket increases when the bandwidth of the aggregate is lower than the token rate R and decreases when the bandwidth of the aggregate is higher than the token rate R. As such, a token bucket provides information on the network load. When using this mechanism in a PCN architecture, packets are marked when the number of tokens in the bucket is lower than a predefined token bucket threshold.

2) Bandwidth metering: An alternative to using a token bucket is performing a bandwidth measurement using a time window. Different options exist to perform such bandwidth measurements; in our approach, we focus on a sliding window algorithm as this provides the most accurate calculation of the bandwidth. In this algorithm, the measurement window (denoted by mw) slides over the arrived packets, and each time a packet arrives the bandwidth is measured based on the packets received during the last mw seconds. Packets sent during bandwidth peaks therefore trigger more measurements than packets sent during periods of lower bandwidth. When using this approach, an arriving packet is marked when the bandwidth measurement at that time is higher than a predefined threshold, the configured rate.

C. Comparison between measurement algorithms

Both the token bucket and the bandwidth metering approach provide a way to measure the network load in order to signal the PCN egress node. However, they are two different mechanisms with different algorithmic complexities and configurations. As the token bucket algorithm is a bit counter that only needs to be updated when a packet arrives, it has a very limited complexity. The bandwidth metering algorithm has higher memory requirements, as it needs to store the packets received during the last measurement interval. However, this added complexity also has its benefits. The bandwidth metering algorithm works on a time basis, as it aggregates over the packets received during the last measurement interval. This is especially beneficial for bursty traffic, where the aggregation interval needs to be large enough to cope with fluctuations. The token bucket approach is a packet based algorithm, where a burst of packets will cause the token bucket to drain sooner than when packets arrive at a constant bit rate. Furthermore, as a token bucket has boundaries, information can get lost when the bucket is almost completely full or empty. In situations where bursts of data are likely to occur, the configuration of a bandwidth metering algorithm will be much easier than that of the token bucket algorithm: for the latter, information about the size of bursts is needed to find a suitable value for the aggregation interval.
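The sketch below contrasts the two meters in code form; the 50% marking threshold for the token bucket follows the configuration used in Section VI, while the class names and bookkeeping details are our own simplifications and make no claim of matching an actual PCN implementation.

```python
# Sketch of the two interior-node measurement algorithms of Section IV-B.
# Parameter values and class names are illustrative, not prescribed by PCN.
from collections import deque

class TokenBucketMeter:
    """Token bucket: tokens accumulate at rate R (bit/s) and are drained by
    arriving traffic; a packet is marked when the fill level drops below a
    threshold (here 50% of the depth, as in Section VI)."""
    def __init__(self, rate_bps, depth_bits):
        self.rate = rate_bps
        self.depth = depth_bits
        self.threshold = 0.5 * depth_bits
        self.tokens = depth_bits
        self.last_time = 0.0

    def on_packet(self, now, size_bits):
        # Refill according to the elapsed time, capped at the bucket depth.
        self.tokens = min(self.depth, self.tokens + self.rate * (now - self.last_time))
        self.last_time = now
        # Drain tokens for the arriving packet (not below zero).
        self.tokens = max(0.0, self.tokens - size_bits)
        # Mark the packet when the bucket runs low.
        return self.tokens < self.threshold

class SlidingWindowMeter:
    """Bandwidth metering: on every arrival, measure the bandwidth over the
    last mw seconds and mark the packet when it exceeds the configured rate."""
    def __init__(self, rate_bps, window_s):
        self.rate = rate_bps
        self.window = window_s
        self.packets = deque()      # (arrival time, size in bits)
        self.bits_in_window = 0

    def on_packet(self, now, size_bits):
        self.packets.append((now, size_bits))
        self.bits_in_window += size_bits
        # Drop packets that slid out of the measurement window.
        while self.packets and self.packets[0][0] < now - self.window:
            _, old_size = self.packets.popleft()
            self.bits_in_window -= old_size
        bandwidth = self.bits_in_window / self.window
        return bandwidth > self.rate
```

In both cases, the boolean returned by on_packet is the per-packet marking decision that feeds the CLE calculation at the egress node.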


Fig. 2. Network topology modelling a multimedia access and aggregation network. The investigated topology is a tree based network where multimedia services are streamed to home networks.

V. SETUP DESCRIPTION

We implemented the token bucket and bandwidth metering algorithms as described in Section IV and evaluated their performance for different network conditions and algorithm configurations. We also compared both PCN implementations with a traditional centralised admission control mechanism, as defined in the TISPAN RACS [2], where the same amount of resources is reserved for every session. All results were obtained through simulation. We designed a modified NS-2 simulator [6], capable of emulating the transmission of real video sequences in a simulated environment and calculating the video quality.

A. Investigated network topology

We investigated the performance of PCN in broadband aggregation networks. These networks typically form a logical tree, where the root of the tree, denoting one or more servers, streams multimedia services to its leaves, consisting of the home networks of users. The investigated network topology is illustrated in Figure 2. A video server is connected to 400 home networks, which request videos from the server. We modelled the request process through a uniform random distribution with a fixed request rate. We assume that the aggregation network should support flash crowds up to a request rate of 1000 requests per second. Throughout the evaluation, the performance of PCN is investigated up to this request rate; any higher request rate is assumed to be handled by an external request shaper and is not considered. The investigated network topology contains one congestion point at the service router, where a 2 Gbps link is connected to a 1 Gbps link. The buffer at this congestion point supports bursts of up to 120 ms. In our evaluation, we present bandwidth measurements aggregated over a timeframe of 240 ms. Using this timeframe, no buffer overflow will occur as long as the bandwidth measurements remain below the link capacity, because any system with a buffer capacity larger than or equal to TimeFrame × (LinkIn − LinkOut)/LinkIn does not face overflow. In this formula, LinkIn is the incoming link capacity and LinkOut is the outgoing link capacity; for the investigated topology this yields 0.24 s × (2 − 1)/2 = 0.12 s, matching the 120 ms of buffering at the congestion point.

B. Traffic details

We investigated the transmission of two different traffic types: data sessions and video sessions. Each data session was a pure CBR data session of 2.5 Mbps where each packet was exactly 1500 bytes, resulting in 208.33 packets in a 1 second time frame. No jitter was introduced in the arrival time of the packets, resulting in a periodic process where one packet of 1500 bytes arrives every 0.0048 seconds per session, as the 208.33 packets are equidistantly divided over a 1 second time interval. For the video sessions, we used 500 different video sequences encoded in H.264 at PAL resolution and a frame rate of 25 fps.

The videos have different content types to represent a realistic bouquet of requested video services: a music show, a news broadcast, a wildlife documentary, a soccer game and an action movie. The videos were encoded with a constant quality encoder, resulting in a variable bit rate. The bit rate differs from video to video but averages between 2 Mbps and 3 Mbps. However, on a smaller time frame, the peak bit rates can be much larger, e.g. up to 9.5 Mbps for a 240 ms timeframe.

C. PCN Parameters

For the token bucket approach we varied the token bucket depth from 5,000 to 100,000,000 bits. The token bucket threshold was set to 50% of the depth as, for this configuration, the delay introduced to switch between the two states (pre-congestion and no congestion) is more or less equal in both directions. For the bandwidth metering algorithm, the measurement window was varied from 0.001 to 2 seconds. The CLE weight, as defined in Section IV, was varied from 0.9 to 0.99999. The threshold rate of both algorithms was varied from 600 Mbps to 1000 Mbps. For each test, we performed 30 iterations; to verify the statistical significance of the results we performed a one-way ANOVA test on the results.

VI. EVALUATION RESULTS

In this section, we discuss the optimal configuration of both the token bucket and the bandwidth metering algorithm for CBR and VBR sessions. We vary the algorithms' parameters, such as the rate and the aggregation interval, and take both network and video quality metrics into account. Furthermore, we also study the influence of the request rate. We first focus on a non-pathological case with a request rate of 4 requests per second, but also investigate the impact of flash crowds with a request rate of 1000 requests per second. Unless otherwise stated, the CLE weight was set to 0.9.

A. Impact of the traffic type

Figure 3a illustrates PCN's admittance process over time when protecting CBR data sessions. In this test, the rate was set to 990 Mbps, the request rate was set to 4 requests per second, the token bucket depth was set to 500,000 bits in the token bucket algorithm, and the measurement window was set to 0.0048 seconds in the bandwidth metering algorithm. The latter value is the theoretical optimum since, for each session individually, a packet is sent every 0.0048 seconds. There is almost no difference between the bandwidth metering and the token bucket algorithm. In both approaches, the measured bandwidth averages around 990 Mbps, while approximately 400 sessions are allowed. Furthermore, the admission process is very stable: once the first sessions are blocked, no other sessions are allowed. The results for the RACS mechanism are comparable: as each CBR data session is 2.5 Mbps, the amount of resources to reserve is set to 2.5 Mbps and exactly 400 sessions are allowed in each iteration. ANOVA tests also show that there is no significant difference between the three algorithms.
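As a quick sanity check on the numbers in this experiment (plain arithmetic on the values stated above; nothing here is measured), the packet spacing of a single CBR session and the expected number of admitted sessions at the 990 Mbps rate can be derived as follows.

```python
# Arithmetic check of the CBR figures used in Sections V-B and VI-A.
session_rate = 2.5e6            # bit/s per CBR data session
packet_size_bits = 1500 * 8     # each packet is exactly 1500 bytes
packets_per_second = session_rate / packet_size_bits

print(round(packets_per_second, 2))        # 208.33 packets per second
print(round(1.0 / packets_per_second, 4))  # 0.0048 s between packets of one session

configured_rate = 990e6                      # bit/s, the rate used for Figure 3a
print(int(configured_rate // session_rate))  # 396, i.e. approximately 400 sessions
```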


Fig. 3. Influence of the token bucket and bandwidth metering approach on the number of admitted sessions and measured bandwidth for a mix of CBR data sessions (a) and VBR video sessions (b-c). Both mechanisms provide stable and comparable results. In the VBR case, the PCN mechanism is compared with two configurations of the RACS mechanism, identifying the gain of applying PCN.

The previous figure indicates that configuring the PCN mechanism for CBR data sessions is easy, as a lot is known a priori about the behaviour of the sessions. Figures 3b and 3c illustrate the admittance decision and measured bandwidth over time for the bandwidth metering and token bucket algorithm, respectively. These algorithms are also compared with two configurations of the RACS mechanism. In the first configuration, the amount of resources to reserve is set to 9.5 Mbps: this is the peak bit rate measured over an aggregation interval of 0.240 seconds and is the lowest value that can guarantee that no buffer overflow occurs in the network. In the second configuration, a certain level of statistical multiplexing gain is assumed and the amount of resources to reserve is set to 5 Mbps, which is double the average bit rate of the videos. For all algorithms the request rate was set to 4 requests per second. For both PCN algorithms, the rate was set to 825 Mbps, the token bucket depth was set to 16,000,000 bits, and the measurement window was set to 140 ms. Applying PCN results in a large increase in network utilisation for bursty traffic when compared to the RACS mechanism. When the RACS is configured at 9.5 Mbps, only 106 sessions are allowed, resulting in a network utilisation which is 3 times lower than when PCN is applied. A RACS configuration of 5 Mbps results in the admittance of 200 sessions, which is still considerably lower than when PCN is applied. However, an advantage of the RACS is that it is able to provide a better transition between the admittance and blocking phase: in PCN, there is a short period in which some sessions are allowed and others are blocked, which is not the case in the RACS mechanism. When comparing Figures 3b and 3c, we see almost no difference. Both PCN algorithms provide a comparable admittance process, resulting in approximately the same number of sessions being allowed into the network. This is also confirmed by ANOVA tests, which show no significant difference between the token bucket and bandwidth metering algorithms for both the bandwidth measurements and the number of admitted sessions. When studying the configuration of both mechanisms, the variability of the individual sessions results in a variable

Fig. 4. Influence of the configured rate on the measured bandwidth and average video quality per session for both the bandwidth metering and token bucket approach.

session aggregate as well. Therefore, the same configuration as in the CBR case does not yield good results. Both algorithms only block all sessions once the bandwidth aggregate is above the configured rate. This is because the variability of the aggregate results in bandwidth measurements below the threshold and hence in unmarked packets. A suitable value for the rate parameter is discussed in the next section.

B. Impact of the configured rate

In the remainder of this section, we focus on the transmission of VBR videos as this is the most realistic and at the same time most challenging scenario. For this test, the request rate was set to 4 requests per second. The token bucket depth was set to 16,000,000 bits, while the measurement window of the bandwidth metering algorithm was set to 140 ms. Figure 4 illustrates the influence of the configured rate on the measured peak bandwidth and the average video quality per session. The video quality is measured through the Structural Similarity (SSIM) index [1], an objective full-reference quality metric based on the assumption that the Human Visual System is specialised in extracting structural information from scenes. The SSIM model takes the original and the distorted signal as input and produces a score between


Fig. 5. Influence of the aggregation interval on the number of admitted sessions (a) and average video quality per session (b) for both the bandwidth metering aggregation interval (lower x axis) and the token bucket aggregation interval (upper x axis). In the configuration of the aggregation interval, there is a trade-off between speed and accuracy: an optimum can be found that maximises both the number of admitted sessions and the average video quality.

0 and 1, where 1 stands for perfect quality. The SSIM scores should be interpreted as follows: a video with an SSIM score above 0.9 is indistinguishable from the original, an SSIM score between 0.8 and 0.9 corresponds to moderate quality, while an SSIM score of 0.7 or lower results in a video which is barely watchable. The figure shows that the configured rate is not the same as the peak bandwidth and that a drop in video quality occurs once the configured rate exceeds 875 Mbps. In terms of packet loss, a configured rate of 875 Mbps results in an average packet loss ratio of 0.07%, which increases to a packet loss ratio of 4.37% for a configured rate of 950 Mbps. This explains the drop in SSIM score from perfect quality to bad quality, approximately 0.62 for both algorithms. Furthermore, ANOVA tests show that there is no significant difference between the two algorithms in terms of SSIM score. These results confirm the observations made in Figures 3b and 3c: the configured rate is not the maximum rate allowed by the admission control mechanism but a parameter of the system. When transmitting bursty traffic, the measurement algorithm will only mark all packets once the bandwidth aggregate is above the configured rate. Hence, when configuring the rate, it is important to take the variability of the aggregate into account.

C. Impact of the aggregation interval

Neither measurement algorithm takes decisions based solely on the last packet received: both aggregate the information and average over a series of packets. For each measurement algorithm, this aggregation interval consists of two components. For the token bucket algorithm, the aggregation interval is determined by the token bucket depth and the CLE weight: the larger the token bucket depth, the longer it takes to reach the token bucket threshold and thus the more packets are taken into account. For the bandwidth metering algorithm, the measurement window and the CLE weight are the contributing parameters. Figure 5 illustrates the influence of the aggregation interval

for both measurement algorithms on the number of admitted sessions and the average video quality per session. The request rate was set to 1000 requests per second and the configured rate was set to 825 Mbps. A number of important observations can be made. Firstly, the variability of the VBR sessions, and consequently of their aggregate, imposes constraints on the aggregation interval. The aggregation interval needs to be large enough to cope with the fluctuations in bandwidth. If the aggregation interval is too small, not all sessions are blocked when congestion is imminent and over-admission occurs, leading to a complete video quality degradation. For a token bucket depth of 10,000,000 bits or lower and a measurement window of 0.1 s or lower, the measured packet loss ratio is more than 10%, leading to videos which cannot even be decoded anymore and an SSIM score of 0.0. While Figure 5 shows the results for a high request rate of 1000 requests per second, this observation also holds for lower request rates. Secondly, the aggregation interval also introduces a delay in the measurement algorithm. The higher the aggregation interval, the longer it takes to react to a pre-congestion state. As a result, more sessions will be allowed because the measurement algorithm reacts too slowly. This is especially the case for a high request rate combined with a token bucket depth of 20,000,000 bits and higher or a measurement window of 0.7 seconds and higher. The high request rate in combination with a large delay causes the PCN mechanism to allow too many sessions, ultimately leading to a video quality drop from 0.98 to 0.78 for the bandwidth metering algorithm and to 0.63 for the token bucket algorithm. Finally, it is not possible to combine a very small token bucket depth (i.e. smaller than 10,000,000 bits) or measurement window (i.e. smaller than 0.1 second) with a large CLE weight. In this case, over-admittance still occurs and the SSIM score drops below 0.8, which corresponds to a moderate to poor video quality. This is somewhat counter-intuitive, as one


Fig. 6. Impact of the CLE value on a large burst and long silence. The last packets are marked, causing the CLE to increase. During the silence, the CLE value receives no input on the congestion level and remains unchanged until the next packet arrives.

would expect the high CLE weight to be able to compensate for the small token bucket depth or measurement window. However, the way information is signalled in the PCN architecture results in a loss of information. A low token bucket depth or a small measurement window will already trigger marked packets when there is no sign of congestion, which should be compensated by the CLE calculation. However, as illustrated in Figure 6, since the marking of packets is the only information the CLE calculation can rely on, it ignores silences, resulting in inaccurate measurements.

D. Impact of the request rate

Figure 5 already indicated that the request rate has an influence on the performance of the PCN mechanism. This is illustrated further in Figure 7, which shows the impact of the request rate on the peak bandwidth and the average video quality per session. In this test, a token bucket depth of 16,000,000 bits is used for the token bucket algorithm and a measurement window of 140 ms is used for the bandwidth metering algorithm. For both algorithms, the rate was set to 875 Mbps and the CLE weight was set to 0.9. Figure 7 shows how an increasing request rate leads to an increasing peak bandwidth. As already shown in Figure 5, this is because the measurement algorithm detects the pre-congestion state too slowly; the introduced delay results in additional sessions being wrongfully accepted. While previous tests (i.e. Figure 4) showed that a rate of 875 Mbps resulted in perfect video quality for a request rate of 4 requests per second, applying the same rate for a larger request rate results in a drop in video quality. When the number of requests is increased to 10 requests per second or more, the SSIM score drops from approximately 0.89 to approximately 0.62, which corresponds to a shift from moderate to bad video quality. Furthermore, ANOVA tests show that there is no significant difference between both algorithms in terms of SSIM score. These results show that, when configuring the PCN mechanism, it is important to take the request rate into account as well. The PCN mechanism should be able to support a maximum request rate and the parameters of the algorithm should be adjusted accordingly. For example, for the configured rate it is important to introduce a certain amount of headroom to cope with high request rates (e.g. caused by flash crowds). During normal operation, this headroom is bandwidth which is wasted, as fewer sessions will be allowed. As shown in Figure 7, a configured rate of 875 Mbps is too high to obtain perfect video quality at high request rates. To support a request rate

Fig. 7. Influence of the request rate on the measured bandwidth and average video quality per session for both the bandwidth metering and token bucket approach.

of 1000 requests per second, the rate should be at most 825 Mbps. However, when only 1 request per second is received, approximately 80 Mbps of bandwidth is wasted. Hence, there is a trade-off between the maximum request rate supported and the bandwidth one is willing to waste to protect against these flash crowds.

VII. DISCUSSION & FUTURE WORK

In the previous section, we presented a detailed evaluation of both the token bucket and the bandwidth metering algorithm by varying different configuration parameters and request rates. The obtained results clearly show that both measurement algorithms can be successfully configured to protect the transmission of bursty and non-bursty traffic. However, bursty traffic does have a number of consequences for the ease of configuration. Firstly, the aggregation interval needs to be large enough to cope with the fluctuations in the traffic, but on the other hand not too large, in order to minimise the introduced delay; a larger delay limits the supported request rate of the PCN mechanism. Secondly, when configuring the rate, the rate should not be set to the link capacity: it is merely a parameter, which depends on both the variability of the aggregate and the maximum supported request rate. A rule of thumb expressing the relationship between the rate and its dependencies is:

Rate ≈ LinkCapacity − Var_aggregate(m) − Delay × MaximumRequestRate × BW_session

In this expression, Delay is an estimation of the delay introduced by the measurement algorithm and BW_session is an estimation of the average bandwidth of a session. The variability of the aggregate is characterised by:

Var_aggregate(m) = max_{i = t0..tn} BW_aggregate(i, m) − min_{i = t0..tn} BW_aggregate(i, m)

where BW_aggregate(i, m) denotes the bandwidth aggregate measured at time i over a measurement interval m. Hence, the variability is the difference between the maximum and minimum of all bandwidth measurements while the system is in the pre-congestion state, defined through the interval [t0, tn].
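To illustrate how the rule of thumb is applied, the snippet below plugs in the link capacity, average session bandwidth and maximum request rate from our setup, together with assumed (not measured) values for the aggregate variability and the detection delay.

```python
# Illustration of the configured-rate rule of thumb. Only the link capacity,
# the average per-session bit rate and the maximum request rate are taken from
# the paper; the variability and delay figures are assumptions for the example.
link_capacity    = 1000e6   # bit/s, the 1 Gbps bottleneck link
var_aggregate    = 120e6    # bit/s, assumed variability of the aggregate
delay            = 0.02     # s, assumed detection delay of the measurement algorithm
max_request_rate = 1000     # requests per second (flash crowd to support)
bw_session       = 2.5e6    # bit/s, average bandwidth of a video session

rate = link_capacity - var_aggregate - delay * max_request_rate * bw_session
print(rate / 1e6)  # 830.0 Mbps with these assumptions, well below the link capacity
```

The resulting value is in the same range as the 825 Mbps found experimentally for a request rate of 1000 requests per second, but the assumed inputs should be estimated per deployment.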


Note that these expressions are rules of thumb and should not be considered an analytical configuration of the PCN mechanism. Several parameters, i.e. Delay, Var_aggregate and BW_session, can only be estimated and can fluctuate considerably, depending on the burstiness of the sessions and their aggregate.

When comparing the token bucket and bandwidth metering approach, both algorithms provide comparable performance. However, as discussed in Section IV, the packet based approach used by the token bucket algorithm makes it hard to know the exact value of the aggregation interval in seconds. While, for the bandwidth metering mechanism, algorithms can be designed to adaptively determine a good configuration based on previous measurements, this is not as easy for the token bucket approach, as a good configuration depends on micro level information about the traffic (e.g. the size of individual bursts). Hence, for bursty traffic, the bandwidth metering algorithm can be considered a better option than the token bucket approach. While both approaches show comparable performance, the bandwidth metering algorithm is easier to configure as its effects on the traffic are more intuitive.

The focus of this paper is on the evaluation of PCN for bursty traffic and on deriving rules of thumb for configuring both algorithms. The results show that the configuration of both mechanisms depends on the type of traffic being transmitted (e.g. CBR vs. VBR) and on the request rate. If no traffic specification is known beforehand, it is difficult to find a suitable configuration that maximises the network usage and protects the network from congestion. However, the results have shown the impact of different network conditions on the configuration of the measurement algorithms. Therefore, adaptive algorithms that change their configuration based on local measurements of the traffic aggregate and the request rate are necessary, which is an important topic for future work.

VIII. CONCLUSION

We studied the performance of the Pre-Congestion Notification mechanism in protecting video services in multimedia aggregation networks. We designed a new measurement algorithm for PCN based on bandwidth metering and performed an experimental characterisation of the novel algorithm and the token bucket algorithm, as proposed by the IETF working group. Furthermore, we compared the PCN mechanism with a centralised admission control mechanism and highlighted its gain. We focused on results obtained through simulations, which provide information about the network utilisation and the video quality of the received videos. Through this evaluation, we derived rules of thumb for configuring the token bucket and bandwidth metering algorithms. The results show that the introduction of bursty traffic, which is common for VBR videos but currently not studied in the original PCN work, has consequences for the configuration: a trade-off is present between speed and accuracy. A large aggregation interval results in a more accurate network load measurement but also in an increased delay in pre-congestion detection. Furthermore, we discussed how the token bucket algorithm is harder to configure for bursty traffic due to its packet based nature. The configuration of both algorithms should take into account the variability of the aggregate and the request rate. Therefore, the design of an adaptive approach, where the configuration is altered based on previous measurements, is an important part of future work.

ACKNOWLEDGMENT

The research was performed partially within the framework of the EUREKA CELTIC RUBENS project; the authors would like to thank all RUBENS partners for their valuable contributions and feedback. Steven Latré is funded by a Ph.D. grant of the Fund for Scientific Research, Flanders (FWO-V).

REFERENCES

[1] Z. Wang, L. Lu, and A. C. Bovik, "Video quality assessment based on structural distortion measurement," Signal Processing: Image Communication, vol. 19, no. 2, pp. 121–132, February 2004.
[2] ETSI TS 182 019, "Resource and Admission Control Sub-system (RACS); Function Architecture."
[3] "Congestion and Pre-Congestion Notification Working Group," [online] http://tools.ietf.org/wg/pcn.
[4] P. Eardley, "Pre-Congestion Notification (PCN) Architecture," RFC 5559 (Informational), Jun. 2009. [Online]. Available: http://www.ietf.org/rfc/rfc5559.txt
[5] "Ns-2, The Network Simulator," [online] http://www.isi.edu/nsnam/ns/.
[6] S. Latré, F. De Turck, B. Dhoedt, and P. Demeester, "Scalable Simulation of QoE Optimization for Multimedia Services over Access Networks," in The International Conference on Internet Computing (ICOMP), 2007.
[7] "ITU-T Recommendation Y.2111: Resource and admission control functions in next generation networks," 2006.
[8] K. Nichols, V. Jacobson, and L. Zhang, "A Two-bit Differentiated Services Architecture for the Internet," RFC 2638 (Informational), Jul. 1999. [Online]. Available: http://www.ietf.org/rfc/rfc2638.txt
[9] C. Esteve Rothenberg and A. Roos, "A review of policy-based resource and admission control functions in evolving access and next generation networks," Journal of Networks and Systems Management, vol. 16, no. 1, pp. 14–45, 2008.
[10] R. Braden, D. Clark, and S. Shenker, "Integrated Services in the Internet Architecture: an Overview," RFC 1633 (Informational), Jun. 1994.
[11] R. Braden, L. Zhang, S. Berson, S. Herzog, and S. Jamin, "Resource ReSerVation Protocol (RSVP) – Version 1 Functional Specification," RFC 2205 (Proposed Standard), Sep. 1997.
[12] P. Yuan, J. Schlembach, A. Skoe, and E. Knightly, "Design and implementation of scalable edge-based admission control," Computer Networks, vol. 37, no. 5, pp. 507–518, 2001.
[13] J. Qiu, C. Cetinkaya, C. Li, and E. W. Knightly, "Inter-class resource sharing using statistical service envelopes," in Proceedings of IEEE INFOCOM, 1999, pp. 36–42.
[14] J. Rezgui, A. Hafid, and M. Gendreau, "A distributed admission control scheme for wireless mesh networks," in 5th International Conference on Broadband Communications, Networks and Systems (BROADNETS 2008), 2008.
[15] Z. Xia, W. Hao, I.-L. Yen, and P. Li, "A distributed admission control model for QoS assurance in large-scale media delivery systems," IEEE Transactions on Parallel and Distributed Systems, vol. 16, no. 12, pp. 1143–1153, Dec. 2005.
[16] M. Menth and F. Lehrieder, "Performance evaluation of PCN-based admission control," in 16th International Workshop on Quality of Service (IWQoS 2008), 2008, pp. 110–120.

where a trade off is present between speed and accuracy. A large aggregation interval results in a more accurate network load measurement but also in an increase in delay in precongestion detection. Furthermore, we discuss how the token bucket algorithm is harder to configure for bursty traffic due to its packet based nature. The configuration of both algorithms should take into account the variability of the aggregate and the request rate. Therefore, the design of an adaptive approach where the configuration is altered based on previous measurements is an important part of future work. ACKNOWLEDGMENT The research was performed partially within the framework of the EUREKA CELTIC RUBENS project; The authors would like to thank all RUBENS partners for their valuable contributions and feedback. Steven Latr´e is funded by Ph.D grant of the Fund for Scientific Research, Flanders (FWO-V). R EFERENCES [1] Z. Wang, L. Lu, and A. C. Bovik, “Video quality assessment based on structural distortion measurement,” Signal Processing: Image Communication, vol. 19, no. 2, pp. 121–132, February 2004. [2] ETSI TS 182 019, “Resource and Admission Control Sub-system (RACS); Function Architecture.” [3] “Congestion and Pre-Congestion Notification Working Group,” [online] http://tools.ietf.org/wg/pcn. [4] P. Eardley, “Pre-Congestion Notification (PCN) Architecture,” RFC 5559 (Informational), Jun. 2009. [Online]. Available: http://www.ietf.org/rfc/rfc5559.txt [5] “Ns-2, The Network Simulator,” [online] http://www.isi.edu/nsnam/ns/. [6] S. Latr´e, F. De Turck, B. Dhoedt, and P. Demeester, “Scalable Simulation of QoE Optimization for Multimedia Services over Access Networks,” in The International Conference on Internet Computing (ICOMP), 2007. [7] “ITU-T Recommendation Y.2111: Resource and admission control functions in next generation networks,” 2006. [8] K. Nichols, V. Jacobson, and L. Zhang, “A Two-bit Differentiated Services Architecture for the Internet,” RFC 2638 (Informational), Jul. 1999. [Online]. Available: http://www.ietf.org/rfc/rfc2638.txt [9] C. Esteve Rothenberg and A. Roos, “A review of policy-based resource and admission control functions in evolving access and next generation networks,” Journal of Networks and Systems Management, vol. 16, no. 1, pp. 14–45, 2008. [10] R. Braden, D. Clark, and S. Shenker, “Integrated Services in the Internet Architecture: an Overview,” RFC 1633 (Informational), Jun. 1994. [11] R. Braden, L. Zhang, S. Berson, S. Herzog, and S. Jamin, “Resource ReSerVation Protocol (RSVP) – Version 1 Functional Specification,” RFC 2205 (Proposed Standard), Sep. 1997. [12] P. Yuan, J. Schlembach, A. Skoe, and E. Knightly, “Design and implementation of scalable edge-based admission control,” Comput. Netw., vol. 37, no. 5, pp. 507–518, 2001. [13] J. yu Qiu, C. Cetinkaya, C. Li, and E. W. Knightly, “Inter-class resource sharing using statistical service envelopes,” in In Proceedings of IEEE Infocom, 1999, pp. 36–42. [14] J. Rezgui, A. Hafid, and M. Gendreau, “A distributed admission control scheme for wireless mesh networks,” in 5th International Conference on Broadband Communications, Networks and Systems, 2008. BROADNETS 2008., 2008. [15] Z. Xia, W. Hao, I.-L. Yen, and P. Li, “A distributed admission control model for QoS assurance in large-scale media delivery systems,” Parallel and Distributed Systems, IEEE Transactions on, vol. 16, no. 12, pp. 1143–1153, Dec. 2005. [16] M. Menth and F. Lehrieder, “Performance evaluation of PCN-based admission control,” in Quality of Service, 2008. 
[17] M. Menth and M. Hartmann, "Threshold configuration and routing optimization for PCN-based resilient admission control," Journal of Computer Networks, 2009.

