VOICE OVER IP AND QUALITY OF SERVICE

Toward Scalable Admission Control for VoIP Networks Kenichi Mase, Niigata University

ABSTRACT We present an overview of scalable admission control in IP networks, introducing various approaches and discussing the mechanism and characteristics of each. In particular, we argue that end-to-end measurement-based admission control (EMBAC), which employs end-to-end on-demand probing, should be used for call admission control. We then consider the use of EMBAC in VoIP networks. We present a new probability-based EMBAC scheme and show that its performance is close to that of the ideal method using virtual-trunk-based admission control. We also present a QoS allocation approach for selecting an admission threshold and dimensioning link capacities. A simple network design and evaluation results suggest that this QoS allocation approach is effective for adequately dimensioning a network while satisfying end-to-end targets in terms of blocking probability and packet loss rate.

INTRODUCTION The current Internet provides best effort service and does not support quality of service (QoS). Although Internet service providers (ISPs) customarily announce their backbone network QoS in service level agreements (SLAs), this is different from end-to-end, application-specific QoS for each application. As the Internet becomes popular and widely used, demand for guaranteed end-to-end QoS that supports each real-time application at the network level is increasing. Among real-time applications, voice over IP (VoIP) is becoming a popular service in the Internet as well as in enterprise intranets. Today the number of VoIP subscribers is rapidly increasing with the spread of broadband access services such as asymmetric digital subscriber line (ADSL) and fiber to the home (FTTH). If new calls are accepted without limit in a VoIP network, the QoS (packet loss rate, delay, etc.) of calls in progress may fall below an acceptable level because the total bandwidth required for them exceeds the network capacity. Therefore, a mechanism called call admission control (CAC) is necessary to reject a new call when sufficient spare network capacity is not available.


Conventional circuit-switched networks have an inherent admission control mechanism: link capacity is divided into a number of trunks, and CAC is conducted at every switch during the call setup process. The Internet originally provided best effort service and does not support CAC. Admission control is as necessary in the connectionless IP network as in circuit-switched networks, although its function and implementation may differ. The basic notion of CAC is to accept a call request if the required resources (buffer and bandwidth) can be allocated to the new call while the given QoS target is maintained for all existing calls, and otherwise to reject the call. Frameworks for QoS management have been studied in the Internet Engineering Task Force (IETF). They include integrated services/Resource Reservation Protocol (IntServ/RSVP) and differentiated services (DiffServ). In the IntServ architecture, a signaling protocol, RSVP, is used for reserving resources in the routers along the path to guarantee QoS for a new call. A set of traffic parameters is used to describe the traffic characteristics of each flow. These parameters are maintained at each router and used to calculate the total required bandwidth, which is compared to the link capacity in admission control. If resources for a new call are not available at a router, the call is rejected. This mechanism is similar to that in the classical telephone network. In IntServ/RSVP, each router needs to maintain status information for each call in progress, which presents scalability challenges. IntServ may be used in access networks, but it would be difficult to scale in backbone networks. In DiffServ, on the other hand, packet classification and DiffServ code point (DSCP) marking are conducted for incoming packets at the edge router of the backbone network based on an SLA. Packets are forwarded with preferential treatment based on DSCP at the core routers in the backbone network. Because a core router recognizes only DSCP-based aggregate flows, the scalability problem is relaxed. The DiffServ architecture can provide packet-level QoS for the aggregate flows, which meets the contracted bandwidth but does not provide QoS for each flow.


That is, no flow-level or call-level admission control is included in DiffServ. In this article we first give an overview of scalable admission control in IP networks in general. We then focus on end-to-end measurement-based admission control (EMBAC), present efficient EMBAC methods for VoIP networks, and discuss their traffic engineering aspects along with performance evaluation results. Finally, we offer some concluding remarks.

■ Figure 1. Approaches to scalable admission control.

APPROACHES TO SCALABLE ADMISSION CONTROL The proposed scalable admission control methods can be classified into three approaches: IntServ over DiffServ, lightweight signaling protocols, and endpoint control, which is further divided into the traffic-engineered tunnel method and EMBAC, as shown in Fig. 1.

■ Figure 2. An image of IntServ over DiffServ: IntServ access networks interconnected through a DiffServ backbone network that acts as a virtual link.

INTSERV OVER DIFFSERV In this method, IntServ is supported in the access networks and DiffServ in the backbone network. A DiffServ cloud provides a virtual link between IntServ access networks (Fig. 2). DiffServ allocates backbone network resources to interconnect the access networks, and IntServ reallocates those resources to each call to satisfy its resource request. Signaling messages such as RSVP PATH and RESV are carried as data packets in the DiffServ backbone network. Access routers conduct admission control according to instructions given by the policy server: RESV messages are allowed to pass the access router when resource reservation is possible on the virtual link. In general, a DiffServ backbone network may be composed of multiple DiffServ administrative domains, and resources are allocated to the aggregated flows passing between the domains based on SLAs. When traffic characteristics change, dynamic SLA negotiation and resource allocation with the support of a bandwidth broker are desirable. However, if the resource allocation between domains does not correctly reflect the traffic characteristics of the aggregated flow, admission control for each call in the access network may not be consistent with the congestion state of the virtual link, and individual QoS requirements may not be satisfied.
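
To make the access-router decision concrete, the following is a minimal sketch of bandwidth bookkeeping against the virtual link. The class name, the SLA-derived bandwidth value, and the simple peak-rate accounting are illustrative assumptions, not part of any standard.

```python
# Hypothetical sketch of the access router check in IntServ over DiffServ:
# a RESV message is allowed to pass only while the per-call reservations fit
# the bandwidth the SLA allocates to the virtual link across the DiffServ
# backbone. All names and the peak-rate bookkeeping are assumptions.

class VirtualLink:
    def __init__(self, sla_bandwidth_bps: float):
        self.sla_bandwidth_bps = sla_bandwidth_bps
        self.reserved = {}  # call_id -> reserved rate (b/s)

    def admit_resv(self, call_id: str, requested_bps: float) -> bool:
        """Let the RESV message pass only if the virtual link has spare capacity."""
        if sum(self.reserved.values()) + requested_bps <= self.sla_bandwidth_bps:
            self.reserved[call_id] = requested_bps
            return True
        return False  # reservation rejected at the access router

    def release(self, call_id: str) -> None:
        self.reserved.pop(call_id, None)

# Example: a 2 Mb/s virtual link accommodates at most 31 calls of 64 kb/s each.
vlink = VirtualLink(2_000_000)
print(sum(vlink.admit_resv(f"call-{i}", 64_000) for i in range(40)))  # prints 31
```

Note that this is exactly the kind of per-call bookkeeping that limits IntServ scalability in the backbone; confining it to the access routers is what makes the combination workable.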

LIGHTWEIGHT SIGNALING PROTOCOL In the IntServ/RSVP framework, it is possible to improve scalability by aggregating state in the routers or employing resource reservation between subnets [1]. A disadvantage of this approach is that each flow is not completely isolated in the resource allocation, because multiple flows share the same service class. In the measurement-based method, each router measures the aggregated flow on each link and does not need to maintain the state of each flow [2]. This method can improve link usage, although it may not guarantee quantitative QoS for each flow. In Dynamic Packet State (DPS) [3], each router maintains a reservation rate for the aggregated flow (the aggregate reservation rate) on each outgoing link and conducts per-hop admission control.


The aggregate reservation rate is updated when a new flow is added or an existing flow is terminated. In general, this approach is not robust against failures. To overcome this problem, the ingress router adds information to each packet header in DPS. Let t1 and t2 be the times at which the (k - 1)th and kth packets of flow i are transmitted by the ingress router. The product r * (t2 - t1) is then inserted in the header of the kth packet, where r is the reserved rate for flow i. DPS allows each core router to calculate the aggregate reservation rate while remaining robust. These methods do not need to maintain per-flow state, but they still require tasks such as call setup signal processing, measurement, and admission control at each router, so the improvement in scalability may be limited.
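
As a rough illustration of this mechanism, the sketch below shows an ingress router stamping packets with r * (t2 - t1) and a core router recovering the aggregate reservation rate on a link by summing the stamps over a measurement window. The class names, the window-based estimator, and the bookkeeping details are assumptions made for the example; [3] defines the actual algorithms.

```python
# Minimal sketch of the DPS idea: the ingress stamps each packet of flow i
# with r * (t2 - t1), and a core router recovers the aggregate reservation
# rate on a link by summing the stamps seen in a measurement window.

class IngressStamper:
    def __init__(self):
        self.last_tx_time = {}  # flow_id -> time the previous packet was sent

    def stamp(self, flow_id: str, reserved_rate_bps: float, now: float) -> float:
        """Return the value r * (t2 - t1) to carry in the packet header."""
        t1 = self.last_tx_time.get(flow_id, now)
        self.last_tx_time[flow_id] = now
        return reserved_rate_bps * (now - t1)

class CoreEstimator:
    def __init__(self, window_s: float):
        self.window_s = window_s
        self.stamp_sum = 0.0

    def on_packet(self, header_stamp: float) -> None:
        self.stamp_sum += header_stamp

    def aggregate_reservation_bps(self) -> float:
        # Over a window W, each flow contributes roughly r * W to the sum,
        # so dividing by W recovers the sum of the reserved rates.
        estimate = self.stamp_sum / self.window_s
        self.stamp_sum = 0.0  # start the next window
        return estimate
```

Over a window of length W, each flow contributes roughly r * W to the sum, so dividing by W recovers the sum of reserved rates without the core router keeping any per-flow state.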

ENDPOINT CONTROL Traffic Engineered Tunnel Method — A tunnel is established between endpoints. A label switched path in multiprotocol label switching (MPLS) can be used to realize a tunnel [4]. Dedicated bandwidth and resources are allocated to each tunnel with the support of traffic engineering, and admission control is performed at the entrance to the tunnel (the source endpoint), resulting in relatively low network resource efficiency.

EMBAC — An endpoint has an end-to-end QoS measurement mechanism that estimates the internal congestion state of the network and conducts admission control. An endpoint can be a host or an access router to the backbone network. Core routers do not have to support a hop-by-hop signaling protocol such as RSVP for resource reservation. Thus, EMBAC is potentially scalable and efficient, and has high adaptability to the DiffServ architecture.


■ Figure 3. An image of active end-to-end measurement-based admission control: a probe flow is sent across the network between endpoints, and the resulting QoS data drives the admission decision.

There are two types of control scheme in EMBAC. In the passive method, QoS is measured for the real end-to-end aggregate flows. In the active method, QoS is measured for an end-to-end probe flow. QoS can be measured in terms of packet loss rate, delay, or delay jitter over a properly set measurement period. Typically, a sequence number is used to detect lost packets, and a timestamp is used to measure delay, assuming time synchronization between endpoints.

Passive Method [5, 6] — Actual data flows are constantly monitored at the endpoints to estimate the network internal state. If the sum of the measured load of the existing aggregate flows and the expected additional load of a new flow is no greater than the estimated service capacity between the ingress and egress endpoints, the flow is admitted. The advantage of the passive method is that there is no need to generate probe flows, which consume a part of the network resources as overhead. The network state can be systematically monitored on various timescales, but the measurement load at the ingress and egress routers is generally high. To reduce the measurement load, sampling techniques can be effective at the cost of reduced accuracy in estimating the network internal state. When the backbone network is composed of multiple DiffServ domains, the ISPs concerned need to coordinate measurement at the ingress and egress routers. The following techniques may be used to support passive EMBAC.

Measurement methods [7–10]: To estimate the network loading state, the egress router needs not only the time each packet leaves the network but also the time it arrived at the ingress router. If the ingress and egress router clocks are synchronized, a timestamp inserted at the ingress router can be used. If not, each router may calculate the queuing time of each packet and add it to an accumulated queuing time field in the packet header. The reservation message, which includes the traffic profile and QoS requirements generated by the application, is transferred to the egress router, where admission control is conducted.

Congestion notification: Network congestion information is delivered to the endpoints and used to support admission control there. At each core router, packets may be marked based on the congestion level of the related link to deliver congestion information to the endpoints.

Dummy traffic: When there is not sufficient data flow traffic between an endpoint pair, congestion information may not be delivered in a timely manner. In such a case a probe flow may additionally be used to compensate for the lack of data flow. This is to be distinguished from the active method, where a probe flow is generated for each data flow.
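
As a small illustration of the passive admission test just described, the following sketch keeps a running estimate of the aggregate load seen at the egress and admits a new flow only if the expected additional load still fits the estimated capacity. The class name, the exponentially weighted averaging, and the peak-rate description of the new flow are assumptions made for the example; the estimators actually proposed in [5, 6] are based on measured traffic envelopes and service curves.

```python
# Minimal sketch of a passive admission test at the egress endpoint:
# admit a new flow only if the measured load of the existing aggregate flows
# plus the expected load of the new flow fits the estimated service capacity.

class PassiveAdmission:
    def __init__(self, estimated_capacity_bps: float):
        self.estimated_capacity_bps = estimated_capacity_bps
        self.measured_load_bps = 0.0

    def update_measurement(self, bytes_seen: int, interval_s: float) -> None:
        # Exponentially weighted average of the observed aggregate rate.
        sample_bps = 8.0 * bytes_seen / interval_s
        self.measured_load_bps = 0.8 * self.measured_load_bps + 0.2 * sample_bps

    def admit(self, new_flow_rate_bps: float) -> bool:
        return self.measured_load_bps + new_flow_rate_bps <= self.estimated_capacity_bps
```

In practice the capacity estimate itself must be derived from measurements between the ingress and egress, which is where most of the complexity of the passive method lies.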


Active Method — When a data flow request occurs, a short-duration probe flow is generated between the endpoints, and the QoS of the probe flow is measured to estimate the network internal state. The measured QoS is compared to a predetermined threshold (the admission threshold); if it is no greater than the admission threshold, the new call is accepted; otherwise, it is rejected (Fig. 3). A probe packet may include the probe time, rate, and flow ID. The advantage of the active method is simple functionality and implementation at the endpoints. The disadvantage is that it adds to call setup time. Some design issues are summarized as follows.

Control architecture: Three methods have been proposed:
• Host control: The probe flow is generated at the source host and transmitted to the destination host, and admission control is done at the source and destination hosts.
• Access router control: The probe flow is generated at the source access router based on a request from the source host and transmitted to the destination access router, and admission control is done at the source and destination access routers.
• Hybrid control: The probe flow is generated at the source host and transmitted to the destination host with the permission of the access routers, and admission control is done at the source and destination access routers.
The control architecture is the major factor in designing VoIP signaling protocols and determining host and access router complexity, and should be selected based on a clear deployment strategy that considers the target application areas. For example, host control may be used in small-scale corporate network environments, while access router control may be used in large-scale virtual private network (VPN) and public VoIP service environments.

Probe mechanism: The duration of the probe flow is typically 1 or 2 s to minimize the increase in call setup delay. If the probe flow duration is too short, QoS measurement accuracy may decrease. Therefore, the probe flow duration must be set as short as possible while preserving measurement accuracy. The probe rate is typically the same as that of the data flow: the peak rate for constant bit rate (CBR) flows, or an effective rate such as the equivalent bandwidth for variable bit rate (VBR) flows. The probe rate may also be selected independent of the data rate, depending on the admission control mechanism.

Scheduling and admission threshold: Consider a link shared by QoS and best effort services. The maximum bandwidth available to the QoS service is given, and probe flows also use this allocated bandwidth. The remainder of the link capacity, together with unused bandwidth allocated to the QoS service, is available for best effort traffic. Data flows and probe flows have priority over best effort flows in packet scheduling. Two methods can be considered for the priority of the probe flow relative to the data flow:


• Same priority: In this case the QoS of the probe flow and the data flow are similar, so the determination of the admission threshold is straightforward: the QoS target for the data flow is used directly as the admission threshold. When new connection requests increase, however, the QoS of the existing data flows degrades due to the load added by the probe flows.
• Lower priority: In this case QoS degradation due to the load of probe flows is minimized. Congestion on an outgoing link can be directly reflected in the probe packet loss rate by limiting the probe queue size or queuing time. Probe flow QoS is then not directly related to data flow QoS, and a method is needed to find the optimum admission threshold.

Congestion notification: Congestion notification can also be used in the active method. In principle, core routers are not directly engaged in admission control in EMBAC. Nevertheless, the performance of admission control may be improved by router functions that support endpoint admission control, as long as scalability is not harmed. For example, the usage of outgoing links and buffer occupancy are measured at each router, and if congestion is detected, a probe packet is marked or discarded so that the congestion information detected by the router is conveyed to the endpoints. It has been proposed that a single class be given to data flow and probe flow, each with a different discarding level under DiffServ assured forwarding, where the QoS of the data flow is measured and probe packets are discarded in case of congestion [9]. It has also been suggested that only a single probe packet be used per call.
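
The router-side support described above can be sketched as follows: when the measured utilization of the outgoing link or the buffer occupancy exceeds a congestion threshold, probe packets are dropped (or marked), so the probing endpoints see the congestion directly. The thresholds, field names, and the drop-versus-mark choice are illustrative assumptions.

```python
# Hedged sketch of congestion notification for probe packets at a core router.
# Thresholds and names are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Packet:
    is_probe: bool
    congestion_marked: bool = False

def forward_decision(pkt: Packet, link_utilization: float, buffer_occupancy: float,
                     util_threshold: float = 0.9, buffer_threshold: float = 0.8) -> str:
    congested = link_utilization > util_threshold or buffer_occupancy > buffer_threshold
    if pkt.is_probe and congested:
        # Either mark the probe packet or discard it; a dropped or marked probe
        # raises the loss/marking rate seen by the probing endpoint.
        return "drop"  # or: pkt.congestion_marked = True; return "forward"
    return "forward"

# Example: a probe packet on a link running at 95 percent utilization is dropped.
print(forward_decision(Packet(is_probe=True), link_utilization=0.95, buffer_occupancy=0.5))
```

Because the router only inspects a flag and two local measurements, no per-flow state is introduced and scalability is preserved.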

EMBAC FOR VOIP NETWORKS A BASIC MECHANISM We consider the use of EMBAC in VoIP networks. A VoIP network may be a corporate network or an ISP network providing public VoIP service. An end node may be a router connected to a LAN or a VoIP gateway accommodating circuit switches or PBXs, and it is in charge of admission control, in line with the definition of EMBAC. Upon receipt of a call request, two probe flows are carried out independently on the forward and backward paths between the pair of end nodes. When there is best effort traffic, three priority classes are used: the highest priority is given to voice flows, the second priority to probe flows, and the lowest priority to best effort traffic. The maximum bandwidth for voice and probe traffic is limited to a predetermined value; the unused bandwidth can be used by best effort traffic without affecting voice traffic. A specific call setup signaling protocol based on EMBAC needs to be developed and standardized for the selected control architecture, as mentioned earlier. We focus on a call with its originating node (node O) and terminating node (node T). Consider the admission test at node T, which is in charge of judging whether or not to accept the flow from node O to node T. To do this, node T measures the packet loss rate of the probe flow from node O to node T over a measurement period that can be set appropriately in the call setup procedure.


■ Figure 4. Network models: a) one-link; b) two-link (three flows between traffic sources and traffic sinks).

If the measured packet loss rate is greater than a predetermined threshold (the admission threshold) x, the flow is rejected with probability p = 1 - f(x), where f(x) is a monotonically increasing function of x. This probability-based scheme is termed EMBAC-P. Note that f(x) = 0 in any conventional EMBAC scheme, which is termed EMBAC with the deterministic policy (EMBAC-D). It has been shown that EMBAC-D tends to over-control call admission [10]; the function f(x) is introduced to relax the strength of control. The selection of f(x) is a research issue, and it has been shown that the simple form f(x) = x achieves high performance. If we increase the admission threshold x, the chance of success in the admission test also increases, so more calls are accepted. This in turn increases the packet loss rate for calls in progress and degrades voice quality. If we decrease x, the opposite effect is obtained at the cost of resource efficiency. Thus, the admission threshold x controls the packet loss rate of the voice flows. We next discuss traffic engineering aspects of VoIP networks with active EMBAC and present two examples of performance evaluation using simple network models. The detailed modeling and parameters are the same as in [10].
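
A minimal sketch of this admission test at node T is given below: the terminating node counts probe packets by sequence number over the measurement period, computes the probe packet loss rate, and applies the EMBAC-P rule, which reduces to EMBAC-D when f(x) = 0. The function names, the way lost packets are counted, and the example numbers are assumptions for illustration; the full scheme and its parameters are defined in [10].

```python
# Sketch of the EMBAC-P admission test at the terminating node.
import random

def probe_loss_rate(received_sequence_numbers: set, packets_sent: int) -> float:
    """Probe packet loss rate, with lost packets detected by gaps in sequence numbers."""
    lost = packets_sent - len(received_sequence_numbers)
    return lost / packets_sent

def admit_embac_p(measured_loss: float, threshold_x: float, f=lambda x: x) -> bool:
    if measured_loss <= threshold_x:
        return True                              # probe QoS is acceptable: accept the call
    return random.random() < f(threshold_x)      # otherwise accept only with probability f(x)

# Example with hypothetical numbers: 100 probe packets sent, 97 received,
# admission threshold x = 0.02 (2 percent probe packet loss).
loss = probe_loss_rate(set(range(97)), 100)      # 0.03
print(admit_embac_p(loss, 0.02))                 # rejected with probability 1 - 0.02 = 0.98
```

The symmetric test for the backward direction is performed at node O in the same way.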

EFFICIENCY EVALUATION We first focus on a simple one-link network model (Fig. 4a) and compare the efficiency of EMBAC-D, EMBAC-P, and virtual-trunk-based admission control, which are defined as follows:
• EMBAC (EMBAC-D and EMBAC-P): Obtain the maximum value of offered traffic A1 that meets an average packet loss rate no greater than 0.1 percent and a blocking probability no greater than 1 percent, using the optimum admission threshold for a given link bandwidth.
• Virtual-trunk-based admission control (VTBAC): Obtain the maximum value of offered traffic A2 that meets the same conditions, using the optimum number of virtual trunks c (the maximum allowed number of calls in progress) for a given link capacity.


■ Figure 5. The efficiency of VTBAC, EMBAC-P, and EMBAC-D as a function of link bandwidth (Mb/s).

We use simulation to obtain A1 and A2. The efficiencies of EMBAC and VTBAC are then defined as A1/c and A2/c, respectively. Note that link-by-link signaling is required to realize VTBAC. We compare the efficiency of EMBAC-D, EMBAC-P, and VTBAC in Fig. 5. As expected, the efficiency of VTBAC increases with link capacity. The efficiency of EMBAC-P shows the same trend, approaches that of VTBAC as link capacity increases, and significantly outperforms that of EMBAC-D. These results show that EMBAC-P is quite effective, especially for links with larger capacity. Note that in VTBAC the link congestion used for admission control is assumed to be directly measurable from the number of flows in progress, whereas in EMBAC-P it is only measured indirectly through the packet loss rate of the probe flow. Nevertheless, EMBAC-P achieves efficiency close to that of VTBAC.
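
For intuition about the VTBAC side of this comparison, the following simplified sketch finds, for a given number of virtual trunks c, the largest offered traffic whose Erlang B blocking probability stays within the 1 percent target and reports the resulting efficiency. The article's evaluation uses simulation and also enforces the 0.1 percent packet loss target, which this sketch ignores; the trunk count of 60 and the step search are illustrative assumptions.

```python
# Simplified illustration of trunk efficiency under a blocking target,
# using the Erlang B recursion only (no packet-level effects).

def erlang_b(trunks: int, offered_erl: float) -> float:
    """Blocking probability from the Erlang B recursion."""
    b = 1.0
    for n in range(1, trunks + 1):
        b = offered_erl * b / (n + offered_erl * b)
    return b

def max_offered_traffic(trunks: int, blocking_target: float = 0.01,
                        step_erl: float = 0.1) -> float:
    """Largest offered load (searched in steps of step_erl) meeting the target."""
    a = 0.0
    while erlang_b(trunks, a + step_erl) <= blocking_target:
        a += step_erl
    return a

# Example: a link carrying c = 60 virtual trunks (e.g., 64 kb/s voice calls).
c = 60
a_max = max_offered_traffic(c)
print(round(a_max, 1), round(a_max / c, 2))  # admissible load (Erl) and efficiency A/c
```

The same trunking effect explains why both VTBAC and EMBAC-P improve with link capacity: larger links multiplex more calls and can run closer to full load at the same blocking target.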

SELECTING ADMISSION THRESHOLD AND SIZING LINK CAPACITY We assume that the network topology of nodes and links, the traffic demands between end nodes, and the traffic routing rules are given as the results of proper network planning and traffic forecasting. Using these inputs, the offered traffic on each link can be calculated in erlangs. We consider how to determine the capacity of each link so that the end-to-end QoS targets (blocking probability and packet loss rate) are fulfilled. To do this, we first allocate the end-to-end blocking probability target, B, and the average packet loss rate target, L, to each link in the VoIP network, which is termed QoS allocation. Specifically, we simply assume that B/n and L/n are allocated to each link, where n is the maximum number of links in the end-to-end paths of the network. Since end-to-end delay can be controlled by limiting both the number of hops and the size of the buffer at each router, we do not use a packet delay target explicitly.


Note that QoS allocation is a popular method in conventional circuit-switched telephone networks. We consider how to choose the admission threshold within the framework of QoS allocation. We adopt the highest-loaded-link-based approach, in which the highest loaded link is selected and sized first to meet its allocated QoS (packet loss rate and blocking probability), and the optimal admission threshold for this link is determined at the same time. The obtained admission threshold is then used to size all other links in the network. A least-loaded-link-based approach can be defined similarly. Note that in each method the same admission threshold is used to size every link, which is necessary to satisfy the end-to-end QoS targets. This dimensioning procedure is simple because the network is designed on a link-by-link basis, each link being independently sized to meet its allocated packet loss and blocking probability targets.

We compare the performance of the highest-loaded-link-based and least-loaded-link-based approaches using the simple network model shown in Fig. 4b. Nodes A, B, and C are traffic sources, and nodes G, H, and I are traffic sinks; there are three traffic flows, 1, 2, and 3, each composed of bidirectional voice packet flows. We focus on sizing the two backbone links, 1 and 2, and assume that each access link has sufficient capacity. The end-to-end blocking probability target is 2 percent, and the end-to-end packet loss rate target is 0.2 percent. The link allocations of blocking probability and packet loss rate are then 1 percent and 0.1 percent, respectively, according to the QoS allocation described above. The results are given in Fig. 6. We keep the total offered traffic of links 1 and 2 at 700 Erl and vary the ratio; thus (x : y) on the horizontal axis represents x Erl on link 1 and y Erl on link 2. The vertical axis represents the total capacity of links 1 and 2 normalized by its value when the offered traffic is uniform, 350 Erl on each link. As shown in Fig. 6, the highest-loaded-link-based approach clearly outperforms the least-loaded-link-based approach. As a reference, we also show the results when each link is sized using its own optimal admission threshold, regardless of the admission thresholds of the other links (the independent approach); in this case the end-to-end QoS targets may not be achieved. The highest-loaded-link-based approach is very close to the independent approach in required capacity, and we have confirmed that the end-to-end targets are satisfied. These results suggest that efficient capacity sizing of the highly loaded links is important for improving network-wide efficiency.
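
The overall procedure can be summarized in the sketch below: the end-to-end targets are split evenly over the n links of the longest path, the highest loaded link is dimensioned first (which fixes the admission threshold), and the remaining links are then sized with that same threshold. The per-link sizing step is only stubbed out here with an Erlang B calculation and a placeholder threshold; in the article it is carried out by simulation against both the blocking and packet loss targets, so every name and number in this sketch is an illustrative assumption.

```python
# Procedural sketch of QoS allocation and highest-loaded-link-based dimensioning.

def erlang_b(trunks: int, offered_erl: float) -> float:
    b = 1.0
    for n in range(1, trunks + 1):
        b = offered_erl * b / (n + offered_erl * b)
    return b

def size_link(offered_erl: float, blocking_target: float, loss_target: float,
              admission_threshold=None):
    """Stand-in for the per-link sizing step (done by simulation in the article).
    Only the blocking target is handled here, via Erlang B; the loss target and
    the threshold optimization are omitted, and 0.02 is a placeholder value."""
    trunks = 1
    while erlang_b(trunks, offered_erl) > blocking_target:
        trunks += 1
    threshold = admission_threshold if admission_threshold is not None else 0.02
    return trunks, threshold

def dimension_network(offered_erl_per_link: dict, end_to_end_blocking: float,
                      end_to_end_loss: float, max_hops: int) -> dict:
    per_link_blocking = end_to_end_blocking / max_hops  # QoS allocation: B/n
    per_link_loss = end_to_end_loss / max_hops          # QoS allocation: L/n
    # Highest-loaded-link-based approach: size the most loaded link first and
    # reuse its admission threshold for every other link.
    ordered = sorted(offered_erl_per_link, key=offered_erl_per_link.get, reverse=True)
    capacities, threshold = {}, None
    for link in ordered:
        trunks, threshold = size_link(offered_erl_per_link[link],
                                      per_link_blocking, per_link_loss, threshold)
        capacities[link] = trunks
    return capacities

# Example with the two backbone links of Fig. 4b and hypothetical offered loads:
print(dimension_network({"link 1": 500.0, "link 2": 200.0},
                        end_to_end_blocking=0.02, end_to_end_loss=0.002, max_hops=2))
```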

DISCUSSION We have demonstrated that reasonable network efficiency can be achieved by using a proper traffic engineering method in VoIP networks with active EMBAC. As mentioned earlier, the major drawback of active EMBAC is the increased call setup time associated with the probe flow. The amount of the increase depends on the signaling protocol and its implementation, and can be about the same as the duration of the probe flow.



In the PSTN, hop-by-hop signaling and speech path connection at each transit switch are required to set up a call, which gives rise to an end-to-end connection delay of typically about 1 s, depending on the number of hops. In active EMBAC, on the other hand, there is neither hop-by-hop signaling nor speech path connection. Thus, the call setup time increase in active EMBAC would be almost canceled by the end-to-end connection delay of the PSTN if the probe flow duration is chosen to be no greater than 1 s. This observation suggests that the drawback of active EMBAC is not always significant and can be overcome when the target is to provide PSTN-level QoS.


CONCLUSIONS We have given an overview of scalable admission control in IP networks, introduced various approaches, and discussed the mechanism and characteristics of each method. In particular, we argue that active EMBAC, which employs an end-to-end on-demand probing technique, should be used for call admission control, and we have summarized its functionalities and design issues. We then considered the use of active EMBAC in VoIP networks, presented a new probability-based EMBAC scheme, and showed that its performance is close to that of the ideal method, virtual-trunk-based admission control. We also presented a QoS allocation approach to selecting an admission threshold and dimensioning link capacities. A simple network design and evaluation results suggest that the QoS allocation approach is effective for adequately dimensioning a network while satisfying end-to-end targets in terms of blocking probability and packet loss rate. VoIP service is expanding and replacing conventional plain old telephone service, and users will expect at least the same reliability and quality as in the PSTN. To achieve this objective economically, we need to pursue resource optimization in VoIP networks, in which admission control is a key issue. Our research results show that active EMBAC is promising from the traffic engineering perspective, and we encourage further research, including the development and standardization of call setup signaling protocols capable of supporting active EMBAC. Other future research issues include optimal admission threshold selection in practical networks, network management and operation methods for adjusting the admission threshold in diverse environments, and implementation of EMBAC based on standardized protocols.


■ Figure 6. A comparison of link dimensioning methods (least-loaded-link-based, highest-loaded-link-based, and independent): normalized total link capacity versus the traffic volumes at links 1 and 2 (Erl).

REFERENCES
[1] F. Baker et al., "Aggregation of RSVP for IPv4 and IPv6," IETF RFC 3175, Sept. 2001.
[2] J. Qiu and E. W. Knightly, "Measurement-Based Admission Control with Aggregate Traffic Envelopes," IEEE/ACM Trans. Net., vol. 9, no. 2, Apr. 2001.
[3] I. Stoica and H. Zhang, "Providing Guaranteed Services without Per Flow Management," ACM SIGCOMM '99, Boston, MA, Aug. 1999.
[4] D. Awduche et al., "RSVP-TE: Extensions to RSVP for LSP Tunnels," IETF RFC 3209, Dec. 2001.
[5] C. Cetinkaya and E. W. Knightly, "Egress Admission Control," Proc. IEEE INFOCOM 2000, Tel Aviv, Israel, Mar. 2000.
[6] C. Oottamakorn and D. Bushmitch, "A Diffserv Measurement-Based Admission Control Utilizing Effective Envelopes and Service Curves," ICC 2001, 2001.
[7] R. L. Hill and H. T. Kung, "A Diff-Serv Enhanced Admission Control Scheme," GLOBECOM 2001.
[8] N. Blefari-Melazzi, M. Femminella, and F. Pugini, "Definition and Performance Evaluation of a Distributed and Stateless Algorithm for QoS Support in IP Domain with Heterogeneous Traffic," SSGRR 2001, Aug. 2001.
[9] G. Bianchi and N. Blefari-Melazzi, "Admission Control over Assured Forwarding PHBs: A Way to Provide Service Accuracy in a Diffserv Framework," GLOBECOM 2001.
[10] K. Mase and H. Kobayashi, "An Efficient End-to-End Measurement-Based Admission Control for VoIP Networks," ICC 2004, 2004.

BIOGRAPHY KENICHI MASE ([email protected]) received B. E., M. E., and Dr. Eng. degrees in electrical engineering from Waseda University, Tokyo, Japan, in 1970, 1972, and 1983. He joined NTT Public Corporation in 1972. He was executive manager, Communications Assessment Laboratory, NTT Multimedia Networks Laboratories from 1996 to 1998. He is now a professor with the Graduate School of Science and Technology, Niigata University, Japan. He received the IEICE best paper award of 1993. He is an IEICE Fellow.
