An Implementation Study of a Dynamic Inter-Domain Bandwidth Management Platform in Diffserv Networks

Junseok Hwang a,c, Steve J. Chapin b, Haci A. Mantar b, Ibrahim T. Okumus b, Rajesh Revuru a, Aseem Salama a

a School of Information Studies, Syracuse University
b L.C. Smith College of Engineering and Computer Science, Syracuse University
c College of Engineering, Seoul National University

Abstract

In this paper, we assess the scalability and efficiency of the Bandwidth Management Point (BMP), a platform for guaranteed quality-of-service in Diffserv networks. Our BMP uses centralized network state maintenance and pipe-based intra-domain resource management schemes that significantly reduce admission control time and mitigate the scalability problems present in prior work. We have designed, developed, and implemented an enhanced Simple Inter-Domain Bandwidth Broker Signaling (SIBBS) protocol for inter-domain communication. The BMP uses dynamic inter-domain pipes to handle inter-domain provisioning and dynamic provisioning schemes to increase signaling scalability. To assess our BMP implementation in terms of signaling scalability and effective resource utilization, we conducted experiments on a test-bed demonstrating that a BMP substantially increases resource utilization and scalability while requiring minimal changes in the underlying infrastructure.

Key words: Bandwidth Broker, Internet Interconnection, Differentiated Services, Middleware

Preprint submitted to NOMS, 9 August 2003

1 Introduction

The importance of providing quality-of-service (QoS) through seamlessly interconnected networks is fueling increasing interest in the next generation Internet. IETF working groups, such as the Differentiated Services (DiffServ) Working Group, have established frameworks for supporting end-to-end QoS [1] in the Internet. An important research question for the next generation Internet is how to manage bandwidth across interconnected network domains so that services can assure end-to-end QoS. Bandwidth management mechanisms for network interconnection will play a critical role in supporting end-to-end QoS network services used by high-performance distributed applications, such as grid networks. Several interconnection mechanisms have been proposed to support multiple network services [1–4]. However, there has not been sufficient research on integrating network interconnection mechanisms with distributed and dynamic bandwidth management and open QoS provisioning controls.

We designed and implemented a dynamic bandwidth management platform, the Bandwidth Management Point (BMP). The BMP architecture provides a scalable and transparent platform for cross-network open provisioning and for resource management of bandwidth markets with computational grids. Our work differs significantly from existing Bandwidth Broker designs in that the BMP is designed to support scalable dynamic provisioning over multiple domains. We achieve this goal through scalable interconnection setup, aggregation, and Service Level Specification (SLS) update mechanisms designed into our BMP communication protocol. The BMP is implemented in a distributed fashion that is independent of any particular grid system. In this paper we present the design, implementation details, and evaluation results for the BMP's scalable dynamic inter-domain resource provisioning capability. In Section 2, we briefly review related work. Section 3 presents the BMP architecture with a focus on inter-domain resource management. The inter-domain resource management mechanism itself is explained in Section 4. Section 5 gives the evaluation results and Section 6 presents concluding remarks.

Email addresses: [email protected] (Junseok Hwang), [email protected] (Steve J. Chapin), [email protected] (Haci A. Mantar), [email protected] (Ibrahim T. Okumus), [email protected] (Rajesh Revuru).
1 This project is partially supported by NSF award NMI ANI-0123939.

2 Background Work

Two QoS models have been proposed for the Internet: a per-flow QoS model [5] and a per-class QoS model [6]. The major flow-based Internet QoS model is the Intserv/RSVP model, which aims to provide strong end-to-end QoS guarantees to end hosts by making explicit flow reservations in all nodes along the path. However, due to the severe scalability problems this flow-based model creates in the core, an alternative model based on per-packet classes, Differentiated Services [6], is considered the de facto standard approach for large-scale Internet QoS. Although DiffServ has the advantages of scalability and ease of implementation, it still lacks a standard signaling mechanism and admission control framework. Currently, end-to-end QoS guarantees can be provided only on the basis of worst-case static SLAs (with long time scales of days or weeks) between customers and providers; there is no dynamic service negotiation between them. Thus, a primary concern has been achieving end-to-end QoS in a Diffserv environment in a scalable, efficient, and effective way.

Nichols et al. [7] proposed a model called the Bandwidth Broker (BB), a logical entity for Diffserv domains. Each domain has a BB, which is in charge of both intra-domain and inter-domain resource management. Terzis et al. [8] proposed a Two-Tier Resource Allocation Framework. The main idea is that a BB in each domain collects its users' and applications' requests for external resources and sets up an SLA with its neighboring domains based on aggregated traffic profiles. A BB dynamically readjusts the SLA between domains in response to substantial changes in network traffic. Braun et al. proposed an application of the BB to VPNs in [11]. In their work, the role of the BB is to dynamically adjust the VPN between customers of the same domain by taking utilization and accounting issues into account. Another BB architecture comes from the CADENUS project [17,18]. This project uses a hierarchy of BBs in which the domain is divided into small areas, each area having its own BB and the domain itself having another BB that acts as a parent of the sublevel BBs. The Active Bandwidth Broker [20] project introduced the active networking concept into the network management area. In this architecture, SLA requests arrive at the border routers, which convert them into COPS messages and communicate them to the BB. In this model, requests are refined before they reach the bandwidth broker, and the BB makes simple decisions based on the refined request received from the S-PDP. Zhang et al. [21,22] describe a model with a centralized bandwidth broker that serves edge bandwidth brokers. The edge BBs use COPS to request a block of bandwidth from the centralized BB, and then serve reservation requests until the quota is exhausted.

The QBone Signaling Design Team developed a signaling mechanism, the Simple Inter-domain Bandwidth Broker Signaling (SIBBS) [9] protocol, for inter-BB communication. A BB in an access domain is responsible for reserving end-to-end QoS across multiple domains, either by using individual demand-based end-to-end notification or by using pre-established tunnels. However, the current version of the BB architecture does not address the scalability and security of BB protocol signaling with dynamic provisioning.

3 BMP Middleware Platform

3.1 Architecture and Operation

We have implemented the Bandwidth Management Point [12,13], which not only subsumes the capabilities of a traditional bandwidth broker [4] but also provides additional functionality. We designed the BMP to handle admission control, resource management, and resource negotiation for a Diffserv domain. The BMP interacts with multiple agents in a domain to provision intra-domain resources, and with neighboring BMPs to provision inter-domain resources and to establish end-to-end reservations.

Fig. 1. Functional decomposition of BMP architecture

We achieve inter-domain resource provisioning by establishing a QoS tunnel between domains. The I-BMP (Internal BMP) and E-BMP (External BMP) are the modules that handle intra-domain and inter-domain provisioning, respectively. The E-BMP processes high-level domain topology and network state information (e.g., information on pipes spanning multiple domains). It primarily deals with resource allocation and provisioning at the network boundary among multiple domains, solving a complex set of brokerage problems among alternative options. This layer does not consider node-level functionality, detailed network state, or intra-domain topology information. The E-BMP views the domain as if it consisted only of border routers; from the E-BMP's point of view, there is only one connection between each pair of border routers for a particular QoS class. The E-BMP is responsible for negotiating and maintaining resources with the BMPs of neighboring domains in order to assure QoS support for traffic crossing its borders. It also performs admission control based on inter-domain resources (negotiated with peers and stored in the internal pipe database) and intra-domain resources provided by the I-BMP. Furthermore, the E-BMP performs policy control, such as Service Level Agreement (SLA) and SLS checks, and configures border routers with the appropriate traffic conditioning parameters. The E-BMP therefore provides a mechanism for managing the interconnection SLS with other BMP domains and interconnection points.
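To make the admission check just described concrete, the following is a minimal sketch in Python of how an E-BMP could combine its internal pipe database with an intra-domain check delegated to the I-BMP. The class and method names (Pipe, EBMP, has_intra_domain_capacity) and the data layout are illustrative assumptions, not the interfaces of the actual implementation.

from dataclasses import dataclass

@dataclass
class Pipe:
    dest_prefix: str        # destination domain IP prefix, e.g. "192.25.0.0/16"
    gwks_id: int            # Globally Well Known Service (QoS class) identifier
    capacity_kbps: int      # inter-domain capacity negotiated with the peer BMP
    reserved_kbps: int = 0  # load currently admitted into the pipe

class EBMP:
    """Sketch of the E-BMP admission check; names are illustrative only."""

    def __init__(self, pipe_db, ibmp):
        self.pipe_db = pipe_db  # internal pipe database: (prefix, gwks_id) -> Pipe
        self.ibmp = ibmp        # I-BMP handle used for intra-domain checks

    def admit(self, dest_prefix, gwks_id, rate_kbps):
        pipe = self.pipe_db.get((dest_prefix, gwks_id))
        if pipe is None:
            return False  # no tunnel yet: tunnel establishment phase is needed first
        if pipe.reserved_kbps + rate_kbps > pipe.capacity_kbps:
            return False  # pipe is full: a resize negotiation with the peer is needed
        if not self.ibmp.has_intra_domain_capacity(gwks_id, rate_kbps):
            return False  # intra-domain resources (I-BMP) cannot carry the flow
        pipe.reserved_kbps += rate_kbps
        return True

A request can thus be rejected either because the inter-domain pipe toward the destination lacks spare capacity or because the I-BMP reports insufficient intra-domain resources; in the first case the E-BMP can instead trigger a pipe resize negotiation with its peer, as described in Section 4.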

As shown in Figure 1, both the I-BMP and the E-BMP comprise a number of functional and database components. These include the inter-domain provisioning manager (IDPM) and the inter-BMP communication protocol in the E-BMP, and intra-domain pipe maintenance in the I-BMP; these components are the most relevant to the scalability and efficiency of the network.

For a BMP to provision inter-domain resources, it needs to communicate with neighboring BMPs to negotiate the allocation of the corresponding inter-domain capacity. We have chosen the Simple Inter-domain Bandwidth Broker Signaling (SIBBS) protocol as the external interface of the BMP for signaling inter-domain provisioning between neighboring BMPs. SIBBS is a product of the Internet2 QBone Signaling Design Team [9] and is the closest thing to an accepted standard in existence. Our research group is part of the QBone Signaling Design Team and actively participated in the development of the SIBBS protocol.

Fig. 2. SIBBS core tunneling (BMP inter-domain pipe)

4 Simple Inter-Domain Bandwidth Broker Signaling (SIBBS)

The SIBBS protocol is designed for inter-BB communication. A BB can be implemented in different ways in terms of resource request handling. One possibility is for a BB to deal with each request separately. This requires reserving resources for each request and keeping the state of that reservation, and is also known as microflow management. Another option is for a BB to aggregate individual requests into a single request. In this case, a BB establishes a tunnel to a destination and keeps a single state for that tunnel; all reservation requests for that destination with a specific QoS class are aggregated into the same tunnel. The SIBBS protocol is designed to work with either style of BB implementation. Among the many challenges associated with resource management is efficient and scalable inter-domain resource management. As described above, if a BB handles microflow reservations, there will be a scalability problem even worse than the one faced by RSVP. Recalling that Diffserv avoided the scalability problem of Intserv by introducing aggregated treatment, it is most reasonable to design a BB that handles reservation requests in an aggregated fashion in a Diffserv environment. In our architecture, inter-domain resources are managed via inter-domain tunnels, called pipes, and we modified SIBBS to work with our inter-domain tunneling approach.

SIBBS is a simple request-response protocol. The common SIBBS messages are Resource Allocation Request (RAR), Resource Allocation Answer (RAA), cancel (CANCEL), and cancel acknowledgement (CANCEL ACK). SIBBS supports two use cases: per-request reservations and core tunneling. In per-request reservations, each individual request (initiated by an end host) is processed by all the BBs along the path toward the destination end host, in a fashion similar to how routers process RSVP requests in IntServ. In the core tunneling case, a tunnel carrying traffic of a single class is pre-established between every possible origin and destination domain, and individual reservations are then multiplexed into the tunnel. The destination-based tunnel concept here means that the end points of the reservation are not fully specified; only the destination domain is.
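As an illustration of the message set just listed, the sketch below encodes the SIBBS message types together with the RAR/RAA fields referred to later in the text (GWKS ID, bandwidth amount, duration, destination domain IP prefix). The concrete field layout shown here is an assumption made for illustration; it is not the protocol's wire format.

from dataclasses import dataclass
from enum import Enum

class MsgType(Enum):
    RAR = 1         # Resource Allocation Request
    RAA = 2         # Resource Allocation Answer
    CANCEL = 3
    CANCEL_ACK = 4

@dataclass
class SibbsMessage:
    msg_type: MsgType
    gwks_id: int            # Globally Well Known Service identifier
    bandwidth_kbps: int
    duration_s: int
    dest_prefix: str        # destination domain IP prefix, e.g. "192.25.0.0/16"
    accepted: bool = False  # meaningful only in an RAA

def answer(rar: SibbsMessage, accepted: bool) -> SibbsMessage:
    """Build the RAA that answers a given RAR (request-response pairing)."""
    return SibbsMessage(MsgType.RAA, rar.gwks_id, rar.bandwidth_kbps,
                        rar.duration_s, rar.dest_prefix, accepted)

The same RAR/RAA pairing serves both use cases; what differs is the granularity of the reservation being requested, an individual flow or an aggregate tunnel.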

Throughout this paper we assume that all supported QoS services are Globally Well Known Services (GWKS), which simplifies resource negotiation and the complex mapping problems at interconnection points. We assume that tunnels run between source and destination domains. We also assume that all policy issues (SLA, SLS) are satisfied.

We will now explain how SIBBS works in our architecture for inter-domain resource provisioning. There are two modes of operation upon receiving a request for a particular destination: (1) if the request is the first request toward that destination, the BMP needs to establish a tunnel to that destination; (2) if another request for that destination has been received previously, the BMP already has a tunnel to it and the operation is quite different from the first case. The first case is known as the tunnel establishment phase and the second as the aggregation and provisioning phase (sketched in code below).
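The dispatch between these two phases can be summarized by the following sketch, where tunnel_db, establish_tunnel, and aggregate_into are hypothetical placeholders for the BMP's pipe database and for the two phases detailed in Sections 4.1 and 4.2.

def on_reservation_request(bmp, request):
    """Dispatch an incoming reservation request to one of the two phases."""
    pipe = bmp.tunnel_db.get((request.dest_prefix, request.gwks_id))
    if pipe is None:
        # First request toward this destination and class: tunnel establishment
        # phase (Section 4.1) builds the inter-domain pipe end to end.
        pipe = bmp.establish_tunnel(request.dest_prefix, request.gwks_id,
                                    request.bandwidth_kbps)
        if pipe is None:
            return False  # establishment was refused somewhere along the path
    # Aggregation and provisioning phase (Section 4.2): the request is
    # multiplexed into the existing pipe; no inter-domain signaling occurs
    # unless the pipe has to be resized.
    return bmp.aggregate_into(pipe, request)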

4.1 Tunnel Establishment Phase

Suppose the BMP of source domain 1 wants to establish a tunnel to destination domain 1 for a particular class (Figure 2). The procedure is as follows. The BMP of source domain 1 (BMPs1) builds an RAR message with the appropriate parameters, such as the GWKS ID, bandwidth amount, duration, and destination domain IP prefix, and then sends it to the transit domain BMP (BMPt). Upon receiving the RAR message, BMPt verifies its intra-domain resource availability (between the ingress and egress routers for this path) and the capacity of its border links. If both checks succeed, BMPt builds an RAR message and sends it to the BMP of the destination domain (BMPd1). BMPd1 checks its egress router link capacity. If sufficient capacity is available, it reserves bandwidth on its link, builds an RAA message, and sends that message to BMPt. Upon getting the RAA message, BMPt reserves resources, builds an RAA message, and sends it to BMPs1. The arrival of this RAA message at BMPs1 signals the completion of the tunnel establishment phase.
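The per-hop processing in this exchange, seen from a transit BMP such as BMPt, can be sketched as follows. The domain object and its methods (intra_domain_ok, border_link_ok, forward_downstream, reserve) are hypothetical placeholders for the checks and actions described above, not actual implementation interfaces.

def handle_tunnel_rar(rar, domain):
    """Process a tunnel-establishment RAR at a transit BMP (BMPt in Figure 2).
    Returns the boolean decision carried back upstream in the RAA."""
    # 1. Intra-domain resources between the ingress and egress routers for this path.
    if not domain.intra_domain_ok(rar["gwks_id"], rar["bandwidth_kbps"]):
        return False
    # 2. Capacity of the border link toward the destination domain.
    if not domain.border_link_ok(rar["dest_prefix"], rar["bandwidth_kbps"]):
        return False
    # 3. Forward the RAR to the next BMP toward the destination and wait for its RAA.
    if not domain.forward_downstream(rar):
        return False
    # 4. Commit the local reservation only after the downstream answer is positive,
    #    then answer the upstream BMP with a positive RAA.
    domain.reserve(rar)
    return True

Because the local reservation is committed only after a positive RAA returns from downstream, a rejection anywhere along the path leaves no stale reservation state in upstream domains.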

4.2 Aggregation and Provisioning Phase

Figure 3 depicts the operation of a BMP in response to an RAR message. At this point, we assume that the tunnels and pipes are pre-established and dynamically resized. In the following, we briefly explain the behavior of the BMP in each case shown in Figure 3. There are four cases a BMP takes into account:

Fig. 3. The activity diagram of BMP operation

(1) Both source and destination are in the same domain. The BMP grants the reservation if the RAR passes all tests along logic steps 2-3-4-5-6 in the diagram.

(2) The source is in the BMP's domain and the destination is in a different domain (steps 2-3-4-5-7). In this case, the BMP first checks the egress SLS, then checks the availability of an inter-domain tunnel to that destination, and finally checks its intra-domain pipe size. If all these tests succeed, the BMP grants the reservation. Note that although the destination is in another domain, the BMP does not signal beyond its own domain; the admission control decision depends only on local knowledge of the domain.

(3) The source is in a different domain and the destination is in the BMP's domain (steps 2-3-4-10-11-12). In this case, the BMP first checks whether the RAR complies with its ingress SLS, then checks intra-domain resource availability. If both tests succeed, the BMP accepts the request.

(4) Both source and destination are in other domains (steps 3-4-10-13); in this case, we say the BMP is serving as a transit domain for this request. A BMP in a transit domain checks both the ingress and egress SLS, then checks its intra-domain pipe size. If the results are positive, the BMP builds an RAR message with its own ID, sends it to the next BMPs along the path, and waits for an RAA message; this is the tunnel establishment process described above.

When a BMP grants a reservation, it configures the edge routers and updates the reservation and path database and the inter-domain tunnel database. The BMP then builds a Resource Allocation Answer (RAA) message and sends it to the customer. The communication and configuration between the BMP and the edge routers are done using COPS-PR. Note that all the tasks a BMP performs during aggregation have light-weight processing times; the more complex work of managing tunnel sizes is performed at relatively longer time intervals, minimizing its impact on BMP performance. The four-case admission logic is summarized in code below.
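For reference, the four cases above can be restated as a single admission function. The predicate names (in_domain, intra_ok, egress_sls_ok, ingress_sls_ok, tunnel_to, intra_pipe_ok, forward_rar) are hypothetical stand-ins for the checks described in the text; the branching mirrors the activity diagram in Figure 3.

def admit(bmp, rar):
    """Admission control during the aggregation phase (mirrors Figure 3)."""
    src_local = bmp.in_domain(rar.src)
    dst_local = bmp.in_domain(rar.dst)

    if src_local and dst_local:
        # Case 1: purely intra-domain request.
        return bmp.intra_ok(rar)
    if src_local and not dst_local:
        # Case 2: egress SLS, then inter-domain tunnel, then intra-domain pipe.
        # The decision uses only local knowledge; no signaling leaves the domain.
        return (bmp.egress_sls_ok(rar)
                and bmp.tunnel_to(rar.dst) is not None
                and bmp.intra_pipe_ok(rar))
    if not src_local and dst_local:
        # Case 3: ingress SLS, then intra-domain resource availability.
        return bmp.ingress_sls_ok(rar) and bmp.intra_ok(rar)
    # Case 4: transit domain. Check both SLSs and the intra-domain pipe, then
    # forward the RAR to the next BMP along the path and wait for its RAA.
    if bmp.ingress_sls_ok(rar) and bmp.egress_sls_ok(rar) and bmp.intra_pipe_ok(rar):
        return bmp.forward_rar(rar)
    return False

In cases 1 through 3 the decision relies only on local state, which is what keeps per-reservation handling off the inter-domain signaling path.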

5 Evaluation

The traffic behavior used for validation and performance testing of the system has an important effect on the results. Therefore, we used traffic traces collected from real networks for our experiments. We used traffic traces provided by CAIDA [16], which has advanced tools to collect and analyze data by source and destination Autonomous System (AS) and by traffic type over short time intervals (every 5 minutes). We used traces for TSRI-AS (2641) and SDSC-AS (195), which have high traffic demand, covering 150 minutes. We evaluated the BMP prototype in terms of inter-domain protocol operation, scalability, and resource utilization on our test-bed. The experiments were performed both for QBone Premium Service (QPS) [9] (Virtual Leased Line, VLL) and for delay- and loss-tolerant services.

Fig. 4. BMP test-bed

For the former, we used the parameter-based scheme, where each host requests a reservation at its peak rate. For the latter, we applied the measurement-based scheme, where end hosts request reservations based on a token bucket scheme. In the measurement-based experiments, the delay and loss tolerances were set to 1 ms and 0.1% at each node. We define the Operation Region (OR) as the cushion between the high threshold, at which the pipe size is increased when the traffic rate reaches it, and the low threshold, at which the pipe size is decreased when the traffic rate reaches it (this resizing logic is sketched in code below).

Figure 4 shows the test-bed we used for these experiments. We configured domain 1 and domain 2 as source stub domains, domain 0 as the transit domain, and domain 3 and domain 4 as destination stub domains. All experiments were run for 25 minutes. The duration of a reservation was exponentially distributed with a mean of 1 minute. The average reservation rates varied over the duration of the experiment according to the rate profile generated from the traced data. In the source domains, per-reservation conditioning was performed; in the transit and destination domains, per-pipe conditioning was applied. We also assumed that any traffic in excess of the profile was simply discarded (no remarking).
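The operation-region resizing logic defined above can be sketched as follows. The threshold fractions and the fixed resize step are illustrative assumptions; the actual thresholds are configuration parameters whose effect is studied below (Figure 9).

class PipeManager:
    """Sketch of operation-region-based pipe resizing; thresholds are illustrative."""

    def __init__(self, capacity_kbps, step_kbps, low_frac=0.3, high_frac=0.9):
        self.capacity = capacity_kbps   # current negotiated pipe size
        self.step = step_kbps           # resize granularity per negotiation
        self.low_frac = low_frac        # low threshold as a fraction of capacity
        self.high_frac = high_frac      # high threshold as a fraction of capacity
        self.demand = 0                 # aggregate admitted (or measured) rate
        self.negotiations = 0           # inter-domain resize messages sent

    def update(self, delta_kbps):
        """Apply a reservation setup (+) or tear-down (-) and resize the pipe
        only when the demand leaves the operation region."""
        self.demand += delta_kbps
        if self.demand >= self.high_frac * self.capacity:
            self.capacity += self.step   # RAR to the peer BMP: grow the pipe
            self.negotiations += 1
        elif self.demand <= self.low_frac * self.capacity and self.capacity > self.step:
            self.capacity -= self.step   # RAR/CANCEL to the peer: shrink the pipe
            self.negotiations += 1

Most setup and tear-down messages fall inside the operation region and therefore never leave the domain; only the comparatively rare resize events reach the transit BMP. Widening the region reduces negotiations further but lets the pipe track demand more loosely, which is the trade-off examined in Figure 9.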

Fig. 5. Parameter-based Update

Fig. 6. Parameter-based Utilization

Figures 5 and 6 show the parameter-based experiment results at BMP1 for reservations to destination domain 3 (192.25.0.0/16). Figure 5 shows the customer reservation demand, the actual traffic rate, and the corresponding tunnel size. As depicted, the figure reflects two important enhancements of the BMP scheme. First, the BMP dynamically modifies the pipe size in response to changes in the aggregated traffic, resulting in efficient resource utilization. Second, the number of pipe modifications is small compared to the number of customer requests, so the number of messages exchanged between BMPs is minimized. In this experiment, while BMP1 received 311 reservation setup or tear-down messages, the pipe size was modified only 23 times, meaning that the transit domain (BMP0), where scalability is the main concern, received only 23 messages. This reduces both communication overhead and processing load, which are potential bottlenecks of traditional BBs and RSVP.

Fig. 7. Measurement-based Update

Figure 6 compares the BMP scheme with a non-BMP scheme (static reservation, with no modifications during the experiment), as in the original SIBBS [9] core tunneling scheme. For the no-BMP case, we set the tunnel size to the maximum contained in the traced data. As Figure 6 shows, the BMP significantly increased resource utilization compared to the no-BMP case. On the other hand, when the tunnel size was set to the average value of the historical data, 60% of the requests were rejected. However, as the figure shows, the actual utilization (the traffic rate measured at the interface) is still much lower than the reservation demand. The reason is that we are providing QPS, which requires peak-rate-based reservation in order to minimize queuing delay and losses.

Figures 7 and 8 show the measurement-based experimental results for the delay- and loss-tolerant service, where the traffic rate is computed from data samples collected from the routers. A meter located in each router dynamically measures the priority traffic rate and reports the results to the BMP; the meter estimates the traffic rate based on the delay and loss parameters.

Fig. 8. Measurement-based Utilization

As shown in Figure 7, the pipe size modifications closely follow the actual traffic rate, and, as seen in Figure 8, the average actual utilization is higher than in the parameter-based case. As in the previous experiments, the number of pipe size modifications is significantly smaller than the number of incoming reservation messages. As Figures 5, 6, 7, and 8 show, there is a clear trade-off between the parameter-based and measurement-based schemes, and also between QPS (VLL) and loss- and delay-tolerant services. While the BMP achieves excellent QoS guarantees in the case of QPS (VLL), it results in poor resource utilization (Figure 6). On the other hand, as shown in Figure 8, it achieves high resource utilization at the cost of delay and loss. Comparing Figure 6 with Figure 8, we can clearly see that utilization increases by 30% with the measurement-based scheme, when that scheme is a viable alternative for the service or application. As Figures 5, 6, 7, and 8 illustrate, although the BMP scheme decreases the number of inter-domain negotiation messages (tunnel size modification messages) and increases utilization, the scheme involves clear trade-offs. As shown in Figure 9, the operation region has a significant effect on the number of negotiation messages and on utilization, regardless of service type.

Fig. 9. The effect of Operation Region

As expected, when the OR (Operation Region) approaches the average reservation size, negotiation is performed for almost every setup and tear-down (similar to RSVP), which in turn causes a serious scalability problem. Large OR sizes, which ameliorate the scalability problem, result in inefficient resource utilization. Thus, an appropriate OR size should be chosen carefully; a good prediction scheme such as the Network Weather Service [15] can be used to guide this choice.

6 Conclusion and future work

In this work we have presented the design and implementation of a scalable and efficient BB scheme called the Bandwidth Management Point. A BMP uses centralized network state maintenance and pipe-based intra-domain resource management schemes that significantly reduce admission control time and minimize scalability problems. We designed and implemented a scalable inter-domain communication scheme based on SIBBS. We used a pipe-based scheme in which each domain establishes pipes (core tunnels) to all possible destination domains. In addition to the pipe setup mechanism in SIBBS, we added dynamic provisioning and used different provisioning schemes. For traffic conditioning, we modified and implemented COPS-PR. These mechanisms embody a set of algorithms that can be used by software agents that manage and control network devices dynamically.

Our design, implementation, and experimental results show the significant role that bandwidth brokers can play for service providers, and address many outstanding questions about BBs. The experimental results show that an ISP can substantially improve its resource utilization, thereby increasing its revenue, while requiring minimal changes in the underlying infrastructure. The scalability and utilization results give basic guidelines that an ISP should consider in defining services and SLAs. Currently, we are working on integrating the BMP with the Open Grid Services Architecture (OGSA), making it compatible with Globus and Legion. We will also investigate a dynamic inter-domain SLS negotiation scheme that covers possible services in other domains, together with their pricing and usage rules. In this work we have relied on GWKS; however, there is a clear potential need for other services, so we will enhance the SIBBS protocol to support additional services. Furthermore, we have developed a region-based inter-domain QoS routing protocol to improve the effectiveness of the BMP in the Diffserv Internet, and a Transaction Management System to incorporate economic decisions into the BMP.

References

[1] Y. Bernet et al., "A Framework for Integrated Services Operation over Diffserv Networks", RFC 2998, Nov. 2000.
[2] M. Garrett, M. Borden, "Interoperation of Controlled-Load Service and Guaranteed Service with ATM", RFC 2381, Aug. 1998.
[3] Y. Bernet, R. Yavatkar, P. Ford, F. Baker, L. Zhang, K. Nichols, and M. Speer, "A Framework for Use of RSVP with Diffserv Networks", IETF draft, Nov. 1998.
[4] Internet2 Bandwidth Broker Working Group, "QBone Bandwidth Broker Architecture", http://qbone.internet2.edu/bb/.
[5] J. Wroclawski, "The Use of RSVP with IETF Integrated Services", RFC 2210, Sep. 1997.
[6] S. Blake et al., "An Architecture for Differentiated Services", RFC 2475, Dec. 1998.
[7] K. Nichols, V. Jacobson, L. Zhang, "A Two-bit Differentiated Services Architecture for the Internet", RFC 2638, Jul. 1999.
[8] A. Terzis, L. Wang, J. Ogawa, L. Zhang, "A Two-Tier Resource Management Model for the Internet", Global Internet '99, Dec. 1999.
[9] QBone Signaling Design Team, "Simple Inter-domain Bandwidth Broker Signaling (SIBBS)", http://qbone.internet2.edu/bb/, work in progress.
[10] X. Wang, H. Schulzrinne, "Pricing Network Resources for Adaptive Applications in a Differentiated Services Network", in Proceedings of INFOCOM 2001, Anchorage, Alaska, Apr. 2001.
[11] T. Braun, M. Günter, I. Khalil, "An Architecture for Managing QoS-enabled VPNs over the Internet", 24th IEEE Annual Conference on Local Computer Networks (LCN'99), Lowell/Boston, Massachusetts, USA, October 18-20, 1999, pp. 122-131.
[12] J. Hwang, "A Market-Based Model for the Bandwidth Management of Intserv-Diffserv QoS Interconnection: A Network Economic Approach", Ph.D. Thesis, University of Pittsburgh, 2000.
[13] H. A. Mantar, J. Hwang, I. T. Okumus, S. J. Chapin, "Inter-Domain Resource Reservation via Third Party Agent", Fifth World Multi-Conference on Systemics, Cybernetics and Informatics 2001, Jun. 2001.
[14] K. Nichols, B. Carpenter, "Definition of Differentiated Services Per Domain Behaviors and Rules for their Specification", RFC 3086, April 2001.
[15] R. Wolski, N. Spring, and J. Hayes, "The Network Weather Service: A Distributed Resource Performance Forecasting Service for Metacomputing", Journal of Future Generation Computing Systems, Volume 15, Numbers 5-6, pp. 757-768, October 1999.
[16] Cooperative Association for Internet Data Analysis (CAIDA), http://www.caida.org/.
[17] CADENUS Public Deliverables: 2.1 QoS Control in SLA Networks; 3.1 Mediation Components Release 1 Requirements and Architectures; 3.2 Service Configuration and Provisioning Framework Requirements, Architecture, Design. http://www.cadenus.org/deliverables/.
[18] G. Cortese et al., "CADENUS: Creation and Deployment of End-User Services in Premium IP Networks", IEEE Communications Magazine, Vol. 41, No. 1, January 2003.
[19] I. Khalil and T. Braun, "Implementation of a Bandwidth Broker for Dynamic End-to-End Capacity Reservation over Multiple Diffserv Domains", 4th IFIP/IEEE International Conference on Management of Multimedia Networks and Services (MMNS), Chicago, USA, Oct. 29 - Nov. 1, 2001.
[20] K. Wong, K. L. E. Law, "ABB: Active Bandwidth Broker", in Proc. SPIE ITCom 2001, Vol. 4523, pp. 253-262, Denver, August 2001.
[21] "On Scalable Network Resource Management Using Bandwidth Brokers", http://www.cs.umn.edu/research/networking/BB/.
[22] Z.-L. Zhang, Z. Duan, L. Gao, and Y. T. Hou, "Decoupling QoS control from core routers: A novel bandwidth broker architecture for scalable support of guaranteed services", in Proc. ACM SIGCOMM, Sweden, August 2000.
