IEEE/ACM TRANSACTIONS ON NETWORKING, VOL. 18, NO. 2, APRIL 2010


Provisioning of Deadline-Driven Requests With Flexible Transmission Rates in WDM Mesh Networks Dragos Andrei, Student Member, IEEE, Massimo Tornatore, Member, IEEE, Marwan Batayneh, Student Member, IEEE, Charles U. Martel, and Biswanath Mukherjee, Fellow, IEEE

Abstract—With the increasing diversity of applications supported over optical networks, new service guarantees must be offered to network customers. Among the emerging data-intensive applications are those which require their data to be transferred before a predefined deadline. We call these deadline-driven requests (DDRs). In such applications, data-transfer finish time (which must be accomplished before the deadline) is the key service guarantee that the customer wants. In fact, the amount of bandwidth allocated to transfer a request is not a concern for the customer as long as its service deadline is met. Hence, the service provider can choose the bandwidth (transmission rate) to provision the request. In this case, even though DDRs impose a deadline constraint, they provide scheduling flexibility for the service provider since it can choose the transmission rate while achieving two objectives: 1) satisfying the guaranteed deadline; and 2) optimizing the network's resource utilization. We investigate the problem of provisioning DDRs with flexible transmission rates in wavelength-division multiplexing (WDM) mesh networks, although this approach is generalizable to other networks also. We investigate several (fixed and adaptive to network state) bandwidth-allocation policies and study the benefit of allowing dynamic bandwidth adjustment, which is found to generally improve network performance. We show that the performance of the bandwidth-allocation algorithms depends on the DDR traffic distribution and on the node architecture and its parameters. In addition, we develop a mathematical formulation for our problem as a mixed integer linear program (MILP), which allows choosing flexible transmission rates and provides a lower bound for our provisioning algorithms.

Index Terms—Bandwidth-on-demand, deadline-driven request (DDR), flexible transmission rate, large data transfers, wavelength-division multiplexing (WDM) network.

Manuscript received January 20, 2009; approved by IEEE/ACM TRANSACTIONS ON NETWORKING Editor A. Somani. First published October 30, 2009; current version published April 16, 2010. This work was supported by the National Science Foundation (NSF) under Grant CNS-06-27081. Preliminary versions of this work were presented at the Optical Fiber Communications Conference (OFC), February 2008, and at the IEEE International Conference on Communications (ICC), May 2008. D. Andrei, M. Tornatore, C. U. Martel, and B. Mukherjee are with the Department of Computer Science, University of California, Davis, CA 95616 USA (e-mail: [email protected]; [email protected]; [email protected]). M. Batayneh is with Integrated Photonics Technology (IPITEK) Inc., Carlsbad, CA 92008 USA (e-mail: [email protected]). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TNET.2009.2026576

I. INTRODUCTION

TODAY, telecom networks are experiencing a large increase in the bandwidth needed by their users as well as in the diversity of the services they must support. There

are many bandwidth-hungry applications (ranging from grid computing and eScience applications to consumer applications, e.g., IPTV or video-on-demand) that require flexible bandwidth reservation and need strict quality-of-service (QoS) guarantees. The requirements of these new applications—large bandwidth, dynamism, and flexibility—can be well accommodated by optical networks using wavelength-division multiplexing (WDM) [1], particularly when they have reconfigurability capabilities. Reconfigurability can be provided by agile optical crossconnects (OXCs) and by control plane protocols such as ASON/GMPLS, which are designed to handle the automatic and dynamic provisioning of lightpaths [2]. A new switching paradigm suitable for such emerging on-demand data-intensive applications is dynamic circuit switching (DCS) [3] (based on the mature technology of Optical Circuit Switching), which can efficiently handle the “bursty traffic” generated by these applications, and transport it over high-capacity circuits (which can be sublambda granularity circuits or lightpaths) established dynamically over the WDM network backbone [3]; we consider DCS as the switching technology employed in this study. A new class of network services that may need on-demand flexible bandwidth allocation are deadline-driven applications, which require the transfer of large amounts of data before a predefined deadline. Such deadline-driven applications occur especially in the fields of eScience and high-end grid computing [4]. Let us consider a remote visualization application [4], [5] that requires the transfer of a large dataset (which could contain scientific data obtained from high-energy particle physics experiments, astronomical telescopes, medical instruments, etc.) from a remote location. Since the remote visualization (which may use costly computing resources) cannot start before all its input data is transferred, the customer can advertise his preferred visualization start time (i.e., the deadline for the large data transfer) in advance [4]–[6]. In general, any application that needs coordinated use of several resources (with a strict dependency workflow) can benefit from being deadline-aware [5]. Such deterministic reservation of resources is essential in the case of high-performance computing [4], [6]. The works in [4], [6], and [7] consider deadlines as QoS parameters for data-intensive applications. Deadline-driven applications may have diverse bandwidth and deadline requirements. For example, real-time applications, such as large bulk data transfers or stock market information exchange applications, require immediate service, while database/server backup applications may require a large bandwidth, but not necessarily immediately, thus having looser deadlines. The possibility to use different transmission rates to serve an application combined with the deadline requirements creates different scenarios by which the service provider can serve these



deadline-driven requests (DDRs). This creates opportunities for the service provider to enhance the network performance by exploiting these opportunities while meeting the customer’s requirements (i.e., deadlines). Consider a scenario in which a user needs to transfer a large data file, e.g., a 10-GB (80 Gb) file. Without counting propagation delays, the transfer could finish in 8 s if it is offered a 10-Gbps channel, or in 80 s if a 1-Gbps bandwidth pipe is provided. The user states his preferred deadline to the service provider. If the user can tolerate a maximum transfer time of 80 s, the service provider can allocate either of the two transmission rates (1 or 10 Gbps), as the deadline is met in both cases. Since the service provider’s objective is to assign the transmission rate so that the network’s bandwidth is utilized efficiently, the question is how to determine this rate considering the network’s state? In this paper, we investigate the problem of DDR provisioning in WDM networks by exploiting the opportunity of allocating flexible bandwidth to the requests. Hence, our work uses traffic grooming [8], [9] (which is the problem of efficiently aggregating a set of low-speed requests onto high-capacity channels). However, in contrast to traditional traffic grooming (presented later in this section), in which requests have predefined bandwidth requirements (e.g., OC-48 traffic over OC-192 channels), in our study, the combination of file sizes, deadlines, and network state determines the bandwidth that will be allocated to the incoming request, and our algorithms try to improve network performance while satisfying the DDR’s deadline. Since the performance of a WDM network also depends on the type of node architecture used, we study the problem using the two dominant OXC architectures in WDM networks: opaque and hybrid, which are characterized by complete and partial optical–electronic–optical (OEO) conversion, respectively. We first propose routing and wavelength assignment algorithms, suitable for our DDR-oriented problem, and then allocate bandwidth to DDRs. Our bandwidth-allocation policies are: 1) fixed, which use the same predefined policy for all the requests; 2) adaptive, which consider network state when allocating bandwidth; and 3) with changing rates, in which the algorithms allow readjustment of the transmission rate for ongoing DDRs. We also provide a mixed integer linear programming (MILP) formulation for our problem. Our results show that the node architectures and their parameters, as well as the DDR bandwidth distributions, significantly impact the performance of our algorithms, that the changing rates algorithms usually improve over the performance of our other provisioning approaches, and that provisioning DDRs in an opaque network generally accepts more service than that in a hybrid network. The important problem of traffic grooming is well researched in the literature. Works in [10] and [11] consider the traffic grooming problem in WDM networks under a static traffic scenario. Traffic grooming is also considered in a dynamic environment: The work in [12] proposes a graph model for dynamic traffic grooming, while in [13], the authors study the performance of optical grooming switches in a dynamic environment. In [14], it is shown that, by using the requests’ holding time information, the performance of dynamic traffic grooming can be improved.


Accommodating guaranteed network services in WDM networks under various traffic models has also been addressed in the literature. In the scheduled traffic model [15], [16], lightpath demands with predefined knowledge of set-up and tear-down times, called scheduled lightpath demands (SLDs), are considered. The focus in [15] and [16] is to encourage the reuse of network resources by time-disjoint demands. An extension of the scheduled traffic model is the sliding scheduled traffic model [17], [18], where requests of a fixed holding time are allowed to slide in a predefined time window. In [19], the authors consider an approach for provisioning dynamic sliding scheduled requests. The work in [20] designs routing and wavelength assignment (RWA) [21] algorithms for accommodating advance reservations in WDM networks. This work considers three flavors of advance reservations: 1) with specified start time and duration (similar in concept to SLDs); 2) with specified start time and unspecified duration; and 3) with specified duration and unspecified start time. The work in [6] designs a deadline-aware scheduling scheme for resource reservation in lambda grids; however, in contrast with our work, it does not consider the routing or the possibility of grooming the requests. In [22], the authors devise heuristics and an ILP solution for accommodating large data transfers in lambda grids, while in [23], an approach that provisions data-aggregation requests over WDM networks is proposed. Considering related works from the general computer science field, our problem has common features with fundamental resource-allocation problems (such as the dynamic storage allocation problem [24]); however, these efficient resource-allocation algorithms cannot be directly utilized by our DDR-oriented problem, in which the resources (wavelength-links) over which bandwidth is allocated are not independent, but interrelated through a mesh connectivity; in addition, our problem addresses the complexity of flexible rate allocation.

To the best of our knowledge, our study is the first to investigate the importance of allowing flexible transmission rates when provisioning DDRs in WDM optical networks. Today, this flexibility in the choice of the bit rate to support large data transfers can be achieved by using reconfigurable optical add-drop multiplexers (ROADMs), which are able to accommodate multigranularity traffic. Thanks to flexible transmission rates, service providers now have the opportunity to improve network resource utilization (and network cost), while still meeting their customers' deadlines.

The rest of the paper is organized as follows. Section II presents the problem and node architectures. Section III presents the RWA algorithms, the bandwidth-allocation schemes, the MILP formulation, and the changing-rates algorithms. In Section IV, we discuss illustrative numerical results. Finally, Section V concludes the paper.

II. PROBLEM DESCRIPTION

In this section, we formally describe the characteristics of our DDR provisioning problem. We are given a WDM mesh network, with its physical topology represented by a graph G(V, E, W), where V is the set of nodes, E is the set of fiber links, and W is the set of wavelengths on each link. Each physical link has W wavelengths, each of capacity C (e.g., 10 Gbps).




Fig. 1. Node architectures. (a) Hybrid architecture. (b) Opaque architecture.

We need to provision dynamically arriving DDRs. Each incoming DDR is defined by the tuple

R = (s, d, t_a, F, D)   (1)

where s is the source node, d is the destination node, t_a is the arrival time of R, F is the size of the large file to be transferred, and D is the deadline of R, specified by the network customer, which is defined as the difference between the maximum time when the file must be fully transferred and the arrival time t_a. DDR R is considered provisioned if we can choose, as the bandwidth allocated to R, a transmission rate r such that

F/D ≤ r ≤ C   (2)

where F/D is the minimum required rate to meet the deadline of the request, i.e., r_min = F/D. Note that, for the large file sizes considered here, propagation delays (on the order of tens of ms for the typical backbone mesh network) are negligible compared to the large transmission time. In addition, the rate r allocated for the request cannot exceed the line rate (which in our problem is a wavelength's capacity C). Thus, the holding time T = F/r of R ranges between F/C, obtained when r = C, and D, obtained when r = r_min.

To provision a DDR R, we need to do the following:
• Find a route P and assign wavelengths to R, so that there is enough bandwidth on P to meet R's deadline D.
• Determine a specific transmission rate r [from the bandwidth range in (2)], which will be allocated to R, with the objective stated below. To choose this specific rate r, we may use one of the following types of bandwidth-allocation algorithms (presented in detail in Sections III-C and E):
— Fixed allocation: Allocate a fixed amount of bandwidth to R, depending on its r_min. We choose to allocate either the maximum end-to-end available bandwidth on R's chosen path (the maximum-rate policy) or the minimum bandwidth r_min required to meet R's deadline (the minimum-rate policy).
— Adaptive allocation: Use network-state information to improve the performance of the bandwidth-allocation algorithm. A first policy, simply called "Adaptive," uses link-congestion information to determine which fixed allocation policy to use. A second policy, called "Proportional," allocates bandwidth to each request proportionally to its r_min.
— Changing Rates: Allow the transmission rate of existing requests to change over time to accommodate new requests that cannot be provisioned otherwise.
The objective of our DDR provisioning algorithms is to satisfy the current request while retaining maximum resources unused to accommodate future traffic.
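For illustration, condition (2) and the resulting holding-time range can be checked as follows; the function and variable names below are ours, chosen only for this sketch, and are not part of the paper's notation.

```python
# Illustrative check of condition (2): a DDR with file size F (bits) and
# deadline D (s) is provisionable at some rate r with r_min = F/D <= r <= C,
# where C is the wavelength capacity in bits per second.
def feasible_rate_range(file_size_bits, deadline_s, wavelength_capacity_bps):
    r_min = file_size_bits / deadline_s          # minimum rate meeting the deadline
    if r_min > wavelength_capacity_bps:
        return None                              # deadline cannot be met on one channel
    return r_min, wavelength_capacity_bps        # any r in this range satisfies (2)

# Example from Section I: an 80-Gb (10-GB) file with an 80-s deadline.
lo, hi = feasible_rate_range(80e9, 80.0, 10e9)   # lo = 1 Gbps, hi = 10 Gbps
holding_time = lambda rate: 80e9 / rate          # T = F/r: 8 s at 10 Gbps, 80 s at 1 Gbps
```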

A. Node Architectures

We briefly present the two typical node architectures for WDM mesh networks for which we design our algorithms. Fig. 1(a) shows the hybrid switch architecture. This architecture has two components: 1) an OXC, which can optically bypass the incoming lightpaths; and 2) an electronic switch (e.g., an IP router), where lightpaths can be initiated/terminated. Notice that if a lightpath simply bypasses the node optically (through the OXC), it must use the same wavelength on its input and output fibers (the abbreviations in Fig. 1 stand for wavelength add/drop). Fig. 1(b) presents the opaque switch architecture, which performs full OEO conversion. Here, an incoming lightpath is first demultiplexed down to the lowest electronic port speed granularity, while electronic signals are multiplexed onto outgoing lightpaths. Therefore, opaque OXCs can perform wavelength conversion.

III. PROVISIONING OF DEADLINE-DRIVEN REQUESTS

A. DDR RWA for Hybrid Architecture

Upon arrival of a DDR R, we search for a route P with at least r_min unused bandwidth and for a feasible wavelength assignment. If a valid RWA for R is found, we provision R. Path P can span one or multiple lightpaths from the source to the destination of the request. The set of all lightpaths in the network forms the virtual topology. The term virtual link is used to denote a lightpath.

1) RWA Algorithm for Hybrid Architecture: For our route computation, we use an integrated architectural model (in which both virtual and physical link information are known in a unified control plane) [8].



In the existing traffic grooming literature, one approach that uses an integrated model is the auxiliary graph model [10], [12], [13], [25]. We also use a lightweight (and computationally effective) auxiliary graph, suited for our DDR-oriented problem.

For each DDR R, we create an auxiliary graph G_aux depending on the current network state. Thus, G_aux contains information from both the current virtual topology G_V and the current available physical topology graph G_P. G_V consists of all existing lightpaths, while G_P contains the status of free wavelengths in the physical network. G_aux has the same vertex set as the physical topology, but its edges have a different meaning. For any node pair, auxiliary graph G_aux has an edge if either: 1) there is an existing lightpath between the two nodes with enough free capacity for R (i.e., at least r_min); or 2) there is a direct physical link between the nodes with at least one unused wavelength.

Algorithm 1: RWA for Hybrid Architecture
Input: DDR R = (s, d, t_a, F, D); current network state consisting of G_V and G_P.
Output: Virtual path P for R, consisting of a sequence of lightpaths; null, if no feasible path is found.
1) Construct auxiliary graph G_aux:
a) If there exists a lightpath between a node pair with free capacity at least r_min, add a corresponding edge to the auxiliary graph G_aux.
b) Copy into G_aux all physical links that have at least one unused wavelength, if no existing lightpath was already added into G_aux between that node pair in Step 1a.
c) The weights of the edges of G_aux (virtual and physical edges) are assigned depending on the grooming policy.
2) Generate a set of K-Shortest Paths between s and d on graph G_aux. Any such path is a sequence of existing lightpaths and physical links.
3) For each path in the set, do:
a) Transform the path into a sequence of lightpaths by creating new lightpaths from the physical links on it. To maintain wavelength continuity, transform sequences of physical links located between two consecutive existing lightpaths on the path into new lightpaths, by segmenting the physical links into wavelength-continuous lightpaths (see Section III-A2).
b) If there are enough transceivers at the intermediary nodes on the path to set up the new lightpaths, form virtual path P using the existing/new lightpaths and Return P.
EndFor
4) If no route is found, Return null.

Our RWA algorithm for the Hybrid architecture is described in Algorithm 1. We can either use existing lightpaths or create new ones by using free physical resources. Depending on the grooming policy used, different weights are assigned to the edges in the auxiliary graph G_aux (as detailed below). Minimum-weight path algorithms are then applied on G_aux.
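A minimal sketch of Algorithm 1 is given below, using networkx for the K-shortest-path search. The object attributes (free_capacity, free_wavelengths, src, dst) and the helpers segment() and enough_transceivers() are placeholders of our own, not names taken from the paper.

```python
# Illustrative sketch of Algorithm 1 (RWA for the Hybrid architecture).
from itertools import islice
import networkx as nx

def build_aux_graph(nodes, lightpaths, physical_links, r_min, weight_fn):
    """Step 1: one edge per node pair, preferring an existing lightpath with
    at least r_min free capacity, else a physical link with a free wavelength
    (Step 1b), weighted according to the grooming policy (Step 1c)."""
    g = nx.Graph()
    g.add_nodes_from(nodes)
    for lp in lightpaths:                          # existing virtual links
        if lp.free_capacity >= r_min:
            g.add_edge(lp.src, lp.dst, kind="lightpath", obj=lp,
                       weight=weight_fn(lp))
    for link in physical_links:                    # unused physical capacity
        if link.free_wavelengths and not g.has_edge(link.src, link.dst):
            g.add_edge(link.src, link.dst, kind="physical", obj=link,
                       weight=weight_fn(link))
    return g

def rwa_hybrid(request, nodes, lightpaths, physical_links, r_min,
               weight_fn, k=3):
    g = build_aux_graph(nodes, lightpaths, physical_links, r_min, weight_fn)
    try:
        # Step 2: K shortest candidate paths on the auxiliary graph.
        for path in islice(nx.shortest_simple_paths(
                g, request.src, request.dst, weight="weight"), k):
            edges = [g[u][v] for u, v in zip(path, path[1:])]
            # Step 3: turn runs of physical links into wavelength-continuous
            # lightpaths (Segmentation) and check transceiver availability.
            virtual_path = segment(edges)
            if virtual_path is not None and enough_transceivers(virtual_path):
                return virtual_path
    except nx.NetworkXNoPath:
        pass
    return None                                    # Step 4: no feasible route
```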


Fig. 2. Segmentation example: Path P (formed from existing lightpaths and physical links) is transformed into virtual path P .

The routing step in Algorithm 1 uses the K-Shortest-Paths algorithm [26]. The paths obtained from applying it on G_aux (in Step 2 of Algorithm 1) are sequences of existing lightpaths and/or physical links. Our algorithm must create lightpaths from these physical links (i.e., perform wavelength assignment) while respecting the wavelength-continuity constraint. We name this part of our algorithm Segmentation. It is summarized in Step 3a of Algorithm 1 and detailed next.

In Step 1c of Algorithm 1, weights are assigned to the edges of auxiliary graph G_aux. For our experimental results, we use a congestion-aware policy which prefers paths that are less congested (we also experimented with other grooming policies, e.g., a policy that puts the emphasis on having short paths, by assigning uniform costs to physical links and to lightpath physical hops). A congested link is a link with many utilized wavelengths out of the W wavelengths per link. Use of a congested link should be avoided, as the network connectivity will be degraded if, for example, all wavelengths of a link are fully utilized. Our congestion-aware policy increases the weights of congested links; however, these weights should not be too large compared to the weights of lightly utilized links (or to the weights of existing lightpaths), or else the congested links would never be used. Therefore, we set the weight of an existing lightpath to the number of its physical hops, and the weight of a physical link to a value that grows with the number of used wavelengths on the link (inspired by the formula for the average delay of M/M/1 queues [27]). This weight depends on the number of wavelengths per link, on the number of used wavelengths on the link, on a parameter chosen such that the weight of a highly congested link is not too large, and on a scaling constant used to make the weights of lightly utilized physical links close to the weight of one physical hop of an existing lightpath, so that excessively long paths are avoided (fixed values of these two parameters are used for our numerical results).

2) Segmentation Algorithm: We use the Segmentation algorithm to maintain the wavelength-continuity constraint on the path computed in Step 2 of Algorithm 1. Fig. 2 shows such a path, consisting of three physical links which are located between two existing lightpaths. We cannot simply create a lightpath between nodes 2 and 5 since there is no single common free wavelength on links 2-3, 3-4, and 4-5. Hence, we must segment the physical path 2-3-4-5 into two new lightpaths, which respect the wavelength-continuity constraint. For all contiguous physical links on the path (links 2-3, 3-4, and 4-5), the goal of our Segmentation approach is to create new lightpaths that span as many physical hops as possible. This will result in a small number of lightpaths between s and d (respectively, nodes 1 and 6).
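One possible concrete form of the congestion-aware weight assignment of Step 1c is sketched below. The exact formula and parameter values are not reproduced here, so ALPHA and BETA are illustrative placeholders that merely follow the stated M/M/1 analogy.

```python
# Plausible congestion-aware edge weights (assumption, not the paper's exact formula).
ALPHA = 0.5   # keeps the weight of a fully loaded link finite / not too large
BETA = 1.0    # scales lightly loaded physical links toward one lightpath hop

def lightpath_weight(lightpath):
    # An existing lightpath is weighted by the number of physical hops it spans.
    return len(lightpath.physical_hops)

def physical_link_weight(link, num_wavelengths):
    used = num_wavelengths - len(link.free_wavelengths)
    # Weight grows as the link fills up, in the spirit of the M/M/1 delay formula.
    return BETA * num_wavelengths / (num_wavelengths - used + ALPHA)
```

With such weights, a nearly empty link costs about as much as one lightpath hop, while a nearly full link becomes expensive but never unusable.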



The Segmentation algorithm works as follows: Maintain a set S of free wavelengths for the "physical subpath" between two consecutive lightpaths (e.g., path 2-3-4-5 in Fig. 2). Initially, the set S includes all possible wavelengths. For each physical link on the subpath, intersect the wavelength set S with the free wavelengths of that link, until S becomes empty or until we are finished with all the links in the subpath. If no free wavelength remains in the wavelength set S, backtrack to the previous link, choose the first-fit wavelength [21], and segment the lightpath here (we also check if there are enough free transceivers to set up the lightpath). The Segmentation algorithm continues until we pass through all the links in the subpath.

We illustrate the idea of the Segmentation algorithm by using Fig. 2. The "physical subpath" is 2-3-4-5, and the initial wavelength set S contains all available wavelengths (the free wavelengths for each link are listed in Fig. 2). After we pass link 2-3, S still includes two wavelengths; after link 3-4, only one; and after link 4-5, S is empty. We backtrack to link 3-4, segment at node 4, and use the remaining common wavelength to set up a lightpath between nodes 2 and 4. Similarly, we set up a lightpath between nodes 4 and 5.

B. DDR RWA for Opaque Architecture

Our RWA algorithm for a network with opaque nodes is based on the same general principles as Algorithm 1, i.e., use of an auxiliary graph created for each DDR, and then selection of a path by using a minimum-weight algorithm on that graph. However, the differences between Algorithm 1 and the Opaque RWA come from the different physical properties of Opaque and Hybrid OXCs (see Section II-A), namely: 1) for the network with opaque switches, every utilized wavelength channel on each fiber link forms a lightpath between two adjacent OXCs, and therefore the virtual topology is the same as the physical topology; and 2) Opaque OXCs provide wavelength conversion.

In order to compute the paths, our Opaque RWA algorithm uses the knowledge of which bandwidth-allocation algorithm is utilized by the service provider. An overview of the bandwidth-allocation algorithms was given in Section II, and they are detailed in Section III-C. The Opaque RWA algorithm works as follows. First, a graph formed of all links that have wavelength(s) with remaining capacity at least r_min is constructed. On this graph, we generate a set of K-Shortest Paths between s and d. Since our numerical results show that having short paths is important for the Opaque algorithm's performance, we only consider the paths with the same number of hops as the minimum-cost path obtained. Next, for each such path, we do the following. If the bandwidth-allocation policy is the minimum-rate policy, then, in order to find a wavelength assignment (WA) for each link, we first try to find already-utilized wavelengths on the link with free capacity at least r_min (among all these wavelengths, we choose the one with the smallest remaining capacity). If no such wavelength is found on the link, we choose a free wavelength by using the first-fit policy [21]. For the maximum-rate policy (which does not perform grooming), a free wavelength is always selected by using first-fit. If more feasible paths are found, we choose the path with the smallest allocated capacity end-to-end.
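A minimal sketch of the Segmentation step follows, under the assumption that each physical link on the subpath exposes its set of free wavelength indices and has at least one free wavelength (as guaranteed by the auxiliary-graph construction); the result format is our own choice.

```python
# Sketch of Segmentation (Section III-A2): split a run of physical links
# between two existing lightpaths into wavelength-continuous lightpaths.
def segment_subpath(subpath):
    lightpaths = []
    start = 0                                     # first link of current segment
    common = set(subpath[0].free_wavelengths)
    for i, link in enumerate(subpath):
        if i > start:
            common &= link.free_wavelengths
        if not common:
            # Backtrack: close the segment at the previous link using the
            # first-fit (lowest-index) wavelength still common to it.
            prev_common = set.intersection(
                *(set(l.free_wavelengths) for l in subpath[start:i]))
            lightpaths.append((start, i, min(prev_common)))   # links [start, i)
            start = i                             # new segment starts here
            common = set(link.free_wavelengths)
    lightpaths.append((start, len(subpath), min(common)))     # close last segment
    return lightpaths                             # (first_link, last_link, wavelength)
```

For the subpath of Fig. 2 this returns one lightpath spanning links 2-3 and 3-4 and a second one over link 4-5, matching the example above.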


Fig. 3. Maximum available bandwidth over all the lightpaths of path P is B_max.

C. Bandwidth-Allocation Algorithms

The customer's main requirement is to meet the DDR's deadline. However, the service provider's objective is to design bandwidth-allocation policies that maximize network utilization and consequently minimize the resources used. In this section, we design algorithms for deciding what bandwidth should be allocated to the request. Note that these allocation policies can be used with either the Hybrid or the Opaque architecture.

1) Fixed Bandwidth-Allocation Policies: Consider path P for request R (computed as in Section III-A or Section III-B), formed of one or multiple lightpaths. Let B_max be the maximum available bandwidth over all of P's lightpaths (see Fig. 3). The maximum bandwidth that can be offered to R is B_max, while the minimum is r_min.

Our first bandwidth-allocation policy is the maximum-rate policy: After computing path P, always allocate the maximum available bandwidth (B_max) to the DDR. Intuitively, its advantage is that the whole bandwidth is used efficiently, and no bandwidth remains idle, but the downside is that it can create resource contention and congestion at high network loads.

Our second policy is the minimum-rate policy. In this case, we offer r_min bandwidth to the request. The advantage of this policy is that it tends to "pack" (groom) the requests very densely on the lightpaths, leaving room for future requests.

A third policy, which is a combination of the two, can be devised: allocate a bandwidth between r_min and B_max, controlled by a weighting parameter. Intuitively, for larger values of the parameter this policy would perform close to the maximum-rate policy, while for smaller values it would perform closer to the minimum-rate policy.

2) Adaptive Bandwidth-Allocation Policies: The policies described so far assign fixed bandwidth, so they do not consider the current network state. In this section, we devise adaptive approaches, which make use of current network-state information. The information that can be employed by the adaptive approaches is (in addition to r_min and B_max): 1) the congestion state along the chosen virtual path P; and 2) the type of the incoming request's r_min compared with the r_min of other requests, along with information about the r_min distributions.

Adaptive Policy: This approach considers the congestion (number of utilized wavelengths) of all the physical links of the lightpaths that form path P when determining which fixed policy to use. The main idea here is that path P may pass through links that are already congested. In this case, allocating B_max to request R is not a good policy, as congested links will become even more congested for future requests. Hence, for congested links, it is better to allocate r_min bandwidth to R. On the other hand, if none of the links is highly congested, it may be better to use the maximum-rate policy, as there is enough free capacity to service future requests that may need one of the links on R's path.



We show the idea of the "Adaptive" algorithm on the network used in our simulations (Fig. 5). Considering uniform traffic requests between any source-destination node pair, the links at the network center (e.g., 9-12, 12-13, 10-13), which connect nodes of high degree, are usually more congested than links at the periphery of the network (e.g., 1-2, 23-24). If we compare a request between nodes 1 and 24, which finds path 1-6-9-12-13-17-23-24, to a request between nodes 1 and 5, which finds path 1-2-3-5, it is better to assign r_min bandwidth to the first DDR, since it traverses links which are (or may in the future be) highly utilized, and B_max to the second DDR, as we do not need to leave so much room for future requests on less-congested links.

The Adaptive algorithm allocates r_min if there are congested links along path P, else it allocates B_max. To define congestion, we choose a threshold on the number of used wavelengths. If there is at least one link on path P with more used wavelengths than the threshold, path P is considered congested.

Proportional Policy: This policy is based on the idea that DDRs with different r_min should not be given the same amount of bandwidth. A transfer requiring 500 Mbps need not be allocated the same bandwidth as one requiring 5 Gbps. If it is allocated a lot of bandwidth, the first transfer will finish quickly, but it will make bandwidth unavailable for future requests. Allocating bandwidth proportional to the r_min of the request seems like an attractive policy.

We can use information about the r_min distributions to decide what bandwidth to allocate to the DDR. To understand our assumptions about how DDR traffic is generated, and that we consider the service provider to have classes of used bandwidth distributions, please see the DDR traffic demand model detailed in Section IV. The Proportional policy also considers that the service provider has previous statistics about the user request pattern, so it can compute the expected r_min of all requests. The Proportional policy is described in Algorithm 2.

Algorithm 2: Allocate Proportional
Input: DDR R of minimum rate r_min; maximum bandwidth B_max we can allocate to R (see Fig. 3); expected r_min of all requests.
Output: Bandwidth allocated to R.
1) Objective: Allocate proportional bandwidth depending on R's r_min and the r_min distribution.
2) Compute a tentative bandwidth b for R, proportional to its r_min relative to the expected r_min of all requests.
3) Attempt to allocate b, which would be fair to all requests of different r_min.
4) If b < r_min, allocate r_min. Else if b > B_max, allocate B_max. Else allocate b.
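The sketch below summarizes the allocation rules of this section. The function names, the congestion-threshold value, and the proportional scaling factor are illustrative assumptions rather than values taken from this paper.

```python
# Sketch of the fixed and adaptive bandwidth-allocation policies (Section III-C).
CONGESTION_THRESHOLD = 12        # used wavelengths above which a link is "congested"

def alloc_max(r_min, b_max, path):
    return b_max                 # always give the whole available bandwidth

def alloc_min(r_min, b_max, path):
    return r_min                 # just enough to meet the deadline

def alloc_adaptive(r_min, b_max, path):
    # Fall back to the minimum rate whenever the path touches a congested link.
    congested = any(link.used_wavelengths > CONGESTION_THRESHOLD
                    for lp in path.lightpaths for link in lp.physical_links)
    return r_min if congested else b_max

def alloc_proportional(r_min, b_max, expected_r_min, ref_bandwidth):
    # Algorithm 2: scale a reference bandwidth by how large this request's
    # minimum rate is compared to the expected minimum rate of all requests.
    b = ref_bandwidth * (r_min / expected_r_min)
    return min(max(b, r_min), b_max)   # clamp into the feasible range of (2)
```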

D. Mathematical Model

So far, we have examined RWA and bandwidth-allocation algorithms for DDRs. In order to better understand our problem, we state it as a MILP, which can solve the RWA and bandwidth-allocation subproblems together.


There are three variations of our MILP. The first allocates flexible bandwidth to the requests; hence, we refer to it as the flexible-rate MILP (by analogy with the policies in Section III-C2). The other two allocate fixed bandwidth to the requests and are the fixed-rate MILPs, since they use the maximum-rate and minimum-rate bandwidth-allocation policies. These MILP formulations can be used as benchmarks for our heuristic provisioning approaches. Our MILP model assumes that all DDR arrivals and deadlines are known, hence it is based on static traffic. However, the solution of the MILP constitutes a valid lower bound on the performance of our provisioning approaches (which consider a dynamic traffic environment). Our MILPs provision DDRs in a network equipped with opaque OXCs. The three MILP formulations are computationally complex, especially the flexible-rate one, as it includes: 1) selection of the appropriate bandwidth for DDR R_i, which can be translated into a flexible finish time for the transmission of R_i's data; 2) RWA and grooming; and 3) constraints for time-disjointedness of requests that share common resources. That is why we simplify the routing, by considering only a few alternate routes for each DDR, an approach utilized in other works that consider time-domain scheduling [15], [16], [18]. The formulation for the flexible-rate MILP is given here.

Given:
— Graph G(V, E, W) representing the physical topology of the network, as defined in Section II.
— Set of DDRs; each DDR R_i has the notation in (1). Note that, for each request R_i, the arrival time, deadline D_i, and file size F_i are known.
— Sublambda granularity: In this MILP, the wavelength capacity is divided into sublambdas, for which we maintain the resource utilizations; each sublambda has a capacity equal to the line rate C divided by the number of sublambdas per wavelength. For example, in a network of line rate 10 Gbps, if each wavelength is divided into four sublambdas, the smallest sublambda is 2.5 Gbps.
— Binary inputs representing predefined paths: For each request R_i, we precompute the K-Shortest Paths [26]; the input equals 1 if R_i's k-th path uses fiber link l, and 0 otherwise.
— K_i: Number of different transmission rates that could be allocated to request R_i.
— For each request R_i, we construct an ordered set S_i of K_i (bandwidth, holding-time) pairs. Set S_i only maintains the pairs for which R_i's deadline is met, i.e., pairs whose holding time does not exceed D_i. Hence, any of the pairs in S_i can be allocated to R_i.

Variables:
— delta_{i,k}: 1 if request R_i is allocated the k-th (bandwidth, holding-time) pair of S_i; otherwise 0.
— b_i: Bandwidth assigned to request R_i, expressed as an integer number of sublambdas. Both b_i and h_i (the variable presented next) are auxiliary variables (computed from delta_{i,k} and the pairs in set S_i), used to simplify the description of the equations.



— h_i: Holding time to be assigned to request R_i.
— Virtual connectivity variables: 1 if R_i is routed through fiber link l, wavelength w, and uses sublambda s on this wavelength-link; otherwise 0.
— Path variables: Route chosen for request R_i (from its precomputed path set); 1 if the request is routed on its k-th path; otherwise 0.
— Link-usage variables: 1 if request R_i is routed through fiber link l; otherwise 0.

Constraints: To keep our mathematical formulation simple, we use logical constraints (e.g., implications, disjunctions) in our model. These constraints can be easily linearized by adding auxiliary integer variables. In addition, commercial MILP solvers (e.g., CPLEX [28]) allow the specification of logical constraints in their optimization models [28].

1) Flexible transmission rate constraints, (3)-(5): Equation (3) states that exactly one (bandwidth, holding-time) pair is chosen for request R_i. Equation (4) fixes R_i's bandwidth b_i, by selecting the appropriate precomputed bandwidth (depending on the value of delta_{i,k}). Similarly, (5) fixes R_i's holding time h_i.

2) Routing, wavelength, and sublambda assignment, (6)-(9): Equation (6) ensures that at most a single path can be chosen to route request R_i. If no path is chosen to route R_i, this request cannot be provisioned. Equation (7) constrains that R_i is either routed on wavelength w of fiber link l with all its allocated bandwidth (b_i sublambdas), or it is not routed on this wavelength-link. Equation (8) establishes whether request R_i uses link l, by considering the path variables. Equation (9) connects the path and virtual connectivity variables, by using the link-usage variables.

3) Time-domain constraints, (10): Equation (10) states that any two distinct requests either do not overlap in time [first line of (10)], or they must not share the same physical resources [second line of (10)].

We consider two alternate optimization goals. Objective A: Maximize the number of accepted requests, (11). Equation (11) counts the number of accepted requests by considering which requests found paths for their file transfers. Objective B: Maximize total network throughput, (12). Equation (12) considers the total data transferred for each DDR and provisions the requests that provide maximum throughput.

Note that the precomputed bandwidths and holding times in S_i are given (constants), so (4), (5), and (8) are linear. The two MILP formulations that allocate fixed bandwidth (the maximum-rate and minimum-rate MILPs) are obtained by removing (3)-(5) from the flexible-rate model and fixing b_i and h_i depending on the policy. For these MILPs, the finish time of the DDR transfers is known (as the rate is fixed); thus, the requests become close to scheduled lightpath demands (SLDs), as defined in [15] and [16]. However, as a difference, our fixed-rate MILPs accommodate sublambda granularity requests, while the works in [15] and [16] do not consider grooming.
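For concreteness, with (b_{i,k}, h_{i,k}) denoting the k-th (bandwidth, holding-time) pair of S_i, the flexible-rate constraints (3)-(5) can be written in a form such as the following; this is a plausible reconstruction consistent with the description above, not necessarily the authors' exact formulation.

```latex
% Plausible form of constraints (3)-(5); (b_{i,k}, h_{i,k}) is the k-th pair of S_i.
\begin{align}
\sum_{k=1}^{K_i} \delta_{i,k} &= 1 && \forall i \tag{3}\\
b_i &= \sum_{k=1}^{K_i} \delta_{i,k}\, b_{i,k} && \forall i \tag{4}\\
h_i &= \sum_{k=1}^{K_i} \delta_{i,k}\, h_{i,k} && \forall i \tag{5}
\end{align}
```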

E. Changing-Rates Algorithm

The bandwidth-allocation algorithms in Section III-C and the mathematical models in Section III-D focus on directly deciding what bandwidth to allocate to a request R. The allocated bandwidth was fixed for the duration of R's transfer. If no path with at least r_min bandwidth is found for R, the request cannot be provisioned (see Algorithm 1). In this section, we relax the fixed-bandwidth constraint and allow changing (i.e., reconfiguration of) the transmission rates dynamically, which can improve the network's utilization and DDR acceptance rate. A practical example of a protocol that allows hitless dynamic bandwidth adjustment is SONET's link-capacity adjustment scheme (LCAS) [29], [30], which allows dynamic increases or decreases of the bandwidth of a virtual concatenated group (VCG).



Fig. 4. Simplified example for Changing Rates (on a lightpath from path P of request R).

The Changing-Rates technique tries to accommodate a new incoming request R which otherwise cannot be provisioned, by changing the transmission rates of currently ongoing transfers. Lightpaths between R's source and destination may be servicing previous requests which do not have a stringent deadline, but which were allocated extra bandwidth (as in the maximum-rate approach). In this case, even if there is no path with enough capacity for R, it may still be possible for R to be provisioned, if: 1) we free bandwidth for R, by decreasing the rates of the requests which conflict with R on a path P; 2) these already-scheduled requests conflicting with R all still meet their deadlines; and 3) we can free enough bandwidth on path P, so that R now meets its deadline.

Fig. 4 illustrates a simplified example of a lightpath on path P of an incoming request R. The lightpath is represented (on the left side) before the arrival of R, and (on the right side) after R was scheduled. For simplicity, before R's arrival, only one request R1 is scheduled on the lightpath (in practice, there can be multiple existing requests groomed on the lightpath); the arrival times and deadlines of R and R1 are indicated in the figure. The right-hand side (RHS) of Fig. 4 shows that, at R's arrival time, we can decrease the rate of R1 (while still meeting its deadline) and also accommodate R. The new rates allocated to R and R1 are computed using an allocation which is proportional to the minimum rates needed to catch their deadlines (also, the minimum rate of R1 is updated at that time, considering its data which remains to be transferred).

Algorithm 3: Changing Rates
Input: DDR R, minimum bandwidth r_min; current network state.
Output: Path P for R, bandwidth allocated to R, and the new rates of the affected connection requests; otherwise null, if R cannot be provisioned.
PHASE 1
1) Apply Algorithm 1 (RWA) to obtain virtual path P.
2) If P exists: (i) allocate the maximum available bandwidth to R on P and (ii) Return P. Else (we do not find any path P): go to PHASE 2.
PHASE 2
1) Apply Modified Algorithm 1 (Modified RWA) and obtain sets of K-Shortest Paths, each set being formed of alternate virtual paths.


2) For each virtual path P in these sets:
a) On virtual path P, find all connections (requests) affected by R. Let this connection list be L.
b) For each connection in L, test if we can decrease its rate, on its own path, so that it still meets its deadline.
c) For each lightpath of P, compute (by using proportional allocation) separate values for the rate of R and the new rates of the connections in L.
d) Over all the lightpaths of P, compute R's rate as the minimum of the rates computed on the individual lightpaths. Similarly, the new rates of all connections in L are computed as the minimum value (over all lightpaths) of the new rates computed in Step 2c.
e) If all the deadlines for the existing affected connections are still met: Return P, R's rate, and the new rates of the affected connections.
3) Otherwise, Return null.

Algorithm 3 illustrates the Changing-Rates algorithm (for networks using the hybrid architecture), which attempts to change the rates of some already-scheduled DDRs with the goal of accommodating R. The algorithm consists of two main phases. In Phase 1, we try to provision DDR R without any rate change (i.e., perform the RWA as in Section III-A and bandwidth allocation as in Section III-C). If a path (with at least r_min free capacity) is found for R, we allocate bandwidth to R. Subsequently, we can take away bandwidth from R whenever bandwidth may be needed by a future request. If no path is found for R, we try to reconfigure existing (already-scheduled) connection requests in Phase 2. In Phase 2, Step 1, we first obtain several sets of paths on which we attempt changing the rates (the number of such sets is 3 in our numerical results). In order to obtain a set of alternate virtual paths on which to try making room for R (from these paths, in Step 2e, we will choose one as R's scheduling path), we apply a modified version of Algorithm 1. The main modifications are: 1) When constructing the auxiliary graph, in Step 1a of Algorithm 1, we randomly choose which existing lightpaths between two nodes to put into it. Moreover, we no longer constrain lightpaths to have free capacity of at least r_min, because now we change the rates of existing connections anyway. 2) In the modified version, we do not return one virtual path (Algorithm 1, Step 3b), but several alternate virtual paths, on which we attempt to change the rates. For each of the alternate virtual paths, we try changing the rates of the existing connection requests (which keep their paths, without any rerouting), compute the new rates by using proportional allocation, and check whether all the requests with changed rates still meet their deadlines. If yes, we can provision R and finish. We try all the alternate virtual paths, from all path sets, and if changing the rates fails on all the paths, R's provisioning fails.
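The core of Phase 2, the deadline-proportional rate split illustrated in Fig. 4, can be sketched on a single lightpath as follows. All names are illustrative; each affected request is assumed to carry the bits still to be transferred and its absolute deadline.

```python
# Sketch of the proportional rate recomputation used in Phase 2 (one lightpath).
def proportional_rates_on_lightpath(capacity, new_request, ongoing, now):
    """Return tentative rates for the new DDR and the affected ongoing DDRs,
    or None if even the minimum rates do not fit on this lightpath."""
    demands = [new_request] + ongoing
    # Minimum rate each request needs from `now` on to still meet its deadline.
    min_rates = [d.remaining_bits / (d.deadline - now) for d in demands]
    if sum(min_rates) > capacity:
        return None                      # deadlines cannot all be met here
    # Distribute the lightpath capacity proportionally to those minimum rates,
    # so every request receives at least what it needs to catch its deadline.
    scale = capacity / sum(min_rates)
    return [r * scale for r in min_rates]
```

In Algorithm 3, such per-lightpath values are then reduced by taking the minimum over all lightpaths of the candidate path (Step 2d) before the deadline check of Step 2e.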



Fig. 5. A representative US nationwide network.

1) Changing Rates for Long Paths: In Algorithm 3, we only attempt changing the rates when Phase 1 fails to find a path with enough bandwidth for request R. However, because we apply the maximum-rate policy in Phase 1, which utilizes lots of bandwidth in bursts, the paths found on the auxiliary graph can become rather long. Long paths can degrade network performance (as they use many wavelength-links). Therefore, in this scheme, we attempt changing the rates (Phase 2) even if a path is found in Phase 1, whenever that path has at least a parameterized number of physical hops more than the shortest path between R's source and destination. In Section IV, this variant is referred to as Changing Rates for long paths.

2) Changing Rates With Time Limitations: The Changing-Rates approaches presented so far assume an unlimited number of rate changes for a scheduled DDR during its holding time. In practice, we may wish to limit the number of rate changes over the lifetime of each DDR (as frequent rate changes may create signaling overhead). In this variation, we allow changing the rate of an ongoing file transfer only if at least T seconds have passed since the last rate change. As detailed in Section IV-B, changing the rates helps even when T is large. We denote this algorithm as ChangeRates(T sec).

IV. ILLUSTRATIVE NUMERICAL RESULTS

We simulate a dynamic network environment to evaluate the performance of our DDR provisioning algorithms. Fig. 5 shows the topology used in our study. Each network edge has two unidirectional fiber links, one in each direction. Each link has 16 wavelengths, each of capacity 10 Gbps (OC-192). All network nodes are equipped with either Hybrid or Opaque switches. For lower-degree nodes, each switch in the network with Hybrid OXCs has 32 bidirectional transceivers (i.e., 32 transmitters and 32 receivers), while for higher-degree nodes, each switch is equipped with 64 transceivers. DDR arrivals are independent and uniformly distributed among all source-destination pairs. A fixed number of alternate paths is considered for each request. Our results are averaged over 20 simulation runs, each of 100 000 DDRs. We consider that the customer's choice of file deadline can be affected by the price offered by the service provider for the bandwidth required to transfer the file.


In this case, it is possible that the customer may relax its required deadline, so that it chooses its preferred rate from the set of transfer rates offered by the service provider. A larger rate will lead to a more expensive service. Choosing a rate r means fixing the deadline as D = F/r, where F is the file size.

We investigate the performance of our DDR provisioning algorithms on three distributions of the requested r_min. The first bandwidth distribution of r_min (denoted as BD1) is 100 Mbps : 500 Mbps : 1 Gbps : 2.5 Gbps : 5 Gbps : 10 Gbps = 50 : 25 : 15 : 7 : 2 : 1. File sizes are uniformly distributed within a fixed range for the 100-Mbps class (leading to deadlines between 8 and 200 s), and within correspondingly larger ranges for the 500-Mbps and 1-Gbps classes and for r_min of 2.5, 5, and 10 Gbps. Our second bandwidth distribution of r_min (BD2) is 100 Mbps : 500 Mbps : 1 Gbps = 50 : 35 : 15. This traffic set contains small rates compared with the 10-Gbps line rate. The third bandwidth distribution (BD3) is 500 Mbps : 1 Gbps : 2.5 Gbps = 50 : 30 : 20, with large rates compared with the line rate. File sizes for BD2 and BD3 are generated similarly as for BD1. Please note that all three BDs chosen are skewed distributions, with larger frequency for smaller-bandwidth requests, similar to the distributions of practical traffic requests [13], [31].

To compare the performances of our approaches, we consider two metrics. The first one is the fraction of requests which are not provisioned out of the total number of file-transfer requests. The second metric corresponds to the sum of file sizes which cannot be provisioned out of the total sum of file-transfer requests (and we will refer to it as "Fraction of Unprovisioned Bytes"). This metric shows the total bandwidth that cannot be provisioned out of the total requested bandwidth. It is important to note that, even if two provisioning algorithms have close performance relative to the second metric (same throughput), but one of them provisions more requests than the other (i.e., performs better considering the first metric), the service provider would probably favor the one that provisions more requests. This is because current networks consider volume discounts for bandwidth pricing: From a service provider perspective, it is usually preferable to service more customers and gain a larger revenue.

The two metrics above [corresponding to the MILP objectives in (11) and (12)] are considered as interrelated in the objectives of our heuristics: Our algorithms focus both on achieving effective bandwidth utilization and on rejecting few DDRs. Some of the solutions pursued by our heuristics to achieve these objectives are: use of short paths (in both hybrid and opaque RWAs), minimization of used resources (e.g., "Segmentation" avoids setting up many lightpaths, thus saving transceivers), efficient bandwidth allocation (e.g., the maximum-rate policy uses the entire bandwidth efficiently), and reallocation of bandwidth to prevent suboptimal bandwidth allocation (by Changing Rates).
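For illustration, a DDR generator following a BD1-like skewed rate distribution can be sketched as follows. The file-size range is chosen only so that the 100-Mbps class reproduces the 8-200 s deadlines mentioned above; it is an assumption, not the paper's exact setting.

```python
import random

# Illustrative DDR generator for a BD1-like distribution (r_min -> weight).
BD1 = {100e6: 50, 500e6: 25, 1e9: 15, 2.5e9: 7, 5e9: 2, 10e9: 1}

def generate_ddr(now, nodes):
    r_min = random.choices(list(BD1), weights=list(BD1.values()))[0]
    # Scale the file size with the requested rate so deadlines stay in a
    # comparable range across classes (an assumption made for this sketch).
    size_bits = random.uniform(0.8e9, 20e9) * (r_min / 100e6)
    deadline = size_bits / r_min          # D = F / r_min, as in Section IV
    src, dst = random.sample(nodes, 2)
    return dict(src=src, dst=dst, arrival=now,
                size_bits=size_bits, deadline=deadline)
```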




Fig. 6. Performance of provisioning DDRs in Hybrid networks equipped with varying numbers of transceivers. (a), (b) Fraction of unprovisioned bytes for two of the three bandwidth distributions.

transceivers (we subscript the policy name with , e.g., , to indicate it is for Hybrid). For the purpose of this section, we vary our base transceiver configuration (shown at the beginning of Section IV), by using (i.e., we multiply the number of transceivers in a factor of the base case by , for any node degree). signifies fewer deployed transceivers than the basic configuration (e.g., , which corresponds to 38 transceivers for nodes with ), deploys more transceivers (e.g., , degree is our base concorresponding to 76 transceivers), while figuration [denoted without any in Fig. 6(a) and (b)], which we further study in the later subsections. We investigated . Fig. 6(a) and (b) show the fraction of unprovisioned bytes, and , for varying number of deployed transceivers for achieves similar results). A first conclu(please note that and sion from Fig. 6(a) and (b) is that, in general, for both , by deploying more transceivers we are able to provision more bytes. Between the two fixed allocation approaches, is more sensitive to transceiver variation than . This is policy utilizes because (as we will show in Section IV-D) more transceivers than , especially in the Hybrid case. The does not change much with varying performance of (for and , performs almost the same, and ). By increasing to more than 1.2, for both very small performance improvement can be obtained because, , the number of transceivers is close to the physical for upper bound of one transceiver per wavelength. Depending on the number of transceivers deployed, we distinguish the following cases: 1) the case where transceivers are the scarce resource in the network (and not the wavelengths), ); 2) the hence they lead to significant blocking (e.g., or case where blocking is mainly due to capacity (e.g., larger); and 3) intermediate cases (such as our base case), where (depending also on the bandwidth distribution) blocking is due to both transceivers and capacity. An interesting result obtained for all three BDs [see Fig. 6(a) and (b)] is that, for the cases where transceivers are not the ), provisions more bytes than scarce resource (e.g., . For the cases where transceivers are the scarce resource (e.g., ), performs better than , as it requires


fewer transceivers. For intermediate cases (e.g., our base case), and depends on the imthe relative performance of pact of the transceivers on the bandwidth distribution (i.e., if blocking is mostly due to transceivers or to capacity). B. Performance of DDR Provisioning Approaches in Hybrid Architecture for the Base Transceiver Configuration In this section, we investigate the performance of our DDR provisioning algorithms (for the three ) for our base transceiver configuration deployed on Hybrid OXCs. . Fig. 7(a) shows the fraction of unprovisioned bytes for We observe that, among the fixed allocation policies, slightly outperforms for low and intermediate loads. The method obtains only a slight improvement over (more visible for lower network loads), while further improves over . The Changing Rates for long paths policy ( ) provisions slightly more bandwidth . than Fig. 7(b) shows the fraction of unprovisioned requests for , by comparing and policies. Note that contains requests with ranging from 100 Mbps (50% of the requests) to 10 Gbps (1% of the requests). In Fig. 7(b), and indicate the overall fraction of unprovisioned requests, without considering the granularities of the blocked , while the rest of the plotted values detail the fraction : 1, 2.5, of unprovisioned requests for three types of has similar fraction of unproviand 5 Gbps. Note that granularity, theresioned requests, irrespective of the fore ’s plots are clustered together. We observe that by policy over , the number of unprovisioned choosing connections is approximately 5 times less. This large difference rejects almost no in performance is explained as follows. 1 Gbps (or lower ) requests, which together sum up cannot provision a relato 90% of the offered requests. tively small number of 2.5 Gbps requests and a large number of 5 and 10 Gbps. This is explained of DDRs with by ’s ability to properly groom small-granularity requests in contrast to large-rate requests. These large requests will contend for bandwidth with the smaller-granularity requests, which are rejects the same fraction easier to provision. In contrast,




Fig. 7. Performance of DDR provisioning for BD . (a) Fraction of unprovisioned bytes for BD . (b) Fraction of unprovisioned requests of different bandwidth granularities for BD .

Fig. 8. Performance of the bandwidth-allocation policies for BD and BD . (a) Fraction of unprovisioned bytes for BD . (b) Fraction of unprovisioned requests for BD and effect of allowing a limited number of rate changes.

of requests of all , as uses the whole wavelength capacity for all types. . Fig. 8(a) shows the fraction of unprovisioned bytes for rejects significantly Among the fixed allocation policies, . As expected, the more bandwidth than policy has intermediate performance between and . Considering the adaptive bandwidth-allocation policies, and perform slightly better both , which is expected because they utilize more inforthan mation (i.e., the network state). Both flavors of improve performance over the other bandwidth-allocation , provisions approaches (same as for ). Overall, for slightly more bandwidth than , the performance can be improved if we utilize the adaptive policies over the fixed ones; further improvement is approaches are used. possible if Fig. 8(b) shows the performance of the allocation policies and performs a sensitivity analysis on the Changing for Rates with Time Limitations policy, which may be preferred in practice as it involves less signaling overhead compared with the . Time (shown in brackets) is the standard minimum time between two possible consecutive rate changes in the lifetime of a DDR. Fig. 8(b) shows that, for time periods of still 10, 20, and 30 s between rate changes, . For 40 s, however, rate changes are no longer outperforms

applied because the period between allowed changes is too long (compared with holding time), and the performance is closer to (recall that policy uses in Phase 1 of Algorithm 3). C. Provisioning DDRs in Opaque Versus Hybrid Networks We compare the performance of provisioning DDRs in Opaque and Hybrid networks (Opaque results are subscripted ). The fraction of unprovisioned bytes for with , e.g., is shown in Fig. 9(a). In the case of and , the does not groom type of OXC is not as important, because ’s performance is mainly requests (hence, the difference in due to the different RWA algorithms being used for Hybrid and performs significantly better Opaque). We observe that . In addition, is able to provision more bandthan . This is because Opaque OXCs do full OEO width than conversion, so they can do better grooming than Hybrid OXCs. The Adaptive approaches are not shown in Fig. 9(a) because their performance is not significantly different compared to . their corresponding Fig. 9(b) shows the performance of our policies for . As , and Adaptive policies perform better for Opaque in than for Hybrid. We observe that both Adaptive approaches . slightly improve over their corresponding


Fig. 9. Performance of provisioning DDRs in both Hybrid and Opaque networks for BD and BD . (a) Fraction of unprovisioned bytes for BD . (b) Fraction of unprovisioned bytes for BD .

D. Resource Utilization

To evaluate the efficiency of our DDR provisioning approaches, we studied the utilization of network resources (transceiver, wavelength, and lightpath utilization) for both the Hybrid and the Opaque cases. Because of space considerations, in Fig. 10 we show only the average transceiver utilization, computed as the average numbers of utilized transmitters and receivers during the simulation time, with each sample weighted by the interval of time between two consecutive events, as in [14]. We use bidirectional transceiver slots. Since today's networks are often overprovisioned, for this experiment we assume that the capacity of the links (i.e., the number of wavelengths) is large enough to satisfy all requests. This way, we can compare the transceiver utilizations fairly, with a constant amount (e.g., zero) of unprovisioned bandwidth for all our approaches [14].
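As an illustration of the averaging just described, the sketch below computes the time-weighted average of a piecewise-constant quantity such as the number of busy transceivers at an OXC. The variable names and data layout are our own assumptions.

```python
def time_weighted_average(samples):
    """Average of a piecewise-constant quantity over time (sketch).

    samples is a time-ordered list of (event_time, value) pairs, where value
    is, e.g., the number of transceivers in use right after the event. Each
    value is weighted by the time until the next event, mirroring the
    averaging used for the curves in Fig. 10.
    """
    total = 0.0
    duration = 0.0
    for (t0, value), (t1, _) in zip(samples, samples[1:]):
        total += value * (t1 - t0)
        duration += t1 - t0
    return total / duration if duration > 0 else 0.0

# Example: 3 transceivers busy for 5 s, then 1 busy for 15 s -> average 1.5.
print(time_weighted_average([(0.0, 3), (5.0, 1), (20.0, 0)]))
```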

Fig. 10 shows the transceiver utilization for both node architectures (the other bandwidth distributions achieve similar results). For both the minimum-rate and the full-wavelength policies, provisioning on Hybrid OXCs consumes fewer transceivers than provisioning on Opaque OXCs. This is because, in the Hybrid case, transceivers are used only at the lightpath's end nodes, whereas in the Opaque case, each intermediate node on the path has to terminate and set up lightpaths, which consumes transceiver equipment. Fig. 10 also shows that, for both Hybrid and Opaque, the full-wavelength policy is more transceiver-efficient than the minimum-rate policy. This is because it uses a full wavelength's capacity (i.e., optimal lightpath utilization), so transceivers are also used efficiently. The minimum-rate policy, on the other hand, grooms connections, and lightpaths are kept active (and their transceivers occupied) even if only a fraction of the lightpath's capacity is actually used. This result confirms that the full-wavelength policy is more efficient than the minimum-rate policy when transceivers are the bottleneck (the result obtained in Section IV-A).

Fig. 10. Average number of used transceivers per OXC, for both Hybrid and Opaque scenarios.

E. Numerical Study for the Mathematical Model

The MILP formulations presented in Section III-D are solved using CPLEX [28], a commercial linear-programming package. To obtain a lower bound for our DDR provisioning heuristics, we use the mathematical formulations as benchmarks. Recall that the MILPs solve the offline setting of our problem, where all requests are known in advance, while our algorithms must schedule each request when it arrives. Since solving our MILP formulations is computationally demanding, we use a small six-node mesh topology as in [22] (a six-node logical ring with two chords), equipped with Opaque switches and bidirectional links carrying two wavelengths each (each wavelength of capacity 10 Gbps). The studied bandwidth distribution has two MinRate granularities, 5 and 10 Gbps, in a 65:35 ratio. All file sizes are 10 GB, and the number of alternate paths is kept fixed. For all approaches compared in Fig. 11, we assign uniform (unit) costs to the available links. Requests are generated uniformly between node pairs, and DDR arrivals are independent, with an average arrival rate of 1.0.
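To make concrete how a file size and a deadline determine a request's minimum feasible rate (and thus its MinRate granularity), the snippet below uses illustrative deadline values of our own choosing; only the 10-GB file size is taken from the experiment above.

```python
def minimum_feasible_rate_gbps(file_size_gbytes, arrival_s, deadline_s):
    """Smallest transmission rate (in Gbps) that still meets the deadline."""
    return (file_size_gbytes * 8) / (deadline_s - arrival_s)

# A 10-GB transfer due 16 s after arrival needs at least 5 Gbps,
# while one due 8 s after arrival needs the full 10 Gbps.
assert minimum_feasible_rate_gbps(10, 0, 16) == 5.0
assert minimum_feasible_rate_gbps(10, 0, 8) == 10.0
```

A provisioning algorithm (or the MILP) may then select any supported rate at or above this minimum when scheduling the request.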


Fig. 11. Results of the mathematical approaches.

Fig. 11 shows the average number of unprovisioned requests1 for 10, 20, 30, 40, and 50 offered DDRs (the results in Fig. 11 are averages over 10 ILP runs with different seeds). We observe that the performance of our heuristics is fairly close to that of the corresponding MILPs, and each MILP can provision more requests than its heuristic counterpart (similarly, the flexible-rate approaches provision more DDRs than the fixed-rate ones). As detailed in Section III-D, the flexible-rate MILP is a lower bound on both fixed-allocation MILPs. Fig. 11 shows that allowing a flexible rate choice can usually improve the performance of the fixed-allocation MILPs, especially for larger numbers of offered demands, where there may be more opportunities for rate-choice optimization; however, the improvement may depend on many parameters, such as the bandwidth distribution or the arrival process. ILP computation times (on a 3-GHz Pentium-4 HT processor with 1-GB RAM) increase significantly for larger numbers of offered demands (for the few cases in which CPLEX did not find the optimum within 12 h of execution, we took the best objective obtained thus far; the maximum distance to the optimal solution is always under 2.2%). Computation times for the fixed-rate formulations are significantly smaller than those of the flexible-rate formulation.

1 Note that, because the file sizes of all DDRs are the same (10 GB), the two performance metrics (Unprovisioned Requests and Unprovisioned Bytes) corresponding to the MILP objectives in (11) and (12) give identical results.

V. CONCLUSION

We studied the problem of provisioning DDRs over WDM mesh networks by allowing flexible transfer rates. We investigated the effect of using different node architectures (Hybrid and Opaque) on the performance of the network. We devised three categories of bandwidth-allocation algorithms: fixed, adaptive to the network state, and strategies that allow transfer-rate reconfiguration. We studied the problem on a comprehensive set of traffic distributions. Our results show that: 1) among the fixed bandwidth-allocation strategies, the minimum-rate policy usually performs better than the full-wavelength policy when transceivers are not the scarce resource, while the full-wavelength policy outperforms the minimum-rate policy when transceivers are the main source of blocking; 2) changing-rates strategies, even when the transfer rates are changed infrequently, improve over the other bandwidth-allocation approaches, and for some bandwidth distributions the Adaptive policy also slightly improves over the fixed allocation strategies; 3) provisioning DDRs in a network with Opaque OXCs (when using subwavelength-granularity transfer rates) generally accepts more requests than in a network using Hybrid OXCs; and 4) our DDR provisioning algorithms are benchmarked against solutions obtained from MILP models.

REFERENCES

[1] B. Mukherjee, Optical WDM Networks. New York: Springer, 2006.
[2] D. Saha, B. Rajagopalan, and G. Bernstein, "The optical network control plane: State of the standards and deployment," IEEE Commun. Mag., vol. 41, no. 8, pp. S29–S34, Aug. 2003.


[3] B. Mukherjee, “Architecture, control, and management of optical switching networks,” in Proc. IEEE/LEOS Photon. Switch. Conf., Aug. 2007, pp. 43–44. [4] I. Foster, M. Fidler, A. Roy, V. Sander, and L. Winkler, “End-to-End quality of service for high-end applications,” Comput. Commun., vol. 27, no. 14, pp. 1375–1388, Sep. 2004. [5] “Grid Network Services Use Cases from the e-Science Community,” T. Ferrari, Ed., 2007, Open grid forum informational document. [6] H. Miyagi, M. Hayashitani, D. Ishii, Y. Arakawa, and N. Yamanaka, “Advanced wavelength reservation method based on deadline-aware scheduling for lambda grid networks,” J. Lightw. Technol., vol. 25, no. 10, pp. 2904–2910, Oct. 2007. [7] M. Netto, K. Bubendorfer, and R. Buyya, “SLA-based advance reservations with flexible and adaptive time QoS parameters,” in Proc. ICSOC, 2007, vol. 4749, Lecture Notes in Comput. Sci., pp. 119–131. [8] S. Balasubramanian and A. Somani, “On traffic grooming choices for IP over WDM networks,” in Proc. Broadnets, San Jose, CA, Oct. 2006, pp. 1–10. [9] S. Huang and R. Dutta, “Research problems in dynamic traffic grooming in optical networks,” presented at the 1st Int. Workshop Traffic Grooming, San Jose, CA, Oct. 2004. [10] H. Zhu, H. Zang, K. Zhu, and B. Mukherjee, “A novel generic graph model for traffic grooming in heterogeneous WDM mesh networks,” IEEE/ACM Trans. Netw., vol. 11, no. 2, pp. 285–299, Apr. 2003. [11] K. Zhu and B. Mukherjee, “Traffic grooming in an optical WDM mesh network,” IEEE J. Sel. Areas Commun., vol. 20, no. 1, pp. 122–133, Jan. 2002. [12] H. Zhu, H. Zang, K. Zhu, and B. Mukherjee, “Dynamic traffic grooming in WDM mesh networks using a novel graph model,” in Proc. IEEE Globecom, Taipei, Taiwan, Nov. 2002, vol. 3, pp. 2681–2685. [13] K. Zhu, H. Zang, and B. Mukherjee, “A comprehensive study on nextgeneration optical grooming switches,” IEEE J. Sel. Areas Commun., vol. 21, no. 7, pp. 1173–1186, Sep. 2003. [14] M. Tornatore, A. Baruffaldi, H. Zhu, B. Mukherjee, and A. Pattavina, “Holding-time-aware dynamic traffic grooming,” IEEE J. Sel. Areas Commun., vol. 26, no. 3, pp. 28–35, Apr. 2008. [15] J. Kuri, N. Puech, M. Gagnaire, E. Dotaro, and R. Douville, “Routing and wavelength assignment of scheduled lightpath demands,” IEEE J. Sel. Areas Commun., vol. 21, no. 8, pp. 1231–1240, Oct. 2003. [16] C. V. Saradhi, L. K. Wei, and M. Gurusamy, “Provisioning fault-tolerant scheduled lightpath demands in WDM mesh networks,” Proc. IEEE Broadnets, pp. 150–159, Oct. 2004. [17] B. Wang, T. Li, X. Luo, Y. Fan, and C. Xin, “On service provisioning under a scheduled traffic model in reconfigurable WDM optical networks,” Proc. IEEE Broadnets, pp. 13–22, Oct. 2005. [18] A. Jaekel, “Lightpath scheduling and allocation under a flexible scheduled traffic model,” in Proc. IEEE Globecom, Nov. 2006, pp. 1–5. [19] B. Wang and A. Deshmukh, “An all hops optimal algorithm for dynamic routing of sliding scheduled traffic demands,” IEEE Commun. Lett., vol. 9, no. 10, pp. 936–938, Oct. 2005. [20] J. Zheng and H. Mouftah, “Routing and wavelength assignment for advance reservation in wavelength-routed WDM optical networks,” in Proc. IEEE Int. Conf. Commun., Jun. 2002, vol. 5, pp. 2722–2726. [21] H. Zang, J. P. Jue, and B. Mukherjee, “A review of routing and wavelength assignment approaches for wavelength-routed optical WDM networks,” Opt. Netw. Mag., vol. 1, no. 1, pp. 47–60, Jan. 2000. [22] A. Banerjee, W. Feng, D. Ghosal, and B. 
Mukherjee, “Algorithms for integrated routing and scheduling for aggregating data from distributed resources on a lambda grid,” IEEE Trans. Parallel Distrib. Syst., vol. 19, no. 1, pp. 24–34, Jan. 2008. [23] D. Andrei, M. Tornatore, D. Ghosal, C. Martel, and B. Mukherjee, “Ondemand provisioning of data-aggregation requests over WDM mesh networks,” in Proc. IEEE Globecom, Nov. 2008, pp. 1–5. [24] E. Coffman, “An introduction to combinatorial models of dynamic storage allocations,” SIAM Rev., vol. 25, no. 3, pp. 311–325, Jul. 1983. [25] M. Batayneh, D. Schupke, M. Hoffmann, A. Kirstadter, and B. Mukherjee, “Optical network design for a multiline-rate carrier-grade Ethernet under transmission-range constraints,” J. Lightw. Technol., vol. 26, no. 1, pp. 121–130, Jan. 2008. [26] J. Yen, “Finding the shortest loopless paths in a network,” Manage. Sci., vol. 17, no. 11, pp. 712–716, Jul. 1971. [27] L. Kleinrock, Queuing Systems. New York: Wiley, 1976. [28] ILOG CPLEX 9.0 Oct. 2003 [Online]. Available: http://www.ilog.com/ products/cplex/, CPLEX 9.0 User’s Manual, Ch. 17, “Logical constraints in optimization.”



[29] “Link capacity adjustment scheme (LCAS) for virtual concatenated signals,” ITU-T Recomm. G.7042, Feb. 2004.
[30] S. Acharya, B. Gupta, P. Risbood, and A. Srivastava, “PESO: Low overhead protection for Ethernet over SONET transport,” Proc. IEEE INFOCOM, pp. 165–175, Mar. 2004.
[31] S. Rai, O. Deshpande, C. Ou, C. Martel, and B. Mukherjee, “Reliable multipath provisioning for high-capacity backbone mesh networks,” IEEE/ACM Trans. Netw., vol. 15, no. 4, pp. 803–812, Aug. 2007.

Dragos Andrei (S’05) received the B.S. degree from the Polytechnic University of Bucharest, Bucharest, Romania, in 2004, and the M.S. and Ph.D. degrees in computer science from the University of California, Davis, in 2007 and 2009, respectively. His research interests are on the design and performance analysis of algorithms for traffic engineering and network optimization in optical backbone networks and high-speed grids.

Massimo Tornatore (S’03–M’07) received the Laurea degree in telecommunication engineering and the Ph.D. degree in information engineering from Politecnico di Milano, Milan, Italy, in 2001 and 2006, respectively. During his Ph.D. course, he worked in collaboration with Pirelli Telecom Systems and Telecom Italia Labs, and he was a visiting Ph.D. student with the Networks Lab, University of California, Davis, and with CTTC Laboratories, Barcelona, Spain. He is currently a Post-Doctoral Researcher with the Department of Computer Science, University of California, Davis. He is author of about 60 conference and journal papers. His research interests include design, protection strategies, traffic grooming in optical WDM networks, and group communication security. Dr. Tornatore was a co-recipient of the Best Paper Award at IEEE ANTS 2008 and the Optical Networks Symposium in IEEE Globecom 2008.

Marwan Batayneh (S’07) received the B.S. degree from Jordan University of Science and Technology, Irbid, Jordan, in 2001, and the M.S. and Ph.D. degrees from the University of California, Davis, in 2006 and 2009, respectively, all in electrical and computer engineering. Since July 2009, he has been with Integrated Photonics Technology (IPITEK) Inc., Carlsbad, CA, as a Research and Development Scientist in the area of carrier-grade Ethernet. His research focus is on the design and analysis of carrier Ethernet architectures.

Charles U. Martel received the B.S. degree in computer science from the Massachusetts Institute of Technology, Cambridge, in 1975, and the Ph.D. degree in computer science from the University of California (UC), Berkeley, in 1980. Since 1980, he has been a computer science Professor at UC Davis, where he was Chairman of the Department from 1994 to 1997. He has worked on a broad range of combinatorial algorithms, including applications to networks, parallel, and distributed systems, scheduling, and security. His current research interests include the design and analysis of network algorithms, graph algorithms (particularly for modeling small worlds), and security algorithms. As a five-time world bridge champion, he also has an interest in computer bridge playing programs.

Biswanath Mukherjee (S’82–M’87–F’07) received the B.Tech. (Hons.) degree from the Indian Institute of Technology, Kharagpur, India, in 1980, and the Ph.D. degree from the University of Washington, Seattle, in 1987. He holds the Child Family Endowed Chair Professorship at University of California (UC), Davis, where he has been since 1987, and served as Chairman of the Department of Computer Science from 1997 to 2000. He served a five-year term as a Founding Member of the Board of Directors of IPLocks, Inc., a Silicon Valley startup company. He has served on the Technical Advisory Board of a number of startup companies in networking, most recently Teknovus, Intelligent Fiber Optic Systems, and LookAhead Decisions Inc. (LDI). He is author of the textbook Optical WDM Networks (New York: Springer, 2006) and Editor of Springer’s Optical Networks book series. Dr. Mukherjee is co-winner of the Optical Networking Symposium Best Paper Award at the IEEE Globecom 2007 and 2008 conferences. He is a co-winner of the 1991 National Computer Security Conference Best Paper Award. He is winner of the 2004 UC Davis Distinguished Graduate Mentoring Award. He serves or has served on the editorial boards of eight journals, most notably the IEEE/ACM TRANSACTIONS ON NETWORKING and IEEE Network. He served as the Technical Program Chair of the IEEE INFOCOM ’96 conference. He served as Technical Program Co-Chair of the Optical Fiber Communications (OFC) Conference 2009. He is Steering Committee Chair of the IEEE Advanced Networks and Telecom Systems (ANTS) Conference (the leading networking conference in India promoting industry–university interactions), and he served as General Co-Chair of ANTS in 2007 and 2008.
