Cost-effective VPC network design

L. Georgiadis 1,2, G. Gikas 2, M. Chatzaki 2, S. Sartzetakis 2

1. Aristotle University of Thessaloniki, Greece
2. Institute of Computer Science, Foundation for Research and Technology - Hellas, Heraklion, GR-71110, Crete, Greece

1. Introduction

Complex environments for ATM-based telecommunication services assume high availability and reliability of both the network resources and the services. In order to reduce the cost of building and maintaining wide-area ATM networks, efficient and cost-effective resource management techniques are needed that avoid over-allocation of resources. This requires building a network with the intelligence to monitor and control its own quality of service, guaranteeing high availability without over-expending resources.

Research in resource and routing management, which has led to a number of optimization algorithms, usually tackles the problem of network availability and performance in isolation, without considering interactions between control and management functions. It does not explicitly take into account the diversity of performance and bandwidth requirements of the many service classes supported by the network, nor does it provide an integrated approach to resource control, routing and alarm management from the point of view of availability and reliability. Recognizing the need for combining the functional capabilities of the control and management planes, the REFORM system (Resource Failure and restORation Management in ATM-based IBCN) has been designed and implemented in order to guarantee network performance and availability to network users under both normal and fault conditions. The system is therefore characterized by a wide range of functionality, ranging from generic functions like configuration and monitoring to specific functions like self-healing, protection resource handling, routing and resource management. More specifically, control plane functions such as route selection, Operation Administration and Maintenance (OA&M) and self-healing mechanisms are integrated with higher-level network-wide routing, VPC layer design and resource management functions.

In this paper we describe the approach adopted by the REFORM system for cost-effective VPC network design. To achieve cost-effectiveness of VPC network design, a functional component, namely the VPC Layer Design component, has been specified, designed, implemented and tested. This component consists of a number of interacting algorithms, specifically: 1) the Connection Route Design algorithm, 2) the Working VPC Route Design algorithm, 3) the Protection VPC Route Design algorithm and 4) the VPC Route Reconfiguration algorithm. In this paper we present specification, design and implementation issues for the first two algorithms, together with an assessment of their correctness and effectiveness based on experimental results. The design, implementation and experimental results for the latter two algorithms will be presented in a forthcoming paper.

2. VPC Layer Design

The VPC-LD functional component is the core of the REFORM management system. The main task of VPC-LD is to design, and redesign whenever necessary, the working VPCs and admissible routes for the existing Classes of Service, as well as suitable protection VPCs, and to allocate resources appropriately for protection purposes. This task is based on the predicted traffic and the physical constraints (e.g., connectivity, link capacity) of the network.

[Figure: block diagram of the REFORM management components: Graphical User Interface, CoS Model, Traffic Prediction, VPC Layer Design, Fault Manager, Configuration Manager, Network Performance Verification, Bandwidth Distribution, Network Resource Monitoring, Resource Adaptor]

Figure 1. The REFORM Management System

The functionality of VPC-LD is two-fold: static and dynamic. The static part is related to VPC network design activities, concerning the working VPCs for each Class of Service (CoS) and the allocation of resources for protection purposes; these functions take place during the initialization phase of the system. The dynamic functionality is activated upon changes in the traffic predictions, QoS degradation, changes in the physical topology of the network (e.g., the addition of a new link), or the creation of new VPCs or deletion of old ones during link failures. In these cases reconfiguration of working as well as protection VPCs is required.

The functionality of VPC-LD is heavily based on the information exchange between VPC-LD and the rest of the REFORM management components (see Figure 1). Specifically, the Traffic Prediction component provides VPC-LD with a set of traffic predictions, while the CoS Model component provides VPC-LD with the definitions of the Classes of Service offered by the network; the definition of a CoS consists of a set of traffic characteristics and a set of performance objectives. VPC-LD interacts with the Configuration Manager in order to receive the physical topology information and to provide the VPC network topology information. The Bandwidth Distribution, Network Performance Verification and Network Resource Monitoring components notify VPC-LD of inefficient use of already established VPCs and of QoS degradation. VPC-LD interacts with the Fault Manager component during failures in order to bring the network back to a normal operating state.

The tasks of designing VPCs and routes are tightly coupled: routes are based on VPCs, and VPCs are defined for optimal routing. Furthermore, appropriate allocation of protection resources must be considered; protection VPCs are designed so that the network is able to react to fault situations. All these tasks depend heavily on each other and are part of an interactive process aiming at cost-effective VPC network design solutions. They are realized by the execution of a number of algorithms, which have been specified, implemented and tested. Below we present the algorithms leading to the creation of working VPCs in the network.

2.1 Connection Route Design Algorithm

The Connection Route Design Algorithm consists of two steps. First, for each commodity, i.e., each (source, destination, CoS) triple, the capacity requirements are computed. Next, routes are determined so that the computed capacity requirements for each commodity are satisfied and the network is left in a “balanced” state.

The capacity of each commodity is computed so that the pre-specified CoS blocking probability, one of the CoS performance objectives, is guaranteed. The derivation is based on the specified CoS traffic characteristics, the average arrival rate for each source-destination (s-d) pair and the average holding time, which by definition refer to long-term predictions. Specifically, the algorithm first calculates the number n of connections needed per commodity to satisfy the specified CoS blocking probability Pb; the bandwidth that can accommodate these n connections is then n times the bandwidth defined for the specified CoS.
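Since the inputs to this step are the target blocking probability Pb and the offered load ρ = λ/µ of the commodity (see Section 3), an Erlang-B loss calculation is a natural reading of it; the sketch below illustrates the capacity-requirements step under that assumption only. The function and parameter names (erlang_b, capacity_requirement, cos_bandwidth) are hypothetical and not taken from the REFORM code.

```python
def erlang_b(n: int, rho: float) -> float:
    """Erlang-B blocking probability for n connections and offered load rho,
    computed with the standard recursion for numerical stability."""
    b = 1.0
    for k in range(1, n + 1):
        b = (rho * b) / (k + rho * b)
    return b

def capacity_requirement(rho: float, pb_target: float, cos_bandwidth: float) -> float:
    """Smallest n whose Erlang-B blocking is at most pb_target, times the
    per-connection bandwidth defined for the CoS."""
    n = 1
    while erlang_b(n, rho) > pb_target:
        n += 1
    return n * cos_bandwidth

# Example (made-up numbers): offered load rho = lambda/mu = 12 Erlangs,
# target blocking 1%, 2 Mb/s per connection for this CoS.
print(capacity_requirement(rho=12.0, pb_target=0.01, cos_bandwidth=2.0))
```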

The objective of the Connection Route Design algorithm is to specify a set of possible routes for each commodity type, accommodating the calculated bandwidth requirements of the commodity. A restriction on the maximum physical link hop-count is imposed on each possible route as part of the specified CoS performance objectives; this maximum number of hops may depend on the CoS to which the connection belongs. All connections are considered bi-directional, and the bandwidth requirements per connection may be different in the two directions; unidirectional connections are a special case of bi-directional connections where the bandwidth in the reverse direction is zero. Moreover, there is the restriction imposed by the standards that the forward and reverse directions of a connection follow the same path. The routes obtained by this procedure will be called “primary routes” and will be the default routes along which connections will be established.

The objective of the algorithm is to satisfy all the commodity bandwidth requirements, while respecting all the previous restrictions and, moreover, achieving a balanced network in terms of link utilisations. Specifically, the minimization of the maximum link load in the network is sought. The algorithm employed is a variation of the algorithm provided by Bertsekas and Gallager [1] for solving the multicommodity network flow problem. The latter algorithm optimizes a separable convex function of link loads instead of minimizing the maximum link load. However, using separable convex functions one can approximate minimization of the maximum link load. These approximations have the additional characteristic of providing a compromise between minimizing the maximum link load and minimizing an average performance objective. In our implementation, we used the following link cost functions:

1. f_l(x) = (x_l / c_l)^k, where x_l is the commodity bandwidth allocated to link l, c_l is the link capacity and k is a number that determines the degree of approximation (the larger the k, the better the approximation to the problem of minimizing the maximum link utilization);

2. f_l(x) = (c_l / (c_l - x_l))^k;

3. f_l(x) = (x_l / (c_l - x_l))^k.

While all these functions give the same solution as k goes to infinity, for finite k they may have different characteristics in terms of goodness of approximation, speed of convergence and the compromise between the min-max and average optimization problems.
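As a small illustration of how a separable convex link cost approximates the min-max objective as k grows, the sketch below (made-up capacities and loads, not REFORM data) evaluates the three candidate cost functions on a balanced and on a skewed allocation of the same total flow.

```python
def f1(x, c, k):  # (x_l / c_l)^k
    return (x / c) ** k

def f2(x, c, k):  # (c_l / (c_l - x_l))^k
    return (c / (c - x)) ** k

def f3(x, c, k):  # (x_l / (c_l - x_l))^k
    return (x / (c - x)) ** k

capacities = [100.0, 100.0, 100.0]
balanced = [60.0, 60.0, 60.0]   # same total flow, spread evenly over the links
skewed = [90.0, 60.0, 30.0]     # same total flow, concentrated on one link

for k in (1, 5, 10):
    for name, f in (("f1", f1), ("f2", f2), ("f3", f3)):
        cost_bal = sum(f(x, c, k) for x, c in zip(balanced, capacities))
        cost_skw = sum(f(x, c, k) for x, c in zip(skewed, capacities))
        print(f"k={k:2d} {name}: balanced={cost_bal:.3f}  skewed={cost_skw:.3f}")
```

For the first function the two allocations have equal cost at k = 1, while for larger k (and for the other two functions, which grow without bound as a link approaches saturation) the skewed allocation is penalised much more heavily; this is the sense in which minimizing the separable sum approximates minimizing the maximum link load.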

The current route design approach provides enough bandwidth to each commodity so that the specified connection blocking probability is satisfied, while balancing the load on the network links. This approach does not take into account the possibility of bandwidth sharing between the various commodities. However, it relies on a tested and relatively simple implementation and has generally satisfactory computational requirements. Routing techniques where bandwidth may be shared between connections belonging to different commodities require knowledge of the induced connection blocking probabilities in order to provide the corresponding QoS guarantee. For multi-rate multi-service networks, methods for computing blocking probabilities have been proposed in [2], [3] and a design methodology has been proposed in [4]. These results, however, are restricted to specific routing policies, and their computational requirements are generally very demanding even for simpler optimisation objectives and constraints than the ones considered in the REFORM system. It should also be stressed that the bandwidth requirements determined by the above procedure represent long-term averages that are adjusted (by modifying the VPC bandwidth) during the operation of the system by the Bandwidth Distribution component according to current requirements. As such, simplicity and numerical stability were deemed preferable at this level to very precise calculations and detailed optimisation that would lead to excessive computations or would not model essential aspects of the system.

2.2 Working VPC Route Design Algorithm

The objective of this algorithm is to specify a number of working VPCs in the network that can accommodate the connection routes determined by the previous algorithm. The algorithm satisfies the following requirements:

• An upper bound U_ij on the number of VPCs on link (i,j). This restriction is imposed both by VP table size limitations and by the requirement that the number of VPCs per link be small, in order to avoid large processing loads in case of failure. Another reason for keeping U_ij small is future VPC requirements, in our case in particular protection VPCs.

• An upper bound U_hop on the number of VPCs that connect any source-destination pair (VPC hop-count). A large VPC hop-count implies increased call set-up cost.

• The bandwidths of VPCs should be kept small. This avoids excessive processing cost at the VC switches due to the need to perform VC table look-up, a cost proportional to the number of cells per second carried by the VPC. In addition, smaller VPC bandwidths make it easier to establish protection VPCs.

The heuristic used for the design of the working VPCs, based on the previously described constraints and objectives, consists of the following steps (a simplified sketch of the selection logic in steps 1 and 2 is given at the end of this subsection):

1. Choose a link (if any) where the number of VPCs crossing that link exceeds the specified threshold U_ij by the maximum amount.

2. Among the VPCs crossing the chosen link, pick two VPCs that satisfy the following requirements: a) the maximum VPC hop-count of the s-d routes crossing the VPC is smaller than U_hop - 2, and b) they have the smallest bandwidth sum. The bound U_hop - 2 is set so that the VPC hop-count of the routes that may result from the merging of VPCs (step 3 below) does not exceed U_hop.

3. Merge parts of the chosen VPCs to create a new VPC. This new VPC should use as many links as possible. More VPCs may need to be created, depending on the degree of overlap between the two chosen VPCs.

4. Update the VPC hop-count of all affected source-destination routes, as well as the number of VPCs crossing all affected links. Go to step 1.

A similar approach to VPC design appears in [5]. The algorithm provided in [5], however, attempts to minimise average performance costs, whereas the approach followed in the REFORM system is to provide good performance per commodity; this is reflected in the design steps outlined above. In [6], VPC design algorithms are proposed which attempt to minimise the maximum number of VPCs at each link, under the constraint that the VPC hop-count of each source-destination pair remains below a certain bound. The algorithm in [6] assumes shortest-path routing and does not take into consideration the bandwidth requirements of the various routes.
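The following is a simplified sketch, not the REFORM implementation, of the selection logic in steps 1 and 2 of the heuristic. The VPC data model and field names are assumptions made for illustration, and the actual merge of the two chosen VPCs (step 3) is omitted.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple

@dataclass
class VPC:
    links: List[Tuple[str, str]]  # physical links (i, j) traversed by this VPC
    bandwidth: float              # bandwidth allocated to the VPC
    max_route_hops: int           # largest VPC hop-count among s-d routes using it

def most_violating_link(vpcs: List[VPC],
                        u_ij: Dict[Tuple[str, str], int]) -> Optional[Tuple[str, str]]:
    """Step 1: the link whose VPC count exceeds its bound U_ij by the largest amount."""
    counts: Dict[Tuple[str, str], int] = {}
    for vpc in vpcs:
        for link in vpc.links:
            counts[link] = counts.get(link, 0) + 1
    worst, worst_excess = None, 0
    for link, count in counts.items():
        excess = count - u_ij.get(link, 0)
        if excess > worst_excess:
            worst, worst_excess = link, excess
    return worst  # None when every link respects its bound

def pick_merge_candidates(vpcs: List[VPC], link: Tuple[str, str],
                          u_hop: int) -> Optional[Tuple[VPC, VPC]]:
    """Step 2: among VPCs crossing `link`, the pair with the smallest bandwidth sum
    whose routes stay below U_hop - 2, so the merged layout cannot exceed U_hop."""
    eligible = [v for v in vpcs if link in v.links and v.max_route_hops < u_hop - 2]
    eligible.sort(key=lambda v: v.bandwidth)
    return (eligible[0], eligible[1]) if len(eligible) >= 2 else None
```

Selecting the pair with the smallest bandwidth sum keeps the bandwidth of merged VPCs low, in line with the third requirement listed above.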

3. Integration of the two Algorithms

In order for the two algorithms to run, cooperate, and store the data they compute, three data structures were used. The first holds the Classes of Service supported by the network, the second keeps the traffic predictions and the third keeps the network topology. The last structure consists of three layers, each keeping information about nodes and the links connecting them. The bottom layer represents the physical network topology; its nodes and links correspond to physical network nodes and links. The second and top layers keep a logical view of the network in terms of VPCs and routes respectively: at the second layer the links represent VPCs and the nodes represent VC switches, while at the top layer the nodes correspond to access nodes and the links to routes. The structure that holds the CoS definitions is filled in by the CoSM component, while the one that keeps the traffic predictions is initialized by the Predicted Usage Model (PUM) component. Finally, the structure that contains the network topology information is initialized by the ConfM component, which fills in the bottom (physical topology) layer; the remaining two layers are filled in with the outcome of the algorithms.
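A minimal sketch of this three-layer topology structure is given below; the class and field names are assumptions made for illustration and do not come from the REFORM code.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class PhysicalLink:
    src: str                 # physical node identifiers
    dst: str
    capacity: float          # link capacity

@dataclass
class VPCEntry:
    endpoints: Tuple[str, str]          # VC switches terminating the VPC
    physical_links: List[PhysicalLink]  # bottom-layer links the VPC uses
    bandwidth: float

@dataclass
class Route:
    source: str              # access node
    destination: str         # access node
    cos: str                 # Class of Service of the commodity
    vpcs: List[VPCEntry]     # second-layer VPCs traversed by the route

@dataclass
class NetworkTopology:
    physical: List[PhysicalLink] = field(default_factory=list)  # filled in by ConfM
    vpc_layer: List[VPCEntry] = field(default_factory=list)     # filled in by Working VPC Design
    route_layer: List[Route] = field(default_factory=list)      # filled in by Connection Route Design
```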

In order to determine the number of connections needed for each traffic type, one needs the blocking probability Pb and the offered load ρ = λ/µ of the commodity. This information is retrieved from the Classes of Service and the traffic prediction data structures. Upon completion, the results of the capacity-requirements computation are stored in the local data structures that the Connection Route Design algorithm uses, and the Connection Route Design algorithm is triggered. Apart from the commodities and the bandwidth that each commodity requires, the Connection Route Design algorithm also needs as input the physical network topology; this information is retrieved from the bottom layer of the network data structure. Upon completion of the algorithm's operation, the routes that have been determined are stored in the top layer of the network data structure and the Working VPC Design algorithm is triggered.

[Figure: the VPC_TD component, comprising a Configuration Controller, Data Acquisition, Data Storage and Notification Handler blocks, the Capacity Requirements, Connection Route Design, Working VP Design and Protection VP Design algorithms, and an Install VPCs & Routes function, with interfaces to the GUI, CoSM, PUM, FM, ConfM, BD and LB components]

Figure 2. VPC_TD Implementation Design

The Working VPC Design algorithm designs the VP layout for the working VPCs. All the information it requires (the physical topology and the routes determined by the previous algorithm) is found in the network data structure. After designing the working VPC layout, the algorithm stores it at the second layer of the network data structure. Then the relations between the three layers are set (i.e., which physical links are used by each VPC and which VPCs are used by each route). Finally, the Protection VPC Design algorithm is triggered to start its operation.
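Tying the above together, the following is a hypothetical sketch of the triggering chain described in this section; every function name is illustrative and the bodies are placeholders standing in for the algorithms of Sections 2.1 and 2.2.

```python
from typing import Dict, List, Tuple

Commodity = Tuple[str, str, str]  # (source, destination, CoS)

def compute_capacity_requirements(cos_defs: Dict, predictions: Dict) -> Dict[Commodity, float]:
    # Placeholder: would derive per-commodity bandwidth from Pb and rho = lambda/mu.
    return {("A", "B", "cos1"): 48.0}

def design_connection_routes(requirements: Dict[Commodity, float],
                             physical_topology: Dict) -> Dict[Commodity, List[List[str]]]:
    # Placeholder: would run the balanced multicommodity routing of Section 2.1.
    return {("A", "B", "cos1"): [["A", "C", "B"]]}

def design_working_vpcs(routes: Dict[Commodity, List[List[str]]],
                        physical_topology: Dict) -> List[Tuple[str, str]]:
    # Placeholder: would run the merging heuristic of Section 2.2.
    return [("A", "C"), ("C", "B")]

def run_vpc_layer_design(cos_defs: Dict, predictions: Dict, physical_topology: Dict):
    requirements = compute_capacity_requirements(cos_defs, predictions)  # from CoSM/PUM data
    routes = design_connection_routes(requirements, physical_topology)   # stored at the top layer
    vpcs = design_working_vpcs(routes, physical_topology)                # stored at the second layer
    return routes, vpcs                                                  # Protection VPC Design would follow

print(run_vpc_layer_design({}, {}, {}))
```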

4. Discussion on the experimental results

The tests conducted for the algorithms described in the previous sections include validation and performance tests. Emphasis was placed on verifying the correctness and evaluating the behavior and effectiveness of the algorithms; as such, testing was not concerned with implementation-dependent issues. Performance tests consisted mainly of comparisons between alternative algorithmic parameter selections and of information gathering that was used a) to identify areas of improvement and b) as a basis for the comparison of new algorithms that may need to be developed. The approach taken for verifying correctness was to identify test scenarios with predictable results and to compare the outcome of the algorithms with the manually calculated predictions. For the performance assessment tests more complex scenarios were specified, using a realistic network topology of 19 nodes and 37 links representing a pan-European ATM network.

For the Capacity Requirements algorithm it was validated that the number of connections per commodity is calculated correctly. For the Connection Route Design algorithm the results have shown that the choice among the three cost functions presented in Section 2.1 does not have a significant effect on the determination of optimal connection routes; therefore the simplest cost function to implement, namely the first one, was selected. Similar results were obtained for the values of the exponent k (tested for k = 5, 10, 15). Since smaller values of k generally require a smaller number of iterations for convergence, it was suggested that a value between 5 and 10 be used. The optimization objective of the Connection Route Design was to minimize the maximum link utilization. It has been seen from the experiments that this choice facilitates the establishment of protection VPCs: a network balanced in terms of link utilization tends to create smaller-bandwidth VPCs and leaves free bandwidth throughout the network links, so that a large number of alternative paths exist between any pair of nodes.

For the Working VPC Route Design algorithm, the tests have shown that the algorithm performs according to the objectives sought. The constraint on the number of VPCs per link, U_ij, is always satisfied. As U_ij increases, the bandwidth of the resulting VPCs decreases rapidly, since the algorithm joins VPCs with small bandwidths, and the VPC hop-count of the routes decreases. One undesirable effect that has been observed is that U_ij may be incompatible with U_hop (the maximum number of VPC hops per route). Since U_hop mainly influences the connection set-up delay, and violation of this constraint does not have a deleterious effect on the system, it is suggested that it be treated as a “soft bound”: the algorithm tries to satisfy the bound, but if this is not possible, it continues on a “best effort” basis to provide a VPC layout even if some routes have a larger VPC hop-count than U_hop. This change can easily be incorporated in the algorithm.

5. Conclusions and future work

As mentioned in the previous section, the optimization objective for the Connection Route Design algorithm was to minimize the maximum link utilization. However, alternative optimization criteria exist that may have similar effects. For example, one may consider the absolute free bandwidth on a link instead of the relative one (utilization) considered thus far; the incorporation of this type of criterion in the existing algorithm is relatively easy. Furthermore, the criterion of minimizing the maximum link utilization does not provide any means for choosing among the various alternatives that may exist in allocating bandwidth among links that are not maximally utilized. A stricter optimization criterion that can take such alternatives into account is the lexicographic optimization of the link costs. It is not known, however, whether efficient algorithms exist for providing a lexicographically optimal bandwidth allocation for the multi-commodity problem under consideration. Preliminary investigation indicates that for a single commodity the problem may be tractable; however, significantly more research is needed to extend these results to the multi-commodity problem with the special constraints imposed by the nature of the networks under consideration.

Under the current design of the algorithm, the maximum allowable number of physical hops is used as a QoS constraint on the connection paths. A more general QoS constraint on a path is the maximum allowable path penalty, where the penalty of a path is the sum of the penalties of its links; the penalty may be, for example, the loss probability of a link or the packet delay on that link. While the current solution can in principle incorporate more general QoS constraints, the straightforward approach is inefficient since it depends on finding minimum-cost paths that satisfy an additive QoS constraint, a known NP-complete problem.

A criterion that is lacking from the current Working VPC Route Design algorithm is the appropriate mix of connection types passing through a VPC. This is an important issue that may significantly affect the multiplexing gain within a VPC; however, further research is needed in order to provide appropriate guidelines.

References

[1] D. Bertsekas and R. Gallager, Data Networks, Prentice Hall, 1992.
[2] P. Gazdicki, I. Lambadaris and R. R. Mazumdar, “Blocking Probabilities for Large Multirate Erlang Loss Systems,” Adv. Appl. Prob., vol. 25, pp. 99-1009, 1993.
[3] G. Choudhury, K. Leung and W. Whitt, “An Algorithm to Compute Blocking Probabilities in Multi-Rate Multi-Class Multi-Resource Loss Models,” Adv. Appl. Prob., vol. 27, pp. 1104-1143, 1995.
[4] D. Mitra, J. Morrison and K. Ramakrishnan, IEEE/ACM Trans. on Networking, vol. 4, no. 4, pp. 531-543, August 1996.
[5] S. Ahn, R. Tsang, S. Tong and D. Du, “Virtual Path Layout Design for ATM Networks,” IEEE INFOCOM '94, pp. 192-200, Toronto, Canada, June 1994.
[6] I. Cidon, O. Gerstel and S. Zaks, “The Layout of Virtual Paths in ATM Networks,” Technical Report 831, Technion, August 1994.
