Distributed Multi-path and Multi-objective Routing for Network Operation and Dimensioning
Laurent Fournié*, Dohy Hong*, Sabine Randriamasy°
* N2NSOFT, Paris, France, [email protected], [email protected]
° Alcatel CIT, Marcoussis, France, [email protected]
This work has been funded in part by the European Union through the IS&T Integrated Project NOBEL (Next generation Optical networks for Broadband Leadership).
Abstract — In this paper, we propose a multi-objective and multi-path routing mechanism called the Distributed Multi-criteria Load Balancing (DMLB) algorithm. Such an algorithm can be used either to improve the network robustness in case of a traffic increase, or to allocate fewer link resources for a given Quality of Service. Traffic flows of an overloaded link are distributed over a set of alternative paths output by a load-sensitive multi-objective routing algorithm. DMLB is distributed over the network elements involved in intra-domain IP routing and works on-line in the network operation phase. Mechanisms are specified to ensure network stability and decision coherency. DMLB has low time and space requirements. It has been tested and validated through massive and realistic network simulations. Improvements have been observed on both the network side and the end-to-end user side.
Index terms — traffic engineering, distributed, multi-path routing, multi-objective routing, load balancing, network dimensioning.
A. INTRODUCTION
Traffic engineering is concerned with optimizing the use of network resources, either through allocation at the dimensioning stage or, given an allocation, by finding an optimal way to route the incoming traffic. The challenge is to maintain or improve both the user-sensed performances and the network capacity. Today, in a meshed topology, the route multiplicity between two nodes is under-exploited. Traditional Shortest Path (SP) based routing minimizes one single metric or cost (typically the path length in number of hops, or the sum of the corresponding interface usage costs). This, however, does not fit the traffic to the link loads and ends up in packet losses in case of link congestion. To cope with that issue, adaptive approaches such as Shortest Widest Path (SWP) or Widest Shortest Path (WSP) combine both path length and available link bandwidth, in an attempt to better balance the load across the network. SWP outperforms SP at low levels of network load but, as it uses longer paths than necessary, gets worse for highly loaded networks (see [4]), contrary to WSP [12]. WSP, well-suited for QoS routing, offers a choice among feasible paths; yet if used as a default routing, it restricts the choice to the shortest paths and does not necessarily avoid using congested links. In general, straightforward path changes in the default routing often cause traffic oscillations. Besides, a scalar combination of metrics of different nature and magnitude, such as length and bandwidth, may be sub-optimal. Multi-path routing has been investigated for several years [6]. The concept of Equal Cost Multi Path (ECMP) was
proposed with the OSPF (Open Shortest Path First) link-state routing protocol standard [5]. ECMP splits the traffic evenly among equal-cost paths to a given destination by distributing the packets among them in a round-robin fashion. The drawback is that link loads are not considered. Optimized Multi-Path (OMP) [7] is a thorough investigation that attempts to better fit the traffic distribution by splitting the traffic unevenly among alternative paths of "comparable costs". Traffic of paths containing a "critically loaded" segment is shifted away to alternative paths containing none. The selection of alternative paths is done by relaxing the "best path" criterion and selecting those paths for which the next hop is closer to the destination D than the current hop by M metric units (see [7]). Such a function ensures loop-free routing, as the cost to D must strictly decrease at every hop towards D. Another algorithm, Adaptive Multi-Path routing [8], uses the same path selection and flow distribution techniques as OMP, but with link-state information restricted to a local scope, and thus lighter multi-path data structures and signalling. However, in these approaches, alternative paths are not selected w.r.t. their load or available bandwidth: the latter is only considered afterwards, at the load adjustment stage, thus limiting the chances of an optimal traffic distribution. This paper presents a routing solution called Distributed Multi-Criteria Load Balancing (DMLB) that can be used for "regular" best-effort traffic while also supporting QoS constraints: routing is by default shortest-path based and progressively switches to adaptive multi-path when link loads exceed a given threshold. The rest of the paper is organised as follows: section B outlines the DMLB algorithm, section C provides its technical description and section D details its major parameters. The simulation of DMLB in the network operation phase is presented in section E and the observed results in section F. Section G presents the advantages of DMLB in the network planning phase. Section H concludes the paper.
B. OVERVIEW OF DMLB
The challenge for DMLB is to maintain or improve both the user-sensed performances and the network capacity, with robust load-balancing mechanisms. Figure 1 illustrates how the different components of DMLB are connected. DMLB uses a link load limit ThLoad above which a link is classified as "overloaded". The additional link traffic is shared among alternative paths. Multi-path routing is triggered at the origin I of any overloaded directed link (I, J), and is used for all destinations for which J is the next hop. The default routing remains shortest-path based as long as no link is overloaded.
Figure 1: main building blocks of DMLB. [Diagram: congestion detection (reactive trigger) and operator LB requests (preventive trigger) feed the Routing with Multiple Criteria (RMC) module, which uses the flooded link-load and topology information and returns, for a destination router D, up to K paths P1…Pk with costs C1…Ck; criteria weights and link administrative costs can be adjusted, with automatic DMLB parameter adjustment. Dynamic Load Balancing then derives unequal target traffic shares Q1…Qk, merges paths with Qk < ThQ, and applies an unequal, dynamic and progressive flow distribution by hashing flows into bins (NextHop(D) = f(D, m), Qk% of the bins mapped to Pk, each bin m having an LB controller router Ri), with traffic shifting granularity GrQ and a traffic shifting status.]
DMLB involves the available link bandwidth at the path-set selection stage. In order to limit the consumption of additional network resources, the path metrics also include hop count, transit delay and administrative cost. Unlike in many approaches, the link cost is a vector, and all the Pareto-optimal solutions are extracted and kept until a later scalar path cost function is used to rank them, thus providing the largest possible choice of alternative paths (see §C.3). The path cost depends on the ratio, for each criterion, between the path metric and the best value obtained among the set of solutions. This maintains numerical stability and spreads traffic according to the available bandwidth while limiting the path length. Each alternative path gets a "target traffic share", representing the proportion of traffic flows shifted to it, inversely proportional to its cost. Flow shifting is controlled and progressive, to avoid traffic oscillations. It requires an adapted forwarding process that handles both multiple next hops and dynamic progressive hashing (see §C.5). The shifting process is stopped once the overloaded link load goes below ThLoad. Routing remains in multi-path mode until the link load goes below a smaller threshold ThLoadBack and stays there for a certain period of time. In that case, the shifting process is reversed to progressively return to the initial mono-path routing. Triggering multi-path routing in a distributed mode, at several places and times, jeopardizes routing coherency and network stability. Particularly undesirable events are traffic loops, mainly caused by incoherent routing decisions. Mechanisms to prevent such events need careful fine-tuning and include: database synchronization, efficient and robust signalling, and coordination of router decisions through a common priority order. Such mechanisms are proposed in §C.6.
C. TECHNICAL DESCRIPTION OF DMLB
The default routing remains single-path and shortest-path based as long as no "overloaded" link is detected. DMLB can be used in several modes:
- Reactive: critical links are defined as "overloaded" links for which the load exceeds ThLoad. Critical links are detected based on their load measurements and DMLB is automatically triggered.
- On-demand: triggered upon operator request. Critical links are defined as links to be bypassed for operator-specific reasons.
The present paper only considers the reactive mode.
C.1 Link state advertisement: link load information collection and flooding
Similarly to OMP, link loads are measured every 10 seconds and averaged over 3 s, through standard built-in counters at the incoming and outgoing interfaces. The available bandwidth of the unidirectional link and its load are derived from these counters. An alarm is triggered if the load exceeds a given threshold. The metric used to select paths is the Available Bandwidth (AB), in order to differentiate links of different capacities. The available bandwidth is flooded through an OSPF Opaque LSA if log(AB) has changed by more than TAB = 5%, if an "overload" alarm has been triggered, or if a timer has expired (typically 60 s, as in OMP). This way, the signalling overhead remains moderate and links with little remaining bandwidth advertise it more often. Database synchronization is extended from the neighbouring routers to the whole OSPF area. A "flooding delay" of 6 seconds is used to ensure that all nodes of the area have received the LSA. This delay allows recovery from one LSA loss: if the first transmission failed, a second LSA is flooded after the OSPF standard delay MinLSInterval (5 s), see [13].
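To make the advertisement rule concrete, here is a minimal sketch of the flooding decision; it assumes that the 5% threshold applies to the relative change of AB (i.e. a change of log(AB) by more than log(1.05)), and all class and attribute names are illustrative, not part of any OSPF implementation.

```python
import math

TAB = 0.05         # 5% relative change threshold on the available bandwidth
LSA_TIMER = 60.0   # periodic refresh timer (s), as in OMP
TH_LOAD = 0.80     # "overload" alarm threshold (ThLoad)

class LinkAdvertiser:
    """Decides when the Available Bandwidth (AB) of one unidirectional
    link should be flooded in an OSPF Opaque LSA."""
    def __init__(self, capacity_bps: float):
        self.capacity = capacity_bps
        self.last_log_ab = None          # log(AB) at the last advertisement
        self.last_time = float("-inf")

    def should_flood(self, load: float, now: float) -> bool:
        ab = max(self.capacity * (1.0 - load), 1.0)   # available bandwidth
        if load > TH_LOAD:                        # "overload" alarm triggered
            return True
        if now - self.last_time >= LSA_TIMER:     # refresh timer expired
            return True
        if self.last_log_ab is None or \
           abs(math.log(ab) - self.last_log_ab) > math.log(1.0 + TAB):
            return True                           # log(AB) changed by > 5%
        return False

    def record_flood(self, load: float, now: float) -> None:
        self.last_log_ab = math.log(max(self.capacity * (1.0 - load), 1.0))
        self.last_time = now
```

Note how the logarithmic test reproduces the behaviour described above: on a nearly full link, AB is small, so a given absolute traffic change produces a large relative change of AB and triggers an advertisement sooner.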
C.2 DMLB triggering and stopping conditions
DMLB is triggered when a critical link is detected. An alarm is then raised and DMLB absorbs the link overload by switching traffic to alternative paths, in order to prevent link congestion and thus avoid packet losses and large jitter variations. An improvement is observed on network capacity while goodput and RTT values remain constant. The choice of ThLoad is discussed in §D.1; the default value is set to 80%. Let L0 be the critical outgoing link. DMLB flow shifting is stopped when either:
- the last flooded load of L0 is below ThLoad,
- the target traffic repartition on the alternative paths has been reached, or
- the operator request has expired.
DMLB progressively switches back to SP routing when the load of L0 stays under ThLoadBack for a given period: flow shifting is reversed from the alternative paths to the initial ones. Note that the hysteresis threshold ThLoadBack (0.65 by default) is lower than ThLoad, to avoid traffic oscillations.
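The trigger/stop/revert logic can be summarized as a small state machine. The sketch below is illustrative only; in particular, the length of the observation period before reverting to SP routing (BACK_PERIOD) is not specified in the text and is an assumed value.

```python
TH_LOAD = 0.80        # overload threshold ThLoad (default 80%)
TH_LOAD_BACK = 0.65   # hysteresis threshold ThLoadBack (default 0.65)
BACK_PERIOD = 60.0    # assumed observation period (s) before reverting

class DmlbLinkState:
    """Trigger / stop / revert logic for one critical outgoing link L0.
    The 'target shares reached' and 'operator request expired' stopping
    conditions are omitted for brevity."""
    def __init__(self) -> None:
        self.multipath = False        # multi-path mode active?
        self.shifting = False         # flow shifting in progress?
        self.below_back_since = None  # when load first went under ThLoadBack

    def on_load_flooded(self, load: float, now: float) -> None:
        if not self.multipath:
            if load > TH_LOAD:        # critical link detected: alarm
                self.multipath = True
                self.shifting = True
            return
        if self.shifting and load < TH_LOAD:
            self.shifting = False     # stop shifting, stay in multi-path mode
        if load < TH_LOAD_BACK:
            if self.below_back_since is None:
                self.below_back_since = now
            elif now - self.below_back_since >= BACK_PERIOD:
                self.multipath = False   # reverse shifting, back to SP routing
                self.below_back_since = None
        else:
            self.below_back_since = None
```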
C.3 Path search algorithm: RMC
Once a link overload has been detected, a path search algorithm is triggered for each destination: the Routing with Multiple Criteria (RMC) algorithm. A first step performs a Pareto-optimal path extraction with the following criteria: available bandwidth, number of hops, theoretical link transit delay and administrative cost, together with constraints on these metrics in case of QoS requirements. In a second step, a scalar cost function is used to sort the Pareto-optimal paths. The best value for each metric serves as a normalization factor: a path cost is the weighted sum, over the criteria, of the path value divided by the corresponding normalization factor. Hence, the path ranking depends on relative costs rather than on an arbitrary normalization factor. The choice of the weight of each metric is left to the user. The values chosen for the presented results are: transit delay 0.2, available bandwidth 0.4, number of hops 0.4. This choice gives equal importance to bandwidth and hops while involving transit delay with a smaller weight, compensated by a huge value difference if, e.g., satellite links are used.
The path extraction algorithm of the first step is detailed in [1]: it generalizes the Martins label-setting algorithm [2], which itself generalizes the Dijkstra shortest-path extraction algorithm by using link cost vectors of additive metrics. RMC generalizes [2] in that it considers link cost vectors having one sub-linear "bottleneck" metric, such as the minimum available link bandwidth along the path, the other metrics being additive. The main modifications concern the dominance test and the procedure to identify the efficient paths. The CPU time and memory used by RMC are on scales comparable with classic shortest-path algorithms, which makes it easy to integrate in a distributed on-line routing process: for 3 criteria, under 0.01 seconds of CPU time for 50 nodes and 1250 links, and 0.30 seconds for 100 nodes and 2000 links, on a PowerPC G4 processor. Using several link metrics simultaneously enlarges the set of optimal paths, contrary to Constraint-Based Routing (CBR), which uses a single metric and where the set of possible paths is reduced by QoS constraints. A useful property of RMC is that it extracts the whole set of efficient paths and sorts it afterwards, thus avoiding "missing" optimal paths through pruning. A fundamental property is that, for each destination, several Pareto-optimal paths are produced, and they provide the basis for multi-path routing.
C.4 Traffic repartition among alternative paths
After an alarm trigger, RMC outputs several paths P1…Pk, including the shortest one (default path), for each destination D. Traffic flows are shifted away from the overloaded path P0 to the alternative paths P1…Pk with costs C1…Ck. The Target Traffic Share TTSk assigned to Pk is inversely proportional to Ck. For example, if 3 paths have respective costs C1 = 80, C2 = 60 and C3 = 30, their respective shares are TTS1 = 20%, TTS2 = 27% and TTS3 = 53%. Paths with a too small share TTSk < ThQ (set here to 5%) are ignored and their TTSk is shared among the other paths. The term Target Traffic Share is used because the link overload may disappear from P0 and the traffic be stabilized before the shares TTSk of the alternative paths are reached.
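The ranking and sharing steps can be sketched as follows. One assumption is worth flagging: the text does not state how the available-bandwidth criterion (to be maximized) enters the cost, so the sketch uses the inverse ratio best/path for it, so that a larger available bandwidth lowers the cost; the additive criteria use path/best as described above.

```python
WEIGHTS = {"delay": 0.2, "bandwidth": 0.4, "hops": 0.4}
TH_Q = 0.05   # minimum target traffic share (5%)

def scalar_costs(paths: list) -> list:
    """Rank Pareto-optimal paths given as dicts of metrics. The additive
    metrics 'delay' and 'hops' (lower is better) are divided by the best
    value in the set; for the bottleneck metric 'bandwidth' (higher is
    better) we assume the inverse ratio best/path."""
    best_delay = min(p["delay"] for p in paths)
    best_hops = min(p["hops"] for p in paths)
    best_bw = max(p["bandwidth"] for p in paths)
    return [WEIGHTS["delay"] * p["delay"] / best_delay
            + WEIGHTS["hops"] * p["hops"] / best_hops
            + WEIGHTS["bandwidth"] * best_bw / p["bandwidth"]
            for p in paths]

def target_traffic_shares(costs: list) -> dict:
    """TTSk is inversely proportional to Ck; shares below ThQ are dropped
    and redistributed among the remaining paths."""
    inv = [1.0 / c for c in costs]
    shares = [x / sum(inv) for x in inv]
    kept = {k: s for k, s in enumerate(shares) if s >= TH_Q}
    return {k: s / sum(kept.values()) for k, s in kept.items()}

# Paper's example: C1=80, C2=60, C3=30 -> TTS of 20%, 27%, 53%
print(target_traffic_shares([80.0, 60.0, 30.0]))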
C.5 Progressive flow shifting
C.5.1 Unequal flow distribution by dynamic hashing
Flow assignment to the paths Pk follows the scheme defined in [3], see also Figure 1. Flow shifting is controlled and progressive, to avoid traffic oscillations. It requires an adapted forwarding process that handles both multiple next hops and dynamic progressive hashing (a sketch is given after the list below):
- a hashing function H (typically CRC16) is applied to the flow attributes: IP source, IP destination, source port, destination port and protocol ID. The H values are then mapped onto M "flow bins", flow f being assigned to bin H(f) modulo M. Each flow bin is supposed to represent a constant proportion of flows (asymptotically true for a large number of flows). Here M = 100, to have enough granularity and progressiveness in traffic shifting;
- the flow bins are assigned to path interfaces independently, by picking them randomly, to ensure equity between them and thus avoid always impacting the same flows/users.
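A compact sketch of this forwarding scheme, combined with the progressive shifting of §C.5.2 below, could look as follows; zlib's CRC-32 stands in for the CRC16 mentioned above, and all names are illustrative.

```python
import random
import zlib

M = 100   # number of flow bins: 1 bin ~ 1% of the flows

def flow_bin(src: str, dst: str, sport: int, dport: int, proto: int) -> int:
    """Hash the five flow attributes into one of the M flow bins."""
    key = f"{src}|{dst}|{sport}|{dport}|{proto}".encode()
    return zlib.crc32(key) % M

# bin -> outgoing path; initially every bin follows the default path P0
bin_to_path = {m: "P0" for m in range(M)}

def shift_bins(targets: dict, tbm: int = 5) -> None:
    """One firing of Flow_Shift_timer: move TBM bins (here 5, i.e. 5% of
    the flows) away from P0 and dispatch them on the alternative paths in
    proportion to their target traffic shares (granularity: 1 bin)."""
    movable = [m for m, p in bin_to_path.items() if p == "P0"]
    random.shuffle(movable)            # random picking for fairness
    for path, share in targets.items():
        n = min(round(tbm * share), len(movable))
        for m in movable[:n]:
            bin_to_path[m] = path
        movable = movable[n:]

shift_bins({"P1": 0.20, "P2": 0.27, "P3": 0.53})   # shares from §C.4
next_hop_path = bin_to_path[flow_bin("10.0.0.1", "10.0.9.9", 1234, 80, 6)]
```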
C.5.2 Flow shifting speed and granularity
To avoid traffic oscillations, the flows are shifted away from P0 to the paths Pk in several iterations separated by an interval timer Flow_Shift_timer. At each iteration, a proportion of traffic called TBM (Total Bin Move) is shifted away from P0 and dispatched on the paths Pk in quantities TSk (Traffic Shift) proportional to their Target Traffic Shares. In the present case, TBM is set to 5 flow bins dispatched at every firing of Flow_Shift_timer. The granularity of TSk is set to 1 flow bin, i.e. 1% of the flows. The default value of Flow_Shift_timer is 10 seconds; it corresponds to the minimal interval between 2 route calculations, for purposes of traffic stability, in most implementations of OSPF.
C.5.3 Target traffic repartition adjustments
Unlike [7] and [8], the target traffic shares are not updated every Flow_Shift_timer seconds: load adjustment is done straightforwardly, with the initial values of the TTSk. However, link state measurement is permanent and a new alarm (a new link overload or a harder overload of link L0) immediately triggers RMC and an update of the TTS. Thus, 200 seconds is the maximal time required for DMLB to reach the Target Traffic Shares, with 5% of the flows shifted away every 10 seconds. In practice, the traffic shifting stops much sooner, when the L0 load goes below ThLoad.
C.6 Coherency of distributed multi-path routing decisions
Triggering multi-path routing in a distributed mode, at several places and times, jeopardizes routing coherency and network stability. Indeed, as the available bandwidth of a path is defined as the minimum available bandwidth over its links, each router can have a different view of the network state. Hence, the path selection can differ from one router to another and, if decision coherency is not carefully managed, this could lead to traffic loops. For instance, in Figure 2, routers S and A are in multi-path mode. S sees two possible paths for destination D: S-A-D with 2 hops and 1 Gbit/s of available bandwidth, and S-A-B-D with 3 hops and 1 Gbit/s. Therefore, the only Pareto-optimal path is S-A-D. For A, path A-B-D is optimal for the available bandwidth metric (5 Gbit/s). So A could choose to route flows through A-B-D.
[Diagram for Figure 2: routers S, A, B and D; links S-A and A-D offer 1 Gbit/s of available bandwidth, links A-B and B-D offer 5 Gbit/s.]
Figure 2: example of an incoherent routing decision: router S chooses path S-A-D, whereas A prefers path A-B-D.
Routing incoherency can also come from the computation of the path cost in RMC. Indeed, each path metric is compared with the best observed value for this metric in the set of Pareto-optimal solutions. As these "best values" vary from one router to another, the path ranking can differ. The features described below have been defined to ensure that no traffic loops occur.
C.6.1 DMLB signalling
When a router R decides to shift a set S of bins from path P0 to path P1, for destination area egress router D, R signals its decision to the downstream routers of P1, down to D. The advertised decisions mainly contain: the DMLB triggering date, the ID of router R, the corresponding alternative path P1 and the set S of concerned bins.
C.6.2 Decision coordination
The advertised routing decision is not always applied: indeed, several routers may be competing to apply DMLB for the same destination and send different signals to the same intermediate routers. To keep one unique next hop for each bin and destination, all signals for a given bin are ranked and only the decision with the highest priority is applied. The routers along an alternative path keep all the received signals. In addition, the priority order is common to all routers. Thus, no traffic loop can occur. A sketch of this coordination is given below.
DMLB signals may arrive out of order w.r.t. their date of emission (for example, because the first signal has followed a longer path than the second one). This could disorganize the flow shifting process. To avoid this, DMLB decisions are stored, ordered and applied after a given time slot (6 s after the emission date, similar to the OSPF spfDelay [13]).
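As an illustration, the coordination rule might be implemented as below. The exact priority key is not detailed in this section, so the sketch assumes the triggering date with the router ID as tie-break, which gives all routers the same total order over competing signals.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class DmlbSignal:
    trigger_date: float      # DMLB triggering date
    router_id: int           # ID of the deciding router R
    path: tuple              # alternative path P1
    bins: frozenset          # set S of concerned bins

# All received signals are kept, per (egress router D, bin m)
pending: dict = {}

def store_signal(dest: str, sig: DmlbSignal) -> None:
    """Routers along an alternative path keep every received signal
    (applied only after the 6 s ordering time slot has elapsed)."""
    for m in sig.bins:
        pending.setdefault((dest, m), []).append(sig)

def applied_decision(dest: str, m: int) -> Optional[DmlbSignal]:
    """Only the highest-priority decision is applied; since the priority
    order is common to all routers, every router resolves competing
    signals identically and no traffic loop can occur."""
    signals = pending.get((dest, m))
    if not signals:
        return None
    return min(signals, key=lambda s: (s.trigger_date, s.router_id))
```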
C.6.3 2D forwarding based on 3D routing information
Storing all received decisions requires a three-dimensional routing table (destination egress router, router which sent the signal, and bin ID m). For a given D and m, the choice of which decision is applied is made "off-line" at the routing stage (every 10 s). Therefore the forwarding decision on bins only depends on D and m. This avoids slowing down the packet forwarding process, while still allowing 3D routing.
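A minimal sketch of this table organisation follows; the table contents and the priority helper are illustrative placeholders, the priority resolution itself being the one of §C.6.2.

```python
# Hypothetical 3D routing table:
# (destination egress router D, signalling router R, bin m) -> next hop
routing_3d = {
    ("D", "S", 7): "B",   # illustrative entries only
    ("D", "A", 7): "C",
}

def pick_sender(senders: list) -> str:
    """Placeholder for the common priority order of §C.6.2."""
    return min(senders)

def rebuild_forwarding(table_3d: dict) -> dict:
    """Run 'off-line' at the routing stage (every 10 s): collapse the 3D
    table into a 2D one, so that the per-packet forwarding decision
    depends only on the destination D and the bin m."""
    by_dm = {}
    for (d, r, m) in table_3d:
        by_dm.setdefault((d, m), []).append(r)
    return {(d, m): table_3d[(d, pick_sender(rs), m)]
            for (d, m), rs in by_dm.items()}

forwarding_2d = rebuild_forwarding(routing_3d)
next_hop = forwarding_2d[("D", 7)]   # a single 2D lookup per packet
```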
D. PARAMETER SETTING
D.1 The choice of the link load limit ThLoad
In many cases of mono-path routing, dimensioning implicitly sets a link load threshold T, e.g. 50% of the capacity C, so that the dimensioning produces links initially loaded at T%. Usually T […]
E. SIMULATION OF DMLB IN THE NETWORK OPERATION PHASE
[…] minutes). So, we set σ to model the traffic variation corresponding to sessions starting and ending (short time scale): σ² = m(K − m), where m and K are the corresponding traffic mean and peak rates (10 and 100 Mbit/s for transaction data, 50 and 512 Kbit/s for IP web traffic and 64 Kbit/s for VoIP).
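A short numerical check of this variance model (assuming the on-off form σ² = m(K − m), with m the mean rate and K the peak rate):

```python
import math

def sigma(m: float, k: float) -> float:
    """Short-time-scale standard deviation, assuming the on-off form
    sigma^2 = m * (K - m), with m the mean rate and K the peak rate."""
    return math.sqrt(m * (k - m))

# transaction data: m = 10 Mbit/s, K = 100 Mbit/s -> sigma = 30 Mbit/s
# IP web traffic:   m = 50 kbit/s, K = 512 kbit/s -> sigma ~ 152 kbit/s
print(sigma(10e6, 100e6) / 1e6, "Mbit/s")
print(sigma(50e3, 512e3) / 1e3, "kbit/s")
```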
Performance metrics
Two types of performance metrics are used:
- network-level (core-level) performances: network capacity, packet loss, mean link load;
- user-level performances: flow/user goodput, terminal packet loss, round trip time.
Traffic perturbation scenario
Figure 3: German network topology. Local traffic is progressively added on links [8, 11], [6, 11] and [6, 8].
F. RESULTS: DMLB COMPARED TO SP ROUTING
The figures in this section compare the performances of DMLB and of shortest-path based routing (Dijkstra).
F.1 Network level performances
The number of packet losses vs. demand is drawn in Figure 4 for different congestion types (thin lines for Dijkstra, thick lines for DMLB):
- core overflow (red line): congestion in the core network;
- terminal overflow (blue line): terminal buffer overflow;
- bit error rate (pink line): due to noise in terminals, it serves as a reference for an acceptable level of losses.
Losses due to routing changes (e.g. because of packet reordering) were not significant in this scenario. The first losses with SP routing appear at a demand level of around 16 Gbit/s and grow drastically for larger loads. With DMLB, the network can absorb 17% more traffic (up to 18 Gbit/s) before losses occur in the core network. Note that for other traffic perturbation scenarios, the gain can be much higher.
The mean link load, as shown in Figure 6, is almost 50% when the first losses appear with SP. Note that, usually, at this load level, Shortest Widest Path used as a default routing is outperformed by SP routing.
F.2 User level performances
HTTP goodput is compared in Figure 7 for SP routing (red line) and DMLB (blue line). Goodput values are calculated as the mean transfer rate (total HTTP throughput divided by the number of users downloading files). With DMLB the goodput starts to fall at 18 Gbit/s, and the goodput gain vs. SP is about 100%.
Figure 4: loss rate (in packets) at the core and terminal node levels. Loss rates below the BER (Bit Error Rate, pink lines) are acceptable.
Network throughput vs. total traffic demand is drawn in Figure 5. The blue line is for DMLB and the red line for SP routing. The performances are similar until the first packet losses occur with SP. From that demand level, because of TCP flow control, the network throughput with SP falls well below the traffic demand, whereas with DMLB the network remains transparent up to 18.5 Gbit/s (throughput equals the demand).
Figure 7: mean HTTP flow goodput vs. demand.
Mean RTT vs. demand is compared for SP and DMLB in Figure 8. As alternative paths are often longer, one could fear that DMLB increases the RTT of deviated flows. However, DMLB dramatically decreases queueing delays on overloaded links and hence decreases the mean RTT.
Figure 5: network total throughput vs. traffic demand. The dashed line corresponds to the ideal case where the throughput equals the demand.
Figure 8: RTT vs. traffic demand, averaged over the network.
Figure 6: mean link loads vs. traffic demand observed with SP routing (in red) and DMLB (in blue).
G. NETWORK DIMENSIONING ASSOCIATED TO LOAD-SENSITIVE MULTI-PATH ROUTING
G.1 Overview
Traditional dimensioning with mono-path routing relies on over-provisioning of link capacities to prevent losses in case of an important traffic increase. As the routing does not adapt to traffic evolution, one has to dimension link capacities according to the maximal traffic volume.
Using adaptive multi-path routing allows network dimensioning to be reconsidered in a more economical way: instead of dimensioning each link for the maximal traffic volume, the resources of the alternative paths are jointly dimensioned to absorb the traffic increase. Thus, thanks to the multiplexing gain, the deployment cost can be reduced. This complies with the strategies of many operators, who wish to minimize over-provisioning expenses. It is quite intuitive that the gain underlying such an approach directly depends on the number of alternative paths (or, more precisely, on the addition of their resources). The advantages of this new "multi-path aware" dimensioning approach are to:
- reduce the network deployment cost (for the same quality of service level);
- allow a more flexible deployment scenario: in case of economical or physical constraints such as bandwidth granularity, a link capacity increase can be distributed over several links;
- provide a traffic engineering approach that is independent of any model of required bandwidth.
With mono-path routing, link dimensioning is typically C = m + reserve_SP, where m is the traffic mean rate and reserve_SP is chosen to absorb traffic variations and bursts on the link. This computation is done independently for each link. With DMLB, we have C·ThLoad = m + reserve_MP. However, as adaptive multi-path routing allows bandwidth to be drawn from the reserves of the alternative paths, traffic variations and bursts can be absorbed with lower provisioning: reserve_MP < reserve_SP.
G.2 Simple case study
Figure 9 shows a simple example of network dimensioning. For simplicity, we assume that the traffic demand mean m and standard deviation σ are the same between every node pair.
Figure 9: mono-path capacity allocation C = m + ασ; DMLB-aware allocation C·ThLoad = m + ασ/√2. [Diagram: triangle topology with nodes E, F and G and a traffic demand (m, σ) between every node pair.]
In case of a traffic increase, the traffic volume of link E-G can be shared between paths E-G and E-F-G. Consequently, the sum of the E-G and E-F capacities can be calculated according to the aggregation of all traffic outgoing from E. For instance, if we use the dimensioning rule of §E.2, the required bandwidth associated with the aggregation of all traffic outgoing from E is equal to 2m + ασ√2, where α is a parameter depending on the QoS requirements. This bandwidth is to be shared between the two paths. Consequently, we can set the link capacities of (E, F) and (E, G) to C·ThLoad = m + ασ/√2, to be compared with C = m + ασ for mono-path routing. Using the same values as in the previous simulations and a maximum buffer overflow probability of 10⁻⁸ (α = 5.916), the resource gain is 13%. With 4 independent paths, one can save up to 30% bandwidth.
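The capacity comparison of the caption can be checked numerically. The sketch below applies the mono-path rule C = m + ασ and the DMLB-aware rule C·ThLoad = m + ασ/√n for n jointly dimensioned paths; the input values (m, σ) are illustrative only, taken from the transaction-data class of section E, so the printed gains are of the same order as, but not identical to, the 13% and 30% reported above, which depend on the paper's actual simulation inputs.

```python
import math

def mono_path_capacity(m: float, sigma: float, alpha: float) -> float:
    """Mono-path dimensioning: C = m + alpha * sigma, per link."""
    return m + alpha * sigma

def dmlb_capacity(m: float, sigma: float, alpha: float,
                  n_paths: int, th_load: float = 0.8) -> float:
    """DMLB-aware dimensioning: C * ThLoad = m + alpha * sigma / sqrt(n),
    the reserve being shared among n jointly dimensioned paths."""
    return (m + alpha * sigma / math.sqrt(n_paths)) / th_load

alpha = 5.916            # buffer overflow probability of 1e-8
m, s = 10.0, 30.0        # Mbit/s, illustrative (transaction-data class)
for n in (2, 4):
    gain = 1.0 - dmlb_capacity(m, s, alpha, n) / mono_path_capacity(m, s, alpha)
    print(f"{n} paths: resource gain = {gain:.0%}")
```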
G.3 General case
The general case is more complex and under investigation. Indeed, link capacities cannot be allocated independently: a multi-path-aware link capacity depends on the possible alternative paths to each destination, and therefore on the dimensioning of the other links. An optimal dimensioning must be such that, whatever the traffic increase (with a given probability), there exists a routing that, once stabilized, keeps all link loads under ThLoad.
H. CONCLUSION
Multi-path routing algorithms such as DMLB make it possible to balance the traffic over the network, prevent link congestion, route more traffic and reconsider dimensioning in a more economical way. This complies with the strategies of many operators, who wish to minimize over-provisioning expenses. The advantages of DMLB over traditional Shortest Path routing are twofold:
- network robustness: in the operation phase, the ability for the network to absorb a larger traffic volume or variation while maintaining end-to-end user-sensed performances;
- network dimensioning: in the planning phase, a reduction of network deployment costs by downsizing link dimensioning, and more flexibility for link capacity upgrades.
For purposes of robustness, and because the current majority of data networks are "pure" IP rather than IP/MPLS networks, DMLB has been implemented and evaluated in the distributed IP context, which is much more complex than the IP/MPLS context. DMLB has also been specified for the (g)MPLS context, where there are no traffic deviation issues and LS DB synchronization is simpler. (g)MPLS-specific metrics have been included to allow multi-layer traffic engineering. The results on the IP layer have shown the advantages of DMLB over pure SP routing in terms of network-level and user-sensed performances. The next step of this work is to associate appropriate practical dimensioning rules that take advantage of this ability to adapt the routing to the traffic evolution and require less link capacity for the same level of quality of service.
I. REFERENCES
[1] X. Gandibleux, F. Beugnies, S. Randriamasy, "Martin's algorithm revisited for multi-objective shortest path problems with a MaxMin cost function", January 2004, to appear in 4OR, Quarterly Journal of the Belgian, French and Italian Operations Research Societies, Springer Verlag.
[2] E.Q.V. Martins, "On a multicriteria shortest path problem", European Journal of Operations Research, No. 17, pp. 85-94, 1984.
[3] Z. Cao, Z. Wang, E. Zegura, "Performance of Hashing-Based Schemes for Internet Load Balancing", Proc. IEEE INFOCOM 2000, Tel Aviv, Israel, Vol. 1, pp. 332-341.
[4] K. Kowalik, M. Collier, "Should QoS routing algorithms prefer shortest paths?", Proc. IEEE Int. Conf. on Communications, May 2003, Anchorage, USA, pp. 213-217.
[5] J. Moy, "OSPF Version 2", IETF RFC 2328, STD 54, April 1998.
[6] G. Myoung Lee, Jin Seek Choi, "A survey of multipath routing for TE", TR Information and Communications University, Korea, 2004.
[7] C. Villamizar, "OSPF-OMP Optimized Multipath", IETF draft, draft-ietf-ospf-omp-03, January 2002, http://www.fictitious.org/ospfomp/ospf-omp.pdf
[8] I. Gojmerac, T. Ziegler, F. Ricciato, P. Reichl, "Adaptive Multipath Routing for Dynamic Traffic Engineering", IEEE GLOBECOM 2003.
[9] A. Betker et al., "Reference Transport Network Scenarios", http://www.ikr.uni-stuttgart.de/IKRSimLib/Referenz_Netze_v14_full.pdf
[10] www.n2nsoft.com
[11] R. Guérin, H. Ahmadi, M. Naghshineh, "Equivalent Capacity and Its Application to Bandwidth Allocation in High-Speed Networks", IEEE Journal on Selected Areas in Communications, Vol. 9, No. 7, September 1991.
[12] "QoS Routing Mechanisms and OSPF Extensions", IETF RFC 2676.
[13] M. Goyal, K.K. Ramakrishnan, W. Feng, "Achieving Faster Failure Detection in OSPF Networks", Proc. ICC 2003, Anchorage, USA.
“Adaptive Multipath Routing for Dynamic Traffic Engineering” , I. GOJMERAC, T. ZIEGLER, F. RICCIATO, P. REICHL, IEEE GLOBECOM 2003, “Reference Transport Network Scenarios”, A.BETKER et al., http://www.ikr.unistuttgart.de/IKRSimLib/Referenz_Netze_v14_full.pdf www.n2nsoft.com “Equivalent Capacity and Its Application to Bandwidth Allocation in High-Speed Networks”, R. GUÉRIN, H. AHMADI and M. NAGHSHINEH, September 1991, IEEE Journal on Selected Areas in Communications, Vol 9, No 7. “QoS Routing Mechanisms and OSPF Extensions” , IETF RFC 2676, “Achieving Faster Failure Detection in OSPF Networks”, M. Goyal, K.K. Ramakrishnan a;d W. Feng, Proc. ICC 2003, Anchorage, USA.