SERVER BASED QOS ROUTING

G. Apostolopoulos (U. Maryland, College Park, MD 20742; [email protected])

R. Guérin (U. Pennsylvania, 200 S. 33rd Street, Philadelphia, PA 19104; [email protected])

S. Kamat (Bell Laboratories, 101 Crawfords Corner Road, Holmdel, NJ 07733; [email protected])

S. K. Tripathi (Bourns College of Engineering, U. of California, Riverside, CA 92521; [email protected])

Abstract

We discuss schemes for a centralized implementation of a QoS routing protocol, where a route server is responsible for determining QoS routes on behalf of all the routers in a network. We argue that this centralized organization has some advantages when compared to a conventional distributed link state implementation. We discuss techniques for efficient maintenance of QoS topology information at the server and for reducing the number of route requests that the network will generate. Using a comprehensive cost model based on measurements taken from a real implementation, we evaluate the feasibility and the cost of server based QoS routing in a variety of network configurations. Our results show that such an implementation is well within the capabilities of off-the-shelf routing equipment.

1 Introduction

The vast majority of routing protocol implementations to this date follow a distributed model. In some cases, the routing algorithm is distributed and executes on all the routers participating in the network. A typical example is the distributed Bellman-Ford computation used by distance vector protocols such as RIP [14]. In other cases, the routing algorithm is centralized, in the sense that it executes independently on all routers. For example, this is how most link state routing protocols, e.g., OSPF [8] and IS-IS [13], are implemented. Even in these cases, each router executes its own instance of the routing algorithm and is capable of determining its own routes. Very few protocols (with the exception of the Routing Arbiter project [5] and the centralized multicast of [4]) have followed a centralized model, where only a single entity in the network (which from now on we call the route server) is responsible for determining the routes for all participating routers. Obviously, the main reason for this lack of server based routing protocol implementations is the scalability concerns that such a centralized approach raises. There may nevertheless be advantages to a server based routing architecture, if the scalability concerns can be properly addressed.

Clearly, one of the most important advances in the area of packet networks and the Internet is the introduction of QoS capabilities and the development of the infrastructure and protocols needed to support these capabilities. An important aspect of providing improved QoS to some of the network users is the need for mechanisms to control access to these "higher" levels of service. On-going work in the RSVP Admission Policy (RAP) working group of the Internet Engineering Task Force (IETF) has proposed a policy capable admission control architecture, designed to operate in conjunction with a reservation protocol such as RSVP [6]. This architecture [11] is structured around a client-server model, where policy servers are responsible for making policy decisions for incoming requests. This facilitates accessing information on a remote route server, since part of the processing of an incoming request will be contacting a remote policy server; potential access to a QoS route server could then be conveniently piggybacked on this server access. Furthermore, introducing QoS routing requires important and potentially costly changes in the network infrastructure. Performing most of the tasks of QoS routing in a server allows the other routers to remain simple and, to a large extent, unaware of the introduction of QoS capabilities. In particular, although some level of QoS awareness is still required from routers, its extent is much more limited and in some cases can be provided with minimal modifications, simply by extensions to the resource reservation protocol. This being said, a server-based implementation of QoS routing clearly has challenges of its own. In particular, a major concern is whether such a centralized solution is at all feasible and scalable. In this work, we attempt to evaluate the actual processing load imposed on such a QoS route server, and explore various alternatives for reducing this load. In particular, we investigate the gains achievable from mechanisms designed specifically for greater efficiency in maintaining resource availability information in the context of a server based solution. In addition, we discuss and evaluate the effectiveness of well known techniques, such as path caching, in reducing the load on the route server. In future work, we will address issues such as server duplication for improving reliability and supporting load balancing across servers.

1.1 Operational Environment

The QoS routing model assumed in this paper is the one presented in [3, 7]. It is based on link state routing and aims at handling requests with minimum bandwidth requirements. As a result, the only QoS quantity that needs to be monitored is the available bandwidth on network interfaces. The link state topology database is appropriately extended to include this information, and similarly extended link state updates are used to keep this information current. We further assume threshold-based triggering of link state updates, where a new update is sent when the change in available bandwidth since the last update exceeds a predetermined threshold. The path finding algorithm computes widest-shortest paths [1, 7] in an on-demand mode, after pruning links that do not appear to have the required amount of available bandwidth. The above algorithm can compute both explicit paths and next-hop information; in this work we use explicit paths.

The structure of the paper is as follows. In Section 2, we discuss possible ways of organizing a route server based QoS routing architecture. In Section 3, we discuss the details of our evaluation methodology, while in Section 4 the results of the evaluation are presented. Finally, in Section 5 we present our conclusions.
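The widest-shortest path computation described in Section 1.1 can be sketched as follows: infeasible links are pruned first, and a Dijkstra-style search then prefers fewer hops, breaking ties in favor of the larger bottleneck bandwidth. This is a minimal illustrative sketch, not the implementation evaluated in the paper; the data structures and names are our own.

```python
import heapq

def widest_shortest_path(links, src, dst, demand):
    """Widest-shortest path after pruning links with insufficient bandwidth.

    links: dict mapping (u, v) -> available bandwidth on the directed link.
    Among the minimum-hop paths from src to dst that survive pruning, the
    one with the largest bottleneck bandwidth is returned (or None).
    """
    # Prune links that do not appear to have the required bandwidth.
    adj = {}
    for (u, v), bw in links.items():
        if bw >= demand:
            adj.setdefault(u, []).append((v, bw))

    # Dijkstra on (hop count, -bottleneck): fewer hops first, then wider.
    best = {src: (0, float("inf"))}      # node -> (hops, bottleneck)
    prev = {}
    heap = [(0, -float("inf"), src)]
    while heap:
        hops, neg_width, u = heapq.heappop(heap)
        width = -neg_width
        if (hops, width) != best.get(u, (None, None)):
            continue                     # stale queue entry
        if u == dst:
            break
        for v, bw in adj.get(u, []):
            cand = (hops + 1, min(width, bw))
            cur = best.get(v)
            if cur is None or (cand[0], -cand[1]) < (cur[0], -cur[1]):
                best[v] = cand
                prev[v] = u
                heapq.heappush(heap, (cand[0], -cand[1], v))

    if dst not in best:
        return None                      # no feasible path after pruning
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))
```

For example, with two 2-hop paths of bottlenecks 10 and 5, the search returns the 10-unit path; a request larger than every link's available bandwidth yields None, which in the protocol corresponds to blocking the request.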



2 Architectural Options for a Server Based System

Any QoS routing architecture involves two major functional components: a) distribution of network state, so that route computation is aware of the resource availability situation in the network, and b) path selection. Next, we discuss how these components can be implemented in a server based QoS routing environment. In this paper we consider a sample set of methods that allow us to perform a first-cut feasibility study of server based routing. It is not our purpose to exhaustively list all the possible options for a server based architecture; such a more comprehensive evaluation will be the subject of future work, once we have established that the approach is indeed worth pursuing.

2.1 Maintenance of Network State

In conventional, distributed implementations of link state routing protocols, all nodes are required to maintain a complete and up-to-date link state topology database. When route computations are performed at a QoS route server, there is no need for other routers in the network to maintain network state. The fact that network state is located only at the server can simplify its maintenance. Potential methods for maintaining network state at the QoS route server are:

Flooding. This is the conventional method for reliably maintaining link state topology databases. It was developed for maintaining the database at each network node and, as a result, is not well suited for a route server based architecture. Nevertheless, we include it here as a baseline for evaluating the effectiveness of the route server specific update methods we present next.

Directed Updates. In this approach, a new update is still triggered as a result of a sufficiently significant change in a link's available bandwidth, e.g., using a threshold based trigger. However, instead of flooding the update, the originating node forwards it directly to the server. The benefits of such an approach, when compared to flooding, are clearly a smaller number of messages and a lower processing overhead in the network, since routers are simply responsible for forwarding update packets to the server, without the need for the special processing that flooded updates require.

2.2 Path Computation

The most basic implementation of a QoS route server has each incoming request on any router in the network result in a query to the server, which then computes a path for the request and returns it to the querying router. Such a straightforward approach has the obvious drawback of not scaling well when the number of requests is high. The resulting load on the server will most likely be too high, making such an on-demand model infeasible. Multiple alternatives to such an on-demand computation model have been proposed, and can be applied in the context of a route server to reduce its processing load. One of these methods, which we explore further in this paper, is path caching [9, 15].

2.2.1 Path Caching

In the caching scheme presented in [9], QoS paths are cached and reused for routing subsequent requests. Each cache entry contains information about the path as well as its bottleneck capacity, which is not necessarily up-to-date. A cached path is used to route a request only if its bottleneck capacity, as recorded in the cache entry, is larger than the requested amount of bandwidth. We call these paths feasible. In a route server based architecture, caches can be located either at the server or at the client routers. In order to minimize storage overhead at the server, we only consider the cases where caches are located at the client routers.

Every caching architecture needs some mechanism to keep the contents of the cache up-to-date. In [9], two methods for cache maintenance have been proposed. In the invalidation based approach, cached paths are either periodically or individually invalidated, forcing subsequent requests to be routed on-demand (in our case, this is equivalent to querying the route server). The returned path is then added to the cache. In the update scheme, information about cached paths, i.e., bottleneck capacity, is updated by accessing the topology database that contains the most up-to-date resource information for the network. These two schemes require different implementations in a route server environment. For update caching, since the caches are located at the clients and network state information exists only at the server, the caches need to be sent to the server, be updated there, and then returned to the client routers. Initiation of the cache update operation is the responsibility of the client and is controlled by a cache update triggering policy. Implementing invalidation based cache management is considerably simpler than update caching: each path entry in the cache has an associated lifetime and, when this time expires, it is removed from the cache.

In order to determine when to trigger cache update operations, we use a periodic triggering model, where an update is triggered every N requests that a router receives. This results in a period that is approximately a multiple of the request inter-arrival time; thus, it self-adapts to the incoming request rate. When invalidation based caching is used, we convert this period to a lifetime for the cached paths by multiplying N with the average request inter-arrival time. Besides specifying when and how to update cache entries, a caching architecture also needs a cache selection and a cache replacement policy. The cache selection policy determines which of the potentially multiple feasible cached paths to select when routing a request. Multiple feasible paths to a destination can accumulate in the cache as a result of routing requests with different bandwidth requirements. In this work we use a widest path selection policy, where among feasible cached paths the one with the largest bottleneck capacity is selected.
Cache replacement is performed as follows: when a new path is received and the cache is full, the new path replaces the existing cached path with the smallest bottleneck capacity. If there are multiple such paths, the longest one is replaced.
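The widest-path selection and replacement rules just described can be sketched as follows. This is an illustrative sketch under our own naming; the capacity of four paths per destination matches the configuration used later in the evaluation.

```python
class PathCache:
    """Per-destination path cache: widest feasible path is selected, and
    on overflow the smallest-bottleneck entry is evicted (ties broken by
    evicting the longest such path)."""

    def __init__(self, capacity=4):
        self.capacity = capacity          # e.g., four paths per destination
        self.entries = []                 # list of (path, bottleneck_bw)

    def select(self, demand):
        """Widest feasible cached path, or None (forcing a server query)."""
        feasible = [e for e in self.entries if e[1] >= demand]
        return max(feasible, key=lambda e: e[1])[0] if feasible else None

    def insert(self, path, bottleneck_bw):
        """Add a path; if the cache is full, apply the replacement rule."""
        if len(self.entries) >= self.capacity:
            # Smallest bottleneck first; among ties, the longest path.
            victim = min(self.entries, key=lambda e: (e[1], -len(e[0])))
            self.entries.remove(victim)
        self.entries.append((path, bottleneck_bw))
```

Note that `select` may return a path whose recorded bottleneck is stale; whether it is still feasible in the network is only discovered when the reservation is attempted, which is precisely why the cache maintenance policies above matter.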

2.3 Implementation Issues

The main issue when implementing a server based routing architecture is the communication between client routers and the server. In particular, message exchanges are needed for the route request/reply operation, for sending link state updates to the server, and for cache update operations. Some of these message exchanges, in particular the route request/reply, need to be reliable. Link state and cache updates need not be reliable, since the loss of an update can only temporarily affect the routing performance of the path computation algorithm. The COPS [12] protocol has been developed for the client/server interactions that are necessary in a policy aware environment. This protocol can support reliable message exchange and is easily extended by adding new message types. Since we assume that policy support already exists in the network, extending the COPS protocol with new message types specific to the required operations appears to be the most efficient method for implementing these message exchanges.

3 Performance Evaluation

We compare the following schemes: server, which is the base server operation mode, i.e., all routers contact the route server to retrieve a QoS route for each incoming request; and cache, where path caching at the client routers is used to reduce the number of times the route server has to be queried for a route. Different variations of these schemes can be generated by choosing a different update mechanism (flooding or directed updates), as well as a cache update mode for the cache scheme, i.e., cache update vs. cache invalidation. We compare the above schemes along the following dimensions:

Routing performance. We use the bandwidth acceptance ratio as the main measure of routing performance. This is a popular measure, used in several routing performance studies [1, 9], and is defined as the fraction of the offered bandwidth that is successfully routed.

Processing load on the server and the client. We estimate this load based on benchmarking results that were obtained from experimentation with a real implementation of a QoS routing protocol, described in [10].

In our evaluation, we do not discuss memory requirements, as our experience has been that they are well within acceptable limits. Even when path caching is used, the requirements at the clients are small and there is no impact on the server. Finally, due to space limitations, we do not present results on the amount of link traffic generated by the various methods. Again, our experiments have shown that this load typically corresponds to a negligible fraction of the total link capacities, even when considering links that are adjacent to the route server.

3.1 Estimating the QoS Routing Protocol Cost

As discussed in [10], the cost of QoS routing can be broken down into a few base operations, such as path computation and link state update generation. It is relatively simple to measure the cost of each of these operations in isolation using an implementation of the routing protocol; the results from such measurements are reported in [10]. The load on a QoS router is then generated from the combination of the above operations during normal router operation. Clearly, this combination depends on a host of factors: network topology, traffic patterns, frequency of link state update generation, and cache update frequency. We reuse the method presented in [10] to approximate this mix of operations. This method consists of using a simulation of a network and routing protocol configuration in order to observe when events of the above types occur at the routers. It is then simple to compute the average and instantaneous router utilization based on these simulation traces. The costs reported in [10] and re-used here were computed on a Pentium 233 MHz machine running FreeBSD 3.0.
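The accounting just described combines per-operation costs measured in isolation with event counts observed in simulation traces. A sketch of this combination is shown below; all numbers are illustrative placeholders, not the measured values from [10].

```python
def avg_server_utilization(event_counts, op_cost_ms, duration_s):
    """Average route server utilization over a simulation run: total busy
    time (events weighted by their isolated per-operation cost) divided
    by the duration of the run. Costs and counts below are made up."""
    busy_ms = sum(event_counts[op] * op_cost_ms[op] for op in event_counts)
    return busy_ms / (duration_s * 1000.0)

# Hypothetical event counts from a 100-second simulation trace:
counts = {"path_computation": 2000, "update_processing": 5000,
          "query_handling": 2000}
# Hypothetical per-operation costs, in ms, measured in isolation:
costs = {"path_computation": 5.0, "update_processing": 2.0,
         "query_handling": 5.0}

util = avg_server_utilization(counts, costs, duration_s=100)  # fraction busy
```

Instantaneous utilization is obtained the same way, by applying the computation over short, fixed-length windows of the trace rather than the full run.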

3.2 Evaluation Configurations

As mentioned above, the measured values of route server load greatly depend on the configuration tested. We attempt to use network topologies that let us explore the scalability of server based operation for different network sizes, and traffic patterns that reflect the requirements of the users of a QoS enabled network. The network we consider is a mesh-like topology constructed by replicating a basic building block. The basic building block, which consists of 4 routers, is depicted in Figure 1. An N × N mesh network is constructed by repeating the building block along two dimensions; Figure 1 also illustrates a 2 × 2 mesh network. The results presented are mainly derived from the 7 × 7 mesh, which consists of 49 routers. In order to achieve a more realistic operational environment, in our experiments we use non-uniform traffic. In each network there is a pair of nodes that have higher request rates between them, when compared to the other source/destination pairs in the network; the location of these nodes is shown in Figure 1. Request arrivals are independent at each node and follow a Poisson distribution. Request sizes are uniformly distributed between a minimum of 64 Kbits/sec and a maximum of 1 Mbit/sec (which corresponds to low quality video). Request duration is geometrically distributed with a mean value of 3 minutes. We dimension the network links for uniform traffic, assuming shortest path routing. Link capacities are determined in such a fashion that conventional shortest path routing can handle (i.e., successfully route) about 70% of the offered QoS traffic described above. This dimensioning method results in link capacities of 45 Mbits/sec in the network topologies used in our experiments.

When flooding or directed updates are used for updating the QoS routing database of the route server, we vary the threshold for triggering a new update between 10% and 80%. The route server is located in (or close to) the middle of the network, to reduce the cost of client-server communication. Caches at the clients, when used, are assumed to be large enough to hold four paths for each destination. For the simulations we used a modified version of the MaRS [2] simulator. The 95% confidence interval for the reported values is less than .01% for bandwidth acceptance and less than 8% of the reported value for routing cost.

Figure 1: Topologies used in cost measurements (the 4-router building block and a 2 × 2 mesh; the routers with increased traffic are marked)

4 Evaluation of Server Based QoS Routing

4.1 Evaluation of Network State Update Methods

For the traffic load in our experiments we get an average server load of 39.7%. This value was obtained for the server scheme, i.e., all requests are sent to the server, and flooding is used for maintaining the QoS topology database at the server. In addition, the update triggering threshold is set to a relatively sensitive value of 10%. This scenario represents a heavy load case, since the small update triggering threshold generates large amounts of update traffic, and the relatively small request sizes, and consequently high arrival rate, result in a large request rate at the server. As can be seen in Figure 2(a), directed updates appear to be more cost effective than flooding. This advantage of the update method that is specifically geared towards a server environment was verified in the other combinations of topology and traffic patterns we experimented with. The reasons behind these benefits are intuitive. Directed updates result in a single message to the route server whenever an update is triggered by a router. In addition, only the server has to fully process the message; intermediate nodes just forward it to the server with minimal processing. On the other hand, with flooding, the route server (or any other router) will have to process up to 2 messages per interface for each update triggered by any router in the network. For these reasons, the route server load drops significantly with increasing update triggering threshold when flooding is used. The dependence of the server load on the update triggering threshold is much smaller when directed updates are used, since in this case the server load due to update processing is much lower. Figure 2 shows that directed updates achieve routing performance similar to that of flooding, but at a significantly lower cost to the server, at least when using a sensitive update triggering threshold.

Finally, we should note that the routing performance loss exhibited when the update triggering threshold is increased is not dramatic. For the configuration shown, a static routing algorithm that uses shortest paths to route requests achieves a bandwidth acceptance ratio of 84%, showing that QoS routing can still achieve a very substantial performance improvement over static routing.
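The threshold-based triggering varied in these experiments can be sketched as follows. The paper does not specify whether the change is measured relative to the last advertised value or to the link capacity; we assume the former here, so this is an illustrative sketch rather than the evaluated implementation.

```python
class ThresholdTrigger:
    """Threshold-based link state update triggering for one interface:
    a new update is sent when the available bandwidth has changed by
    more than `threshold` (a fraction, e.g., 0.10 for 10%) relative to
    the last advertised value (our assumption)."""

    def __init__(self, threshold, initial_bw):
        self.threshold = threshold
        self.advertised = initial_bw      # last value sent to the server

    def maybe_update(self, current_bw):
        """Return True and record the new advertisement if an update
        should be sent (directly to the route server, under directed
        updates)."""
        ref = max(self.advertised, 1e-9)  # guard against a zero baseline
        if abs(current_bw - self.advertised) / ref > self.threshold:
            self.advertised = current_bw
            return True
        return False
```

A larger threshold suppresses more updates, which is exactly the trade-off explored in Figure 2: lower update processing load at the server against staler bandwidth information and hence somewhat lower routing performance.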

Note that the average load shown in Figure 2(a) corresponds to the total processing load at the server, i.e., it includes not only the processing of updates, but also path computation and the handling of route requests. Figure 2(c) identifies some individual components of the processing cost at the server. In particular, it isolates the contribution of path computation for the different update methods (except flooding) for the 7 × 7 topology. From the figure, we see that for directed updates, path computation accounts for about 30%-50% of the overall route server load. The rest is due to the processing cost of updates and the handling (packet receptions and transmissions) of route requests and replies. The cost of update processing alone can be estimated by increasing the triggering threshold to 80%, which drastically reduces the frequency of updates. Under that scenario, the load on the server is reduced by about 5%, which should, therefore, approximately correspond to the cost of processing link state updates. Based on this, we see that the processing load is roughly evenly split between path computation, update processing, and the handling of route queries and replies. However, reducing the number of route queries to the server has the benefit of reducing both the path computation cost and the message handling cost. As a result, it seems desirable to seek methods that allow such savings without incurring substantial penalties in terms of routing performance. We investigate such methods in the next section.

Figure 2: Performance of server update policies ((a) server load, (b) routing performance, and (c) breakdown of route server processing, as a function of the update trigger threshold)

However, before we proceed with this aspect of our investigation, we briefly return to our initial attempt at characterizing the overall processing load on a QoS route server. One aspect which we still need to explore is how the processing load changes with the network size. This is shown in part (a) of Figure 3, which displays the average load on the route server as a function of the network size, in a network where directed updates are used. The figure illustrates that the load increases linearly with the number of routers, which seems to indicate that a route server should be able to support a number of routers larger than the maximum recommended size of an OSPF area (about 200-300 routers). However, the cost values reported in the figure are only average values over the duration of the experiment. In order to better assess the ability of a server to sustain the necessary processing load, it is necessary to look at the distribution of the processing load rather than simply its average value. This is shown in Figure 3(b), which gives the distribution of the instantaneous (50 sec intervals) load at the server for the different update methods, a 10% update triggering threshold, and the 7 × 7 mesh topology. The figure shows that in all cases the distributions have a relatively long tail, which indicates the presence of periods where the server's resources are oversubscribed. It should, however, be noted that the tail is significantly smaller with per-request and directed updates, which further confirms the benefits of update methods targeted to a server environment. Nevertheless, this indicates that our earlier optimistic assessment that a server could easily handle a large (200 to 300 routers) OSPF area may need to be tempered, especially if we want to consider networks where the rate of requests might be higher than the one we have assumed in our experiments, e.g., a large volume of low bit rate IP telephony requests. These conclusions further motivate the need to investigate methods that can help keep the processing load at the server under control, in particular in scenarios where there may be large numbers of requests. This is the topic of the next section.

4.2 Reducing the Amount of Route Queries

As discussed in the previous section, it is of interest to explore techniques to reduce the cost of processing route requests. In this section, we evaluate the effectiveness of caching as a mechanism for reducing the path computation load on the route server. Since, as was shown above, directed updates are an efficient method for maintaining the
QoS topology database at the server, in the rest of the section we only show results for this update method. The results are summarized in Figure 4 for 1 Mbit/sec requests and non-uniform traffic. In part (a) of the figure, we show how the load of the route server varies as the period of updating cache entries increases. The results are shown for the two caching methods described in Section 2.2.1. In order not to artificially magnify the savings due to caching, we use a configuration where the processing load due to updates remains a substantial component of the total processing cost. Specifically, the results are for an update triggering threshold of 10%. As can be seen from the figure, the use of caching significantly reduces the load on the server, even when caches are updated frequently. As the cache update rate decreases, savings increase, although not significantly. Invalidation based caching achieves a smaller reduction in server load, mainly because when cached paths time out, a large number of requests have to go to the server to obtain a route. Update based caching achieves a better cache hit ratio as it keeps a greater number of cached paths, which substantially reduces the number of requests that need to go to the server for a route. Note that this reduction in the load of the route server exceeds the 5% that corresponds to the entire contribution of path computation, as was shown in Figure 2(c). As mentioned before, this is because the savings are in terms of both path computation and the handling of route query and reply messages.

While lower processing cost is the main benefit of path caching, it comes at a price in terms of routing performance. This trade-off is illustrated in part (b) of Figure 4. The figure shows that, as expected, routing performance is lower than that of the basic server scheme, even with very frequent updates of cached entries. As the period of updates increases, routing performance is reduced further. Still, even for quite large values of the cache update period, routing performance is higher than the performance of static routing (84%). The figure also shows that the lower cost of the update based scheme translates into lower routing performance. However, note that the savings in processing cost are proportionally much larger than the corresponding loss in performance, so that the update based scheme remains an attractive alternative, especially in the context of very large networks or high request rates.

Figure 3: Scaling and distribution of route server load ((a) scaling of the average route server load with network size, (b) distribution of the instantaneous route server load)

Figure 4: Performance of caching ((a) route server load and (b) routing performance, as a function of the cache update period, in requests)
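Putting the pieces of the cache scheme together, the client-side routing decision can be sketched as follows. This is an illustrative sketch: `cache` is assumed to expose the select/insert operations of Section 2.2.1, and `query_server` is a stand-in for the reliable request/reply exchange with the route server.

```python
def route_request(cache, demand, query_server):
    """Client-side routing under the cache scheme: use the widest
    feasible cached path if one exists; otherwise query the route
    server and add the returned path to the cache."""
    hit = cache.select(demand)            # widest feasible cached path
    if hit is not None:
        return hit, True                  # served from the cache
    path, bottleneck_bw = query_server(demand)
    cache.insert(path, bottleneck_bw)     # returned path added to the cache
    return path, False
```

The fraction of requests for which the second return value is False is exactly the residual query load that Figure 4(a) measures at the server.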

5 Conclusions

This paper presented a first attempt at assessing whether a server based solution could be an acceptable approach for the deployment of QoS routing. Our initial results appear to show that even with moderate processing power, a relatively large topology, and fairly high request rates, a route server approach to QoS routing is feasible in terms of the processing load that the server has to handle. This conclusion may be somewhat optimistic in the case of our base server scheme, where no techniques are used to reduce the number of requests that the server has to handle. However, we have shown that further reductions in processing load can be achieved through a number of simple methods. In addition, we confirmed that simply relying on flooding to advertise link state updates to the server should be avoided, as it induces substantial processing load not only on the server, but also on all the other routers in the network. When more efficient update methods, such as per-request or directed updates, are used, route request related processing, i.e., path computation and message handling, becomes the major contributor to the processing load at the server. Techniques such as path caching were found effective in reducing the cost of route requests, without significantly compromising routing performance. The two caching methods we investigated represent different trade-offs between cost savings and performance loss, but the update caching scheme appears to provide savings in processing cost that are proportionally larger than its performance loss. Clearly, further studies are required to fully assess the merits of a server based solution to QoS routing and to identify the right design point, but we feel that this paper establishes that this is at least an area worth further investigation.

References

[1] Q. Ma and P. Steenkiste, "On Path Selection for Traffic with Bandwidth Guarantees." In Proceedings of the IEEE International Conference on Network Protocols, Atlanta, GA, October 1997.

[2] C. Alaettinoglu, A. U. Shankar, K. Dussa-Zieger, and I. Matta, "Design and Implementation of MaRS: A Routing Testbed." Journal of Internetworking Research and Experience, 5(1):17-41, 1994.

[3] G. Apostolopoulos, R. Guérin, S. Kamat, A. Orda, T. Przygienda, and D. Williams, "QoS Routing Mechanisms and OSPF Extensions." Internet Request for Comments, RFC 2676, August 1999.

[4] S. Keshav and S. Paul, "Centralized Multicast." Technical Report TR98-1688, Computer Science Department, Cornell University, 1998.

[5] The Routing Arbiter Project, http://www.isi.edu/div7/ra/

[6] R. Braden (Ed.), L. Zhang, S. Berson, S. Herzog, and S. Jamin, "Resource ReSerVation Protocol (RSVP), Version 1, Functional Specification." Internet Request for Comments, RFC 2205, September 1997.

[7] R. Guérin, A. Orda, and D. Williams, "QoS Routing Mechanisms and OSPF Extensions." In Proceedings of GLOBECOM'97, Phoenix, AZ, November 1997.

[8] J. Moy, "OSPF Version 2." Internet Request for Comments, RFC 2178, July 1997.

[9] G. Apostolopoulos, R. Guérin, S. Kamat, and S. K. Tripathi, "On Reducing the Processing Cost of On-Demand QoS Path Computation." Journal of High Speed Networks, Vol. 7, No. 2, 1998, pp. 77-98.

[10] G. Apostolopoulos, R. Guérin, and S. Kamat, "Implementation and Performance Measurements of QoS Routing Extensions to OSPF." In Proceedings of INFOCOM'99, New York, NY, April 1999.

[11] R. Yavatkar, D. Pendarakis, and R. Guérin, "A Framework for Policy-based Admission Control." Internet Draft, draft-ietf-rap-framework-01.txt, April 1999, work in progress.

[12] J. Boyle, R. Cohen, D. Durham, S. Herzog, R. Rajan, and A. Sastry, "The COPS (Common Open Policy Service) Protocol." Internet Draft, draft-ietf-rap-cops-05.txt, August 1999, work in progress.

[13] D. Oran, "OSI IS-IS Intra-domain Routing Protocol." Internet Request for Comments, RFC 1142, February 1990.

[14] G. Malkin, "RIP Version 2." Internet Request for Comments, RFC 2453, November 1998.

[15] M. Peyravian and A. D. Kshemkalyani, "Network Path Caching: Issues, Algorithms, and a Simulation Study." Computer Communications, Vol. 20, pp. 605-614, 1997.