Performance of NACK-Oriented Reliable Multicast in Distributed Routers

Markus Hidell and Peter Sjödin
Department of Signals, Sensors and Systems, School of Electrical Engineering
KTH—Royal Institute of Technology, SE-100 44 Stockholm, Sweden
Email: {mahidell, psj}@s3.kth.se, Phone: +46 8 790 4251

Abstract—We investigate a decentralized modular system design for routers, where functional modules are connected to a high-performance network. In such a distributed system, internal reliable dissemination of data to multiple receivers is an important issue. We study the use of a reliable multicast protocol (NORM—NACK-Oriented Reliable Multicast) for routing table dissemination in a distributed router, which has been designed and implemented in our networking laboratory. Experimental results of using both file- and stream-oriented transport services in NORM are presented and compared. We also compare the use of reliable multicast with an ideal "hand-tuned" multicast dissemination. We find that NORM is useful for this application, but that there is a certain cost in terms of overhead.

Index Terms—Distributed systems, reliable multicast, NACK-oriented reliable multicast, performance measurements

I. INTRODUCTION

The advances in optical communication technology during the last decade have resulted in dramatic increases in communication link capacity. The processing power of electronic computing devices has not increased at the same rate. This has led to a situation where the bottlenecks in communication are in the units that process the data, rather than in the communication links that interconnect those units. The trend that communication capacity increases faster than processing capacity has wide implications for the architectures of network systems, such as routers, switches, and servers.

One approach to keeping up with the speed of optics is to design network systems as distributed systems, where the processing burden can be shared over many processing units interconnected by a communication network. In addition to increasing the processing capacity, such an arrangement has other advantages: for instance, if one unit fails, other units can take over, which helps to improve availability and reliability. If more computation capacity is needed, it is possible to add more units so that the system can scale in an incremental way. Our work is in the context of distributed routers, where control elements and

forwarding elements are interconnected by an internal network, but distributed systems appear in other networking areas as well, such as server farms, grids, and storage area networks.

However, the mismatch in capacity between optics and electronics needs to be addressed also within the distributed system: consider a distributed system consisting of an internal, high-speed network that interconnects slower processing units. Given the low cost of high-speed links, it seems cost-effective to over-dimension the internal network in order to avoid network congestion and limit network delay. However, with an abundance of communication capacity, it will be the endpoints—the processing units—that are the communication bottlenecks also in the distributed system.

The question we address here is: what is a suitable protocol for communication within a distributed system? In particular, we are interested in traffic scenarios where a relatively large amount of data is distributed from one sender to many receivers. This happens, for example, in applications where many nodes need local copies of the same information for performance reasons, and where it is crucial for the correct behavior of the system that all nodes have identical information. One such application is a distributed router, where the routing table is distributed from a control element to many packet forwarding elements, but we believe that such traffic patterns appear in other distributed systems as well.

Given that no congestion occurs and the probability of packet losses is very low, one might think that a pure best-effort service, as provided by UDP, would suffice. It turns out, though, that a best-effort service is not feasible—packets can be lost due to buffer overflow in the receivers, which rules out UDP as an alternative. TCP supports flow control, so it does not suffer from this problem, but it has other drawbacks: since TCP only supports unicast, multiple TCP connections need to be established and the sender has to send the data many times. The result is that the capacity of the sender limits the performance. Hence, the total transmission time increases with the number of receivers, so it is not a scalable solution. Reliable multicast protocols are designed to support reliable distribution of data to many receivers, so it seems more


appropriate to use such a protocol. There are many different flavors of reliable multicast, and the efficiency of the protocol depends on how well the application matches the protocol design [14], [10]. One of the more recent protocols is NORM—NACK-Oriented Reliable Multicast [1], [2], [3], which is currently being standardized within the IETF. NORM is intended for multicast distribution within small-to-medium sized groups with relatively homogeneous receivers, which seems to fit well with our purposes. It should be noted that the group sizes targeted by NORM (small-to-medium) are small in comparison to those targeted by other reliable multicast protocols that focus on massive scalability, where the number of receivers must scale far beyond the group sizes that are realistic in our scenario. The group sizes targeted by NORM are, however, well above what can be efficiently supported by multiple TCP connections.

In this paper, we experimentally investigate the performance of NORM in the context of routing table distribution in a distributed router. We examine two different implementations of NORM, one from INRIA (Institut National de Recherche en Informatique et en Automatique) [13], and one from NRL (Naval Research Laboratory) [22]. We also make comparisons with hand-tuned UDP multicast in order to obtain an ideal, upper limit on the possible performance in our settings. In addition, we use UDP multicast to verify that packet losses do indeed occur at the receivers, even on a relatively slow network (100 Mb/s Ethernet).

We find that NORM can be used for reliable distribution of routes in the presence of losses at the receivers, but that there is a performance penalty compared to (ideal) UDP multicast. Since NORM is based on NACKs and retransmissions, there is a scalability issue when the number of receivers increases: the number of retransmissions might increase with the number of receivers. We therefore also investigate how NORM behaves for different numbers of receivers. Our experiments indicate that the number of NACKs and retransmissions does in fact increase with the number of receivers, and also that this affects the total throughput. This influence is, however, modest compared to the use of multiple TCP connections.

The rest of this paper is organized as follows. Section II gives a short survey of reliable multicast and motivates why NORM is an interesting candidate for our application. In Section III we introduce distributed routers and describe the experimental settings for our investigations. The experimental results are presented in Section IV, and Section V concludes the paper and outlines further work.

II. RELIABLE MULTICAST

Reliable multicast transport over IP networks has been an extensive research area over the last two decades, and is still active, for instance within the IETF [24]. The main challenge with reliable multicast transport is that the requirements of multicast applications vary significantly. Accordingly, it does

not seem possible to fulfill these diverse requirements using one single reliable multicast protocol. This can be compared with the unicast case, where TCP is used by nearly every application requiring reliable and ordered delivery. Instead, a large variety of reliable multicast protocols with various purposes have been proposed and studied over the years [16], [23]. This includes protocols with support for multipoint interactive applications, protocols with support for data dissemination services, as well as general-purpose protocols. There are several ways to categorize reliable multicast protocols [16], [25], [26], and dividing them into the following families is suitable for our needs: ACK-based protocols, NACK-based (Negative ACK) protocols, ALC-based (Asynchronous Layered Coding) protocols, and router assist protocols.

An ACK-based, or sender-initiated, protocol in its simplest form means that a receiver sends an ACK for every received packet. This puts a heavy burden on the sender: first, it needs to maintain information about all receivers to ensure that all ACKs have been received; second, ACK implosion may lead to a high degree of processing at the sender. To mitigate these problems, and thereby improve the scaling properties, tree-based ACK protocols have been suggested, where servers (or group leaders) process ACKs from receivers and send aggregated ACKs back to the sender. An example of such a protocol is RMTP (Reliable Multicast Transport Protocol) [17].

In NACK-based (or receiver-initiated) protocols, a receiver sends a NACK (Negative Acknowledgement) when it discovers that one or more packets are missing, instead of sending ACKs for correctly received packets. This reduces the burden on the sender, but the approach is still subject to message implosion on the sender side. To avoid NACK implosion, NACKs can be multicast. This, in combination with randomized delays before sending NACKs, allows for suppression of redundant NACKs. Such a scheme is used in SRM (Scalable Reliable Multicast) [6], which is an example of a NACK-based reliable multicast protocol.

The main purpose of ALC-based protocols is to eliminate feedback in terms of ACKs and NACKs from the receivers. This is possible through the use of FEC (Forward Error Correction), which means that errors can be detected and corrected by the receivers using redundant data originating from the sender. There have been several proposals for using FEC for reliable multicast over the years [20], [21]. ALC-based protocols are built on top of best-effort, unreliable IP multicast and are designed for transferring bulk data to a virtually unlimited number of heterogeneous receivers. Further, in ALC protocols, data is encoded so that several multicast groups can be used in a layered fashion, where the transmission rate increases with the number of multicast groups. A receiver can join one or more multicast groups depending on the transmission rate it can accept. Digital Fountain [4] is an example of an ALC-based protocol, and an ALC protocol instantiation has been standardized within the IETF [18].
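
To make the FEC idea concrete, the sketch below shows the simplest possible parity scheme, in the spirit of the parity-based recovery discussed in [21]: the sender adds one XOR parity packet per group of equal-length data packets, so a receiver can reconstruct any single lost packet in the group without contacting the sender. This is an illustrative toy example, not the erasure coding actually used by ALC or NORM.

    from functools import reduce

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def add_parity(group):
        """Sender side: append one XOR parity packet to a group of
        equal-length data packets, so any single loss is recoverable."""
        parity = reduce(xor_bytes, group)
        return group + [parity]

    def recover(received, lost_index):
        """Receiver side: rebuild the single missing packet by XOR-ing
        all packets (data and parity) that did arrive."""
        present = [p for i, p in enumerate(received)
                   if i != lost_index and p is not None]
        return reduce(xor_bytes, present)

    # Example: protect 4 data packets, lose packet 2, reconstruct it.
    data = [bytes([i] * 8) for i in range(4)]
    block = add_parity(data)
    block[2] = None                  # simulate a lost packet
    assert recover(block, 2) == data[2]

A single parity packet per group repairs only one loss; practical FEC codes trade more redundancy for the ability to repair several losses per group.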


Router assist protocols for reliable multicast can be based on any of the previous families, but require assistance from routers for efficient operation. An example of a modern router assist protocol is PGM (Pragmatic General Multicast) [8], which is designed for applications like stock quote dissemination, i.e., a large number of receivers for which fresh data is more important than the ability to recover old (stale) data. PGM uses both NACKs and FEC. PGM-enabled routers receive NACKs from the receivers and forward requests for data along the multicast tree towards the sender.

In the work on standardization of reliable multicast protocols, the RMT (Reliable Multicast Transport) IETF Working Group [24] has taken a modular approach. Within that framework, "building blocks" of protocol mechanisms are developed, and protocols are then standardized as instantiations of combinations of building blocks. One such instantiation is NORM (NACK-Oriented Reliable Multicast) [2], [3]. NORM is based on the use of NACKs to send repair requests back to the sender when data is missing at the receiver side. To avoid NACK implosions, where many receivers simultaneously send NACKs back to the sender, NORM includes mechanisms for suppressing redundant NACK transmissions among a group of receivers. In addition to retransmissions, NORM includes support for forward error correction (FEC).

III. DECENTRALIZED MODULAR ROUTERS

The distributed system we consider is a decentralized modular router consisting of different functional elements [9]. Using the terminology of the IETF ForCES Working Group [7], there are two main types of elements: Control Elements (CEs) and Forwarding Elements (FEs). CEs implement functions such as routing protocols, signaling protocols, and network management, while FEs perform, for example, packet forwarding, classification, traffic shaping, and metering. Together the CEs and FEs form a Network Element (NE)—a distributed router.

The CEs and FEs are interconnected by two internal networks—a control network and a data network. The control network is where CEs send management and control information to FEs, while the data network is used for forwarding packets between FEs. In general, the internal networks can be any kind of packet-switching networks, but we envision that they consist of standard switches and routers in a relatively regular, flat topology of limited geographical size. A distributed router architecture should be scalable to hundreds of FEs, according to the requirements formulated within the ForCES Working Group [15].

Within the ForCES framework, a protocol for communication between CEs and FEs is being standardized—the ForCES protocol. ForCES can run on top of different transport protocols through a transport mapping layer. Currently, UDP- and TCP-based transport mapping layers are being defined. NORM could potentially be

regarded as another transport mechanism for the ForCES protocol.

To evaluate transport mechanisms for a distributed application, it is of great importance to understand the behavior of the application and its requirements. In our experimental scenario, the application can be described as follows. A certain amount of data—a routing table—is disseminated from one sender to many receivers. The receivers perform some level of processing of the data and install the routes in a local forwarding table. The forwarding table is implementation-dependent, but the processing of the data typically involves sorting, updating of data structures, several memory operations, and possibly the crossing of user/kernel operating system boundaries. As we will see later, the amount of processing is not negligible. Further, the data is typically produced dynamically, and it is not necessary for the receivers to wait for the complete routing table before they start processing the data. Instead, the general case for route updates resembles stream-oriented communication. It is, however, important that no data is lost and that data is processed in the correct order.

In the distributed system that we consider, the network is of limited geographical size and has a rather simple topology in terms of included switching elements. Further, the constituents forming the multicast group operate within a controlled environment. Regarding reliable multicast for this environment, the following observations can be made:

1. Pure ACK-based protocols do not scale well due to ACK implosion and sender complexity.

2. Tree-based ACK protocols, as well as router assist protocols, require support in intermediate servers or routers between the sender and the receivers. Since we are mainly interested in relatively flat internal networks with standard routers and switches, we are not primarily interested in protocols that require special support in the network.

3. ALC-based protocols have a strong focus on massive scalability. This is a different target compared to a distributed router, which is, in relative terms, a small system. In addition, ALC achieves reliability through redundancy, meaning that more data is sent to the receivers. We believe the capacity of the receivers to be a main limiting factor in our system, and therefore reliability through redundancy may be less effective. Still, an evaluation of ALC-based protocols would be relevant, but it is beyond the scope of this paper.

Based on these observations, we have chosen to focus our studies on NORM, the NACK-based protocol currently subject to standardization within the IETF. NORM is mainly intended for "flat" multicast topologies with relatively homogeneous receivers (in contrast to hierarchical, tree-based topologies with large variations in the receiver


groups). Our environment seems to fit well with the model underlying the design of NORM, which motivates our study of NORM performance. In the performance evaluation that follows, we study two NORM implementations supporting different types of transport services. In the following, these implementations are referred to as NRL NORM (from the Naval Research Laboratory [22]) and INRIA NORM (from the Institut National de Recherche en Informatique et en Automatique [13]).

IV. PERFORMANCE EVALUATION

In the performance evaluation, we experiment with the dissemination of a large routing table from one control element to a set of forwarding elements over the internal control network. A straightforward way to accomplish a reliable distribution service would be to use multiple TCP connections: a TCP connection is established to each receiver, and data from the sender is duplicated over the connections. However, in previous experiments with TCP and NORM [11], [12], we found that the distribution time increases linearly with the number of receivers when TCP is used. We further concluded that the overhead introduced by NORM does not prohibit its usage for routing table distribution. That work was limited to studies of communication overhead with simple receivers that perform no application-level processing of the data, and only INRIA NORM was studied.

A. Experiment Configuration

The experimental set-up consists of one CE and up to 15 FEs. The CE and FEs are based on rack-mounted PCs with 1.70 GHz Intel Pentium 4 processors, running Linux Fedora Core 4 with operating system kernel version 2.6.11. The internal network consists of three 100 Mb/s Ethernet switches (Netgear FSM726S) that interconnect the CE and the FEs. The switches are wire-speed Ethernet switches, and none of the output ports is overloaded in our measurements. Thus, there is no congestion inside the internal network. The structure of the experimental platform is shown in Figure 1.

Figure 1 Experimental platform, consisting of one CE and up to 15 FEs

In the experiments, the CE reads a relatively large number of static routes from a configuration file (100,000 routes) and distributes the routes to the FEs. We measure the time it takes for the routes to be communicated to the FEs, installed by each FE in the operating system kernel, and acknowledged back to the CE. The total amount of data that is distributed is 8.4 MB.
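
As a sanity check, the figures above are mutually consistent with the packet sizes used in the following experiments: 8.4 MB spread over 100,000 routes corresponds to 84 bytes per route, and 16 routes per datagram gives the 1344-byte payload used below. The per-route size is derived here, not stated explicitly in the text; a few lines of Python make the arithmetic explicit.

    total_bytes = 8.4e6          # total routing data distributed (8.4 MB)
    num_routes = 100_000         # static routes read from the configuration file

    bytes_per_route = total_bytes / num_routes      # 84 bytes per route
    routes_per_packet = 16                          # aggregation factor used below
    payload = routes_per_packet * bytes_per_route   # 1344-byte UDP payload

    print(f"{bytes_per_route:.0f} bytes/route, {payload:.0f}-byte payload")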

B. UDP

The first experiment studies the throughput when UDP is used for transport. Figure 2 shows the throughput of UDP transfers from a sender to a single receiver for varying load. Throughput and load refer to UDP payload, so headers for UDP, IP, Ethernet, etc. are not included. The data transmitted in each UDP packet is 16 routes, corresponding to a UDP payload of 1344 bytes.

For the curve "UDP, no FE processing", we short-circuit the application processing at the receiver by letting the receiver process drop the data immediately once it enters the application. The throughput follows the load up to the full link capacity, so in this case the link can be saturated without packet losses. When we add application-level processing of the data at the receiver, packet losses occur before the link has been filled. This is the "UDP, FE processing" case, in which the receiving FE analyses the received data, verifies that it contains valid routes, and installs the routes in the local forwarding table. Here, throughput follows load up to roughly 70 Mb/s, after which packets get dropped due to receiver buffer overflow.

Figure 2 Throughput versus load for our data dissemination scenario.
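
The UDP sender in this experiment can be approximated with a short sketch. The code below is not the measurement tool used in the paper; it is an illustrative multicast sender that packs 16 routes (1344 bytes of payload) per datagram and paces transmission at a configurable payload rate. The multicast group, port, and rate are placeholder values.

    import socket
    import time

    GROUP, PORT = "239.1.1.1", 5000   # placeholder multicast group and port
    PAYLOAD_SIZE = 1344               # 16 routes x 84 bytes
    TARGET_RATE = 70e6                # payload rate in bit/s (hand-tuned value)

    def send_routes(routes, rate=TARGET_RATE):
        """Send route entries over UDP multicast at a fixed payload rate."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
        interval = PAYLOAD_SIZE * 8 / rate      # seconds per datagram
        next_send = time.monotonic()
        for i in range(0, len(routes), 16):
            batch = b"".join(routes[i:i + 16])  # aggregate 16 routes per datagram
            sock.sendto(batch.ljust(PAYLOAD_SIZE, b"\0"), (GROUP, PORT))
            next_send += interval
            delay = next_send - time.monotonic()
            if delay > 0:
                time.sleep(delay)               # crude user-space pacing

    # Example: 100,000 dummy 84-byte routes at a 70 Mbit/s payload rate
    if __name__ == "__main__":
        send_routes([b"\0" * 84] * 100_000)

Pacing with time.sleep() in user space is coarse; the point of the sketch is only that, without throttling of this kind, nothing protects the slower receiver from buffer overflow.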

From this rather simple experiment we conclude that the receiver is slower than the sender in our application. Thus, without mechanisms to throttle the sender, we run the risk of losing packets in the receiver buffers. This motivates the need for a reliable multicast protocol. In addition, we regard the results of this "hand-tuned" UDP (roughly 70 Mbit/s) as a measure of the best performance that could ideally be achieved by NORM, since NORM will introduce additional protocol overhead. Thus, in the analysis of NORM, UDP multicast serves as the reference.

C. Multiple TCP Connections

It could be argued that multiple TCP connections and Gigabit Ethernet would be an appealing alternative for our scenario. We have measured the sender's maximum transmission capacity, which resulted in slightly below 200 Mbit/s. Assuming that the capacity of the receiver is the same for TCP as for UDP above, our distributed application has a transmission capacity of 200 Mbit/s and a maximum receiving capacity of 70 Mbit/s. Furthermore, there is no


congestion in the network, so the throughput is determined solely by the receivers in our case. Hence, our scenario is as follows: with up to three receivers, multiple TCP connections would be able to serve the receivers at near full speed. Thereafter, the sender is saturated and the receiver rate drops to 200/n Mbit/s, where n is the number of receivers. Increasing the number of connections thus simply decreases the speed of each connection. Considering that a distributed router should scale to hundreds of FEs, using multiple TCP connections does not appear to be a viable alternative.

D. NORM Transmission Rate and Throughput

Both INRIA NORM and NRL NORM support the setting of a static, user-specified transmission rate. We use this mechanism to vary the offered load, and Figure 3 shows throughput versus user-specified transmission rate. Throughput is calculated as the amount of data sent by the sender divided by the time it takes until the last receiver is finished.

Figure 3 Throughput versus NORM transmission rate for NRL NORM and INRIA NORM respectively.
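
The throughput metric used in Figure 3 and onwards can be stated compactly: the sender's payload volume divided by the time until the slowest receiver has finished (including route installation and the final acknowledgement). A minimal sketch of that calculation, with hypothetical timestamp inputs, is given below.

    def multicast_throughput(bytes_sent, start_time, finish_times):
        """Throughput as defined in the experiments: payload volume divided by
        the time until the slowest receiver reports completion back to the CE.

        bytes_sent   -- total payload bytes transmitted by the sender
        start_time   -- time when the sender started transmitting (seconds)
        finish_times -- completion times reported by each receiver (seconds)
        """
        elapsed = max(finish_times) - start_time
        return bytes_sent * 8 / elapsed   # bit/s

    # Example with made-up numbers: 8.4 MB to three FEs finishing at different times
    print(multicast_throughput(8.4e6, 0.0, [1.5, 1.6, 1.7]) / 1e6, "Mbit/s")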

NORM defines three different types of content objects: data, file, and stream objects. File and data objects require that the size of the object is known in advance, while the stream-oriented service is designed for streams of continuous data. Streams are more suitable for dynamically produced content, where there are no pre-determined boundaries on the data stream, as is the case for distribution of routes. For NRL NORM, there is a significant performance benefit in using larger data units within the stream. We therefore aggregate route updates into groups of 16 (corresponding to 1344 bytes) before passing them to NORM.

INRIA NORM does not support stream objects, so instead we use the file object delivery service. With this service, the entire object needs to be present before it can be transmitted. A naïve approach would be to represent each route update as a file object. However, there is a high cost for setting up a file object session, so this cost should be amortized over a large amount of data. Therefore, to achieve optimal performance, all route updates are accumulated into one large block of 100,000 messages, which is sent as a single file object. To achieve as high efficiency as possible, INRIA NORM is configured to use the maximum message size (1500 bytes including NORM, UDP, and IP headers).

It can be noted that NRL NORM yields higher throughput than INRIA NORM in this experiment. Our explanation for this is that with NRL stream objects, data is transferred in a concurrent fashion: portions of the data are handed over to the receiving application, and the receiver can start processing data before all data has been transported by NORM. With the file-oriented service provided by INRIA NORM, the behavior is more sequential: all data is first produced by the sender, then transported by NORM, and thereafter handed over to the receiving application as one large file object. Only then do the receivers start processing the data.

The results indicate a performance penalty for both INRIA NORM and NRL NORM compared to ideal, "hand-tuned" UDP—the throughput with NORM is around 40 Mbit/s versus around 70 Mbit/s for "hand-tuned" UDP. However, to what extent the performance degradation is inherent to NORM, or related to the particular implementations, is a subject for further study.

With NORM's NACK-based scheme, we achieve error recovery through retransmissions. An alternative solution would be to use adaptive transmission rate control, in order to dynamically adjust the transmission rate and reduce the number of packet losses. NORM is intended to provide dynamic rate control through the use of a congestion control mechanism [2]. However, no such mechanism is present in the NORM implementations used in our studies, so we have not been able to experiment with automatic adjustment of the transmission rate.

E. Number of Receivers

One important property of a reliable multicast protocol is that it scales well with the number of receivers, so that transmission times do not increase too much when the number of receivers increases. In order to study this aspect, we measure the throughput while varying the number of receivers from 1 to 15. Figure 4 shows the results for INRIA NORM in file object mode and NRL NORM in stream mode.

Figure 4 Throughput versus number of receivers for NRL NORM and INRIA NORM respectively.

To be able to observe variations in the distribution time, the measurements were repeated 10 times. In Figure 4, minimum and maximum values are shown, together with the mean


value. The transmission rate was set to 100 Mb/s. Ideally, when using multicast, the throughput should be independent of the number of receivers. However, with the group sizes included in our measurements, a tendency towards decreasing performance can be discerned when the number of receivers (FEs in our scenario) increases. It can be noted that this degradation is rather modest compared to, for instance, earlier experiences of using multiple TCP connections to provide reliable transport.

NORM is not intended to scale beyond medium-sized groups of receivers [1], [19], and, in general, it is not surprising that throughput is affected when the group size grows. Even though feedback (NACK) suppression is applied, the amount of feedback is expected to increase for larger receiver sets, since such a mechanism is not perfect. However, simulations of NORM have shown high NACK suppression performance for group sizes as small as 15 receivers [1]. Thus, we did not expect to observe throughput degradation with the rather limited group sizes used in our experiments.

Further analysis of the results in Figure 4 indicates that the throughput degradation is coupled to the number of retransmissions performed by the sender. The performance degradation in our measurements appears to depend on the total number of packets that must be transmitted by the sender before all receivers can finish their processing of the disseminated data and acknowledge back to the sender that receiver processing is completed. In Figure 5, the number of retransmissions required by the sender is plotted versus the number of participating receivers. The number of retransmissions is calculated from the same set of data as presented in Figure 4, and we show the average value of the number of retransmissions for each measurement point.

Figure 5 Number of retransmissions required by the sender versus the number of participating receivers.

The results indicate that the number of retransmissions tends to increase with the number of receivers, and this seems to be related to the performance degradation in our measurements.

F. Discussion

The receivers are slower than the sender in our distributed router set-up, which has a large influence on the protocol for route distribution. In particular, the protocol needs to provide mechanisms to deal with packet losses due to receiver buffer overflow. We believe that this situation is not specific to our configuration. In general, it is not uncommon that receivers are more complex than senders: a receiver needs to analyze the received data in order to verify its validity and to decide what operations to perform on the data. We therefore believe that our results are relevant for other distributed systems that are based on high-speed networks, where the links are faster than the processing nodes.

Based on the experiments we have conducted, we argue that NORM is a better choice for this kind of application than alternatives like multiple TCP connections or UDP with user-configured rate control over IP multicast. The former has poor scaling properties and the latter is not a general solution, since user-configured rate control means that the rate has to be hand-tuned for the specific scenario.

Still, it should be noted that it is NORM that limits the performance in the distributed application we have studied. This is somewhat unexpected, since the distributed application in our case, and specifically the receivers, is rather complex when it comes to data processing, memory accesses, and operating system activities. It has been claimed that NORM is a complex protocol [19], which is also indicated by the fact that NORM seems to be the main limiting factor in our scenario. Compared with TCP, which is also a complex protocol, the number of implementations available for experiments is very low. TCP has been extensively studied over the years, and modern TCP implementations are very efficient. It is possible that NORM software implementations may evolve to become more efficient and therefore yield higher throughput.

V. CONCLUSIONS AND FURTHER WORK

We have investigated the performance of a multicast data distribution application in the context of a distributed router. Such a system is characterized by an internal high-speed network, where the communication links are faster than the processing nodes, and by receivers that are slower than the senders. We argue that for such a system, a reliable multicast protocol is suitable for distributing data from a sender node to many receiver nodes. Neither TCP nor UDP multicast works well in this environment: with TCP, the limited I/O capacity of the nodes gives a total transmission time that is directly proportional to the number of receivers, so it does not scale well. UDP multicast, on the other hand, is too fast and causes packet losses at the receivers.

We have performed experimental studies with two implementations of NORM, from INRIA and NRL, and found that there are significant performance penalties compared to the ideal case with "hand-tuned" UDP. The results indicate that the stream mode implemented in NRL NORM is better suited to our application than INRIA NORM's file object mode.

The basic problem we are addressing is simple: we need to protect multicast receivers from buffer overflow. Our


experiments indicate that the price for using NORM for flow control might be rather high, which raises the question of whether it would be possible to design a simpler reliable multicast protocol optimized for this kind of application. We have used user-configured rate control, and for further work it would be interesting to study the effect of adaptive transmission rate control for congestion control. Both TCP-like congestion control and multi-rate (layered) control mechanisms have been simulated and analytically studied by others [5]. We believe that comparisons with experimental studies in our environment could further contribute to the understanding of reliable multicast for distributed systems. Furthermore, we have used NACKs as the sole mechanism for error control. NORM also provides support for error control through Forward Error Correction (FEC), and it would be interesting to study the performance of NACKs in combination with FEC under different packet loss scenarios.

REFERENCES

[1] R. B. Adamson and J. P. Macker, "Quantitative Prediction of NACK-Oriented Reliable Multicast (NORM) Feedback", Proc. MILCOM 2002, vol. 2, pp. 964-969, October 2002.
[2] B. Adamson et al., "Negative-Acknowledgment (NACK)-Oriented Reliable Multicast (NORM) Protocol", IETF RFC 3940, November 2004.
[3] B. Adamson et al., "Negative-Acknowledgment (NACK)-Oriented Reliable Multicast (NORM) Building Blocks", IETF RFC 3941, November 2004.
[4] J. W. Byers, M. Luby, M. Mitzenmacher, and A. Rege, "A Digital Fountain Approach to Reliable Distribution of Bulk Data", Proc. ACM SIGCOMM '98, pp. 56-67, Vancouver, B.C., Canada, August/September 1998.
[5] A. Chaintreau, F. Baccelli, and C. Diot, "Impact of Network Delay Variations on Multicast Sessions with TCP-like Congestion Control", Proc. IEEE INFOCOM '01, vol. 2, pp. 1133-1142, Anchorage, Alaska, April 2001.
[6] S. Floyd, V. Jacobson, C-G. Liu, S. McCanne, and L. Zhang, "A Reliable Multicast Framework for Light-Weight Sessions and Application Level Framing", IEEE/ACM Transactions on Networking, vol. 5, no. 6, December 1997.
[7] ForCES (Forwarding and Control Element Separation) IETF Working Group, URL=http://www.ietf.org/html.charters/forces-charter.html.
[8] J. Gemmell et al., "The PGM Reliable Multicast Protocol", IEEE Network, vol. 17, no. 1, pp. 16-22, January/February 2003.
[9] O. Hagsand, M. Hidell, and P. Sjödin, "Design and Implementation of a Distributed Router", IEEE Symposium on Signal Processing and Information Technology (ISSPIT), Athens, Greece, December 2005.
[10] M. Handley et al., "The Reliable Multicast Design Space for Bulk Data Transfer", IETF RFC 2887, August 2000.
[11] M. Hidell, P. Sjödin, and O. Hagsand, "Control and Forwarding Plane Interaction in Distributed Routers", Proc. Networking 2005, Waterloo, Canada, May 2005.
[12] M. Hidell, P. Sjödin, and O. Hagsand, "Reliable Multicast for Control in Distributed Routers", Proc. 2005 IEEE Workshop on High Performance Switching and Routing (HPSR), Hong Kong, May 2005.
[13] INRIA—MCL website, http://www.inrialpes.fr/planete/people/roca/mcl/mcl.html.
[14] R. Kermode and L. Vicisano, "Author Guidelines for Reliable Multicast Transport (RMT) Building Blocks and Protocol Instantiation Documents", IETF RFC 3269, April 2002.
[15] H. Khosravi and T. Anderson, "Requirements for Separation of IP Control and Forwarding", IETF RFC 3654, November 2003.
[16] B. N. Levine and J. J. Garcia-Luna-Aceves, "A comparison of reliable multicast protocols", Multimedia Systems, vol. 6, pp. 334-348, 1998.

[17] J. C. Lin and S. Paul, "RMTP: A Reliable Multicast Transport Protocol", Proc. IEEE INFOCOM '96, vol. 3, pp. 1414-1424, San Francisco, California, March 1996.
[18] M. Luby et al., "Asynchronous Layered Coding (ALC) Protocol Instantiation", IETF RFC 3450, December 2002.
[19] C. Neumann, V. Roca, and R. Walsh, "Large Scale Content Distribution Protocols", ACM SIGCOMM Computer Communication Review, vol. 35, no. 5, October 2005.
[20] J. Nonnenmacher and E. W. Biersack, "Reliable Multicast: Where to use FEC", Proc. IFIP 5th International Workshop on Protocols for High Speed Networks (PfHSN '96), pp. 134-148, INRIA, Sophia Antipolis, France, October 1996.
[21] J. Nonnenmacher, E. Biersack, and D. Towsley, "Parity-Based Loss Recovery for Reliable Multicast Transmission", Proc. ACM SIGCOMM '97, pp. 289-300, Cannes, France, September 1997.
[22] NRL—NACK-Oriented Reliable Multicast website, http://tang.itd.nrl.navy.mil/5522/norm/norm_index.html.
[23] K. Obraczka, "Multicast Transport Protocols: A Survey and Taxonomy", IEEE Communications Magazine, vol. 36, no. 1, January 1998.
[24] RMT (Reliable Multicast Transport) IETF Working Group, URL=http://www.ietf.org/html.charters/rmt-charter.html.
[25] D. Towsley, J. Kurose, and S. Pingali, "A Comparison of Sender-Initiated and Receiver-Initiated Reliable Multicast Protocols", IEEE Journal on Selected Areas in Communications, vol. 15, no. 3, April 1997.
[26] B. Whetten et al., "Reliable Multicast Transport Building Blocks for One-to-Many Bulk-Data Transfer", IETF RFC 3048, January 2001.

