Optimizing Inter-domain Multicast through DINloop with GMPLS

Huaqun Guo¹,², Lek Heng Ngoh², and Wai Choong Wong¹,²

¹ Department of Electrical & Computer Engineering, National University of Singapore, 4 Engineering Drive 3, Singapore 117576
² Institute for Infocomm Research, A*STAR, 21 Heng Mui Keng Terrace, Singapore 119613
E-mail: {guohq, lhn, lwong}@i2r.a-star.edu.sg

Abstract. This paper proposes DINloop (Data-In-Network loop) based multicast with GMPLS (Generalized Multiprotocol Label Switching) to overcome the scalability problems of current inter-domain multicast protocols. In our approach, multiple multicast sessions share a single DINloop instead of each constructing an individual multicast tree. The DINloop is a special path formed by using GMPLS to establish LSPs (Label Switched Paths) between DIN Nodes, the core routers that connect to each intra-domain. We adopt a link bundling method to use fewer labels. The Packet Processing Module analyzes each incoming multicast message and the GMPLS Manager assigns a label to it; the multicast packet is then forwarded quickly using its label. Simulations show that DINloop-based multicast requires the fewest messages to form the multicast structure compared with conventional inter-domain multicast protocols. In addition, the routing table size in core routers does not increase as the number of multicast groups increases, and therefore routing scalability is achieved.

1

Introduction

IP Multicast is an extremely useful and efficient mechanism for multi-point communication and distribution of information. However, the current infrastructure for global, Internet-wide multicast routing faces several problems. Existing multicast routing mechanisms broadcast some information and therefore do not scale well to groups that span the Internet. Multicast routing protocols like DVMRP (Distance Vector Multicast Routing Protocol) [1] and PIM-DM (Protocol Independent Multicast - Dense Mode) [2] periodically flood data packets throughout the network. MOSPF (Multicast Open Shortest Path First) [3] floods group membership information to all routers so that they can build multicast distribution trees. Protocols like PIM-SM (Protocol Independent Multicast - Sparse Mode) [4] and CBT (Core Based Tree) [5] scale better by having members explicitly join a multicast distribution tree rooted at a Rendezvous Point (RP). However, in PIM-SM, RPs in one domain have no way of knowing about sources located in other domains [6]. CBT builds a bidirectional tree rooted at a core router, and the use of a single RP is potentially subject to overloading and a single point of failure. Thus, these protocols are mainly used for intra-domain multicast.

Multicast Source Discovery Protocol (MSDP) [7] and Border Gateway Multicast Protocol (BGMP) [8] were developed for inter-domain multicast. BGMP requires the Multicast Address-Set Claim protocol to form the basis of a hierarchical address allocation architecture. However, this is not supported by the present non-hierarchical address allocation architecture, and it is not suitable for dynamic setup. MSDP requires a multicast router to keep forwarding state for every multicast tree passing through it, and the number of forwarding states grows with the number of groups [9]. In addition, MSDP floods source information periodically to all other RPs on the Internet using TCP links between RPs [10]. If there are thousands of multicast sources, the number of Source Active (SA) messages flooded around the network increases linearly. Thus, current inter-domain multicast protocols suffer from scalability problems. MPLS Multicast Tree (MMT) [9] utilizes multiprotocol label switching (MPLS) [11] LSPs between multicast tree branching node routers in order to reduce forwarding state and enhance scalability. However, its centralized network information manager system is a weak point, and its number of forwarding states in inter-domain routers still grows with the number of groups. Edge Router Multicasting (ERM) [12] is another MPLS-based approach reported in the literature. ERM limits branching points of the multicast delivery tree to the edges of MPLS domains. However, it is tied to MPLS traffic engineering and is an intra-domain routing scheme. To overcome the scalability problems in inter-domain multicast and the centralized weakness of MMT, DINloop-based multicast with GMPLS [13] is proposed to optimize inter-domain multicast. The remainder of this paper is organized as follows. Section 2 outlines the solution overview. Section 3 presents the details of DINloop-based multicast.
To further investigate the proposed solution, we present experimental results in Section 4 followed by the conclusion in Section 5.

2

Solution Overview

In our approach, the core router that connects each domain to the core network, referred to as a DIN Node, is chosen as the RP for that domain. Within a domain, the multicast tree is formed similarly to bidirectional PIM-SM, rooted at the associated DIN Node. In the core network, multiple DIN Nodes form a DINloop (see Section 3.2) using GMPLS for inter-domain multicast. The idea is that, in order to reduce multicast forwarding state and, correspondingly, tree maintenance overhead in the core network, multiple multicast sessions share a single DINloop instead of a tree being constructed for each individual multicast session. DINloop-based multicast achieves scalability in inter-domain multicast with the following distinct advantages:
• In the inter-domain case, multiple multicast sessions sharing a single DINloop reduces multicast state and, correspondingly, tree maintenance overhead in the core network.
• DINloop-based multicast consumes fewer labels than other multicast protocols.
• The link bundling mechanism (described in Section 3.1) further improves routing scalability and reduces routing look-up time for fast routing.
• The DINloop-based multicast structure is set up fastest, with the fewest control messages, for many-to-many multicast crossing multiple domains.

3

DINloop-based Multicast

Our scheme uses GMPLS to set up the DINloop in the core network such that LSPs are established between DIN Nodes. One example of DINloop-based multicast is shown in Fig. 1. DIN Nodes A, B, C and D are the core routers and function as RPs for their associated intra-domains (e.g., the dotted-square areas), respectively. The control modules in DIN Nodes are shown in Fig. 2. Intermediate System-to-Intermediate System (IS-IS) and Open Shortest Path First (OSPF) are the routing protocols. Resource Reservation Protocol - Traffic Engineering (RSVP-TE) and Constraint-based Routed Label Distribution Protocol (CR-LDP) are the signalling protocols for the establishment of LSPs. Link Management Protocol (LMP) is used to establish, release and manage connections between two adjacent DIN Nodes.

Fig. 1. DINloop-based multicast

Fig. 2. Control Modules in DIN Node

3.1

Label Description

We use the Generalized Label, which extends the traditional MPLS label by allowing the representation not only of labels that travel in-band with associated data packets, but also of (virtual) labels that identify wavelengths. To avoid a large routing table and to speed up table look-ups, we adopt a method that stores one multicast group in one wavelength and binds all multicast groups into a bundled link. When a pair of DIN Nodes is connected by multiple links, it is possible to advertise several (or all) of these links as a single link (a bundled link) into OSPF and/or IS-IS. Each bundled link corresponds to a label. The purpose of link bundling is to improve routing scalability by reducing the amount of information that has to be handled by OSPF and/or IS-IS. The Generalized Label Request communicates the characteristics required to support the LSP being requested. The information carried in a Generalized Label Request [13] is shown in Fig. 3. The LSP Encoding Type is 8 bits long and indicates the encoding of the LSP being requested; in our scheme, its value is 8, which means the encoding type is Lambda. The Switching Type is 8 bits long and indicates the type of switching that should be performed on a particular link; in our scheme, the value is 150, which means the switching type is Lambda-Switch Capable (LSC). The Generalized PID (G-PID) is 16 bits long and identifies the payload carried by an LSP, i.e., the client layer of that LSP. It is used by the nodes at the endpoints of the LSP and, in some cases, by the penultimate hop.

Fig. 3. Generalized Label Request
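The three fields above pack into the 32-bit object of Fig. 3. A minimal sketch of that encoding, using the values stated in the text (LSP Encoding Type 8 = Lambda, Switching Type 150 = LSC); the G-PID value used in the usage line is an illustrative assumption, not one fixed by the paper:

```python
import struct

# Field values from Section 3.1 / RFC 3471.
LSP_ENC_LAMBDA = 8    # LSP Encoding Type: Lambda (8 bits)
SWITCHING_LSC = 150   # Switching Type: Lambda-Switch Capable (8 bits)

def pack_label_request(g_pid: int) -> bytes:
    """Pack the 4-byte Generalized Label Request of Fig. 3:
    8-bit encoding type, 8-bit switching type, 16-bit G-PID."""
    return struct.pack("!BBH", LSP_ENC_LAMBDA, SWITCHING_LSC, g_pid)

def unpack_label_request(data: bytes):
    """Recover the three fields from the wire format."""
    return struct.unpack("!BBH", data)

# Hypothetical usage: G-PID 33 is assumed here purely for illustration.
req = pack_label_request(33)
assert unpack_label_request(req) == (8, 150, 33)
```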

LSC uses multiple data channels/links controlled by a single control channel. The label, with a 32-bit label space, indicates the data channel/link to be used for the LSP.

3.2

Setting up DINloop

One DIN Node is elected as the initial node, referred to as DIN Node 1. The other DIN Nodes are named in loop sequence as DIN Node 2, DIN Node 3, …, DIN Node n. First, starting from DIN Node 1, the GMPLS Manager configures a control channel and data channels to connect to DIN Node 2. We assume that the control channel is set up on Wavelength 0 and that Wavelengths 1 to n are used as data channels. The GMPLS Manager bundles Wavelengths 1 to n into a link with a bundled-link label after looking up the Label Table. Second, LMP is enabled on both DIN Node 1 and DIN Node 2. Third, an LSP is configured from DIN Node 1 to DIN Node 2: DIN Node 1 sends an LSP request to the neighbouring DIN Node 2 through the signalling module.

For example, in Fig. 4, DIN Node 1 sends an initial PATH/Label Request message to DIN Node 2. The PATH/Label Request contains a Generalized Label Request. DIN Node 2 sends back a RESV/Label Mapping. When the generalized label is received by the initiator DIN Node 1, it establishes an LSP with DIN Node 2 via an RSVP/PATH message. Similarly, DIN Node 2 and DIN Node 3, …, and DIN Node n and DIN Node 1 establish LSPs, closing the loop.

Fig. 4. LSP creation for DINloop
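The ring-closure step above can be sketched as follows; this is a simplified model of the setup order, not the paper's implementation, and the label strings are illustrative placeholders for the labels returned in the RESV/Label Mapping:

```python
def build_dinloop(num_nodes: int):
    """Return the LSPs (node -> loop successor) that form the DINloop.

    Each DIN Node sends a PATH/Label Request (carrying a Generalized
    Label Request) to its successor and receives a RESV/Label Mapping;
    the final LSP from DIN Node n back to DIN Node 1 closes the loop.
    """
    lsps = []
    for i in range(1, num_nodes + 1):
        succ = i % num_nodes + 1           # DIN Node n wraps to DIN Node 1
        label = f"bundled-link-{i}-{succ}" # placeholder for the mapped label
        lsps.append((i, succ, label))
    return lsps

loop = build_dinloop(4)   # four LSPs: 1->2, 2->3, 3->4, 4->1
```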

3.3

Assign Labels to Multicast Messages

When multicast packets arrive at a DIN Node, the Packet Processing Module is activated. It inspects the IP header of each multicast packet and identifies the source and destination addresses. This information is then reported to the GMPLS Manager, which looks up the Label Table and assigns an empty wavelength to the multicast message, i.e., a wavelength (component link) ID under the bundled link ID corresponding to the destination address.

3.4

Forwarding Multicast Messages between Domains

The DINloop in the core network allows DIN Nodes to share information about active sources, while each DIN Node knows the receivers in its own local domain. For example, in Fig. 1, when DIN Node B receives the join message for multicast group G from Receiver HB, it posts the join message onto the DINloop. When DIN Node A in the source domain receives this join message, it learns that there is a receiver in a remote domain and forwards the multicast message onto the DINloop. DIN Node B then uses the combination of two identifiers (bundled link ID (label ID), component link ID) to retrieve the multicast message from the DINloop and forward it to its local receiver HB. Multicast data can thus be forwarded between the domains. Similarly, Receivers HC1, HC2 and HD in other domains can join multicast group G and receive the multicast data from Source S.
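Sections 3.3 and 3.4 together can be sketched as a small label-assignment and retrieval loop. The class and identifier names below (`GMPLSManager`, `bundled-link-1`, etc.) are illustrative assumptions, not names from the paper:

```python
class GMPLSManager:
    """Sketch of label assignment (Sec. 3.3) and retrieval (Sec. 3.4)."""

    def __init__(self, bundled_link_id="bundled-link-1", num_wavelengths=8):
        self.bundled_link_id = bundled_link_id
        # Free wavelengths act as component links under the bundled link.
        self.free = list(range(1, num_wavelengths + 1))
        self.label_table = {}  # group -> (bundled link ID, component link ID)

    def assign(self, group):
        """Assign an empty wavelength (component link) to a group."""
        if group not in self.label_table:
            self.label_table[group] = (self.bundled_link_id, self.free.pop(0))
        return self.label_table[group]

    def retrieve(self, bundled_id, component_id):
        """Find the group carried on a (bundled link, component link) pair,
        as a remote DIN Node does when pulling a message off the DINloop."""
        for group, ids in self.label_table.items():
            if ids == (bundled_id, component_id):
                return group
        return None

mgr = GMPLSManager()
label = mgr.assign("group-G")          # ("bundled-link-1", 1)
assert mgr.retrieve(*label) == "group-G"
```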

4

Experimental Results

4.1

Message Number

We use the message number to evaluate the performance of DINloop-based multicast against MSDP and BGMP. The message load denotes the number of signaling messages needed to form the multicast structure; the metric is incremented by one whenever a signaling message traverses a link. MSDP floods source information periodically to all other RPs using TCP links between RPs. MSDP also allows receivers to switch to a shortest-path tree to receive packets from sources; we refer to this as MSDP+source. BGMP builds a bidirectional shared tree rooted at the root domain, referred to as BGMP+shared. In addition, BGMP allows source-specific branches to be built by the domains; we refer to the tree consisting of the bidirectional tree plus the source-specific branches as BGMP+source. The simulations ran on network topologies generated with the Georgia Tech random graph generator [14] according to the transit-stub model [15]. We generated different network topologies. The number of nodes in the transit domain was fixed at 25, i.e., 25 DIN Nodes for 25 domains. The number of nodes in stub domains was fixed at 1000, spread across the 25 domains. The number of sources varied from 2 to 25, spread across the 25 domains. The results are shown in Fig. 5.

Fig. 5. Message number comparison on the effect of source number

As Fig. 5 shows, the message numbers in MSDP and BGMP increase linearly as the source number increases, with a steeper slope for MSDP than for BGMP. The message number in DINloop-based multicast is the smallest and is not affected by the source number, since all sources share the same DINloop.
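The shape of the curves can be seen in a deliberately simplified back-of-the-envelope model; the formulas below are our own assumptions for illustration, not the simulation code behind Fig. 5:

```python
def msdp_load(num_domains: int, num_sources: int) -> int:
    """Each source's SA message is flooded to the other (num_domains - 1)
    RPs, so the load grows linearly with the source count."""
    return num_sources * (num_domains - 1)

def dinloop_load(num_domains: int, num_sources: int) -> int:
    """One traversal of the shared DINloop (one link per DIN Node),
    independent of how many sources there are."""
    return num_domains

# With 25 domains, MSDP's load grows with sources; DINloop's does not.
assert msdp_load(25, 10) > msdp_load(25, 2)
assert dinloop_load(25, 10) == dinloop_load(25, 2)
```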

4.2

Forwarding State Size

In this sub-section, we compare the forwarding state size kept in core routers. We refer to the current inter-domain multicast protocols collectively as the non-DINloop scheme. In the non-DINloop scheme, the core router inspects the IP header of every packet and identifies the destination address. The core router then looks up the routing table and forwards the packet to the correct interface (Fig. 6). The routing table size therefore grows linearly with the number of multicast groups.

Multicast Group | Destination Address | Interface
Group 1 | Address 1 | Interface 1
Group 2 | Address 2 | Interface 2
Group 3 | Address 3 | Interface 3
…… | …… | ……
Group n | Address n | Interface k

Fig. 6. Routing table size in non-DINloop scheme

In DINloop-based multicast, the single DINloop is shared by all multicast groups, so all groups are bundled into a single bundled link (Bundled Link 1 in Fig. 7) corresponding to one label (Label 1). Each label has one egress interface, i.e., Label 1 corresponds to Interface 1 in the routing table, while individual groups are distinguished by their component link IDs under the bundled link and retrieved accordingly.

Multicast Group | Bundled Link ID | Component Link ID
Group 1 | Bundled Link 1 | Component Link 1
Group 2 | Bundled Link 1 | Component Link 2
Group 3 | Bundled Link 1 | Component Link 3
…… | …… | ……
Group n | Bundled Link 1 | Component Link n

Routing Table: Label ID | Interface
Label 1 | Interface 1

Fig. 7. Routing table size in DINloop-based multicast
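The contrast between Figs. 6 and 7 can be made concrete with a small sketch; the table-building functions and identifier strings below are our own illustrative assumptions:

```python
def non_dinloop_table(groups):
    """Non-DINloop scheme (Fig. 6): one routing entry per multicast
    group, so the table grows linearly with the group count."""
    return {g: (f"address-{g}", f"interface-{g}") for g in groups}

def dinloop_tables(groups):
    """DINloop scheme (Fig. 7): every group maps into one bundled link
    with its own component link, while the routing table itself holds a
    single constant entry (Label 1 -> Interface 1)."""
    group_map = {g: ("bundled-link-1", f"component-link-{i}")
                 for i, g in enumerate(groups, start=1)}
    routing_table = {"label-1": "interface-1"}  # size 1, regardless of groups
    return group_map, routing_table

groups = [f"group-{i}" for i in range(1, 101)]
assert len(non_dinloop_table(groups)) == 100   # grows with groups
_, rt = dinloop_tables(groups)
assert len(rt) == 1                            # constant
```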

From Fig. 7, we observe that the routing table size does not increase as the number of multicast groups increases. DINloop-based multicast therefore improves routing scalability for inter-domain multicast and reduces routing look-up time for fast forwarding, at the price of adding an additional GMPLS header to the multicast messages.

5

Conclusion

DINloop-based multicast with GMPLS was presented to optimize inter-domain multicast. Instead of constructing a tree for each individual multicast session in the core network, multiple multicast sessions share a single DINloop formed using GMPLS LSPs between multiple DIN Nodes. We adopt a link bundling method to use fewer labels. The Packet Processing Module analyzes each incoming multicast message, and the GMPLS Manager assigns an empty wavelength to it, i.e., a component link ID under the bundled link ID corresponding to the destination address. The multicast packet is then forwarded quickly using its label rather than its address. Simulations show that DINloop-based multicast requires the fewest messages to form the multicast structure compared with conventional inter-domain multicast protocols. In addition, the routing table size in core routers does not increase as the number of multicast groups increases, and therefore routing scalability is achieved.

References

1. Waitzman, D., Partridge, C., Deering, S.: Distance Vector Multicast Routing Protocol. RFC 1075 (1988)
2. Adams, A., Nicholas, J., Siadak, W.: Protocol Independent Multicast - Dense Mode (PIM-DM): Protocol Specification (Revised). Internet Draft (2003)
3. Moy, J.: Multicast Extensions to OSPF. RFC 1584 (1994)
4. Estrin, D., Farinacci, D., Helmy, A., Thaler, D., Deering, S., Handley, M., Jacobson, V., Liu, C., Sharma, P., Wei, L.: Protocol Independent Multicast - Sparse Mode (PIM-SM): Protocol Specification. RFC 2362 (1998)
5. Ballardie, A.: Core Based Trees (CBT version 2) Multicast Routing. RFC 2189 (1997)
6. IP Multicast Technology Overview. http://www.cisco.com/univercd/cc/td/doc/cisintwk/intsolns/mcst_sol/mcst_ovr.htm#53693
7. Fenner, B., Meyer, D. (eds.): Multicast Source Discovery Protocol (MSDP). RFC 3618 (2003)
8. Thaler, D.: Border Gateway Multicast Protocol (BGMP): Protocol Specification. RFC 3913 (2004)
9. Boudani, A., Cousin, B., Bonnin, J.M.: An Effective Solution for Multicast Scalability: The MPLS Multicast Tree (MMT). Internet Draft (2004)
10. Diot, C., Levine, B.N., Lyles, B., Kassem, H., Balensiefen, D.: Deployment Issues for the IP Multicast Service and Architecture. IEEE Network, January 2000, 78-88
11. Rosen, E., Viswanathan, A., Callon, R.: Multiprotocol Label Switching Architecture. RFC 3031 (2001)
12. Yang, B., Mohapatra, P.: Edge Router Multicasting with MPLS Traffic Engineering. ICON (2002)
13. Berger, L.: Generalized Multi-Protocol Label Switching (GMPLS) Signaling Functional Description. RFC 3471 (2003)
14. Zegura, E., Calvert, K., Bhattacharjee, S.: How to Model an Internetwork. Proc. of IEEE Infocom (1996)
15. Modeling Topology of Large Internetworks. http://www.cc.gatech.edu/projects/gtitm/
