Small Group Multicast with Deflection Routing in Optical Burst Switched Networks

Xiaodong Huang†, Qingya She†, Tao Zhang¦, Kejie Lu‡, and Jason P. Jue†

† Department of Computer Science, The University of Texas at Dallas, Richardson, TX 75083
¦ Department of Computer Science, New York Institute of Technology, Old Westbury, NY 11568
‡ Department of Electrical and Computer Engineering, University of Puerto Rico at Mayaguez, Mayaguez, PR 00681
Email: {xxh020100, qxs032000, jjue}@utdallas.edu, [email protected], [email protected]

Abstract— In this paper, we focus on the problem of effectively supporting a large number of small group multicasts in optical burst switched (OBS) networks. We first propose a multicast scheme for small group multicast in OBS networks. To reduce burst loss due to potential burst contentions, a deflection routing scheme for multicast is proposed. We then develop an analytical model to evaluate the performance of the multicast scheme and the deflection routing scheme. To the best of our knowledge, this is the first analytical model proposed for the problem of multicast with deflection routing. The analytical model is verified through simulations. Numerical results show that the analytical model is accurate and that deflection routing can significantly reduce burst loss probability while slightly increasing burst delay.

I. INTRODUCTION

Multicast is a communication paradigm in which a message from a source is forwarded to multiple destinations. There are increasing demands for multicast applications, such as real-time video conferencing, distributed database synchronization, and online multi-user video gaming [1]. The number of destinations in a multicast group is referred to as the group size. According to the group size, multicast communications can be roughly categorized into large group multicast and small group multicast. In IP networks, there are separate multicast protocols to support these two types of multicast. For example, DVMRP [2] and Xcast [3] are well suited to large group multicast and small group multicast, respectively. In general, both large group multicast and small group multicast demand more bandwidth than unicast traffic on backbone networks. Fortunately, wavelength division multiplexing (WDM) [4] is becoming the dominant technology in backbones, enabling an optical fiber to carry multiple wavelengths simultaneously. Optical burst switching (OBS) [5] is one of the promising candidates for supporting bursty traffic all-optically in WDM networks. Although multicast could be implemented at different layers, optical layer multicasting has potential advantages over IP layer multicasting, such as no optical-electronic-optical conversions, less delay, more efficient data replication, and a higher degree of data transparency [6] [7]. In this paper, we consider only OBS multicast.

In OBS networks, multiple packets to the same egress edge node are packed together into a data burst at the ingress edge node. The control information for a data burst, contained in a burst header packet (BHP), is transmitted on a separate control channel. BHPs are processed electronically at each intermediate node to reserve network resources before the data burst arrives at the node. Data bursts are then routed all-optically on data channels through the network. Burst contention is a special challenge in implementing OBS, because of the burstiness of OBS transmission and the lack of effective optical buffering techniques. Burst contention occurs when multiple bursts contend for the same outgoing channel on the same wavelength at the same time. There are three well-known contention resolution schemes: wavelength conversion [8], fiber delay line (FDL) buffering [9], and deflection routing [10]. However, wavelength converters are expensive, and FDLs tend to be bulky and provide very limited buffering capability compared to electronic buffers such as those in IP routers. In contrast, deflection routing demands little extra hardware cost. Deflection routing has been extensively studied for the unicast scenario in OBS networks [11]-[14]. There is already extensive research on multicast over optical circuit switching networks [15]-[18]. However, there is relatively little research on multicast over OBS networks [6] [7] [19] [20]. In [6] [7], C. Qiao et al. address the optical multicast tree setup problem in IP over OBS networks based on DVMRP and MOSPF. Their approaches aim at large group multicast. In [19] [20], M. Jeong et al. consider a multicast routing problem in which the overhead of control packets and guard bands is non-negligible. The overhead is minimized by sharing a multicast tree among multiple multicast sessions; the resulting multicast trees are apt to have a large size.
However, signal propagation attenuation and power loss due to optical splitting may put a physical constraint on the group size of OBS multicast sessions. To the best of our knowledge, there is not yet any research on small group multicast in OBS networks. This paper focuses on the problem of effectively supporting a large number of small group multicasts in OBS networks.

We first propose a multicast scheme for small group multicast in OBS networks. To reduce burst loss due to potential burst contentions, a deflection routing scheme for multicast is proposed. We then develop an analytical model to evaluate the performance of the multicast scheme and the deflection routing scheme. The model is very general in the sense that it can handle unicast traffic, multicast traffic, and mixtures of unicast and multicast traffic, with or without deflection routing. To the best of our knowledge, this is the first analytical model proposed for the problem of multicast with deflection routing. The analytical model is verified through simulations.

The rest of the paper is organized as follows. In Section II, we discuss the details of the multicast scheme and the deflection routing scheme. We propose the analytical model in Section III, followed by numerical results in Section IV. The paper is concluded in Section V.

II. SMALL GROUP MULTICAST

A. Assumptions

In this section, we propose an effective multicast scheme to support small group multicast in OBS networks. Throughout the rest of the paper, we make some basic assumptions about the multicast requests and the OBS networks. In the scenario of OBS multicast, the terms source and destination both refer to edge nodes in OBS networks. Communication is unidirectional, i.e., from the source to all destinations. We further assume that there is some group management mechanism such that the source can obtain the list of all destinations in the multicast session. Potential mechanisms include an extension to IP multicast protocols as described in [6], or a third-party communication; the details of such mechanisms are outside the scope of this paper. For OBS networks, we assume that all nodes are multicast-capable optical cross connects (MC-OXCs), either with shared optical splitter pools [18] or with a dedicated optical splitter per input port [21].
We assume that there is no blocking within MC-OXCs, and that there are no wavelength converters or optical buffers in the OBS network. Since the intelligence of an OBS network resides in the control plane, we discuss only the processing of BHPs in the rest of this section.

B. Multicast scheme

We first briefly review explicit multicast (Xcast) [3] in IP networks; we then propose a new multicast scheme, OXcast, for small group multicast in OBS networks by extending Xcast.

Xcast is a recently proposed multicast protocol designed to handle a large number of small multicast groups in IP networks. In Xcast, the source embeds the list of destinations of the multicast session in the header of an explicit multicast packet. Upon receiving an explicit multicast packet, a router looks up its unicast routing table to find the next hop for each destination listed in the header. If the destinations map to multiple next hops, the packet is duplicated into several copies, one copy for each next hop; if there is only one next hop, no duplication is needed. In the header of the copy for a given next hop, only those destinations that are routed through this next hop are embedded in the destination list. Each packet is then routed to its designated next hop in the same way as a unicast packet. This process continues until the packet reaches the destinations or is discarded at some intermediate node. Xcast is scalable in the number of multicast sessions for the following reasons. First, routers do not need to maintain state information about multicast sessions. Second, it uses only unicast routing tables for routing. Third, there is no overhead to manage globally unique multicast addresses. The disadvantage is the overhead in bandwidth and routing due to the processing of the destination list.

Under the assumptions in Section II-A, Xcast can be extended to support small group multicast in OBS networks. We refer to the extended Xcast in OBS networks as OBS explicit multicast (OXcast). The routing algorithm of OXcast is the same as that of Xcast, except that 1) MC-OXC IDs are used instead of IP addresses, and 2) data is routed as bursts instead of IP packets. Thus, we need to discuss only the OBS-specific aspects of the extension: burst assembly, routing, scheduling, and signaling. Burst assembly for OXcast is the same as for unicast, except that an additional control field, the destination list, is embedded in the BHP. The destination list includes all destinations of the multicast session. Since each burst consists of multiple packets, the bandwidth overhead becomes negligible. Determining the optimal multicast tree for an OXcast session is the classic Steiner tree problem, which is NP-hard [22]. For dynamic traffic, as in OBS networks, the algorithm for calculating a multicast tree must run in real time. Among the approximation algorithms for the Steiner tree problem, the shortest path tree (SPT) algorithm [22] is the most appropriate for OXcast.
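The per-hop forwarding step of Xcast described above can be sketched as follows. This is a minimal illustration, not code from the paper; `next_hop` stands in for a lookup in the node's unicast routing table, and the example table is hypothetical.

```python
# Minimal sketch of the Xcast/OXcast per-hop forwarding step.
# `next_hop` stands in for a unicast routing table lookup.

def xcast_forward(destinations, next_hop):
    """Split the explicit destination list by unicast next hop.

    The packet (or BHP) is duplicated once per returned key, each
    copy carrying only its own sub-list in the header.
    """
    copies = {}
    for d in destinations:
        copies.setdefault(next_hop(d), []).append(d)
    return copies

table = {8: "a", 9: "a", 10: "b", 11: "b"}   # hypothetical routing table
print(xcast_forward([8, 9, 10, 11], table.get))
# -> {'a': [8, 9], 'b': [10, 11]}: two copies, one per next hop
```

No per-session state is needed at the node: the grouping is recomputed from the destination list at every hop, which is what makes the scheme scale in the number of sessions.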
In the SPT algorithm, the tree is constructed by merging the shortest paths from the source to all destinations. With the destination list carried in the BHP of a burst, the SPT algorithm can be implemented in a distributed manner, which is exactly the way Xcast works. Thus, calculating the multicast tree in OXcast takes only constant time at each involved node. In OXcast, at a node with multiple child nodes, scheduling toward each child node is independent. Therefore, all proposed burst scheduling algorithms, such as LAUC and LAUC-VF [23], can be used in OXcast without modification. If a burst is blocked at some child branches, it will continue to be routed to the unblocked child branches. We refer to this scheduling policy as the partial blocking policy. The rationale behind this policy is that an OBS network does its best to reach as many destinations as possible; if necessary, a higher layer then retransmits data to the blocked destinations. As some nodes on a multicast tree may have multiple branches, we need to extend OBS unicast signaling protocols to support OXcast. There are four well-known OBS unicast signaling protocols: tell-and-wait, tell-and-go, just-in-time, and just-enough-time [24]. All of these protocols could be

Fig. 1. OXcast with deflection routing: an illustrative example. (a) A multicast burst from source 2 to destinations 8, 9, 10, and 11, in transit from node 2 to node 3; (b) after the burst passes through node 3, assuming no blocking; (c) link ⟨3,7⟩ is blocked; (d) links ⟨3,7⟩, ⟨3,6⟩, and ⟨3,5⟩ are all blocked.

extended as follows. If the burst is successfully scheduled on more than one child branch, the BHP is duplicated, one copy per branch. The destination list in each BHP is updated in the same way as in Xcast. In summary, OXcast can be implemented with only minor extensions of the unicast OBS control plane.

C. Deflection routing scheme

When designing the multicast deflection routing scheme, we use the following guidelines: 1) the scheme should be simple enough to run in real time; 2) no state information about multicast sessions should be kept at routers, which maintains the scalability of explicit multicast; 3) at most one additional outgoing link should be used for all blocked branches, which reduces outgoing link dependency (see Section IV) and avoids overloading the network.

We first define some terms. We use child link to refer to an outgoing link that leads to a child node on the tree. Each outgoing link is assigned a locally unique number called the link index. When a node receives a multicast burst, it attempts to forward the burst on all child links, just as it would without deflection. If some of the child links are blocked, the traffic to the blocked links is deflected to the non-blocked child link with the smallest link index. If all child links are blocked, the traffic is deflected to a non-blocked non-child link; if no such link is available, the request is blocked and discarded. Upon a successful deflection, the destination lists of all blocked child branches are merged into the destination list of the link onto which the blocked traffic is deflected. To illustrate the concept of multicast deflection, an example is presented in Fig. 1. Suppose there is an OXcast burst from source 2 to destinations 8, 9, 10, and 11. The

link index of an outgoing link is the ID of its child node. The routing tree for the burst is shown by the solid lines in Fig. 1(a). Suppose the burst has arrived at node 3. If there is no blocking at node 3, the burst is routed as shown in Fig. 1(b). If link ⟨3,7⟩ is blocked, the burst to destination 8 is deflected onto link ⟨3,5⟩, since this link has a smaller link index than link ⟨3,6⟩, as shown in Fig. 1(c). If all child links are blocked, all traffic is deflected onto link ⟨3,1⟩, as shown in Fig. 1(d).

One issue with deflection routing is the routing loop problem, in which a data packet may pass through the same node more than once. Generally speaking, a routing loop is a waste of resources. However, a temporary routing loop in OBS networks due to deflection routing may be beneficial, because burst contention does not necessarily mean the network is overloaded: by the time a burst is deflected back to a previously visited node, the link that was unavailable in the previous attempt may have become available. A routing loop is thus a way to use the network itself as a buffer. Another issue is the insufficient offset time problem [10], in which a data burst may overtake its BHP after repeated deflections. The measures proposed in [10] can help alleviate this problem in multicast deflection routing as well.

One potential disadvantage of deflection routing is the packet out-of-order problem. Unfortunately, all published deflection routing and alternative routing schemes for packet-like switching share this drawback [11]-[14]. If inexpensive wavelength converters or RAM-like optical buffers were universally available, it might be better to avoid deflection routing; since such technology is not available, deflection routing is an effective technique to reduce data loss in OBS networks, especially for applications that are not time-critical but require reliable data transmission.
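The per-node deflection decision described above can be sketched as follows. This is our own illustration: the function names are made up, and the split of destinations over the child branches of node 3 is our reading of Fig. 1.

```python
# Sketch of the per-node multicast deflection decision (guidelines 1-3).
# `children` maps child link index -> destination list for that branch;
# `blocked` is the set of busy outgoing link indices; `all_links` lists
# every outgoing link index at the node.  All names are ours.

def deflect(children, blocked, all_links):
    """Return {outgoing link: destination list} after deflection."""
    out = {c: list(d) for c, d in children.items() if c not in blocked}
    lost = [dest for c, d in children.items() if c in blocked for dest in d]
    if not lost:
        return out
    if out:
        # Some child links are free: deflect onto the free child link
        # with the smallest link index, merging destination lists.
        out[min(out)] += lost
    else:
        spare = sorted(set(all_links) - blocked - set(children))
        if spare:  # all child links blocked: use one non-child link
            out[spare[0]] = lost
        # otherwise the burst is discarded (no deflection possible)
    return out

# Fig. 1(c): at node 3, assume branches 5 -> [11], 6 -> [9, 10],
# 7 -> [8]; link <3,7> blocked, so destination 8 moves onto link <3,5>.
print(deflect({5: [11], 6: [9, 10], 7: [8]}, {7}, [1, 5, 6, 7]))
# -> {5: [11, 8], 6: [9, 10]}
```

Note that at most one extra outgoing link is ever added (guideline 3), and the decision uses only the burst's own destination list, so no per-session state is kept at the node (guideline 2).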
We do not consider the power loss constraint due to deflection routing in this paper. Even with optical amplifiers, optical signals may not tolerate more than a couple of deflections. The authors of [10] reported that increasing the allowed number of deflections from one to two provides only a marginal gain in burst loss performance for unicast traffic. In future work, we may put a constraint on the allowed number of deflections; we expect that a conclusion similar to that of [10] will also hold for multicast deflection routing.

III. ANALYTICAL MODEL

In this section, we present an analytical model for the proposed OXcast multicast routing scheme. We first observe that, under multicast traffic, the outgoing links at a node are highly dependent on each other; such link dependency is referred to as outgoing link dependency. To account for outgoing link dependency, we model the state of a node by a multi-dimensional continuous time Markov chain. Using the concept of flows, we develop a load estimation model through which the load of a given multicast flow can be distributed over the network. The details of both models are determined by the multicast deflection routing scheme. Finally, following the idea of reduced load approximation [25], we propose

Fig. 2. An illustrative example showing the outgoing link dependency at a node under multicast traffic.

Fig. 3. A generalized node.

a network level model to evaluate the performance of the multicast routing scheme. The network model is based on the node model and the load estimation model.

A. Outgoing link dependency under multicast

In general, the route for a multicast session corresponds to a tree topology, which is referred to as a multicast tree. Nodes on a multicast tree can be categorized into four types:
• Root node: the source of a multicast session; no incoming link, but one or more outgoing links.
• Leaf node: a destination node of a multicast session; one incoming link, but no outgoing link.
• Relay node: an internal node on the multicast tree, with one incoming link and one outgoing link.
• Branching node: an internal node on the multicast tree, with one incoming link and at least two outgoing links.
Here, we are interested in the relationship among outgoing links. Therefore, a root node with only one outgoing link can be treated as a relay node, and a root node with more than one outgoing link as a branching node. A relay node has only one outgoing link, so there is no outgoing link dependency. The outgoing links at a branching node, however, show strong dependency due to the nature of multicasting. This outgoing link dependency has a significant impact on the performance of deflection routing.

An illustrative example is given in Fig. 2. The network has three nodes and three bidirectional links. All requests are assumed to be multicast with two destinations, and shortest path routing is assumed. In this case, source nodes are also branching nodes. Due to the symmetry of the network and the traffic, the two outgoing links at any of the three nodes will be free or busy simultaneously. For example, at node 0: p12 = p1 = p2 ≫ p1 · p2, where p1 and p2 are the probabilities that link ⟨0,1⟩ and link ⟨0,2⟩ are busy, respectively, and p12 is the probability that both links are busy simultaneously. In this example, deflection routing will never succeed, because all outgoing links at any node are dependent and busy simultaneously.

In the above discussion, we considered only a single multicast tree. From a node's point of view, however, a node may participate in more than one multicast tree. In the next subsection, we propose a node model which considers the outgoing link dependency and all possible roles a node may take in different multicast trees.

B. Node model

In general, a node has multiple incoming links and multiple outgoing links. A flow from any incoming link may be destined to one or more outgoing links. Since, from a node's point of view, it does not matter where a flow comes from, all incoming links at a node can be combined into one incoming link, as shown in Fig. 3. We refer to the number of outgoing links of a node as the outgoing degree (or simply degree) of the node, and we assume all outgoing links have the same transmission rate.

We assume that all incoming flows are independent Poisson processes and that burst lengths have a negative exponential distribution. All flows headed to the same set of outgoing links can be aggregated into one Poisson process, because they are handled in the same way by the routing algorithm. (This holds for the routing algorithm of this paper; if flows were treated differently, they should not be aggregated.) Thus, we need only consider a set of aggregated Poisson processes covering all possible sets of outgoing links. For a node with degree m, the total number of possible sets of outgoing links is 2^m − 1. Let U = {1, 2, ..., m} be the set of all outgoing links. We use f_D to denote the aggregated flow destined toward all outgoing links in D, where D ⊆ U and D is nonempty. The arrival rate of flow f_D is denoted λ_D. For a flow f_D with |D| = 1, no optical splitting is necessary; for |D| > 1, the traffic undergoes optical splitting.

Each outgoing link has two states, free or busy. For a node with degree m, an intuitive definition of a state would be a vector of m elements, one per outgoing link. Under multicast traffic, however, this definition does not work: the state information of a node includes not only the states of all outgoing links but also which busy links belong to the same multicast session. Busy links in the same multicast session become free simultaneously, which is not necessarily true for busy links in different sessions. We therefore define a state of a node as a partition of the set of busy outgoing links. It is well known that the number of different partitions of a set of k elements is the Bell number B_k. For a node with degree m, the size of the state space is Σ_{k=0}^{m} C(m,k) B_k = B_{m+1}. For example, the state in which all outgoing links are free is denoted as {φ}; the state
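The partition-based state space just described can be checked numerically. The sketch below is ours (stdlib only, made-up function names): it enumerates the partitions of every possible busy-link set and confirms the count Σ_{k=0}^{m} C(m,k) B_k = B_{m+1}.

```python
# Enumerate the node states (partitions of the busy-link set) and
# confirm the Bell-number count.  Stdlib-only sketch; names are ours.

from itertools import combinations

def bell(n):
    """Bell number B_n via the Bell triangle."""
    row = [1]
    for _ in range(n):
        nxt = [row[-1]]
        for v in row:
            nxt.append(nxt[-1] + v)
        row = nxt
    return row[0]

def partitions(items):
    """Yield every partition of `items` as a list of frozensets."""
    items = list(items)
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for part in partitions(rest):
        for i in range(len(part)):          # add `first` to a block
            yield part[:i] + [part[i] | {first}] + part[i + 1:]
        yield part + [frozenset({first})]   # or start a new block

def node_states(m):
    """All states of a degree-m node: partitions of each busy-link set."""
    links = range(1, m + 1)
    return [p for k in range(m + 1)
              for busy in combinations(links, k)
              for p in partitions(busy)]

# Degree 2: B_3 = 5 states, i.e. {phi}, {{1}}, {{2}}, {{1},{2}}, {{1,2}}
print(len(node_states(2)), bell(3))   # -> 5 5
```

For realistic degrees (2 to 5) the state space stays small: B_3 = 5 up to B_6 = 203.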

Fig. 4. A node with outgoing degree 2 under the partial blocking policy, assuming all links have the same bit rate and all burst lengths are exponentially distributed with the same mean: (a) node model; (b) state transition diagram without deflection routing; (c) state transition diagram with deflection routing.

in which all outgoing links are busy and independent of each other is denoted as {{1}, {2}, ..., {m}}. It is worth noting that the size of the state space is an exponential function of the degree of the node. Fortunately, nodes in real networks usually have very limited degrees, typically between 2 and 5.

We now define the state transitions of the node model, beginning with transitions due to a new arrival. Such transitions depend on the routing algorithm; recall that the partial blocking policy is adopted and that there is no preemption in scheduling. Suppose the current state is π and the incoming flow is f_D with arrival rate λ_D. Since preemption is not allowed, all occupied outgoing links keep their current states; the only potential change is that some free outgoing links become busy. In general, the next state π′ is determined by

π′ = π ∪ {z(D, π)},    (1)

where the function z returns the set of outgoing links that become busy due to the flow f_D. Since z is determined by the routing algorithm, we refer to z as the routing function. Without deflection routing, the routing algorithm of OXcast is straightforward: an incoming flow goes through all of its designated outgoing links that are in the free state. We have

z(D, π) = D − ∪_{a∈π} a.    (2)

With deflection routing, the situation is more involved. According to the proposed deflection routing scheme (see Section II-C), the routing function becomes

z(D, π) = D − ∪_{a∈π} a,       if D − ∪_{a∈π} a ≠ φ;
z(D, π) = g(U − ∪_{a∈π} a),    otherwise,    (3)

where U is the set of all outgoing links, and g(b), b ⊆ U, returns the single-element set containing the minimum link index in b if b is nonempty, and φ otherwise.

State transitions due to a departure are straightforward. Under the assumption that all links have the same transmission rate and that burst lengths have a negative exponential distribution with mean service time 1/µ, each busy block departs at rate µ: for each state π, there is a transition from state π to state π − {a} for each a ∈ π with transition rate µ.

For each state, we can find the next state according to Eq. 1 for each arrival, or according to the departure rule above for each departure. Let r_ij denote the state transition rate from state i to state j. We can then calculate the steady state probabilities P = [p_1, p_2, ..., p_{B_{m+1}}] for a node with degree m from the linear equations

P · Q = [0],    Σ_{i=1}^{B_{m+1}} p_i = 1,    (4)

where Q = [q_ij] is the B_{m+1} × B_{m+1} matrix with

q_ij = r_ij if i ≠ j;    q_ii = −Σ_{k=1}^{B_{m+1}} r_ik,    (5)

and B_{m+1} is the Bell number of m + 1.

An example is shown in Fig. 4. For a node with degree 2, there are three aggregated flows, with arrival rates λ{1}, λ{2}, and λ{1,2}, and B_3 = 5 states: {φ}, {{1}}, {{2}}, {{1},{2}}, and {{1,2}}. Without deflection routing, the state transition diagram is shown in Fig. 4(b). For example, suppose the current state is π = {{1}}. According to Eq. 2, z({1}, π) = φ, z({2}, π) = {2}, and z({1,2}, π) = {2}. Then, according to Eq. 1, if the incoming burst belongs to λ{1}, π′ = {{1}}: the traffic is blocked, since outgoing link 1 is busy. If the incoming burst belongs to λ{2}, π′ = {{1},{2}}, since outgoing link 2 is free. If the incoming burst belongs to λ{1,2}, π′ = {{1},{2}}, due to the partial blocking policy. Note that, in the last case, even though both outgoing links are busy after accepting the "2" part of λ{1,2}, the traffic on outgoing link 1 and outgoing link 2 is not in the same multicast session; therefore, the node transits to state {{1},{2}} rather than {{1,2}}. With deflection routing, the state transition diagram is shown in Fig. 4(c). The differences between Fig. 4(c) and Fig. 4(b) are an

Fig. 5. A flow routed through a node with a degree of m.

additional transition rate from {{1}} to {{1},{2}} and from {{2}} to {{1},{2}}; in all other cases Eq. 3 equals Eq. 2. For example, suppose π = {{1}} and an incoming burst of flow λ{1} arrives. According to Eq. 3, since D − ∪_{a∈π} a = φ, we have z({1}, π) = g({1,2} − {1}) = g({2}) = {2}. Thus, π′ = π ∪ {z({1}, π)} = {{1},{2}}.

C. Load estimation model

Given the steady state probabilities of all nodes in a network, we present a load estimation model through which the load of a given multicast flow can be distributed over the network. As discussed above, a node with degree m has B_{m+1} states. When a burst arrives at a node, the node may be in any one of these states, with probability p_πi of being in state πi. According to the state of the node, the routing algorithm determines the set of outgoing links toward which the burst is routed. Suppose the incoming flow has an arrival rate of λ at a given node. Statistically, an amount λ · p_πi of the flow arrives while the node is in state πi, and this load is then applied to the corresponding set of outgoing links. The routing of a flow through a node is shown in Fig. 5. This approach is applied recursively to calculate the flow's contribution to the load on the entire network.

We define a recursive function ψ(λ, u, D) to describe the load estimation model, where λ is the arrival rate of the incoming flow to node u, and D is the set of final destinations of the flow. The function ψ(λ, u, D) is shown in Algorithm 1. We first define some notation for the function ψ. λ_u^f denotes the accumulated traffic received by node u from flow f. λ_N^u is the arrival rate of bursts destined to a specific set N of outgoing links at node u. θ is a preset threshold. Function g1(u, D, N) implements the next-hop lookup of the OXcast routing algorithm: it takes the current node u and the set of final destinations D of a flow as input parameters, and returns a set of outgoing links in the variable N. Function g2(u, D, π, olist, dlist) implements the routing algorithm of OXcast: it takes the current node u, the destination set D, and the current node state π as input parameters, and returns the next-hop list in olist and the destination list for each next hop in dlist.

Algorithm 1 Load estimation function ψ(λ, u, D)
 1: if u ∈ D then
 2:   λ_u^f ⇐ λ_u^f + λ
 3:   D ⇐ D − {u}
 4: end if
 5: if D ≠ φ then
 6:   call g1(u, D, N)
 7:   λ_N^u ⇐ λ_N^u + λ
 8:   for each state π at node u do
 9:     λ′ ⇐ λ · p_π
10:     if λ′ > θ then
11:       call g2(u, D, π, olist, dlist)
12:       for each o ∈ olist do
13:         call ψ(λ′, o, dlist[o])
14:       end for
15:     end if
16:   end for
17: end if
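A Python rendering of Algorithm 1 may make the recursion concrete. The callbacks g1/g2 and the per-node state probabilities are supplied by the caller; the toy chain at the bottom is our own illustration (all names are ours), not a topology from the paper.

```python
# Sketch of Algorithm 1: recursive load estimation for one flow.

from collections import defaultdict

def psi(lam, u, D, prob, g1, g2, lam_f, lam_N, theta=1e-9):
    """Distribute a flow of rate `lam`, arriving at node u, destined to D.

    prob[u]  -> {state: steady-state probability} for node u
    g1(u, D) -> set N of outgoing links the flow is destined to at u
    g2(u, D, state) -> {next_hop: destination_list} under that state
    lam_f[u] accumulates traffic delivered to destination u
    lam_N[(u, N)] accumulates arrivals for link set N at node u
    """
    D = set(D)
    if u in D:
        lam_f[u] += lam          # node u is itself a destination
        D.discard(u)
    if not D:
        return
    N = frozenset(g1(u, D))
    lam_N[(u, N)] += lam
    for state, p in prob[u].items():
        lam2 = lam * p
        if lam2 > theta:         # prune negligible sub-flows
            for o, dlist in g2(u, D, state).items():
                psi(lam2, o, dlist, prob, g1, g2, lam_f, lam_N, theta)

# Toy instance: chain 0 -> 1 -> 2, single-destination flow, each hop
# free with probability 0.9; the flow reaches node 2 with rate
# 1.0 * 0.9 * 0.9 = 0.81.
prob = {u: {"free": 0.9, "busy": 0.1} for u in range(3)}
g1 = lambda u, D: {u + 1}
g2 = lambda u, D, s: {u + 1: D} if s == "free" else {}
lam_f = defaultdict(float); lam_N = defaultdict(float)
psi(1.0, 0, {2}, prob, g1, g2, lam_f, lam_N)
print(round(lam_f[2], 4))   # -> 0.81
```

Plugging in a g2 that implements the deflection variant of the routing function switches the model between OXcast with and without deflection, exactly as noted in the text.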

Function ψ(λ, u, D) works as follows. If the current node u is one of the intended destinations of the flow, it increases the delivered traffic λ_u^f of flow f and then deletes node u from the destination list. For the remaining destinations, it first determines the set of outgoing links toward which the flow is initially destined, and increases the traffic load λ_N^u for this specific set of outgoing links. Next, ψ considers all possible states of the node and routes the flow through the node: for each state, it determines the next-hop list and the destination list associated with each next hop. For each next hop, a new sub-flow is obtained which takes the next hop as its source and the corresponding destination list as its destinations; ψ is then called recursively to further distribute each sub-flow.

Traffic from a source node to a set of destinations is represented as a multicast flow f. We assume that flow f is a Poisson process with arrival rate λ^f and that burst lengths have a negative exponential distribution with mean 1/µ. The source of flow f is denoted s^f, and its set of destinations D^f. For each flow f, calling ψ(λ^f, s^f, D^f) distributes the flow to the designated destinations (the λ_u^f part for each u ∈ D^f) and increases the arrival rates of the corresponding sets of outgoing links at all involved nodes (the λ_N^u part). Note that if g2 implements the OXcast routing function without deflection, ψ models OXcast without deflection; otherwise, ψ models OXcast with deflection routing.

D. Network model

Based on the node model and the load estimation model, we propose the network analytical model. The model is an iterative procedure which can be outlined as follows.
1) Step 1: For each node, initialize the probability of the steady state in which all outgoing links are free to 1.0 and all other state probabilities to 0;
2) Step 2: For each flow f, calculate the applied loads at the involved nodes in the network by calling the load estimation function ψ(λ^f, s^f, D^f) defined in Section III-C;
3) Step 3: For each node, construct the state transition matrix based on the node model and then calculate the steady state probabilities by solving the linear equations denoted by Eq. 4;
4) Step 4: If the steady state probabilities at all nodes have converged, continue to Step 5; otherwise, go to Step 2;
5) Step 5: Calculate the loss probability of each flow f and the average loss probability of the network as follows:

p_{loss}^f = 1 - \frac{\sum_{u \in D^f} \lambda_u^f}{\lambda^f \cdot |D^f|},    (6)

p_{loss} = \frac{\sum_f (p_{loss}^f \cdot \lambda^f \cdot |D^f|)}{\sum_f (\lambda^f \cdot |D^f|)} = 1 - \frac{\sum_f \sum_{u \in D^f} \lambda_u^f}{\sum_f (\lambda^f \cdot |D^f|)},    (7)

where λ_u^f is obtained by calling the traffic distributing function ψ(λ^f, s^f, D^f) in Step 2. Although, in general, such an iterative process is not guaranteed to converge, the above procedure converged within only a few iterations in all of our test instances.

IV. NUMERICAL RESULTS

In this section, we present numerical results under some typical network settings. We first verify the accuracy of the node model, and then evaluate the network performance through the analytical model and simulations.

A. Accuracy of node model

The node model is the foundation of the network model: only if the node model is accurate can we expect an accurate network model. We consider the traffic loss probability as the performance measure.

Fig. 6. A node with an outgoing degree of 2, fed by flows λ{1}, λ{2}, and λ{1,2}, for the evaluation of the node model.

According to the state transition diagram in Fig. 4, the traffic loss probabilities without and with deflection routing are defined, respectively, as (recall that the partial blocking policy is assumed):

p_{loss} = x{{1},{2}} + x{{1,2}} + \frac{x{{1}} \cdot (λ{1} + λ{1,2}) + x{{2}} \cdot (λ{2} + λ{1,2})}{λ{1} + λ{2} + 2λ{1,2}},    (8)

p_{loss}^{def} = x{{1},{2}} + x{{1,2}}.    (9)
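The iterative procedure of Steps 1–5 above can be sketched in code. This is a minimal sketch of the control flow only: `psi` and `solve_node` are assumed helper functions standing in for the load estimation function of Section III-C and the per-node solver of Eq. 4, which are not reproduced here.

```python
import numpy as np

def solve_network(flows, nodes, psi, solve_node, tol=1e-9, max_iter=100):
    """Sketch of the iterative (reduced-load) network model, Steps 1-5.

    flows      : list of (lam_f, s_f, D_f) tuples
    psi        : assumed helper psi(lam_f, s_f, D_f, state) returning
                 (per-node applied loads, delivered rates {u: lam_f_u});
                 in the full model it thins each flow by the blocking
                 seen at upstream nodes.
    solve_node : assumed per-node solver returning the steady state
                 probability vector for the given applied loads (Eq. 4).
    """
    # Step 1: every node starts in the all-outgoing-links-free state.
    state = {n: None for n in nodes}
    delivered = {}
    for _ in range(max_iter):
        # Step 2: distribute each flow's load over its involved nodes.
        node_loads = {n: [] for n in nodes}
        for f, (lam, s, D) in enumerate(flows):
            loads, lam_u = psi(lam, s, D, state)
            delivered[f] = lam_u
            for n, load in loads.items():
                node_loads[n].append(load)
        # Step 3: re-solve every node model under the updated loads.
        new_state = {n: solve_node(node_loads[n]) for n in nodes}
        # Step 4: stop once all steady state probabilities converge.
        if all(state[n] is not None
               and np.allclose(state[n], new_state[n], atol=tol)
               for n in nodes):
            state = new_state
            break
        state = new_state
    # Step 5: per-flow loss (Eq. 6) and network-average loss (Eq. 7).
    per_flow, num, den = {}, 0.0, 0.0
    for f, (lam, s, D) in enumerate(flows):
        got = sum(delivered[f][u] for u in D)
        per_flow[f] = 1.0 - got / (lam * len(D))
        num += got
        den += lam * len(D)
    return per_flow, 1.0 - num / den
```

With deterministic helpers the loop reaches its fixed point on the second pass; as noted above, convergence is not guaranteed in general but was observed within a few iterations in all tested instances.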

Fig. 7. The traffic loss probability at a node with an outgoing degree of two, without and with deflection routing, when multicast traffic is dominant (α = 0.1, β = 0.2).

Fig. 8. The traffic loss probability at a node with an outgoing degree of two, without and with deflection routing, when unicast traffic is dominant (α = 0.3, β = 0.4).

We present results for nodes with an outgoing degree of 2, as shown in Fig. 6; similar results are obtained for other values of the outgoing degree. Although we are discussing multicast, from a node's point of view some multicast traffic looks like unicast traffic, so we consider mixed traffic at a node. There are three independent Poisson flows with arrival rates λ{1} = αλ, λ{2} = βλ, and λ{1,2} = (1 − α − β)λ, where α, β, (α + β) ∈ [0, 1]. Burst lengths follow a negative exponential distribution, and all outgoing links have the same service rate, with an average service time of 1/μ; the load at the node is defined as λ/μ. Two typical instances are shown in Fig. 7 and Fig. 8. In the first, the multicast traffic λ{1,2} accounts for 70% of the total traffic, while in the second the unicast flows λ{1} and λ{2} together dominate. Other traffic distributions produce similar results. The figures clearly show that the node model is extremely accurate, both without and with deflection routing. We also observe that deflection routing is an effective approach to reducing data loss when the system has no buffer space.
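The no-deflection node model can be solved directly for this degree-2 node. The transition structure below is our reconstruction from the state descriptions (a multicast burst holds both outgoing links for a single exponential holding time; under partial blocking, the copy toward a free link is still accepted while the other copy is dropped); it reproduces the M-case, load = 1.0 row of Table I.

```python
import numpy as np

def node_loss(l1, l2, l12, mu=1.0):
    """Steady state and loss of a degree-2 node, no deflection.

    States: 0 = both links free, 1 = {1} busy, 2 = {2} busy,
            3 = {{1},{2}} (two separate bursts), 4 = {1,2} (one
            multicast burst occupying both links).
    """
    Q = np.zeros((5, 5))
    Q[0, 1], Q[0, 2], Q[0, 4] = l1, l2, l12   # arrivals to an idle node
    Q[1, 3] = l2 + l12    # partial blocking: link-2 copy still accepted
    Q[2, 3] = l1 + l12    # partial blocking: link-1 copy still accepted
    Q[1, 0] = Q[2, 0] = Q[4, 0] = mu          # burst departures
    Q[3, 1] = Q[3, 2] = mu
    np.fill_diagonal(Q, -Q.sum(axis=1))
    # Steady state: solve x Q = 0 together with sum(x) = 1.
    A = np.vstack([Q.T, np.ones(5)])
    b = np.zeros(6)
    b[-1] = 1.0
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    # Eq. (8): fraction of offered copies lost (a multicast burst
    # offers a copy to each link, hence the factor 2 on l12).
    total = l1 + l2 + 2 * l12
    p_loss = x[3] + x[4] + (x[1] * (l1 + l12) + x[2] * (l2 + l12)) / total
    return x, p_loss
```

For the Fig. 8 traffic mix at load 1.0 (λ{1} = 0.3, λ{2} = 0.4, λ{1,2} = 0.3, μ = 1), this yields x{φ} ≈ 0.44935 and p_loss ≈ 0.39480, matching Table I.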

TABLE I
OUTGOING LINK DEPENDENCY UNDER MULTICAST TRAFFIC, WHERE p1, p2, AND p12 ARE THE PROBABILITIES THAT LINK 1 IS BLOCKED, LINK 2 IS BLOCKED, AND BOTH LINK 1 AND LINK 2 ARE BLOCKED, RESPECTIVELY.

U-case (unicast only; the value shown is the U-case load divided by 1.3):

load/1.3 | loss    | x{φ}    | x{{1}}  | x{{2}}  | x{{1},{2}} | x{{1,2}} | p1       | p2       | p12      | p1·p2
0.002    | 0.00131 | 0.99741 | 0.00120 | 0.00140 | 0.000002   | 0.0      | 0.001199 | 0.001402 | 0.000002 | 0.000002
0.004    | 0.00261 | 0.99482 | 0.00239 | 0.00279 | 0.000007   | 0.0      | 0.002394 | 0.002797 | 0.000007 | 0.000007
0.008    | 0.00520 | 0.98968 | 0.00475 | 0.00554 | 0.000027   | 0.0      | 0.004777 | 0.005567 | 0.000027 | 0.000027
0.020    | 0.01291 | 0.97450 | 0.01169 | 0.01364 | 0.000164   | 0.0      | 0.011858 | 0.013807 | 0.000164 | 0.000164
0.040    | 0.02548 | 0.94996 | 0.02280 | 0.02660 | 0.000638   | 0.0      | 0.023437 | 0.027237 | 0.000638 | 0.000638
0.080    | 0.04969 | 0.90360 | 0.04337 | 0.05060 | 0.002430   | 0.0      | 0.045803 | 0.053031 | 0.002430 | 0.002429
0.200    | 0.11558 | 0.78321 | 0.09399 | 0.10965 | 0.013200   | 0.0      | 0.107185 | 0.122849 | 0.013200 | 0.013168
0.400    | 0.20712 | 0.63004 | 0.15121 | 0.17641 | 0.042300   | 0.0      | 0.193510 | 0.218711 | 0.042300 | 0.042323
0.800    | 0.34298 | 0.43313 | 0.20790 | 0.24255 | 0.116000   | 0.0      | 0.323900 | 0.358550 | 0.116000 | 0.116134
1.000    | 0.39480 | 0.36765 | 0.22059 | 0.25735 | 0.154000   | 0.0      | 0.374588 | 0.411353 | 0.154000 | 0.154088

M-case (30% multicast traffic):

load     | loss    | x{φ}    | x{{1}}  | x{{2}}  | x{{1},{2}} | x{{1,2}} | p1       | p2       | p12      | p1·p2
0.002    | 0.00131 | 0.99800 | 0.00060 | 0.00080 | 0.000001   | 0.000599 | 0.001199 | 0.001398 | 0.000600 | 0.000002
0.004    | 0.00261 | 0.99601 | 0.00120 | 0.00159 | 0.000004   | 0.001195 | 0.002394 | 0.002792 | 0.001199 | 0.000007
0.008    | 0.00520 | 0.99205 | 0.00238 | 0.00317 | 0.000014   | 0.002381 | 0.004777 | 0.005569 | 0.002395 | 0.000027
0.020    | 0.01291 | 0.98031 | 0.00589 | 0.00784 | 0.000088   | 0.005882 | 0.011858 | 0.013807 | 0.005970 | 0.000164
0.040    | 0.02548 | 0.96121 | 0.01156 | 0.01536 | 0.000346   | 0.011535 | 0.023438 | 0.027237 | 0.011881 | 0.000638
0.080    | 0.04969 | 0.92469 | 0.02228 | 0.02951 | 0.001332   | 0.022193 | 0.045802 | 0.053030 | 0.023525 | 0.002429
0.200    | 0.11558 | 0.82713 | 0.05007 | 0.06573 | 0.007449   | 0.049628 | 0.107143 | 0.122807 | 0.057076 | 0.013158
0.400    | 0.20712 | 0.69636 | 0.08489 | 0.11009 | 0.025096   | 0.083563 | 0.193548 | 0.218750 | 0.108659 | 0.042339
0.800    | 0.34298 | 0.51434 | 0.12669 | 0.16134 | 0.074194   | 0.123441 | 0.324324 | 0.358974 | 0.197635 | 0.116424
1.000    | 0.39480 | 0.44935 | 0.13889 | 0.17565 | 0.101307   | 0.134804 | 0.375000 | 0.411765 | 0.236111 | 0.154412
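The blocking probabilities in Table I follow directly from the tabulated steady state probabilities via p1 = x{{1}} + x{{1},{2}} + x{{1,2}} (and analogously for p2 and p12, per Eqs. 10–12). As a quick numerical check, the snippet below plugs in the load = 1.0 rows of the two cases:

```python
def link_probs(x1, x2, x12sep, x12joint):
    """Per-link and joint blocking probabilities from the steady
    state probabilities of a degree-2 node (Eqs. 10-12)."""
    p1 = x1 + x12sep + x12joint
    p2 = x2 + x12sep + x12joint
    p12 = x12sep + x12joint
    return p1, p2, p12

# U-case, load = 1.0 (unicast only, so x{{1,2}} = 0):
p1u, p2u, p12u = link_probs(0.22059, 0.25735, 0.154000, 0.0)
# M-case, load = 1.0 (30% multicast traffic):
p1m, p2m, p12m = link_probs(0.13889, 0.17565, 0.101307, 0.134804)

print(p12u, p1u * p2u)   # ~0.154 vs ~0.154: links nearly independent
print(p12m, p1m * p2m)   # ~0.236 vs ~0.154: strongly dependent
```

The per-link probabilities agree across the two cases, while p12 is far larger than p1 · p2 in the M-case, which is precisely the link dependence discussed below.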

After verifying the accuracy of the node model, we use it to explore the outgoing link dependency under multicast traffic discussed in Section III-A. For comparison with the test case of Fig. 8 (referred to as M-case), we consider another test case with α = 0.6/(0.6 + 0.7) and β = 0.7/(0.6 + 0.7) (referred to as U-case). That is, U-case keeps the same load ratio between the two outgoing links as M-case but carries only unicast traffic. For a load λ/μ in M-case, a load of 1.3λ/μ is needed in U-case to obtain the same net load on each outgoing link. The results are shown in Table I. Under the partial blocking policy, it is no surprise that both cases exhibit the same traffic blocking performance. An interesting observation is that, although both cases have the same traffic loss probability under the same node load, their steady state probability distributions are completely different (see the steady state probability columns of Table I). From the steady state probabilities, we can calculate the link blocking probabilities as follows:

p_1 = x{{1}} + x{{1},{2}} + x{{1,2}},    (10)

p_2 = x{{2}} + x{{1},{2}} + x{{1,2}},    (11)

p_{12} = x{{1},{2}} + x{{1,2}},    (12)

where p1, p2, and p12 are the blocking probabilities of link 1, of link 2, and of both links simultaneously, respectively. From the blocking probability columns of Table I, we can see that each individual link has virtually the same blocking probability in both cases (see p1 and p2). However, the probability that both links are busy is much higher in M-case than in U-case (see p12). Surprisingly, under low load, p12 in M-case is not just higher but several orders of magnitude higher than in U-case. We further calculate the product p1 · p2 (the last column of the table). Clearly, p12 is virtually equal to p1 · p2 in U-case, which quantitatively confirms the outgoing link independence under unicast traffic. On the contrary, in M-case, even with only 30% multicast traffic, p12 ≫ p1 · p2, which demonstrates a strong outgoing link dependence under multicast traffic.

B. Network performance

In this subsection, we evaluate the network performance through the network model and simulations. The network topology under consideration is shown in Fig. 9. All links are bidirectional, with the distance between neighboring nodes given in kilometers. All links have the same transmission rate of 10 Gb/s, and the speed of light in optical fiber is assumed to be 250 km/ms. Bursts arrive to the network according to a Poisson process with an arrival rate of λ bursts per second, and each request has 4 destinations; the source and destinations of a multicast request are evenly distributed among all nodes. The length of a burst is exponentially distributed with an expected service time of 1/μ seconds, so we define the network load as λ/μ. There are no optical buffers or wavelength converters; the WDM OBS network therefore consists of multiple independent planes, one per wavelength. As a first step, only a single wavelength plane is considered, and a TAG-based signaling protocol is assumed. The performance metrics are the average blocking probability and the average delay.

Fig. 9. A network with 14 nodes and 21 bi-directional links (link distances in kilometers).

Fig. 10 and Fig. 11 show the average blocking probability as a function of the network load. Deflection routing significantly reduces the blocking probability when the load is low. The reason is that, when the network is lightly loaded, if a burst is blocked at one outgoing link, there is a much higher probability that one of the other outgoing links will be free. When the network load is high, deflection routing further overloads the network, which degrades the network performance.

Fig. 10. Average blocking probability vs. network load (under low load).

Fig. 11. Average blocking probability vs. network load (under high load).

Fig. 12 shows the average burst delay measured under varying network loads. Without deflection routing, the burst delay decreases continually as the load grows, since bursts traversing longer paths have a higher chance of being blocked. With deflection routing, deflected bursts take longer paths: as the load begins to increase, deflected bursts still have a good chance of reaching their destinations, so the average delay increases; beyond some point, bursts have a lower chance of being deflected, or a higher chance of being blocked even after deflection, and the average delay decreases again.

Fig. 12. Average burst delay (in ms) vs. network load.

In summary, Figs. 10–12 show that deflection routing under multicast traffic performs similarly to deflection routing under unicast traffic, which is consistent with what we would expect from multicast deflection routing. At the same time, Figs. 10 and 11 clearly show that the proposed analytical model is quite accurate over the range of network loads, both with and without deflection routing.

V. CONCLUSIONS

We have studied the problem of supporting small group multicast in OBS networks. We proposed a multicast scheme, OXcast, for small group multicast in OBS networks, based on Xcast in IP networks, and found that only minor extensions to existing OBS techniques are necessary to support it. To reduce data loss due to burst contention, we proposed a deflection routing scheme. We then developed an analytical model to evaluate the performance of the proposed schemes: a node model was first constructed, based on our observation of the outgoing link dependency under multicast traffic, and, following the idea of reduced load approximation, an iterative network model was built with the node model as its cornerstone. Numerical results show that the analytical model is accurate and that deflection routing can significantly reduce the burst loss probability while only slightly increasing the burst delay.

ACKNOWLEDGMENTS

This work was supported in part by the National Science Foundation (NSF) under Grant number AN-0133899.

REFERENCES

[1] S. Paul, Multicasting on the Internet and Its Applications. Boston: Kluwer Academic, 1998.
[2] D. Waitzman, C. Partridge, and S. Deering, "Distance vector multicast routing protocol," IETF RFC 1075, 1988.
[3] R. Boivie, N. Feldman, Y. Imai, W. Livens, D. Ooms, and O. Paridaens, "Explicit multicast (Xcast) basic specification," IETF Internet Draft, Jul. 2005.
[4] B. Mukherjee, Optical Communication Networks. McGraw-Hill, 1997.
[5] C. Qiao and M. Yoo, "Optical burst switching (OBS) - a new paradigm for an optical Internet," Journal of High Speed Networks, vol. 8, no. 1, pp. 69-84, Jan. 1999.
[6] C. Qiao, M. Jeong, A. Guha, X. Zhang, and J. Wei, "WDM multicasting in IP over WDM networks," in Proc. IEEE International Conference on Network Protocols, 1999.
[7] X. Zhang, J. Wei, and C. Qiao, "On fundamental issues in IP over WDM multicasting," in Proc. Int'l Conf. on Computer Communications and Networks (IC3N), Boston, MA, Oct. 1999.
[8] B. Ramamurthy and B. Mukherjee, "Wavelength conversion in WDM networking," IEEE Journal on Selected Areas in Communications, vol. 16, no. 7, pp. 1061-1073, Sept. 1998.
[9] C. Gauger, "Dimensioning of FDL buffers for optical burst switching nodes," in Proc. Optical Network Design and Modeling (ONDM 2002), Torino, Feb. 2002.
[10] C. Hsu, T. Liu, and N. Huang, "Performance analysis of deflection routing in optical burst switched networks," in Proc. IEEE INFOCOM 2002, vol. 1, Jun. 2002.
[11] S. Kim, N. Kim, and M. Kang, "Contention resolution for optical burst switching networks using alternative routing," in Proc. IEEE ICC 2002, pp. 2678-2681, 2002.
[12] X. Wang, H. Morikawa, and T. Aoyama, "Deflection routing protocol for burst switching SDM mesh networks," IEEE Optical Magazine, Nov./Dec. 2002.
[13] S. Lee, K. Sriram, H. Kim, and J. Song, "Contention-based limited deflection routing in OBS networks," in Proc. IEEE GLOBECOM 2003, 2003.
[14] A. Zalesky, H. L. Vu, Z. Rosberg, E. W. M. Wong, and M. Zukerman, "Modelling and performance evaluation of optical burst switched networks with deflection routing and wavelength reservation," in Proc. IEEE INFOCOM 2004, 2004.
[15] G. N. Rouskas, "Optical layer multicast: rationale, building blocks, and challenges," IEEE Network, vol. 17, no. 1, pp. 60-65, Jan./Feb. 2003.
[16] G. Sahin and M. Azizoglu, "Multicast routing and wavelength assignment in wide area networks," Proc. SPIE, vol. 3531, pp. 196-208, 1998.
[17] R. Malli, X. Zhang, and C. Qiao, "Benefit of multicasting in all-optical networks," in Proc. SPIE Conf. All-Optical Networking, vol. 3531, pp. 209-220, Nov. 1998.
[18] L. H. Sahasrabuddhe and B. Mukherjee, "Light-trees: optical multicasting for improved performance in wavelength-routed networks," IEEE Communications Magazine, vol. 37, no. 2, pp. 67-73, Feb. 1999.
[19] M. Jeong, H. Cankaya, and C. Qiao, "On a new multicasting approach in optical burst switched networks," IEEE Communications Magazine, vol. 40, no. 11, pp. 96-103, Nov. 2002.
[20] M. Jeong, C. Qiao, Y. Xiong, H. Cankaya, and M. Vandenhoute, "Tree-shared multicast in optical burst-switched WDM networks," IEEE/OSA Journal of Lightwave Technology, vol. 21, no. 1, pp. 13-24, Jan. 2003.
[21] W. S. Hu and Q. J. Zeng, "Multicasting optical cross connects employing splitter-and-delivery switch," IEEE Photonics Technology Letters, vol. 10, no. 7, pp. 970-972, Jul. 1998.
[22] F. K. Hwang, D. S. Richards, and P. Winter, The Steiner Tree Problem. Elsevier Science Publishers, 1992.
[23] Y. Xiong, M. Vandenhoute, and H. C. Cankaya, "Control architecture in optical burst-switched WDM networks," IEEE Journal on Selected Areas in Communications, vol. 18, pp. 1838-1851, 2000.
[24] Y. Chen, C. Qiao, and X. Yu, "Optical burst switching: a new area in optical networking research," IEEE Network, vol. 18, no. 3, pp. 16-23, May/Jun. 2004.
[25] R. R. Cooper and S. Katz, "Analysis of alternative routing networks with account taken of nonrandomness of overflow traffic," Technical Memorandum, Bell Telephone Laboratories, 1964.