IEEE TRANSACTIONS ON MOBILE COMPUTING, VOL. 14, NO. 10, OCTOBER 2015

Network Layer Support for Gigabit TCP Flows in Wireless Mesh Networks

Chin-Ya Huang, Member, IEEE, and Parameswaran Ramanathan, Fellow, IEEE

Abstract—Next generation wireless mesh networks (WMNs) are designed to provide better performance than other existing personal, local, and metropolitan wireless networks, such as wireless local area networks (WLANs) and WiFi. In a WMN, each link has a different amount of available bandwidth, and the bandwidth fluctuates dynamically with the wireless environment. When the available bandwidth fluctuates, Transmission Control Protocol (TCP) flows experience packet losses, packet re-ordering, and timeouts, resulting in low end-to-end throughput. To alleviate the problem of low TCP throughput, we propose a Spare-bandwidth Rate-adaptive Network Coding (SRNC) scheme. In SRNC, each gateway node forwards packets after network coding. Each intermediate node also adaptively applies network coding before forwarding packets on its outgoing links. Each mesh access node decodes the network-coded packets before forwarding them to the destinations. The key feature of SRNC is that each node adapts its network coding rate based on the available bandwidth on its outgoing links, so that the access nodes can decode the packets with higher probability without significantly affecting the cross-traffic. The effectiveness of the proposed scheme is evaluated using simulation. The simulation results show that the proposed scheme uses the available bandwidth on each link efficiently and significantly improves the end-to-end throughput of TCP flows.

Index Terms—TCP, multipath routing, network coding, multi-gigabit, bandwidth-delay product

1 INTRODUCTION

The demands for real-time applications such as video on demand (VoD), high-definition television (HDTV), and mobile Internet access have increased rapidly, and these applications will require more than 1 Gbps of bandwidth to ensure transmission quality. These applications have two characteristics: they are long-lived and bandwidth-intensive. Specifically, traffic flows in the network must reach multiple gigabits per second when people simultaneously download bandwidth-intensive content from the Internet. These applications also require low packet loss probability to sustain their Quality of Service (QoS). TCP is known to guarantee reliable and ordered packet delivery, and is commonly used for supporting real-time streaming [1], [2], [3], [4]. Current standards IEEE 802.11ac [5] and IEEE 802.11ad [6] support wireless transmission rates of more than 1 Gbps, and with the development of other wireless technologies, multi-gigabit wireless networks supporting end-to-end gigabit transmission can be expected within the next decade. Wireless networks will provide multi-gigabit data rates on each link, and the bandwidth-delay product (BDP) in the network will increase more than tenfold compared with the BDP of today's wireless networks, whose link data rates are approximately 100 Mbps or less. However, growth in link bandwidth does not translate into a proportional increase in network throughput. On the contrary, the factors

• C.-Y. Huang is with the Department of Electrical Engineering, National Central University, Zhongli, Taiwan. E-mail: [email protected].
• P. Ramanathan is with the Department of Electrical and Computer Engineering, University of Wisconsin, Madison, WI 53706, USA. E-mail: [email protected].

Manuscript received 5 May 2013; revised 13 Nov. 2014; accepted 14 Nov. 2014. Date of publication 1 Dec. 2014; date of current version 31 Aug. 2015.
For information on obtaining reprints of this article, please send e-mail to: [email protected], and reference the Digital Object Identifier below.
Digital Object Identifier no. 10.1109/TMC.2014.2375825

degrading network performance are amplified significantly as the BDP increases. Specifically, we consider a wireless mesh network (WMN), a promising approach for providing higher data rates and better performance in the future. In a WMN, wireless hosts access the Internet infrastructure over a fixed multihop wireless network of "mesh nodes" (see Fig. 1). Each wireless host is associated with one of the mesh nodes, and some of the mesh nodes (henceforth referred to as gateway nodes) have a direct connection to the Internet. In a WMN, the channel quality, as in typical wireless networks today, fluctuates due to multipath, Doppler, interference, and other fading effects. As a result, there are considerable variations in the bandwidth available to the mesh nodes. This variability in the bandwidth available for communication creates performance problems for TCP-based packet flows. For instance, when bandwidth in the WMN drops, TCP flows passing through the affected nodes experience packet re-ordering, packet losses, and timeouts. In response, the corresponding TCP sources reduce their sending rates and re-transmit the lost packets. Meanwhile, the WMN routing algorithms may also attempt to re-route the TCP flows away from the congested nodes. While these adaptations to bandwidth reductions are taking place, and/or after they have occurred, changing network conditions may increase the bandwidth at a mesh node, and the WMN must begin another round of adaptation to exploit the additional bandwidth. In this context, the contributions of this paper are twofold. First, we show that TCP flows experience significant end-to-end throughput degradation due to bandwidth fluctuations in WMNs. Note that these degradations are over and above those which occur in wireless networks due to channel quality fluctuations. We show that the problem

1536-1233 © 2014 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.


TABLE 1
Normalized TCP Throughput

          0.1 Gbps   1 Gbps   5 Gbps   8 Gbps   10 Gbps
NewReno   0.9        0.9      0.8      0.5      0.6
CTCP      0.9        0.9      0.9      0.9      0.9

gets particularly acute as the link bandwidth grows into the gigabit range. Specifically, this paper shows that the impact of bandwidth fluctuations on TCP throughput is more significant when the BDP is large. This is true not only for traditional TCP versions such as NewReno [7] but also for TCP versions specifically designed for large-BDP networks, such as Compound TCP (CTCP) [8]. Second, we propose a new network layer approach, called SRNC, which overcomes the problems caused by bandwidth fluctuations. This approach integrates the following four ideas into a fully-distributed scheme implemented at each mesh node in a WMN.

• First, it exploits digraph diversity within the WMN to take advantage of the available bandwidth on many links in the network.
• Second, each mesh node in the WMN performs inter-session network coding of TCP packets before forwarding them on the appropriate outgoing links.
• Third, each mesh node exploits excess bandwidth available on an outgoing link to introduce packet redundancy through network coding, in order to provide easy recovery from packet losses within the WMN.
• Fourth, each mesh node includes a buffer management strategy to ensure fair sharing of the limited storage and the excess bandwidth on an outgoing link among competing packet flows.

With SRNC, each mesh node sends redundant packets into the network based on the available bandwidth on its outgoing links in order to overcome packet losses during transmission. Under this condition, packets are routed to their corresponding destinations with higher probability without affecting other traffic flows. The rest of this paper is organized as follows. The motivation is described in Section 2. We review related work in Section 3. The proposed scheme is presented in Section 4. Simulation results evaluating the proposed strategy are shown in Section 5. The paper concludes with Section 6.

2 MOTIVATION

In this section, we motivate the problem to be addressed using a simple example. Consider the wireless mesh network shown in Fig. 1. The mesh has six nodes labeled A, B, C, G, H, I. Node G is the gateway node to the Internet. Wireless hosts associate themselves with one of the nodes in the mesh. In this example, we focus on wireless hosts associated with node A. We assume that there are ten such hosts and each host has a TCP-based downlink stream from the Internet. Furthermore, cross-traffic shares the bandwidth between node H and node I when it is active. Other relevant characteristics of the network are:

Fig. 1. Simple wireless mesh network topology.

• The TCP packet and acknowledgement sizes are 512 and 40 bytes, respectively.
• The link delay from the Internet to the gateway node is set to 6 ms.
• The average end-to-end Round-Trip-Time (RTT) is 12 ms.
• The bandwidth of each link in the mesh fluctuates between x and 2x; we include results for several different values of x.

Two TCP versions, NewReno [7] and CTCP [8], are considered in this example. CTCP is specifically designed for networks with large BDP. CTCP combines delay-based and loss-based components to adjust the packet sending rate. Specifically, it measures variance in RTT and uses this information to multiplicatively increase the sending rate with the objective of achieving high bandwidth utilization. Once the link bandwidth is fully utilized, the packet sending rate is gracefully reduced to avoid network congestion. Furthermore, the loss-based component strives to provide a lower bound on TCP throughput in the presence of packet losses. CTCP is implemented in existing operating systems such as Windows 7, Windows Server 2008, and Linux, and it performs well without any parameter tuning when network conditions are stable. However, in the presence of large bandwidth fluctuations, CTCP does not perform well as the data rate increases. Table 1 shows the average normalized end-to-end throughput of the ten TCP flows for x = 0.1, 1, 5, 8, 10 (Gbps) without bandwidth fluctuations. Note that CTCP achieves higher TCP throughput when the link bandwidth exceeds 5 Gbps because CTCP is designed to aggressively probe the link bandwidth. Specifically, not only packet losses but also packet delay, including queueing delay and RTT, is considered in adjusting the congestion window; therefore CTCP is better suited to delivering packets when the BDP is large, in comparison with the traditional loss-based scheme, NewReno. However, as evident in Fig.
2, both versions do not perform well as the data rate increases in the presence of congestion losses resulting from bandwidth fluctuations. Fig. 2 shows the average end-to-end throughput of the ten TCP flows for x = 0.1, 1, 5, 8, 10 (Gbps). In particular, when x = 5, 8, 10, the maxflow rate (i.e., maximum flow rate) between G and A is at least 99 percent of the link bandwidth, but the normalized end-to-end TCP throughput of both versions is below 0.5. Since the RTT is kept unchanged for different values of x, the BDP increases in proportion to x. When link bandwidth


Fig. 2. TCP throughput of NewReno and Compound TCP under different link bandwidth scales.

fluctuations occur, TCP flows experience packet losses, and as the BDP increases, the TCP congestion control scheme takes more time to adapt. Even CTCP, which is specifically designed to respond faster to congestion in high bandwidth-delay situations, does not do well in the presence of congested links resulting from bandwidth fluctuations. Furthermore, the time scales of these fluctuations cause both NewReno and CTCP to substantially underperform as the data rate increases. To sum up, we need mechanisms that support gigabit TCP throughput for each TCP flow in wireless networks. A wireless mesh network has the potential to provide multi-gigabit physical-layer data rates. However, dynamic and dramatic link bandwidth fluctuations result in low TCP throughput in multi-gigabit wireless networks. Although TCP mechanisms like CTCP are designed to enhance TCP throughput under large BDP, they are not designed to handle dynamic changes in link bandwidth. Improving gigabit TCP throughput requires additional strategies that instantly and efficiently utilize the available bandwidth.
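To see why a large BDP slows TCP's adaptation, it helps to count how many packets are "in flight" at a given rate. The helper below is an illustrative sketch; the 512-byte packet size and 12 ms RTT are taken from the example above, and the function name is ours.

```python
def bdp_packets(bandwidth_bps, rtt_s, packet_bytes=512):
    """Bandwidth-delay product expressed as the number of packets in flight."""
    return bandwidth_bps * rtt_s / (packet_bytes * 8)

# A loss event at 10 Gbps leaves ~100x more unacknowledged data outstanding
# than at 0.1 Gbps, so loss recovery takes correspondingly longer.
low = bdp_packets(0.1e9, 0.012)   # ~293 packets at 0.1 Gbps
high = bdp_packets(10e9, 0.012)   # ~29,297 packets at 10 Gbps
```

Since the RTT is held fixed in the example, the in-flight packet count (and hence the congestion window TCP must rebuild after a loss) scales linearly with x.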

3 RELATED WORK

There is an extensive body of literature on increasing channel capacity, link bandwidth, and data rate in wireless networks [9], [10], [11], [12], [13], [14], [15], [16], [17], [18], [19], [20], [21], [22], [23]. A majority of this work focuses either on spectrum sensing to identify available primary user channels or on the physical layer design needed for wireless communication over a wide range of frequency spectrum. Although these works form the basis for studies on enhancing link bandwidth, data rate, and network throughput, they are not directly comparable to our paper, since we focus on the effects of bandwidth fluctuations on transport layer performance. There are several papers which integrate network layer issues such as routing with link and physical layer issues in wireless networks. This body of work is more relevant to our paper. Some of these papers focus on traditional single-path routing with, as needed, re-routing based on feedback from nodes in the network [16], [17], [18], [19]. Others, such as [20], [24], rely on multiple-path routing to combat the effects of bandwidth fluctuations. Our paper also relies on multiple-path routing. However, unlike these existing


works, our scheme does not require any feedback information from the intermediate nodes to the TCP senders or to any other node in the network. As a result, our scheme incurs no feedback overhead and no delay for gathering and acting on feedback information. We show through simulation that our scheme performs better than schemes which re-route based on feedback information, even in the ideal case where the feedback delay is assumed to be zero. The literature on network coding is also extensive. Some papers [25], [26], [27], [28] deal with both network coding and TCP. They mainly focus on algorithm design and throughput analysis for controlling the number of redundant packets sent through one specific path to recover lost packets. The packet loss on each link is modeled as a wireless loss with probability p. In contrast, we approach the design, in response to link bandwidth fluctuations, from the viewpoint of routing packets in the network and utilizing the spare bandwidth in a distributed manner to send redundant packets for TCP throughput enhancement. In [25], the authors propose a modification to the protocol stack at the TCP sender and receiver to introduce a new network coding sublayer between the transport and network layers. The new sublayer encodes/decodes segments transmitted/received by the TCP sender/receiver. As mentioned above, modification of the protocol stack at the sender and receiver of a TCP flow is not desirable. In addition, unlike [25], in our scheme, network coding is done across multiple TCP flows. Furthermore, other nodes in the mesh network also perform inter-flow network coding to improve TCP performance. Our scheme also incorporates new ideas in buffer management and redundancy management. In [25], redundancy management is done at the TCP sender. The sender needs to collect information and estimate the spare bandwidth available along the route in order to determine how many redundant packets can be transmitted. Sundararajan et al.
[25] also propose, in [29], an indirect way to manage redundancy at the sender using the feedback (i.e., acknowledgments) inherent in some protocols (e.g., TCP), although that scheme is described for a single-hop broadcast scenario with multiple receivers. If too many redundant packets are transmitted, congestion increases and end-to-end throughput suffers. On the other hand, if too few redundant packets are transmitted, they are not effective in recovering from packet losses. In contrast, our approach does not require any single node to aggregate, or be explicitly aware of, the amount of spare bandwidth in the network; this occurs implicitly based on local information at each node. In addition to inter-flow network coding, this is another major advantage of our approach compared to the one in [25]. Similar to our approach, the schemes in [30], [31], [32] are also targeted towards wireless mesh networks. Multipath routing and network coding are used as well in order to increase throughput. Specifically, Ding and Xiao [30] mainly focus on joint routing and spectrum allocation design to maximize the number of traffic flows carried by the network at the same time, but unlike this paper, they do not address TCP throughput degradation due to link bandwidth fluctuations. The schemes in [31], [32] both take advantage of opportunistic routing for throughput improvement. Furthermore, the proposed method in [31]


Fig. 3. Wireless mesh network.

focuses on IEEE 802.11 medium access control scheduling, and the mechanism in [32] controls the number of packets transmitted with the assistance of feedback information. However, neither focuses on TCP issues, although TCP flows can benefit from the techniques in [31], [32]. Moreover, the works in [31], [32] operate at different abstraction levels than our scheme. The papers [26], [27] also use network coding to enhance TCP throughput in multihop wireless networks based on IEEE 802.11. The focus of both papers is also on optimal use of the IEEE 802.11 medium access protocol; hence they too are at a different layer of abstraction than our scheme. Several recent papers [33], [34], [35] use inter-flow network coding similar to our approach. They rely on solving an optimization problem for rate control, and finding the optimal solution requires knowledge of the available bandwidth on each link in the network, which is not needed in our design. Furthermore, they do not deal with the effects of bandwidth fluctuations on TCP. Hence, they are not directly comparable to our approach. Additionally, researchers have proposed several versions of TCP, such as FAST, HSTCP, CTCP, and CUBIC, to efficiently deliver packets in networks with large BDP [8], [36], [37], [38]. The growth parameters of the TCP congestion window (cwnd) are modified to increase cwnd more aggressively, and for cwnd reduction decisions, loss-based and delay-based approaches are applied. In loss-based approaches, e.g., HSTCP and CTCP, cwnd is decreased when packet losses occur. In delay-based approaches, e.g., FAST and CTCP, RTT variations are considered for cwnd adjustment: when delay increases, cwnd is reduced to avoid congestion in advance. Some schemes further consider fairness while updating cwnd, e.g., CUBIC and CTCP.
In this paper, we focus on network layer design to enhance the transport layer performance instead of designing new TCP mechanisms to enhance network performance.

4 SPARE-BANDWIDTH RATE-ADAPTIVE NETWORK CODING (SRNC) SCHEME

4.1 Overview

Consider a WMN with an Internet gateway node G, as shown in Fig. 3. The other mesh nodes serve as access points for the wireless hosts. Therefore, downlink traffic from the Internet to a particular wireless host enters the mesh network through node G, traverses one or more hops in the mesh network to


the corresponding access node (say node A) and then goes over the wireless link to the receiving host. In this paper, we describe the idea in terms of the actions taken by the mesh nodes in forwarding TCP packets to the wireless hosts connected to the access node A. For clarity of presentation, we refer to this set of downlink TCP flows as F_A. Furthermore, we assume that each mesh node has access to a set of channels to communicate with neighboring mesh nodes. The quality of these channels varies dynamically, which in turn changes the link bandwidth available for packet transmission. To effectively utilize the available bandwidth in response to dynamic changes in link bandwidth, we develop a methodology, SRNC, which integrates with existing communication network stacks. Specifically, it is implemented between the link and network layers. SRNC consists of four key components: digraph routing, network coding, exploiting available bandwidth, and buffer management.

4.1.1 Digraph Routing

Traditionally, packets of F_A are forwarded in the WMN over a single route. Instead, in SRNC, the routing algorithm finds a directed acyclic graph (DAG), G_A, rooted at node G and ending at node A. Packets of F_A are routed from G to A over all paths in G_A. Only the sub-graph G_A of the WMN is relevant to flows in F_A.

4.1.2 Network Coding

In traditional routing, a node simply forwards a received packet along one of its outgoing links in the routing DAG. Instead, in SRNC, each node computes a linear combination of packets in F_A and forwards the resulting packets on one of the outgoing links in the DAG. Specifically, packets are network encoded across multiple flows (i.e., inter-session network coding), in contrast to the commonly used intra-session network coding. In addition, we suggest systematic codes for encoding rather than purely random linear combinations. As a consequence, not all network-encoded packets are lost when a set of network-encoded packets is undecodable.

4.1.3 Exploiting Available Bandwidth

In a basic network coding scheme, a node forwards only as many packets in F_A as it receives. Instead, in SRNC, a node forwards as many combinations of packets as it possibly can, subject to certain constraints. These constraints ensure that the additional redundancy does not adversely affect the other traffic flows in the network. This is achieved through both scheduling and the following buffer management strategy.

4.1.4 Buffer Management

If a node i forwards as many packets as possible on its outgoing link to node j, the buffer at node j may fill up with packets from node i. A full buffer will cause packet losses to flows in F_A, thereby decreasing their performance. It is also unfair to other flows in the WMN. Therefore, each node implements a buffer management strategy which ensures that no particular set of flows can overly dominate the buffer space at any node in the WMN.
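As an illustration of the systematic inter-session coding idea, the sketch below works over GF(2), so coefficients are bits and combination is XOR; the paper does not fix a coding field, so this choice (and the function names) are assumptions for concreteness. The first transmissions are the original packets themselves, so even if the coded set turns out to be undecodable, those originals remain directly usable.

```python
import random

def xor_bytes(a, b):
    """XOR two equal-length byte strings (addition over GF(2))."""
    return bytes(x ^ y for x, y in zip(a, b))

def systematic_encode(packets, n_out, seed=0):
    """Systematic code over GF(2): emit the originals first (identity
    coefficient vectors), then random XOR combinations, n_out packets total.
    Returns a list of (coefficient_vector, payload) pairs."""
    rng = random.Random(seed)
    k = len(packets)
    out = [(tuple(int(i == j) for j in range(k)), p)
           for i, p in enumerate(packets)]
    while len(out) < n_out:
        coeffs = tuple(rng.randint(0, 1) for _ in range(k))
        if not any(coeffs):
            continue  # skip the useless all-zero combination
        combo = bytes(len(packets[0]))
        for c, p in zip(coeffs, packets):
            if c:
                combo = xor_bytes(combo, p)
        out.append((coeffs, combo))
    return out
```

Each emitted packet carries its coefficient vector, which is what downstream nodes and the access node need in order to re-combine and eventually decode.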


Fig. 4. Algorithm executed by the gateway node in SRNC.

Fig. 5. Illustration of SRNC at gateway node.

In the rest of this section, we elaborate on the four notions described above. In this elaboration, we present SRNC from the perspective of three different types of nodes: gateway node G, intermediate node i, and mesh access node A.

4.2 Description of SRNC

4.2.1 SRNC at Gateway Node G

Fig. 4 describes "Algorithm Gateway", the steps taken by node G during the time interval [kT, (k+1)T), where T is a pre-determined design parameter. Specifically, T is chosen to ensure that the number of packets waiting for network coding is approximately equal to a desired target. For example, if the network coding algorithm can reasonably combine ten packets without significant overhead, T must be large enough to ensure that approximately ten packets arrive into the queue every T time units. Note that up to T time units of delay is added to each packet, which simply delays the start of the applications running on top of TCP. As described in Fig. 4, at the start of the interval [kT, (k+1)T), G discards all packets in F_A which arrived in the interval [(k-2)T, (k-1)T), and simply buffers packets in F_A which arrive during the interval [kT, (k+1)T) for later transmission. Let n_{A,k} be the total number of packets in F_A which arrived during [(k-1)T, kT). G generates network-encoded packets by linearly combining these n_{A,k} packets, and adds a label to each encoded packet. All packets transmitted during the interval [kT, (k+1)T) carry a special label with value k, so that other nodes can identify packets that are linear combinations of the same set of packets. During the interval [kT, (k+1)T), G transmits linear combinations of the packets in F_A that arrived during the interval [(k-1)T, kT). In order to assist the buffer management policy, the first n_{A,k} packets transmitted by G during this interval are marked as "High priority", and all the rest are marked as "Low priority". As described later, other nodes give preferential treatment to "High priority" packets when their buffers are full.

Example. Consider the gateway node G shown in Fig. 5. Node G has two outgoing links into the mesh. Assume that the packets can be routed on both of these links to reach

access nodes A and B in the mesh. Several wireless hosts are connected to the access nodes A and B, and they are receiving downlink traffic from the Internet through the gateway node G. As described in Algorithm Gateway, node G buffers the packets which arrive in the time interval [(k-1)T, kT) for forwarding in the time interval [kT, (k+1)T). Suppose that node G receives only five packets, identified as a1, a2, a3, b1, and b2, during the interval [(k-1)T, kT). Packets a1, a2, and a3 are destined for wireless hosts connected to access node A. Likewise, packets b1 and b2 are destined for wireless hosts connected to access node B. Then, the actions taken by node G in [kT, (k+1)T) in forwarding these packets can be explained as follows. Suppose that the order in which the scheduler at node G considers these packets for forwarding is as shown in Fig. 5. In the traditional approach, the scheduler forwards these packets in this order when the outgoing links are available. In contrast, in SRNC, node G forwards linear combinations of the appropriate packets. More specifically, the first packet considered by the scheduler is a1. Therefore, at time t0, when an outgoing link is available to transmit a new packet, node G forwards a linear combination, f1, of a1, a2, and a3, i.e., the packets destined for the access node A. Next, at time t1, the scheduler considers packet b1. Therefore, node G forwards a linear combination, g1, of b1 and b2, i.e., the packets destined for the access node B. Similarly, at times t2, t3, and t4, the scheduler considers packets b2, a2, and a3, and then first forwards a linear combination, g2, of b1 and b2, followed by linear combinations, f2 and f3, of a1, a2, and a3. Furthermore, in the traditional approach, the scheduler would be done after transmitting the five packets. In contrast, in SRNC, node G continues forwarding packets derived from these packets until time (k+1)T. For instance, at time t5, node G goes back to the first packet considered by the scheduler, namely a1, and forwards a linear combination, f4, of a1, a2, and a3. Consequently, in this example, node G has forwarded four packets which are linear combinations of a1, a2, and a3. At most three of these packets are linearly independent, and with high probability, any three of these four packets are linearly independent. Forwarding redundant packets helps the mesh cope with


Fig. 7. Illustration of SRNC at intermediate node.

Fig. 6. Algorithm executed by the intermediate node in SRNC.

fluctuations in available bandwidth. At time (k+1)T, node G deletes packets a1, a2, a3, b1, and b2 from its buffer and switches to forwarding the packets which arrived in the interval [kT, (k+1)T). Note that all the packets forwarded by node G in the interval [kT, (k+1)T) carry the label k, so that the mesh nodes can identify the packets which can be network coded together.
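The per-interval rotation and priority marking of Algorithm Gateway can be sketched as below; the class name, the slots argument (number of transmit opportunities in the interval), and the combine callback are illustrative assumptions, not names from the paper.

```python
class GatewaySRNC:
    """Sketch of Algorithm Gateway: packets that arrived in [(k-1)T, kT)
    are repeatedly linearly combined and transmitted with label k during
    [kT, (k+1)T); the first n_{A,k} transmissions are marked high priority."""

    def __init__(self):
        self.prev = []  # arrived in the previous interval; coded now
        self.curr = []  # arriving in the current interval; coded next

    def on_arrival(self, pkt):
        self.curr.append(pkt)

    def transmissions(self, k, slots, combine):
        """One (label, priority, coded_packet) tuple per transmit opportunity."""
        out = []
        for sent in range(slots):
            if not self.prev:
                break
            prio = "high" if sent < len(self.prev) else "low"
            out.append((k, prio, combine(self.prev)))
        return out

    def end_of_interval(self):
        # Discard the old coding set and rotate in the newly arrived packets.
        self.prev, self.curr = self.curr, []
```

With three buffered packets and five transmit opportunities, the sketch emits three high-priority and two redundant low-priority packets, mirroring the f1..f4 example above.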

4.2.2 SRNC at Intermediate Node i

Fig. 6 describes "Algorithm Intermediate", the steps taken by an intermediate node i with respect to packets in F_A. It has two parts. The first part describes the actions taken when a new packet in F_A arrives. The second part describes the strategy used for transmitting packets on the outgoing links. The process relies on a self-clocking scheme with respect to labels, since packets in F_A arrive on each incoming link of intermediate node i in non-decreasing order of label values. In addition, node i implements a buffer management strategy in which preference is given to "High priority" packets when the buffer is full. When a packet with label greater than k (e.g., k+1) arrives on an incoming link l, node i knows that it will not receive any further packet with label value at most k on that incoming link. Consequently, once node i has received packets with label at least k+1 from each incoming link, it will receive no more packets with label k, and it starts processing label k. Specifically, the processed label is updated to L0 when L0 <= min{L_l : incoming links l} - 1, where L0 is the smallest unprocessed label in the buffer and L_l is the largest label among all F_A packets received on incoming link l.
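The self-clocking rule reduces to a small pure function; the function and argument names below are illustrative, not from the paper.

```python
def labels_ready(buffered_labels, largest_label_per_link):
    """Return the buffered labels that are safe to process. Label L0 may be
    processed once every incoming link has delivered some packet with label
    >= L0 + 1, i.e. L0 <= min over links l of L_l, minus 1."""
    safe = min(largest_label_per_link.values()) - 1
    return sorted(l for l in set(buffered_labels) if l <= safe)
```

For instance, with labels {3, 4, 5} buffered and per-link largest labels {Link 1: 5, Link 2: 4}, only label 3 is safe to process; labels 4 and 5 must wait until Link 2 advances.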

Moreover, the transmission strategy at an intermediate node is similar to that at the gateway node, except that the packets in F_A with the current label are used for the linear combination. If node i receives a certain number of "High priority" packets in F_A, it transmits at most that many "High priority" packets on its outgoing links.

Example. Consider a typical node i in the mesh which is on the routing DAG for all downlink traffic to wireless hosts connected through access node A. As shown in Fig. 7, at times t1 and t2, node i receives packets with label k destined for wireless hosts connected to access node A. Node i keeps buffering these packets even after it receives the first packet with label (k+1) from its incoming Link 1 at time t3. At time t6, node i has received a packet with label (k+1) on each of its incoming links, so it discards all packets with label k-1 from its buffer; it now knows that it will not receive any further packets with label k destined for wireless hosts connected to node A. Traditionally, node i simply forwards one copy of each packet it has received on an outgoing link in the routing DAG. In contrast, in SRNC, node i linearly combines all packets with label k before forwarding a packet on an outgoing link. Suppose node i has received p packets with label k destined towards node A. Traditionally, node i would forward exactly p packets on its outgoing links. In contrast, in SRNC, node i forwards as many linearly combined copies as possible until it is time to switch to label (k+1). Each of these forwarded copies also carries label k. If the available bandwidth on its outgoing links does not allow node i to transmit p copies, node i forwards fewer than p. On the other hand, if the available bandwidth on its outgoing links allows node i to transmit more than p copies, node i forwards more than p copies, even though some of them are redundant. Moreover, if node i has h high priority packets among the packets with label k (destined towards node A), it forwards at most h high priority packets with label k. If node i gets an opportunity to forward more than h packets, the remaining packets are tagged as low priority. In SRNC, mesh nodes such as node i must also perform buffer management. If a low priority packet arrives on an incoming link of node i whose buffer is full, the packet is discarded. However, if a high priority packet arrives on an incoming link of node i whose buffer is full, node i drops the most recently arrived low priority packet from its buffer to make space for the incoming high priority packet. On the other hand, if a high priority packet arrives on an incoming

HUANG AND RAMANATHAN: NETWORK LAYER SUPPORT FOR GIGABIT TCP FLOWS IN WIRELESS MESH NETWORKS

2079

Fig. 9. Evaluation mesh topology.

Fig. 8. Algorithm executed by the access node in SRNC.

link of node i whose buffer is full of high priority packets, node i simply discards this incoming packet.
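The buffer management policy above can be sketched as a small admission routine. The class and method names are hypothetical, introduced only to illustrate the drop rules stated in the text.

```python
class SRNCBuffer:
    """Sketch of SRNC buffer management at a mesh node: drop arriving
    low-priority packets when the buffer is full, and make room for an
    arriving high-priority packet by evicting the most recently arrived
    low-priority packet, if any."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.packets = []  # (packet, high_priority) in arrival order

    def admit(self, packet, high_priority):
        if len(self.packets) < self.capacity:
            self.packets.append((packet, high_priority))
            return True
        if not high_priority:
            return False  # buffer full: arriving low-priority packet is dropped
        # scan from newest to oldest for a low-priority packet to evict
        for i in range(len(self.packets) - 1, -1, -1):
            if not self.packets[i][1]:
                del self.packets[i]
                self.packets.append((packet, True))
                return True
        return False  # buffer full of high-priority packets: drop the arrival
```

For example, with capacity 2, a full buffer holding one low- and one high-priority packet rejects a further low-priority arrival but admits a high-priority one by evicting the low-priority packet.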

4.2.3 SRNC at Access Node A

Fig. 8 describes "Algorithm Access", the steps taken by mesh access node A. Node A implements a buffer management strategy similar to that of intermediate node i. Additionally, node A uses a label-based self-clocking algorithm, similar to intermediate node i, to determine which packets to decode. Node A decodes the packets with label k, where k is the smallest unprocessed label in its buffer, once each incoming link has delivered a packet in F_A with a label greater than k. If decoding is successful, the unencoded packets are forwarded to the wireless hosts; otherwise, the undecoded packets are discarded.

Example. Consider a typical access node A which is receiving network coded packets destined for wireless hosts connected to it. Similar to Fig. 7, node A buffers all packets with label k until it has received at least one packet with label (k + 1) on each of its incoming links. At that point, node A checks whether a sufficient number of linearly independent packets with label k are present in its buffer to recover the original packets. If so, node A performs the necessary decoding, reconstructs the original packets, and forwards them to the corresponding wireless hosts. If not, all undecoded packets with label k are discarded, which in turn results in the loss of the corresponding original packets at the wireless hosts.

5 EVALUATION

5.1 Overview

In this section, we present the results of our experimental evaluation. The goals of this evaluation are two-fold. The first goal is to compare the performance of the proposed SRNC scheme to that of other schemes in the literature under different scenarios. The second goal is to explore the effectiveness and fairness of SRNC. To meet the first goal, we compare SRNC's performance to the following three schemes from the literature. Note that packets are sent through multipath routing in all schemes. These three schemes also demonstrate the necessity of integrating the four components of SRNC (digraph diversity, network coding, exploiting available bandwidth, and buffer management) in order to route packets in the mesh efficiently.

- Multipath Routing (MR) [20]: Packets from the gateway node are routed to the respective access node along the multiple edges of a DAG. Since multipath routing can exploit the bandwidth available on more than one route, it is less susceptible to performance degradation than traditional single path routing when bandwidth reductions occur. However, multipath routing is more prone to packet re-ordering, especially when the available bandwidth along different routes fluctuates and is unequal. These re-orderings often trigger Fast Retransmits at the TCP sender, which in turn reduces TCP throughput.

- Packet Re-ordering (PRO) [39]: Packet re-ordering occurs when packets experience different link delays along different routing paths. Although multipath routing balances traffic loads in the network, packet re-ordering still decreases TCP throughput. To address this, packets are re-ordered before being passed to the transport layer of the receiver. When the transport layer receives consecutive packets, it increases the congestion window, and TCP throughput increases. However, packet losses that cause retransmissions still reduce TCP throughput.

- Network Coding on TCP (JK2009) [25]: Sundararajan et al. propose a network coding based mechanism that aims to improve TCP throughput by sending an appropriate number of redundant packets into the network. Specifically, the number of redundant packets is chosen according to the measured packet loss probability in the network. Lost packets can therefore be recovered with higher probability from the redundant packets, improving TCP throughput.

The performance of MR, PRO, and JK2009 is evaluated using simulations in ns-2 [40]. Furthermore, CTCP [8] is used as the TCP congestion control protocol.


5.2 Experimental Setup

In this paper, we mainly report results for the topology shown in Fig. 9, except for the experiment that investigates performance in different topologies. As shown in Fig. 9, ten TCP flows are sent from their TCP senders, through the Internet, to their destinations (i.e., wireless hosts) in the mesh network. In the mesh network, the link bandwidth between two mesh nodes is x, and it fluctuates between x and 2x. Furthermore, the link bandwidth between the access node A and each wireless host is y. Denoting the average bandwidth on each link by x̄, the maxflow rate in the mesh network is min{2x̄, 10y}. In addition, cross-traffic shares the link bandwidth between mesh nodes I and J when it is active. More specifically, x is set to 5 Gbps, the link delay from the Internet to the gateway is set to 6 ms, and the measured RTT between a TCP sender-receiver pair is approximately 12 ms on average. The long-term measured maxflow rate from G to A is 14 Gbps, while the bandwidth on each link varies independently within 5-10 Gbps according to a Pareto distribution. Moreover, 3 Gbps of cross-traffic is sent through the mesh when the cross-traffic is active. The cross-traffic is modeled as a two-state ON/OFF Markov chain with Pareto distribution.

5.3 Explore the Effect of Spare Bandwidth

The goal of this experiment is to explore end-to-end network performance in response to different amounts of spare bandwidth in the mesh. Three cases are considered: large, moderate, and tight spare bandwidth. To evaluate the impact of the different spare bandwidth situations on end-to-end TCP throughput, different TCP traffic loads are sent into the mesh network in the presence of cross-traffic. Specifically, we let y = 0.5 and 0.8 Gbps in the large spare bandwidth case (i.e., 2x̄/10 >> y), y = 1.0 and 1.2 Gbps in the moderate spare bandwidth case (i.e., 2x̄/10 > y), and y = 1.5 Gbps in the tight spare bandwidth case (i.e., 2x̄/10 ≈ y).
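The ON/OFF cross-traffic model described in the setup above can be sketched as follows. This is only an illustrative reconstruction: the Pareto shape and scale parameters and the sampling granularity are assumptions, since the paper does not report them.

```python
import random

def pareto_on_off_trace(steps, rate_gbps=3.0, shape=1.5, scale=1.0, seed=1):
    """Generate a per-time-step cross-traffic rate trace from a two-state
    ON/OFF process whose state holding times are Pareto distributed.
    During ON the source sends at rate_gbps; during OFF it is silent.
    shape/scale are illustrative assumptions, not values from the paper."""
    rng = random.Random(seed)
    trace, on = [], True
    while len(trace) < steps:
        # Pareto-distributed holding time, at least one time step
        holding = max(1, int(scale * rng.paretovariate(shape)))
        trace.extend([rate_gbps if on else 0.0] * holding)
        on = not on
    return trace[:steps]
```

A shape parameter below 2 gives the heavy-tailed holding times typical of Pareto ON/OFF traffic models, producing long bursts and long silences.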
In the large and moderate spare bandwidth cases, the bottleneck is at the link between the access node and each destination wireless host (i.e., y), and generally each mesh node is able to send sufficient redundant packets on its outgoing links. In the tight spare bandwidth case, however, the bottleneck is inside the mesh, and each mesh node has little opportunity to forward extra packets when the traffic load sent from the Internet equals the maxflow rate.

Performance Evaluation. Fig. 10 shows the results for the four schemes in five different scenarios. In each figure, the x-axis shows the link bandwidth between A and each wireless host, and the y-axis shows the average TCP throughput for that bandwidth.

Fig. 10. Average TCP throughput over different link bandwidth between wireless hosts and egress node (bps).

SRNC outperforms the other schemes in all cases by adaptively exploiting spare bandwidth in the network. Because SRNC adaptively utilizes available bandwidth to send redundant packets in the mesh, packet losses can be recovered with high probability. Furthermore, in SRNC, each mesh node adjusts the number of packets forwarded to its next hop based on the available link bandwidth, thereby controlling the redundancy introduced into the network. JK2009 also sends redundant packets, but the TCP sender requires feedback information from the network to control the amount of redundancy. Due to the delays involved in this feedback, the effectiveness of these adaptations is reduced. PRO and MR do not send redundant packets even when sufficient bandwidth is available on some links. However, since PRO resolves the packet re-ordering caused by multipath routing, it performs better than MR.

When the mesh has large spare bandwidth, 2x̄/10 >> y, SRNC reaches approximately 0.9 normalized TCP throughput while the other schemes reach only 0.7 (the normalized TCP throughput is the ratio of the aggregated TCP throughput to the maxflow rate). SRNC utilizes spare bandwidth to recover packets lost due to cross-traffic. Although JK2009 also sends extra packets to recover packet losses during transmission, its improvement in end-to-end throughput is limited. In JK2009, redundant packets are sent from the TCP source nodes; since the link bandwidth from the TCP source nodes to the mesh is much smaller than the link bandwidth within the mesh, the best TCP throughput JK2009 can reach is lower than that of SRNC.

When the mesh has moderate spare bandwidth, 2x̄/10 > y, SRNC and JK2009 obtain much higher TCP throughput than PRO and MR because the redundant packets sent by SRNC and JK2009 recover some lost packets. However, due to the fluctuations in link bandwidth and the high traffic load in the network, the throughput enhancements of SRNC and JK2009 are limited: moderate spare bandwidth allows only a certain number of extra packets on each outgoing link, so packet losses are not completely recovered. As a consequence, SRNC obtains 0.7 normalized TCP throughput by effectively utilizing spare bandwidth. JK2009 also adaptively sends redundant packets to recover packet losses, but the number of redundant packets is adjusted through feedback information. Since the network environment changes dynamically, relying on feedback still limits the throughput enhancement, and JK2009 obtains 0.5 normalized TCP throughput.
When the mesh has tight spare bandwidth, 2x̄/10 ≈ y, the mesh network becomes the bottleneck for packet transmission, and the normalized TCP throughput of every scheme is below 0.5. In SRNC, each mesh node still forwards some extra packets on its outgoing links when spare bandwidth is available, so SRNC reaches higher end-to-end throughput than the other schemes. As for JK2009, sending redundant packets also improves TCP throughput; however, the number of redundant packets in JK2009 cannot be controlled as well as in SRNC, and its normalized TCP throughput is about 0.4. PRO and MR have no adaptation scheme that responds to changing network conditions, and they do not even reach 0.3 normalized TCP throughput.

Fig. 11. Impact of diversity among the flows on the normalized TCP throughput.

Fig. 12. Bandwidth fair share among different TCP flow sets.

5.4 Explore the Impact of Diversity among All TCP Flows

The goal of this experiment is to evaluate the TCP throughput achieved at different TCP flow rates. Specifically, the link bandwidth between A and each wireless host varies from 0.2 Gbps to 2.0 Gbps, so each TCP flow reaches a different level of TCP throughput. The rest of the network setup is the same as in Section 5.2.

Performance Evaluation. Fig. 11 illustrates the normalized TCP throughput of each TCP flow. The x-axis is the link bandwidth between A and the destination of a specific TCP flow, and the y-axis is the measured TCP throughput normalized by that link bandwidth. As Fig. 11 shows, the normalized TCP throughput of each flow in SRNC is almost always above 0.5, while the other schemes show large variations in normalized TCP throughput across flows. Specifically, when the link bandwidth of a flow is greater than 1600 Mbps, the normalized TCP throughput of MR, PRO, and JK2009 is much lower than for the remaining flows. When packet losses result in a timeout, TCP enters slow start and the end-to-end throughput drops significantly. Restoring the throughput to its level before the drop takes time, especially when the flow initially had high throughput (e.g., 1600 Mbps), so the overall throughput of these flows is lower. SRNC does not suffer from this drawback because it adaptively sends redundant packets to recover most of the losses. On average, the normalized network throughput of MR, PRO, JK2009, and SRNC is 0.36, 0.41, 0.46, and 0.59, respectively. SRNC and JK2009 send additional packets to recover packet losses in advance and thus achieve better throughput than PRO and MR. Moreover, SRNC effectively utilizes the available bandwidth in the network, so the network delivers the highest TCP throughput to the wireless hosts.

5.5 Explore Fairness among Different Network Encoded TCP Flow Sets

In this experiment, we explore the sharing of spare bandwidth among multiple independent groups of TCP flows in SRNC. Among the ten TCP flows sent from the Internet, five send TCP packets through G to their corresponding destinations connected to A. The other five TCP flows send their packets to destinations associated with another access node in the mesh network. TCP flows destined towards access node A are not linearly combined with TCP flows destined towards the other access node. These two groups of encoded flows share the link bandwidth in the mesh network.

Performance Evaluation. Fig. 12 shows the corresponding results. The y-axis is the average end-to-end throughput of each TCP flow set as a function of time. In these results, the bandwidth fluctuations occur dynamically over time within the shaded region. Clearly, both sets of flows achieve approximately the same TCP throughput and share the network bandwidth equally. This further validates the claim that SRNC allows multiple competing TCP flows to co-exist in the network without adversely affecting each other's performance.

5.6 Explore Fairness among Heterogeneous TCP Flows

In this experiment, we explore the sharing of spare bandwidth with TCP cross-traffic that does not implement SRNC. As in Fig. 9, ten wireless hosts receive their own TCP packets from the Internet to A through G, and the link bandwidth between A and each wireless host is 1 Gbps. In addition, three TCP cross-traffic flows, each with a peak rate of 1 Gbps, share the link bandwidth between I and J. For clarity, we normalize the end-to-end throughput in SRNC to that in MR; a ratio greater than 1 means SRNC achieves higher throughput than MR.

Performance Evaluation. Fig. 13 shows the results when the TCP flows and the cross-traffic share the link bandwidth between I and J. The "before" ("after") bars show the average throughput of the TCP flows and the cross-traffic before (after) the link bandwidth fluctuations occur.


Fig. 13. Normalized throughput for TCP and cross-traffic.

As shown in Fig. 13, SRNC improves TCP throughput without reducing the throughput of the cross-traffic. Specifically, for the cross-traffic, the ratio of SRNC to MR throughput remains around 1 both before and after the bandwidth varies. For the TCP flows, the ratio is also around 1 before the bandwidth fluctuates, but rises to 1.9 afterwards. In other words, after the bandwidth fluctuates, SRNC improves TCP throughput by a factor of 1.9 without affecting the cross-traffic.


Fig. 15. Topologies for TCP and cross-traffic.

Moreover, consider the case where the three TCP cross-traffic flows are replaced by one UDP and five TCP cross-traffic flows. The average data rate of the UDP flow is 2 Gbps, and the peak rate of each TCP cross-traffic flow is 1 Gbps. Figs. 14a and 14b present the average TCP throughput over time in SRNC and MR. The link bandwidth changes from 10 Gbps to 5, 7.5, and 10 Gbps at times 104.5, 109.5, and 117.0 s, respectively. When the link bandwidth drops to 5 Gbps, packet losses due to traffic congestion occur at the shared link, and the throughput of both the TCP flows and the cross-traffic decreases. Since SRNC introduces redundant packets to recover lost packets when spare bandwidth is available, it obtains higher throughput. After the link bandwidth increases again, the congestion is relieved and the throughput of the TCP flows and the cross-traffic increases.

5.7 Explore the Impact of Performance in Different Topologies

In this experiment, we evaluate the performance of SRNC in the seven topologies shown in Fig. 15. In each topology, the green arrow represents ten TCP flows sent through the network and the red arrow represents three TCP cross-traffic flows sent across a link in the network. The peak rate of each TCP and cross-traffic flow is 1 Gbps, and the cross-traffic does not implement SRNC. Additionally, the link bandwidth varies between 4 and 8 Gbps in Figs. 15c, 15d, 15e, and 15f, and between 5 and 10 Gbps in the remaining topologies.

Fig. 14. Average TCP throughput with respect to bandwidth fluctuations.

Performance Evaluation. Table 2 reports the ratio of the throughput in SRNC to that in MR before and after the bandwidth fluctuates; a ratio greater than 1 means SRNC obtains higher throughput than MR. Before the link bandwidth fluctuates, the ratios for both the TCP flows and the cross-traffic are around 1. After the link bandwidth fluctuates, the ratio for the TCP flows increases while that for the cross-traffic remains approximately 1. Although packet losses occur after the bandwidth fluctuates, SRNC sends redundant packets when bandwidth is available to recover the lost packets, which enhances throughput.

TABLE 2
Ratio of SRNC over MR Throughput in Different Topologies

          (TCP, Before)  (TCP, After)  (Cross-traffic, Before)  (Cross-traffic, After)
Topo 1        0.93           1.98              1.00                     1.05
Topo 2        0.95           5.25              1.00                     1.04
Topo 3        0.95           2.15              1.00                     1.00
Topo 4        0.90           3.61              1.00                     1.09
Topo 5        0.93           9.11              1.00                     1.36
Topo 6        0.93           4.23              1.00                     1.48
Topo 7        0.93           0.72              1.00                     1.43

Furthermore, because SRNC adaptively controls the number of redundant packets based on the available link bandwidth, the cross-traffic ratio is not affected even when the TCP performance improves significantly, as in Topo 2 (ratio 5.25) and Topo 5 (ratio 9.11). In addition, Topo 5 shows that SRNC also improves the throughput when the topology simply consists of disjoint routes. The TCP throughput in Topo 7 decreases slightly because the bandwidth is so tight that the access node cannot receive a sufficient number of packets for decoding when bandwidth fluctuations occur. However, in the presence of spare bandwidth, SRNC effectively enhances TCP throughput without affecting the performance of the cross-traffic.

5.8 Explore the Impact of Wireless Packet Loss

The goal of this experiment is to investigate the effect of unrecoverable packet errors caused by wireless packet loss in the mesh. Three schemes are evaluated: PRO, JK2009, and SRNC. Note that all previous experiments considered the ideal case, in which the medium access protocol perfectly recovers all packet errors occurring on a wireless link. Since MR and PRO do not send any redundant packets to recover packets lost in transmission, they reach similar throughput; moreover, MR is unable to handle packet re-ordering and achieves lower throughput than PRO. For these reasons, we only show results for PRO, JK2009, and SRNC. In this experiment, the link bandwidth in the mesh and the link bandwidth between the access node A and each wireless host are 1 Gbps, and the rest of the mesh setup is as described in Section 5.2. Furthermore, packet losses on each link in the mesh are independent and identically distributed (i.i.d.) with probability p, so the average packet loss rate in the mesh is p_mesh = 1 − (1 − p)^4 ≈ 4p. We evaluate the performance as p varies from 0 to 2.5 percent (i.e., p_mesh from 0 to 10 percent).

Fig. 16. TCP throughput under different wireless packet loss rate in the network.

Performance Evaluation. Fig. 16 shows the TCP throughput as a function of the packet loss probability. PRO has low end-to-end TCP throughput even at small loss probabilities: it does not send any redundant packets to recover from the packet losses, so its TCP throughput drops significantly. SRNC and JK2009 benefit from sending extra packets, especially when the loss probability is low (i.e., p_mesh ≤ 1%). In addition, each mesh node in SRNC appropriately forwards redundant packets on its outgoing links, and most lost packets are recovered successfully at the access node A. Hence, the throughput of SRNC is less sensitive to the packet loss probability than that of PRO and JK2009. When 1% ≤ p_mesh ≤ 8% (i.e., 0.25% ≤ p ≤ 2%), the TCP throughput of JK2009 drops more significantly than that of SRNC. In JK2009, the amount of redundancy is not sufficient to recover all lost packets, since the redundant packets are only created at the TCP sources. In SRNC, by contrast, redundant packets are introduced and forwarded on each outgoing link whenever bandwidth is available. As a result, SRNC obtains about 600-700 Mbps of throughput, much higher than that of JK2009.
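As a quick sanity check on the loss-rate expression p_mesh = 1 − (1 − p)^4 used above (the exponent 4 reflecting four independent links in series), the exact value and the first-order approximation 4p can be compared at the largest evaluated loss rate:

```python
def mesh_loss_rate(p, hops=4):
    """Probability that a packet is lost on at least one of `hops`
    independent links, each with loss probability p."""
    return 1 - (1 - p) ** hops

# At the largest evaluated per-link loss rate, p = 2.5%:
exact = mesh_loss_rate(0.025)   # 1 - 0.975**4, about 0.0963
approx = 4 * 0.025              # first-order approximation: 0.1
```

The approximation overestimates slightly (0.1 vs. about 0.0963), which is why the paper's "p_mesh from 0 to 10 percent" range is stated as an approximation.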

6 CONCLUSIONS

Wireless networks are expected to support multi-gigabit flow rates as demand grows for bandwidth-intensive applications on mobile devices. Although wireless technologies can provide multi-gigabit link bandwidth, wireless channel quality changes dynamically, and the link bandwidth can vary over the lifetime of a typical traffic flow. Due to certain inherent protocol characteristics, these fluctuations in link bandwidth significantly lower the end-to-end throughput of TCP flows, especially gigabit TCP flows. In this paper, we propose a novel approach called SRNC to overcome the problem of low end-to-end throughput for downlink TCP flows under bandwidth fluctuations. The proposed scheme incorporates several ideas, including inter-flow network coding, multipath routing, and distributed redundancy management, to adaptively and reliably route packets from the Internet to the corresponding destinations. Notably, SRNC is executed in a distributed fashion at each mesh node without any feedback information for decision making. The proposed scheme is implemented in the network simulator ns-2 and its performance under different scenarios is compared against three conventional schemes from the literature. Simulation results show that the improvement in the end-to-end throughput of TCP flows is significant compared to these three schemes. The results also show that the performance of the proposed scheme is robust to bandwidth variations in WMNs.

ACKNOWLEDGMENTS This work was supported in part by the US National Science Foundation grants CNS 1060344 and CNS 1257482.

REFERENCES

[1] B. Wang, J. Kurose, P. Shenoy, and D. Towsley, "Multimedia streaming via TCP: An analytic performance study," ACM Trans. Multimedia Comput., Commun. Appl., vol. 4, no. 2, pp. 16:1–16:22, May 2008.
[2] B. Wang, W. Wei, Z. Guo, and D. Towsley, "Multipath live streaming via TCP: Scheme, performance and benefits," ACM Trans. Multimedia Comput., Commun. Appl., vol. 5, no. 3, pp. 25:1–25:23, Aug. 2009.
[3] R. Kuschnig, I. Kofler, and H. Hellwagner, "An evaluation of TCP-based rate-control algorithms for adaptive Internet streaming of H.264/SVC," in Proc. ACM SIGMM Conf. Multimedia Syst., 2010, pp. 157–168.
[4] M. A. Hoque, M. Siekkinen, and J. K. Nurminen, "TCP receive buffer aware wireless multimedia streaming: An energy efficient approach," in Proc. ACM Workshop Netw. Oper. Syst. Support Digit. Audio Video, 2013, pp. 13–18.
[5] IEEE Draft Standard for Information Technology - LAN/MAN - Part 11: Wireless LAN Medium Access Control and Physical Layer Specifications - Amendment: Enhancements for Very High Throughput for Operation in Bands Below 6 GHz, IEEE P802.11ac/D5.0, 2013.
[6] C. Cordeiro, D. Akhmetov, and M. Park, "IEEE 802.11ad: Introduction and performance evaluation of the first multi-Gbps WiFi technology," in Proc. ACM Int. Workshop mmWave Commun.: From Circuits to Netw., 2010, pp. 3–8.
[7] S. Floyd and T. Henderson, "The NewReno modification to TCP's fast recovery algorithm," RFC Editor, 1999.
[8] K. Tan, J. Song, Q. Zhang, and M. Sridharan, "A compound TCP approach for high-speed and long distance networks," in Proc. Int. Conf. Comput. Commun., 2006, pp. 1–12.
[9] I. F. Akyildiz, W. Y. Lee, M. C. Vuran, and S. Mohanty, "Next generation/dynamic spectrum access/cognitive radio wireless networks: A survey," Comput. Netw., vol. 50, no. 13, pp. 2127–2159, Sep. 2006.
[10] Y. R. Kondareddy and P. Agrawal, "Effect of dynamic spectrum access on transport control protocol performance," in Proc. Global Commun. Conf., 2009, pp. 1–6.
[11] L. Lai, H. E. Gamal, H. Jiang, and H. V. Poor, "Cognitive medium access: Exploration, exploitation, and competition," IEEE Trans. Mobile Comput., vol. 10, no. 2, pp. 239–253, Feb. 2011.
[12] K. R. Chowdhury, M. Di Felice, and I. F. Akyildiz, "TCP CRAHN: A transport control protocol for cognitive radio ad hoc networks," IEEE Trans. Mobile Comput., vol. 12, no. 4, pp. 790–803, Apr. 2013.
[13] M. Di Felice, K. R. Chowdhury, and L. Bononi, "Modeling and performance evaluation of transmission control protocol over cognitive radio ad hoc networks," in Proc. Int. Symp. Model. Anal. Simul. Wireless Mobile Syst., 2009, pp. 4–12.
[14] Y. Yuan, P. Bahl, R. Chandra, T. Moscibroda, and Y. Wu, "Allocating dynamic time-spectrum blocks in cognitive radio networks," in Proc. Int. Symp. Mobile Ad Hoc Netw. Comput., 2007, pp. 130–139.
[15] A. Plummer, Jr., T. Wu, and S. Biswas, "A localized and distributed channel assignment framework for cognitive radio networks," in Proc. Int. Workshop Cogn. Wireless Netw., 2007, pp. 1–7.
[16] G. Cheng, W. Liu, Y. Li, and W. Cheng, "Joint on-demand routing and spectrum assignment in cognitive radio networks," in Proc. Int. Conf. Commun., 2007, pp. 6499–6503.
[17] C. Xin and C.-C. Shen, "Reliable routing in programmable radio wireless networks," in Proc. Workshop Netw. Technol. Softw. Defined Radio Netw., 2006, pp. 84–92.
[18] A. C. Talay and D. T. Altilar, "RACON: A routing protocol for mobile cognitive radio networks," in Proc. Workshop Cogn. Radio Netw., 2009, pp. 73–78.
[19] S. Krishnamurthy, M. Thoppian, S. Venkatesan, and R. Prakash, "Control channel based MAC-layer configuration, routing and situation awareness for cognitive radio networks," in Proc. Mil. Commun. Conf., 2005, pp. 455–460.
[20] X. Wang, T. T. Kwon, and Y. Choi, "A multipath routing and spectrum access (MRSA) framework for cognitive radio systems in multi-radio mesh networks," in Proc. Workshop Cogn. Radio Netw., 2009, pp. 55–60.
[21] L. Geng, Y.-C. Liang, and F. Chin, "Network coding for wireless ad hoc cognitive radio networks," in Proc. Int. Symp. Pers., Indoor Mobile Radio Commun., 2007, pp. 1–5.
[22] X. Chen, Z. Zhao, H. Zhang, T. Jiang, and D. Grace, "Inter-cluster connection in cognitive wireless mesh networks based on intelligent network coding," in Proc. Int. Symp. Pers., Indoor Mobile Radio Commun., 2009, pp. 1251–1256.
[23] H. Su and X. Zhang, "Modeling throughput gain of network coding in multi-channel multi-radio wireless ad hoc networks," IEEE J. Sel. Areas Commun., vol. 27, no. 5, pp. 593–605, Jun. 2009.
[24] D. Passos and C. V. N. Albuquerque, "A joint approach to routing metrics and rate adaptation in wireless mesh networks," IEEE/ACM Trans. Netw., vol. 20, no. 4, pp. 999–1009, Aug. 2012.
[25] J. K. Sundararajan, D. Shah, M. Medard, M. Mitzenmacher, and J. Barros, "Network coding meets TCP," in Proc. Int. Conf. Comput. Commun., Apr. 2009, pp. 450–458.
[26] P. Samuel David and A. Kumar, "Network coding for TCP throughput enhancement over a multi-hop wireless network," in Proc. 3rd Int. Conf. Commun. Syst. Softw. Middleware, 2008, pp. 224–233.
[27] Y. Huang, M. Ghaderi, D. F. Towsley, and W. Gong, "TCP performance in coded wireless mesh networks," in Proc. Commun. Soc. Conf. Sensor, Mesh Ad Hoc Commun. Netw., 2008, pp. 179–187.
[28] M. Kim, M. Medard, and J. A. Barros, "Modeling network coded TCP throughput: A simple model and its validation," in Proc. Int. ICST Conf. Perform. Eval. Methodologies Tools, 2011, pp. 131–140.
[29] J. K. Sundararajan, D. Shah, and M. Medard, "Feedback-based online network coding," CoRR, vol. abs/0904.1730, 2009.
[30] Y. Ding and L. Xiao, "Video on-demand streaming in cognitive wireless mesh networks," IEEE Trans. Mobile Comput., vol. 12, no. 3, pp. 412–423, Mar. 2013.
[31] B. Radunovic, C. Gkantsidis, P. B. Key, and P. Rodriguez, "Toward practical opportunistic routing with intra-session network coding for mesh networks," IEEE/ACM Trans. Netw., vol. 18, no. 2, pp. 420–433, Apr. 2010.
[32] D. Koutsonikolas, C.-C. Wang, and Y. Hu, "Efficient network-coding-based opportunistic routing through cumulative coded acknowledgments," IEEE/ACM Trans. Netw., vol. 19, no. 5, pp. 1368–1381, Oct. 2011.
[33] A. Khreishah, C.-C. Wang, and N. B. Shroff, "Cross-layer optimization for wireless multihop networks with pairwise intersession network coding," IEEE J. Sel. Areas Commun., vol. 27, no. 5, pp. 606–621, Jun. 2009.
[34] Y. Kim and G. de Veciana, "Is rate adaptation beneficial for inter-session network coding?" IEEE J. Sel. Areas Commun., vol. 27, no. 5, pp. 635–646, Jul. 2009.
[35] M. Kim, M. Medard, U.-M. O'Reilly, and D. Traskov, "An evolutionary approach to inter-session network coding," in Proc. Int. Conf. Comput. Commun., 2009, pp. 450–458.
[36] D. X. Wei, C. Jin, S. H. Low, and S. Hegde, "FAST TCP: Motivation, architecture, algorithms, performance," IEEE/ACM Trans. Netw., vol. 14, no. 6, pp. 1246–1259, Dec. 2006.
[37] S. Floyd, "HighSpeed TCP for large congestion windows," RFC 3649, Dec. 2003.
[38] S. Ha, I. Rhee, and L. Xu, "CUBIC: A new TCP-friendly high-speed TCP variant," ACM SIGOPS Oper. Syst. Rev., vol. 42, no. 5, pp. 64–74, Jul. 2008.
[39] K.-C. Leung, V. O. K. Li, and D. Yang, "An overview of packet reordering in Transmission Control Protocol (TCP): Problems, solutions, and challenges," IEEE Trans. Parallel Distrib. Syst., vol. 18, no. 4, pp. 522–535, Apr. 2007.
[40] The Network Simulator ns-2. [Online]. Available: http://isi.edu/nsnam/ns/, 2009.


Chin-Ya Huang received the BS degree in electrical engineering from the National Central University, Zhongli, Taiwan, in 2004, and the MS degree in communication engineering from the National Chiao-Tung University, Hsinchu, Taiwan, in 2006. She also received the MS degrees in both electrical engineering and computer science, and the PhD degree in electrical and computer engineering, in 2008, 2010, and 2012, respectively, from the University of Wisconsin, Madison, WI. She is currently with the Department of Electrical Engineering, National Central University, Zhongli, Taiwan, as an assistant professor. She was a member of technical staff at Optimum Semiconductor Technologies Inc. (OST), Tarrytown, NY, from 2013 to 2015. At OST, she designed 4G and 5G DSP algorithms for SDR-enabled chipsets and worked on the development of video stabilization mechanisms for OST's new GPU. In 2013, she joined National Chiao-Tung University, Hsinchu, Taiwan, as a research fellow and participated in the project SDN-enabled Cloud-based Wireless/Broadband Network Technologies & Services. She also worked as a senior engineer at Qualcomm Inc., San Diego, on RF signal processing design and development in 2012. She is a member of the IEEE.


Parameswaran Ramanathan received the BTech degree from the Indian Institute of Technology, Bombay, India, in 1984, and the MSE and PhD degrees from the University of Michigan, Ann Arbor, MI, in 1986 and 1989, respectively. Since 1989, he has been a faculty member in the Department of Electrical and Computer Engineering, University of Wisconsin, Madison, WI, where he served as Department Chair from 2005 to 2009. He leads research projects in the areas of sensor networks and next generation cellular technology. From 1997 to 1998, he took a sabbatical leave to visit research groups at AT&T Laboratories and Telcordia Technologies. He was also a Visiting Professor at the Kanwal Rekhi School of Information Technology, Indian Institute of Technology, Bombay, India, in 2004. His research interests include wireless and wireline networking, real-time systems, fault-tolerant computing, and distributed systems. He has served as an Associate Editor for the IEEE Transactions on Mobile Computing, the IEEE Transactions on Parallel and Distributed Computing (1996-1999), and the Elsevier Ad Hoc Networks Journal (2002-2005). He has also served on program committees of conferences such as Mobicom, Mobihoc, the International Conference on Distributed Systems and Networks, Distributed Computing Systems, the Fault-Tolerant Computing Symposium, the Real-Time Systems Symposium, the Conference on Local Computer Networks, and the International Conference on Engineering Complex Computer Systems. He was the Finance and Registration Chair for the Fault-Tolerant Computing Symposium (1999). He was the program co-chair of the Workshop on Dependability Issues in Wireless Ad Hoc Networks and Sensor Networks, the Workshop on Sensor Networks and Applications (2003), Broadband Wireless (2004), and the Workshop on Architectures for Real-Time Applications (1994), and the program vice-chair for the International Workshop on Parallel and Distributed Real-Time Systems (1996). He was the General Chair of Mobicom 2011. In 2009, he was elevated to Fellow of the IEEE for his contributions to real-time systems and networks.
