H-TCP Implementation in ns-3 (extended version)

Amir Modarresi∗, Siddharth Gangadhar∗, Truc Anh N. Nguyen∗, and James P.G. Sterbenz∗†§

∗ Information and Telecommunication Technology Center, Department of Electrical Engineering and Computer Science, The University of Kansas, Lawrence, KS 66045, USA
† School of Computing and Communications (SCC) and InfoLab21, Lancaster University, LA1 4WA, UK
§ Department of Computing, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong
{amodarresi, siddharth, annguyen, jpgs}@ittc.ku.edu
www.ittc.ku.edu/resilinets

ABSTRACT
Along with the evolution of the Internet, high bandwidth-delay product (BDP) network environments are becoming more common for data transfer. However, TCP, the most popular transport protocol in the Internet, is not able to fully utilize the available resources on these connections. Conventional TCP has been shown to react conservatively, especially in the presence of packet losses. Multiple variants have been proposed to address this issue, and Hamilton TCP (H-TCP) is one such variant. H-TCP implements a loss-based congestion control algorithm that modifies the congestion avoidance and fast retransmit/recovery states to allow the protocol to achieve better throughput in high BDP environments. In this paper, we present our implementation of the H-TCP protocol in the open source network simulator ns-3 and validate the model. We also perform multiple experiments with H-TCP, comparing it against existing ns-3 TCP algorithms in varying scenarios.
Categories and Subject Descriptors
I.6 [Simulation and Modeling]: General, Model Development, Model Validation and Analysis; C.2.2 [Computer-Communication Networks]: Applications—Transport protocol

General Terms
Implementation, Simulation, Analysis, Verification

Keywords
TCP, NewReno, Westwood, HighSpeed, Hamilton, congestion control, ns-3 network simulator, performance evaluation

1. INTRODUCTION

The TCP congestion control algorithm consists of three main modes: slow start, congestion avoidance, and fast retransmit/recovery. During slow start, TCP increases its congestion window exponentially until the window size reaches the predefined slow start threshold (ssthresh), upon which it switches to the congestion avoidance state. In this state, TCP continues to increase the size of the congestion window, albeit at a lower rate. When three duplicate acknowledgements (ACKs) are received, TCP assumes the presence of congestion, halves the sending rate, and enters the fast retransmit mode. If a new ACK arrives, TCP returns to the congestion avoidance state. However, upon an expiration of the retransmission timer, TCP changes to slow start and the congestion window is reset.

The conservative AIMD (Additive Increase Multiplicative Decrease) behavior is amplified in a high bandwidth network, as it takes a considerable amount of time for standard TCP flows to fill up the pipe. In order to improve this poor behavior, many variants have been designed and made part of the Linux kernel. Additional mechanisms have been included, such as extra additive factors in the additive increase (AI) phase, to improve link utilization. Each variant relies on a specialized congestion control algorithm that uses network parameters such as RTT to deduce the optimal sending rate. Existing variants include TCP Hybla [3], Scalable TCP [6], and HighSpeed TCP [4]. Much work has focused on allowing these algorithms not only to use the large pipes, but also to behave fairly in a shared environment, particularly with NewReno, which is still widely used in the Internet. H-TCP [7][8] is one such loss-based algorithm intended for high BDP environments.

The objective of this paper is to present our implementation of H-TCP in ns-3, an open source network simulator [1], and to validate it against the results in the original paper [7]. In addition to implementation and validation, we perform multiple experiments that compare H-TCP to other variants in varying scenarios that include both congestion and corruption. The implementation, validation, and experiments have been carried out in the development version of ns-3 (ns-3-dev), available between versions 3.24 and 3.25, which has gone through a major overhaul of the TCP framework.

The rest of this paper is organized as follows: In Section 2, we review related work. We present the implementation of the H-TCP protocol in ns-3 in Section 3. We proceed to validate our implementation in Section 4. We perform multiple experiments comparing our implementation with the currently existing TCP variants and analyze their performance in Section 5. Finally, we conclude this paper in Section 6.
2. BACKGROUND AND RELATED WORK
In this section, we review some of the TCP variants currently implemented in ns-3, including TCP NewReno, TCP HighSpeed, TCP Hybla and TCP Westwood(+).
2.1 TCP NewReno
TCP NewReno [5] uses three duplicate ACKs to trigger the fast retransmit/recovery phases; however, it uses a modified version of the algorithm described in RFC 2581 [2]. NewReno introduces a new state variable, recover, and modifies the processing of acknowledgments. If a sender that is not in the fast recovery phase receives three duplicate ACKs, it sets ssthresh to the value calculated by the following formula:
ssthresh = max(FlightSize / 2, 2 × MSS)    (1)
where FlightSize is the amount of unacknowledged data in flight. NewReno also keeps the highest transmitted sequence number in the variable recover. When NewReno enters the fast retransmit phase, it retransmits the lost segment and sets cwnd to ssthresh plus 3 × MSS. This artificially inflates the window size: the 3 × MSS accounts for the packets that have been received and buffered at the destination but are out of order. In the fast retransmit phase, each additional duplicate ACK adds one MSS to cwnd, and if cwnd is less than the receiver window, the sender transmits a new segment. When a new ACK arrives, the sender returns to the congestion avoidance phase. This new ACK either acknowledges all packets transmitted up to the sequence number stored in recover, or only some of them; in the latter case it is called a partial ACK. When NewReno returns to the congestion avoidance phase, it sets cwnd to min(ssthresh, FlightSize + MSS); this is called "deflating" the window. If a partial ACK arrives, the sender retransmits the first unacknowledged segment, deflates the congestion window by the amount of newly acknowledged data, then adds one MSS and, if the window size allows, transmits new segments. A sketch of this window arithmetic is given below.
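To make the fast retransmit/recovery arithmetic concrete, the following is a minimal C++ sketch of the NewReno rules above (Equation (1), window inflation, and deflation on full and partial ACKs). The structure and names are illustrative, not taken from any particular implementation.

#include <algorithm>
#include <cstdint>

struct NewRenoState {
  uint32_t cwnd;      // congestion window in bytes
  uint32_t ssthresh;  // slow start threshold in bytes
  uint32_t mss;       // maximum segment size in bytes
  uint32_t recover;   // highest sequence number sent when loss was detected
};

// Entered on the third duplicate ACK: Equation (1) plus window inflation.
void EnterFastRetransmit (NewRenoState &s, uint32_t flightSize,
                          uint32_t highestSentSeq) {
  s.ssthresh = std::max (flightSize / 2, 2 * s.mss);  // Equation (1)
  s.recover = highestSentSeq;
  s.cwnd = s.ssthresh + 3 * s.mss;  // inflate by the three buffered segments
}

// Each additional duplicate ACK inflates the window by one MSS.
void OnDupAck (NewRenoState &s) { s.cwnd += s.mss; }

// A full ACK (covering `recover`) deflates the window and ends recovery;
// a partial ACK deflates by the newly ACKed data and adds back one MSS.
void OnAck (NewRenoState &s, uint32_t ackedSeq, uint32_t newlyAckedBytes,
            uint32_t flightSize) {
  if (ackedSeq >= s.recover) {
    s.cwnd = std::min (s.ssthresh, flightSize + s.mss);  // "deflating"
  } else {
    s.cwnd = s.cwnd - newlyAckedBytes + s.mss;  // partial ACK
  }
}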
2.2 HighSpeed TCP
HighSpeed TCP [4] is a variant of standard TCP designed to support connections with the large congestion windows that commonly occur in high bandwidth-delay product networks. In standard TCP, the average congestion window size in segments is proportional to the inverse of the square root of the packet error probability, approximately 1.2/sqrt(p) [4]. This means that a very low error rate, sustained in steady state for a long time, is needed to properly utilize a high bandwidth link. The HighSpeed TCP response function is designed to be activated only for connections with a large window size; therefore, there is no need to change the functionality of standard TCP, and TCP's behavior remains intact in networks with heavy congestion. To achieve this, HighSpeed TCP defines three parameters, namely Low_Window, High_Window, and High_P. When the congestion window is in the Low_Window range, the protocol works as standard TCP, corresponding to a high packet drop rate in the range of 10^-3. When the congestion window is in the High_Window range, HighSpeed TCP can utilize a high bandwidth network; in such conditions, the packet drop rate, indicated by High_P, is smaller than in the previous condition. To utilize a 10 Gb/s network, High_P must be around 10^-7, and the congestion window size can be calculated from 0.12/p^0.835 [4]. When HighSpeed TCP receives an ACK in the congestion avoidance state, it increases the window size by a(cwnd)/cwnd. Upon receiving three duplicate ACKs, it reduces the congestion window to (1 − b(cwnd)) × cwnd, where cwnd is the current congestion window size and a(cwnd) and b(cwnd) are functions of the current window size. cwnd is compared with the Low_Window and High_Window sizes to return the correct values from these functions. In standard TCP, a(cwnd) and b(cwnd) are equal to 1 and 0.5, respectively. A sketch of this window response is given below.
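The following C++ sketch shows the shape of the HighSpeed window response, with a(w) and b(w) as table lookups in the spirit of ns-3's TcpHighSpeed (TableLookupA/TableLookupB in Figure 1). The two-entry table here is a deliberately simplified assumption; RFC 3649 specifies a full table (Low_Window is 38 segments, and the high-window values shown are placeholders).

#include <cstdint>

// Illustrative stand-ins for the RFC 3649 lookup tables.
static double LookupA (uint32_t wndSegments) {
  return wndSegments <= 38 ? 1.0 : 8.0;   // a(w) grows with w above Low_Window
}
static double LookupB (uint32_t wndSegments) {
  return wndSegments <= 38 ? 0.5 : 0.2;   // b(w) shrinks with w above Low_Window
}

// Per ACK in congestion avoidance: w += a(w)/w (window in segments).
double OnAckHighSpeed (double wndSegments) {
  return wndSegments + LookupA ((uint32_t) wndSegments) / wndSegments;
}

// On three duplicate ACKs: w is reduced to (1 - b(w)) * w.
double OnLossHighSpeed (double wndSegments) {
  return wndSegments * (1.0 - LookupB ((uint32_t) wndSegments));
}

With the standard-TCP values a = 1 and b = 0.5, these two functions reduce exactly to NewReno's additive increase and halving.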
2.3 TCP Hybla
Standard TCP suffers from low performance on long-delay links such as satellite and wireless paths: the long delay keeps the congestion window small and causes low throughput on these kinds of links. TCP Hybla [3] has been proposed to work on links with long delays. TCP switches from the slow start state to the congestion avoidance state when the congestion window reaches the slow start threshold (ssthresh). However, for a particular ssthresh and varying RTTs, the switching time between these two states differs, which causes lower throughput for links with higher RTT. TCP Hybla is designed to detach the growth of the congestion window from the RTT by computing a normalized RTT for each flow. In this way, all TCP flows with a particular ssthresh can switch between slow start and congestion avoidance at the same time, with more aggressive growth in slow start for flows with longer RTT. Consequently, a flow with high delay using TCP Hybla achieves higher throughput than the same flow using standard TCP. A sketch of the normalized window update is given below.
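The following C++ sketch shows Hybla's normalized update rules from [3]: the normalization factor rho = RTT/RTT0 detaches the growth rate from the actual RTT. RTT0 = 25 ms is the reference round-trip time used in the Hybla paper; the structure and names here are illustrative.

#include <algorithm>
#include <cmath>

struct HyblaState {
  double cwnd;      // congestion window, in segments
  double ssthresh;  // slow start threshold, in segments
  double rho;       // normalization factor, >= 1
};

void UpdateRho (HyblaState &s, double rttSeconds, double rtt0Seconds = 0.025) {
  s.rho = std::max (rttSeconds / rtt0Seconds, 1.0);  // never below standard TCP
}

// Per-ACK window increment.
void OnAckHybla (HyblaState &s) {
  if (s.cwnd < s.ssthresh) {
    s.cwnd += std::pow (2.0, s.rho) - 1.0;  // slow start: W += 2^rho - 1
  } else {
    s.cwnd += s.rho * s.rho / s.cwnd;       // congestion avoidance: W += rho^2/W
  }
}

With rho = 1 (RTT equal to the reference RTT), both branches reduce to standard TCP's growth, which is the compatibility property described above.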
2.4 TCP Westwood(+)
TCP Westwood [9] is based on end-to-end bandwidth estimation in both wired and wireless networks and is compatible with standard TCP. Since it estimates the end-to-end bandwidth, it can discriminate between packet losses caused by congestion and those caused by a noisy channel. This estimate is used to calculate cwnd and ssthresh after receiving three duplicate ACKs or a timeout: instead of halving cwnd, the protocol sets cwnd and ssthresh based on the effective bandwidth at the time of congestion. TCP Westwood keeps the slow start and congestion avoidance states intact, while it changes fast recovery by calculating cwnd and ssthresh from the bandwidth estimate. Westwood+ seeks to improve on the Westwood algorithm, particularly in the presence of congestion, by estimating the bandwidth once per RTT interval rather than on every received ACK as Westwood does. This directly affects the estimated bandwidth, especially on congested links where ACK compression generally occurs. In the presence of ACK compression, Westwood tends to overestimate the bandwidth due to the reception of bursts of ACKs at the sender, whereas Westwood+ provides a much better estimate because its calculations occur at RTT intervals. A sketch of the per-RTT estimator is given below.
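The following C++ sketch illustrates the per-RTT sampling and smoothing idea behind Westwood+, and how the estimate sets ssthresh after three duplicate ACKs. The filter coefficient and all names are illustrative assumptions, not the exact filter of [9].

struct WestwoodPlusState {
  double bwEstimate;       // filtered bandwidth estimate, bytes/s
  double ackedBytesInRtt;  // bytes ACKed during the current RTT interval
};

// Called once per RTT (not per ACK), which avoids ACK-compression bias.
void OnRttSample (WestwoodPlusState &s, double rttSeconds) {
  double sample = s.ackedBytesInRtt / rttSeconds;  // raw bandwidth sample
  s.ackedBytesInRtt = 0.0;
  // An exponentially weighted moving average smooths transient bursts;
  // 0.9 is an assumed coefficient for illustration.
  const double alpha = 0.9;
  s.bwEstimate = alpha * s.bwEstimate + (1.0 - alpha) * sample;
}

// After three duplicate ACKs: ssthresh = BWE * RTTmin (in bytes),
// instead of halving cwnd as standard TCP does.
double SsthreshOnLoss (const WestwoodPlusState &s, double rttMinSeconds) {
  return s.bwEstimate * rttMinSeconds;
}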
3. H-TCP IMPLEMENTATION
We implement the H-TCP protocol in ns-3, working on the development version that contains the new TCP framework. In this section, we first explain the TCP class interactions in ns-3, followed by the H-TCP architecture.
3.1 TCP Class Interaction in ns-3
TCP functionality in ns-3 is provided by a set of classes interacting with each other. TCP sockets interact with the TCP protocol implementation to pass data segments to the IP module [1]; the TCP protocol classes are located between the socket and the IP layer. After ns-3.24, a major overhaul was performed on the structure of and relationships among these modules. A new class, TcpSocketState, was introduced to keep track of the common attributes and states of sockets, such as the congestion window size and the slow start threshold; these attributes had been defined in TcpSocketBase in older versions. Another class, TcpCongestionOps, was designed as a base class for the congestion control operations. Class TcpNewReno, which itself inherits from TcpCongestionOps, was set as the parent class of the other TCP variants, including TCP HighSpeed, Hybla, and Westwood. Figure 1 illustrates the relationships among some of the main classes responsible for TCP functionality in ns-3.

• TcpSocketBase: This class implements a stream socket between the TCP protocol and the application layer. Connection-oriented functionality and sliding window flow control are handled by this class. It inherits from the abstract class TcpSocket. Some of its main attributes are m_rWnd, m_maxWinSize, and m_minRto.

• TcpSocket: This abstract class defines some of the essential attributes of a TCP socket, such as SndBufSize and RcvBufSize, which define the buffer sizes for sending and receiving, and SegmentSize.

• TcpCongestionOps: This newly introduced class, inspired by Linux kernel v4.0, provides the common congestion control functions that were removed from TcpSocketBase. The methods defined in this class are GetSsThresh, which sets the slow start threshold after a packet loss; IncreaseWindow, which implements the congestion avoidance algorithm; and PktsAcked, which is called whenever an ACK is received and keeps timing information.

• TcpSocketState: This class defines a data structure that keeps track of a connection's congestion state. m_cWnd, m_ssThresh, m_segmentSize, and other similar attributes are stored in an instance of this class for each connection.

• TcpNewReno: This class was introduced in the new framework and implements the standard TCP processes, including slow start and congestion avoidance. Currently, the other TCP variants inherit from this class.

• TcpHeader: This class implements the TCP header fields, such as the port numbers and the sequence and ACK numbers.
• TcpTxBuffer: This class implements the send buffer of the TCP protocol and keeps the data sent by the application layer until the corresponding ACKs arrive.

• TcpRxBuffer: This class implements the receive buffer, which is responsible for packet reordering and related tasks.

• TcpL4Protocol: This class provides an interface between a TCP socket and the lower-layer endpoint. Ipv4EndPointDemux (and its IPv6 counterpart) is one of the main network-layer classes called by this class. Packet checksumming is another of its responsibilities.

• RttHistory: This is a helper class, written in tcp-socket-base.cc, that stores RTT measurements. It keeps the sequence number, the number of bytes, and the transmission time of each sent packet.

As observed in Figure 1, and in contrast to previous versions, all current TCP variants in ns-3, including TCP HighSpeed, Hybla, and Westwood, are children of the TcpNewReno class. The H-TCP model presented in this paper follows the same relationship and inherits from TcpNewReno; a minimal subclass skeleton is sketched below.
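The following skeleton (a sketch, not the actual H-TCP sources) shows how a congestion control variant plugs into the post-3.24 framework by inheriting from TcpNewReno and overriding the three TcpCongestionOps hooks described above. The variant name TcpMyVariant is hypothetical; the bodies shown are placeholders.

#include <algorithm>
#include "ns3/tcp-congestion-ops.h"   // TcpCongestionOps, TcpNewReno
#include "ns3/tcp-socket-base.h"      // TcpSocketState

using namespace ns3;

class TcpMyVariant : public TcpNewReno   // hypothetical variant name
{
public:
  static TypeId GetTypeId (void)
  {
    static TypeId tid = TypeId ("ns3::TcpMyVariant")
      .SetParent<TcpNewReno> ()
      .AddConstructor<TcpMyVariant> ();
    return tid;
  }

  virtual std::string GetName () const { return "TcpMyVariant"; }

  // Called on every ACK with the measured RTT: the place to keep
  // timing state (e.g., RTT min/max, per-epoch throughput).
  virtual void PktsAcked (Ptr<TcpSocketState> tcb, uint32_t segmentsAcked,
                          const Time &rtt)
  {
    // variant-specific bookkeeping goes here
  }

  // Additive increase rule; this placeholder delegates to NewReno.
  virtual void IncreaseWindow (Ptr<TcpSocketState> tcb, uint32_t segmentsAcked)
  {
    TcpNewReno::IncreaseWindow (tcb, segmentsAcked);
  }

  // New ssthresh after a loss; here the standard halving of Equation (1).
  virtual uint32_t GetSsThresh (Ptr<const TcpSocketState> tcb,
                                uint32_t bytesInFlight)
  {
    return std::max (2 * tcb->m_segmentSize, bytesInFlight / 2);
  }

  virtual Ptr<TcpCongestionOps> Fork ()
  {
    return CopyObject<TcpMyVariant> (this);
  }
};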
3.2 H-TCP Implementation in ns-3
TCP variants focus on two main factors to increase the throughput of flows in networks with a high bandwidth-delay product. One approach is to increase the congestion window more aggressively in the additive increase phase by a factor α, where α can be a function of different characteristics in each variant; HighSpeed TCP, for example, uses this approach. Another approach is to decrease the congestion window during the backoff process by β × cwnd, where β can likewise be a function of different attributes in each variant. The overall result is more aggressive increments and less conservative decrements. The success of these solutions depends on how accurately α and β reflect different network conditions.

H-TCP [7][8] is another variant designed for networks with a high bandwidth-delay product. It is compatible with standard TCP, while it can switch to a more aggressive behavior in high bandwidth networks. H-TCP tracks the elapsed time since the last congestion event and calculates the factor α as a function of that time: the longer the flow goes without congestion, the larger α becomes. This value increases the size of the congestion window every time an ACK is received; therefore, the protocol can fill the pipe faster than standard TCP, resulting in higher throughput and link utilization. H-TCP does not change the slow start process. However, α is used in the congestion avoidance process to adjust the congestion window size. If Delta is the elapsed time in seconds since the last congestion event for a specific flow, then α can be calculated as a function of Delta, f_alpha(Delta). Moreover, in order to be compatible with standard TCP, the following condition is used to switch between standard TCP and the enhanced mode provided by H-TCP [8]:

    if Delta <= Delta_L:
        alpha = 1
    else:
        alpha = f_alpha(Delta)
[Figure 1: TCP class interaction — class diagram showing TcpSocket, TcpSocketBase, TcpL4Protocol, TcpSocketFactory, the TCP option and header classes, the Tx/Rx buffer classes, and TcpCongestionOps with TcpNewReno and its subclasses TcpHighSpeed, TcpHybla, TcpWestwood, and HTcp.]

In the above condition, f_alpha(Delta) is calculated as follows:
modified from ns-3’s default parameters. The packet MTU size across our experiments are fixed at 1500B. Across our experiments, we start the initial flow at 4s. We set the router 2 queue size to be BDP across all scenarios. The rest of the f alpha(Delta) = 1+10(Delta−Delta L)+0.5(Delta−Delta L) parameters are scenario specific and are tabulated for each (2) scenario. where Delta L = 1. Setting Delta L = 1 results the congestion period as a function of congestion window [8]. The backoff coefficent β is calculated based on maximum Source Sink and minimum RTT since the last congestion and throughput IP of the flow from the following relation: Bi (k + 1) − Bi (k) Access link Bottleneck link 0.5 | | B i (k) βi (k + 1) = RTTmin,i otherwise RTTmax,i Figure 2: A network topology with one router where Bi is the throughput of source i before the congestion and k denotes the congestion period. HTCP is an inheritance of TcpNewReno. Since the slow start process is the same as standard TCP, we do not define any new method for slow start. Moreover,HTcp::Congestion Avoidance is defined to modify m cWnd based on H-TCP protocol. HTcp::CongestionAvoidance uses AlphaFunction IP IP method to calculate the correct value of α based on EquaBottleneck link tion (2). Htcp::PktsAcked is used to keep RTTmin , RTTmax and throughput for each congestion period. We use HTcp:: GetSsthresh to adjust ssthresh value based β formula.
4. VALIDATION
In order to validate our implementation, we use the two topologies illustrated in Figure 2 and Figure 3. The first topology consists of a single sender and receiver connected through a router. Our second topology is the popular dumbbell topology, with multiple senders and receivers separated by a pair of routers. To generate traffic, we use the bulk send application, a preferred traffic application type for TCP experiments. In all of our experiments, traffic generation is unidirectional, from the sender to the receiver. The TCP socket buffer sizes, along with the TCP delayed ACK and its timeout, are unmodified from ns-3's default parameters. The packet MTU size across our experiments is fixed at 1500 B. We start the initial flow at 4 s, and we set the router queue size to the BDP across all scenarios. The rest of the parameters are scenario specific and are tabulated for each scenario. A sketch of the basic simulation script is given after Figure 3.

[Figure 2: A network topology with one router]
[Figure 3: Dumbbell network topology]
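The following is a minimal sketch of the single-router scenario of Figure 2 under the Table 1 parameters. The helper calls are standard ns-3, but this is our assumption of one way to build the scenario, not the authors' exact script; in particular, the TypeId name "ns3::HTcp" follows the class name in Figure 1 and should be adjusted to whatever the installed model registers.

#include "ns3/core-module.h"
#include "ns3/network-module.h"
#include "ns3/internet-module.h"
#include "ns3/point-to-point-module.h"
#include "ns3/applications-module.h"

using namespace ns3;

int main (int argc, char *argv[])
{
  // Select H-TCP for all sockets (TypeId name assumed, see above).
  Config::SetDefault ("ns3::TcpL4Protocol::SocketType",
                      StringValue ("ns3::HTcp"));

  NodeContainer nodes;
  nodes.Create (3);  // sender -- router -- sink

  PointToPointHelper access, bottleneck;
  access.SetDeviceAttribute ("DataRate", StringValue ("10Mbps"));
  access.SetChannelAttribute ("Delay", StringValue ("0.1ms"));
  bottleneck.SetDeviceAttribute ("DataRate", StringValue ("8Mbps"));
  bottleneck.SetChannelAttribute ("Delay", StringValue ("45ms"));

  NetDeviceContainer d1 = access.Install (nodes.Get (0), nodes.Get (1));
  NetDeviceContainer d2 = bottleneck.Install (nodes.Get (1), nodes.Get (2));

  InternetStackHelper stack;
  stack.Install (nodes);
  Ipv4AddressHelper addr;
  addr.SetBase ("10.1.1.0", "255.255.255.0");
  addr.Assign (d1);
  addr.SetBase ("10.1.2.0", "255.255.255.0");
  Ipv4InterfaceContainer i2 = addr.Assign (d2);
  Ipv4GlobalRoutingHelper::PopulateRoutingTables ();

  // Bulk send application, starting the flow at 4 s.
  uint16_t port = 9;
  BulkSendHelper source ("ns3::TcpSocketFactory",
                         InetSocketAddress (i2.GetAddress (1), port));
  source.SetAttribute ("MaxBytes", UintegerValue (0));  // unlimited data
  ApplicationContainer app = source.Install (nodes.Get (0));
  app.Start (Seconds (4.0));
  app.Stop (Seconds (20.0));

  PacketSinkHelper sink ("ns3::TcpSocketFactory",
                         InetSocketAddress (Ipv4Address::GetAny (), port));
  ApplicationContainer sinkApp = sink.Install (nodes.Get (2));
  sinkApp.Start (Seconds (0.0));

  Simulator::Stop (Seconds (20.0));
  Simulator::Run ();
  Simulator::Destroy ();
  return 0;
}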
4.1 Slow Start and Fast Recovery
As part of the validation process, we verify all states of the protocol in varying corruption and congestion scenarios and explain its behavior using TcpNewReno as the base for comparison. In this subsection, we focus on the slow start and fast recovery phases of H-TCP and TcpNewReno. To achieve this, we use the single-router topology shown in Figure 2. The bandwidths used on both links are 10 Mb/s and no errors occur in this scenario. The rest of the parameters are given in Table 1.
Parameter                          Value
Access link bandwidth              10 Mb/s
Bottleneck link bandwidth          8 Mb/s
Access link propagation delay      0.1 ms
Bottleneck link propagation delay  45 ms
Packet MTU size                    1500 B
Error model                        none
Error rate                         0
Application type                   Bulk send application
Simulation time                    20 s
Queue size                         BDP

Table 1: Slow start and fast recovery simulation parameters

As described in the protocol algorithm, H-TCP has the same behavior as TCP NewReno in the slow start and fast recovery phases. This is clearly shown in Figure 4, where the congestion window trends of both protocols overlap each other. This exact correlation validates the slow start and fast recovery phases of our implementation. Furthermore, since the network condition is stable and no errors are imposed on the simulation, a short run time such as 20 seconds is enough to confirm this behavior.
[Figure 4: Slow start and fast recovery validation — congestion window [Mb] vs. time [s]]
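The congestion window traces plotted in Figures 4 through 7 can be collected with ns-3's trace sources. The following is a minimal sketch of one convenient way to do so (an assumption on our part, not necessarily the authors' script): it connects to the CongestionWindow trace source of the first TCP socket on node 0, which must be done after the socket exists.

#include <fstream>
#include "ns3/core-module.h"

using namespace ns3;

static std::ofstream g_cwndFile ("cwnd.dat");

// Callback signature of the CongestionWindow traced value: (old, new).
static void
CwndTracer (uint32_t oldCwnd, uint32_t newCwnd)
{
  g_cwndFile << Simulator::Now ().GetSeconds () << " " << newCwnd << std::endl;
}

static void
ConnectCwndTrace (void)
{
  Config::ConnectWithoutContext (
      "/NodeList/0/$ns3::TcpL4Protocol/SocketList/0/CongestionWindow",
      MakeCallback (&CwndTracer));
}

// Schedule shortly after the flow starts, once the socket has been created:
//   Simulator::Schedule (Seconds (4.001), &ConnectCwndTrace);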
4.2 Validation in the Presence of Congestion
We now proceed to validate the congestion avoidance state by simulating a scenario that exercises all states of the algorithm. We use all of the parameters of the previous scenario except for the bottleneck link bandwidth, which is changed to 8 Mb/s to force the protocols to activate their congestion avoidance process; the rest of the parameters are retained from Table 1. According to Equation (2) and its related conditions, α is either set to 1 or computed from the quadratic expression of Equation (2). When a certain amount of time has elapsed since the last congestion event, we should therefore observe a quadratic curve in the congestion avoidance portion of the congestion window plot, as presented in the original paper [7]. The quadratic term pushes the window to grow at a higher speed, thus allowing the protocol to reach greater rates, as shown in Figure 5. The steeper peak observed for H-TCP is due to the extra adder component in the congestion avoidance phase that allows the protocol to operate in high speed environments.
[Figure 5: Validation of all operational states — congestion window [Mb] vs. time [s] for HTcp and TcpNewReno]
4.3 Effects of Corruption in Addition to Congestion
As an additional experiment for our validation, we inject errors into the network to investigate the behavior of H-TCP and NewReno when corruption is added to the congestion caused by a bottleneck link. We use exactly the same parameters provided in Table 1, with the bottleneck bandwidth changed to 8 Mb/s and a packet error rate of 10^-4 installed using the ReceiveErrorModel at the bottleneck link; a sketch of this configuration is given below. The congestion windows for both protocols are shown in Figure 6.

From Figure 6, we notice a more irregular pattern in both protocols when compared to the previous scenario. This is attributed to the random drops caused by our error model. We also notice that, at any given time, H-TCP performs better than NewReno due to the additional component added to the congestion window during congestion avoidance. Furthermore, H-TCP's window does not drop very far in the presence of congestion. This is attributed to the β value, which serves as the coefficient for the multiplicative decrease: for this particular simulation, β does not exceed 0.55, a value close to the static 0.5 parameter used by NewReno. H-TCP also recovers faster than NewReno due to its aggressiveness. However, this behavior has a side effect, observed at the end of the simulation through congestion window reset events. Since H-TCP sends segments faster than NewReno at any particular time, the number of bytes in flight with H-TCP is larger than with NewReno. Therefore, when an error occurs in the network, the probability that segments of an H-TCP flow are affected is higher than for a similar NewReno flow, causing H-TCP to return to slow start in many instances. In order to confirm this behavior, we compare H-TCP with TCP HighSpeed, a very aggressive protocol implemented in the current version of ns-3. We observe that HighSpeed returns to slow start more frequently than H-TCP when errors are present in the network. This behavior is illustrated in Figure 7.
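A sketch of the receive-side error model configuration follows; it extends the script sketched in Section 4, where d2 is the NetDeviceContainer of the bottleneck link. The calls are standard ns-3; attaching the model to device index 1 (the receiver side) is our assumption of the intended setup.

  // Drop packets uniformly at rate 1e-4 on reception at the bottleneck.
  Ptr<RateErrorModel> em = CreateObject<RateErrorModel> ();
  em->SetUnit (RateErrorModel::ERROR_UNIT_PACKET);
  em->SetAttribute ("ErrorRate", DoubleValue (1e-4));
  d2.Get (1)->SetAttribute ("ReceiveErrorModel", PointerValue (em));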
[Figure 6: H-TCP and NewReno comparison — congestion window [Mb] vs. time [s]]
[Figure 7: H-TCP and HSTCP comparison — congestion window [Mb] vs. time [s] for HTcp and TcpHighSpeed]
4.4 Effects of Multiple Flows
As a final step to complete the validation of our H-TCP model, we compare its performance with the H-TCP results in the original paper [7], particularly the trends shown in its Figure 10. An exact reproduction is not possible, because not all of the necessary parameters can be deduced from the paper. We consider two H-TCP flows operating in the dumbbell topology of Figure 3. The first flow starts at 4 seconds and the second flow starts 10 seconds later. The rest of the parameters are presented in Table 2.

Parameter                          Value
Access link bandwidth              10 Mb/s
Bottleneck link bandwidth          8 Mb/s
Access link propagation delay      0.1 ms
Bottleneck link propagation delay  45 ms
Packet MTU size                    1500 B
Error model                        none
Error rate                         0
Application type                   Bulk send application
Simulation time                    150 s
Queue size                         BDP
Number of flows                    2

Table 2: Simulation parameters for validation

The instantaneous throughput over the simulation time is plotted for both flows in Figure 8. We notice a very similar trend when compared to the congestion window plot in the original paper. In both plots, the first flow initially tries to fill up the pipe and operates very close to the bottleneck bandwidth. Once the second flow starts at 14 seconds, the amount of data sent by the first flow gradually comes down, with the second flow grabbing a larger share of the bandwidth. Ultimately, both flows reach a stable state, each with its own fair share of the bandwidth. H-TCP is considered to have a high intra-protocol fairness index, and this is displayed clearly in both plots. This behavior thus validates our plot against the original paper, albeit under different conditions.
[Figure 8: Validation against the original H-TCP paper — throughput [b/s] vs. time [s] for two H-TCP flows]
5. EXPERIMENTS
In this section, we perform experiments to evaluate H-TCP performance and compare it with the protocols currently available in ns-3, namely NewReno, Hybla, HighSpeed, and Westwood+. We include scenarios covering corruption, congestion, and varying BDP network environments in our analysis.
5.1 Effects of Corruption Only
As part of our initial set of experiments, we compare H-TCP with the existing variants in ns-3 in a lossy environment. To achieve this, we take advantage of the existing ReceiveErrorModel feature in ns-3 to configure drops once packets reach the receiver. We follow a uniform error model and, to analyze performance, vary the average error rate from 10^-4 to 10^-2, a common range that encompasses TCP's working region for errors. We use throughput as our primary metric for comparison in this scenario. To account for the randomness of packet drops, we run each simulation 5 times and plot the mean throughput along with 95% confidence intervals as error bars. The topology used for this scenario is the earlier-mentioned single sender-receiver topology, transmitting a single flow of data across the network through a router that operates with a queue size equal to the BDP of the link. The individual simulation parameters are provided in Table 3, and a sketch of the throughput computation is given after Figure 9.

Parameter                          Value
Access link bandwidth              10 Mb/s
Bottleneck link bandwidth          10 Mb/s
Access link propagation delay      0.1 ms
Bottleneck link propagation delay  100 ms
Packet MTU size                    1500 B
Error model                        Uniform error model
Error rate                         10^-4 to 5 × 10^-2
Application type                   Bulk send application
Simulation time                    100 s
Queue size                         BDP

Table 3: Corruption only scenario simulation parameters

In this experiment, we investigate the effect of errors on the performance of the protocols when packet drops are attributed to corruption only. As observed from Figure 9, the throughput of all protocols increases as the number of drops decreases. Westwood+ outperforms the rest of the protocols, with NewReno predictably performing the worst. H-TCP, Hybla, and HSTCP follow a similar trend and closely match each other at both extremes. The poor performance of NewReno is due to the fact that all of the other protocols have some technique for estimating the BDP of the path and use this estimate in determining their sending rates.

[Figure 9: Throughput vs. PER with no congestion — throughput [Mb/s] for Westwood+, TcpHighSpeed, HTcp, TcpHybla, and TcpNewReno]
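The following is a minimal sketch of the throughput metric (our assumption of the computation, not the authors' exact script): total bytes received by the PacketSink over the run, where sinkApp is the container returned by PacketSinkHelper::Install in the script sketched in Section 4 and 100 s matches the simulation time.

  // Requires "ns3/packet-sink.h" and <iostream>.
  Ptr<PacketSink> sink = DynamicCast<PacketSink> (sinkApp.Get (0));
  double throughputMbps = sink->GetTotalRx () * 8.0 / 100.0 / 1e6;  // 100 s run
  std::cout << "Mean throughput: " << throughputMbps << " Mb/s" << std::endl;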
5.2 Effects of Change in Bottleneck Delay
Next, we study the effects of congestion along with packet errors in the network. The error rate is retained from the previous scenario, and the bottleneck link is configured at 8 Mb/s. We vary the bottleneck delay from 10 ms to 300 ms; these values were chosen to emulate various network environments such as LANs, the Internet, and interplanetary communication. It is also noted that, for the considered parameters, congestion decreases from high to none as the bottleneck delay increases. This is primarily due to the reduction in the congestion window update rate as the RTT increases, resulting in fewer packets sent over the network. The simulations are repeated 5 times due to the randomness of the errors, and the throughput is analyzed. The individual simulation parameters are provided in Table 4.

Parameter                          Value
Access link bandwidth              10 Mb/s
Bottleneck link bandwidth          8 Mb/s
Access link propagation delay      0.1 ms
Bottleneck link propagation delay  10 ms - 300 ms
Packet MTU size                    1500 B
Error model                        Uniform error model
Error rate                         0.001
Application type                   Bulk send application
Simulation time                    100 s
Queue size                         BDP

Table 4: Varying bottleneck delay simulation parameters

Figure 10 shows the performance of the considered protocols in the varying bottleneck delay environments. We see an overall decrease in throughput as the delay increases; this is primarily because the increased delay increases the gap between transmitted packets. Similar to the previous scenario, we notice that Westwood+ performs best in high congestion and corruption scenarios due to its ability to handle both packet loss and congestion. H-TCP, on the other hand, performs mediocrely in low delay cases and slightly better than the rest in high delay cases.

[Figure 10: Throughput vs. bottleneck delay — throughput [Mb/s] over bottleneck delays of 10-300 ms for Westwood+, TcpHighSpeed, HTcp, TcpHybla, and TcpNewReno]
5.3 Effects of Change in Bandwidth
As part of our final experiment, we vary the bottleneck bandwidth from 12 Mb/s to 20 Mb/s while operating at the same error rate of 10^-3, and we observe the change in throughput for the considered protocols. The experiment parameters are presented in Table 5. We repeat each simulation 5 times to take into account the randomness of the packet drops.

Parameter                          Value
Access link bandwidth              20 Mb/s
Bottleneck link bandwidth          12 - 20 Mb/s
Access link propagation delay      1 ms
Bottleneck link propagation delay  20 ms
Packet MTU size                    1500 B
Error model                        Uniform error model
Error rate                         0.001
Application type                   Bulk send application
Simulation time                    100 s
Queue size                         BDP

Table 5: Varying bandwidth simulation parameters

As observed in Figure 11, the throughput of all protocols increases with the bottleneck bandwidth, although NewReno's increase is minimal. The performance standings remain the same as in the other scenarios, with Westwood+ outperforming the rest and having a much steeper slope. This is primarily because its algorithm accounts both for corruption, by estimating the bandwidth, and for congestion, by sampling at RTT intervals. H-TCP does not seem to take full advantage of the additional bandwidth in this scenario and gains less than 2 Mb/s across the bandwidth range.

[Figure 11: Throughput vs. bottleneck bandwidth — throughput [Mb/s] over bottleneck bandwidths of 12-20 Mb/s for Westwood+, TcpHighSpeed, TcpHybla, HTcp, and TcpNewReno]
6. CONCLUSION
In this paper, we explain the implementation of the H-TCP protocol in the new ns-3 framework, introduced after version 3.24. We validate our model by examining each state of the protocol and comparing its behavior with TCP NewReno, in addition to validating it against the original paper. Furthermore, we compare the performance of the protocol with the other protocols available in ns-3. We learn from the experiments that H-TCP is able to fill up the pipe much faster than NewReno, and that it performs in the range of other high speed protocols such as HighSpeed and Hybla. It is also shown to coexist well with itself in similar network environments.
7. ACKNOWLEDGMENT
We would like to acknowledge the assistance of the members of the ResiliNets research group for their advice and suggestions which helped us with this implementation.
8. REFERENCES
[1] The ns-3 Network Simulator. http://www.nsnam.org, July 2009.

[2] M. Allman, V. Paxson, and W. Stevens. TCP Congestion Control. RFC 2581 (Proposed Standard), Apr. 1999. Obsoleted by RFC 5681, updated by RFC 3390.
[3] C. Caini and R. Firrincieli. TCP Hybla: a TCP enhancement for heterogeneous networks. International Journal of Satellite Communications and Networking, 22(5):547-566, 2004.

[4] S. Floyd. HighSpeed TCP for Large Congestion Windows. RFC 3649 (Experimental), Dec. 2003.

[5] T. Henderson, S. Floyd, A. Gurtov, and Y. Nishida. The NewReno Modification to TCP's Fast Recovery Algorithm. RFC 6582 (Standards Track), Apr. 2012. Obsoletes RFC 3782.

[6] T. Kelly. Scalable TCP: Improving performance in highspeed wide area networks. SIGCOMM Comput. Commun. Rev., 33(2):83-91, Apr. 2003.

[7] D. Leith and R. Shorten. H-TCP protocol for high-speed long distance networks. In Proc. PFLDnet, 2004.

[8] D. Leith and R. Shorten. H-TCP: TCP congestion control for high bandwidth-delay product paths. Online, June 2007.

[9] S. Mascolo, C. Casetti, M. Gerla, M. Y. Sanadidi, and R. Wang. TCP Westwood: Bandwidth estimation for enhanced transport over wireless links. In Proceedings of the 7th Annual International Conference on Mobile Computing and Networking, MobiCom '01, pages 287-297, New York, NY, USA, 2001. ACM.