
Progress in Various TCP Variants: Issues, Enhancements and Solutions

B. Qureshi, M. Othman, Member, IEEE, and N. A. W. Hamid

Abstract—The Transmission Control Protocol (TCP), a basic communication protocol, consists of a set of rules that control communication. There are many versions of TCP, modified from time to time as the need arose. We first discuss the basic functions of TCP and their role in controlling congestion. TCP flow control is illustrated by the sliding window structure; we then graphically examine slow start, congestion avoidance, fast retransmission and fast recovery. The key issues and enhancement solutions are described in three categories: link layer, split connection and end-to-end. The link layer solutions, which provide a slower but reliable wireless link, are described as either TCP-aware (Snoop) or TCP-unaware (Transport Unaware Link Improvement Protocol, TULIP, and Delayed Duplicate Acknowledgements). Split connection solutions for heterogeneous networks can distinguish errors on wireless links; the main split approaches are Indirect TCP, Mobile TCP and the Mobile End Transport Protocol. This paper compares the performance of different TCP variants and their end-to-end solutions, specifically Tahoe, Reno, New Reno, Westwood, Selective Acknowledgment (SACK), Forward Acknowledgement (FACK) and Vegas. The TCP Vegas algorithms are explained through its new retransmission mechanism, new congestion avoidance and modified slow start mechanisms. Subsequently, a table is derived to evaluate the TCP variants on the basis of their algorithms. We discuss the progress and evaluate the advantages and disadvantages of the above TCP variants. The paper finally concludes that TCP Vegas is better than all the other TCP variants.

Index Terms—Slow start, Congestion Avoidance, Fast retransmission, Fast recovery

I. INTRODUCTION

TCP is the most dominant transport protocol in use globally and is implemented in current data networks. TCP ensures that each transmitted packet is received correctly by the destination, but some packets are lost due to network congestion. Congestion control is therefore the main problem to solve in order to reduce packet losses and increase throughput on both wired and wireless links. Network congestion also decreases TCP efficiency, so resolving it is the key issue for stability. For this purpose, TCP enhancements for wireless links are divided into three categories: link layer, end-to-end and split connection. The purpose of this review is to highlight TCP performance enhancement, its issues and solutions, and subsequently to compare different TCP algorithms: TCP Tahoe, Reno, New Reno, Westwood, SACK, FACK and Vegas.

During the last three decades many schemes have been developed to control congestion, and different researchers have compared TCP versions to assess their performance improvements in terms of algorithms. In this regard the author of [1] compared the performance of TCP Reno and TCP Vegas. That paper emphasizes that, due to the use of round-trip time measurements, the window dynamics of TCP Vegas are much more stable than those of TCP Reno, resulting in a more efficient utilization of network resources. TCP Reno discriminates against users with long propagation delays, whereas TCP Vegas fairly shares the available bandwidth between users, independently of their propagation delays. In [2] the authors compared Tahoe, Reno and SACK TCP by simulation and explored the benefits of adding SACK and selective repeat to TCP. Without SACK, Reno TCP has performance problems when multiple packets are dropped from a window of data: it must wait for the retransmission timer to expire before reinitiating the data flow. In the absence of SACK, both the Reno and New Reno senders can retransmit at most one dropped packet per round trip time, even if the sender could otherwise recover from multiple drops in a window of data without waiting for a retransmit timeout. The Forward Acknowledgment (FACK) congestion control algorithm developed in [3] addresses many performance problems in the Internet. FACK is designed on the basis of first principles of congestion control and the TCP SACK option. When congestion control is decoupled from other algorithms such as data recovery, it achieves more exact control over the data flow in the network. In [3] two algorithms are designed to improve performance, and simulation is used to compare FACK with Reno and with Reno plus SACK, demonstrating the impact and potential performance of FACK in the Internet.

The remainder of this review is organized in five further sections: Section 2 covers background and related work on the TCP sliding window structure, Section 3 covers network congestion and its algorithms, Section 4 describes TCP key issues and enhancement solutions, Section 5 presents a performance evaluation of the TCP variants, including a table which summarizes the TCP variants on the basis of their algorithms, and Section 6 concludes the paper.

Manuscript received October 9, 2009. B. Qureshi is a PhD candidate in the Department of Communication Technology and Network, Faculty of Computer Science and Information Technology, University Putra Malaysia (phone: +603-89466556; fax: +603-89466576; e-mail: [email protected]). M. Othman and N. A. W. Hamid are with the Department of Communication Technology and Network, Faculty of Computer Science and Information Technology, University Putra Malaysia (e-mail: [email protected], [email protected]).

II. BACKGROUND AND RELATED WORKS

A. Sliding Window Structure

A sliding window scheme is employed by TCP to achieve flow control. The window size is used to control the number of bytes in transit. When a window full of data is in transit, TCP must stop transmitting further segments and wait for an acknowledgement from the receiver. Once the acknowledgement arrives, TCP can transmit new bytes, not exceeding the number of bytes acknowledged. The concept of the sliding window is demonstrated in Fig. 1, see [4]. The figure shows a six-byte window, which allows a maximum of six bytes to be in transit; a minimal sketch of this bookkeeping is given below.
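To make the window bookkeeping concrete, the following sketch (not from the paper; the class and method names are our own) tracks the sender-side state behind Fig. 1: which bytes are acknowledged, which are in flight, and how many new bytes the window still permits.

```python
class SlidingWindowSender:
    """Minimal model of TCP's byte-oriented sliding window (illustrative only)."""

    def __init__(self, window_size):
        self.window_size = window_size  # bytes allowed in transit (e.g. 6 in Fig. 1)
        self.snd_una = 0                # oldest unacknowledged byte
        self.snd_nxt = 0                # next byte to be sent

    def can_send(self):
        # Number of new bytes the window still permits.
        return self.window_size - (self.snd_nxt - self.snd_una)

    def send(self, nbytes):
        # Send at most what the window allows.
        nbytes = min(nbytes, self.can_send())
        self.snd_nxt += nbytes
        return nbytes

    def ack(self, ack_byte):
        # Cumulative ACK: everything below ack_byte is acknowledged,
        # so the left edge of the window slides to the right.
        self.snd_una = max(self.snd_una, ack_byte)


# Rough walk-through of the situation in Fig. 1.
s = SlidingWindowSender(window_size=6)
s.send(6)            # bytes 0..5 in transit, window full
s.ack(3)             # bytes 0..2 acknowledged; the window slides
print(s.can_send())  # -> 3 new bytes may now be sent
```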

Fig. 1. TCP sliding window [4]

The operation consists of the following steps.

Step 1. Initially, bytes 0, 1 and 2 have already been transmitted and acknowledged by the receiver. Bytes 3, 4 and 5 have also been sent, and the sender is waiting for their ACK. Since the window size is 6, bytes 6, 7 and 8 are allowed to be transmitted, but byte 9 and above cannot be sent because of the window size limitation.

Step 2. TCP has sent bytes 6, 7 and 8 and is waiting for ACKs for all segments in its current window. A window full of data is in transit, so no more data can be sent at this stage.

Step 3. The ACKs for bytes 3 and 4 have been received. The sliding window slides by two to the right, making bytes 9 and 10 eligible to be sent.

Step 4. Finally, TCP sends bytes 9 and 10 and again starts waiting for ACKs.

III. TCP NETWORK CONGESTION

When a link carries heavy traffic, it may slow down the network response time and also increase queuing delay, packet loss and connection blocking; this state is caused by network congestion. To resolve network congestion, TCP implements a set of mechanisms collectively called congestion control, described below.

A. Congestion Control

The basic role of congestion control is to adjust the transmission window of the sender in such a manner that buffer overflow is prevented not only at the receiver but also at the intermediate routers. To achieve this, TCP uses another window control parameter called the congestion window (cwnd). TCP maintains the congestion window as an estimate of the number of segments that can be injected into the network without causing congestion (a segment is any TCP data packet, acknowledgement packet, or both). The challenge is the utilization of the available buffer space in network routers. Routers do not participate at the TCP layer and cannot use TCP ACK segments to adjust the window. To resolve this problem, TCP assumes network congestion whenever a retransmission timer expires, and it reacts to network congestion by adjusting the congestion window using three algorithms, namely slow start, congestion avoidance and multiplicative decrease.

B. Slow Start

In [5], Fig. 2 demonstrates the exponential growth of the congestion window. The slow start state begins with a small window size that increases as ACKs arrive; this is the rule behind the slow start mechanism.

Fig. 2. Congestion window increasing [5]

The initial value of cwnd is set between 1 and 4 packets at the beginning of this state. The receiver maintains an advertised window (rwnd) which indicates the maximum number of bytes it can accept. The value of rwnd is sent back to the sender with each packet going back. The amount of outstanding data, wnd, is limited by the minimum of cwnd


and rwnd. New packets are only sent if allowed by both the congestion window and the receiver's advertised window, as expressed in Equation (1):

wnd = min(rwnd, cwnd)                                            (1)

In the slow start phase, each received ACK increases the congestion window by one segment (cwnd = cwnd + 1). This phase is used when new connections are established and after retransmissions due to timeouts. Slow start thus increases cwnd exponentially, adding one packet each time the sender receives an ACK, and it controls the window size until cwnd reaches a threshold called the slow start threshold (ssthresh).

C. Congestion Avoidance

When cwnd reaches ssthresh, the congestion avoidance state begins. In the congestion avoidance phase the congestion window is increased by one packet per round trip time, so the window grows linearly. When a non-duplicate ACK is received, cwnd is increased as follows:

cwnd = cwnd + MSS * MSS / cwnd                                   (2)

Equation (2) provides an acceptable approximation to the underlying principle of increasing cwnd by one full-sized segment per RTT [6]. When a timeout occurs, ssthresh is reduced to one-half of the current window size as follows:

ssthresh = min(rwnd, cwnd) / 2                                   (3)
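As an illustration of Equations (1) to (3), the following sketch (our own simplified model, not code from the paper; cwnd and ssthresh are kept in segments for readability) shows how a sender might update its state on ACK arrivals and on a retransmission timeout.

```python
def on_ack(cwnd, ssthresh):
    """Grow cwnd (in segments) on a new ACK: exponential below ssthresh, linear above."""
    if cwnd < ssthresh:
        cwnd += 1.0            # slow start: +1 segment per ACK (doubles per RTT)
    else:
        cwnd += 1.0 / cwnd     # congestion avoidance: about +1 segment per RTT (Eq. 2)
    return cwnd

def on_timeout(cwnd, ssthresh, rwnd):
    """React to a retransmission timeout: halve the window (Eq. 3) and restart slow start."""
    ssthresh = min(rwnd, cwnd) / 2.0
    cwnd = 1.0
    return cwnd, ssthresh

def usable_window(cwnd, rwnd):
    """Equation (1): the sender may have at most min(rwnd, cwnd) outstanding."""
    return min(rwnd, cwnd)

# Example run: slow start up to ssthresh = 16, then linear growth, then a timeout.
cwnd, ssthresh, rwnd = 1.0, 16.0, 64.0
for _ in range(40):                    # a few ACK arrivals
    cwnd = on_ack(cwnd, ssthresh)
cwnd, ssthresh = on_timeout(cwnd, ssthresh, rwnd)
print(round(cwnd), round(ssthresh), usable_window(cwnd, rwnd))
```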

Fig. 3. TCP slow start and congestion avoidance phase (congestion window in segments versus round trip times, showing the exponential slow start region, the linear congestion avoidance region, ssthresh, and a timeout followed by the new ssthresh)

Fig. 3 shows the changes in the congestion window throughout the slow start and congestion avoidance phases. When a new connection starts, TCP sets cwnd to 1 packet and then increases it exponentially as 2, 4, 8 and so on. The maximum value of cwnd in the example is 20, so the new ssthresh after a timeout is set to 10. TCP works in slow start mode until the window size reaches ssthresh, here 16. The congestion avoidance phase then starts with linear cwnd growth; when cwnd reaches its maximum value (20) a timeout occurs, and the slow start phase begins again with the new ssthresh at 10.

D. Fast Retransmit and Fast Recovery

The operation of the fast retransmit and fast recovery algorithms is shown in Fig. 4; they allow TCP to detect a loss before the retransmission timer expires. When an out-of-order packet arrives at the receiver, the receiver transmits a duplicate ACK, which the sender interprets as a packet loss or a packet delay. If three or more duplicate ACKs are then received in a row, the sender concludes that the missing packet was lost. The sender retransmits what appears to be the missing packet without waiting for the coarse-grain timer to expire. The ssthresh is set to the same value as in the timeout case, Equation (3). Following the fast retransmission, fast recovery is performed until all outstanding data is recovered. The congestion window is set to three packets more than ssthresh; these additional three packets account for the packets (three) that have left the network and that the receiver has buffered. With fast retransmission and fast recovery, TCP is therefore able to avoid needless slow starts due to minor congestion events; a simplified sketch of this logic is shown after Fig. 4.

Fig. 4. TCP fast retransmit and fast recovery phase (congestion window in segments versus round trip times, showing the third duplicate ACK triggering fast retransmit, the fast recovery region, and the return to congestion avoidance on a new ACK)
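The fragment below is a rough, Reno-style illustration of the duplicate-ACK logic just described; it is our own sketch (the variable names and the helper retransmit_missing_segment are ours, not from the paper), using the usual threshold of three duplicate ACKs.

```python
DUP_ACK_THRESHOLD = 3  # the third duplicate ACK triggers fast retransmit

def on_duplicate_ack(state):
    """Count duplicate ACKs and enter fast retransmit / fast recovery at the threshold."""
    state["dup_acks"] += 1
    if state["dup_acks"] == DUP_ACK_THRESHOLD:
        state["ssthresh"] = min(state["rwnd"], state["cwnd"]) / 2   # Eq. (3)
        retransmit_missing_segment(state)                           # fast retransmit
        state["cwnd"] = state["ssthresh"] + 3                       # 3 segments have left the network
        state["in_fast_recovery"] = True

def on_new_ack(state):
    """A new (non-duplicate) ACK ends fast recovery and deflates the window."""
    state["dup_acks"] = 0
    if state["in_fast_recovery"]:
        state["cwnd"] = state["ssthresh"]    # resume congestion avoidance from ssthresh
        state["in_fast_recovery"] = False

def retransmit_missing_segment(state):
    # Placeholder: a real sender would resend the segment indicated by the duplicate ACKs.
    pass
```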

IV. TCP ENHANCEMENT AND SOLUTION

Based on this behaviour of TCP, many protocols have been developed to enhance TCP performance over wireless systems. The proposed schemes fall into three categories: link layer solutions, split connection solutions and end-to-end solutions.

A. Link Layer Solution

Among the three categories of enhancement solutions, the link layer improvement method provides a slower but reliable wireless link protocol. Its main purpose is to make the wireless link layer behave like a wired segment with respect to the higher-level protocols. Reliability, in the form of a low error rate, is achieved using forward error correction, Automatic Repeat Request (ARQ) for lost frames, or a combination of both. Link layer solutions can be either TCP-aware, such as the Snoop link layer protocol, or TCP-unaware, such as the Transport Unaware Link Improvement Protocol (TULIP) and Delayed Duplicate Acknowledgements.


Snoop Link Layer Protocol: Snoop is designed to improve TCP performance over combined wired and single-hop wireless links [7, 8]. It works as a Snoop agent at the base station, retransmitting lost segments based on duplicate ACKs and an estimate of the last-hop round trip time. The agent also suppresses duplicate acknowledgements corresponding to wireless losses before they reach the TCP sender, thereby preventing unnecessary congestion control invocations at the sender. This combination of local retransmissions, driven primarily by TCP acknowledgements, and suppression of duplicate TCP acknowledgements is the reason for classifying Snoop as a transport-aware reliable link protocol.

TULIP Protocol: The Transport Unaware Link Improvement Protocol (TULIP), presented in [9], improves the performance of TCP over lossy wireless links without competing with or modifying the transport protocol. It is an efficient link layer protocol that takes advantage of the opposing flows by piggybacking link layer ACKs onto transport layer ACKs. It maintains throughput as much as three times higher than unmodified TCP, and it does not need extra bandwidth because it is able to adapt to different versions of TCP.

Delayed Duplicate Acknowledgements: A TCP-unaware approach stated in [10], used to improve TCP performance over wireless links without taking TCP-specific action at the intermediate nodes, is Delayed Duplicate ACKs. On out-of-order delivery of packets, the receiver sends duplicate ACKs for the first two out-of-order packets; the third and subsequent duplicate ACKs are delayed for some duration, so that a TCP retransmission is not triggered at the sender. It is therefore more usable than the Snoop protocol when encryption is used.

B. Split Solution

The TCP enhancement scheme for heterogeneous networks is the split connection. It can distinguish errors on the wireless link from congestion losses by handling the wireless and wired parts of the end-to-end connection separately. In cellular networks the splitting point corresponds to the GGSN (Gateway GPRS Support Node), since the base station is not IP capable, whereas in wireless LANs it corresponds to the access point. The software agent that handles the split connection at the wireless gateway is the proxy: it sits between the sender and the terminal and splits the connection into one part between the server and the proxy and another between the proxy and the terminal. The server therefore sees a usual wired network, while changes in the wireless part are handled between the proxy and the terminal. The main split connection approaches are Indirect TCP (I-TCP), Mobile TCP (M-TCP) and the Mobile End Transport Protocol (METP).

Indirect TCP (I-TCP): I-TCP [11] is one of the best known split connection protocols dealing with wireless links. It suggests that any communication between a mobile host (MH) and a fixed host (FH) should be split into two separate interactions: one between the MH and its Mobile Support Router (MSR) over the wireless medium, and another between the MSR and the FH over the fixed network. Data sent to the wireless host is first received by the MSR. The MSR sends an acknowledgement to the FH on behalf of the MH and forwards the data to the MH on a separate connection; the MSR and the MH need not use TCP for their communication. The main drawback of I-TCP is that the end-to-end semantics of TCP acknowledgements are violated, since acknowledgements can reach the FH even before the packets reach the MH. Another inconvenience of I-TCP is that every packet incurs the overhead of going through TCP protocol processing twice at the base station, compared to just once in a non-split-connection approach.

Mobile TCP (M-TCP): The M-TCP protocol [12] is similar to I-TCP, except that it is designed to overcome the weaknesses of I-TCP. It also splits a TCP connection in two: one from the MH to a Supervisory Host (SH) and another between the SH and the FH. The sender on the FH uses unmodified TCP to send data to the SH, while the SH uses M-TCP for delivering data to the MH. When the host sends segments, the SH receives them and passes them to the MH. Unlike I-TCP, ACKs are not sent back to the sender until they have been received from the mobile host.

Mobile End Transport Protocol (METP): METP [13] is a protocol that operates directly over the link layer, eliminates the TCP/IP layer and uses smaller headers. A connection is established between the MH and the base station to overcome the performance problems indicated in [7, 8]. In this protocol the base station is the splitting point and acts as a proxy for the TCP connection, converting the packets received from the fixed network into METP. In this way the TCP and IP headers are reduced: the source and destination addresses, ports and connection-related information are removed.

C. End-to-End Solutions

These solutions modify the TCP mechanisms implemented at the sender, the receiver or intermediate routers, or optimize the parameters used by a TCP connection, to achieve good performance. There are many end-to-end enhancement schemes, including Tahoe, Reno, New Reno, Eiffel, Westwood, SACK, FACK and Vegas. The most common end-to-end enhancement schemes are described in the next section.

V. PERFORMANCE EVOLUTION OF TCP VARIANTS

The most pervasive transport protocol in use is the Transmission Control Protocol. In the earliest implementations of TCP, little was done to minimize network congestion: implementations used cumulative positive acknowledgements and the expiry of a retransmit timer to provide reliability based on a simple go-back-n model. Several succeeding versions of TCP based on congestion control and avoidance mechanisms have since been developed. In the following subsections we discuss the performance of


various TCP versions, namely Tahoe, Reno, New Reno, SACK, FACK and Vegas.

A. TCP Tahoe

TCP Tahoe, one of the TCP congestion control algorithms, described in [14], adds new mechanisms and enhances the earlier TCP implementation, including slow start, congestion avoidance and fast retransmission. The enhancements include changes to the round-trip-time estimation used to set retransmission timeout values. TCP Tahoe's fast retransmission algorithm performs best when packets are lost due to congestion; without the fast retransmit algorithm the sender would have to wait for the retransmission timer to expire. Fast retransmission can therefore save several seconds every time a packet loss occurs, and throughput is improved accordingly. The shortcoming of TCP Tahoe is that a packet loss may be detected only after the whole timeout interval. When a packet loss is detected, TCP Tahoe's performance becomes slow, and the transmission flow decreases rapidly. While fast retransmit makes Tahoe perform significantly better than a TCP implementation whose only means of loss detection is the retransmission timer, it still obtains significantly less than optimal performance on high delay-bandwidth connections because of its re-initiation of slow start (which TCP Reno addresses, as discussed in the following subsection). Also, in the case of multiple losses within a single window, it is possible that the sender will retransmit packets which have already been received [2].

B. Reno

The TCP Reno mechanism is similar to TCP Tahoe, except that it improves on Tahoe by adding a fast recovery phase, known as the fast recovery algorithm [15]. The significant improvement of TCP Reno over TCP Tahoe is that it prevents the communication path (the "pipe") from going empty after a fast retransmit, and in that way it avoids the need for slow start to refill the pipe after a packet loss. TCP Reno maintains the clocking of new data with duplicate ACKs, which makes it more beneficial than TCP Tahoe. In this way TCP can cut its throughput in half directly, without needing a slow start period to re-establish the clocking between data and ACKs. This improvement has the most noticeable effect on long delay-bandwidth connections, where the slow start period lasts longer and large windows are needed to achieve optimal throughput. When a single packet is lost from a window of data, TCP Reno handles it with the fast recovery mechanism; in contrast, when multiple packets are lost, Reno's performance is much the same as Tahoe's. If multiple packets are lost from the same window, TCP Reno is dragged out of fast recovery almost immediately and stalls, since no new packets can be sent.

The above discussion leads to the conclusion that the fast recovery mechanism introduced by TCP Reno handles multiple packet losses within a single window poorly.

C. TCP New Reno

Hoe [16] introduced an experimental version of TCP Reno known as TCP New Reno. It differs slightly from TCP Reno in the fast recovery algorithm and is more competent than Reno when multiple packet losses occur. New Reno and Reno [17] both enter fast retransmit when multiple duplicate ACKs are received, but New Reno does not leave the fast recovery phase until all data outstanding at the time it entered fast recovery has been acknowledged. This implies that in New Reno partial ACKs do not take TCP out of fast recovery; instead they are treated as an indicator that the packet immediately following the acknowledged packet in the sequence space has been lost and should be retransmitted. Therefore, when multiple packets are lost from a single window of data, New Reno can recover without a retransmission timeout, retransmitting one lost packet per round trip time until all of the lost packets from that window have been retransmitted. It exits fast recovery when all of the data that was outstanding at the moment fast recovery was initiated has been acknowledged.

The critical issue in TCP New Reno is that, although it can handle multiple packet losses in a single window, it is limited to detecting and resending at most one lost packet per round trip time. This inefficiency becomes more pronounced as the delay-bandwidth product grows. More importantly, there are situations where stalls can still occur if packets are lost in successive windows. Also, like all of the previous versions of TCP discussed above, New Reno still assumes that all lost packets are caused by congestion, and it may therefore unnecessarily cut the congestion window size when transmission errors occur.

D. TCP Westwood

TCP Westwood [18] is a modified version of TCP Reno that extends the window control and backoff processing. When a packet loss occurs and three duplicate ACKs are received, the sender uses a bandwidth estimate to properly adjust the congestion window and the slow start threshold. By backing off to cwnd and ssthresh values that are based on the estimated available bandwidth (rather than simply halving the current values, as Reno does), TCP Westwood avoids reductions of cwnd and ssthresh that would be excessive or insufficient. Westwood adopts a strategy of Additive Increase and Adaptive Decrease (AIAD) instead of Additive Increase and Multiplicative Decrease (AIMD). As a result, TCP Westwood offers both faster recovery and more effective congestion avoidance. A sketch of this backoff rule appears below.
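The following fragment sketches a Westwood-style backoff on three duplicate ACKs, in which blind halving is replaced by values derived from a bandwidth estimate. The bandwidth estimator itself is omitted and the function and variable names are ours, so this is only an illustration of the idea in [18], not the authors' code.

```python
def westwood_backoff(bwe_bytes_per_s, rtt_min_s, seg_size, cwnd, rwnd):
    """Adjust ssthresh/cwnd after 3 duplicate ACKs using a bandwidth estimate (BWE).

    bwe_bytes_per_s : estimated eligible rate derived from the ACK stream
    rtt_min_s       : smallest round trip time observed on the connection
    seg_size        : segment size in bytes; cwnd, rwnd, ssthresh are in segments
    """
    # Window (in segments) that the estimated bandwidth can sustain at the minimum RTT.
    ssthresh = (bwe_bytes_per_s * rtt_min_s) / seg_size
    ssthresh = max(ssthresh, 2)          # keep a sane floor
    if cwnd > ssthresh:
        cwnd = ssthresh                  # adaptive decrease instead of halving
    return min(cwnd, rwnd), ssthresh

# Example: 1 Mbit/s estimate, 100 ms minimum RTT, 1460-byte segments.
cwnd, ssthresh = westwood_backoff(125_000, 0.1, 1460, cwnd=40, rwnd=64)
print(round(cwnd, 1), round(ssthresh, 1))   # roughly 8.6 segments each
```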


E. TCP SACK

TCP SACK is an enhanced version of TCP Reno and New Reno with Selective Acknowledgement. In [19] the authors identified two main problems, the detection of multiple lost packets and the retransmission of more than one lost packet per round trip time, which can be solved through TCP SACK. In this protocol, packets are acknowledged selectively rather than cumulatively. The extension allows a TCP receiver to send SACK information as a set of options within the TCP header, complementary to the existing cumulative TCP ACK. The authors define new option fields that indicate the starting and ending sequence numbers of non-contiguous blocks of data held at the receiver. Depending on the other TCP options being used by the connection, a maximum of either two or three blocks of data may be reported by a single ACK. Receivers include SACK information in the TCP header only when duplicate ACKs are sent in response to the arrival of an out-of-order packet. A scoreboard structure is introduced at the sender to keep track of this SACK data, and a pipe variable is used to track the number of packets currently in transit (i.e. in the network "data pipe"). SACK performs fast retransmit just like New Reno: it enters the fast retransmit phase when a loss is detected, and it exits when all of the data that was outstanding when the phase began has been acknowledged. It should be noted that SACK still makes no attempt to distinguish between losses due to congestion and errors on the wireless link (it only reduces the effect losses have on performance). Also, with the currently proposed implementation of SACK there are still situations where stalls could occur if packets are lost in very specific patterns.

F. TCP FACK

The development of TCP SACK with Forward Acknowledgement is identified as TCP FACK. TCP FACK is almost identical to SACK but introduces a small enhancement over it: it uses the SACK option to better estimate the amount of data in transit [3]. TCP FACK also introduces a better way to halve the window when congestion is detected. When the cwnd is halved immediately, the sender stops transmitting for a while and then resumes only when enough data has left the network; this unequal distribution of segments over one RTT can be avoided when the window is decreased gradually [3]. When congestion occurs, the window should be halved according to the multiplicative decrease applied to the correct cwnd. Since the sender identifies congestion at least one RTT after it happened, if during that RTT it was in slow start mode, the current cwnd will be almost double the cwnd at the time congestion occurred. In this case, therefore, the cwnd is first halved to estimate the correct cwnd, which is then decreased further. A small sketch of the SACK/FACK bookkeeping is given below.
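As a rough illustration of the bookkeeping behind SACK and FACK (our own simplified model, not the authors' code), the following sketch records SACKed blocks in a scoreboard and derives a FACK-style estimate of the data still in flight from the highest SACKed sequence number.

```python
class SackScoreboard:
    """Track SACKed byte ranges and estimate outstanding data (illustrative only)."""

    def __init__(self):
        self.sacked = []        # list of (start, end) blocks reported by the receiver
        self.fack = 0           # highest sequence number known to have reached the receiver

    def on_sack(self, blocks):
        # Each ACK may carry up to two or three SACK blocks.
        for start, end in blocks:
            self.sacked.append((start, end))
            self.fack = max(self.fack, end)

    def awnd(self, snd_nxt, retransmitted):
        # FACK-style estimate: data between the forward-most SACKed byte and the
        # next byte to send, plus anything retransmitted, is assumed still in the pipe.
        return (snd_nxt - self.fack) + retransmitted


sb = SackScoreboard()
sb.on_sack([(3000, 4000), (5000, 6000)])             # two non-contiguous blocks reported
print(sb.fack, sb.awnd(snd_nxt=8000, retransmitted=0))  # -> 6000 2000
```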

G. TCP Vegas

A new version of TCP, named TCP Vegas, was proposed in [20] with a fundamentally different congestion avoidance scheme from TCP Reno. Vegas predicts the onset of congestion by observing the difference between the expected rate and the actual rate, and adjusts the congestion window (and hence the source sending rate) in an effort to keep a small number of packets buffered in the routers along the transmission path. A TCP Vegas sender stores the current value of the system clock for each segment it sends, so it knows the exact RTT of each sent packet. The innovations of TCP Vegas are as follows.

New Retransmission Mechanism: When a duplicate acknowledgement is received, the sender checks whether (current time − segment transmission time) > RTT. If so, the sender retransmits without waiting either for the traditional retransmission timeout or for three duplicate ACKs. This prevents the situation in which the sender never receives three duplicate ACKs and must rely on the coarse-grain timeout. In another case [21], when a non-duplicate ACK is received, Vegas checks the time interval a second time; if the time since the segment was sent is greater than the timeout value, Vegas retransmits the segment. This also catches any other segments that may have been lost prior to the retransmission, without having to wait for a duplicate ACK.

New Congestion Avoidance: On receiving an ACK, the sender calculates the difference between the expected and the actual throughput as follows:

expected = cwnd / baseRTT
actual = cwnd / (average measured RTT)
diff = (expected − actual) * baseRTT

The expected throughput represents the bandwidth available to the connection in the absence of network congestion, and the actual throughput represents the bandwidth currently used by the connection. Vegas defines two thresholds (α, β) as a tolerance that allows the source to control the difference between the expected and actual throughput over one RTT. The cwnd is increased by one packet if diff < α and decreased by one packet if diff > β, that is:

cwnd = cwnd + 1, if diff < α
cwnd = cwnd − 1, if diff > β
cwnd = cwnd, otherwise

Modified Slow Start Mechanism: To detect and avoid congestion during slow start, Vegas doubles its window only every other RTT, not every RTT as Reno does. The reason for this alteration is that when a connection starts the sender has no estimate of the available bandwidth, so during a purely exponential increase it could overshoot the available bandwidth by a large amount and cause congestion. Slow start is ended when a threshold value is reached in the difference between the current RTT and the last RTT; this is a modification compared with other TCP versions, where the boundary is set on the cwnd size.
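A compact sketch of the Vegas congestion-avoidance decision described above is given below; it is our own illustration, and the default α and β values shown are commonly cited choices rather than values taken from this paper.

```python
def vegas_cwnd_update(cwnd, base_rtt, avg_rtt, alpha=1.0, beta=3.0):
    """One Vegas congestion-avoidance step (cwnd in packets, RTTs in seconds)."""
    expected = cwnd / base_rtt              # throughput if there were no queuing
    actual = cwnd / avg_rtt                 # throughput actually achieved
    diff = (expected - actual) * base_rtt   # extra packets sitting in router queues

    if diff < alpha:
        cwnd += 1                           # network underused: grow linearly
    elif diff > beta:
        cwnd -= 1                           # queues building up: back off before loss
    return cwnd                             # otherwise leave cwnd unchanged

# Example: 20 packets in flight, base RTT 100 ms, measured RTT 120 ms.
print(vegas_cwnd_update(20, 0.100, 0.120))  # diff is about 3.3 > beta, so cwnd drops to 19
```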


H. TCP Variants Comparison

Table 1 summarizes the evaluation of the various TCP variants on the basis of their algorithms. It is derived from the related literature [3], [14], [15], [16], [17], [18], [19], [20], [21].

TABLE 1
TCP VARIANTS EVALUATION ON THE BASIS OF ALGORITHMS

Algorithm | TCP Tahoe | TCP Reno | TCP New Reno | TCP Westwood | TCP SACK | TCP FACK | TCP Vegas
SS        | Yes       | Yes      | Yes          | Yes          | Yes      | EV       | EV
CA        | Yes       | Yes      | Yes          | Yes          | Yes      | Yes      | EV
FR        | Yes       | Yes      | Yes          | Yes          | Yes      | Yes      | Yes
FRR       | No        | Yes      | EV           | EV           | EV       | EV       | Yes
RM        | N         | N        | N            | N            | N        | N        | NM
CCM       | N         | N        | N            | N            | N        | NM       | NM
SACK-M    | No        | No       | No           | Yes          | Yes      | Yes      | No

(N = Normal, EV = Enhanced Version, NM = New Mechanism; SS = Slow Start, CA = Congestion Avoidance, FR = Fast Retransmit, FRR = Fast Recovery, RM = Retransmit Mechanism, CCM = Congestion Control Mechanism, SACK-M = Selective ACK Mechanism)

VI. CONCLUSIONS

In this paper we have shown that congestion is the main issue addressed by the different variants of TCP. The table developed from the related literature summarizes the progress of the different TCP algorithms. Link layer solutions are efficient during handoff between base stations and do not interfere with end-to-end semantics, so a link layer failure does not lose data. Split connection solutions completely separate wireless losses from congestive losses, so very good throughput is obtained. End-to-end solutions provide the same performance improvements regardless of whether the connection is wired or wireless. Tahoe's performance becomes slow when a packet loss occurs, since the loss may be detected only after the whole timeout interval. Reno also behaves poorly, like Tahoe, when multiple packets are lost within a window. New Reno is limited to detecting and resending at most one lost packet per round trip time. The AIAD approach implemented by Westwood gives it fast recovery and efficient congestion avoidance. We noted that there are fundamental restrictions imposed by the lack of SACK in TCP, and examined a TCP implementation that incorporates SACK into Reno while making minimal changes to TCP's underlying congestion control algorithms, although there are still situations where stalls can occur when packets are lost. FACK introduces a better way to halve the window when congestion is detected and to estimate the correctly decreased cwnd. In TCP Vegas, we focused on innovations such as the new retransmission mechanism, the new congestion avoidance and the modified slow start. Finally, it is concluded that TCP Vegas is more stable, efficient and fair than the other TCP variants, because its window size is set to an optimal value and the extra number of packets in the network is limited to the range between α and β.

We assume that TCP Vegas will open the way for the further development of the other TCP variants.

REFERENCES

[1] T. Bonald, "Comparison of TCP Reno and TCP Vegas: efficiency and fairness," Performance Evaluation, vol. 36-37, pp. 307-332, 1999.
[2] K. Fall and S. Floyd, "Simulation-based comparisons of Tahoe, Reno, and SACK TCP," ACM Computer Communication Review, vol. 26, pp. 512, 1996.
[3] M. Mathis and J. Mahdavi, "Forward acknowledgment: refining TCP congestion control," in Proc. ACM SIGCOMM Conf. on Applications, Technologies, Architectures and Protocols for Computer Communications, New York, 1996, pp. 281-291.
[4] H. Mahbub and R. Jain, High Performance TCP/IP Networking: Concepts, Issues and Solutions. Pearson Prentice Hall, 2004, pp. 20-21.
[5] L. Dostàlek and A. Kebelovà, Understanding TCP/IP: A Clear and Comprehensive Guide to TCP/IP Protocols. Packt Publishing, Birmingham-Mumbai, 2006, pp. 258-259.
[6] M. Allman, V. Paxson and W. R. Stevens, "TCP congestion control," IETF Request for Comments (RFC 2581), Standards Track, April 1999.
[7] H. Balakrishnan, S. Seshan and R. H. Katz, "Improving reliable transport and handoff performance in cellular wireless networks," ACM Wireless Networks, vol. 1, no. 4, pp. 469-481, 1995.
[8] H. Balakrishnan, S. Seshan, E. Amir and R. H. Katz, "Improving TCP/IP performance over wireless networks," in Proc. 1st ACM Conference on Mobile Computing and Networking (MOBICOM 95), Berkeley, California, 1995.
[9] C. Parsa and J. J. Garcia-Luna-Aceves, "TULIP: A link-level protocol for improving TCP over wireless links," in Proc. IEEE Wireless Communications and Networking Conference (WCNC 99), New Orleans, Louisiana, vol. 3, 1999.
[10] N. Vaidya, M. Mehta, C. Perkins and G. Montenegro, "Delayed duplicate-acknowledgements: a proposal to improve performance of TCP on wireless links," Texas A&M University, Tech. Rep. 99-003, 1999.
[11] A. V. Bakre and B. R. Badrinath, "Implementation and performance evaluation of Indirect TCP," IEEE Transactions on Computers, vol. 46, no. 3, pp. 260-278, 1997.
[12] K. Brown and S. Singh, "M-TCP: TCP for mobile cellular networks," ACM SIGCOMM Computer Communication Review, vol. 27, no. 5, pp. 19-42, 1997.
[13] K.-Y. Wang and S. K. Tripathi, "Mobile-end transport protocol: an alternative to TCP/IP over wireless links," in Proc. IEEE INFOCOM, 1998.
[14] V. Jacobson, "Congestion avoidance and control," in Proc. ACM SIGCOMM Symposium on Applications, Technologies, Architectures and Protocols for Computer Communications, New York, 1988, pp. 314-329.
[15] V. Jacobson, "Modified TCP congestion avoidance algorithm," end2end-interest mailing list; also IEICE Transactions on Communications, Tech. Rep. E90-B(3):516-52, 2007.
[16] J. Hoe, "Improving the start-up behavior of a congestion control scheme for TCP," in Proc. ACM SIGCOMM Symposium on Applications, Technologies, Architectures and Protocols for Computer Communications, 1996, pp. 270-280.
[17] S. Floyd, T. Henderson and A. Gurtov, "The NewReno modification to TCP's fast recovery algorithm," IETF Request for Comments (RFC 3782), Standards Track, April 2004.
[18] C. Casetti, M. Gerla, S. Mascolo, M. Y. Sanadidi and R. Wang, "TCP Westwood: bandwidth estimation for enhanced transport over wireless links," Wireless Networks, vol. 8, pp. 467-479, 2002.
[19] M. Mathis, S. Floyd and A. Romanow, "TCP selective acknowledgment options," IETF Request for Comments (RFC 2018), Standards Track, October 1996.
[20] L. Brakmo, S. O'Malley and L. Peterson, "TCP Vegas: new techniques for congestion detection and avoidance," in Proc. ACM SIGCOMM Symposium on Applications, Technologies, Architectures and Protocols for Computer Communications, 1994, pp. 24-35.
[21] L. Brakmo and L. Peterson, "TCP Vegas: end to end congestion avoidance on a global Internet," IEEE Journal on Selected Areas in Communications, vol. 13, pp. 1465-1480, 1995.

B. Qureshi received the B.Sc. and M.Sc. degrees in Mathematics and Computer Science from the University of Sindh, Jamshoro, Pakistan, in 1989 and 1992, respectively. From January 1992 to November 1995 he was a System Analyst at the Sindh Development Studies Centre (SDSC), University of Sindh. In November 1995 he joined Sindh Agriculture University, Tandojam, Pakistan, as an Assistant Professor. He is now on study leave, pursuing a Ph.D. in Computer Networks at the Department of Communication Technology and Network, Faculty of Computer Science and Information Technology, University Putra Malaysia. His research work is in the area of congestion control and flow control in high-speed networks.

M. Othman received his PhD from the National University of Malaysia with distinction (Best PhD Thesis in 2000). He is a Professor in Computer Science and Deputy Dean of the Faculty of Computer Science and Information Technology, University Putra Malaysia (UPM); prior to that he was a Deputy Director of the Information Development and Communication Center (iDEC), where he was in charge of the UMPNet campus network, the uSport Wireless Communication project and the UPM DataCenter. From 2002 to 2009 he received many gold and silver medal awards at the University Research and Development Exhibitions and the Malaysia Technologies Exhibition, which are at the national level. His main research interests are in the fields of parallel and distributed algorithms, high-speed networking, network design and management (network security, wireless and traffic monitoring) and scientific computing. He is a member of the IEEE Computer Society, the Malaysian National Computer Confederation and the Malaysian Mathematical Society. He has published 120 national and international journal papers and more than 200 proceedings papers. He is also an associate researcher and coordinator of the High Speed Machine at the Laboratory of Computational Science and Informatics, Institute of Mathematical Science (INSPEM), University Putra Malaysia.

N. A. W. Hamid is a Lecturer at the Department of Communication Technology and Network, University Putra Malaysia. She received her PhD from the University of Adelaide, Australia. Her main research interests are in the fields of parallel and distributed algorithms, high performance computing and grid computing. She is also an associate researcher in High Performance Computing at the South Australian Partnership for Advanced Computing (SAPAC) at the University of Adelaide, Australia, and at the Laboratory of Computational Science and Informatics, Institute of Mathematical Science (INSPEM), University Putra Malaysia.
