
On the Efficiency and Fairness of TCP over Wired/Wireless Networks

by

Dimitrios Vardalis

Master of Science in Computer Science
State University of New York at Stony Brook
2001

The continuous growth in the number of wireless components comprising the underlying network infrastructure of the Internet, in conjunction with some fundamental differences they exhibit from wired ones, has created the need to explore TCP's behavior in compound wired/wireless environments. Many researchers have investigated the efficiency of TCP in heterogeneous environments and proposed solutions to improve it, but presently no studies on TCP's fairness in such environments are available. In this work, we address the issue of the efficiency and fairness of TCP in networks consisting of both wired and wireless components under various conditions, such as wireless error, congestion introduced by multiple competing flows, and different Round Trip Times (RTT). We base this thesis on testing two TCP versions that implement a conservative and an aggressive congestion control strategy, respectively. In order to properly evaluate protocol performance, we define a new metric for the fairness of a system with multiple flows. We also employ a handful of other metrics (e.g., overhead and goodput) that reflect recently raised requirements for wireless transport protocols. Throughout the experiments, we identify the cases where one congestion control strategy is favored over the other, and analyze the factors that lead to these results.

To Gogo, Xeni, and Gabrilo,
the best family in the world

Table of Contents

List of Figures
List of Tables
Acknowledgments
1  Introduction
   1.1  Internet evolution and TCP
      1.1.1  Congestion control
      1.1.2  The Wireless Days
   1.2  The AIMD principle
   1.3  Fairness issues in wired networks
      1.3.1  Variant propagation delay
      1.3.2  Asymmetry and head-start
      1.3.3  Packet dropping policy
      1.3.4  Large number of flows
      1.3.5  MAIMD
   1.4  Thesis description
   1.5  Presentation plan
2  TCP overview
   2.1  General protocol characteristics
   2.2  TCP Tahoe
   2.3  TCP Reno
3  Testing environment
   3.1  Simulated conditions
   3.2  Data transfers
   3.3  Randomness in the environment
4  Testing methodology and parameters of significance
   4.1  Aggressive vs. conservative strategies
   4.2  Testing stages
      4.2.1  Low RTT-one flow (LR1)
      4.2.2  Low RTT-two flows (LR2)
      4.2.3  Low RTT-three flows (LR3)
      4.2.4  High RTT-one flow (HR1)
      4.2.5  High RTT-two flows (HR2)
   4.3  Testing parameters
      4.3.1  Error phase duration
      4.3.2  Data size
      4.3.3  Number of repetitions
   4.4  Performance metrics
      4.4.1  Throughput
      4.4.2  Goodput
      4.4.3  Bandwidth Utilization
      4.4.4  Overhead
      4.4.5  Fairness
   4.5  Capturing fairness
5  Bandwidth Utilization
6  Low RTT, two flows
   6.1  Same protocol
   6.2  Different protocol
7  Low RTT, three flows
   7.1  Same protocol
   7.2  Different protocol
8  High RTT, two flows
   8.1  TCP in a high-RTT environment
   8.2  Fairness in a high-RTT environment
9  Conclusions
10 Open issues
   10.1  Future work
   10.2  Energy concerns
11 References

List of Figures

Figure 1: Throughput as a function of load
Figure 2: Throughput achieved by one, two and three Tahoe flows
Figure 3: Time to complete transfer for one and two flows of the same protocol
Figure 4: Fairness index for two Tahoes, two Renos and one Tahoe one Reno
Figure 5: Time for a Reno and a Tahoe competing flows
Figure 6: Throughput achieved by a Tahoe and a Reno competing flows
Figure 7: Overhead introduced by Tahoe and Reno when running in two competing flows
Figure 8: Time for a single Tahoe and a single Reno flow, and for three Tahoe and three Reno flows
Figure 9: Average goodput of three Tahoe and three Reno competing flows
Figure 10: Fairness index for three Tahoe and three Reno flows
Figure 11: Average time for three Tahoe and three Reno flows, in two sets of experiments
Figure 12: Average overhead for three Tahoe and three Reno flows in two sets of experiments
Figure 13: Average overhead for three Tahoe and three Reno flows, in two sets of experiments
Figure 14: Throughput achieved by Tahoe with high and low RTT
Figure 15: Time to complete transfer for one and two flows of the same protocol
Figure 16: Time for Tahoe and Reno competing flows
Figure 17: Fairness index for two Tahoes, two Renos and a Tahoe and a Reno


List of Tables

Table 1: Time values for five runs under 0% error rate
Table 2: Time values for five runs under 1% error rate


Acknowledgments

I would like to thank the members of my thesis committee: Hussein Badr, Tzi-ker Chiueh, and Vassilios Tsaoussidis. I am also grateful to H. Badr for his valuable assistance and responsiveness. I would especially like to thank V. Tsaoussidis for closely watching, directing, and guiding me throughout each stage of this work. I am greatly obliged to him for his faith in me and his support over the last five years.

1 Introduction

The Transmission Control Protocol (TCP) has been the dominant reliable transport layer protocol ever since the appearance of its original version in 1981 [28]. The motivation behind TCP was to add reliability on top of an inherently unreliable IP network. The original TCP incorporated a "sliding window" mechanism which, in conjunction with packet acknowledgments and segment sequence numbers, guaranteed reliable data transmission as well as flow control.

1.1 Internet evolution and TCP

In the early 1980s, network congestion did not constitute a focus of concern due to the limited number of interconnected hosts, and TCP's original version was deemed adequate. As the number of hosts that joined the Internet increased, congestion problems, caused by a lack of available bandwidth, became more and more evident. The deficiency of the original TCP was the absence of a mechanism that would adjust the sending rate in response to changes in the network load, namely congestion control. As a result, the network would flood and its overall performance would be severely degraded, leading to a series of 'congestion collapses' in the mid-1980s.

1.1.1 Congestion control

It was not until 1988 that a widely accepted congestion control algorithm was finally suggested [20]. This algorithm employed the Additive Increase Multiplicative Decrease (AIMD) principle. According to AIMD, a protocol should increase its sending rate by a constant amount and decrease it by a fraction of its current value, each time an adjustment is necessary. This mechanism is the basis of virtually all TCP implementations used in today's Internet, since it provably converges to both a desirable level of efficiency and a desirable level of fairness among competing flows [12].

In the years that followed the establishment of AIMD as the standard algorithm to be used in TCP, the Internet underwent numerous changes and rapidly increasing popularity. With the availability of widespread services such as e-mail and the World Wide Web (WWW), the Internet became accessible to a broader range of people, including users lacking any particular familiarity with computers. Although new competing technologies emerged and the demands on a transport layer protocol increased considerably, TCP not only survived but also became an integral ingredient of the Internet, experiencing only minor modifications. These modifications are reflected in the different in-use TCP versions (TCP-Tahoe, TCP-Reno, TCP-NewReno) [20, 2, 14], experimental TCP versions (TCP-SACK, TCP-Vegas) [25, 10], as well as special-purpose TCP versions (T/TCP) [9]. Some of these versions are described in forthcoming sections of this work.

1.1.2 The Wireless Days

Among other innovations in computer communication, wireless networks became a very important part of the Internet in a number of different forms. Wireless networking applications include mobile devices that roam through the cells of a wireless network, wireless LANs connecting stationary hosts in a small area, and satellite links. The proliferation of wireless networks, combined with some fundamental differences they have from wired networks, led to a need for reevaluating TCP performance in combined wired/wireless environments.

More specifically, the way that TCP handles network congestion revealed a major deficiency in networks with wireless components. The congestion control algorithm embedded in most TCP versions uses packet losses as an indication of heavy network load and adjusts the sending rate accordingly. In a wireless environment, transient link interruptions caused by weak signal, weather conditions, physical obstacles, or handoff procedures introduce highly bursty errors that are not related to congestion. In these cases, TCP makes a false assumption as to the real cause of the error and consequently may take the wrong action in response. Studies on TCP over wireless LANs can be found in [21, 22, 36], while in [23] the authors examine high-RTT TCP connections in the presence of wireless error. Besides the preexisting requirements that a transport protocol be efficient and fair, mobile networking introduced energy expenditure issues, due to the limited power supply of mobile devices.

The observations mentioned above triggered a series of new propositions and debates in the research community as to what the solution to this problem should be. Among the proposed solutions for improving TCP's efficiency, we find a large variety of mechanisms focusing on different characteristics of wireless networks. As a general way of distinguishing between random error and congestion, the authors in [29] propose an Explicit Congestion Notification (ECN) from the network. In [1] and [19] the researchers describe schemes optimized for channels with high bandwidth and propagation delay, characteristics that correspond to satellite links. In networks where a mobile host is connected to the Internet through a base station, there is a clear distinction between the wired part and the wireless part of the path. In such cases, TCP improvements involve caching packets at the base station and retransmitting lost packets locally, by either splitting the TCP connection [3, 11] or employing a specialized link-layer protocol [5, 6, 7, 30].


Finally, numerous end-to-end solutions have been proposed, using more sophisticated acknowledgment techniques [13], probing mechanisms to detect network conditions [31, 32], and feedback from the wireless network adapter at the receiver [16].

1.2 The AIMD principle

As mentioned earlier, the basic concept of AIMD was proven to yield satisfactory results when the network infrastructure consisted of hard-wired components. One year after the appearance of AIMD in 1988, the authors in [12] provided a detailed analysis of different congestion control strategies, as well as of why the existence of such a strategy in a transport protocol is crucial. Below we give a few important points made in this work.

The major issue of concern to a transport protocol is its efficiency. On a network link crossed by a number of different flows running the same protocol, the ideal situation is to utilize as much of the available bandwidth as possible without introducing congestion (i.e., packets queuing up at the router). In Figure 1, we see the achieved throughput as a function of the network load. It becomes clear that we need to avoid overloading the link, since the achieved throughput would otherwise diminish. For a protocol to operate in the area between the points labeled Knee and Cliff, a congestion control mechanism is necessary. In [12], efficiency is defined as the closeness of the total load to the Knee, which is a good starting point.

Figure 1: Throughput as a function of load

Besides utilizing a high portion of the available bandwidth, a transport protocol must also be fair to the rest of the flows traversing the same part of the network. An efficient transport protocol is not necessarily also fair: a single flow might take up the largest portion of the available bandwidth while the rest remain idle. Obviously, this is undesirable behavior, and in certain cases gaining higher fairness is worthwhile even at the cost of reduced efficiency. Intuitively, fairness is the closeness of the throughput achieved by each flow to its fair share. To measure fairness, the authors in [12] define a fairness index as:

$$F(x) = \frac{\left(\sum_{i=1}^{n} x_i\right)^2}{n \sum_{i=1}^{n} x_i^2}$$

where xi is the throughput of the ith flow and n is the total number of flows. The fairness index of a system ranges from 0 to 1, with 0 being totally unfair and 1 being totally fair. In Chapter 4 we define our own fairness index and provide a more detailed analysis of this one.

Along the lines of efficiency and fairness, as defined above, four different scenarios were tested: Additive Increase Additive Decrease, Additive Increase Multiplicative Decrease, Multiplicative Increase Additive Decrease, and Multiplicative Increase Multiplicative Decrease. These scenarios were evaluated in terms of how fast they converged to the desirable efficiency and fairness levels. The AIMD scheme was found to be the one that best matched the required characteristics. Recent studies [37] provide a more in-depth analysis of the impact of the AIMD parameters on the performance of TCP.

1.3 Fairness issues in wired networks

With the Internet growing in size and complexity, further experimentation and analysis indicated that, under certain circumstances, TCP's performance suffered even in wired networks. Below we describe some of the occasions on which TCP's fairness was questioned.

1.3.1 Variant propagation delay

Bandwidth allocation on a link was found to be far from fair when the flows sharing it had different end-to-end propagation delays. Flows with smaller delay occupied more bandwidth than their fair share, while those with larger delay suffered from low throughput.

1.3.2 Asymmetry and head-start

In [4], the authors report on a case where two competing flows share a path with a congested reverse channel. The major parameters in these experiments are that congestion is experienced only by the acknowledgments, and that the second flow starts its transmission a few seconds after the first one. It appears that the first flow occupies virtually all of the reverse bandwidth, so that when the second flow enters, it is unable to expand. The bandwidth allocation in this series of experiments was close to totally unfair.

In a scenario very common in today's Internet, a flow merely initiating a transfer after other flows have already expanded (even if no special restrictions apply to the forward and reverse paths) still raises the question: how long does it take TCP to converge to a fair bandwidth allocation? The answer to this question is crucial, since most TCP connections are very short-lived. Claiming that a protocol is fair when it converges within ten minutes appears to be an oxymoron. The authors in [26] suggest a scheme where the buffer queues on the network routers are different for long- and short-lasting TCP flows, thus protecting short flows from being unable to obtain their fair share of the available bandwidth. Their claim is that such a distinction significantly improves the stability and fairness convergence of TCP. However, implementing this approach requires certain modifications to the network infrastructure.

1.3.3 Packet dropping policy

Another issue critical to TCP performance appeared to be the packet dropping policy on Internet routers. Since the only way for TCP to receive feedback from the network is through packet losses, an unfair dropping policy directly impacts fairness among the flows. A simple FIFO drop-tail queue (that is, the router drops packets at the tail of its FIFO queue when the buffer overflows) tends to discard packets unevenly. Consequently, some flows experience heavier loss than others, and fairness is violated. A solution to this problem came with a router packet dropping policy introduced in [15], called Random Early Detection (RED). According to this policy, the probability that a router drops an incoming packet depends on the closeness of the recent average queue length to the maximum threshold, as well as on the time at which the last drop occurred. This way, the dropping probability increases whenever congestion builds up and no packet drops have occurred. Essentially, RED forces the transport protocols to reduce their sending rate before the available buffer space is exhausted. A router implementing RED does not discard runs of subsequent packets (which are more likely to belong to the same flow), so that each flow senses roughly the same packet loss rate, resulting in a fairer bandwidth allocation.
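As a rough illustration of the mechanism, the following sketch mimics a RED queue. It is a simplified rendering of the policy described above, not the exact gateway algorithm of [15]; the parameter names and values (min_th, max_th, max_p, weight) are ours and purely illustrative, and the real algorithm's count-based probability scaling is omitted:

    import random

    class REDQueue:
        """Simplified Random Early Detection sketch (after [15])."""

        def __init__(self, min_th=5, max_th=15, max_p=0.1, weight=0.002):
            self.min_th, self.max_th, self.max_p = min_th, max_th, max_p
            self.weight = weight   # EWMA weight for the average queue length
            self.avg = 0.0         # recent average queue length
            self.queue = []

        def enqueue(self, packet, capacity=20):
            # Keep a moving average of the queue length.
            self.avg = (1 - self.weight) * self.avg + self.weight * len(self.queue)
            if self.avg < self.min_th:
                drop = False       # light load: never drop early
            elif self.avg >= self.max_th or len(self.queue) >= capacity:
                drop = True        # persistent congestion: forced drop
            else:
                # Drop probability grows as the average approaches max_th.
                p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
                drop = random.random() < p
            if not drop:
                self.queue.append(packet)
            return not drop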

1.3.4 Large number of flows

In [27], TCP was challenged with another consequence of the Internet's expansion: the large number of competing flows. Two sets of experiments are presented, one with 30 and one with 1500 competing flows. While in both cases TCP achieves high bandwidth utilization, the fairness results are not analogous. In the 30-flow case the protocol is roughly fair, and the throughput of most flows is slightly below the fair share. When the number of competing flows is as high as 1500 and buffer capacity on the intermediate router is limited, the configuration exhibits high variation in the bandwidth achieved by each flow. This causes the system to be unfair over intervals many seconds long. Part of the problem lies in TCP's inability to send at rates between one packet per timeout and one packet per RTT. In this configuration, TCP would either be idle or send at a rate higher than its fair share, due to its relatively coarse-grained transmission rate.

1.3.5 MAIMD

In spite of the AIMD principle's popularity and public acceptance, it was recently disputed by the authors in [17]. Their work contains an extensive analysis of AIMD in comparison with a different policy called Multiplicative Additive Increase/Multiplicative Decrease (MAIMD). In MAIMD, the policy for increasing the sending rate (when bandwidth utilization is under the desirable level) involves multiplying the previous value by a factor greater than one and then adding a constant, as sketched below. MAIMD essentially speeds up the rate at which the protocol increases its sending rate. The authors argue that in a number of cases MAIMD yields faster convergence to the desirable fairness and efficiency levels than AIMD. However, the outcome is highly affected by the parameters used to control the sending rate adjustment (i.e., the constant and the factor). An interesting point made in this work is that AIMD does not converge in terms of fairness, and converges slowly in terms of efficiency, when the system is asynchronous, meaning that multiple flows readjust their sending rates neither at the same time nor at the same frequency. This definition of an asynchronous system matches the Internet, in that different flows are not synchronized in any sense and the congestion control mechanism functions individually in each one of them. Applying MAIMD to an asynchronous system does not yield fairness convergence either.
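The two update rules can be stated compactly. The sketch below uses illustrative parameter values; the constant a, the factor g, and the decrease fraction b are ours, not values taken from [17]:

    def aimd_update(rate, congested, a=1.0, b=0.5):
        """Additive Increase Multiplicative Decrease."""
        return rate * b if congested else rate + a

    def maimd_update(rate, congested, a=1.0, g=1.1, b=0.5):
        """MAIMD: on increase, multiply by a factor g > 1, then add a constant."""
        return rate * b if congested else rate * g + a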


Simulation experiments presented in [17] showed that TCP does not converge in terms of fairness when one flow initiates a connection while another has already taken up all the available bandwidth. In the following years, it will become apparent whether the view of AIMD presented in that paper will replace the common belief that it is the optimal algorithm for use in transport protocols.

1.4 Thesis description

In this work we address the issue of how efficient and fair TCP is in networks consisting of both wired and wireless components under various conditions, such as wireless error, congestion introduced by multiple competing flows, and different Round Trip Times (RTT). We base this analysis on testing TCP Tahoe and TCP Reno, which implement a conservative and an aggressive congestion control strategy, respectively. The motivation behind our selection was to evaluate these strategies; newer protocols exist, but they are comparable in this respect. For the tested TCP versions, we identify the cases where one is favored over the other and analyze the factors that lead to these results. Related studies comparing different versions of TCP can be found in [18, 35]. To measure the performance of the tested protocols, we define a new fairness index and employ newly introduced performance metrics such as Goodput and Overhead. The traditional performance metrics were deemed insufficient, for reasons explained in the following chapters.

Throughout the analysis, our basic argument is that fairness in a heterogeneous wired/wireless environment must be evaluated in conjunction with efficiency. In a wired environment, high bandwidth utilization is achieved in most cases, so that throughput differences among competing flows can be safely interpreted as a fairness issue. On the contrary, when an error of wireless nature is present, high bandwidth utilization is not necessarily preserved. In such a situation, differences among the throughputs achieved by each flow should not be directly translated into fairness problems, since the flows barely affect each other. For example, consider a situation where the average bandwidth utilization is 10% and two competing flows are present. If one occupies twice as much bandwidth as the other, it would be misleading to conclude that the configuration is unfair. At such low bandwidth utilization values, the interaction between the two flows is very limited. However, assuming that the flows do not affect each other at all would also be misleading, since during certain periods of the communication time the two flows may occupy a great portion of the available bandwidth. Hence, they have an impact on one another, while very low bandwidth usage during the rest of the time results in the measured average of 10%. In this work, we find a way to bypass this problem by conducting multiple experiments under different configurations and cross-examining the results.

Coupling efficiency and fairness is also important to get a clear view of the overall performance of each protocol, as well as to identify any efficiency/fairness tradeoffs. For example, achieving a high fairness index while the level of efficiency is low should not be viewed as desirable behavior. Instead, it might be preferable to gain overall efficiency at the cost of losing some fairness. Our analysis also includes identifying such tradeoffs, and discusses their impact on system performance.

1.5 Presentation plan

The rest of this work is structured as follows:
• In Chapter 2, we present an overview of the congestion control strategies used in TCP-Tahoe and TCP-Reno.
• Chapter 3 describes the hosts used in our experiments, the simulated network conditions, and important parameters that affect the results, such as the size of the transferred data and the random nature of the error.
• Chapter 4 includes a comparison of an aggressive vs. a conservative strategy. Here, we also list the configurations of the conducted experiments and comment on our choices of experiment parameters. Finally, we define the performance metrics that are used in the result analysis and explain their use.
• In Chapter 5, general comments on the bandwidth utilization of TCP are presented. The results report on the portion of the available bandwidth occupied when one, two and three flows are present on the network.
• Chapter 6 reports on the results of experiments consisting of two flows in a low-RTT environment. The tests include three combinations of protocols: Tahoe-Tahoe, Tahoe-Reno, and Reno-Reno. The metrics calculated from this set of experiments are presented separately for the cases where the competing flows run the same TCP version and where each flow runs a different TCP.
• Chapter 7 reports on the results of experiments consisting of three flows in a low-RTT environment. The tests include four combinations of protocols: 3 Tahoes, 2 Tahoes and 1 Reno, 1 Tahoe and 2 Renos, and 3 Renos. As in Chapter 6, the performance metrics are presented in two categories.
• In Chapter 8, we consider two flows competing in a high-RTT environment. Our report covers the implications of the high RTT for the bandwidth utilization achieved by TCP, as well as comments on the gathered statistics.
• Chapter 9 presents the conclusions drawn throughout this work.
• Chapter 10 describes some issues that are not covered here but are closely related to this work.


2 TCP overview

In this section, we briefly describe the two TCP versions we experimented with, TCP Tahoe and TCP Reno, focusing our attention on the congestion control scheme. TCP Tahoe was the first modification of the original TCP [28], incorporating the congestion control algorithm proposed in [20]. The newer TCP Reno introduced the Fast Recovery algorithm [2], and was followed by New Reno [14] with its Partial Acknowledgment mechanism for multiple losses in a single window of data.

2.1 General protocol characteristics

TCP Tahoe and Reno use the same algorithm at the receiver but follow different approaches during the transmission process at the sender. At the beginning of a communication session, the receiver advertises a window size according to its available receiving buffer. Throughout the whole TCP session, the sender ensures that it keeps the number of unacknowledged bytes below the advertised value. This way TCP implements flow control, guaranteeing that the sender will not overload the receiver. Once the connection is set up, the receiver sends an acknowledgment for each correctly received packet, including the number of the next in-sequence packet.

On the sender side, a window mechanism defines the maximum number of in-flight (unacknowledged) packets, controlling the rate at which the sender transmits data. During the communication time, the size of this window can increase and decrease in order to adjust the transmission rate, never exceeding the receiver's advertised window. The sender also maintains a timeout value, which is dynamically recalculated every time a new acknowledgment arrives. It is based on a weighted average of previous RTT measurements as well as the standard deviation of these samples. Whenever a packet loss occurs, the timeout value is doubled, reflecting the more general philosophy that being too conservative is better than being too aggressive. When the network is heavily loaded and the intermediate routers are dropping packets, resending the lost packets quickly would introduce positive feedback, further overloading the network. Doubling the timeout value ensures that at the occurrence of packet drops, TCP will exponentially back off, eliminating the possibility of adding more burden to the network. The timeout value is very important to the protocol's efficiency. If it is too small, the sender may resend a packet even though it was not dropped, thus unnecessarily injecting data into the network. If it is too large, the protocol does not sense that a packet was lost until it has been idle for a significant amount of time, resulting in degraded throughput.
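The timeout computation described above can be sketched as follows. It mirrors the classic estimator used by BSD-derived TCP implementations; the gains (1/8 and 1/4) and the variance multiplier 4 are the textbook values, not figures taken from this thesis:

    class RTOEstimator:
        """Sketch of TCP's retransmission timeout estimation."""

        def __init__(self, first_sample):
            self.srtt = first_sample        # weighted average of RTT samples
            self.rttvar = first_sample / 2  # mean deviation of the samples
            self.rto = self.srtt + 4 * self.rttvar

        def on_ack(self, rtt_sample):
            # A new acknowledgment arrived: fold the fresh sample in.
            err = rtt_sample - self.srtt
            self.srtt += err / 8
            self.rttvar += (abs(err) - self.rttvar) / 4
            self.rto = self.srtt + 4 * self.rttvar

        def on_timeout(self):
            # A packet loss occurred: back off exponentially.
            self.rto *= 2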


In TCP, the mechanism for recovering from lost packets is oriented towards congestion control. Congestion control and error recovery are very closely related, since TCP's only feedback regarding the congestion present on the network comes in the form of missing packets. The goal of congestion control is to determine the available network capacity and to adjust the congestion window accordingly. Acknowledgments received at the sender are interpreted as available bandwidth, while missing packets are interpreted as an indication of network congestion. TCP Tahoe and TCP Reno differ in the actions they take in response to these events.

2.2 TCP Tahoe

TCP Tahoe's congestion control incorporates the Slow Start, congestion avoidance, and Fast Retransmit mechanisms [2, 20]. When a new session is initiated, the protocol enters the Slow Start phase. During this phase, the congestion window is expanded by one packet upon receipt of each acknowledgment, leading to exponential growth of the window. The protocol remains in this stage until a timeout occurs. When the timeout timer goes off, the protocol divides the current congestion window size by two and stores this value in a variable for later use. This value is called the congestion threshold. Each subsequent Slow Start phase ends when the window size reaches the congestion threshold. Setting the congestion threshold to half the congestion window and exercising Slow Start until the threshold value is reached corresponds to the Multiplicative Decrease part of the AIMD principle. The protocol then enters the congestion avoidance phase, during which it increases the current window by one packet for each full window of data that is acknowledged. The window expansion during this stage of the communication is linear and corresponds to the Additive Increase part of AIMD.

With the Fast Retransmit mechanism, a number of successive duplicate acknowledgments for the same packet (the threshold number is usually three) triggers a retransmission without waiting for the associated timeout to occur. In response to such an "early timeout," Tahoe takes the same action as it would for a regular one, setting the congestion threshold to half the current congestion window and entering Slow Start. However, Slow Start is not always efficient, especially if the error was not caused by network congestion. In such cases, shrinking the congestion window is unnecessary and can lead to low bandwidth utilization.


2.3 TCP Reno

TCP Reno introduces Fast Recovery, used in conjunction with Fast Retransmit. Upon the arrival of a duplicate acknowledgment (dack) at the sender, the protocol expands the congestion window by one, interpreting the dack as an indication of available bandwidth. When the protocol receives the threshold number of duplicate acknowledgments, it enters the Fast Recovery phase. The sender retransmits one segment, halves the congestion window, and sets the congestion threshold to the size of the congestion window. For as long as it remains in Fast Recovery, it increases the congestion window by one for each additional dack received. The sender exits the Fast Recovery phase when an acknowledgment for new data is received. It then sets the size of the congestion window to the congestion threshold and resets the dack counter. Compared to Tahoe, Reno uses a more aggressive error recovery strategy: when it receives the threshold number of dacks, it does not enter Slow Start, which would shrink the window to a single packet, but effectively sets the window to half its previous value.
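The contrast between the two strategies is captured by their reaction to the duplicate-acknowledgment threshold. The sketch below is a simplification in which window sizes are counted in packets, and the retransmission machinery and Reno's window inflation during Fast Recovery are omitted:

    def on_dack_threshold(cwnd, tahoe=True):
        """Reaction to the threshold number of dacks (usually three)."""
        ssthresh = max(cwnd // 2, 2)
        if tahoe:
            cwnd = 1          # Tahoe: fall back to Slow Start
        else:
            cwnd = ssthresh   # Reno: Fast Recovery, resume at half the window
        return cwnd, ssthresh

    def on_new_ack(cwnd, ssthresh):
        """Window growth on a new acknowledgment (both versions)."""
        if cwnd < ssthresh:
            return cwnd + 1       # Slow Start: doubles the window every RTT
        return cwnd + 1 / cwnd    # congestion avoidance: one packet per RTT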


3 Testing environment

The two TCP versions tested in this work were implemented using the xkernel protocol framework [35]. The experiments involve real data transfers between two hosts connected over a 10 Mbps switched Ethernet. The hosts were two Sun Ultra-5 machines with 64 MB of RAM, running SunOS 5.4. The configuration of these machines guarantees that the outcome of the experiments is not biased by any additional delays due to lack of processing power. The experiments were conducted at times when the two computers were not performing any operations besides running the protocols and the local network was essentially idle; typically, the tests took place between 11:00 pm and 6:00 am. This way the protocols were allowed to exhibit their behavior without being affected by any uncontrolled activity.

3.1 Simulated conditions

Although the experiments involved actual data transfers as opposed to a simulation, some parameters of the connection still need to be controlled, namely the wireless error and the Round Trip Time. The wireless error is a fundamental parameter throughout the whole series of tests conducted here, since our primary objective was to expose the protocols to an error pattern that resembles the error present on a wireless network. For this purpose we employ an error model developed for the xkernel platform [33]. The error model consists of states A and B, or "Off" and "On", respectively. For each of these states, we set the mean sojourn time (the mean time the model will remain in that state) as well as the error rate. The error mechanism visits each state and settles there for an exponentially distributed amount of time before moving to the next state. The error rate during the "Off" phase is set to 0%, while during the "On" phase it ranges between 0% and 50%. An error rate of 10% in the "On" state will cause the error model to drop all incoming packets for a continuous 10% of the overall time it spends in this state. We have thus used the well-known two-state Markov model for simulating the wireless channel errors. However, we have not simulated network congestion; instead, real congestion is present on the network. This way, we are able to combine wired and wireless errors.

To test protocol performance under different RTT values, we use a mechanism that delays incoming packets. This mechanism takes two parameters, representing the minimum and the maximum delay a packet can experience during the session. The minimum value corresponds to the propagation delay of the connection, while the maximum value accounts for any additional delays due to increased congestion and packet queuing that may occur on the intermediate routers. The delay value is randomly sampled between the min and the max values every time a packet arrives.
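Both controlled parameters can be sketched as follows. The mean sojourn times are in seconds (the defaults match the 1500 ms used in the low-RTT experiments of section 4.3.1), the function names are ours, and, since the delay distribution is not stated, the delay sketch assumes it is uniform:

    import random

    def error_phases(mean_off=1.5, mean_on=1.5, on_error_rate=0.10):
        """Two-state ("Off"/"On") wireless error model, after [33].

        Yields (state, duration, blackout) tuples; during an "On" phase
        all packets are dropped for a continuous fraction of the phase."""
        while True:
            yield ("Off", random.expovariate(1 / mean_off), 0.0)
            on_time = random.expovariate(1 / mean_on)
            yield ("On", on_time, on_time * on_error_rate)

    def packet_delay(min_ms=20.0, max_ms=25.0):
        """Per-packet delay for the high-RTT tests (uniform assumed)."""
        return random.uniform(min_ms, max_ms)

    # Example: the first few phases of a channel with a 10% "On" error rate.
    phases = error_phases()
    for _ in range(4):
        state, duration, blackout = next(phases)
        print(f"{state}: {duration:.2f} s, dropping all packets for {blackout:.2f} s")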

3.2 Data transfers

Throughout the experiments, we set up a number of TCP connections that transmit a predefined chunk of data. The sender in such a connection is saturated, so that there is always data available to be sent and the application layer does not introduce any additional delays. At the end of each experiment, we record the amount of time required to complete the transfer and the total bytes that were ultimately transmitted, including the protocol headers and packet retransmissions. From these two pieces of information, a handful of other metrics are derived, such as Throughput and Fairness. A more detailed description of the metrics used is provided in the next chapter.

In experiments where multiple flows are involved, multiple TCP sessions are set up between the same two machines. Preliminary experiments have shown that the tested protocols exhibit the same behavior when there is only one connection between each pair of machines (i.e., multiple pairs of machines are used) as when all connections are set up between only two hosts. The latter scenario was preferred, due to the increased control over the synchronization among multiple flows. More specifically, multiple flows must all start simultaneously, to ensure that no flow has time to expand before the rest initiate their transfers. Running all protocols on the same machine allows initiating all flows at virtually the same time.

3.3 Randomness in the environment

The experimental configuration involves an error of a random nature: both the duration of the error and the time at which it occurs are random. The former has the obvious consequence that a protocol experiences either heavier or lighter error conditions on different runs (under the same error configuration), while the latter also affects the results, for the reasons explained below. Depending on the state a protocol is in when the error occurs, the impact on performance can vary significantly. For instance, if the protocol has developed a large congestion window when the channel switches to a "bad phase," the result will be multiple packet drops and, consequently, multiple timeouts. A large number of timeouts leads to an exponential increase of the timeout value, which renders the protocol unable to detect further packet losses in a timely fashion. On the other hand, if the error burst occurs when the window is relatively small, fewer packets will be lost and the impact on the overall protocol performance will be much milder.


With the randomness introduced by the error scheme in mind, each experiment is repeated several times and averaged statistics are extracted from all runs. This ensures that our conclusions are not based on single, non-reproducible results but rather on the average of multiple separate runs. At the end of each transfer, the TCP connection is released, and enough time is allowed for all coexisting flows to complete their transfers before the next experiment is initiated.


4 Testing methodology and parameters of significance

The experiments presented here were conducted using two versions of TCP, namely Tahoe and Reno. As described in Chapter 2, Tahoe implements a conservative congestion control algorithm, while Reno implements a more aggressive one. The motivation behind this selection was to derive more general conclusions about the impact of a conservative or aggressive strategy on protocol performance in a compound wired/wireless environment. Identifying the cases where each strategy yields better results is useful for determining the ideal behavior of a transport protocol.

4.1 Aggressive vs. conservative strategies

Reno's aggressive congestion control was released as an improvement over the original congestion control mechanism implemented in Tahoe. The ability of Reno to recover from a single packet drop per window faster than Tahoe rendered it more appropriate for use in the Internet (most of today's TCP implementations incorporate the congestion control implemented in Reno). However, in a network with wireless components, this observation is not always true. The key factor that determines the protocol behavior is the tradeoff between the amount of sent data and the timeout value. An aggressive strategy is more persistent and does not immediately back off at the occurrence of a packet drop. If the drop was caused by a wireless link interruption, it is likely that a number of subsequent packets will also be lost, experiencing the same "bad phase" of the communication channel. Consequently, even more packets can be lost, resulting in an extended timeout value (TCP doubles its timeout value every time the timeout timer goes off). An overextended timeout slows down subsequent packet drop detection and degrades the protocol's efficiency. On the other hand, a conservative strategy will immediately back off, forgoing the chance to recover quickly; however, this eliminates the possibility of overextending the timeout value.

4.2 Testing stages

Our experiments with Tahoe and Reno can be divided into five main categories. In all of the cases listed below, the protocols were tested at six different error rates: 0%, 1%, 10%, 20%, 33% and 50%.


4.2.1 Low RTT-one flow (LR1)

In this set of experiments, Tahoe and Reno are tested separately in a low-RTT environment where only one flow is present on the network. The RTT perceived from the application layer of the xkernel platform (queuing delays due to congestion are not included) was measured to be roughly 1 ms. The results from LR1 show the performance achieved by the two protocols when no congestion is present, and are used as a guide for evaluating their relative behavior when competing flows exist.

4.2.2 Low RTT-two flows (LR2)

The tests conducted in this category involve two competing Tahoe flows (Ta-Ta), two competing Reno flows (Re-Re), and a Tahoe flow competing with a Reno flow (Ta-Re). This set of experiments reveals how fair and efficient each combination is.

4.2.3 Low RTT-three flows (LR3)

As in the previous set, we test all four different combinations of the two TCPs. These combinations are three Tahoe flows (Ta3), three Reno flows (Re3), two Tahoe and one Reno flows (Ta2Re1), and one Tahoe and two Reno flows (Ta1Re2).

4.2.4 High RTT-one flow (HR1)

Here, Tahoe and Reno are separately tested in a high-RTT environment. To simulate the high RTT, we added a delay of 20-25 ms according to the scheme described in section 3.1, bringing the RTT to a value of roughly 50 ms. The results indicate the performance achieved by the protocols when no congestion is present.

4.2.5 High RTT-two flows (HR2)

Finally, we experiment on the fairness and efficiency of the two TCP versions in a high-RTT environment when a competing flow is also present. The tested combinations are the same as in LR2: Ta-Ta, Re-Re and Ta-Re.


4.3 Testing parameters

A number of testing parameters had to be determined in order to carry out the experiments included in this work. The configuration we used was based on a series of preliminary tests that helped us determine which values would produce the most representative results.

4.3.1 Error phase duration

For the set of experiments in a low-RTT environment, the durations of the "On" and "Off" error states were both set to 1500 ms. During the stage of determining these values, Tahoe appeared to be favored when the "On" state was significantly shorter than the "Off" state, whereas Reno showed the opposite tendency. A value of 1500 ms for both states was a reasonable intermediate choice, being well above the typical UNIX timer precision of 500 ms. In the high-RTT case, an additional delay of 20-25 ms is introduced in both directions. For this case, the durations of the two states remained equal, but instead of 1500 ms they were set to 4500 ms. Due to the increased RTT, TCP needs more time to sense changes in the network condition; by increasing the average sojourn time of the "On" and "Off" states, we essentially slow down the rate at which changes take place on the network.

4.3.2 Data size

The size of the data used in the TCP transfers was set to 5 MB or 20 MB, depending on the rest of the experiment parameters. In a low-RTT environment with two competing flows, the data amount was 20 MB. In an environment with either a high RTT value or three coexisting flows, the amount of data was 5 MB. The data size was selected so that the communication time would be adequate in length for the protocols to exhibit their characteristics. In the low RTT-two flows case, the efficiency of the individual connections is better than in the case of high RTT or three flows, and the larger amount of sent data was chosen to increase the communication time.

4.3.3 Number of repetitions

For each experiment configuration, that is, one of the five basic categories at a particular error rate, 15 separate tests were conducted. We observed that further increasing the number of repetitions for a given configuration did not significantly alter the aggregate statistics. A single test will be referred to as a run.

4.4 Performance metrics

Essentially, the only measurements we take at each run are the transmission time (Time) and the total number of bytes transferred (TotalBytes), including protocol headers and packet retransmissions. From these two, along with the size of the data set transmitted from the application layer, we calculate Throughput, Goodput, Bandwidth Utilization, Overhead, and Fairness.

4.4.1 Throughput

The throughput of a flow is defined as:

$$\mathrm{Throughput} = \frac{\mathrm{TotalBytes}}{\mathrm{Time}}$$

Throughput represents the bandwidth taken up by a flow, but it is not always related to the efficiency of the running protocol. An inefficient strategy might yield bandwidth utilization very close to 100%, but the produced throughput could include a significant amount of unnecessary retransmissions, rendering the protocol's efficiency poor.

4.4.2 Goodput

As opposed to the throughput defined in the previous section, goodput gives the actual transmission rate perceived at the receiver's application layer. Protocol headers and packet retransmissions are not reflected in this metric. Goodput is defined as:

$$\mathrm{Goodput} = \frac{\mathrm{DataSize}}{\mathrm{Time}}$$

4.4.3 Bandwidth Utilization

Bandwidth utilization is the percentage of the available bandwidth occupied by all competing flows. In our case, the available bandwidth is 10 Mbps. Although the usable bandwidth on an Ethernet is in practice less than 10 Mbps, bandwidth utilization can still be used to compare the performance of different configurations.


4.4.4 Overhead

Overhead is the extra number of bytes the protocol transmits, expressed as a percentage over and above the size of the data delivered to the application at the receiver, from connection initiation to connection termination. The overhead is given by the formula:

$$\mathrm{Overhead} = 100 \cdot \frac{\mathrm{TotalBytes} - \mathrm{DataSize}}{\mathrm{DataSize}}$$

4.4.5 Fairness

To measure the overall fairness in a system with multiple flows, we introduce a new performance metric. Our motivation was to create an index that reflects the fairness experienced by each flow. The formula for our index is:

$$\mathrm{FairnessIndex} = 1 - \frac{\sum_{i=1}^{n} |T_i - \mathrm{Avg}|}{2(n-1)\,\mathrm{Avg}}$$

where n is the number of flows, Ti is the throughput of flow i, and Avg is the average throughput achieved by all n flows. As defined above, the Fairness Index displays the following properties:
• It ranges between 0 and 1 for any number of flows. A totally fair bandwidth allocation has an index of 1 (all flows have the same throughput). A totally unfair allocation has an index of 0 (one flow takes up all the bandwidth and the rest are idle).
• It is continuous (every change in the throughput of any flow affects the value of the index).
• The unit of measurement used does not affect the index.
• If, out of n flows, k are equally sharing all the bandwidth and the remaining n-k are idle, the fairness index is (k-1)/(n-1).

The numerator of the fraction in the formula is the sum of the absolute differences of each flow's throughput from the average throughput. The larger this sum is, the more unfair the system is, since the individual throughputs diverge from the average. In the worst-case scenario, one flow is the only active flow while the rest remain idle. In this situation, the absolute difference for each idle flow is Avg (since Ti = 0), while for the active flow the absolute difference is (n-1)*Avg. The sum of the first n-1 terms is (n-1)*Avg. Adding the absolute difference for the active flow, we get 2(n-1)*Avg, which is the value we divide by to normalize the result.
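The metric definitions of this chapter translate directly into code. The following sketch (the function names are ours) also computes the index of [12] for comparison with the discussion below:

    def throughput(total_bytes, time):              # section 4.4.1
        return total_bytes / time

    def goodput(data_size, time):                   # section 4.4.2
        return data_size / time

    def overhead(total_bytes, data_size):           # section 4.4.4, percent
        return 100 * (total_bytes - data_size) / data_size

    def fairness_index(t):                          # the new index (n >= 2)
        n, avg = len(t), sum(t) / len(t)
        return 1 - sum(abs(ti - avg) for ti in t) / (2 * (n - 1) * avg)

    def jain_index(t):                              # the old index of [12]
        return sum(t) ** 2 / (len(t) * sum(ti ** 2 for ti in t))

    # Two flows, one with 50% more throughput: new index 0.8, old ~0.96.
    print(fairness_index([1.5, 1.0]), jain_index([1.5, 1.0]))
    # k of n flows sharing equally, the rest idle: (k-1)/(n-1).
    print(fairness_index([1, 1, 1, 0, 0]))          # k=3, n=5 -> 0.5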


In section 1.2, we presented the original fairness index, introduced in [12]. Below, we provide a brief comparison of the two indices and explain the reasons that led us to define a new one. In the following paragraphs, the fairness index defined in [12] will be referred to as the old index, while the index defined here will be the new one:
• The old index ranges from 1/n to 1, where n is the number of flows. For instance, if there are two flows and only one is active while the other is idle, the old fairness index will be 0.5. The new index always ranges from 0 to 1, regardless of the value of n. In the previously mentioned example, the value of the new index would be 0.
• The new index is more sensitive to changes, especially when the number of flows is small. For example, in the case where there are two flows and one achieves 50% more throughput than the other, the old index gives 0.96, while the new one gives 0.8. When one flow uses exactly twice as much bandwidth as the other, the old index gives 0.9, while the new one gives 0.66. In general, the new index reflects the system fairness more accurately, especially when the number of flows is small, which is the case for the experiments conducted in this work.

When dealing with only two flows, the formula for the fairness index reduces to:

$$\mathrm{FairnessIndex} = 1 - \frac{\left|T_1 - \frac{T_1+T_2}{2}\right| + \left|T_2 - \frac{T_1+T_2}{2}\right|}{2\cdot\frac{T_1+T_2}{2}} = 1 - \frac{|2T_1 - (T_1+T_2)|}{T_1+T_2} = 1 - \frac{|T_1 - T_2|}{T_1+T_2}$$

This is essentially one minus the portion of the overall throughput represented by the difference between the two individual throughput values. That is, if two flows are present and the fairness index has a value of 0.95, the throughput difference of the two flows is 5% of the overall throughput achieved by both. The ratio of the higher throughput over the lower one can be found by solving the above equation for T1/T2: T1/T2 = (2 − FI)/FI. For the previous example, this is roughly 1.10, which denotes that one flow took up 10% more bandwidth than the other.

At this point, it is worth noticing that when examining fairness, we consider the throughput achieved by a flow as opposed to the goodput. A flow being unfair and taking up most of the available bandwidth does not necessarily mean that this flow is more efficient than the rest, since part of the transmitted data consists of packet retransmissions. Thus, fairness is not concerned with how the occupied bandwidth is used, but rather with the bandwidth utilization itself, and is unrelated to any efficiency metrics. It is understood that if goodput were used as the metric to compute fairness, the results would not reflect the reality as perceived at the link layer.

Another fairness index used in the literature is the min-max ratio [24], defined as:

$$M = \min_{i,j} \frac{x_i}{x_j}$$

Where, xi and xj are the throughput values for flows i and j, respectively. This is essentially the ratio of the throughput achieved by the flow that took up the smallest portion of the available bandwidth over the throughput of the flow that took up the greatest portion of the available bandwidth. This index is useroriented, as opposed to the other two, which are system-oriented. The min-max ratio is 0 if any of the flows has 0 throughput, even if the rest share the bandwidth equally. The assumption made by this index is that if one user is dissatisfied, the system is unfair regardless of how the rest of the bandwidth allocation is done. In our analysis, we do not use the min-max ratio because in the cases we examined (i.e. small number of flows), it does not provide us with any additional information about the overall system fairness. 4.5

4.5 Capturing fairness

During our study, we combine the results of a vertical and a horizontal analysis over the gathered statistics. In the horizontal analysis, the fairness index for each individual experiment under a certain error rate is calculated. The average of the fairness index over all runs represents the fairness index for this error rate. This is an indication of how well the protocols interact with each other and how fair the combination is. The fairness index calculated during the horizontal analysis might be slightly inaccurate when the difference in the time required by the two flows is large. Once a flow has transferred the predefined amount of data, it stops transmitting, and the other flow essentially continues with no competing traffic for the rest of the connection. In such a case, the fairness index assumes that the two flows were competing until the slower flow completed its transfer.

What the horizontal analysis does not achieve is determining which one of the two flows was favored, and by what amount. The vertical analysis involves averaging the throughput values of each flow over all runs, for a certain error rate. By comparing the outcomes we can conclude which protocol was favored over the other. When all flows run the same protocol, the vertical analysis is of no significance, since the averaged throughput values will ultimately be equal.
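A minimal sketch of the two analyses over a hypothetical result matrix (one row per run at a given error rate, one column per flow) is shown below; the data layout and values are illustrative, not the actual tooling used in this work.

```python
# Sketch: horizontal vs. vertical analysis of per-run throughput results.
# throughput[r][f] = throughput of flow f in run r, at one error rate.
throughput = [
    [4.1, 3.6],
    [3.2, 4.4],
    [3.9, 3.8],
]

def fairness_two_flows(t1, t2):
    return 1 - abs(t1 - t2) / (t1 + t2)

# Horizontal analysis: fairness index per run, averaged over all runs.
# Tells us how fair the protocol combination is, not who was favored.
horizontal = sum(fairness_two_flows(*run) for run in throughput) / len(throughput)

# Vertical analysis: average each flow's throughput across all runs.
# Comparing the per-flow averages tells us which protocol was favored.
vertical = [sum(run[f] for run in throughput) / len(throughput)
            for f in range(len(throughput[0]))]

print(horizontal)  # average fairness at this error rate
print(vertical)    # per-flow average throughput at this error rate
```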

5 Bandwidth Utilization

In Figure 2, the pink line corresponds to the throughput achieved by Tahoe when running with no competing flows, the blue one to the total throughput achieved by two simultaneously running Tahoe flows, and the yellow one to that of three Tahoe flows. As can be observed from this chart, the larger the number of flows, the higher the bandwidth utilization at all error rates. In error-free conditions, the difference between the 1-flow and the 2-flow cases is insignificant and the bandwidth utilization for both is close to 80%. The three-flow case achieves roughly 10% more total throughput than the other two, even at 0% error rate. As the error rate increases, the two-flow case uses an extra 33% of the bandwidth used by the single flow, while the three-flow case uses almost 100% over that value. Setting the desirable bandwidth utilization on an Ethernet to a rough 80% shows that a single flow achieves it only at 0% error rate, the two flows up to an error of 1%, and the three flows up to an error of 10%. In the remaining cases, all three combinations essentially underutilize the available bandwidth.

[Chart: bandwidth utilization (%) versus error rate (%); lines: Tahoe (one flow), Ta-Ta (two flows), Ta3 (three flows)]

Figure 2: Throughput achieved by one, two and three Tahoe flows

Because of the wireless error, the protocol shrinks its window at the occurrence of a packet drop, although there is still bandwidth available. In the case where multiple flows are present, as one flow backs off, the rest take up some of the unused bandwidth and the overall throughput remains high. This is more obvious at error rates between 1% and 10%, where the overall throughput for two and three flows is only slightly affected by the additional packet drops, while the single flow suffers low bandwidth utilization even at an error rate of 1%. At higher error rates, it is more likely that all flows will be experiencing a "bad phase" at the same time and the system will not be able to compensate for the wasted bandwidth. Thus, the system efficiency degrades. The situation is analogous for one Reno and multiple coexisting Reno flows.

6 Low RTT, two flows

6.1 Same protocol

Figure 3 illustrates the time needed by the protocols to complete the 20MB data set transfer. The lines labeled Ta-Ta and Re-Re show the time for two competing flows of the same protocol, while the lines labeled Tahoe and Reno show the time for a single flow. For the two-flow case, the line represents the average of the time for each flow. When only one flow exists, the two protocols behave similarly up to an error rate of 20%. Under more deteriorated conditions, Reno gains 15% and 6% in performance for error rates of 33% and 50%, respectively. Reno's more aggressive strategy is favored when there is a high wireless error but no actual congestion on the network.

With two competing flows of the same protocol, the pattern is analogous to the previous situation, except when the error rate is 50%. At this error rate, the two Reno flows need 7% more time than the two Tahoe flows to complete the transfer. Although a single Reno flow is more efficient than a single Tahoe flow, at 50% error rate two Tahoe flows are more efficient than two Reno flows. In such cases, where wireless error as well as congestion exists in the network, Tahoe's conservative behavior is favored in terms of efficiency.

It must be noted here that congestion at such high error rates does not imply that the bandwidth utilization is high. As depicted in Figure 2, the bandwidth utilization at an error rate of 50% is approximately 50% for the two-flow case examined in this section. This 50% corresponds to the average throughput over the whole session, and not to the actual throughput value at each given moment of the transfer. If the bandwidth utilization were 50% at all times during the session, the time for a single flow and for two coexisting flows would be equal, since the two flows would virtually not affect one another at all. However, in Figure 3, it can be seen that at an error rate of 50%, a single flow needs about 60 seconds to complete the transfer, while it takes the two flows about 90 seconds to accomplish the same. Thus, even if the average bandwidth utilization for a certain experiment is low, the actual bandwidth utilization during some portions of the transfer time takes high values. During these portions the two flows indeed compete with each other and congestion is present on the link.

[Chart: time to complete transfer (sec) versus error rate (%); lines: Ta-Ta, Re-Re, Tahoe, Reno]

Figure 3: Time to complete transfer for one and two flows of the same protocol

In Figure 4, we display the fairness index of all three different combinations of coexisting flows. At zero error rate, any packet drops experienced by the protocols are caused solely by congestion. Consequently, the fairness index is very close to 1. This conforms to previous observations that TCP is adequately fair in wired network environments (at least under the conditions present in these experiments), where the throughput values of different flows ultimately converge to a common level. However, when an error of wireless nature is present, the fairness index drops to roughly 0.95 and keeps decreasing as the error rate goes higher. A fairness index of 0.95 essentially means that, on average, one flow is taking up 10% more bandwidth than the other (using the formula T1/T2 = (2 − FI) / FI). For an error rate of 33%, this value is as high as 35%.

When one flow experiences a "bad phase" of the communication channel, it backs off; that is, it slows down its transmission rate by shrinking its congestion window, interpreting the lost packets as an indication of congestion. The second flow, in the absence of packet drops, continues increasing its window size and injecting more packets into the network. Eventually, it reaches a stage where it occupies a portion of the bandwidth that belongs to the other flow. At that time, the flow that experienced the error is unable to utilize its fair share of the bandwidth. After the "bad phase" of the first flow is over, the protocol will try to expand, but the network will be congested by the second flow. In the first chapter, we mentioned TCP's deficiency in terms of convergence to a desirable fairness level when one flow is initiated while the network is already congested. The above scenario takes place numerous times throughout each run; whatever fairness the system has achieved up to that point is reset, and the process of fair bandwidth allocation starts again from scratch. The occurrence of this scenario is random, since it depends on the random error present on the connection. This randomness does not allow the system to converge; instead, one of the two flows will ultimately be favored over the other, resulting in poor performance in terms of fairness.

[Chart: fairness index versus error rate (%); lines: Ta-Ta, Re-Re, Ta-Re]

Figure 4: Fairness index for two Tahoes, two Renos, and one Tahoe with one Reno

The value of the fairness index for each protocol combination shows that the system with two Tahoe flows yields the best performance for the majority of the tested error rates. A Tahoe flow that has expanded by the time the second flow starts expanding will shrink its window at the first packet drop due to congestion, so both flows will have the chance to divide the available bandwidth in a fair manner. A Reno flow, on the other hand, is more likely to overcome single packet drops in one congestion window, even if they were caused by the second flow trying to expand. As a result, the second flow will not be able to get its fair share of the bandwidth. For the same reasons that Tahoe yields better fairness, it is outperformed by Reno in terms of efficiency. Reno's ability to recover after a single packet loss in a window yields better throughput at the cost of decreased fairness.

The situation is reversed at 50% error rate, where Reno's more aggressive algorithm has the opposite effect. The decision not to shrink the congestion window in reaction to a lost packet leads to even more lost packets and an overextended timeout value. In this context, Tahoe's conservative behavior allows it to experience fewer packet drops and to retain a relatively small timeout value, detecting subsequent losses in a more timely manner than Reno. This drawback of the aggressive approach exists only when the network suffers real congestion. The bottom part of Figure 3 depicts the time needed for a Tahoe and a Reno flow to complete the file transfer when no other flows are present. Reno needs less time than Tahoe for all error rates, even for the 50% rate discussed previously. Network congestion, besides introducing packet drops due to buffer overflows on the intermediate nodes, has the effect of increasing the RTT perceived by the sender. The RTT measured by the sender, along with its standard deviation, determines the amount by which the current timeout value will be altered. When a Reno flow runs without any competing flows, no congestion is present and, consequently, the RTT remains small with a small standard deviation. Even if multiple drops occur, Reno will be able to readjust its timeout value relatively easily. However, when congestion is present, both the RTT and its standard deviation increase. Readjusting the timeout value requires more time, resulting in degraded efficiency.

Combining the graphs in Figures 3 and 4, we see that there is an evident tradeoff between efficiency and fairness. Up to an error rate of 33%, the two Tahoe flows are less efficient than the two Reno flows, but they are also more fair to each other. On the other hand, when the error is 50%, the two Tahoes are more efficient but less fair than the two Renos.
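The timeout readjustment discussed above is driven by the standard RTT-based retransmission timer. The sketch below shows the classic Jacobson/Karels estimator in the style later codified in RFC 6298; this is the textbook form, which may differ in detail from the x-kernel TCP implementation used in these experiments. The point is that a large RTT standard deviation (rttvar) directly inflates the timeout, and once inflated, it takes several samples to settle back down.

```python
# Sketch: classic TCP retransmission-timeout (RTO) estimation.
# A congested path raises both srtt and rttvar, so the RTO grows and
# takes longer to decay after a burst of losses.
ALPHA, BETA, K = 1 / 8, 1 / 4, 4

def update_rto(srtt, rttvar, sample):
    """Fold one RTT measurement into the smoothed estimators."""
    rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - sample)
    srtt = (1 - ALPHA) * srtt + ALPHA * sample
    return srtt, rttvar, srtt + K * rttvar  # new RTO

srtt, rttvar = 0.055, 0.055 / 2  # illustrative initial estimates (seconds)
for sample in (0.055, 0.060, 0.150, 0.200, 0.120):  # congestion spikes the RTT
    srtt, rttvar, rto = update_rto(srtt, rttvar, sample)
    print(f"sample={sample:.3f}  srtt={srtt:.3f}  rto={rto:.3f}")
```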

6.2 Different protocol

Figure 5 shows the time for Tahoe and Reno when these two protocols compete with each other. In this scenario, the two protocols perform almost equally (with Tahoe yielding slightly better results) up to an error rate of 33%. Under a 50% error rate, Reno requires 6% more time than Tahoe to complete the transfer.

[Chart: time to complete transfer (sec) versus error rate (%); lines: Tahoe, Reno]

Figure 5: Time for competing Tahoe and Reno flows

In the Tahoe-Reno combination, the throughput values of each flow are roughly equal, while the fairness index is relatively low. This signifies that in a given run, one flow takes up more bandwidth than the other. However, this happens in a random fashion, so that the overall efficiency throughout the whole set of runs is equal for both protocols. Figures 6 and 7 depict the throughput achieved and the overhead introduced by each flow, respectively. The throughput graph indicates that for most error rates, Tahoe achieves slightly better throughput than Reno. In Figure 7, we see that Reno introduces more overhead than Tahoe for error rates greater than 10%. Reno, besides achieving slightly lower throughput than Tahoe, also uses the bandwidth less efficiently, having to retransmit more packets than Tahoe.

[Chart: throughput (Kbps) versus error rate (%); lines: Tahoe, Reno]

Figure 6: Throughput achieved by competing Tahoe and Reno flows

[Chart: overhead versus error rate (%); lines: Tahoe, Reno]

Figure 7: Overhead introduced by Tahoe and Reno when running as two competing flows


7 Low RTT, three flows

7.1 Same protocol

Figure 8 shows the time for Tahoe and Reno when running with no competing flows (bottom two lines), and the average time for three Tahoe and three Reno flows (top two lines). For the single-flow case, the two protocols perform almost equally for most of the tested error rates, with Reno being slightly more efficient than Tahoe. For three competing flows of the same protocol, the three-Reno combination is more efficient up to an error rate of 10%. Above this error rate, Tahoe takes over, and the difference between the two protocols grows as the error rate increases. This conforms to the observation made in the previous chapter that at high error rates, when congestion is present on the network, Tahoe outperforms Reno in terms of efficiency. The difference between the two-flow case and this one is the threshold value beyond which Tahoe becomes more efficient than Reno.

[Chart: time to complete transfer (sec) versus error rate (%); lines: Ta3, Re3, Tahoe, Reno]

Figure 8: Time for a single Tahoe and a single Reno flow, and for three Tahoe and three Reno flows


In Figure 3 we saw that at 50% error rate, Reno is less efficient than Tahoe. Figure 8 indicates that when three flows are present, Tahoe is more efficient even at an error rate of 20%. Due to the increased congestion caused by three flows instead of two, the RTT is less steady and takes higher values throughout the connection. Because of that, Reno's deficiency in overextending its timeout value makes its presence felt at lower error rates than when the network is less congested.

[Chart: goodput (Kbps) versus error rate (%); lines: Tahoe, Reno]

Figure 9: Average goodput of three Tahoe and three Reno competing flows

Figure 9 shows the average goodput achieved by the three Tahoe and the three Reno flows. Here, it is easier to identify the threshold error rate at which a conservative strategy becomes more efficient than an aggressive one. This value falls between the error rates of 10% and 20%. The threshold corresponds to the equilibrium of the aggressive/conservative tradeoff, as explained in section 4.1. A rather surprising observation from Figure 9 is that both protocols are slightly more efficient under an error rate of 1% than in error-free conditions. Table 1 contains the time values for five runs under error-free conditions, while Table 2 presents the time values for five individual runs under a 1% error rate, to give a clearer view of the relative efficiency of each flow within a certain run.

          Exp 1 (sec)  Exp 2 (sec)  Exp 3 (sec)  Exp 4 (sec)  Exp 5 (sec)
Flow 1    14.7         14.1         13.1         14.6         14.4
Flow 2    14.3         13.5         13.6         14.7         15.0
Flow 3    14.8         13.9         14.8         13.6         14.9

Table 1: Time values for five runs under 0% error rate

For each run at 0% error rate, the time values of the three flows are very close to each other. At an error rate of 1%, the time of the slowest flow is always higher than all the time values in the 0% error rate table, while the time of the fastest flow is always lower than all the time values in the 0% error rate table (the sketch following Table 2 checks this mechanically). The low error rate of 1% slows down one or two of the flows, while the rest have a chance to expand, occupying more bandwidth. Additionally, when one flow finishes its transfer, it frees bandwidth and allows the rest to complete their transfers with less competition. This situation slightly favors the overall efficiency of the system compared with error-free conditions.

          Exp 1 (sec)  Exp 2 (sec)  Exp 3 (sec)  Exp 4 (sec)  Exp 5 (sec)
Flow 1    11.5         12.7         10.6         15.7         11.6
Flow 2    16.2          9.8         14.5         12.3         10.9
Flow 3    15.4         15.6         15.7         15.3         15.6

Table 2: Time values for five runs under 1% error rate
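The observation stated above can be verified mechanically against the two tables; the sketch below uses the tabulated values verbatim, with one inner list per run.

```python
# Sketch: verify the observation made about Tables 1 and 2.
# Each inner list holds the three flows' times for one experiment.
t0 = [[14.7, 14.3, 14.8], [14.1, 13.5, 13.9], [13.1, 13.6, 14.8],
      [14.6, 14.7, 13.6], [14.4, 15.0, 14.9]]   # 0% error rate
t1 = [[11.5, 16.2, 15.4], [12.7, 9.8, 15.6], [10.6, 14.5, 15.7],
      [15.7, 12.3, 15.3], [11.6, 10.9, 15.6]]   # 1% error rate

hi = max(max(exp) for exp in t0)  # largest time seen at 0% (15.0)
lo = min(min(exp) for exp in t0)  # smallest time seen at 0% (13.1)
for exp in t1:
    # In every 1% run, the slowest flow is slower than any 0% value,
    # and the fastest flow is faster than any 0% value.
    assert max(exp) > hi and min(exp) < lo
print("observation holds for all five runs")
```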

[Chart: fairness index versus error rate (%); lines: Ta3, Re3, Ta-Re]

Figure 10: Fairness index for three Tahoe and three Reno flows

The unfair bandwidth allocation shown in Table 2 is also reflected in the fairness index. Figure 10 shows the fairness index of the two combinations. Even at the low error rate of 1%, the fairness index drops to approximately 0.9 for both protocols. For the same scenario with two flows, this value was 0.95 (Figure 4). Additionally, the fairness index keeps decreasing, reaching the minimum value of 0.65 at an error rate of 50%, whereas the minimum fairness is 0.85 in the two-flow case. This comparison leads us to the general conclusion that when an error of wireless nature is present on the link, TCP is significantly less fair with three competing flows than with only two, for both versions tested here.

Looking more closely at the fairness achieved by Tahoe and Reno in Figure 10, we observe that Reno is more fair up to an error rate of 10%. Above 10%, Tahoe yields a higher fairness index. The tradeoff we indicated in the two-flow case no longer exists for three flows. Instead, the fairness and efficiency levels for both protocols follow the same pattern: Reno is both more efficient and more fair than Tahoe up to a 10% error rate, and Tahoe is both more efficient and more fair than Reno for error rates greater than or equal to 20%. In the previous paragraphs, we have seen how the aggressive strategy followed by Reno appears to be superior at relatively low error rates when it runs along with other Reno flows. However, the situation changes when it competes with Tahoe flows.

7.2 Different protocol

Figure 11 combines the time measurements gathered from the remaining two sets of experiments in this category, namely two Tahoe with one Reno flow, and one Tahoe with two Reno flows. The time for each protocol as presented in Figure 11 is the average time of all three flows of that protocol in both sets. Instead of providing separate graphs for each case, we aggregated the results from both experiments to draw more general conclusions about the case where three flows of different protocols are competing on the link. Similarly, in Figure 10, the Tahoe-Reno line corresponds to the average fairness index of both experiments at a certain error rate.

As shown in Figure 11, Tahoe completes the transfer faster at all tested error rates except 20%. At error rates where Reno was more efficient when competing with other Reno flows, Tahoe now seems to "steal" some of Reno's bandwidth and gain in efficiency. Moreover, Reno's deficiency at high error rates is even more apparent when Tahoe flows are also present. At 50% error rate, Reno requires 35% more time than Tahoe to complete the transfer.

The fact that Tahoe flows are favored when competing with Reno flows is common to this case and the two-flow case studied in the previous chapter. When an aggressive strategy competes with a conservative one, it is actually the conservative strategy that is favored. Thus, an aggressive algorithm does not necessarily occupy more bandwidth than its fair share. It becomes clear that the words "aggressive" and "conservative" do not refer to the performance that each strategy yields, but to the mechanics of the congestion control algorithm. Our experiments show that in many cases, an aggressive strategy yields more conservative results and vice versa.

[Chart: time to complete transfer (sec) versus error rate (%); lines: Tahoe, Reno]

Figure 11: Average time for three Tahoe and three Reno flows, in two sets of experiments

The yellow line in Figure 10 represents the fairness index for the two cases where both Tahoe and Reno flows are present. Except at the error rate of 20%, this combination yields the poorest performance in terms of fairness. The consistently higher performance achieved by Tahoe at most error rates is reflected in the low value of the fairness index. At 20%, this combination yields a better fairness index than the rest; 20% is also the only error rate at which Reno achieves better performance than Tahoe. The same observation holds for the two-flow case. In Figure 4, we saw that the Tahoe-Reno combination is more fair than both the two Renos and the two Tahoes at 33% error rate, and that is again the only error rate at which Reno outperforms Tahoe in terms of efficiency. It appears from these observations that when Reno is more efficient than Tahoe, the overall fairness of the system is better.

Figure 12 presents the overhead for Reno and Tahoe in the two sets of experiments that involve flows running both protocols. The aggressive algorithm implemented by Reno has indeed an aggressive result up to an error rate of 20%: the additional overhead introduced by Reno is caused by its larger number of packet retransmissions compared with Tahoe. Revisiting the issue of the tradeoff between the amount of sent data and the timeout value (i.e. the aggressive/conservative tradeoff), we see that Reno introduces less overhead than Tahoe at the error rates of 33% and 50%, while being significantly less efficient at the same error rates. Since Tahoe completes its transfer in much less time and sends more data, we conclude that Reno spends a large portion of the communication time being idle, waiting for a timeout to occur. The large timeout value developed by Reno due to its persistence when an error occurs, combined with the high network congestion, renders the protocol unable to identify packet losses in a timely fashion, making it ultimately inefficient.

[Chart: overhead versus error rate (%); lines: Tahoe, Reno]

Figure 12: Average overhead for three Tahoe and three Reno flows in two sets of experiments

Finally, Figure 13 depicts the average throughput of Tahoe and Reno. The throughput graph supports our rather unexpected observation, in both the two-flow and the three-flow cases, that when competing with Reno, Tahoe yields better performance, even though at low error rates with flows of the same protocol, Reno is superior.

[Chart: throughput (Kbps) versus error rate (%); lines: Tahoe, Reno]

Figure 13: Average throughput of three Tahoe and three Reno flows, in two sets of experiments


8 High RTT, two flows

8.1 TCP in a high-RTT environment

Figure 14 depicts the total throughput achieved by a single Tahoe flow and by two coexisting Tahoe flows under high RTT. For the sake of comparison, the throughput achieved by the same protocol combinations in a low-RTT environment is also included. Tahoe's bandwidth utilization is very poor when the RTT of the connection is high. Even in error-free conditions, the two flows utilize roughly 70% of the available bandwidth. A single Tahoe flow yields even lower bandwidth utilization, which does not exceed 60%. Under more persistent error conditions, the achieved throughput degrades further, dropping to 1 Mbps (10% of the available bandwidth).

[Chart: throughput (Kbps) versus error rate (%); lines: Ta-Ta (high), Ta-Ta (low), Tahoe (high), Tahoe (low)]

Figure 14: Throughput achieved by Tahoe with high and low RTT

TCP's inability to efficiently utilize the available bandwidth even in error-free conditions is related to the increased RTT and the relatively short duration of the connection. For TCP to fully exploit the link capacity, the number of in-flight bytes (that is, the size of the window) must be equal to the bandwidth-delay product. A rough calculation of this value for a 10 Mbps link and an RTT of 55 ms gives 550 Kb. With a Maximum Segment Size of 1500 bytes, or 12 Kb (essentially the size of each TCP packet), approximately 45 packets must be in flight. The exponential Slow Start algorithm needs about log2 45 ≈ 5.5 RTTs, or roughly 300 ms, to reach this value. Every time TCP exceeds the network capacity and is forced into Slow Start, it will exponentially expand its window up to the congestion threshold of 45/2 packets, in log2(45/2) ≈ 4.5 RTTs, or roughly 250 ms. Compared to the total connection time of about 7 seconds, 250 ms is a considerable amount of time. Even if TCP enters Slow Start only a few times, the bandwidth utilization is degraded.

The very low bandwidth utilization indicates that the interaction between the two flows will also be limited. However, there is still some real congestion on the network. In order to conclude that no congestion is present and the two flows essentially do not compete, the overall throughput achieved by the two flows would have to be twice the throughput of the single flow. As we see in Figure 14, this does not hold for any of the tested error rates. The fact that there is indeed some competition between the two flows can also be seen in the time graph presented in Figure 15.
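The arithmetic above can be reproduced directly; the sketch below uses the link parameters stated in the text (10 Mbps, 55 ms RTT, 1500-byte segments).

```python
import math

# Sketch: bandwidth-delay product and Slow Start ramp-up time
# for the high-RTT setting discussed above.
bandwidth = 10e6      # link capacity, bits/sec
rtt = 0.055           # round-trip time, seconds
mss_bits = 1500 * 8   # maximum segment size, bits (12 Kb)

bdp_bits = bandwidth * rtt         # ~550 Kb must be in flight
window_pkts = bdp_bits / mss_bits  # ~45 packets

# Slow Start doubles the window once per RTT, so reaching a window of
# W packets takes about log2(W) round trips.
t_full = math.log2(window_pkts) * rtt       # ~0.30 s to reach ~45 packets
t_half = math.log2(window_pkts / 2) * rtt   # ~0.25 s to reach the threshold

print(f"BDP: {bdp_bits / 1e3:.0f} Kb, window: {window_pkts:.0f} packets")
print(f"ramp to full window: {t_full:.2f} s, to threshold: {t_half:.2f} s")
```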

[Chart: time to complete transfer (sec) versus error rate (%); lines: Ta-Ta, Re-Re, Tahoe, Reno]

Figure 15: Time to complete transfer for one and two flows of the same protocol

For the two-flow case of the same protocol, Tahoe and Reno perform equally in terms of efficiency up to an error rate of 20%. Above that, Tahoe appears to be less efficient than Reno, as opposed to what we have seen in both the two-flow and the three-flow low-RTT cases. The reason why Reno is unaffected by the problem of overextending its timeout value is that, as mentioned in the beginning of the chapter, the real congestion in the high-RTT case is much lower than when the RTT is small. Thus, the RTT remains about the same throughout the connection and its standard deviation is small, allowing Reno to quickly readjust its timeout value whenever multiple errors occur.

[Chart: time to complete transfer (sec) versus error rate; lines: Tahoe, Reno]

Figure 16: Time for Tahoe and Reno competing flows

In Figure 16 we present the time for a Tahoe and a Reno flow when they compete. Their performance does not deviate from the previous case, where both flows were running the same protocol. Although in the low-RTT experiments we saw that Tahoe was more efficient when competing with a Reno flow, here both protocols yield roughly the same performance. Based on these experiments, we cannot determine whether the milder network conditions due to the higher RTT are responsible for this. Experiments with a larger number of flows under the same RTT would produce more congestion and reveal whether Tahoe would actually outperform Reno in that case.

8.2 Fairness in a high-RTT environment

The fairness index for all three protocol combinations in the high-RTT case is depicted in Figure 17. For an error rate as low as 1%, the fairness index is below 0.9 in all cases. A fairness index of 0.9 translates to an average of 22% more throughput achieved by one flow during each run. In the corresponding low-RTT experiments, the value of the fairness index at a 1% error rate was 0.95 (one flow achieves 10% more throughput than the other during a run). Generally, for all tested error rates, TCP is considerably more unfair in a high-RTT environment.

There are two major factors degrading TCP's performance in terms of fairness in a high-RTT environment. The first is the duration of the Slow Start phase, as explained in the previous section. When one flow suffers a "bad phase" and shrinks its window, the second flow eventually expands and takes up more bandwidth. So far the scenario is identical to that in the low-RTT environment. However, when the first flow exits the "bad phase," it now needs more time to expand and reclaim its fair share of the bandwidth.

[Chart: fairness index versus error rate (%); lines: Ta-Ta, Re-Re, Ta-Re]

Figure 17: Fairness index for two Tahoes, two Renos, and a Tahoe with a Reno


The second factor is related to the bandwidth utilization. When the bandwidth utilization is low, as in the high-RTT set of experiments examined here, a low fairness index also reflects any differences in the error experienced by the two flows. Since the interaction between the two flows is limited, the protocol cannot be held solely responsible for a low fairness index. This can be seen from the values of the fairness index at the error rates of 33% and 50%. The fairness value of 0.3 exhibited by Tahoe at 50% error rate stands for one flow using more than five times the bandwidth of the other. With the utilized bandwidth being 1 Mbps, we cannot conclude that the difference in throughput is a result of TCP's algorithm being unfair. In such cases, the competing flows affect each other in a limited way, and the low fairness index is mostly a consequence of the randomness of the error.

9 Conclusions

The widespread popularity of wireless networking has recently focused the attention of the research community on a re-evaluation of the traditional congestion control strategy implemented in TCP. In the original concept of TCP's congestion control, packet losses are used as feedback from the network, indicating that congestion is present. This assumption is mostly true for wired networks, but it appears to be false in combined wired/wireless networks, where error not related to congestion causes TCP to yield poor performance. In this context, we conducted experiments using two TCP versions, Tahoe and Reno, implementing a slight variation of the same basic, window-driven algorithm. Our conclusions include:

• The traditional metrics do not suffice to measure the efficiency of the tested protocols. For instance, the throughput achieved by a TCP flow does not capture the protocol efficiency, since it includes any extra overhead introduced by the congestion control strategy. Additionally, the traditional fairness index is very insensitive and does not reflect the system fairness adequately, especially when the experiments involve a small number of flows. For these reasons, we defined a new fairness index and employed newly introduced performance metrics such as goodput and overhead.

• The experimental results demonstrated that when a wireless error is introduced, the AIMD principle implemented in TCP yields poor bandwidth utilization. The achieved throughput drops significantly, although bandwidth is still available on the network.

• With a small number of competing flows, AIMD does not preserve fairness. Even at an error rate as low as 1%, when two TCP flows are present, one flow occupies on average 10% more bandwidth than the other. Fairness appears further degraded when three flows are present, as well as when the RTT increases.

• At high simulated wireless error, in combination with network congestion, Reno's aggressive behavior yields lower performance than Tahoe's conservative one.

• When the two protocols compete, Tahoe appears to be favored over Reno, occupying a greater portion of the available bandwidth.

• In cases where wireless error is present without real network congestion, the aggressive strategy yields better results.




• The tradeoff between the amount of sent data and the ability to maintain a low timeout value is identified as the key factor that determines the relative performance of a conservative versus an aggressive algorithm.

Through our examination of TCP Tahoe and TCP Reno, we saw that neither the conservative nor the aggressive behavior seems adequate for a wireless transport protocol. Instead, the congestion control strategy should adapt between conservative and aggressive, depending on the nature of the error. For example, if the error is of wireless nature, the protocol should not decrease its congestion window or extend its timeout, but rather wait until the link becomes error-free and resume from its previous state. Under congestion-related error, a conservative strategy is more appropriate, so that the network will not be overloaded and new flows will be able to get their fair share of the bandwidth. A mechanism that combines a conservative and an aggressive strategy in this manner has been proposed in [31] with TCP Probing.
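A deliberately simplified sketch of such an adaptive policy is given below. It only illustrates the idea as stated here; it is not the actual mechanism of TCP Probing [31]. The error classifier is assumed to exist (e.g. a probing phase that measures the channel), and all names are hypothetical.

```python
# Sketch: adaptive congestion control reacting differently to wireless
# and congestion losses (illustrative only; not TCP Probing's algorithm).
class AdaptiveSender:
    def __init__(self, cwnd=1, ssthresh=64):
        self.cwnd, self.ssthresh = cwnd, ssthresh

    def on_loss(self, error_is_wireless):
        if error_is_wireless:
            # Wireless error: do not shrink the window or extend the
            # timeout; wait out the bad phase and resume as before.
            pass
        else:
            # Congestion error: back off conservatively so the network
            # drains and new flows can claim their fair share.
            self.ssthresh = max(self.cwnd // 2, 2)
            self.cwnd = 1

    def on_ack(self):
        # Standard window growth: exponential below ssthresh, linear above.
        if self.cwnd < self.ssthresh:
            self.cwnd *= 2
        else:
            self.cwnd += 1

sender = AdaptiveSender()
sender.on_ack()                          # cwnd: 1 -> 2 (Slow Start)
sender.on_loss(error_is_wireless=True)   # cwnd unchanged
sender.on_loss(error_is_wireless=False)  # cwnd reset to 1, ssthresh updated
```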


10 Open issues

In this work, we tested TCP under a few basic scenarios that helped us identify some of the protocol's deficiencies, as well as how an aggressive and a conservative strategy affect its performance. Our goal was to isolate TCP's reactions to changes in environmental parameters such as wireless error, competing flows, and the RTT value. As these parameters changed throughout the experiments, some non-negligible parameters remained constant. More extended testing is needed to determine how TCP performs when these constant parameters change.

10.1 Future work

One alteration of the experimental parameters is the amount of sent data, which determines the duration of the connection. For example, in the low-RTT two-flow case, the connection time at high error rates was above one minute. An issue worth exploring is the performance of Tahoe and Reno for variable connection durations. Another important factor is that in the experiments involving multiple flows, all flows initiated the data transfer simultaneously. Further experimenting is necessary to cover the situation where one flow starts the transfer after another has already had time to expand. A further modification concerns increasing the number of flows. Our experiments showed that with three flows, TCP performs worse in terms of fairness than in the two-flow case. Although this gives us a solid argument that fairness would degrade even more with additional flows, tests with a larger number of flows would be needed to confirm it. Increasing the number of flows would also increase the bandwidth utilization, which is very important to the interaction between the competing flows, especially in the experiments conducted with high RTT. Finally, newly introduced protocols, such as the TCP Probing mechanism mentioned in the previous chapter, must also be tested and evaluated under wireless conditions.

10.2 Energy concerns

Mobile devices with limited power supply have raised one more requirement for a transport protocol: energy conservation. In [33], the authors study the energy expenditure of different TCP versions. Their approach uses the combination of time and byte overhead as an indication of the energy expenditure of a transport protocol.


Recently, researchers have proposed transport protocols that incorporate mechanisms oriented toward energy conservation. In [8], the authors propose switching off the network adapter of the mobile host at times when no data is expected (Selective Idling). Other studies suggest alternative congestion control strategies that use probing mechanisms to discover the network conditions and minimize the effort made by the network adapter when the channel is in a "bad phase" (TCP Probing) [31]. More extensive analyses of energy expenditure can be found in [31, 32, 33, 38].

11 References

1. M. Allman, D. Glover, and L. Sanchez, "Enhancing TCP Over Satellite Channels using Standard Mechanisms," RFC 2488, January 1999.
2. M. Allman, V. Paxson, and W. Stevens, "TCP Congestion Control," RFC 2581, April 1999.
3. A. Bakre and B. Badrinath, "I-TCP: Indirect TCP for Mobile Hosts," in Proceedings of IEEE ICDCS '95, pp. 136-143, 1995.
4. H. Balakrishnan, V. Padmanabhan, and R. Katz, "The Effects of Asymmetry in TCP Performance," in Proceedings of the 3rd ACM/IEEE Mobicom Conference, September 1997.
5. H. Balakrishnan, V. Padmanabhan, S. Seshan, and R. Katz, "A Comparison of Mechanisms for Improving TCP Performance over Wireless Links," ACM/IEEE Transactions on Networking, December 1997.
6. H. Balakrishnan, S. Seshan, E. Amir, and R. Katz, "Improving TCP/IP Performance over Wireless Networks," in Proceedings of the 1st ACM International Conference on Mobile Computing and Networking (Mobicom), November 1995.
7. H. Balakrishnan et al., "Improving Reliable Transport and Handoff Performance in Cellular Wireless Networks," Wireless Networks, 1995.
8. I. Batsiolas and I. Nikolaidis, "Selective Idling: An Experiment in Transport-Layer Power-Efficient Protocol Implementation," in Proceedings of the International Conference on Internet Computing, June 2000.
9. R. T. Braden, "T/TCP - TCP Extensions for Transactions, Functional Specification," RFC 1644, July 1994.
10. L. Brakmo and L. Peterson, "TCP Vegas: End to End Congestion Avoidance on a Global Internet," IEEE Journal on Selected Areas in Communications, October 1995.
11. K. Brown and S. Singh, "M-TCP: TCP for Mobile Cellular Networks," ACM SIGCOMM Computer Communication Review, pp. 19-43, 1997.
12. D. Chiu and R. Jain, "Analysis of the Increase and Decrease Algorithms for Congestion Avoidance in Computer Networks," Computer Networks and ISDN Systems, vol. 17, pp. 1-14, 1989.
13. M. Elaoud and P. Ramanathan, "TCP-SMART: A Technique for Improving TCP Performance in a Spotty Wide Band Environment," in Proceedings of IEEE ICC 2000, 2000.
14. S. Floyd and T. Henderson, "The NewReno Modification to TCP's Fast Recovery Algorithm," RFC 2582, April 1999.
15. S. Floyd and V. Jacobson, "Random Early Detection Gateways for Congestion Avoidance," IEEE/ACM Transactions on Networking, vol. 1, pp. 397-413, August 1993.
16. T. Goff, J. Moronski, and D. Phatak, "Freeze-TCP: A True End-to-End TCP Enhancement Mechanism for Mobile Environments," in Proceedings of IEEE INFOCOM, Israel, 2000.
17. S. Gorinsky and H. Vin, "Additive Increase Appears Inferior," Technical Report, University of Texas at Austin, 2000.
18. G. Hasegawa, K. Kurata, and M. Murata, "Analysis and Improvement of Fairness between TCP Reno and Vegas for Deployment of TCP Vegas to the Internet," in Proceedings of the IEEE International Conference on Network Protocols, November 2000.
19. T. Henderson and R. Katz, "Transport Protocols for Internet-Compatible Satellite Networks," IEEE Journal on Selected Areas in Communications, 1999.
20. V. Jacobson, "Congestion Avoidance and Control," in Proceedings of ACM SIGCOMM '88, August 1988.
21. C. Koksal, H. Kassab, and H. Balakrishnan, "An Analysis of Short-Term Fairness in Wireless Media Access Protocols," in Proceedings of ACM SIGMETRICS, June 2000.
22. A. Kumar, "Comparative Performance Analysis of Versions of TCP in a Local Network with a Lossy Link," ACM/IEEE Transactions on Networking, August 1998.
23. A. Lahanas, D. Vardalis, and V. Tsaoussidis, "On the Performance of Reliable Transport Protocols over Wide Area Networks," in Proceedings of the International Conference on Internet Computing (IC 2000), CSREA Press, Las Vegas, Nevada, June 2000.
24. M. A. Marsan and M. Gerla, "Fairness in Local Computing Networks," in Proceedings of IEEE ICC '82, June 1982.
25. M. Mathis, J. Mahdavi, S. Floyd, and A. Romanow, "TCP Selective Acknowledgment Options," RFC 2018, April 1996.
26. I. Matta and L. Guo, "Differentiated Predictive Fair Service for TCP Flows," in Proceedings of the IEEE International Conference on Network Protocols, November 2000.
27. R. Morris, "TCP Behavior with Many Flows," in Proceedings of IEEE ICNP, Atlanta, October 1997.
28. J. Postel, "Transmission Control Protocol," RFC 793, September 1981.
29. K. Ramakrishnan and S. Floyd, "A Proposal to Add Explicit Congestion Notification (ECN) to IP," RFC 2481, January 1999.
30. K. Ratnam and I. Matta, "WTCP: An Efficient Mechanism for Improving TCP Performance over Wireless Links," in Proceedings of the 3rd IEEE Symposium on Computers and Communications (ISCC '98), June 1998.
31. V. Tsaoussidis and H. Badr, "TCP-Probing: Towards an Error Control Schema with Energy and Throughput Performance Gains," in Proceedings of the 8th IEEE Conference on Network Protocols (ICNP 2000), Osaka, Japan, November 2000.
32. V. Tsaoussidis, H. Badr, and R. Verma, "Wave and Wait: An Energy-Saving Transport Protocol for Mobile IP-Devices," in Proceedings of IEEE ICNP '99, Toronto, October 1999.
33. V. Tsaoussidis, H. Badr, G. Xin, and K. Pentikousis, "Energy/Throughput Tradeoffs of TCP Error Control Strategies," in Proceedings of the 5th IEEE Symposium on Computers and Communications (ISCC), 2000.
34. D. Vardalis and V. Tsaoussidis, "On the Efficiency/Fairness of Protocol Recovery Strategies in Networks with Wireless Components," to appear in Proceedings of the International Conference on Internet Computing (IC 2001), Las Vegas, Nevada.
35. The x-kernel: www.cs.arizona.edu/xkernel.
36. G. Xylomenos and G. Polyzos, "TCP and UDP Performance over a Wireless LAN," in Proceedings of IEEE INFOCOM, 1999.
37. Y. Yang and S. Lam, "General AIMD Congestion Control," in Proceedings of the IEEE International Conference on Network Protocols, November 2000.
38. M. Zorzi and R. Rao, "Energy Efficiency of TCP," in Proceedings of MoMUC '99, San Diego, California, 1999.
