University of Leeds
SCHOOL OF COMPUTER STUDIES RESEARCH REPORT SERIES
Report 97.35

Proposed Modifications to TCP Congestion Control for High Bandwidth and Local Area Networks

by Rik Wade, Mourad Kara, Peter Dew
[email protected]

June 1997
Abstract

In this report, we propose extensions to TCP congestion control which, on a congested network, result in significantly better use of available bandwidth by eliminating the requirement for a slow start with each TCP restart. Mechanisms are implemented which enable the sending TCP to restart its data flow at a suitable level for the current connection. A fallback mode is provided to prevent the source from overloading intervening routers should congestion be sufficiently high. Rigorous testing of the new algorithms was undertaken using the REAL network simulator and various benchmark scenarios. In addition to the benchmark scenarios, further models were developed in order to simulate real-world situations. Over the suite of tests, our modifications showed on average a 20-30% speed increase over REAL's standard TCP-RENO protocol (which is based on BSD's TCP-RENO), with some sources showing up to a 100% improvement. In the worst-case scenarios, the modified TCP functioned at least as well as TCP-RENO.
Contents

1  Introduction                                                   1
2  Proposed Congestion Control Extensions                         3
3  Simulation of the Proposed Extensions                          6
   3.1  Window Recalculation                                      7
   3.2  Restart ACK Level                                         9
   3.3  Benchmarking                                             10
        3.3.1  Experiment 1 (exp1.l)                             11
        3.3.2  Experiment 2 (exp7.9.l)                           14
        3.3.3  Equilibrium of TCP Connections and Combined Traffic  17
4  Evaluation of Experiment Results                              20
5  Related Work                                                  23
6  Conclusions and Future Directions                             24
7  Acknowledgements                                              25
8  Appendix A                                                    28
9  Appendix B                                                    28
10 Appendix C                                                    28
11 Appendix D                                                    29
   11.1 REAL Data Output Tables                                  29
1 Introduction

The Internet's Transmission Control Protocol (TCP) is the most widely used transport protocol in the world today. It is used to complement the Internet Protocol (IP) addressing scheme, which is simply a "datagram" protocol [17]. TCP provides a reliable transport layer to ensure integrity and delivery of network data. It does this through a complex interaction between many different algorithms and data structures [22].

TCP maintains a congestion window which indicates the number of (TCP) segments that may be safely dispatched onto the current network connection. However, if this window should exceed the available buffer space on the receiving machine then the lower of these two values will be used for transmission. In addition to this, there is a slow start threshold [8] which is initialised to 65535 bytes by the system [22]. This value may be altered by TCP according to the current state of network congestion. When initiating transmission, TCP will set its congestion window to a size of 1 segment. This value is increased exponentially up to the slow start threshold. Once at this threshold, the window is incremented one segment at a time to "cautiously" expand its transmission potential.

Each packet of data that is sent has its own sequence number. The basic premise is that upon receipt, the receiver will acknowledge this data by returning an appropriately numbered acknowledgement (ACK) packet. If data is lost (i.e. an ACK is not received) then TCP may retransmit the missing data. TCP is "self clocking" in that it expands its congestion window only when an ACK packet is received. This indicates that data is still getting through to the receiver and prevents the sender from flooding the network with a quickly increasing data stream. However, the advent of Short/Long Fat Networks ([S,L]FNs) and multimedia/WWW traffic have created different requirements for TCP congestion and window control to those originally implemented [7].
Recent TCP extensions such as Window Scaling [7] and Fast Retransmit/Recovery [21] have done much to cope with increasingly reliable, high-bandwidth, fibre-based communications. However, issues related to TCP's sometimes inappropriate reaction to network congestion have not been addressed [7]. In particular, reference has been made to TCP's slow start algorithm, which is deployed when recovering from a transmission timeout or excessive network congestion. In [7], it is noted that over certain networks, an algorithm which allows faster recovery would be desirable. This would allow the sender to expand its congestion window more quickly, thus taking advantage of the available bandwidth. However, it is important to ensure that if the slow start was incurred due to excessive network congestion, the more aggressive algorithm should not endanger network stability by flooding packets onto the link [8].

For example, in the case of ATM, it may be that a TCP connection is running over a dedicated 25Mbps link. A desirable transport protocol may be one that is able to react quickly to any changes in congestion window size. Current implementations of TCP are not able to do this due to the enforced "slow start" upon network timeout or packet loss; i.e. if the sending TCP does not receive an acknowledgement for its sent data within a given time period then it may "time out" and assume extreme
network congestion. If a modified protocol were used that could react more quickly to such congestion then better use could be made of available bandwidth, thus giving a higher overall throughput over a given time period. After all, TCP packet loss over ATM may be due to a single cell being lost in a congested buffer.

Such modifications to the slow start algorithm should be of great benefit to ATM and satellite networks which, having different protocol requirements to those of the Internet [13], could react favourably to a more aggressive approach to temporary congestion. For example, it should be unlikely for a well-managed ATM network to experience high volumes of unanticipated congestion and have to degrade the available bandwidth to its existing connections. It is more likely that an intermediate switch, or the remote host, entered a temporary state where it was unable to accept further data. This may be due to a sudden increase in network load, which should be controlled by ATM's natural congestion and flow control (bit rate control and Quality of Service (QoS) [2] [23] enforcement). In any case, the previous QoS should be restored within a very small period of time. However, in the case of normal TCP-based congestion (i.e. the addition of further TCP flows on the connection), normal congestion avoidance algorithms should be utilised to free bandwidth for the extra traffic.

This idea of a "dedicated connection" would imply that perceived extreme congestion or network timeouts should be transient in nature and not be due to competing traffic on the link. Intermediate switches may become loaded due to the number of connections that they are handling, but should quickly stabilise as the data streams are throttled by their TCPs or other congestion control mechanisms. Similarly, end hosts are likely to experience temporary congestion in the form of extra processes, memory demands or input/output bus congestion.
Extra loading of this type is likely to be bursty and should not therefore affect the overall performance of a TCP connection. If this is not the case, due to persistent switch, router or end-host buffer congestion, then the sending TCP should react accordingly. However, in normal circumstances, a desirable action would be for the TCP to recover from its slow start state as quickly as possible and resume data transfer at the maximum possible rate.

In more general circumstances, when communicating over the Internet, for example, this approach to congestion control may be perceived as aggressive and detrimental to the network's stability [21]. Network congestion in this case is most likely to be due to the sheer load of traffic on either the end hosts or intermediate routers (or both!). A host which sends bursts of data, irrespective of network congestion information [21], is likely to cause problems for other users in addition to the network itself [8]. In such a situation, an "intelligent" TCP would be able to distinguish between transient and incremental network loading [7].

The aim of this paper is to outline proposed modifications to TCP's congestion control mechanisms which enable more efficient use of available bandwidth on a link. These are described in section two and are followed in section three by simulation experiments performed using the REAL (packet) network simulation package [14]. Section four examines and discusses the outcome of these experiments and section five relates them to other work in this area. Conclusions and proposed future work are covered in the sixth and final section.
2 Proposed Congestion Control Extensions

As detailed in [8] [21], TCP currently uses the following algorithm when reacting to excessive link congestion or transmission timeout. A congestion window (cwnd) and a slow start threshold (ssthresh) are maintained for each TCP connection. If a timeout occurs, or duplicate ACKs are received, the cwnd is halved and copied into the ssthresh. If congestion was due to a transmission timeout, then the cwnd is set to one (segment) and a slow start ensues. Exponential window scaling will take place until the new (lower) ssthresh is reached. Linear congestion avoidance is then used.

The following formula describes the number of round trips required to recover from a complete slow start. The recovery period is the time from the cold start to the time at which the window is re-established at the pre-restart size.
w = pre-restart congestion window (cwnd)
t = slow start threshold (ssthresh)

    RoundTrips = log2(t) + (w - t)    if w > t
                 log2(w)              if w <= t

In the modified TCP, the restart decision is implemented as follows:

    if ((fp->rst_counter >= RST_LIMIT) && (fp->ack_counter < ACK_LIMIT)) {
        /* FULL Restart (complete slow start) */
        fp->cur_window = win = 1;               /* reset window */
        fp->rst_counter = fp->ack_counter = 0;  /* reset counters */
    } else {
        /* PARTIAL Restart (use congestion window average) */
        fp->cur_window = win = fp->avg_window;  /* partial restart */
    }
or, put simply:

    if ('R' restarts occur before 'A' acknowledgement packets are received)
    then
        do a complete slow start by setting the cwnd to one segment
    else
        set cur_window to avg_window and attempt reconnection
This "fallback" facility provides extra functionality to the modified congestion mechanism which protects against overly aggressive packet transmission. This was the original purpose of the slow start function [8]. The proposed alterations to TCP would mean that:
a = congestion window (cwnd) average for this connection

    RoundTrips = w - a                if a >= t
                 log2(t/a) + (w - t)  if a < t
For example, with a window size of 128kB (using TCP window scaling) and a threshold of 64kB, a restart under normal circumstances would incur log2(64) + (128 - 64) = 6 + 64 = 70 round trips in order to reinstate the window size. However, for the modified TCP with a link average of 127kB, this would be reduced to 128 - 127 = 1 round trip. This assumes that the network timeout was due to temporary congestion which has now been alleviated. Provided that the remote host can provide an acknowledgement to transmitted segments, normal congestion control in the form of congestion window manipulation can take place. Alternatively, if congestion is still present and the connection continues to time out then a full restart will take place as above.

This can be seen in the following figures. Figure 1 shows a TCP Reno connection which, towards the end of its lifetime, experiences a restart due to link congestion. Firstly, normal congestion avoidance is enforced as the connection's window size is halved. However, the congestion is still too great and a slow start is enforced as the window size shrinks to a single segment.
Figure 1: TCP Reno Congestion Window Activity. [Plot of window size (0-250) against time (0-1200), from "reno.dat".]
In figure 2, the Reno connection is replaced by one with the proposed modifications. Here, it is evident that despite the fact that normal congestion avoidance takes place, a full restart (slow start) is not required. During this simulation a restart is still signalled at the same point as before; however, due to the new congestion control algorithm, the connection is restarted at the previous link average.
Figure 2: Modified TCP Congestion Window Activity. [Plot of window size (0-250) against time (0-1200), from "mod.dat".]
3 Simulation of the Proposed Extensions

The basic TCP used for our simulation experiments was the TCP RENO implementation in the REAL packet network simulation package 3. Development and experiments in REAL were run under the Linux operating system. After extension and recompilation of the simulation libraries to include the updated TCP, experiments were run using both RENO and the modified protocols. Detailed logging information was produced which detailed the changing window (and average window) sizes in addition to the occurrence of simulated network timeouts. This allowed a detailed analysis of the results and worked well in conjunction with the standard REAL output files, which include fine-grained and summary throughput and queueing information for each network node.

In order to demonstrate the effectiveness of our modified TCP congestion control, many simulation scenarios were run, including REAL's example benchmarks. Through comparison of the resulting data, it was evident that these scenarios provided the best illustration of the new congestion mechanisms, even when compared with custom scenarios which had been used during the initial development phase. During this period, tests were run to decide on appropriate values for various variables in the prototype model. In particular, attention was given to the algorithm for congestion window recalculation and the sensitivity of the TCP's "fallback" mode.

The majority of the following experiment network scenarios were selected from REAL's suite of example benchmark network configurations. It was felt that these scenarios provide very good examples both of interaction between TCP sources and competition between aggressive Poisson and flexible TCP data flows [14] [15]. Experiment 1 (section 3.3.1) utilises a network based mainly on TCP with a single background flow to bottleneck the traffic, which simulates an incrementally loaded network with varying traffic.
Experiment 2 (section 3.3.2) is performed using mainly Poisson sources with a single TCP flow for measurement. This simulation provides information on the way that the modified TCP reacts when competing with non-responsive traffic such as UDP.

The developmental suite of experiments was conducted as follows:

1. Window recalculation
These experiments used congested links and compared RENO with the modified TCP in order to discover the most effective formula for re-calculating the connection's average window size (alpha, beta).

2. Restart ACK Level
In the new algorithm, an ACK level is maintained in order to allow a full restart if heavy congestion were to occur. The algorithm states that over the first 'R' RSTs after a "warm" restart, the sender should receive 'A' ACKs for its data.
3 The REAL network simulation package is available via the World Wide Web from http://www.cs.cornell.edu/Info/People/skeshav/ and further information is available in [15]
For example, if a "warm" restart were to occur then counters are initialised which mean that if the sender receives fewer than 'A' ACKs before it receives a further 'R' RST packets, then a "cold" (full) restart should be initiated (setting the window size to 1). Through experimentation, a guideline value was selected to initiate a cold start under stressful situations while continuing to warm start under the majority of timeouts. In these experiments, a heavily congested network scenario was developed and the ACK level altered until the modified TCP outperformed TCP RENO.

3. Benchmarking
The new TCP was then tested against the example benchmarking scenarios, where it performed at least as well as TCP RENO. In the majority of cases, the improvement was in the region of 25%. The resulting data was then validated by running the experiments with different random simulator seeds.

Further experiments were run to examine the interaction between the modified TCP and TCP Reno. This is valuable if the two TCPs were to have to share network bandwidth. For example, a dominant TCP may squash other streams, thus denying them a fair share of the available bandwidth.
3.1 Window Recalculation
In this suite of experiments, several of REAL's benchmark scenarios were used to simulate both TCP RENO and modified TCP behaviour. Several runs were conducted using the latter protocol, each time with differing ratios applied to the window recalculation algorithm. The ratios in table 1 were selected in order to represent the spread of possible values.
Table 1: Alpha and Beta Values for Window Recalculation

    alpha   beta
    0.9     0.1
    0.5     0.5
    0.1     0.9

alpha is the weighting allocated to the current window size and beta is the weighting of the existing average window. These selections give the TCP flow a gradually increasing (from top to bottom) "memory" for the averaging mechanism. The "memory" for a TCP defines the influence that past network averages have on the current avg window value. For example, a weighting of 0.9 (alpha), 0.1 (beta) would give the average a very short "memory" as it would be highly influenced by recent events (alpha). Conversely, a weighting of 0.1 (alpha), 0.9 (beta) would give a longer "memory" to the flow as it would be more influenced by distant events (beta) when recalculating the avg window.

Figure 3: Performance of TCP Reno vs Modified TCP with the Proposed Window Recalculation Values

Through the series of test runs 4, it was apparent that the most successful algorithm was one which adhered closely to the average window, thus using a "long memory" as opposed to calculating the average over a short time period. This result was obtained after experimentation with high congestion scenarios such as exp7.9 (used in experiment 2 and documented in appendix C), where a TCP source has to contend with other non-responsive sources such as Poisson and On-Off data flows. Such flows effectively 'throttle' the TCP flow and impose a limit on the available bandwidth at any one time. Therefore, the performance of the TCP flow is purely a measurement of how it reacts to sudden changes in bandwidth availability. In such circumstances, the less aggressive average recalculation means that a source will experience fewer timeout and transmission errors than if it adopts a stronger window expansion algorithm. For example, for a given connection during the experiment, the weightings gave the overall mean throughput figures shown in table 2.

4 Graphical results from running experiment 1 (see section 3.3.1) with the proposed weightings are shown above.
Table 2: Alpha and Beta Value Performance (alpha:beta, packets)

    TCP Reno   0.9:0.1   0.5:0.5   0.1:0.9
    117.93     132.47    132.47    146.73

The majority of other TCP connections in the experiment showed similar improvements in performance. This is due to the increased usage of available bandwidth that the modified TCP provides. Those that did not show an improvement were those which may have sacrificed bandwidth in favour of a more "dominant" connection. In these cases, however, only a slight degradation in performance was noted. REAL benchmark scenarios were used as network models for testing the various weightings. See Appendix A for details of experiment results.
3.2 Restart ACK Level
After a timeout has been received, the modified TCP enters what may be termed a "critical" phase. During this phase, counters are maintained for the receipt of both acknowledgement and restart packets. If, during this period, 'R' restart packets are received before 'A' acknowledgements, then a full (slow) restart is enforced. This means that the modified TCP will not be overly aggressive in its timeout policy. However, if an appropriate number of ACK packets are received during this period then the counters are reset and a normal state of operation is resumed.

In order to define a guideline value for this "ACK level", the benchmark scenarios (experiments 1 and 4) were simulated along with one custom TCP and Poisson model (see appendix C). The "ACK level" variable was varied and compared with both other modified results and those of TCP Reno. Initially, a validation of the new protocol was performed by setting the ACK level variables to impose a full restart on every receipt of a restart packet. It was anticipated that this would impose the same restart policy as that of TCP Reno. Upon comparison of the simulation results this was confirmed to be the case: the resulting transfer statistics were identical between the test runs. (Sample experiment data can be found in Appendix B.)

Simulation runs were then conducted using ACK levels of between 1 and 100. These were found to show the most variation in throughput at the network speeds that we were using for experimentation. Despite only minor variations in the overall usage of available bandwidth with the different ACK level values, it was noted that with levels of around 50 ACKs, the usage by modified TCP sources was marginally higher. For example, when run on the first experiment's scenario, no apparent improvement in performance was obtained by varying the ACK level. However, in a modified version of this scenario, using identical connection speeds but with
additional load placed on the network, variations in the overall bandwidth usage were observed.
Table 3: ACK Level Comparison

    TCP Source   Reno     ACK 1    ACK 50   ACK 100
    1            226.08   226.82   236.19   230.79
    2            225.63   219.69   229.98   224.76
    3            226.20   219.50   231.13   225.59
    avg          225.97   222.00   232.43   227.04
In the above simulation, the modified TCP was required to make three restarts before being allowed to fall back into a slow start. As a side effect, ACK 1 has a lower mean throughput when compared with the TCP Reno sources. If this variable is reduced to one restart then the source behaves identically to TCP Reno. In further experimentation, the value of 50 was selected for the ACK level as it showed itself to be an appropriate level for our simulation scenarios.

It is anticipated that an active mechanism will be built into future modifications. This will allow appropriate real-time adjustment of this variable according to round trip times and link bandwidth. For example, a high bandwidth local area network will require a higher ACK level than that of a dial-up connection. This is due to the number of ACK packets that should be expected to arrive within a given time period.

An area for future work may be to examine the relationship between the modified TCP's ACK level and the weightings used for window recalculation. In a situation where a short memory is used for recalculation, it may be that less resistance is required from the slow start fallback provided by the ACK level mechanism. If a shorter memory were to yield a higher avg window size, then a low ACK level (i.e. an ACK level requiring few acknowledgement packets before a slow start is granted) could be detrimental to the data flow. In this situation, a higher level could be used to more easily trigger a slow start: more acknowledgement packets would be required before a restart at the avg window size was granted. Furthermore, the number of restarts required before a slow start is granted could be reduced.
3.3 Benchmarking
The REAL simulator includes several example scenarios which are appropriate benchmarking environments for protocol models [14] [15]. In these scenarios, different mixtures of flow-controlled, Poisson and on-off sources are brought together. However, many of these network models are variations on several main themes and as such have not been greatly utilised in our experimentation. For the purposes of our benchmarking, we ran many of the example scenarios with both TCP Reno and the modified TCP. This provided us with detailed output of their behaviour under these often taxing scenarios. In addition to the standard
output, debugging information concerning the TCP restarts and window sizing was written to log files for further examination. An explanation of the REAL output tables can be found in Appendix D.
3.3.1 Experiment 1 (exp1.l)

Both of the simulator configurations used in this paper are based on a similar network topology: several sources connected to a single router which is reliant on a bottleneck connection. This bottleneck is used to link the two backbone routers and has the side effect of highlighting interesting characteristics of the transport protocols used over the network's connections. A further router is used to distribute the sources to their appropriate destinations (sinks).

In order to examine the facets of a particular protocol, it is useful to provide it with a dedicated sink which will be unaffected by other sources. This means that the only disruption of this source will take place en route at the routers and on the bottleneck connection. Other sources are used to load the intervening hardware, but not the destination buffer.

In the first benchmark experiment (table 4), the TCP sources [1,5..14] were started at 44 microsecond 5 intervals after time period 100. Source 16 is a constant background on-off source. All sources except number 1 are sending to sink 15. Source number 1 sends to sink 4 6.
Figure 4: Experiment 1 (exp1.l) Network Topology. [TCP sources 1, 5..14 and background source 16 attach to router 2 over links of bandwidth 200,000 and latency 1,000,000; routers 2 and 3 are joined by a bottleneck link of bandwidth 40,000 and latency 20,000,000; router 3 connects to sinks 4 and 15 over links of bandwidth 200,000 and latency 1,000,000.]

5 REAL measures bandwidth in bits/sec and latency in microseconds.
6 Full details of the REAL network models can be found in the standard REAL distribution.
Figure 5: TCP Reno vs Modified TCP Performance in Experiment 1

TCP Reno
Nd   T'put(mean,sd)     Q'ing(mean,sd)   RTT(mean,sd)    Drop(mean,sd)   Retx(mean,sd)
1    (117.93  116.74)   (17.47  29.31)   (31.94  40.49)  (17.33  41.82)  (0.20  0.54)
5    ( 70.93   57.53)   (16.44  29.06)   (30.30  38.35)  (17.93  42.59)  (0.13  0.50)
6    ( 58.53   50.33)   (14.67  25.46)   (27.56  32.53)  (18.13  42.75)  (0.13  0.50)
7    ( 47.27   43.70)   (12.78  26.11)   (33.90  26.54)  (13.13  34.02)  (0.73  1.57)
8    ( 48.47   45.40)   (10.50  21.41)   (20.78  26.51)  (14.60  41.10)  (0.13  0.50)
9    ( 36.73   40.20)   (15.01  31.08)   (31.23  27.76)  ( 9.27  25.01)  (0.47  0.81)
10   ( 30.20   39.50)   (12.55  28.80)   (23.97  26.86)  ( 9.73  29.76)  (0.60  1.14)
11   ( 82.80  105.05)   (12.99  24.87)   (31.27  33.21)  (12.13  32.15)  (0.13  0.50)
12   ( 82.47  101.73)   ( 9.59  19.61)   (31.68  28.67)  ( 9.47  24.18)  (0.13  0.50)
13   ( 32.00   45.74)   ( 6.87  14.98)   (23.04  25.47)  ( 5.07  14.18)  (0.20  0.54)
14   ( 38.27   55.50)   ( 6.67  12.47)   (24.65  28.39)  ( 1.60   5.99)  (0.20  0.54)
16   (  5.00    0.00)   ( 0.08   0.10)   ( 0.00   0.00)  ( 0.00   0.00)  (0.00  0.00)

Modified TCP
Nd   T'put(mean,sd)     Q'ing(mean,sd)   RTT(mean,sd)    Drop(mean,sd)   Retx(mean,sd)
1    (146.73  121.92)   (18.19  28.93)   (35.15  39.73)  (17.33  41.82)  (0.20  0.54)
5    ( 70.93   57.53)   (16.44  29.06)   (30.30  38.35)  (17.93  42.59)  (0.13  0.50)
6    ( 58.47   50.36)   (14.67  25.46)   (27.56  32.53)  (18.13  42.75)  (0.13  0.50)
7    ( 84.73   84.44)   (21.95  32.03)   (37.59  29.05)  (13.87  33.85)  (0.73  1.18)
8    ( 48.47   45.41)   (10.52  21.40)   (20.78  26.51)  (14.60  41.10)  (0.13  0.50)
9    ( 57.20   65.57)   (20.31  31.22)   (32.65  33.82)  ( 9.27  25.01)  (0.60  1.14)
10   ( 55.60   60.54)   (16.80  28.78)   (24.76  27.61)  (10.00  29.69)  (0.53  1.09)
11   ( 78.13   70.35)   (22.90  26.33)   (43.47  40.55)  (12.93  31.99)  (0.27  0.68)
12   ( 78.33   72.77)   (18.06  21.91)   (43.19  35.49)  ( 9.53  24.15)  (0.27  0.68)
13   ( 56.80   78.81)   ( 7.39  14.82)   (24.00  26.38)  ( 5.07  14.18)  (0.20  0.54)
14   ( 63.00   76.03)   ( 7.67  13.17)   (25.09  28.72)  ( 1.60   5.99)  (0.20  0.54)
16   (  5.00    0.00)   ( 0.08   0.10)   ( 0.00   0.00)  ( 0.00   0.00)  (0.00  0.00)
Table 4: TCP Reno vs Modified TCP Performance Statistics

    Source   Difference ((Modified - Reno)/Modified)
    1        +20%
    5        equal
    6        -0.1%
    7        +44%
    8        +2%
    9        +36%
    10       +46%
    11       -6%
    12       -5%
    13       +44%
    14       +39%
    16       equal
    Avg      +18%

The longest running source (source one) shows a 20% increase in performance. This source uses a dedicated sink for its traffic, which implies that any loss or gain in throughput is due to congestion on intermediate links and routers. Therefore, this shows that when provided with sufficient buffer space at the receiver, the modified TCP displays significant performance increases even with a congested intermediate network. This indicates that the modified TCP was able to recover quickly from cross-traffic congestion and, due to the dedicated buffer space at the receiver, was able to resume maximum transfer more quickly than its counterpart.

On other connections on shared portions of the network, however, the modified TCP almost doubled the throughput when compared with TCP Reno. For source seven, a 44% increase in performance was evident on a connection which had both heavily congested intermediate and end nodes. This illustrates that the congestion control extensions provide much better usage of available bandwidth under heavy load. Over this simulation, TCP Reno received six congestion/timeout restarts, the modified TCP seven. In this case, the additional restart occurred due to the intermediate load caused by more efficient bandwidth utilisation.

On average, the modified TCP streams performed much better under these conditions than those of the unmodified version. There were three modified sources which had a slightly lower throughput compared with those under Reno. This was probably due to the expansion of their congestion window being limited by the already utilised bandwidth of the other TCP connections. These sources did not time out during the experiment, so indications suggest that the additional congestion was not sufficient to "crush" these sources.
3.3.2 Experiment 2 (exp7.9.l)

This network model is based on the same topology as that of the first experiment, with all of the intermediate TCP sources being replaced with non-reactive Poisson streams. In terms of the simulation, a Poisson source will effectively bottleneck the link as it does not react or throttle its flow based on other network traffic; it simply sends at the maximum possible rate according to its peak/average parameters. Therefore, a TCP source may find it difficult to share a link with such traffic due to the aggressive nature of the Poisson model. It is likely that a TCP source under such circumstances may find itself in a constant state of restart/congestion control and as such have a wildly fluctuating window size. In this type of situation, an aggressive TCP such as the modified source shows one of its advantages over less reactive algorithms.
Figure 6: Experiment 2 (exp2.l) Network Topology. [TCP source 1, Poisson sources 5..14 and background source 16 attach to router 2 over links of bandwidth 200,000 and latency 1,000,000; routers 2 and 3 are joined by a bottleneck link of bandwidth 40,000 and latency 20,000,000; router 3 connects to sinks 4 and 15 over links of bandwidth 200,000 and latency 1,000,000.]
In order to validate our congestion control algorithms on different network infrastructures, the aforementioned scenario was run with varying latency values. Firstly, the experiment was run as per the standard benchmark scenario with the latency values as depicted in the preceding topology. However, in order to simulate a low latency network such as those based on fibre (as is ATM in most cases), a second experiment was run where the latency values were reduced by a factor of 1000. This made the latency values (including routing) 1 and 20 milliseconds respectively, which is more applicable to modern network technologies (albeit with a high latency bottleneck link such as a combined ATM/satellite network).
Figure 7: TCP Reno vs Modified TCP Performance in Experiment 2

TCP Reno
Nd  T'put(mean,sd)    Q'ing(mean,sd)    RTT(mean,sd)      Drop(mean,sd)  Retx(mean,sd)
1   (152.67  78.94)   (117.13 114.05)   (145.83 127.05)   (3.17 10.51)   (0.21 0.58)
5   (178.96  11.67)   (5.68    5.30)    (0.00 0.00)       (0.00 0.00)    (0.00 0.00)
6   (177.33   9.91)   (4.98    4.73)    (0.00 0.00)       (0.00 0.00)    (0.00 0.00)
7   (177.88   9.68)   (6.63    7.86)    (0.00 0.00)       (0.00 0.00)    (0.00 0.00)
8   (177.62  10.93)   (5.72    5.05)    (0.00 0.00)       (0.00 0.00)    (0.00 0.00)
9   (182.38   8.73)   (11.60  13.22)    (0.00 0.00)       (0.00 0.00)    (0.00 0.00)
10  (179.42  13.40)   (6.84    6.32)    (0.00 0.00)       (0.00 0.00)    (0.00 0.00)
11  (183.29  12.47)   (12.27  12.98)    (0.00 0.00)       (0.00 0.00)    (0.00 0.00)
12  (185.54  18.35)   (20.90  25.07)    (0.00 0.00)       (0.00 0.00)    (0.00 0.00)
13  (179.04  11.17)   (6.08    5.93)    (0.00 0.00)       (0.00 0.00)    (0.00 0.00)
14  (177.71  13.63)   (5.49    6.06)    (0.00 0.00)       (0.00 0.00)    (0.00 0.00)
16  (10.00    0.00)   (0.43    0.19)    (0.00 0.00)       (0.00 0.00)    (0.00 0.00)

Modified TCP
Nd  T'put(mean,sd)    Q'ing(mean,sd)    RTT(mean,sd)      Drop(mean,sd)  Retx(mean,sd)
1   (176.54  61.15)   (220.27  88.27)   (244.96 111.29)   (1.88 8.99)    (0.08 0.40)
5   (178.58   9.37)   (5.46    4.39)    (0.00 0.00)       (0.00 0.00)    (0.00 0.00)
6   (175.92  12.19)   (4.51    3.16)    (0.00 0.00)       (0.00 0.00)    (0.00 0.00)
7   (179.38   8.33)   (5.38    4.11)    (0.00 0.00)       (0.00 0.00)    (0.00 0.00)
8   (179.96  12.78)   (11.07  13.66)    (0.00 0.00)       (0.00 0.00)    (0.00 0.00)
9   (180.71  11.58)   (6.58    5.94)    (0.00 0.00)       (0.00 0.00)    (0.00 0.00)
10  (178.75  10.90)   (6.54    6.89)    (0.00 0.00)       (0.00 0.00)    (0.00 0.00)
11  (179.42  11.55)   (5.39    3.97)    (0.00 0.00)       (0.00 0.00)    (0.00 0.00)
12  (179.04  12.34)   (5.46    4.12)    (0.00 0.00)       (0.00 0.00)    (0.00 0.00)
13  (183.79  11.08)   (10.63   8.94)    (0.00 0.00)       (0.00 0.00)    (0.00 0.00)
14  (184.75  12.96)   (9.48    7.28)    (0.00 0.00)       (0.00 0.00)    (0.00 0.00)
16  (10.00    0.00)   (0.53    0.17)    (0.00 0.00)       (0.00 0.00)    (0.00 0.00)
Figure 8: TCP Reno vs Modified TCP Performance in Experiment 2 with low RTT

TCP Reno
Nd  T'put(mean,sd)    Q'ing(mean,sd)    RTT(mean,sd)      Drop(mean,sd)  Retx(mean,sd)
1   (155.29  65.63)   (82.66  86.53)    (65.14 77.68)     (0.33 1.60)    (0.21 0.71)
5   (177.58   8.87)   (9.05    8.45)    (0.00 0.00)       (0.00 0.00)    (0.00 0.00)
6   (178.42  13.35)   (11.34  10.45)    (0.00 0.00)       (0.00 0.00)    (0.00 0.00)
7   (182.33   7.24)   (7.83    4.97)    (0.00 0.00)       (0.00 0.00)    (0.00 0.00)
8   (183.46  10.79)   (16.31  16.58)    (0.00 0.00)       (0.00 0.00)    (0.00 0.00)
9   (181.00   5.39)   (15.05  11.47)    (0.00 0.00)       (0.00 0.00)    (0.00 0.00)
10  (178.67   7.92)   (9.28    9.20)    (0.00 0.00)       (0.00 0.00)    (0.00 0.00)
11  (182.67   8.71)   (14.77  11.50)    (0.00 0.00)       (0.00 0.00)    (0.00 0.00)
12  (180.62   8.50)   (10.23   9.14)    (0.00 0.00)       (0.00 0.00)    (0.00 0.00)
13  (181.62   9.63)   (10.78   7.97)    (0.00 0.00)       (0.00 0.00)    (0.00 0.00)
14  (179.71   9.90)   (8.92    7.33)    (0.00 0.00)       (0.00 0.00)    (0.00 0.00)
16  (10.00    0.00)   (0.30    0.17)    (0.00 0.00)       (0.00 0.00)    (0.00 0.00)

Modified TCP
Nd  T'put(mean,sd)    Q'ing(mean,sd)    RTT(mean,sd)      Drop(mean,sd)  Retx(mean,sd)
1   (178.21  55.62)   (211.58  88.27)   (206.51 89.71)    (1.92 8.99)    (0.17 0.55)
5   (182.33  13.64)   (20.63  20.71)    (0.00 0.00)       (0.00 0.00)    (0.00 0.00)
6   (181.08  12.21)   (11.44  11.06)    (0.00 0.00)       (0.00 0.00)    (0.00 0.00)
7   (178.04   8.25)   (8.80    9.98)    (0.00 0.00)       (0.00 0.00)    (0.00 0.00)
8   (182.33  10.53)   (10.89   9.97)    (0.00 0.00)       (0.00 0.00)    (0.00 0.00)
9   (178.46   7.43)   (7.80    6.79)    (0.00 0.00)       (0.00 0.00)    (0.00 0.00)
10  (181.88  10.68)   (13.88  12.15)    (0.00 0.00)       (0.00 0.00)    (0.00 0.00)
11  (181.04  12.78)   (11.82  10.27)    (0.00 0.00)       (0.00 0.00)    (0.00 0.00)
12  (181.42   9.80)   (8.13    6.27)    (0.00 0.00)       (0.00 0.00)    (0.00 0.00)
13  (182.33  16.28)   (13.36  16.08)    (0.00 0.00)       (0.00 0.00)    (0.00 0.00)
14  (178.29  10.33)   (5.88    4.19)    (0.00 0.00)       (0.00 0.00)    (0.00 0.00)
16  (10.00    0.00)   (0.58    0.22)    (0.00 0.00)       (0.00 0.00)    (0.00 0.00)
The preceding data shows that the simulated congestion control modifications are effective on both high and low latency networks. Both examples give a ~15% increase in average throughput for source 1 (comparing source 1 in the above tables). This improvement is relatively high considering the highly congested link and the non-reactive nature of the other sources. The extra throughput has been achieved through the decreased TCP congestion window response time function shown in section 2.
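The benefit of restarting above one segment can be sketched as follows. This is not the recalculation from section 2; it is an illustrative assumption in which the sender caches the window it previously sustained and resumes at half of it, with a fallback mode that drops back to a full slow start:

```python
def reno_restart(cached_cwnd):
    """Standard behaviour: every restart returns to slow start with cwnd = 1 segment."""
    return 1

def modified_restart(cached_cwnd, congestion_high=False):
    """Assumed modified behaviour: resume at half the previously sustained
    window, falling back to a full slow start when congestion is high."""
    if congestion_high:
        return 1                      # fallback mode behaves like Reno
    return max(1, cached_cwnd // 2)   # assumed restart level, not the section 2 formula

def round_trips_to_recover(start, target):
    """Round trips of slow start doubling needed to regain `target` segments."""
    trips, window = 0, start
    while window < target:
        window *= 2
        trips += 1
    return trips

# For a connection that previously sustained a 64-segment window:
reno_cost = round_trips_to_recover(reno_restart(64), 64)          # 6 round trips
modified_cost = round_trips_to_recover(modified_restart(64), 64)  # 1 round trip
```

Under these assumptions, each restart costs the standard sender several round trips of exponential growth that the modified sender largely avoids, which is where the throughput gain on restart-heavy connections comes from.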
3.3.3 Equilibrium of TCP Connections and Combined Traffic

In order to examine how TCP Reno and Modified TCP work in conjunction with each other, experiments were performed using the network model from Experiment 1 (exp1.l). The number of streams was reduced to three, and all connections were assigned 15,000 packets to transmit. This allows a clear picture to be established of how the different protocols interact with each other. For our purposes, equilibrium is defined as a state of stability: for example, a situation where competing TCP sources have obtained equal shares of the available bandwidth. This state should remain constant until further events cause disruption. In the first experiment, all three streams were set to be either TCP Reno or the Modified source. The results can be seen in figure 9.
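The equal-share notion of equilibrium used here can be quantified with Jain's fairness index, a standard measure (not one used in this report) that equals 1.0 exactly when all sources receive equal shares:

```python
def jain_fairness(throughputs):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2), in (0, 1]."""
    n = len(throughputs)
    total = sum(throughputs)
    squares = sum(x * x for x in throughputs)
    return (total * total) / (n * squares) if squares else 1.0

# Three sources at the ~170-packet share seen in figure 9 are perfectly fair:
equal = jain_fairness([170, 170, 170])    # -> 1.0
skewed = jain_fairness([300, 100, 100])   # below 1.0 while shares differ
```

Tracking this index over the simulation time steps would give a single curve showing how quickly each protocol mix converges to equilibrium.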
Figure 9: Comparison of TCP Reno and Modified TCP
[Plot "Fairness of TCP Reno vs Modified": number of packets (0-500) against time step (0-1000), with three "modified" and three "reno" curves; labels mark the TCP Reno and Modified TCP traces.]
Figure 9 shows that the modified TCP sources arrive at equilibrium (where all sources have a fair share of the available bandwidth) much sooner, due to their faster responses to network congestion. In the separate experiment using all TCP Reno sources, however, the sources gradually fight for bandwidth after a slow start and only arrive at equilibrium around time step 850. In the Modified TCP experiment, just after time step 800, the longest running source finishes its transmission, which frees bandwidth for the other competing sources. They therefore obtain a share of ~250 packets as opposed to the previous ~170. In the following experiments (depicted in figures 10 and 11), the modified and Reno sources are mixed in order to observe their interaction. Ideally, the modified TCP should perform in a fair manner and quickly establish an equilibrium with the competing sources. In the first experiment, the longest running source (starting at time 0) is set to be TCP Reno. At time steps 100 and 144, further modified sources are engaged.
Figure 10: Modified TCP Sources Competing with TCP Reno
[Plot "Fairness of TCP Reno vs Modified": number of packets (0-500) against time step (0-1000); labels mark the TCP Reno and Modified TCP curves.]
Here, the sources perform much as when run in their isolated experiments (figure 9). However, equilibrium is reached at a midpoint (t=~700) between the two previous runs. This would suggest that despite the modified sources taking advantage of the available bandwidth, a competing Reno source is able to obtain a fair share. The point at which the modified TCP begins to relinquish bandwidth is around time step
600. The long running Reno source finally terminates its transmission just before time step 1000.
Figure 11: TCP Reno Sources Competing with Modified TCP
[Plot "Fairness of TCP Reno vs Modified": number of packets (0-500) against time step (0-1000); labels mark the Modified TCP and TCP Reno curves.]
In the second scenario, the dominant modified source makes good use of the available bandwidth and is able to terminate its transmission around time step 770. However, the competing Reno traffic does not seem to deviate from its performance in the preceding experiments; it follows much the same curve as seen in figures 9 and 10. Equilibrium between the sources should have been reached by time step ~800. A secondary experiment was run in which the modified source was assigned 25,000 packets to transmit. This showed the point of equilibrium to be time step 810, with all sources having a ~170 packet share.
4 Evaluation of Experiment Results

In the previous section we gave some examples of the anticipated performance improvements from modified TCP congestion control algorithms. It is apparent from the preceding simulation results that a significant increase in bandwidth utilisation is achieved using the proposed modifications, without placing excessive demands on the network. For example, scenarios one (exp1.l) and four (exp4.l) give the restart traces in figures 12 and 13 (t = time step).
Figure 12: Connection Restarts for Reno and Modified Sources (Experiment 1)
[Plot "Experiment 1 TCP Restarts": source number (1-15) against time step (100-1600), with separate markers for Modified TCP and TCP Reno restarts.]
Table 5: Reno and Modified Connection Restart Statistics (Experiment 1)

exp1.output.mod:               exp1.output.reno:
t900:  source: 7  RESTART      t900:  source: 7  RESTART
t1100: source: 9  RESTART      t1100: source: 9  RESTART
t1100: source: 10 RESTART      t1100: source: 10 RESTART
t1300: source: 14 RESTART      t1300: source: 14 RESTART
t1300: source: 13 RESTART      t1300: source: 13 RESTART
t1400: source: 1  RESTART      t1500: source: 1  RESTART
t1500: source: 10 RESTART

(See Appendix C for the scenario models.)
Figure 13: Connection Restarts for Reno and Modified Sources (Experiment 2)
[Plot "Experiment 2 TCP Restarts": source number (1-15) against time step (0-2000), with separate markers for Modified TCP and TCP Reno restarts.]
Table 6: Reno and Modified Connection Restart Statistics (Experiment 2)

exp4.output.mod:               exp4.output.reno:
t900:  source: 7  RESTART      t900:  source: 7  RESTART
t900:  source: 1  RESTART      t900:  source: 1  RESTART
t900:  source: 5  RESTART      t900:  source: 5  RESTART
t1000: source: 6  RESTART      t1000: source: 6  RESTART
t1300: source: 5  RESTART      t1300: source: 9  RESTART
t1200: source: 12 RESTART      t1300: source: 8  RESTART
t1300: source: 8  RESTART      t1400: source: 12 RESTART
t1400: source: 9  RESTART      t1400: source: 13 RESTART
t1600: source: 10 RESTART      t1500: source: 14 RESTART
t1600: source: 13 RESTART
Here, we can see that the modified TCP caused marginally more TCP restarts than TCP Reno. However, in other tests, run over a much longer period on more congested links, the results in table 7 were observed.
Table 7: Comparison of Connection Restarts

TCP Reno      373 Restarts
Modified TCP  380 Restarts

Taking into account this small increase in TCP restarts, it is reasonable to conclude that, in small networks such as the ones modelled, the modified TCP does not place excessive strain on the network by bombarding it with packets [21]. Furthermore, as shown by the results in the previous section, the modified TCP is able to provide the performance improvements recommended in RFC 1106 [7], which may be desirable on higher bandwidth connections.
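Restart counts like those in tables 5-7 can be extracted mechanically from the REAL output. The sketch below assumes only the trace line format shown in table 5 (`t900: source: 7 RESTART`):

```python
import re

# One restart event per line, in the format shown in table 5 (assumed):
#   t900: source: 7 RESTART
LINE = re.compile(r"t(\d+):\s*source:\s*(\d+)\s+RESTART")

def count_restarts(trace):
    """Return a {source: restart count} map for one simulation trace."""
    counts = {}
    for line in trace.splitlines():
        match = LINE.search(line)
        if match:
            source = int(match.group(2))
            counts[source] = counts.get(source, 0) + 1
    return counts

# The exp1.output.mod trace from table 5:
mod_trace = """t900: source: 7 RESTART
t1100: source: 9 RESTART
t1100: source: 10 RESTART
t1300: source: 14 RESTART
t1300: source: 13 RESTART
t1400: source: 1 RESTART
t1500: source: 10 RESTART"""

mod_counts = count_restarts(mod_trace)   # source 10 restarts twice, total 7
```

Summing the per-source counts over a long run yields aggregate figures of the kind reported in table 7.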
5 Related Work

Possibly the most notable work in this area has been performed by Larry Peterson and the Advanced Protocol Design group at the University of Arizona [4][5][6]. Using the X-Kernel network simulation environment, a re-implementation of TCP was produced (named TCP Vegas) which addressed many of the most prominent performance issues. In TCP Vegas, proactive congestion control and avoidance mechanisms were implemented in both slow start and normal TCP operation; these monitor the connection's Round Trip Time (RTT) in order to anticipate potential congestion situations. Further enhancements were made to TCP's retransmission algorithms which, combined with the advanced congestion control, gave performance increases of between 40 and 70%. However, the modifications made to TCP's congestion control placed the emphasis on minimising packet loss through a cautious slow start algorithm, involving window resizing with every other ACK received. This was designed to allow reaction to excessive RTTs, which is important when considering intermediate hardware on the Internet.

In addition to his (and Michael Karels's) paper on "Congestion Avoidance and Control" [8], Van Jacobson has continued to work on extensions to the TCP protocol [9] [10] (both obsoleted by [12]), [20], and assisted with work on TCP header compression in [11]. In [18], V. Paxson compiled a summary of common problems in TCP implementations, with particular reference to congestion control. Details, including illustrative TCP connection dumps, are given on aspects of TCP slow start, packet retransmission and the failure to retain above-sequence data for future transmission. Further details on the required functionality of a TCP component can be found in RFC 1122 [3], which details the communication layers' requirements for Internet hosts.
Our initial modifications to TCP's congestion control are oriented more towards High Bandwidth and Intranet performance enhancements, as the effects of more aggressive slow start algorithms over larger scale networks have not been simulated or prototyped at this time. It is hoped that future work will also lead to a generic solution to this problem that can be applied to both small and large scale internets of any capacity.
See http://www.cs.arizona.edu/protocols for further information.
6 Conclusions and Future Directions

In this paper, we have shown how modifications to TCP's slow start and congestion control algorithms can improve performance over smaller scale networks without adversely affecting the overall flow of data. In particular, simulations using modified TCP code showed how a more aggressive slow start algorithm can significantly improve overall throughput for a TCP connection without incurring excessive restarts or connection timeouts. Such modifications to the sender side TCP were shown to give performance measurements at least as good as those of standard TCPs, but with up to 100% improvement in some cases. The average increase for simulated data was in the region of 25-30%. It is anticipated that the proposed modifications, when prototyped in the Linux kernel, will provide similar results. In particular, experimentation will involve rigorous testing over an ATM LAN, which should benefit greatly from the modified congestion control algorithms [1] [19].
7 Acknowledgements

Valuable ideas and feedback were received from Dr. Dave Morris of the Virtual Science Park project at the University of Leeds. Further support was provided by members of the ATM-MM group: Abdulfatah Mashat, Mohammed Rahin and Jim Jackson. Feedback was also gratefully received from Craig Partridge (BBN Corp., US), Alan Cox (Cymru.net, Wales), and Werner Almesberger and Philippe Oeschlin (LRC at EPFL, Switzerland). This work has been supported and funded by the EPSRC and Torch Telecom through a CASE Studentship grant.
References

[1] I. Andrikopoulos et al.: TCP/IP Throughput Performance Evaluation for ATM Local Area Networks. 4th IFIP Workshop on Performance Modelling & Evaluation of ATM Networks, Ilkley, 1996
[2] C. Aurrecoechea, A.T. Campbell, L. Hauw: A Review of Quality of Service Architectures. ACM/Springer Verlag Multimedia Systems, Vol. 3, No. 5, November 1995
[3] R. Braden: Requirements for Internet Hosts - Communication Layers. RFC 1122, 1989
[4] L. Brakmo, S. O'Malley, L. Peterson: TCP Vegas: New Techniques for Congestion Detection and Avoidance. Proceedings of SIGCOMM '94, 1994
[5] L. Brakmo, L. Peterson: TCP Vegas: End to End Congestion Avoidance on a Global Internet. IEEE Journal on Selected Areas in Communication, Vol. 13, No. 8 (October 1995), pages 1465-1480
[6] L. Brakmo, L. Peterson: Performance Problems in BSD4.4 TCP. Computer Communication Review, Vol. 25, No. 5 (October 1995), pages 69-86
[7] R. Fox: TCP Big Window and Nak Options. RFC 1106, 1989
[8] V. Jacobson, M.J. Karels: Congestion Avoidance and Control. SIGCOMM '88, 1988
[9] V. Jacobson, R.T. Braden: TCP Extensions for Long-Delay Paths. RFC 1072, 1988
[10] V. Jacobson, R. Braden, L. Zhang: TCP Extension for High-Speed Paths. RFC 1185, 1990
[11] V. Jacobson: Compressing TCP/IP Headers. RFC 1144, 1990
[12] V. Jacobson, R. Braden, D. Borman: TCP Extensions for High Performance. RFC 1323, 1992
[13] S. Kalyanaraman, R. Jain, R. Goyal, S. Fahmy: Performance of TCP/IP Using ATM ABR and UBR Services over Satellite Networks, 1996
[14] S. Keshav: REAL: A Network Simulator. Technical Report 88/472, Department of Computer Science, UC Berkeley, 1988
[15] S. Keshav: An Engineering Approach to Computer Networking. Addison Wesley, 1997, ISBN 0-201-63442-2
[16] K. Lakshman, M. Manoharan: Providing Quality of Service Support for TCP/UDP Applications over ATM Networks. http://yellow.ccs.uky.edu/ lakshman/Research/ipqos/syp.html
[17] J. Nagle: Congestion Control in IP/TCP Internetworks. RFC 896, 1984
[18] V. Paxson: Known TCP Implementation Problems. IETF Internet Draft, draft-ietf-tcpimp-prob-00.txt, March 1997
[19] A. Romanow, S. Floyd: Dynamics of TCP Traffic over ATM Networks. IEEE JSAC, Vol. 13, No. 4, pages 633-641, 1995
[20] H. Schulzrinne, S. Casner, R. Frederick, V. Jacobson: RTP: A Transport Protocol for Real-Time Applications. RFC 1889, 1996
[21] W. Stevens: TCP Slow Start, Congestion Avoidance, Fast Retransmit, and Fast Recovery Algorithms. RFC 2001, 1997
[22] W. Stevens: TCP/IP Illustrated, Volume 1/2. Addison-Wesley, 1994
[23] A. S. Tanenbaum: Computer Networks (Third Edition). 1996, ISBN 0-13-394248-1
8 Appendix A

Unfortunately, the volume of data in the appendices is too great to publish in this format. Please contact the authors or visit the group's Web pages to obtain the complete publication. A complete version of this report is available from: http://dream1.leeds.ac.uk/~atm-mm
9 Appendix B

A complete version of this report is available from: http://dream1.leeds.ac.uk/~atm-mm
10 Appendix C

A complete version of this report is available from: http://dream1.leeds.ac.uk/~atm-mm
11 Appendix D
11.1 REAL Data Output Tables
The REAL simulation package, in its standard configuration, will output a 'dump' file for any given simulation run. This contains an entry for each time step in the simulation, giving details of the queue length, throughput, etc. for each node in the network. For example:

Time  #  Type     G/w    Xmit  Q'ing(min,ave,max)[#]  Drops  Retx  RTT(min,ave,max)
200   1  JK_RENO  2(FQ)  24    0.00 0.32 0.78 [0]     0      0     44.15 44.31 44.53
This indicates that at time step 200, node number 1, a TCP Reno source, was using gateway number 2. It transmitted 24 packets with a zero queue length. During its lifetime, the minimum queue length was zero, the average 0.32 and the maximum 0.78. In this time step, it dropped zero packets and received zero retransmit requests. The final three numbers give the minimum, average and maximum round trip times for this node.

Upon completion, the simulator outputs a summary of information for the network; this is the information used in this report. The heading is as follows:

Node  G/w  T'put(mean,sd)  Q'ing(mean,sd)  RTT(mean,sd)  Drops(mean,sd)  Retxs(mean,sd)
Example data may be:

1  2  (132.47 115.34)  (17.91 29.09)  (35.19 39.74)  (17.33 41.82)  (0.20 0.54)
This would indicate that node 1, using gateway 2, had a mean throughput of 132.47 packets per time step, with a standard deviation of 115.34 packets. Queues at this node held 17.91 packets on average, with a standard deviation of 29.09. The mean Round Trip Time for this node was 35.19 msecs, with a standard deviation of 39.74, and so on. In order to trim the line length in the body of this report, the table headers have been edited so that 'Node' becomes 'Nd', and the G/w (gateway) column is removed. Note that queue length is a measure of the time between the last bit of a packet arriving at a router and its first bit being placed on the outbound network connection. Furthermore, REAL measures time in microseconds and bandwidth in bits/second.
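A summary line in this format can be parsed mechanically. The following sketch assumes exactly the layout described above (node, gateway, then five (mean sd) pairs), and the metric names in the returned dictionary are our own shorthand:

```python
import re

# One "(mean sd)" pair, as in the REAL summary output described above:
PAIR = re.compile(r"\(\s*([\d.]+)\s+([\d.]+)\s*\)")

def parse_summary(line):
    """Split a REAL summary line into node, gateway and five (mean, sd) pairs."""
    node, gateway = (int(field) for field in line.split()[:2])
    pairs = [(float(m), float(s)) for m, s in PAIR.findall(line)]
    names = ["tput", "queue", "rtt", "drops", "retx"]
    return {"node": node, "gateway": gateway, **dict(zip(names, pairs))}

row = parse_summary("1 2 (132.47 115.34) (17.91 29.09) "
                    "(35.19 39.74) (17.33 41.82) (0.20 0.54)")
# row["tput"] is (132.47, 115.34); row["rtt"] is (35.19, 39.74)
```

Reading every summary line this way is one route to reproducing tables such as figures 7 and 8 from the raw simulator output.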