Queue Management for TCP Traffic over 3G Links

Mats Sågfors (a), Reiner Ludwig (b), Michael Meyer (b), Janne Peisa (a)
a) Oy LM Ericsson Ab, FIN-02420 Jorvas, Finland
b) Ericsson Research, Ericsson Allee 1, D-52134 Herzogenrath, Germany

Abstract— We propose a novel active queue management scheme tailored to the specific characteristics of third generation (3G) cellular networks. Such links are often the bottleneck of an end-to-end connection and dedicated to one host. Taking advantage of these specific characteristics, we developed a queuing scheme that is simpler than popular Random Early Detection (RED) schemes. Despite its simplicity, our solution yields superior performance for 3G links in terms of high link utilization, low queuing delay, and high end-to-end throughput for TCP bulk data transfers. Our simulation results show a clear win over both conventional drop-tail queuing and a comparable RED scheme.

I. INTRODUCTION

The deployment of third generation cellular networks will facilitate wireless Internet access at data rates up to 384 kbit/s for wide area coverage. Still, it is likely that such wireless links will often be the bottleneck of an end-to-end connection. In overload situations, the amount of data queued at the link will increase, and in a persistent overload situation the queue will eventually overflow. In this paper, we focus on queue management for the Interactive bearer, which is expected to carry the majority of the TCP traffic.

The goal of queue management is to achieve high link utilization without introducing excessive delays into the end-to-end path. For good link utilization, it is necessary that the queue can accommodate variations in traffic load. These variations include sudden packet bursts as well as load changes due to the congestion control mechanisms of TCP [2]. In addition, variations may be caused by sudden changes of the radio channel conditions. However, over-buffering can lead to a number of unwanted effects such as spurious timeouts, unfairness between competing flows, slow reactivity in web surfing, and long TCP timeouts [13].

For Internet routers, the queue management problem has recently been the subject of extensive research, and a number of methods for controlling the queue size have been proposed. Common to these Active Queue Management (AQM) methods [3] is that they detect congestion before the limits of the queue have been reached. Of the AQM methods proposed in the literature, Random Early Detection (RED) [9] has found the widest acceptance. A number of modifications to RED have been suggested; see [8] and the references therein. These AQM methods are reported to work well in fixed network routers handling large volumes of aggregate traffic. In this paper, however, we argue why RED may not be the best choice for last/first-hop wireless links.

As described in Section II, the link studied in this work is characterized by a number of features common to many last/first-hop wireless links: (i) the link queue is dedicated to one user, (ii) the link is likely to be the bottleneck of an end-to-end connection, (iii) the link has a high latency, and (iv) a major part of the pipe capacity comes from the bottleneck link, making it feasible to estimate the end-to-end pipe capacity.



Previous studies on link buffer design for cellular systems include [10], [12] and [15]. The main contribution of this paper is a proposal for how to implement active queue management for wide-area wireless networks such as GPRS and WCDMA. The scheme we propose differs from existing solutions such as RED in two aspects:

1. It exploits knowledge about the link's capacity to set the minimum threshold allowed for the instantaneous queue size.
2. It uses a deterministic rather than a random dropping policy. In addition, it operates on the instantaneous rather than an averaged queue size.

In our analysis we show that our queue management scheme not only performs considerably better than a conventional drop-tail queuing scheme, but also performs better than RED queuing for typical configurations of a 3rd generation wireless link.

The paper is organized as follows. Section II characterizes the wireless link studied in this work, and Section III discusses the objectives of queue management. In Section IV, we illustrate why RED did not yield the desired performance. Our solution is presented in Section V, including both the method for estimating the minimum queue size and the new AQM algorithm. Finally, Section VI contains simulation results illustrating the potential performance improvements of the proposed methods.

II. LINK CHARACTERISTICS OF 3G SYSTEMS

Cellular networks like GSM, EDGE, WCDMA or CDMA2000 exhibit certain characteristics which need to be considered as background. First, these systems are tuned to support TCP traffic in the best possible way, as described in [6]. TCP prefers links that do not suffer from non-congestion related packet losses or packet reordering. Thus, cellular systems use link layer protocols that include a highly persistent Automatic Repeat reQuest (ARQ) mechanism with in-order delivery of packets. Radio transmission errors are handled by retransmitting the erroneous blocks. Consequently, TCP observes radio link problems (except during severe outage situations) merely as varying delays. Therefore, packet losses due to wireless transmission errors do not need to be considered.

Second, the end-to-end pipe capacity¹ PC of a TCP connection is dominated by the capacity of the wireless link LC:

LC + LCr = PC,

where LCr denotes the capacity of the path excluding the wireless link. LC is typically a few times LCr, because the round trip time of the wireless link, which we assume is the bottleneck link, is also the main contributor to the end-to-end round trip time. Note that PC above does not include the capacity of the (bottleneck) link queue. The large round trip times of the wireless link are due to architectural reasons for supporting mobility, time-consuming signal processing for efficient radio transmission, forward error correction including interleaving, and finally the above-mentioned link layer retransmissions.

Radio networks differ from the wired Internet in the ease with which local information allows for good estimates of the link capacity. Since the wireless link dominates the pipe capacity, this local knowledge can be exploited in the design of queue management algorithms, as will be shown later.

Third, in cellular networks the queue which feeds the wireless link layer is a per-host queue, resulting in a low degree of statistical multiplexing. In particular, this means that often only a single TCP flow, or at most 2-4 TCP flows, occupy the queue. Cellular networks use per-host queues because this approach enables efficient usage of scarce radio resources by applying appropriate Medium Access Control (MAC) mechanisms, which require per-host information.

Finally, it should be mentioned that wireless wide area networks support both dedicated and common radio channels. For dedicated channels, the radio resources are allocated solely to one user, while for common channels, several users may share common resources by multiplexing. The latter concept is applied in GPRS and EGPRS, while both are used in WCDMA. However, those networks implement dedicated queues. Therefore, the basic ideas of the queue management discussed in this paper apply to both channel allocation schemes.

III. DESIGN GOALS FOR QUEUE MANAGEMENT

Designing a queue management scheme involves a number of decisions. One way to describe the task is to divide the problem into the following questions:

• What is the appropriate size of the queue?
• How should the desired queue size be maintained (reactive “Drop-on-Full” or proactive “AQM”)?
• What dropping strategy should be applied (Drop-from-Front or Drop-from-Tail; probabilistic versus deterministic packet dropping)?

One goal of queue management is to achieve a high degree of link utilization. This is particularly important for resource-limited systems like cellular networks, where the radio spectrum is a scarce resource. Thus the objective is to have data available in the queue whenever there are free resources on the link. However, excessive queuing should also be avoided. Below, we state a number of known problems associated with over-buffering (see [13]).

¹ In this paper, the word “capacity” denotes the bandwidth-delay product.


1. RTO inflation: Large queues may result in excessive delays during timeout-triggered loss recovery.

2. Competing TCP flows: A TCP connection that occupies a large queue may lock out other connections entering the link, resulting in unfair sharing of the available bandwidth.

3. Timeouts on SYN segments: The RTT over an occupied queue may exceed the default initial value of the TCP timeout timer. Thus, new TCP connections entering a shared and congested link may experience timeouts during the initial handshake (see the sketch following this list).

4. Viscous web surfing: A typical web-surfing scenario is to jump from one web site to another before the transfer of the first is completed. In an over-buffered link, the queue may then be filled with unwanted data. This is a waste of radio resources, as the link is transferring stale data.
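To make problems 1 and 3 concrete, recall that a FIFO backlog of Q bytes on a link of rate R adds Q·8/R seconds of delay. A quick sanity check in Python (the 64 kbit/s rate and 24 kbyte queue limit match the configurations simulated in Section VI; the 3 s figure is the common default initial retransmission timeout, an assumption on our part):

```python
def queuing_delay_s(queue_bytes: int, link_rate_bps: int) -> float:
    """Seconds of delay a FIFO backlog adds: queue size over drain rate."""
    return queue_bytes * 8 / link_rate_bps

# A full 24 kbyte drop-tail buffer on a 64 kbit/s link:
print(queuing_delay_s(24_000, 64_000))  # 3.0 s: a SYN queued behind this
                                        # backlog meets the default 3 s
                                        # initial timeout head-on (problem 3).
```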

IV. PROBLEMS WITH RED

The state of the art in active queue management today is Random Early Detection (RED), and the algorithm is recommended in [5] and [14] for dedicated high-latency links. In RED, a queue size threshold Tmin below the absolute queue limit Tmax is defined. When a low-pass filtered measurement of the queue size is greater than the early congestion threshold Tmin but less than Tmax, incoming packets are discarded with a certain probability. At these intermediate queue sizes, a typical discarding probability is up to a few percent of the arriving packets [7].

RED has been reported to work well for routers handling large volumes of aggregate traffic; see [8]. For such routers, the queuing delay can in general be kept low. For example, the queuing delay contribution of a large 1.5 kbyte packet in a 12 Mbit/s router is only 1 ms. This means that large fluctuations in the instantaneous queue size can be accepted without introducing too much delay into the TCP connections. A sudden buildup of the queue is drained quickly unless the overload is persistent. Therefore, only persistent router overload situations must be handled, which justifies the low-pass filtering of the instantaneous queue size. Moreover, the traffic is usually aggregated from a large number of connections, which is why the probabilistic scheme of signaling congestion to a limited number of connections is justified.

These relations do not, however, hold for the dedicated queue at the wireless link. For bit rates typical of last-hop wireless links, the queuing delay caused by each additional packet in the queue is not negligible. On a 64 kbit/s link, for example, each 1.5 kbyte packet contributes more than 180 ms to the overall queuing delay. Therefore, the instantaneous queue size has to be controlled adequately to avoid the unwanted effects of over-buffering, and a sudden buildup of the queue is not easily drained. The effect of the queue filter is illustrated in Figure 1b. As the filtered measurement reaches Tmin (6 kbyte) in the figure, the actual queue size is already more than 10 kbyte. Thus, in the simulations in Section VI, we let RED operate on the instantaneous queue size.
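For later reference, the drop decision of this RED variant can be sketched as follows. This is our own minimal rendering, assuming the configuration used in Section VI: no queue filter (the instantaneous queue size is used) and a drop probability rising linearly from 0 at Tmin to Pmax at Tmax; the count-based drop spreading of full RED is omitted:

```python
import random

def red_drop(q: int, t_min: int, t_max: int, p_max: float = 0.10) -> bool:
    """Return True if the arriving packet should be dropped.

    q is the instantaneous queue size in bytes (no low-pass filter).
    """
    if q <= t_min:
        return False                      # no early congestion signal
    if q >= t_max:
        return True                       # hard queue limit reached
    # Linear ramp between the thresholds, at most p_max (e.g. 10%).
    p = p_max * (q - t_min) / (t_max - t_min)
    return random.random() < p
```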

[Figure 1: RED in the per-host wireless queue. Tmin = 6 kbyte, Tmax = 24 kbyte; maximum dropping probability 10%; queue filter time constant 2 s. Panels: a.) one TCP connection over a 64 kbit/s link; b.) close-up view of a., showing the queue fill state, discarded packets, and the filtered queue measurement. Axes: queue fill state (kbyte) versus time (s).]

The probabilistic discarding mechanism imposes additional problems. The congestion feedback may be delayed by a considerable number of packets after congestion has been detected, with the possible result that the queue becomes heavily loaded or saturated. Increasing the discarding probability to avoid queue saturation may instead lead to several lost segments from the same window of a TCP connection, which may result in under-utilization of the link. Hence, for link buffers that experience only a low degree of statistical multiplexing, the probabilistic approach is not equally justified. The large queue fluctuations seen in Figure 1a are mainly due to the randomness of the dropping pattern.

The probabilistic discarding mechanism thus poses a difficult tradeoff: rapid action to avoid over-buffering calls for a rather high dropping probability with a steep increase as the queue grows, but a high dropping probability is likely to cause multiple segment losses from the same TCP sending window. For RED, we did not find an adequate compromise between these two operating points.

RED has been developed for routers handling traffic with highly varying and unknown characteristics. As the present problem has a number of special features, as described in Section II, it is not far-fetched to expect that a method tailored to such systems could perform better. In particular, we stress the possibility of estimating the end-to-end pipe capacity, which allows for tight control of the queue size. We made considerable efforts in searching for adequate RED parameter settings for the problem at hand. Still, the best RED configurations that we found were outperformed by the simpler method proposed in the following section. This does not imply that RED is inappropriate for the present queuing problem, but rather that better results can be achieved with less effort.

V. THE PACKET DISCARD PREVENTION COUNTER

Based on our analysis of the problem in the previous section, we developed a new AQM scheme that performs well both in terms of per-packet delay and link utilization. It uses a deterministic dropping policy at times of link congestion, and strict control of the queue size is achieved by acting on the instantaneous queue size. This approach contrasts with RED, which uses a probabilistic discarding policy based on a filtered queue size measurement. Similarly to RED, the suggested method uses an early congestion threshold Tmin; the key idea of the proposed method is the action taken when this threshold is reached. When the queue size reaches Tmin, a single packet is discarded to signal congestion to the TCP sender.


The algorithm that we use to prevent closely spaced packets from being discarded, the Packet Discard Prevention Counter (PDPC) algorithm, is illustrated in Figure 2. For each arriving packet, the present queue size q is compared with the maximum threshold Tmax. If the queue size exceeds the absolute queue limit Tmax, the queue is controlled according to a conventional drop-on-full scheme and the counter C is reset to zero. For intermediate queue load (Tmin < q ≤ Tmax), the prevention counter C governs the decision: if C > 0, the packet is accepted and C is decremented by one; if C has expired (C = 0) and q > Tmin, a single packet is discarded and C is set to n (n > 0). Packets arriving at a queue size of at most Tmin are always accepted. In this way, at least n packets are accepted after each early discard, which prevents closely spaced drops.

[Figure 2: The Packet Discard Prevention Counter algorithm.]
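Read directly off the flowchart, the PDPC decision logic fits in a few lines. The sketch below is our Python rendering of Figure 2 (class and variable names are ours); the byte-counting variant discussed below would decrement the counter by each accepted packet's size rather than by one:

```python
class PDPC:
    """Packet Discard Prevention Counter (per Figure 2).

    After an early discard at q > t_min, the counter C guarantees that
    the next n packets are accepted, preventing closely spaced drops
    from the same TCP window.
    """

    def __init__(self, t_min: int, t_max: int, n: int):
        self.t_min = t_min    # early congestion threshold (bytes)
        self.t_max = t_max    # absolute queue limit (bytes)
        self.n = n            # packets to accept after each early discard
        self.c = 0            # prevention counter C

    def accept(self, q: int) -> bool:
        """Decide for one arriving packet, given the current queue size q (bytes)."""
        if q > self.t_max:        # conventional drop-on-full
            self.c = 0
            return False
        if self.c > 0:            # inside the prevention window: always accept
            self.c -= 1
            return True
        if q > self.t_min:        # early congestion signal: discard one packet
            self.c = self.n
            return False
        return True               # uncongested: accept
```

Note that the sketch only makes the accept/discard decision; with the Drop-from-Front policy used in Section VI, a discard removes the oldest packet in the queue rather than the arriving one, while the decision logic stays the same.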

… conditions, which is common in resource-limited systems like terrestrial wireless networks. We note that our method for estimating an appropriate value for the Tmin threshold is independent of which AQM method is chosen, as the reasoning applies to all methods, including RED, that are based on such an early congestion detection threshold.

The value of Tmax is less critical, and a well-behaving queue should not reach this size during normal operation. However, Tmax serves to guard against non-responsive, ill-behaved flows, and it constrains the worst-case queuing delay of the link. In a typical implementation of such queues, several links may share a common memory resource pool; thus, Tmax is also needed to ensure that one link does not starve the other links of memory resources. In the simulations, we have used Tmax = 4·Tmin. We note that, in contrast to RED, the value of Tmax has no influence on the packet dropping pattern unless the queue size reaches Tmax. The coupling in RED between Tmin, Tmax, and the linear increase in dropping probability appears to be one of the causes of the difficulties in tuning RED adequately.

The parameter n should be set to prevent packet losses from the same TCP window. A lower bound on a TCP sender's congestion window at the time of a packet drop can be estimated as

PCdrop = PCest + Tmin.

If the size of the packets arriving into the queue is SIP, a lower limit on n can thus be defined according to

n ≥ PCdrop / SIP.

The simplest way to use the counter is to use a static value of n based on some expected value of the typical packet size that flows over the link. However, the parameter n can also be set adaptively to reflect changing conditions, while ensuring that at least an amount of data equal to the pipe capacity is accepted after the packet drop. Instead of counting packets, we could also measure the amount of data (in bytes) that is transmitted over the link, and let one PCdrop of data flow over the link after a packet drop, regardless of the sizes of the packets that flow through. Using this approach and setting n = 2·Tmin, the proposed method has only one parameter which affects the dropping pattern at times of congestion detection.

The reasoning above on the values of Tmin, Tmax, and n is based on the argument of a single TCP connection loading the link. But typical use cases, like web browsing, may often generate several TCP flows that simultaneously load the link. An obvious question is therefore whether the arguments above hold when the packets of several TCP connections are multiplexed over the same link. Since the link buffer in cellular systems is dedicated to a single terminal, the number of simultaneous connections will be rather limited. Since each TCP connection increases its load every RTT, the load build-up is stronger when several connections share the link. In addition, the decrease in load resulting from one packet drop is smaller, as it affects only one of the active connections. Thus, if several TCP flows were to share the link continuously, a smaller value for Tmin than the one recommended here would be preferable. However, assuming flow-unaware queuing, we propose to dimension Tmin for a single flow. In case of multiple flows this results in increased per-packet delays, but optimal link utilization. Introducing flow-awareness and per-flow queuing remains an interesting topic for future research.
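As a worked example of the dimensioning rules above (a sketch under stated assumptions: the 64 kbit/s rate and 1.5 kbyte packets match the Section VI scenario, the 0.75 s round trip time is our illustrative estimate, and sizing Tmin to PCest follows the flattening observed later in Figure 3a):

```python
LINK_RATE_BPS = 64_000   # bottleneck link rate (Section VI scenario)
RTT_S = 0.75             # assumed end-to-end round trip time estimate
S_IP = 1_500             # IP packet size: MSS 1460 bytes plus headers

pc_est = int(LINK_RATE_BPS / 8 * RTT_S)  # pipe capacity, i.e. the
                                         # bandwidth-delay product: 6000 bytes
t_min = pc_est                           # early threshold sized to the pipe
pc_drop = pc_est + t_min                 # lower bound on the window at a drop
n = -(-pc_drop // S_IP)                  # ceiling division: n >= 8 packets
```

With these numbers, Tmin = 6 kbyte and n ≥ 8, in line with the Tmin = 6 kbyte, n = 10 configuration used in the simulations. In the byte-counting mode, one would instead let PCdrop = 2·Tmin bytes of data pass after each drop.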

VI. SIMULATION RESULTS

A. The simulator

The algorithms proposed in this paper have been evaluated by means of an event-driven protocol simulator. This simulator comprises detailed implementations of TCP and the relevant link layer protocols of a WCDMA system [4]. In particular, we used TCP Reno, configured for a mobile environment according to the recommendations in [11]: a large Maximum Segment Size (MSS) of 1460 bytes, an initial TCP window of 3 segments according to [1], and a sufficiently large socket buffer of 64 kbytes. The Internet and core network have been simplified to add a constant delay of 70 ms (one-way) to transmitted IP packets. Furthermore, it is assumed that packet losses occur only due to the applied dropping mechanisms at the link layer queue. Prior to transmission over the radio link, the IP packets are segmented into radio blocks of up to 40 bytes. A simple model of uniformly distributed block errors with a rate of 10% has been used to capture transmission errors over the air; as discussed above, persistent link layer error recovery guarantees error-free delivery of all IP packets. However, since the round trip time of the link layer protocol has been assumed to be 120 ms, the link layer retransmissions add to the end-to-end delay experienced by TCP.
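The simulation parameters stated above, gathered in one place for easy reference (a plain summary of the values given in the text, not actual simulator code; the field names are ours):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SimConfig:
    # TCP Reno, configured per [11] and [1]
    mss_bytes: int = 1460
    initial_window_segments: int = 3
    socket_buffer_bytes: int = 64_000
    # Simplified Internet / core network model
    core_one_way_delay_ms: int = 70
    # WCDMA link layer
    radio_block_bytes: int = 40        # IP packets segmented into radio blocks
    block_error_rate: float = 0.10     # uniform; fully recovered by ARQ
    link_layer_rtt_ms: int = 120
```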

B. Results

In this section, we compare the proposed solution with both RED and conventional Drop-on-Full strategies. Benchmarking against RED is not trivial, as RED delivers different results depending on the tuning of its parameters. We ended up with a RED configuration which has no queue filter, i.e. it operates on the instantaneous queue size, and uses a dropping probability that increases linearly from 0% to 10% between Tmin and Tmax, such that Tmax = 4·Tmin.² This configuration resulted in good throughput, but the per-packet delay is less favorable, since the queue size fluctuates considerably due to the random dropping mechanism. Since the wireless resources are scarce, we preferred to optimize throughput at the expense of delay.

² Note that common RED recommendations propose Tmax = 3·Tmin. In this case, however, a steeper dropping profile only resulted in frequent TCP timeouts.

[Figure 3: File transfer of a 250 kbyte object. a.) Mean file transfer time as a function of the early congestion threshold Tmin. b.) Mean queue size at times of packet arrival as a function of Tmin. c.) Mean end-to-end segment transfer time as a function of Tmin. d.) Mean file transfer time as a function of the counter value n (Tmin = 6 kbyte). Curves: PDPC (n=10), RED, Tail Drop.]

TABLE 1: NUMERICAL VALUES FROM FIGURE 3: A.) FILE TRANSFER TIME, B.) SEGMENT TRANSFER TIME.

  Queue scheme                                                             A.)       B.)
  Drop-on-Full, Drop-from-Tail, Tmax = 6 kbyte                             54.8 s    1017 ms
  Drop-on-Full, Drop-from-Front, Tmax = 6 kbyte                            47.3 s    1007 ms
  AQM RED, Drop-from-Front, Tmin = 6 kbyte, Tmax = 24 kbyte, Pmax = 10%    41.1 s    1759 ms
  AQM PDPC, Drop-from-Front, Tmin = 6 kbyte, Tmax = 24 kbyte, n = 10       40.2 s    1099 ms

1. Single TCP connection: File transfer

The first traffic scenario is FTP sessions transferring 250 kbyte files over a 64 kbit/s WCDMA link. Only one TCP session occupies the link at a time. In Figure 3, we compare the performance of the proposed method (PDPC) with both RED and conventional Drop-on-Full with Drop-from-Tail (Tail Drop) for a set of parameter values. Both PDPC and RED use Drop-from-Front. Drop-from-Front is preferred since the slow-start overshoot is much less severe when the queuing delay is bypassed by dropping the oldest packet in the queue.

In Table 1, we have summarized some numerical values from Figure 3. As can be seen, the AQM methods both outperform the passive queue. With respect to bulk transfer time (Figure 3a), RED and PDPC show comparable performance; PDPC is only slightly better. However, RED achieves this at the expense of operating a much larger queue, as seen from the graphs showing the average per-packet delay (Figure 3c) and the average queue usage at times of packet arrival (Figure 3b). This is because the random dropping mechanism introduces large queue fluctuations, while PDPC manages to keep the queue size smaller, still without draining the queue.

The passive queue shows the shortest segment transfer times, since the queue size is constrained to an absolute limit. However, passive queuing results in long file transfer times due to frequent TCP timeouts. It is therefore interesting to see that PDPC performs comparably to a passive queue with respect to per-packet delay, but with a file transfer delay comparable to RED. Thus, PDPC manages to provide both high throughput and low per-packet delay.

Note that the performance of both AQM methods improves (in terms of file transfer time) as Tmin grows, until the improvement flattens out at a queue size comparable to PC. A further increase of Tmin only adds more per-packet delay without improving link utilization. Indeed, this justifies the method of setting Tmin described in Section V.

In Figure 3d, we illustrate the object transfer delay as a function of the counter value n. Note that n = 0 represents a passive queue with Drop-from-Front. As expected, the performance improves as we increase n. However, when n is larger than the flight size of the TCP connection, no further improvement occurs. We note that the method is not particularly sensitive to the value of n, as long as it is large enough to prevent packet losses from the same TCP sending window.

2. Several TCP connections: Downloading a web page

In the following, we emulate the download of a web page using four parallel TCP connections. The index page is assumed to be 50 kbyte, with four linked objects of 50 kbyte each. The linked objects are downloaded after the index page has been received. This mimics an HTTP session without pipelining, with a browser configured to allow up to four simultaneous TCP connections. The results are shown in Figure 4.

As expected, the AQM methods outperform passive queuing. We note that both RED and PDPC result in similar performance in terms of transfer time (Figure 4a), and the lead of PDPC in terms of packet delay (Figure 4c) and queue occupancy (Figure 4b) seen for one TCP flow is now reduced significantly. This confirms expectations, because RED has been developed for queues handling aggregate traffic from many simultaneous sources. It is interesting to see that the mean queue occupancy (Figure 4b) is in fact larger than Tmin for both AQM methods. This is because the four simultaneous TCP connections, configured according to the recommendations in [11], place a very aggressive load on the link.

3. Resource sharing of a loaded link

In the following case, we test the resource-sharing capability of a 64 kbit/s link by testing how well a small object is transferred over an already loaded link. The link is continuously loaded with a persistent TCP connection transferring an object of “infinite” size. Over the same occupied link, we transfer 50 kbyte objects using different TCP connections, such that the 50 kbyte objects are transferred sequentially.

[Figure 4: File transfer of a Web page. a.) Mean transfer time as a function of the early congestion threshold Tmin. b.) Mean queue size at times of packet arrival as a function of Tmin. c.) Mean end-to-end segment transfer time as a function of Tmin. d.) Mean page transfer time as a function of the counter value n (Tmin = 6 kbyte). Curves: PDPC (n=10), RED, Tail Drop.]

[Figure 5: Mean transfer time of a 50 kbyte object over a link which is occupied by a highly persistent TCP connection, as a function of Tmin. Curves: PDPC (n=10), RED, Tail Drop.]

The results are illustrated in Figure 5. As expected, link resource sharing works better with a low degree of queuing. Thus, PDPC, which exercises tighter queue control than RED, shows better performance. Tail Drop has a hard constraint on the queue size, but its performance suffers from the uncontrolled packet dropping policy. The best result for PDPC is 18.0 seconds at Tmin = 7.5 kbyte, whereas RED has its best performance at Tmin = 3 kbyte with a mean transfer time of 19.3 seconds (Tail Drop: 23.4 s at Tmin = Tmax = 7.5 kbyte). The fact that RED performs best at a lower Tmin is explained by RED on average allowing the queue to grow beyond Tmin before any packets are discarded. An over-buffered link (Tmin = 70 kbyte, not illustrated in the figure) results in typical lock-out behavior; the average transfer time of a 50 kbyte object over such a link was 89 seconds in our study.

VII. CONCLUSION

In this paper, we developed a novel active queue management scheme tailored to the specific characteristics of third generation (3G) wireless links. Such links mostly dominate the end-to-end pipe capacity and experience only a low degree of statistical multiplexing, typically 1-4 TCP flows at any point in time. We show that passive queues exhibit poor performance, argue why Random Early Detection (RED) might not be the best choice in this case, and confirm this with simulation results. Our solution, the Packet Discard Prevention Counter (PDPC) algorithm, is not only simpler than RED but also achieves both design goals: apart from ensuring high link utilization, it can be tuned to minimize queuing delays while also minimizing the risk of dropping multiple packets from the same flight. The latter is achieved with a deterministic dropping strategy that is tailored to an estimate of the end-to-end pipe capacity and TCP's rate-halving policy. For the estimate, our scheme exploits the fact that the 3G link capacity usually dominates the end-to-end pipe capacity. Moreover, the active queue management scheme we propose avoids averaging of the queue size and instead operates on the instantaneous queue size. Our simulation results show a clear gain over conventional drop-tail queuing and a comparable RED scheme.

REFERENCES


[1] Allman, M., Floyd, S., Partridge, C.: “Increasing TCP's Initial Window”, RFC 2414, September 1998.
[2] Allman, M., Paxson, V., Stevens, W.: “TCP Congestion Control”, RFC 2581, April 1999.
[3] Braden, B. et al.: “Recommendations on Queue Management and Congestion Avoidance in the Internet”, RFC 2309, April 1998.
[4] Dahlman, E. et al.: “WCDMA - The Radio Interface for Future Mobile Multimedia Communications”, IEEE Trans. on Vehicular Technology, 47(4), pp. 1105-1118, November 1998.
[5] Dawkins, S. et al.: “End-to-end Performance Implications of Slow Links”, RFC 3150, July 2001.
[6] Fairhurst, G., Wood, L.: “Advice to link designers on link Automatic Repeat reQuest (ARQ)”, RFC 3366, August 2002.
[7] Floyd, S.: “RED: Discussions of Setting Parameters”, http://www.aciri.org/floyd/REDparameters.txt, November 1997.
[8] Floyd, S.: “RED (Random Early Detection) Queue Management”, http://www.aciri.org/floyd/red.html, April 2002.
[9] Floyd, S., Jacobson, V.: “Random Early Detection Gateways for Congestion Avoidance”, IEEE/ACM Transactions on Networking, 1(4), August 1993.
[10] Gurtov, A.: “Efficient Transport in 2.5G3G Wireless Wide Area Networks”, Licentiate Thesis, University of Helsinki, September 2002.
[11] Inamura, H. (Editor) et al.: “TCP over 2.5G and 3G Wireless Networks”, Work in Progress: draft-ietf-pilc-2.5g3g-12.txt, December 2002.
[12] Khan, F. et al.: “Link Layer Buffer Size Distributions for HTTP and FTP Applications in an IS-2000 System”, IEEE VTC'2000, Boston, USA, 2000.
[13] Ludwig, R. et al.: “Multi-Layer Tracing of TCP over a Reliable Wireless Link”, Proc. of ACM SIGMETRICS '99, Atlanta, USA, June 1999.
[14] Montenegro, G. et al.: “Long Thin Networks”, RFC 2757, January 2000.
[15] Shiu, H.-K. et al.: “Performance Analysis of TCP over Wireless Link with Dedicated Buffers and Link Level Error Control”, IEEE ICC 2001, Helsinki, June 2001.