Shielding TCP from Wireless Link Errors: Retransmission Effort and Fragmentation

Peter Langendörfer, Michael Methfessel, Horst Frankenfeldt, Irina Babanskaja, Irina Matthaei, and Rolf Kraemer
IHP, Im Technologiepark 25, D-15204 Frankfurt/Oder, Germany
[email protected]

Reprinted from: The Journal of Supercomputing, Vol. 23, (3), 245-260, 2002

Abstract

A known problem for TCP connections over wireless links is that errors in the wireless channel interfere with the TCP protocol even for minor packet loss. In the first part of this paper we evaluate how the data rate reduction depends on the channel delay. For comparatively short delays in the order of 100 ms, the decrease of the throughput is noticeable but not dramatic. This indicates that the problem is not severe if the communication partners are located in the same WLAN or interact over a fast Internet connection. A significant throughput reduction arises in the case of a large network delay. Simulation results for the uplink transmission are presented as part of an overall strategy in which all improvements are made by optimizing the mobile end device only, an approach which allows performance improvements without any protocol modifications.

Keywords: TCP, 802.11x MAC, wireless networks, energy saving, adaptable protocols

1 Introduction

There is a general consensus that the Internet of the future will be accessed to a large part by mobile handheld devices over a wireless link. While such a solution will bring large benefits to the user in comparison to today's stationary PCs, it poses a number of technological challenges. Handheld devices have limited resources in terms of power supply and processing capabilities. Running TCP as an end-to-end protocol drains the battery of the handheld device. This is due to the fact that errors on a wireless link interfere with the TCP protocol, which can reduce the throughput drastically even when only a small percentage of the data is actually lost. This extends the time needed to complete TCP-based data exchange, leading to an increased power consumption.

In this paper, we focus on a prototype system in which a handheld device is used to access the Internet. TCP runs end-to-end between the mobile device and a remote server, and some version of the IEEE 802.11 wireless MAC protocol is used over the wireless link. Numerous effective suggestions have been made to improve the performance of such a system. However, these often involve modifications to either the link-layer protocol or to TCP itself, and generally these modifications are made on both the handheld device and the base station. An alternative approach is to restrict all modifications to the mobile handheld device [Meth02]. The main advantage is a straightforward integration into existing and future networks without any additional assumptions. The price of this approach is that two distinct problems must be solved: optimization of the uplink (from the mobile to the base station) and of the downlink (from the base station to the mobile device). While some features will be common to the two tasks, the main challenge remains, namely that there is no control over the transmission parameters in the downlink. Thus, the mobile device must be able to handle all incoming data efficiently, even when the channel conditions are not taken into account by the transmitter.

Before turning to specific results for the optimization of the uplink, we revisit the question of the interference between wireless link errors and TCP. The background is that Internet connections are becoming faster, which has a direct influence on the severity of the problem because smaller round-trip delays reduce the amount of in-flight data. The results show three distinct types of behavior as the network delay is given different values in the simulation. For long delays, the TCP throughput is limited by the congestion window and indeed drops drastically when packets are lost. As the network delay gets shorter, the throughput is dominated first by fast retransmissions and then by retransmission timeouts. For these two cases, the reduction of the data rate is considerably less severe when packets are dropped in the wireless channel. The conclusion is that, for modern high-speed networks, the TCP/wireless link problem tends to solve itself, at least for a number of realistic and important scenarios in which the round-trip delay is small.

In the remaining part of the paper, we discuss specific strategies to optimize the uplink transmission, deferring the downlink case to a future publication [Meth02a]. These strategies build on previous work done in the context of adaptive link layers, applied to the specific MAC protocol studied here. The focus is on the retransmission effort in the MAC layer and the effect of packet fragmentation. Overall, the results obtained here support the feasibility and usefulness of an asymmetric approach in which optimization is restricted to the mobile wireless device. This allows major improvements of the performance within existing network protocols.

The rest of this paper is organized as follows. After giving a short overview of related work, we describe our simulation setup and sketch the behavior of the TCP and MAC implementations. Next, we study the effect of the channel delay on the data rate reduction. After that we discuss our simulation results for the TCP/MAC interaction and some strategies to optimize the performance. The paper closes with a summary of the main results.

2 Related Work

During the last few years extensive research has been done to investigate the behavior of TCP in wireless networks. We focus here on investigations that describe the interference of TCP and lower protocol layers. [Desi95] states that introducing a reliable link layer protocol does not improve the TCP throughput since it interferes with TCP's retransmission timer. However, these authors do not take into account the coarse granularity of the TCP timers. Our simulation results as well as the measurements presented in [Pars00] show that this is indeed not a problem.

Interesting concepts for reliable link layer protocols which use TCP acknowledgements to detect packet losses have been researched in [Bala96]. Such link layer protocols are not suitable to shield TCP from the faulty channel, since TCP and the link layer protocol are not independent. In contrast to this type of link layer protocol, IEEE 802.11x uses MAC-layer acknowledgements and can shield TCP from packet losses.

[Tsao99] proposes to extend the original TCP with a so-called probing phase. After a transmission error, TCP sends probe packets to the receiver in order to detect when the channel becomes error-free again; at that time the transmission is continued. This approach helps to reduce energy consumption, but it cannot be applied without changing existing TCP implementations.

[Pars00] proposes a new link layer protocol called TULIP which provides a reliable service using a sliding window mechanism for flow control and error recovery. The new feature is that TULIP is aware of the required service and provides reliability only for packets requiring it. The presented measurements show that TULIP increases TCP's throughput slightly more than Snoop [Bala95]. TULIP can only be used if the already existing link layer protocols are adapted.

Thorough investigations of adaptive strategies to improve link-layer protocols have been done by Lettieri et al. [Lett98]. Similar ideas are also pursued here, in the context of the specific interaction between TCP and the IEEE 802.11 protocols.

3 Simulation Set-Up

The structure of the network for which our simulations are done corresponds to the system shown in Figure 1. TCP runs as the end-to-end transport protocol on the mobile device and the server. The IEEE 802.11 MAC protocol controls the single hop between the mobile and the base station.

Figure 1: Sketch of the relevant network structure.

Recall that TCP is fundamentally a sliding-window protocol, designed for a situation in which several packets can be underway at the same time. When the transmitter sends out a packet, it must always wait at least one round-trip time before it knows the response from the receiver. For example, the success of a fast retransmit attempt can only be determined after this delay. To include these effects, we simulated the wired Internet part as a two-way channel with a given data rate and transmission delay. No packets are discarded in this part of the simulation. An important quantity is the capacity of the two-way pipe, given by twice the delay-bandwidth product. In the results presented below, we used a data rate of 0.08 Mb/s in the Internet block with a delay of 1 s in either direction, giving a pipe of 20 kB. The size of the TCP receive buffer was chosen equal to this value. This is the lowest value which allows TCP to reach the full data rate by keeping the pipe full at all times.

In contrast, the IEEE 802.11x protocol family uses a stop-and-wait approach, which is appropriate for a short channel such as a single wireless hop. The main task is to mediate between different stations requesting access to the shared medium. This is handled by a CSMA/CA scheme, whereby each station with a data packet to transmit chooses a random slot number. By counting down after the completion of the previous transmission, the station with the lowest slot number obtains the right to transmit its packet, while the other stations defer until the next attempt. The receiver acknowledges each error-free packet by sending back a short ACK packet, which is given a higher priority than all normal data packets. There is some overhead associated with this algorithm (see Figure 2), which reduces the effective data rate. Furthermore, some collisions can still occur because the same slot number can be selected. In this case the colliding packets are corrupted, no ACKs are sent, and both transmitters choose new slot numbers after increasing the maximal slot count. Packets corrupted by channel errors are treated in the same way as collisions.

The simulations were done in the BONeS tool set from Cadence. The main effort was to implement realistic TCP and MAC layers; the IP and LLC layers were included in a rudimentary form. TCP was coded in C++ and was restricted to the established state, since connection establishment plays a minor role for the overall performance.
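The pipe capacity quoted above follows directly from the delay-bandwidth product. As a quick check, a minimal Python sketch (the parameter values are the ones given in this section; the helper function is ours, not part of the simulator):

    # Two-way pipe capacity = 2 * (one-way delay) * (bottleneck data rate).
    def pipe_capacity_bytes(rate_bps, one_way_delay_s):
        return 2.0 * one_way_delay_s * rate_bps / 8.0

    rate_bps = 0.08e6      # 0.08 Mb/s in the simulated Internet block
    delay_s = 1.0          # 1 s in either direction
    pipe = pipe_capacity_bytes(rate_bps, delay_s)
    print("pipe capacity: %.0f kB" % (pipe / 1000.0))   # -> 20 kB
    # The TCP receive buffer is set to this same value: the smallest window
    # that still lets TCP keep the pipe full at all times.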


Phase durations (µs)      PHY-PR  PHY-HD  DIFS   BO  SIFS  HEAD  DATA  ACK
IEEE 802.11, 1 Mb/s           96      32   128  375    28   568   800  240
IEEE 802.11, 2 Mb/s           96      32   128  375    28   284   400  184
IEEE 802.11a, 6 Mb/s          16       4    34   67    16   100   232   44

Figure 2: Required time in µs at the PHY level for the various phases in the transmission of a packet with 100 bytes of data, for IEEE 802.11 (at 1 Mbit/s and 2 Mbit/s) and for 802.11a (at 6 Mbit/s). Shown are the MAC DIFS idle period, the average backoff time, the PHY preamble and header, the sum of the TCP/IP/DL/MAC headers, the user data, the MAC SIFS delay, and the sum of all terms for the MAC acknowledgement.
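Adding up the phase durations of Figure 2 gives the per-packet airtime and the fraction of it spent on user data. A small sketch using the 2 Mb/s row (the values are copied from the figure; the dictionary packaging is ours):

    # Per-packet airtime from the Figure 2 phase durations (2 Mb/s case, 100-byte payload).
    PHASES_2MBPS_US = {
        "PHY-PR": 96, "PHY-HD": 32, "DIFS": 128, "BO": 375,
        "HEAD": 284, "SIFS": 28, "DATA": 400, "ACK": 184,
    }

    total_us = sum(PHASES_2MBPS_US.values())                  # 1527 us per packet
    overhead_us = total_us - PHASES_2MBPS_US["DATA"]          # 1127 us of overhead
    efficiency = PHASES_2MBPS_US["DATA"] / total_us           # ~26% of the airtime is payload
    print(total_us, overhead_us, round(100 * efficiency))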

All relevant algorithms (congestion control, fast retransmit, retransmit timer adjustment, etc.) were included. The MAC layer includes the full algorithm sketched above, based on a model of the PHY layer which takes into account the correct timings and delays. For the wireless channel, the simulations used the parameters of the IEEE 802.11 standard for the FHSS case with a data rate of 2 Mb/s.
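The CSMA/CA contention described above can also be illustrated with a toy model. The sketch below is our own simplification (a fixed contention window, a handful of stations, no retransmissions or exponential backoff), not the BONeS model used for the results:

    import random

    # One toy contention round: every station draws a backoff slot; a unique
    # minimum wins, a shared minimum means a collision (no ACK is returned).
    def contention_round(n_stations, cw=16):
        slots = [random.randrange(cw) for _ in range(n_stations)]
        return slots.count(min(slots)) == 1      # True = successful transmission

    random.seed(0)
    rounds = 100000
    for n in (2, 3, 4):
        ok = sum(contention_round(n) for _ in range(rounds))
        print("stations=%d  collision probability=%.3f" % (n, 1.0 - ok / rounds))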

4 TCP Throughput under Packet Loss

The topic of this paper is the response of the TCP protocol to the loss of packets on an IEEE 802.11 wireless link, and possible means to avoid the resulting reduction of the data rate. We have described our simulation set-up, which combines a realistic TCP implementation with a detailed model of the MAC and PHY layers. Before considering specific features of the TCP/MAC layer interaction, we study the general response of TCP to packet loss. This serves as a basis for the later discussion and as a check of the simulation, since we can compare to known results [Pad98].

It is well known that TCP throughput depends critically on the relation of the sliding window size to the capacity of the pipe. In fact, keeping the former small by reducing the congestion window is the mechanism by which TCP throttles its data rate when congestion is perceived. The pipe contains the packets which are under way, i.e., those that have been sent but for which no response has yet been received. The sliding window limits the amount of data which can be managed at the transmitter and receiver in this manner. If the sliding window is smaller than the pipe, the pipe cannot be kept full and the data rate is reduced. The effective data rate thus depends on the maximal amount of data which can be in flight at any time. This is the minimum of the send and receive windows and the pipe capacity:

    MiF = min(Swind, Rwind, 2 ∆ r0)                         (1)

where r0 is the nominal data rate and ∆ is the one-way transmission delay. The attainable effective data rate is

    reff = MiF / (2 ∆)                                      (2)

which equals r0 only if MiF is not limited by the send or receive window. If the send window is artificially kept small by the congestion window, the data rate is reduced accordingly.
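Equations (1) and (2) can be evaluated directly for the simulation parameters of Section 3 (0.08 Mb/s nominal rate, 20 kB receive window, 1 s one-way delay); the helper functions and the example window values are ours:

    # Eq. (1): maximum data in flight; Eq. (2): attainable effective data rate.
    def max_in_flight_bytes(swind, rwind, delay_s, r0_bps):
        return min(swind, rwind, 2.0 * delay_s * r0_bps / 8.0)

    def effective_rate_bps(mif_bytes, delay_s):
        return mif_bytes * 8.0 / (2.0 * delay_s)

    r0_bps, rwind = 0.08e6, 20000
    for delay_s, cwind in [(1.0, 20000), (1.0, 3300)]:
        mif = max_in_flight_bytes(cwind, rwind, delay_s, r0_bps)
        print("delay=%.0fs window=%5dB  r_eff=%5.1f kb/s"
              % (delay_s, cwind, effective_rate_bps(mif, delay_s) / 1000.0))
    # A full 20 kB window keeps the pipe full (80 kb/s); a congestion window
    # shrunk to a few kB cuts the effective rate to a fraction of that.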

We are, of course, looking for the well-known phenomenon that TCP throughput decreases drastically even if only a few packets are lost in the channel. Beyond this, we are specifically interested in the effect of the channel delay on this problem. For each channel delay, the data rate reduction is dominated by a different factor.

Retransmission timeouts (RTO). Figure 3 shows the TCP throughput for a channel delay of 10 ms in the case of regular and random packet losses, respectively. For regular losses the data rate does not differ noticeably from the theoretically achievable throughput (see Figure 3). This indicates that the time that passes until the sender recognizes that a fast retransmit was successful does not play a role for very short channel delays.


For random losses, however, the throughput is decreased by nearly 40 per cent for a packet error rate of six per cent. The different behavior for regular and random losses results from the different error recovery mechanisms that were triggered: in both cases fast retransmits were used to recover, but RTOs were triggered only in the case of random losses (see Table 1). Since the channel delay of 10 ms reduces the pipe capacity to 0.2 kB, the reduction of the congestion window has no influence on the achieved data rate. Thus, the reduction of the data rate results exclusively from RTOs. Inspection of the TCP packet traces shows two separate error patterns by which the data transfer is blocked until a retransmission timeout (RTO) starts it up again, which entails large delays:

• In a fast retransmit, the retransmitted packet itself can get lost. In that case, TCP blocks after it has exhausted the available send window.

• A number of separate fast retransmits can occur close together, but without overlap of the associated fast retransmit phases. This reduces the congestion window rapidly until, for the last lost packet, not enough packets are underway to trigger the fast retransmission by three duplicate acknowledgements.

Packet error rate (%)      1    2    3    4    5    6
regular   FST             20   40   62   84  105  126
          RTO              0    0    0    0    0    0
random    FST             11   25   39   45   45   66
          RTO              1    3    3    6   16   24

Table 1: Number of fast retransmit phases (FST) and retransmission timeouts (RTO) for regular and random packet losses and a channel delay of 10 ms.

Figure 3: Response of the TCP throughput to lost packets, for a channel delay of 10 ms. Each value was determined by averaging over ten simulations of a 2 MB transfer in which packets were deleted in the uplink, randomly or at regular intervals, according to the given packet error rate. The packet length was 500 bytes.

Fast retransmission. As long as the channel delay is very short, no data rate reduction is observed for regular losses. This changes when the channel delay is increased to 100 ms (see Figure 4): the data rate is then reduced by about forty per cent for a packet error rate of six per cent. Since the channel delay of 100 ms reduces the pipe capacity to 2 kB, the shrunken congestion window does not limit the data rate. Table 2 shows that for regular packet losses no retransmission timeouts are triggered. This implies that the data rate reduction is caused by the fast retransmissions, i.e., by the fact that the data transfer is blocked until a fast retransmit has succeeded. In the case of random packet losses the data rate is reduced by an additional ten per cent (see Figure 4); Table 2 shows that both fast retransmissions and retransmission timeouts then occur. The comparison with the case of regular losses allows us to quantify the two effects: fast retransmissions account for a data rate reduction of forty per cent, whereas RTOs cause a data rate reduction of only ten per cent. In sum, for channel delays of about 100 ms fast retransmissions are the dominant factor in the data rate reduction.


Packet error rate (%)      1    2    3    4    5    6
regular   FST             20   40   62   84  105  124
          RTO              0    0    0    0    0    0
random    FST             10   29   41   51   55   66
          RTO              1    3    2    6   13   23

Table 2: Number of fast retransmit phases (FST) and retransmission timeouts (RTO) for regular and random packet losses and a channel delay of 100 ms.

Packet error rate (%)      1    2    3    4    5    6
regular   FST             20   40   62   84  100  100
          RTO              0    0    0    0    0    0
random    FST             12   29   41   52   62   77
          RTO              0    1    2    3    7   13

Table 3: Number of fast retransmit phases (FST) and retransmission timeouts (RTO) for regular and random packet losses and a channel delay of 1000 ms.

Congestion window. For a delay of 1000 ms, TCP indeed shows extreme sensitivity to lost packets. For a packet error rate of only 0.033 (i.e., the loss of one packet in 300) the throughput has already dropped to one-sixth (see Figure 5). The same data rate reduction is observed for random as well as for regular losses; in the latter case no RTOs are triggered. The number of fast retransmissions is in the same range as for a channel delay of 100 ms, for regular as well as for random losses. However, the repeated fast-retransmit operations keep the congestion window down to a few kB, permitting only a few packets in flight most of the time. This is much smaller than the pipe capacity of 20 kB, leading to a reduced effective data rate according to Eqs. (1, 2). For such a situation, there is a major problem when TCP is used over a lossy wireless link.


Figure 4: Response of the TCP throughput to lost packets, for a channel delay of 100 ms. Each value was determined by averaging over ten simulations of a 2 MB transfer in which packets were deleted in the uplink, randomly or at regular intervals, according to the given packet error rate. The packet length was 500 bytes.

In sum, we find that the magnitude of the basic problem when TCP runs over lossy wireless links depends significantly on the channel delay. When the delay is in the order of one second or larger, the pipe capacity is large. TCP then reacts sensitively to lost packets, since it reduces the congestion window until it is smaller than the pipe. For shorter channel delays, the pipe is so small that this effect does not play a role. Although the TCP performance still lies below the theoretical limit, the problem is less severe and is caused by retransmission timeouts and by the delays associated with fast retransmits. Together the results indicate that (at least for modern high-speed Internet connections and in-house WLANs) the behavior of TCP should be adequate if error rates are not too extreme. In the rest of this paper, we focus on the case where there is a definite problem, based on a channel delay of 1000 ms in each direction.


Figure 5: Response of the TCP throughput to lost packets, for a channel delay of 1000 ms. Each value was determined by averaging over ten simulations of a 2 MB transfer in which packets were deleted in the uplink, randomly or at regular intervals, according to the given packet error rate. The packet length was 500 bytes.

5 Results and Discussion

After describing how losses on the MAC level interfere with TCP, we discuss some options to improve the performance.

5.1 Basic TCP/MAC interaction

During the simulation, TCP generates packets and places them in the output queue. The MAC layer tries to transmit them over the wireless channel, which may or may not succeed. If it succeeds, the packet is delayed in the Internet part and then passed up to TCP on the receiver (see Figure 1). There, TCP acknowledgements are generated and sent back in the same way. Note that there are two data streams competing for the wireless channel, namely the TCP data packets and the returning TCP acknowledgements. For a faulty wireless channel, it can happen that the MAC layer cannot deliver some packets. The simulation shows that this leads to two types of interference with the TCP protocol, as has been pointed out before [Laks97, Tsao99]:


• For moderate bit error rates (BERs), loss of a single packet triggers the TCP fast retransmit algorithm. Unfortunately, TCP then reduces the congestion window and (more significantly) the threshold between exponential and linear increase of the window. This reduces the data rate severely since the congestion window stays small on average, preventing TCP from keeping the pipe filled. For example, loss of one packet in 100 already halves the data rate.

• At higher BERs, burst errors occur in which several packets are lost in a short interval (e.g. within one round-trip time). With high probability the TCP connection then blocks until a retransmission timeout occurs.

To handle burst errors more effectively, different flavors of TCP (such as New Reno, also used here) have been introduced. We find that this does not greatly improve the situation because too many things can go wrong: the retransmitted packet, an acknowledgement, or some other packet needed to keep the pipe flowing can all be lost once error rates are in this range.

For the user, there are two unpleasant consequences: the transmission rate is reduced, and energy is wasted as more attempts are needed to transfer each packet. To improve the situation, the MAC layer has a number of options:

• It can modify the retransmit strategy used on the MAC layer itself.

• It can fragment a packet into smaller sections and send these separately, thereby reducing the data loss when an error occurs.

• Possibly it can switch to a more robust but slower modulation scheme.

We discuss these options in more detail next.

5.2 Modifying the MAC retransmission

When a packet is lost (either by a collision or by a channel error), the MAC layer retransmits the packet up to a maximal number of attempts (denoted here by LRL and by aLongRetryLimit in the standard).


Figure 6: Overall data rate as a function of the wireless channel bit error rate for various values of the MAC long retry limit.

When the limit is reached, the packet is discarded and the procedure is restarted with the next packet in the queue. However, it is the job of TCP to make sure that every packet arrives sooner or later. When the MAC layer discards a packet, the consequence is that TCP must retransmit it, albeit at a later time. In other words, every packet must be transmitted repeatedly until it finally arrives, and the only question is which part of the protocol stack does this. By varying LRL in the simulation, we can shift the burden of retransmitting lost packets between the MAC and TCP layers.

We display the results in two ways for simulations with a fixed packet payload length of 1000 bytes. Figure 6 shows the data rate as a function of the BER for various choices of LRL. This shows that the value of LRL makes a large difference in the transmission rate. The best performance is reached when LRL is made large. Figure 7 shows the total number of bytes sent into the wireless channel, which is a measure of the energy consumption. Interestingly, this turns out to be essentially independent of LRL. Thus (for a given packet length, coding scheme, and BER) the required energy is a constant, while the required time becomes significantly larger if TCP takes care of the retransmissions. Some thought shows that this is reasonable.


Figure 7: Total number of bytes transmitted into the wireless channel as a function of the BER and the MAC long retry limit.

At a given BER, a certain average number of attempts NTRY is needed until a packet is finally transferred correctly. This number depends on the quality of the channel but not on which layer does the retransmission. More precisely, at a BER p there is a fixed probability P (the packet error rate) that a packet is corrupted during the transmission. The probability of a successful delivery in a given attempt is 1 − P, so that NTRY = 1/(1 − P) tries are needed on average. Assuming a typical time τ for each transmission attempt (as given in Figure 2), the total time to transmit the data is the number of distinct data packets times τ NTRY, giving the overall data rate:

    R = (packet length) (1 − P) / τ                         (3)

This is the best performance we can attain at the given BER and packet length, and it is reached only as the retry limit is made large. The conclusion is that we always lose a certain amount of energy as we throw the packet against the faulty channel until it finally gets through, but that we only make things worse if this is done by TCP and not by the MAC layer. The optimal MAC strategy for TCP data streams could thus be called “never give up” (NGU): there is no point in ever dropping a packet and proceeding to the next one at the MAC level.
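The retry accounting and Eq. (3) can be made concrete with a few lines of Python. The bit error rate and header size below are illustrative assumptions; the per-attempt time is the 2 Mb/s value that follows from Figure 2 for a 1000-byte payload:

    # Expected retransmission effort and throughput under the "never give up" strategy,
    # assuming independent bit errors (p = BER).
    def packet_error_rate(ber, packet_bytes):
        return 1.0 - (1.0 - ber) ** (8 * packet_bytes)

    def expected_attempts(per):
        return 1.0 / (1.0 - per)                 # N_TRY = 1 / (1 - P)

    def data_rate_bps(payload_bytes, per, attempt_time_s):
        return payload_bytes * 8.0 * (1.0 - per) / attempt_time_s   # Eq. (3)

    ber = 1e-4                       # assumed channel BER
    payload, headers = 1000, 71      # payload plus roughly 71 header bytes (assumed)
    tau = (1127.0 + 4000.0) * 1e-6   # per-attempt airtime at 2 Mb/s, from Figure 2
    per = packet_error_rate(ber, payload + headers)
    print("P=%.2f  N_TRY=%.2f  R=%.0f kb/s"
          % (per, expected_attempts(per), data_rate_bps(payload, per, tau) / 1000.0))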


Figure 8: Transmission time for 1 MByte for fragmentation on the TCP and MAC layers, respectively, as a function of the payload per packet, at different channel error rates.

Some notes on this strategy are in order:

• Even for a large number of MAC-layer retransmits, the total delay is generally small compared to the minimal TCP retransmission timeout. In any case, TCP adjusts the timeout using measured round-trip times.

• The strategy can only be used if we are sure this is a TCP data stream. For other types of data, it might well degrade the performance. We return to this below.

• Our simulations were done for static channel conditions. However, the basic idea of shielding TCP from packet loss by increasing the effort at the MAC level should be valid for time-varying channels.

5.3 TCP vs. MAC Layer Fragmentation

In the case of good channel conditions, the only effect of reducing the packet length is to increase the overhead needed to transmit a certain chunk of data.


Figure 9: Bytes transmitted over the channel as a function of the packet length and the BER.

But if the channel becomes bad, large packets are more likely to be damaged, increasing the number of attempts needed to transmit a packet correctly. This slows down the transmission and increases the power consumption. In order to minimize these effects, it makes sense to adapt the packet length to the channel conditions [Lett98]. This involves a tradeoff between the improved probability that a packet is transmitted correctly and the increased overhead. One question of interest is whether a reasonable compromise can be found which gives adequate performance over a wide range of BERs, or whether a significant further gain can be obtained by choosing the length dynamically depending on the error rate.

The packet length can be changed either by reducing the size of a TCP packet or by introducing fragmentation at the MAC level. The most important difference is the amount of overhead added to the total data volume by each newly created packet. For example, for the IEEE 802.11 standard at 2 Mbit/s, Figure 2 shows that transmission of the overhead takes 1127 µs, while the user data requires 4000 µs and 400 µs for payloads of 1000 and 100 bytes, respectively. The percentage of the air time used to transmit actual user data thus drops from 78% to about 26%. The point is that a reasonable degree of fragmentation quickly leads to a situation in which the user data portion is small compared to the overhead. Figure 2 shows a similar picture for the IEEE 802.11a case. In contrast, the overhead added by MAC-layer fragmentation is smaller, since it avoids the higher-level headers, the SIFS delay, and the backoff slots.

The effect of TCP and MAC fragmentation on the data throughput at various error rates is compared in Figure 8. As expected, performance is always better for MAC fragmentation. While the overhead is acceptable down to payloads of about 200 bytes in the MAC case, the throughput for TCP fragmentation decreases steadily with reduced packet size. Thus, fragmentation at the MAC layer is clearly preferable, leading to an improvement by a factor of two to three in some cases. Furthermore, the curves show that a fixed payload length of around 300 bytes gives a reasonable performance over a wide range of channel conditions when MAC fragmentation is used, whereas a fixed choice is difficult to find for TCP-layer fragmentation. Finally, Figure 9 shows that the same conclusions hold when the focus is on energy consumption.
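The qualitative difference between the two fragmentation schemes can be sketched with the Figure 2 timings. The per-fragment overhead values below are rough assumptions (the full per-packet overhead for a new TCP packet, a much smaller cost for an extra MAC fragment), so the output is indicative only and not a reproduction of Figures 8 or 9:

    # Airtime efficiency versus fragment payload at 2 Mb/s for two fragmentation schemes.
    RATE_BPS = 2e6
    TCP_FRAG_OVERHEAD_US = 1127.0   # every TCP packet pays the full overhead (Figure 2)
    MAC_FRAG_OVERHEAD_US = 400.0    # assumed cost of an extra MAC fragment (PHY preamble/
                                    # header, MAC header, ACK), no higher-layer headers

    def efficiency(payload_bytes, overhead_us):
        data_us = payload_bytes * 8.0 / RATE_BPS * 1e6
        return data_us / (data_us + overhead_us)

    for payload in (1000, 500, 300, 200, 100):
        print("%5d B   TCP-level %4.0f%%   MAC-level %4.0f%%"
              % (payload, 100 * efficiency(payload, TCP_FRAG_OVERHEAD_US),
                 100 * efficiency(payload, MAC_FRAG_OVERHEAD_US)))
    # MAC-level fragmentation stays efficient down to a few hundred bytes per
    # fragment, whereas TCP-level fragmentation degrades steadily.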

Figure 10: Power consumption of the transmitter plus receiver of the mobile device for the transmission of 1 MByte, as a function of the SNR and the data rate. The LRL was varied only for 2 Mb/s. The current drawn was 185 mA for the receiver and 300 mA for the transmitter.

Figure 11: Power consumption of the transmitter plus receiver of the mobile device for the transmission of 1 MByte, as a function of the SNR and MAC fragmentation. The MAC packet length was 300 bytes, and the LRL was set to seven. The current drawn was 185 mA for the receiver and 300 mA for the transmitter.

5.4 Adjusting the Data Rate

The IEEE 802.11 standard uses the data rates 1 Mb/s and 2 Mb/s. As long as the wireless channel is good, it is advantageous to send at the higher data rate, since this minimizes the time interval in which the receiver and the transmitter are switched on. When the channel conditions become worse, the lower data rate consumes less power, since it is more robust, i.e., the transmitted data are less likely to be corrupted during transmission. The problem to solve is when to reduce the data rate. As long as the data rate is the only parameter that can be adjusted to the channel conditions, the decision is straightforward: if the SNR (signal-to-noise ratio) drops to about fifteen, the data rate should be reduced. This rule remains valid if only the LRL may be adjusted in addition; Figure 10 shows that the LRL does not enlarge the range in which sending at 2 Mb/s needs less power than sending at 1 Mb/s.

The situation becomes more complicated when we take fragmentation into account. Fragmentation shifts the balance between user data and overhead towards the overhead. This implies that the power consumption is increased when fragmentation is used while the channel is in a good state.


Figure 11 depicts this for a data rate of 1 Mb/s for SNR values down to about nine. Note, however, that fragmentation enlarges the range in which the power consumption at 2 Mb/s is lower than that at 1 Mb/s (see Figure 11): it shifts the threshold for reducing the data rate from an SNR of 15 to an SNR of 12.5. In this interval the power consumption is about 0.15 mAh less than for a data rate of 1 Mb/s. This is because smaller packets are more likely to be delivered correctly, and because the time interval for which the transmitter has to be switched on is shorter than when sending at 1 Mb/s. Thus, the first parameter to adjust is the MAC packet length, i.e., fragmentation should be started; only then should the data rate be reduced. Figure 11 also shows that the power consumption at 1 Mb/s is increased by fragmentation as long as the SNR is higher than nine. This indicates that fragmentation should be switched off again after reducing the data rate.

To summarize, the adjustment of the MAC parameters fragmentation, data rate, and LRL is not straightforward. A strategy that starts fragmentation first, then reduces the data rate, and finally increases the LRL leads to the lowest power consumption. Fragmentation and the LRL have to be re-adjusted for each data rate individually. In particular, large LRLs (> 7) are useful only if all other means have not led to a correct transmission of the packet. Further simulations are needed to determine when fragmentation should be started, what the optimal fragment length is, and when the data rate should be reduced.
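The ordering argued for above (fragment first, then lower the data rate, then raise the LRL) can be written down as a simple decision rule. The SNR thresholds 12.5 and 9 are taken from the discussion of Figures 10 and 11; the point at which fragmentation is switched on, the fragment size of 300 bytes, and the LRL values are illustrative assumptions, since the paper leaves these to further simulations:

    # Sketch of an SNR-driven parameter choice following the order discussed above.
    # Thresholds 12.5 and 9 follow the text; 15, 300 bytes and the LRL values are assumed.
    def mac_parameters(snr_db):
        if snr_db >= 15.0:
            return {"rate_mbps": 2, "fragment_bytes": None, "lrl": 7}   # good channel, no adaptation
        if snr_db >= 12.5:
            return {"rate_mbps": 2, "fragment_bytes": 300, "lrl": 7}    # fragment first
        if snr_db >= 9.0:
            return {"rate_mbps": 1, "fragment_bytes": None, "lrl": 7}   # then reduce the rate,
                                                                        # fragmentation off again
        return {"rate_mbps": 1, "fragment_bytes": 300, "lrl": 30}       # last resort: fragment and
                                                                        # never give up

    for snr in (20, 13, 10, 5):
        print(snr, mac_parameters(snr))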

6 Summary and Conclusions

In this paper, we have shown that the magnitude of the detrimental effect of wireless channel errors on the TCP protocol is highly dependent on the channel delay. As long as the channel delay is in the range of 100 ms, the reduction of the data rate can be tolerated; for large channel delays additional means are needed. We have used simulations to evaluate options which reduce the effects of lossy wireless channels on the TCP protocol. Realistic models of the TCP and IEEE 802.11 MAC protocols were used in order to obtain quantitative information.


The results demonstrate that there is a major advantage if the MAC layer can be made aware of the type of data it is handling, i.e., data from a reliable connection-oriented service as opposed to a UDP-type stream. This allows the MAC layer to be optimized for this case, without caring about the effect on other data types. The optimal behavior is then a “never give up” strategy. Our simulations show that increasing the LRL is a suitable means to shield TCP from transmission errors. They also show that the number of bytes transmitted into the channel is independent of the protocol layer that does the retransmission. This is due to the fact that a reliable transport protocol must deliver all packets eventually, so that any packet discarded by the MAC layer will be retransmitted by TCP anyway, albeit with negative side effects such as congestion window closure or blockage until a timer expires. From the viewpoint of power consumption it is therefore always advantageous to do the retransmission on the MAC layer.

Results were also obtained concerning optimization of the performance by fragmentation. Since the overhead of the MAC protocol becomes a significant portion of the total data as packets get smaller, fragmentation on the MAC layer is preferable to fragmentation on the TCP layer. Our simulations indicate that fragmentation should be preferred to lowering the data rate. Thus, the best implementation of the “never give up” strategy would start with fragmentation on the MAC level, then lower the data rate step-wise, and finally increase the LRL. Note that further research is needed to determine when fragmentation should start and when the data rate should be lowered.

Any approach along the lines indicated leads to additional complications because of the need to permit interaction between the application and the MAC layer. Perhaps this would be comparatively straightforward if there is only one type of data at a time, but it would need more effort if packets from different streams are to be interleaved. However, the results show that the gains from such vertical protocol interactions could make the effort worthwhile.

References

[Bala95] Balakrishnan, H.; Seshan, S.; Katz, R. H.: Improving Reliable Transport and Handoff Performance in Cellular Wireless Networks. ACM Wireless Networks, Vol. 1, (4), 1995.

[Bala96] Balakrishnan, H.; Padmanabhan, V. N.; Seshan, S.; Katz, R. H.: A Comparison of Mechanisms for Improving TCP Performance over Wireless Links. IEEE/ACM Transactions on Networking, Dec. 1997.

[Desi95] DeSimone, A.; Nanda, S.: Wireless Data: Systems, Standards, Services. ACM Wireless Networks, Vol. 1, (3), 1995.

[Krae99] Kraemer, R.; Methfessel, M.: A Vertical Approach to Energy Management. Proceedings of MoCu 1999.

[Laks97] Lakshman, T. V.; Madhow, U.: The Performance of TCP/IP for Networks with High Bandwidth-Delay Products and Random Loss. IEEE/ACM Transactions on Networking, Vol. 5, 1997.

[Lett98] Lettieri, P.; Srivastava, M. B.: Adaptive Frame Length Control for Improving Wireless Link Throughput, Range, and Energy Efficiency. Proceedings IEEE INFOCOM '98, 1998.

[Meth02] Methfessel, M.; Dombrowski, K.; Langendörfer, P.; Frankenfeldt, H.; Babanskaja, I.; Matthaei, I.; Kraemer, R.: Vertical Optimization of Data Transmission for Mobile Wireless Terminals. Submitted to IEEE Personal Communications.

[Meth02a] Methfessel, M.; Frankenfeldt, H.; Dombrowski, K.; Kraemer, R.: Optimizing the Downlink for Mobile Wireless Devices. Submitted to Wireless and Optical Communications Conference, Banff, 2002.

[Pad98] Padhye, J.; Firoiu, V.; Towsley, D.; Kurose, J.: Modeling TCP Throughput: A Simple Model and its Empirical Validation. Proceedings ACM SIGCOMM '98, 1998.

[Pars00] Parsa, C.; Garcia-Luna-Aceves, J. J.: Improving TCP Performance over Wireless Networks at the Link Layer. ACM Mobile Networks and Applications Journal, Special Issue on Mobile Data Networks: Advanced Technologies and Services, Vol. 5, (1), 2000.

[Tsao99] Tsaoussidis, V.; Badr, H.; Verma, R.: Wave and Wait Protocol (WWP): An Energy-Saving Transport Protocol for Mobile IP Devices. Proceedings of the 7th International Conference on Network Protocols (ICNP'99), IEEE Computer Society Press, 1999, pp. 301-308.
