Performance Enhancement of TCP in Dynamic Bandwidth Wired and Wireless Networks

Neng-Chung Wang · Jong-Shin Chen · Yung-Fa Huang · Chi-Lun Chiou

© Springer Science+Business Media, LLC. 2008

Abstract  In this paper, we propose a scheme that dynamically adjusts the slow start threshold (ssthresh) of TCP. The ssthresh estimation is used to set an appropriate ssthresh, and a good ssthresh improves the transmission performance of TCP. For the congestion avoidance state, we present a mechanism that probes the available bandwidth. We adjust the congestion window size (cwnd) appropriately by observing the round trip time (RTT), and we reset the ssthresh after quick retransmission or timeout using the ssthresh estimation. The TCP sender can then enhance its performance by using the ssthresh estimation and the observed RTT. Our scheme defines what is considered an efficient transmission rate and achieves better utilization than other TCP versions. Simulation results show that our scheme effectively improves TCP performance. For example, when the average bottleneck bandwidth is close to 30% of the whole network bandwidth, our scheme improves TCP performance by at least 10%.

Keywords  Congestion avoidance · Congestion window · Round trip time · Slow start · TCP

N.-C. Wang (B)
Department of Computer Science and Information Engineering, National United University, Miao-Li 360, Taiwan, R.O.C.
e-mail: [email protected]

J.-S. Chen · Y.-F. Huang
Graduate Institute of Networking and Communication Engineering, Chaoyang University of Technology, Taichung 413, Taiwan, R.O.C.
e-mail: [email protected]

Y.-F. Huang (B)
e-mail: [email protected]

C.-L. Chiou
Department of Computer Science and Information Engineering, Chaoyang University of Technology, Taichung 413, Taiwan, R.O.C.
e-mail: [email protected]

1 Introduction

The performance of the transmission control protocol (TCP) [18,20] improved greatly when congestion avoidance and control algorithms were first introduced. TCP is currently the most widely used Internet transport protocol. TCP was primarily designed for wired networks, where the bit error rate is negligible. Each TCP packet carries a sequence number. A receiver acknowledges that a packet has been received by sending back to the sender a corresponding acknowledgement (ACK) carrying the sequence number of the next expected packet. Packet loss or the reception of out-of-order packets indicates failures. To handle such failures, TCP implements flow control and congestion control based on the sliding window and additive increase multiplicative decrease (AIMD) algorithms.

Rapidly developing wireless technology is being combined with wired technology, and wireless networks and communication services may gradually dominate the marketplace. Research into mobile hosts will therefore be very important in the next-generation Internet. The popularization of wireless communication and portable personal computers is giving rise to a new discipline of computing known as mobile computing [7,11,21], and research activity in mobile wireless data networks has increased accordingly. However, the performance of TCP degrades when a TCP connection involves a mobile host communicating over a wireless link, because the packet loss rate of wireless links is much higher than that of wired networks [17,22]. Therefore, we propose a scheme that observes network variation in detail regardless of whether wired or wireless links are used.

In this paper, we propose an approach to enhance end-to-end TCP performance by modifying TCP Reno. We modify the slow start state and the congestion avoidance state without any additional physical cost. The central idea of our approach is to introduce a new scheme to TCP in the slow start and congestion avoidance states that dynamically sets the slow start threshold and adjusts the congestion window.

The functionality of TCP is designed to be satisfactory not only for traditional Internet usage but also for a variety of network environments. TCP must provide a reliable service with the following requirements:
1. TCP must offer effective solutions for error recovery and congestion avoidance in wired/wireless networks.
2. TCP must share resources and potential bandwidth fairly.
3. TCP must dynamically utilize the currently available bandwidth.

The present Internet infrastructure provides no guarantee on bandwidth or latency, and there is no way to avoid packet losses in the communication process. Packets must be transmitted through many relay routers to arrive at a destination. An overloaded network and a lack of router resources first lead to network congestion and then cause packet loss. TCP is the standard for reliable data transport on the Internet. TCP achieves reliability by using cumulative ACKs that are sent by the receiver to the sender on a regular basis; a cumulative ACK acknowledges the successful receipt of all data received in order. These ACKs are used by the sender to determine whether packet delivery was successful or whether a packet has been lost. The sender performs a retransmission after three duplicate ACKs (DUPACKs) or a timeout.
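Because the rest of the paper builds on this AIMD behavior, the following minimal Python sketch, our own illustration rather than code from the paper, shows how a sender's cwnd and ssthresh typically react to ACKs, three DUPACKs, and a timeout. Units are segments, and the function and event names are assumptions.

    def aimd_update(cwnd, ssthresh, event):
        """Illustrative AIMD congestion-window update (units: segments).

        event is one of 'ack', 'dupack3', or 'timeout'.
        """
        if event == 'ack':
            if cwnd < ssthresh:
                cwnd += 1            # slow start: +1 per ACK, roughly doubling per RTT
            else:
                cwnd += 1.0 / cwnd   # congestion avoidance: about +1 segment per RTT
        elif event == 'dupack3':     # three DUPACKs: multiplicative decrease
            ssthresh = max(cwnd / 2, 2)
            cwnd = ssthresh
        elif event == 'timeout':     # retransmission timeout: restart slow start
            ssthresh = max(cwnd / 2, 2)
            cwnd = 1
        return cwnd, ssthresh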
Error control in TCP performs poorly in a fading situation because TCP always interprets a packet loss as a result of congestion and does not consider errors from wireless links. When a mobile node performs a handoff, it is temporarily disconnected from the network; hence, packet loss is natural.

TCP detects a loss through a timeout or the reception of DUPACKs and reacts by reducing the sending rate to adapt to the available bandwidth. It uses a sliding window scheme for rate control: the size of the window indicates the amount of data that can be sent without an acknowledgement, and the window size is controlled by the additive-increase, multiplicative-decrease model. When no packet loss occurs, the window size is slowly incremented until the maximum capacity is reached.

The rest of this paper is organized as follows. Section 2 describes related work on TCP improvement schemes. The proposed scheme is presented in Sect. 3. Performance evaluation is given in Sect. 4. Finally, concluding remarks are made in Sect. 5.

2 Related Work

Over the past few years, several solutions have been proposed to improve TCP performance over high-speed networks. How to efficiently utilize network bandwidth is an important issue, and several TCP versions (e.g., Westwood, Reno, and Vegas) attempt to improve network utilization.

2.1 TCP Reno

TCP Reno [12] induces packet loss to estimate the available bandwidth of a network. It is based on five fundamental states: slow start, congestion avoidance, fast retransmission, fast recovery, and retransmission timeout. When there is no packet loss, TCP Reno increases its cwnd by one during each RTT; when there is a packet loss, it reduces its cwnd to one half of the current cwnd. These behaviors are called additive increase and multiplicative decrease, respectively. However, TCP Reno fails to achieve fairness because TCP is not a synchronized rate-based control scheme. The congestion avoidance mechanism adopted by TCP Reno causes periodic oscillation of the cwnd due to its constant updating. The rate at which each connection updates its cwnd depends on the RTT of the connection. Hence, connections with shorter delays can update their cwnd faster than connections with longer delays and thereby take an unfair share of the bandwidth.

2.2 TCP Westwood

TCP Westwood [8,15] enhances the window control and back-off process. A TCP Westwood sender observes the acknowledgement reception rate and estimates the eligible bandwidth of the current connection. TCP Westwood is a sender-side modification of the TCP congestion window algorithm that improves upon the performance of TCP Reno in wired as well as wireless networks, and it fully complies with the end-to-end TCP design principle. Whenever a TCP Westwood sender detects a packet loss, indicated by a timeout or the reception of three DUPACKs, the sender estimates the bandwidth to properly set the congestion window and the slow start threshold. TCP Westwood thus avoids overly conservative reductions of cwnd and ssthresh. The bandwidth estimate can be used by the congestion control algorithm executed at the sender side of a TCP connection; however, the available bandwidth estimation algorithm is complex and cannot follow the rapid changes of a hybrid mobile network. In TCP Westwood, the congestion window behavior during the slow start and congestion avoidance states is unchanged. The general idea is to use the bandwidth estimate (BWE) to set cwnd and ssthresh after a congestion event. We first explain the general algorithm behavior after n DUPACKs are received.

Assume that RTTmin is the minimum RTT measured by the TCP source. The pseudo code of the algorithm is as follows:

    if (n DUPACKs are received)
        ssthresh = BWE × RTTmin / seg_size;
        if (cwnd > ssthresh)    /* congestion avoidance */
            cwnd = ssthresh;
        endif
    endif

In the congestion avoidance state, TCP explores the extra available bandwidth; consequently, when a sender receives n DUPACKs, the network capacity has been hit. When one or more segments are dropped due to natural losses in the wireless link, the sender receives DUPACKs that are not caused by a congestion event. In TCP Westwood, the BWE reflects an achievable bandwidth. The goal is to obtain an appropriate rate with the ideal window in order to avoid bottleneck congestion. After n DUPACKs are received, TCP Westwood sets both cwnd and ssthresh equal to the ideal window, and the standard fast retransmission/fast recovery then follows. TCP exploits the slow-start state to probe for available bandwidth; this is the rationale for using the BWE to set ssthresh. After a timeout, cwnd and ssthresh are set to 1 and BWE, respectively. The timeout behavior is given by the pseudo code below:

    if (timeout expires)
        cwnd = 1;
        ssthresh = BWE × RTTmin / seg_size;
        if (ssthresh < 2)
            ssthresh = 2;
        endif
    endif

2.3 TCP Vegas

TCP Vegas [5] uses a congestion avoidance algorithm that utilizes bandwidth more efficiently than TCP Reno. Even so, it cannot guarantee fair bandwidth sharing. TCP Vegas computes the expected and the actual flow rates: the expected rate is equal to cwnd/RTTmin, and the actual rate is equal to cwnd/RTT, where RTTmin is the minimum RTT measured by the TCP source. During the slow start state, TCP Vegas doubles its congestion window every other RTT. In the congestion avoidance state, TCP Vegas calculates the difference between the window size and the number of acknowledged packets during the RTT using the following equation:

    diff = (Expected Rate − Actual Rate) × RTTmin

The cwnd is increased by one if diff < α, decreased by one if diff > β, and unchanged if α ≤ diff ≤ β. TCP Vegas works well with homogeneous TCP versions. In mixed scenarios with different TCP versions (e.g., Westwood, Reno), TCP Vegas sources receive very little throughput [16]; consequently, TCP Vegas has not yet been widely deployed in the Internet. When a DUPACK is received, TCP Vegas checks whether the difference between the current time and the timestamp recorded for the relevant segment is greater than the timeout value.

If it is, TCP Vegas retransmits the segment without waiting for three DUPACKs. This modification avoids a situation in which the sender never receives three DUPACKs because a DUPACK is lost or delayed. With respect to the multiple-packet loss problem, TCP Vegas pays particular attention to the first two partial ACKs after a retransmission. The sender determines whether a multiple-packet loss has occurred by checking the timeout of unacknowledged packets. When any timeout occurs, the sender immediately retransmits the packet without waiting for any DUPACKs. TCP Vegas also compares the timestamp of the retransmitted packet with the timestamp of the last window decrease. If the retransmitted packet was sent before the last decrease, the sender does not decrease the cwnd upon receiving DUPACKs for this packet, because the loss occurred under the previous window size. TCP Vegas is a significant improvement in that it avoids unnecessary reductions of cwnd through its quick retransmission of lost packets.

2.4 Other TCP Versions

Split connection approaches [13] can also improve the throughput of TCP connections. The connection between a sender and a receiver is split into two separate connections, one between the fixed sender and the base station (BS) and the other between the base station and the mobile receiver. More generally, the split connection approach splits a TCP connection between a sender and a receiver into multiple TCP connections, and a specialized protocol tuned to the wireless environment may be used over the wireless hop. In split connection approaches, the relaying routers or the BS provide feedback reflecting the network condition of the separate connections only. The separate connections may be wired or wireless and are sometimes short end-to-end links. The sender makes congestion control decisions using feedback from the BS. I-TCP [3], M-TCP [6], Freeze-TCP [9], and METP [10] are examples of split connection approaches. I-TCP uses standard TCP for its connection over a wireless link. Like other split connection approaches, it attempts to separate loss recovery over a wireless link from that over a wired network, thereby shielding the original TCP sender from the wireless link. Some of these approaches do not maintain the end-to-end semantics of TCP, and these protocols may require state to be maintained and packets to be buffered at the base station.

WTCP for wireless wide area networks (WWANs) [19] is a receiver-based method that addresses rate control over commercial WWANs. The receiver uses a rate control algorithm to compute the desired transmission rate and sends the computed rate to the sender using acknowledgements. The rate control algorithm uses inter-packet delays as the primary metric, so the receiver must do considerable processing and compute statistical information regarding losses and the observed RTT.

The snoop protocol [1,2,14] introduces a module, called the snoop agent, at the BS and works at the link layer. The agent monitors every packet that passes through the TCP connection in both directions and maintains a cache of TCP packets sent across the link that have not yet been acknowledged by the receiver. A packet loss is detected by the arrival of DUPACKs from the receiver or by a local timeout. The snoop agent retransmits the lost packet if it has been cached and suppresses the DUPACKs. When the snoop agent retransmits the lost packet, it starts a retransmission timer.
If this timer expires, the lost packet is retransmitted again.

In TCP networks, the TCP proxy is another popular approach that splits a TCP connection between a sender and a receiver into multiple TCP connections, and several schemes have been proposed to improve transmission performance by splitting TCP connections.

Several studies have focused on wireless or satellite networks. Other studies have shown the advantage of the TCP proxy scheme for improving transmission performance, but most of these schemes do not take into account the serious problems involved in splitting TCP connections and relaying data packets.
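Before moving on to the proposed scheme, the sketch below contrasts the two end-to-end mechanisms reviewed in Sects. 2.2 and 2.3: Westwood's bandwidth-estimate-driven loss reaction and Vegas's diff-based window adjustment. This is our own illustrative Python, not code from either protocol's implementation; the bits-per-second unit for BWE, the segment-based window, and the α = 1, β = 3 defaults for Vegas are assumptions.

    def westwood_on_loss(cwnd, bwe_bps, rtt_min, seg_size, timeout):
        """Westwood-style loss reaction (sketch): ssthresh is derived from the
        bandwidth estimate BWE; cwnd drops to 1 on timeout, else is clamped
        to ssthresh. All window values are in segments."""
        ssthresh = max(int(bwe_bps * rtt_min / (8 * seg_size)), 2)
        if timeout:
            cwnd = 1
        elif cwnd > ssthresh:
            cwnd = ssthresh
        return cwnd, ssthresh

    def vegas_adjust(cwnd, rtt, rtt_min, alpha=1, beta=3):
        """Vegas-style congestion avoidance (sketch): compare expected and
        actual rates and nudge cwnd by at most one segment per RTT."""
        expected = cwnd / rtt_min
        actual = cwnd / rtt
        diff = (expected - actual) * rtt_min   # estimated backlog in segments
        if diff < alpha:
            cwnd += 1
        elif diff > beta:
            cwnd -= 1
        return cwnd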

3 Proposed TCP Scheme

TCP is designed to work in any network environment. However, the bandwidth of a network may change frequently for many reasons, so TCP needs to probe the extra bandwidth of a network in order to use the available bandwidth well. In this section, we propose a scheme that improves the slow start state and the congestion avoidance state. The proposed scheme dynamically sets the slow start threshold and adjusts the congestion window in a dynamic bandwidth environment.

3.1 Slow Start Threshold Estimation

If the initial slow start threshold (ssthresh) of TCP is set to an arbitrary value, TCP performance may suffer from two potential problems:
1. If ssthresh is set too high relative to the bandwidth of the links, the exponential increase of the congestion window size (cwnd) generates too many packets too quickly, causing consecutive packet losses at the bottleneck router and many timeouts.
2. If the initial ssthresh is set too low, the sender does not effectively exploit the exponential increase of the slow start state and switches prematurely to the linear increase of the congestion avoidance state. A large bandwidth is not utilized effectively when the startup is poor.

TCP lacks an appropriate error control mechanism [4]: it is unable to determine whether an out-of-order packet is due to a loss, a congestion problem, or a reduction in the real bandwidth. The maximum transmission unit (MTU) of wireless links is usually much smaller than the MTU of wired links, and a small MTU over the first link forces the transmission of smaller packets over the entire end-to-end path even though a wired link could carry much larger packets. In the end-to-end approach, only the end hosts participate in flow control: the receiver provides feedback reflecting the network condition, and the sender makes decisions on congestion control. Being able to accurately probe the available bandwidth is the key factor in performance, and it is a difficult challenge. End-to-end congestion control can be either reactive or proactive. With reactive congestion control, standard TCP Reno adjusts itself based on the collective feedback of ACKs and DUPACKs generated at the receiver. TCP probes for the available bandwidth by gradually increasing cwnd until the network reaches the congestion state; the congestion state is therefore unavoidable in TCP Reno. TCP then degrades to a small transmission rate that may be unnecessary in the case of transmission errors or wireless random losses.

We propose a slow start threshold estimation (ssthresh estimation) scheme that improves TCP performance during the slow start state. The ssthresh estimation dynamically adjusts the slow start threshold by combining the expected rate and the actual rate to obtain an appropriate rate, which is then used to obtain an appropriate ssthresh. The appropriate ssthresh can enhance TCP performance.

Fig. 1 Expected rate, appropriate rate, and actual rate with β = 0.3

Assume that RTTmin is the minimum RTT measured by the TCP source. We define the appropriate rate (AppR) as follows, where 0 < β < 1 and β = 0.3 in this paper:

    Expected Rate = cwnd / RTTmin
    Actual Rate = cwnd / RTT
    Appropriate Rate (AppR) = Expected Rate × β + Actual Rate × (1 − β)

Figure 1 shows the relationships among the expected rate, the actual rate, and the appropriate rate. If parameter β is close to 1, the appropriate rate gets closer to the expected rate, and the appropriate ssthresh would be set too high. On the other hand, if β is close to 0, the appropriate ssthresh becomes too conservative (small) and degrades TCP performance. In this paper, we make the appropriate rate conservative by setting β to 0.3. If the appropriate rate is too large, ssthresh is set too high, which causes multiple packet losses when the exponential increase of cwnd generates too many packets too quickly. When a sender receives an ACK in the slow-start state, the pseudo code of the algorithm is as follows:

    if (cwnd < ssthresh)    /* slow start */
        ssthresh = AppR × RTTmin / seg_size;
        if (cwnd > ssthresh)    /* congestion avoidance */
            cwnd = ssthresh;
        endif
    endif
    if (timeout expires)    /* slow start */
        cwnd = 1;
        ssthresh = AppR × RTTmin / seg_size;
        if (ssthresh < 2)
            ssthresh = 2;
        endif
    endif
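The ssthresh estimation above can also be rendered as a short Python sketch. This is a minimal illustration under our own assumptions: cwnd and ssthresh are counted in segments, so the seg_size division in the pseudo code, which converts a byte rate into segments, drops out, and the function names are ours.

    def appropriate_rate(cwnd, rtt, rtt_min, beta=0.3):
        """AppR = beta * ExpectedRate + (1 - beta) * ActualRate (segments/s)."""
        expected_rate = cwnd / rtt_min
        actual_rate = cwnd / rtt
        return beta * expected_rate + (1.0 - beta) * actual_rate

    def on_ack_in_slow_start(cwnd, ssthresh, rtt, rtt_min, beta=0.3):
        """Re-estimate ssthresh on each ACK received while in slow start."""
        if cwnd < ssthresh:
            ssthresh = appropriate_rate(cwnd, rtt, rtt_min, beta) * rtt_min
            if cwnd > ssthresh:      # estimate says we are already past it:
                cwnd = ssthresh      # switch to congestion avoidance
        return cwnd, ssthresh

    def on_timeout(cwnd, rtt, rtt_min, beta=0.3):
        """Timeout: restart slow start with the estimated ssthresh (at least 2)."""
        ssthresh = max(appropriate_rate(cwnd, rtt, rtt_min, beta) * rtt_min, 2)
        return 1, ssthresh           # cwnd goes back to one segment

A fast retransmission in the congestion avoidance state analogously sets ssthresh from Actual Rate × (1 − β), as in the pseudo code below.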

When a timeout or a fast retransmission occurs in the congestion avoidance state, the pseudo code of the algorithm is as follows:

    if (fast retransmission executes)
        ssthresh = Actual Rate × (1 − β) × RTTmin / seg_size;
        cwnd = ssthresh;
    endif
    if (timeout expires)
        cwnd = 1;
        ssthresh = AppR × RTTmin / seg_size;
        if (ssthresh < 2)
            ssthresh = 2;
        endif
    endif

3.2 Appropriate Congestion Window

In the congestion avoidance state, the slow-start threshold estimation is performed as usual when a timeout or a fast retransmission occurs. After a timeout or a fast retransmission, the TCP Reno sender retransmits one segment and sets ssthresh to half the current cwnd. In our scheme, we instead set ssthresh to Actual Rate × (1 − β) × RTTmin / seg_size after a fast retransmission; when a timeout occurs, ssthresh is set to AppR × RTTmin / seg_size. In this state, we detect the extra bandwidth via consecutive observation of RTT (COR). COR observes variations of RTT and determines whether there are three consecutive decreases or three consecutive increases of RTT; the COR period is therefore three (P = 3). For three consecutive decreases of RTT, we calculate the variation as follows:

    RTTdiff = RTTmax − RTTmin
    Variation = RTTdiff / RTTmax

where RTTmax and RTTmin are the maximum and the minimum RTT measured by the TCP source, respectively, and RTTdiff is the difference between them. We dynamically adjust cwnd in the congestion avoidance state according to the degree of variation of RTT. For three consecutive decreases of RTT, we define three cases of the next cwnd below.

    cwnd_next = cwnd_cur + 1,  if Variation < 1/3
              = cwnd_cur + 3,  if 1/3 ≤ Variation < 2/3
              = cwnd_cur + 5,  if Variation ≥ 2/3

For three consecutive increases of RTT, we calculate the variation in the same way as for three consecutive decreases of RTT and define two cases of the next cwnd below.

    cwnd_next = cwnd_cur + 1,  if Variation < 1/2
              = cwnd_cur,      if Variation ≥ 1/2
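The COR rule can be sketched in Python as follows, purely for illustration. We assume one RTT sample per ACK, that P consecutive decreases are detected from the last P + 1 samples, and that RTTmax and RTTmin are taken over that recent window (the definition above could also be read as connection-wide extremes); the class and method names are ours.

    class COR:
        """Consecutive Observation of RTT with period P (P = 3 in this paper)."""

        def __init__(self, period=3):
            self.period = period
            self.rtts = []               # recent RTT samples, one per ACK

        def next_cwnd(self, rtt, cwnd):
            """Return the next cwnd after observing one more RTT sample."""
            self.rtts.append(rtt)
            window = self.rtts[-(self.period + 1):]
            if len(window) < self.period + 1:
                return cwnd + 1          # not enough samples yet: default +1
            decreasing = all(a > b for a, b in zip(window, window[1:]))
            increasing = all(a < b for a, b in zip(window, window[1:]))
            variation = (max(window) - min(window)) / max(window)
            if decreasing:               # RTT shrinking: spare bandwidth, grow faster
                if variation < 1 / 3:
                    return cwnd + 1
                if variation < 2 / 3:
                    return cwnd + 3
                return cwnd + 5
            if increasing:               # RTT growing: queues building, be cautious
                return cwnd + 1 if variation < 1 / 2 else cwnd
            return cwnd + 1              # no clear trend: standard linear increase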

3.3 The Operations in Our Scheme

The proposed scheme is based on five fundamental states: slow start, congestion avoidance, fast retransmission, fast recovery, and retransmission timeout. We formally define the operations of the proposed scheme below.

3.3.1 Slow Start

When a connection starts or a timeout occurs, the slow-start state begins and the initial cwnd is set to one packet. When the sender receives an ACK, it increases cwnd exponentially by adding one packet each time. The proposed ssthresh estimation combines the expected rate and the actual rate to obtain an appropriate slow-start threshold. The slow start state controls the window size until cwnd reaches the appropriate slow start threshold; then the congestion avoidance state begins.

3.3.2 Congestion Avoidance

Packets sent at exponentially increasing rates quickly lead to network congestion. The congestion avoidance state begins when cwnd exceeds ssthresh. In this state, cwnd is increased according to our rules after an ACK is received, so that cwnd grows roughly linearly. For three consecutive decreases of RTT, if the variation of RTT is greater than or equal to 2/3, then cwnd_next = cwnd_cur + 5. The cwnd keeps increasing by cwnd_next = cwnd_cur + 5 or cwnd_next = cwnd_cur + 3 until RTT increases three consecutive times, after which cwnd_next = cwnd_cur + 1. Figure 2 shows the congestion avoidance state in detail for the case in which three consecutive decreases of RTT turn into three consecutive increases of RTT.

3.3.3 Fast Retransmission

When three DUPACKs are detected, TCP moves from congestion avoidance to fast retransmission. Incoming packets are considered out-of-order by the receiver when a packet loss occurs. For any out-of-order packet received, the receiver immediately sends a DUPACK acknowledging the next expected sequence number. After receiving three DUPACKs, the sender retransmits what appears to be the lost packet without waiting for the retransmission timer to expire.

3.3.4 Fast Recovery

Fast recovery takes place immediately after the sender performs fast retransmission. Here, a new ACK is defined as an ACK acknowledging a sequence number beyond the lost packet. When a new ACK is received, the sender sets cwnd to ssthresh to reduce cwnd, and the congestion avoidance state continues.

3.3.5 Retransmission Timeout

The sender maintains a corresponding timer for each packet sent; the timer is used to detect a timeout when the ACK of the packet is not received. When a timeout occurs, the sender sets cwnd to one and returns to the slow start state.
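Purely as a reading aid, the transitions described in Sect. 3.3 (and summarized graphically in Fig. 2) can be written as a small table in Python; the event names are our own shorthand, and the retransmission timeout handling is collapsed into a direct return to slow start.

    # Hypothetical event-driven summary of the transitions in Sect. 3.3.
    TRANSITIONS = {
        ("slow_start", "ack"): "slow_start",                      # cwnd grows exponentially
        ("slow_start", "cwnd>=ssthresh"): "congestion_avoidance",
        ("slow_start", "3_dupacks"): "fast_retransmit",
        ("slow_start", "timeout"): "slow_start",                  # cwnd reset to 1
        ("congestion_avoidance", "ack"): "congestion_avoidance",  # COR-based growth
        ("congestion_avoidance", "3_dupacks"): "fast_retransmit",
        ("congestion_avoidance", "timeout"): "slow_start",
        ("fast_retransmit", "retransmit_done"): "fast_recovery",
        ("fast_recovery", "new_ack"): "congestion_avoidance",     # cwnd = ssthresh
        ("fast_recovery", "timeout"): "slow_start",
    }

    def next_state(state, event):
        """Look up the next congestion-control state for a given event."""
        return TRANSITIONS.get((state, event), state)

    print(next_state("congestion_avoidance", "3_dupacks"))        # fast_retransmit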

Fig. 2 Congestion control diagram for the proposed scheme

4 Performance Evaluation

In this section, we evaluate the performance of our scheme in terms of throughput. Our scheme was compared with TCP Reno and TCP Westwood.

4.1 Simulation Environment

We conducted the performance evaluation using NS-2 (Network Simulator 2). The simulations estimated bandwidth values with exponentially distributed interarrival times [7]. The packet length was fixed at 512 bytes, the buffer size of the router ranged from 30 to 70 packets, and the maximum segment size was 512 bytes. Figure 3 shows a simple wired environment used in our simulations, with two senders, two destinations, and two routers. The TCP connections are transmitted from S1 to D1 (connection 1) and from S2 to D2 (connection 2). As shown in Fig. 3, in order to simulate a realistic network scenario with variations of RTT, the two connections share the bottleneck link. We then tested the throughput of connection 1 and connection 2 with different protocols. Figure 4 shows a simulation environment with four connections; there are four senders, four receivers, and three routers in this environment. We measured the performance of various TCP versions when they were applied to four connections.

Fig. 3 A wired network environment with two connections

Fig. 4 A wired network environment with four connections

Fig. 5 A mixed wired and wireless network environment

The TCP connections are transmitted from S1 to D1 (connection 1), S2 to D2 (connection 2), S3 to D3 (connection 3), and S4 to D4 (connection 4). Figure 5 shows a simulation topology with wired and wireless links. The wired portion is a 10 Mbps link between the sender and the base station, with a transmission latency of 30 ms. The wireless portion is a 2 Mbps wireless link with a transmission latency of 0.1 ms; the wireless link connects the base station to a mobile receiver.
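As a rough sanity check on these settings (our own arithmetic, not part of the original simulation scripts), the bandwidth-delay product of each portion can be expressed in 512-byte segments; we assume the quoted latencies are one-way delays.

    def bdp_segments(bandwidth_bps, one_way_delay_s, seg_size_bytes=512):
        """Bandwidth-delay product in full-sized segments (RTT = 2 * one-way delay)."""
        rtt = 2 * one_way_delay_s
        return bandwidth_bps * rtt / (8 * seg_size_bytes)

    # Wired portion: 10 Mbps, 30 ms one-way latency.
    print(f"wired BDP:    {bdp_segments(10e6, 0.030):.1f} segments")   # ~146.5
    # Wireless portion: 2 Mbps, 0.1 ms one-way latency.
    print(f"wireless BDP: {bdp_segments(2e6, 0.0001):.2f} segments")   # ~0.10

Comparing the bottleneck buffer size with such a figure is a common rule of thumb when interpreting the buffer-size results reported in Sect. 4.2.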

Fig. 6 Throughput comparison between connection 1 and connection 2 with different buffer sizes

Fig. 7 Throughput of connection 1 and connection 2 with different buffer sizes

4.2 Simulation Results

The throughput of the three protocols was simulated with different buffer sizes and bandwidths. The throughput comparison between connection 1 and connection 2 with different buffer sizes is shown in Fig. 6, and Fig. 7 shows the total throughput of connection 1 and connection 2 with different buffer sizes. The throughput of connection 1 with different bottleneck capacities is shown in Fig. 8. The throughput improvement from our scheme is more significant at higher buffer capacities, and the simulation results indicate that Reno is vulnerable when the buffer capacity is insufficient. The improvement comes from the aggressive congestion avoidance strategy that observes the variation of RTT in detail; this strategy effectively improves throughput performance. Figure 9 shows the throughput comparison with variable P for different bottleneck capacities, and Fig. 10 shows a comparison of throughput when β is varied.

Fig. 8 Throughput of connection 1 versus bottleneck capacity

Fig. 9 Throughput comparison with variable P

Fig. 10 Throughput comparison with variable β

Fig. 11 Throughput comparison with four TCP connections

Fig. 12 Throughput versus the packet loss rate in a wireless link

As shown in Fig. 11, in a wired environment with four connections, the simulation results show that the four connections share the bandwidth fairly. The simulation results for the mixed wired and wireless network are shown in Figs. 12 and 13. Figure 12 compares the throughput when the packet loss probability ranges from 0 to 6.25%. Figure 13 shows that the proposed scheme obtains a slight increase in performance as the wireless link transmission speed increases; in Fig. 13, the wireless packet loss rate is 0.3%. Thus, the proposed scheme is more effective than TCP Reno and TCP Westwood in utilizing bandwidth. Our simulation results clearly show that, in addition to enhancing end-to-end TCP performance, the proposed scheme performs very well in simulation environments where the bandwidth and delay are variable.

Fig. 13 Throughput versus the capacity in a wireless link

5 Conclusions

Effective error control and congestion control for heterogeneous (wired and wireless) networks have recently been an active area of research. The well-known challenge in providing TCP congestion control in mixed environments is that current TCP implementations rely on packet loss as an indicator of network congestion. In wired links, a congested router is the major cause of packet loss; in wireless links, noise or signal fading is the major cause. In this paper, we presented a scheme for the TCP protocol in a dynamic bandwidth environment. The proposed scheme is based on an appropriate ssthresh and observations of RTT, and it enhances the slow start state and the congestion avoidance state. The proposed scheme helps a sender intelligently cope with highly dynamic bandwidth and long delay. The ssthresh estimation evaluates an appropriate ssthresh value and thereby enables good utilization of bandwidth. Another contribution of this work is the proposed RTT observation mechanism, which uses an aggressive congestion avoidance strategy: it observes variations of RTT and adjusts the cwnd effectively to improve performance. In the future, we will refine the adjustment of cwnd in the congestion avoidance state according to the degree of variation of RTT in order to further improve TCP performance in dynamic bandwidth wired and wireless networks.

Acknowledgments This work was supported by the National Science Council of the Republic of China under grants NSC-94-2213-E-324-025 and NSC-95-2221-E-239-052.

References

1. Balakrishnan, H., Seshan, S., & Katz, R. H. (1995). Improving reliable transport and handoff performance in cellular wireless networks. Wireless Networks, 1(4), 469–481.
2. Balakrishnan, H., Seshan, S., & Katz, R. H. (1995). Improving TCP/IP performance over wireless networks. In Proceedings of the ACM MOBICOM (pp. 2–11), November 1995.
3. Bakre, A., & Badrinath, B. R. (1995). I-TCP: Indirect TCP for mobile hosts. In Proceedings of the 15th International Conference on Distributed Computing Systems (ICDCS), May 1995.
4. Braden, R. T. (1989). Requirements for internet hosts-communication layers. RFC 1122, October 1989.
5. Brakmo, L. S., & Peterson, L. L. (1995). TCP Vegas: End-to-end congestion avoidance on a global internet. IEEE Journal on Selected Areas in Communications, 13(8), 1465–1480.
6. Brown, K., & Singh, S. (1997). M-TCP: TCP for mobile cellular networks. ACM SIGCOMM Computer Communication Review, 27(5), 19–43.
7. Capone, A., Fratta, L., & Martignon, F. (2004). Bandwidth estimation schemes for TCP over wireless networks. IEEE Transactions on Mobile Computing, 3(2), 129–143.
8. Casetti, C., Gerla, M., Mascolo, S., Sanadidi, M. Y., & Wang, R. (2002). TCP Westwood: End-to-end congestion control for wired/wireless networks. Wireless Networks, 8(5), 467–479.
9. Da Costa, G. M. T., & Sirisena, H. R. (2003). Freeze TCP with timestamps for fast packet loss recovery after disconnections. Computer Communications, 26(15), 1792–1799.
10. Desimon, A., Chuah, M. C., & Yue, O. (1993). Throughput performance of transport-layer protocols over wireless LANs. In Proceedings of the IEEE GLOBECOM (Vol. 1, pp. 542–549), December 1993.
11. Elaarag, H. (2002). Improving TCP performance over mobile networks. ACM Computing Surveys, 34(3), 357–374.
12. Fall, K., & Floyd, S. (1996). Simulation-based comparisons of Tahoe, Reno, and SACK TCP. ACM Computer Communication Review, 5–21, July 1996.
13. Fu, Z., Luo, H., & Zerfos, P. (2005). The impact of multihop wireless channel on TCP performance. IEEE Transactions on Mobile Computing, 4(2), 209–221.
14. Izumikawa, H., Yamaguchi, I., & Katto, J. (2004). An efficient TCP with explicit handover notification for mobile networks. In Proceedings of the IEEE Wireless Communications and Networking Conference (Vol. 2, pp. 647–652), March 2004.
15. Mascolo, S., Casetti, C., Gerla, M., & Sanadidi, M. (2001). TCP Westwood: Bandwidth estimation for enhanced transport over wireless links. In Proceedings of the ACM MOBICOM (pp. 287–297), July 2001.
16. Mo, J., Anantharam, V., La, R. J., & Walrand, J. (1999). Analysis and comparison of TCP Reno and Vegas. In Proceedings of the IEEE INFOCOM (Vol. 3, pp. 1556–1563), March 1999.
17. Nanda, S., Ejzak, R., & Doshi, B. T. (1994). A retransmission scheme for circuit-mode data on wireless links. IEEE Journal on Selected Areas in Communications, 12(8), 1338–1352.
18. Postel, J. B. (1981). Transmission control protocol. RFC 793, September 1981.
19. Sinha, P., Venkitaraman, N., Sivakumar, R., & Bhargavan, V. (1999). WTCP: A reliable transport protocol for wireless wide-area networks. In Proceedings of the ACM MOBICOM (pp. 231–241).
20. Stevens, W. R. (1994). TCP/IP Illustrated (Vol. 1). Addison-Wesley.
21. Tsaoussidis, V., & Matta, I. (2002). Open issues on TCP for mobile computing. Wireless Communications and Mobile Computing, 2(1), 3–20.
22. Wang, K.-Y., & Tripathi, S.-K. (1998). Mobile-end transport protocol: An alternative to TCP/IP over wireless links. In Proceedings of the IEEE INFOCOM (Vol. 3, pp. 1046–1053), March 1998.

Author Biographies

Neng-Chung Wang received the B.S. degree in Information and Computer Engineering from Chung Yuan Christian University, Taiwan, in June 1990, and the M.S. and Ph.D. degrees in Computer Science and Information Engineering from National Cheng Kung University, Taiwan, in June 1998 and June 2002, respectively. He joined the faculty of the Department of Computer Science and Information Engineering, Chaoyang University of Technology, Taiwan, as an Assistant Professor. Since August 2007, he has been an Associate Professor in the Department of Computer Science and Information Engineering, National United University, Taiwan. His current research interests include computer networks, wireless networks, and mobile computing. Dr. Wang is a member of the IEEE Computer Society, IEEE Communications Society, and Phi Tau Phi Society.

Jong-Shin Chen was born in 1972. He received the B.Sc. and Ph.D. degrees in computer science from Feng Chia University, Taiwan, in 1996 and 2003, respectively. Currently, he is an Assistant Professor in the Graduate Institute of Networking and Communication Engineering, Chaoyang University of Technology, Taiwan. His research interests include mobile computing, capacity planning, mobile agents, and wireless systems.

Yung-Fa Huang received the College Diploma in Electrical Engineering from National Taipei University of Technology in 1982, the M.S. degree in Electrical Engineering from National Hsinhua University in 1987, and the Ph.D. degree in Electrical Engineering from National Chung Cheng University in 2002. From 1987 to 2001, he was an instructor at Chung Chou Institute of Technology. He is currently an Associate Professor in the Graduate Institute of Networking and Communication Engineering, Chaoyang University of Technology. His current research interests include multiuser detection in CDMA cellular mobile communication systems and wireless networks.

Chi-Lun Chiou received the B.S. degree from the Department of Computer Science and Technology, TOKO University, Taiwan, in June 2004, and the M.S. degree in Computer Science and Information Engineering from Chaoyang University of Technology, Taiwan, in June 2006. His research interests include mobile computing and wireless networks.
