JTCP: Congestion Distinction by the Jitter-based Scheme over Wireless Networks

Eric Hsiao-Kuang Wu, Ming-I Hsieh, Mei-Zhen Chen and Shao-Yi Hung
Dept. of Computer Science and Information Engineering, National Central University, Chung-Li, Taiwan
[email protected]

Abstract— TCP, a widely used transport protocol, performs well over traditional networks constructed purely of wired links. As wireless networking grows rapidly, the wired/wireless mixed inter-network, a heterogeneous environment, will see wide deployment in the next-generation all-IP wireless networks. TCP, which treats packet losses as congestion events, does not suit a heterogeneous network in which losses may be introduced by higher bit error rates or handoffs. Several challenges remain for applying TCP over wireless links; end-to-end congestion control and fairness are two significant ones. To satisfy these two criteria, we propose a jitter-based scheme that adapts the sending rate according to packet losses and jitter ratios. The experimental results show that our jitter-based TCP (JTCP) achieves good performance over the heterogeneous network.

Keywords—TCP; jitter; wireless; congestion control

I. INTRODUCTION
In recent years, wireless and mobile communications [1] have gained significant popularity in areas such as commerce, education, entertainment, and defense. As the number of mobile hosts grows exponentially, smooth and effective communication over mixed wired and wireless links, i.e., a heterogeneous network, draws a great amount of research effort. Moreover, the ubiquity of the Internet is driven by the network-technology-independent design of IP [3]. Not surprisingly, a heterogeneous network environment will see wide deployment in the next-generation all-IP wireless networks. In this simple IP architecture, which allows a multitude of services, the transport layer's primary role is to provide an end-to-end communication service. Most applications employ one of two protocols: the user datagram protocol (UDP) or the transmission control protocol (TCP) [4]. UDP is a simple, datagram-oriented transport-layer protocol. It is suitable for best-effort unicast streaming multimedia or real-time applications; however, it provides neither reliability nor congestion control and does not assure that sent datagrams reach their destination. Some enhancements have been proposed to reform this protocol, such as equation-based congestion control for unicast traffic [6]. Unlike UDP, TCP offers a connection-oriented, reliable delivery service for a stream of data sent from one machine to another without duplication or data loss over an IP network. One of the most critical issues for TCP is congestion control: TCP avoids overshooting the bottleneck bandwidth by properly slowing down its congestion window, and it is a gentle protocol that shares the network capacity with other flows.

According to these characteristics, TCP is not only widely adopted but also carefully investigated on recent advanced networks. Applications such as FTP, the Web, and Telnet are developed on top of TCP. TCP is well designed for the best-effort environment because of its congestion avoidance scheme [2]. The TCP sender deploys the additive increase and multiplicative decrease (AIMD) scheme to enforce congestion avoidance: the sender increases the congestion window (cwnd) by at most one segment each round trip time (RTT). When three or more duplicate acknowledgements (ACKs) are received by the sender in a row, this indicates that a segment has been lost. TCP assumes that packet losses are the consequence of network congestion and reduces its window accordingly. The congestion avoidance mechanism makes TCP robust in a purely wired environment but not over wireless links, which experience higher bit error rates in transmission [7]. In wired links, losses are usually caused by congestion; in wireless links, however, non-congestion losses may occur due to random or burst errors. TCP then unnecessarily halves its transmit window before retransmitting the lost packets, or resets its congestion window and backs off its retransmission timeout (RTO). It is not worthwhile to reduce the sending rate when the loss event is not caused by congestion. Therefore, TCP experiences performance degradation because it cannot distinguish loss events caused by real congestion from random drops due to wireless characteristics.

A great deal of research has been devoted to TCP performance over heterogeneous networks, and considerable effort is underway to find solutions to the problem. There are several categories of solutions for improving TCP performance over the heterogeneous network [8]. The split-connection approach hides the lossy parts of the Internet so that only congestion losses are detected at the source, as in Indirect-TCP [11]. The Snoop agent approach [9][10], a link-aware protocol, involves installing a specialized agent in the base station. The drawback of both schemes is that the end-to-end semantics of TCP acknowledgements are violated. One set of research efforts enhances TCP to distinguish different types of losses. The congestion coherence approach [12][13] utilizes Explicit Congestion Notification (ECN) [14] to improve its judgment of wired versus wireless losses; however, the routers are required to support ECN, and there might not be enough time to obtain an ECN mark while the buffer at a congested router is overflowing. Another set of research efforts proposes schemes to estimate the link capability. TCP Westwood [15] estimates the capacity of the network connection, but it performs poorly when the sender estimates the bandwidth incorrectly or when the random packet loss rate exceeds a few percent.

Among the above solutions, few conform to TCP's design principles, especially end-to-end semantics. Even schemes that retain end-to-end congestion control may not achieve good performance or fairness over lossy links. In this paper, we propose congestion distinction by a jitter-based scheme over wireless networks (JTCP). JTCP carries the following key functions:

•	Jitter ratio: We first propose a jitter-ratio scheme to reflect the queue status. If packets are queued, the queue length grows and a congestion event occurs. Via the jitter-ratio estimation, we can detect congestion events.



•	Cooperation of the jitter ratio and TCP: Since TCP itself is not sufficient to detect losses caused by the wireless link, we apply the jitter-ratio estimation to its congestion detection mechanism. This reduces unnecessary retransmissions by TCP.



•	Next RTT: TCP's reaction duration is about one RTT. If multiple losses occur in the same RTT, we consider them to be the same congestion event and reduce the sending rate only once.



•	Complete performance investigation of JTCP: we simulate several scenarios to find suitable parameters for the proposed jitter-based scheme.

JTCP supports the above functions as a true end-to-end, fair, and wireless-compatible approach for the current Internet environment.

II. JITTER AND JITTER RATIO
The main idea of our proposed scheme is to apply the jitter ratio to the TCP congestion control mechanism. The jitter ratio is extended from the inter-arrival jitter. The inter-arrival jitter D is defined in the RTP (Real-time Transport Protocol) RFC. From the value of the jitter, we can trace the packet-by-packet delay variation. This characteristic helps us investigate the link status.

A. Inter-arrival Jitter
The inter-arrival jitter is the difference between the packet spacing at the receiver and the packet spacing at the sender for a pair of packets. For example, if S_i is the sending time for packet i and R_i is the receiving time for packet i, then for two packets i and j the inter-arrival jitter D(i, j) may be expressed as:

D(i, j) = (R_j − R_i) − (S_j − S_i) = (R_j − S_j) − (R_i − S_i)    (1)

In other words, the value of the inter-arrival jitter reflects the packet-by-packet delay variation. According to the above equation, there is no extra delay between packet i and packet j if D is equal to zero. When D is greater than zero, the transit time of packet j is longer than that of packet i, and we can assume that packet j was queued for a while. A brief illustration is given in Fig. 1. Based on this phenomenon, we define the jitter ratio to indicate the ratio of queued packets.
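To make Eq. (1) concrete, the following is a minimal Python sketch, not from the paper, that computes the inter-arrival jitter from per-packet send and receive timestamps; the timestamp values are hypothetical.

```python
def inter_arrival_jitter(send_times, recv_times, i, j):
    """Eq. (1): receiver spacing minus sender spacing for packets i and j."""
    return (recv_times[j] - recv_times[i]) - (send_times[j] - send_times[i])

# Hypothetical timestamps in seconds; packet 1 experiences extra queueing delay.
send = [0.00, 0.01, 0.02]
recv = [0.10, 0.13, 0.14]

print(inter_arrival_jitter(send, recv, 0, 1))  # ~0.02 -> D > 0, packet 1 was queued
print(inter_arrival_jitter(send, recv, 1, 2))  # ~0.0  -> no additional queueing
```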


Figure 1. Inter-arrival jitter

B. Jitter Ratio
The jitter ratio (jr) is defined as the ratio of queued packets. Consider the queue at the bottleneck, with one fixed-rate traffic flow consuming the bottleneck bandwidth. If the packet arrival rate at the router is greater than the router's service rate, the fraction of un-serviced (queued) packets is proportional to the arrival rate minus the service rate, divided by the arrival rate. To express this, let t_A (sec) be the packet-by-packet spacing of packets arriving at the router, t_D (sec) be the spacing of packets departing from the router, and B (packets/sec) be the service rate of the router:

B ≈ 1 / t_D    (2)

The ratio of queued packets at the router can then be calculated as follows:

(1/t_A − B) / (1/t_A) ≈ (1/t_A − 1/t_D) / (1/t_A) = ((t_D − t_A) / (t_A × t_D)) / (1/t_A) = (t_D − t_A) / t_D
= [ (t_R(i) − t_R(i−1)) − (t_S(i) − t_S(i−1)) ] / (t_R(i) − t_R(i−1)) = D / (t_R(i) − t_R(i−1))    (3)

When the growing queue reaches the maximum limit of the buffer, subsequently arriving packets at the router are dropped. Therefore, the ratio of dropped (lost) packets can be approximated by the ratio of queued packets. This ratio can be treated as a loss-ratio predictor, formulated from the inter-arrival jitter and defined as:

Jr = D / (t_R(i) − t_R(i−1))    (4)

Jr (jitter ratio) is an important guide to determine whether packets are being queued. In our proposed scheme, a congestion event is declared when triple duplicate ACKs are triggered and the jitter ratio is significant. The practical method used to enhance TCP's performance is presented in the next section.
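As a further illustration (our own sketch, not the paper's code), the per-packet jitter ratio of Eq. (4) can be computed from consecutive send and receive timestamps as follows; the timestamps are again hypothetical.

```python
def jitter_ratio(send_times, recv_times, i):
    """Eq. (4): Jr = D / (t_R(i) - t_R(i-1)) for consecutive packets i-1 and i."""
    d = (recv_times[i] - recv_times[i - 1]) - (send_times[i] - send_times[i - 1])
    return d / (recv_times[i] - recv_times[i - 1])

# Hypothetical timestamps (seconds): the later gaps widen at the receiver,
# suggesting queueing at the bottleneck.
send = [0.000, 0.010, 0.020, 0.030]
recv = [0.100, 0.110, 0.125, 0.140]

for i in range(1, len(send)):
    print(i, round(jitter_ratio(send, recv, i), 3))
# Jr is about 0 for the first gap and about 0.33 for the widened gaps.
```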

III. PROPOSED SCHEME: JTCP
Since the jitter ratio can indicate the connection condition between the source and the destination, we apply the jitter ratio to TCP. JTCP is then able to distinguish congestion losses from non-congestion losses.

A. Combination of jitter ratio and TCP
From the description in Section II, TCP sends packets upon receiving a new ACK. Since the reaction time of TCP is about one round trip time, JTCP's sampling period for the average jitter ratio is set to one RTT. According to TCP's sliding-window behavior, if the current window size is w, TCP sent w segments during the previous round trip time. The data required to estimate JTCP's average jitter ratio are therefore the current congestion window size and the sequence number; see Fig. 2.

Figure 2. Congestion window of TCP during one RTT (the acked cwnd covers segments n−w to n−1; the current cwnd of size w covers the packets sent during one RTT)

We can obtain TCP's average jitter during one RTT as follows:

Σ_{i=n−w}^{n−1} D_i = Σ_{i=n−w}^{n−1} [ (R_i − R_{i−1}) − (S_i − S_{i−1}) ] = (R_{n−1} − R_{n−w}) − (S_{n−1} − S_{n−w})    (5)

We define the average jitter ratio of JTCP as

jr = D(m×(n−w), n−1) / (R_{n−1} − R_{m×(n−w)})    (6)

where m is a parameter that determines how much previous-RTT history is taken into account: the larger m, the more history we consider. A very large m makes the sender slow to adjust its sending rate. In our experiments we set m = 1, which gives a fast response and good performance; the previous RTT is enough to judge the state of the communication link. The value n denotes the sequence number of the earliest unacknowledged packet and w denotes the current congestion window size. In our proposed scheme, we apply the jitter ratio (jr) to TCP's congestion avoidance algorithm. Based on the significance of jr, we seek a rule relating congestion events to the jr value; this rule tells us that a congestion event is characterized by packet losses together with a high jitter ratio.
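The following Python sketch is our illustration of Eqs. (5) and (6) with m = 1: it computes the average jitter ratio over the w most recently acknowledged segments, assuming per-segment send and receive timestamps are available (the paper obtains them from timestamps carried in the TCP header).

```python
def average_jitter_ratio(send_times, recv_times, n, w):
    """Eqs. (5)-(6) with m = 1: jitter accumulated over segments n-w .. n-1,
    normalized by the span of their receive times."""
    total_jitter = (recv_times[n - 1] - recv_times[n - w]) - \
                   (send_times[n - 1] - send_times[n - w])
    return total_jitter / (recv_times[n - 1] - recv_times[n - w])

# Hypothetical per-segment timestamps (seconds) for a window of w = 4 segments
# ending just before sequence number n = 5.
send = [0.00, 0.01, 0.02, 0.03, 0.04]
recv = [0.10, 0.11, 0.13, 0.15, 0.17]

jr = average_jitter_ratio(send, recv, n=5, w=4)
print(round(jr, 3))  # 0.5: half of the receive-side spacing is queueing delay
```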

B. JTCP action after receiving triple duplicate ACKs
Fig. 3 shows the state diagram of JTCP after receiving triple duplicate ACKs (TDACKs). When three duplicate ACKs occur, JTCP determines whether they arrived within the same RTT as the previous TDACKs. Only if the duplicate ACKs extend into the next RTT are they counted as a new congestion event; all duplicate ACKs within one RTT are treated as a single event (no matter how many of them occur), much as in TCP NewReno, and otherwise only an immediate recovery proceeds. Since the reaction time of TCP is about one RTT, a sender that halves its congestion window redundantly degrades its performance. The value w identifies the current window size and k is a control parameter; the ratio k/w means that k packets are queued at the router while TCP injects w packets into the network. Since TCP adds only one segment per RTT under the AIMD scheme, k should not be greater than 1. JTCP qualifies a congestion event with three factors: (1) the sender receives n (e.g., three) duplicate ACKs, (2) the event occurs more than one RTT after the previous one (Next RTT), and (3) the jitter ratio exceeds k/w. When all three conditions are satisfied, JTCP halves the congestion window as TCP Reno does. On the other hand, if the jitter ratio is less than k/w, the duplicate ACKs are attributed to lossy-link losses and JTCP enters only the immediate recovery phase.

Figure 3. Diagram of JTCP after receiving TDACKs (branches: Next RTT?, then Jr > k/w?, leading to a congestion event with cwnd = cwnd/2 and ssthreshold = cwnd, a non-congestion event, or immediate recovery; Next RTT means the time between the recent and previous TDACKs is longer than one RTT)

If ( TDACKs are received )
    If ( jr >= k / cwnd ) and ( in Next RTT )
        // Congestion event: fast recovery
        ssthreshold = 1/2 cwnd
        cwnd = ssthreshold
    else
        // Packets lost but not caused by congestion: immediate recovery
        ssthreshold = D * cwnd
        cwnd = ssthreshold
    endif
endif

Figure 4. Pseudo code of JTCP after receiving TDACKs

Fig. 4 shows the pseudo code of JTCP after receiving TDACKs. In the immediate recovery, we use a decrease factor D that is greater than 1/2. If D is 1, the sender only retransmits the lost packet and does not reduce its sending rate. Experimental evidence for appropriate values of k and D is given in the next section. The purpose of JTCP is to avoid unnecessary rate reduction and thereby improve TCP's performance.

C. JTCP action after the timer expires
After the timer expires, we still check whether the jitter ratio is greater than k/w. If jr is less than k/w, we only halve the congestion window instead of applying slow start. We consider that the lossy link may suffer burst losses that lead to the timer expiration; such bursts may be caused by weak signals over long distances or by a handoff of the mobile device. We therefore halve the window and attempt fast recovery, because it is a non-congestion event. The pseudo code of the algorithm is shown in Fig. 5.

If ( Timer expires )
    If ( jr >= k / cwnd )
        // Enter the slow start phase
        ssthreshold = 1/2 cwnd
        cwnd = 1
        Send the lost packet
    else
        // The loss is caused by the lossy link
        ssthreshold = 1/2 cwnd
        cwnd = ssthreshold
    endif
endif

Figure 5. Pseudo code of JTCP after the timer expires
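For illustration only, the Python sketch below mirrors the decision logic of Figs. 4 and 5; the function names, the decrease_factor argument (the paper's D), and the example values are our own assumptions, not the paper's implementation.

```python
def on_triple_dup_acks(cwnd, ssthreshold, jr, k, decrease_factor, in_next_rtt):
    """Mirrors Fig. 4: distinguish congestion from wireless loss on TDACKs.
    decrease_factor corresponds to the paper's D (> 1/2)."""
    if jr >= k / cwnd and in_next_rtt:
        # Congestion event: fast recovery, halve the window.
        ssthreshold = cwnd / 2
        cwnd = ssthreshold
    else:
        # Loss not caused by congestion: milder immediate recovery.
        ssthreshold = decrease_factor * cwnd
        cwnd = ssthreshold
    return cwnd, ssthreshold


def on_timer_expiry(cwnd, ssthreshold, jr, k):
    """Mirrors Fig. 5: reaction after the retransmission timer expires."""
    if jr >= k / cwnd:
        # Congestion: enter slow start and retransmit the lost packet.
        ssthreshold = cwnd / 2
        cwnd = 1
    else:
        # Burst loss on the lossy link: halve the window and try fast recovery.
        ssthreshold = cwnd / 2
        cwnd = ssthreshold
    return cwnd, ssthreshold


# Example: jr stays below k/cwnd, so only a mild immediate recovery is applied.
print(on_triple_dup_acks(cwnd=20, ssthreshold=10, jr=0.02, k=1.0,
                         decrease_factor=0.9, in_next_rtt=True))  # (18.0, 18.0)
```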

IV. SIMULATION AND ANALYSIS
A set of simulation experiments is performed to evaluate the effectiveness of JTCP. All simulations are run with the network simulator 2 (NS-2) [23] by implementing a pair of JTCP agents. Fig. 6 shows the typical network topology of a wired/wireless mixed network. S denotes the source node, R1 and R2 are routers, and D is the destination node. The wireless link, which has been modeled with an error module, represents the last-hop transmission between R2 and D. Errors are assumed to occur in both directions of the wireless link. The propagation time over the wired path is initially assumed to be 45 ms, the bottleneck bandwidth between R1 and R2 is 2 Mbps, and the packet size is 1000 bytes.

Figure 6. Simulation network topology: S -- R1 (100 Mbps, 5 ms); R1 -- R2 (2 Mbps, 40 ms); R2 -- D (10 Mbps, 0.01 ms, wireless link)

A. Wireless TCP comparison
Losses over the wireless link may be random or burst drops. In the following scenarios, we apply error models to simulate the lost packets. Compared with other TCP variants (Reno, NewReno, and Westwood), JTCP shows a great improvement over lossy links. The case of random drop errors is shown first, and the case of varying propagation time is introduced later. The total time of this simulation is about 300 seconds. The normal error model is attached to the wireless link. We compare the throughput of JTCP with several TCP versions, assuming independent errors ranging from 0 to 10% packet loss probability. In Fig. 7, JTCP shows more than 20% improvement for packet error rates of 1 to 5%. Even when the packet discard rate reaches 10%, the average throughput of JTCP is still higher than that of Westwood, which is likewise designed with end-to-end semantics to improve TCP over the heterogeneous network.


Figure 7. Average throughput vs. packet loss rate

B. Inter-protocol Fairness and Utilization
One of the key concerns for TCP implementations is fairness, i.e., that every connection gets a fair share of the bottleneck bandwidth. In this subsection, a set of experiments is performed to address this issue. The metrics are defined as:

Throughput = (total_packets × packet_size) / simulation_time
Utilization = Throughput / bottleneck_bandwidth × 100 (%)    (7)

Fairness = (Σ_{i=1}^{n} b_i)² / (n × Σ_{i=1}^{n} b_i²)

where b_i denotes the fraction of the bandwidth taken up by connection i on the links. From the fairness equation, the value of Fairness ranges from 1/n to 1, with 1 corresponding to the best allocation among all connections.
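As a sketch of how these metrics can be evaluated (our own helpers with made-up per-flow numbers, not the paper's measurement code):

```python
def utilization(total_packets, packet_size_bits, sim_time, bottleneck_bps):
    """Eq. (7): throughput over the simulation, as a percentage of the bottleneck."""
    throughput = total_packets * packet_size_bits / sim_time  # bits per second
    return throughput / bottleneck_bps * 100.0

def fairness(shares):
    """Jain's fairness index over per-connection bandwidth fractions b_i."""
    return sum(shares) ** 2 / (len(shares) * sum(b * b for b in shares))

# Hypothetical run: 70000 packets of 1000 bytes over 300 s on a 2 Mbps bottleneck.
print(round(utilization(70000, 1000 * 8, 300.0, 2e6), 1))   # 93.3 (%)
print(round(fairness([0.26, 0.24, 0.25, 0.25]), 4))          # 0.9992
```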

In this scenario, we inject multiple flows (2 to 30) into the links and calculate their utilization and fairness values. TCP Reno is well known to share the network capacity fairly, so we contrast the fairness of JTCP with Reno over lossy links with 1% and 5% loss rates. In Fig. 18 and Fig. 20, the three wireless TCP variants (JTCP, Reno, and Westwood) have almost the same utilization, higher than ninety percent, when the number of flows is high. Notably, JTCP gains better utilization than the others when there are fewer flows: the lossy link drops packets of every connection, and with fewer flows the packet loss rate has a greater effect on each flow. The fairness results are shown in Fig. 8. Even though JTCP's fairness is not the best, it still stays above 0.98.

Figure 8. Fairness with 1% loss rate

V. CONCLUSION
This article presented a novel protocol, JTCP, to improve the performance of TCP in networks with wireless links and mobile hosts. The proposed transport solution achieves end-to-end congestion control semantics, fairness, and wireless adaptability. JTCP has been evaluated through several simulations that show considerable throughput gains over other versions of TCP. The article introduced the concept of the jitter ratio and the implementation of JTCP. JTCP uses extra information in the TCP header to store the sending and receiving timestamps, from which the jitter ratio can be estimated. When losses occur, that is, when the sender receives triple duplicate acknowledgements or the retransmit timer expires, we compare the jitter ratio with the inverse of the current congestion window size to distinguish congestion losses from non-congestion losses. This simple scheme enhances TCP robustness for the heterogeneous wireless network. In the simulation experiments, JTCP not only performs as well as regular TCP (Reno) in the wired environment but also gains better utilization and fairness than other TCP versions such as NewReno and Westwood. The article demonstrates that when the jitter ratio is greater than 1/w, the number of queued packets in the bottleneck buffer is increasing, and as long as packets are queued a congestion event is occurring; otherwise, the packet losses may be due to the characteristics of the wireless link. It is critical for next-generation wireless networks to avoid such unnecessary retransmissions and degradation.

As mentioned in the above sections, we use the coarse jitter-ratio scheme to enhance TCP Reno. Under extremely high bit error rates, JTCP might not be able to distinguish wireless losses by comparing the jitter ratio with 1/w. In our experimental observations, the jitter-ratio value may be used to estimate the number of packets queued at the router according to the sending rate and the growth rate of the jitter ratio. Continuing efforts will go into analyzing the jitter-based model and designing a more precise scheme that adapts to various wireless networks.

ACKNOWLEDGEMENT
This work was supported by MediaTek Inc. under the "Mobile Applications for UWB Technology" Project.

REFERENCES

[1] Qi Bi, George I. Zysman, and Hank Menkes, "Wireless Mobile Communications at the Start of the 21st Century," IEEE Communications Magazine, January 2001.
[2] Van Jacobson and Michael J. Karels, "Congestion Avoidance and Control," ACM Computer Communications Review, vol. 18, no. 4, pp. 314-329, August 1988.
[3] G. Xylomenos, G. C. Polyzos, P. Mahonen, and M. Saaranen, "TCP Performance Issues over Wireless Links," IEEE Communications Magazine, April 2001.
[4] R. Stewart and C. Metz, "SCTP: New Transport Protocol for TCP/IP," IEEE Internet Computing, vol. 5, no. 6, Nov.-Dec. 2001.
[5] Jun Li, Stephen B. Weinstein, Junbiao Zhang, and Nan Tu, "Public Access Mobility LAN: Extending the Wireless Internet into the LAN Environment," IEEE Wireless Communications Magazine, June 2002.
[6] S. Floyd, M. Handley, J. Padhye, and J. Widmer, "Equation-Based Congestion Control for Unicast Applications," in Proc. ACM SIGCOMM Symposium on Communications Architectures and Protocols, Aug. 2000.
[7] Andrew S. Tanenbaum, Computer Networks, Third Edition, Prentice-Hall, Inc., 1996.
[8] Hari Balakrishnan, Venkata N. Padmanabhan, Srinivasan Seshan, and Randy H. Katz, "A Comparison of Mechanisms for Improving TCP Performance over Wireless Links," Proc. ACM SIGCOMM '96, Aug. 1996.
[9] Hari Balakrishnan, Srinivasan Seshan, Elan Amir, and Randy H. Katz, "Improving TCP/IP Performance over Wireless Networks," Proc. 1st ACM Int'l Conf. on Mobile Computing and Networking (Mobicom), Nov. 1995.
[10] Jian-Hao Hu, Kwan L. Yeung, Siew Chee Kheong, and Gang Feng, "Hierarchical Cache Design for Enhancing TCP over Heterogeneous Networks with Wired and Wireless Links," Globecom, 2000.
[11] A. Bakre and B. R. Badrinath, "I-TCP: Indirect TCP for Mobile Hosts," in Proceedings of ICDCS 95, May 1995.
[12] K. K. Ramakrishnan and S. Floyd, "A Proposal to Add Explicit Congestion Notification (ECN) to IP," Internet Draft, work in progress, January 1999.
[13] Chunlei Liu and Raj Jain, "Requirements and Approaches of Wireless TCP Enhance,"
[14] S. Floyd, "TCP and Explicit Congestion Notification," ACM Computer Communication Review, Oct. 1994.
[15] Saverio Mascolo, Claudio Casetti, Mario Gerla, M. Y. Sanadidi, and Ren Wang, "TCP Westwood: Bandwidth Estimation for Enhanced Transport over Wireless Links," ACM Mobicom, 2001.