ARROW-TCP: Accelerating Transmission toward Efficiency and Fairness for High-speed Networks

Jianxin Wang∗, Liang Rong∗, Xi Zhang†, and Jianer Chen‡
∗School of Information Science and Engineering, Central South University, Changsha, China 410083
Email: [email protected], [email protected]
†Department of Electrical and Computer Engineering, Texas A&M University, Texas, USA
Email: [email protected]
‡Department of Computer Science, Texas A&M University, Texas, USA
Email: [email protected]
Abstract—A novel congestion control protocol, ARROW-TCP, is proposed to address the stability and convergence issues of existing transmission control protocols. Theoretical analysis shows that ARROW-TCP is globally stable and converges exponentially to efficiency and fairness in constant time. Meanwhile, ARROW-TCP achieves zero queueing delay and zero packet loss by converging monotonically to the fair allocation and avoiding overshooting the link capacity. Moreover, its price mechanism enables ARROW-TCP to achieve max-min rate allocation in hybrid multi-bottleneck networks. Finally, extensive simulations are conducted to verify our theoretical analysis, and the results demonstrate that ARROW-TCP outperforms other transmission control protocols in terms of stability, convergence, and packet loss rate.
I. INTRODUCTION

With rapid advances in the deployment of very high speed links in the Internet and the growing volume of data to be transported over high bandwidth-delay product networks, the need for a viable replacement of TCP has become ever more pressing. Various transport protocols for accelerating transmission over high bandwidth-delay product networks have been proposed in recent years, including end-to-end congestion control protocols [8]–[11] and distributed congestion control protocols [12]–[17]. In this paper, we focus on distributed protocols that rely on explicit feedback from intermediate routers.

From the point of view of a control system, the performance of a congestion control protocol can be characterized by stability, convergence, and transient behavior. Recent advances in the mathematical modeling of congestion control have stimulated research on the theoretical analysis of the stability of currently deployed Internet congestion control protocols as well as the design of new ones [1]–[4], [6], [7], [18]. Unfortunately, whether these protocols are stable depends not only on their control parameters but also on network parameters such as link capacity, feedback delays, and the number of active competing flows. Therefore, protocols are prone to instability if these parameters fall outside the region that satisfies the stability criteria. With regard to convergence, two aspects are usually considered: convergence to efficiency and convergence to fairness [5]. A protocol with a fast convergence
rate can not only exploit available bandwidth efficiently but also share bottleneck resources fairly with other competing flows as quickly as possible. Suppose that the available bandwidth of a flow in the steady state is P. The AIMD algorithm converges linearly to efficiency and fairness within O(P) time. VCP [12] and EMKC [14] only improve the convergence to efficiency from O(P) to O(ln P). By using a logarithm function in the source algorithm to drive the sending rate quickly to the desired value, CLTCP [17] improves both the convergence to efficiency and the convergence to fairness to O(ln ln P). XCP [13] and RCP [15] allocate the fair share of each flow directly, so both their convergence to efficiency and to fairness are O(1), i.e., a constant convergence rate.

Another important requirement of congestion control protocols is transient behavior, i.e., how a protocol moves from an arbitrary state to the steady state. Monotonic convergence is preferable to oscillatory convergence, since oscillations generally lead to capacity overshooting and result in transient packet losses.

Motivated by the limitations of existing congestion control protocols, we develop a new congestion control protocol, called ARROW-TCP, based on an explicit rate pre-assignment mechanism. Theoretical analysis and performance evaluation demonstrate that ARROW-TCP achieves both strong stability and fast convergence to efficiency and fairness without injecting much excess traffic into the network in multi-bottleneck scenarios. Moreover, the convergence rate is independent of link capacity. Meanwhile, ARROW-TCP converges to efficiency and fairness monotonically and achieves zero packet loss, zero queueing delay, and full link utilization in the steady state.

II. SYSTEM MODEL

We consider a communication network with a set L = {1, . . . , L} of L links shared by a set S = {1, . . . , S} of S sources.
For each link l ∈ L, the index set of sources using link l is denoted by S_l ⊆ S. Equivalently, for each flow r ∈ S, its route involves a set of links, a subset of L, denoted by L_r. To avoid redundant notation, we use l ∈ L to indicate the router associated with the egress link l. Also, we use r ∈ S to denote the flow
978-1-4244-4148-8/09/$25.00 ©2009 This full text paper was peer reviewed at the direction of IEEE Communications Society subject matter experts for publication in the IEEE "GLOBECOM" 2009 proceedings.
originated from source r. Further, we call a flow a bottlenecked flow of link l if it is actually throttled by link l; otherwise, it is an unbottlenecked flow of link l. We denote the set of bottlenecked flows of link l by S_l^c and the set of unbottlenecked flows by S_l^u. For each source r, we have the following:

• The congestion window w_r(t) (in number of packets);
• The packet size s_r (in bits);
• The RTT d_r, which is the sum of the forward delay d→_lr to its bottleneck link l (r ∈ S_l^c) and the backward delay d←_lr from link l, i.e., d_r = d→_lr + d←_lr;
• The throughput x_r (in bits/s).

For each link l, we have the following:

• The associated link capacity C_l > 0 (in bits/s);
• The target link utilization γ_l ≤ 1;
• The number of its bottlenecked flows N_l;
• The aggregate rate y_l (in bits/s) of all sources that use link l;
• The aggregate rate of its unbottlenecked flows u_l;
• The fair rate g_l, which is the desired amount of resource shared by link l's bottlenecked flows;
• The price p_l, used to indicate the link congestion level to its bottlenecked flows.

III. DESIGN RATIONALE
ARROW-TCP is a distributed, timer-driven, window-based protocol with a joint design of source and router algorithms. Sources and routers trigger the calculation of the window size and of the fair rate, respectively, in the same control interval, which distinguishes ARROW-TCP from event-driven protocols such as TCP, XCP, and many others. Therefore, in ARROW-TCP the window size is the amount of data to be sent not over an RTT but over a constant control interval.

A. Source Algorithm

We separate the control mechanism at the sources into two components, window control and burstiness control, for ease of design and of asynchronous upgrading.

1) Window control: Source r uses the feedback from its bottleneck link l to update its congestion window. The evolution of the congestion window over time is described by the following control law:

w_r(k + 1) = w_r(k) + β_r [w_r*(k − d←_lr) − w_r(k − d_r)],   (1)

where β_r is a control gain and w_r* is the desired optimal sending window, which depends on the fair rate g_l of the bottleneck link l. Sources update their congestion windows periodically with an interval of Δ according to (1). Given the packet size s_r of source r, the optimal window size is calculated from the fair rate g_l as follows:

w_r*(t) = g_l(t) Δ / s_r.   (2)

2) Burstiness control: The bursty nature of TCP at sub-RTT timescales can lead to sudden traffic surges with a negative impact on network stability. Pacing [19] has been proposed to counteract these effects by spacing the data packets evenly over an entire round-trip time, so that the data is not sent in a burst. In ARROW-TCP, sources do not send all data packets into the network immediately after updating the congestion window but space them evenly over the interval Δ. Therefore, the inter-packet interval δ_r is:

δ_r(k) = Δ / w_r(k).   (3)

B. Router Algorithm

The router computes the fair rate for its bottlenecked flows by dividing the residual bandwidth γ_l C_l − u_l(k) equally among all bottlenecked flows. The fair rate g_l(k) is also calculated over the time interval Δ:

g_l(k) = (γ_l C_l − u_l(k)) / N_l(k).   (4)

The estimation of the number of bottlenecked flows, N_l(k), and of the combined rate of the unbottlenecked flows, u_l(k), is addressed in Section V. In addition to the fair rate, routers communicate to the sources through link prices calculated as follows:

p_l(k) = (y_l(k) − γ_l C_l) / y_l(k).   (5)

For source r, the router with the maximum link price along its route is initially considered the bottleneck. However, when the system reaches the steady state, there may be multiple routers with zero link price along the route of source r. In this case, the router with the smallest fair rate is considered the bottleneck of source r. This bottleneck selection rule is justified by the result from [16], which states that every network has a max-min rate allocation.
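Taken together, Eqs. (1)–(5) form a small set of update rules; the Python sketch below restates them directly (a simplified illustration — the capacity, packet size, flow count, and Δ below are made-up values, not from the paper's experiments):

```python
# Sketch of the ARROW-TCP control laws (Eqs. (1)-(5)) for one link.
# All parameter values here are illustrative.

DELTA = 0.1          # control interval (s)

def fair_rate(gamma, C, u, N):
    """Eq. (4): split the residual bandwidth among N bottlenecked flows."""
    return (gamma * C - u) / N

def link_price(y, gamma, C):
    """Eq. (5): relative excess of the aggregate rate over target capacity."""
    return (y - gamma * C) / y

def optimal_window(g, s):
    """Eq. (2): window that sends at the fair rate g with packet size s bits."""
    return g * DELTA / s

def window_update(w_now, w_star_delayed, w_delayed, beta):
    """Eq. (1): move the window toward the (delayed) optimal window."""
    return w_now + beta * (w_star_delayed - w_delayed)

# Example: a 100 Mbit/s link, no unbottlenecked traffic, 4 bottlenecked flows.
g = fair_rate(gamma=1.0, C=100e6, u=0.0, N=4)    # 25 Mbit/s per flow
w_star = optimal_window(g, s=8000.0)             # 1000-byte packets
print(g, w_star)   # 25000000.0 312.5
```

A positive price from Eq. (5) flags a link whose aggregate rate exceeds the target capacity, matching the bottleneck-selection rule above.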
IV. PERFORMANCE ANALYSIS

A. Stability Analysis

The following lemma gives the equilibrium window size of each flow.

Lemma 1: Given that flow r, together with N_l − 1 other flows, is bottlenecked by a link l of capacity C_l, its stationary window size is ŵ_r = w_r* = (γ_l C_l − û_l) Δ / (N_l s_r).

The equilibrium is easily derived from Eqs. (1), (2), and (4) by setting the variation of the window size w_r(k) to zero. We omit the proof for brevity.

We next show that, for the asymptotic stability of source r, β_r depends only on d_r. Taking the z-transform of source r's window update equation (1) yields:

(z^(d_r+1) − z^(d_r) + β_r) W_r(z) = β_r z^(d→_lr) W_r*(z),   (6)

where W_r(z) and W_r*(z) are the z-transforms of w_r(k) and w_r*(k), respectively. Eq. (6) results in the transfer function

H_r(z) = W_r(z) / W_r*(z) = N_r(z) / D_r(z) = β_r z^(d→_lr) / (z^(d_r+1) − z^(d_r) + β_r).   (7)
The stability of the window control is determined by analyzing the roots of the characteristic equation

D_r(z) = z^(d_r+1) − z^(d_r) + β_r = 0.   (8)

A necessary and sufficient condition for source r to be asymptotically stable is that the roots of the characteristic equation (8) lie inside the unit circle in the z-plane. The location of these roots relative to the unit circle can be determined by applying the bilinear transformation v = (z + 1)/(z − 1) and then the Routh-Hurwitz stability criterion [21] to the transformed equation

D̂_r(v) = (v − 1)^(d_r+1) D_r(z)|_(z=(v+1)/(v−1))
       = (v + 1)^(d_r+1) − (v + 1)^(d_r) (v − 1) + β_r (v − 1)^(d_r+1)
       = Σ_(k=0)^(d_r+1) [d_r! / (k! (d_r + 1 − k)!)] [2k + (−1)^k (d_r + 1) β_r] v^(d_r+1−k) = 0.   (9)

Fig. 1. Window size dynamics vs. different configurations of β, where β = λ/(1.1369d + 0.9970), with λ = 1.4, 1.0, 0.6, and 0.4, respectively.
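As a numerical cross-check of this root-location argument, the roots of (8) can also be located directly (a sketch using numpy; d = 10 and the gain values below are illustrative):

```python
import numpy as np

def char_roots(d, beta):
    """Roots of the characteristic polynomial D(z) = z^(d+1) - z^d + beta (Eq. (8))."""
    coeffs = [1.0, -1.0] + [0.0] * (d - 1) + [beta]
    return np.roots(coeffs)

def is_stable(d, beta):
    """Asymptotic stability: all roots strictly inside the unit circle."""
    return bool(np.max(np.abs(char_roots(d, beta))) < 1.0)

d = 10
beta_small = 0.4 / (1.1369 * d + 0.9970)   # a small gain, well inside the region
print(is_stable(d, beta_small))            # True
print(is_stable(d, 0.5))                   # a large gain pushes a root outside -> False
```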
The bilinear transformation of the complex variable z into the new complex variable v maps the interior and the exterior of the unit circle in the z-plane onto the open left half and the open right half of the v-plane, respectively. Therefore, the asymptotic stability of source r can be determined by examining whether all the roots of Eq. (9) lie in the open left half of the v-plane. The stable region of β_r obtained by the Routh-Hurwitz stability criterion is given in Theorem 1, whose proof can be found in the longer version of this paper.

Theorem 1: Under any consistent bottleneck assignment that does not change over time, source r is asymptotically stable and converges to its stationary window size w_r* if and only if β_r ≤ β̄_r = 2.0052/(1.1369 d_r + 0.9970).

B. Transient Behavior

In particular, we consider a single-source single-link network with an equivalent capacity of 10 000/Δ packets/s. Letting β = λ/(1.1369d + 0.9970), we choose λ = 1.4, 1.0, 0.6, and 0.4, respectively, to examine the window evolution trajectory. Fig. 1 presents the results for a round-trip time d = 10. From Fig. 1, we observe that the initial overshoot shrinks as λ decreases and that the window size eventually converges to the steady state monotonically. Numerical results show that the window size converges monotonically to the stationary value when λ ≤ 0.5, with the RTT d varying from 1 to 2000, which is sufficiently large to capture current Internet delays.
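The trajectory experiment behind Fig. 1 can be replayed in a few lines (a sketch; w* is held constant and the values are illustrative):

```python
def window_trajectory(d, lam, w_star, steps):
    """Iterate Eq. (1) for a single source with w*(k) fixed at w_star."""
    beta = lam / (1.1369 * d + 0.9970)
    w = [0.0] * (d + 1)                      # zero initial history w(0..d)
    for _ in range(steps):
        w.append(w[-1] + beta * (w_star - w[-1 - d]))
    return w

traj = window_trajectory(d=10, lam=0.4, w_star=10000.0, steps=500)
# With lambda = 0.4 the window climbs toward w* and settles there;
# the authors report monotonic convergence for lambda <= 0.5.
print(abs(traj[-1] - 10000.0) < 1.0)   # True
```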
C. Capacity-Independent Exponential Convergence to Efficiency and Fairness

We show that under any consistent bottleneck assignment that does not change over time, ARROW-TCP converges to efficiency and fairness exponentially fast, and that the convergence rate is independent of the link capacity. We focus on a single link shared by N sources with homogeneous delays.

1) Convergence to Efficiency: We first give the following definition of efficiency.

Definition 1 ((1 − ε)-efficiency): For a given small positive constant ε (0 ≤ ε < 1), the system converges to (1 − ε)-efficiency in k_e(ε) steps if the system starts with y(0) = 0 and k_e(ε) is the smallest integer satisfying

∀k ≥ k_e(ε):  (γC − y(k)) / (γC) ≤ ε.   (10)

Theorem 2: Under any consistent bottleneck assignment that does not change over time, ARROW-TCP converges to (1 − ε)-efficiency exponentially, and the convergence rate is independent of the link capacity.

Proof: Denote by x(k) = w(k) s / Δ the sending rate of each flow. The window update equation (1) can be converted into the rate update equation

x(k + 1) = x(k) + β [x* − x(k − d)].   (11)

Summing (11) over all N flows, we find that the combined rate y(k) = Σ_(i=1)^N x_i(k) forms a delayed linear system:

y(k + 1) = y(k) + β [γC − y(k − d)].   (12)

Further, denoting e(k) = (γC − y(k)) / (γC), we get the initial values e(0) = e(1) = . . . = e(d) = 1. According to Definition 1, our objective is equivalent to proving that e(k) converges to zero exponentially. Eq. (12) can be rewritten as:

e(k + d + 1) − e(k + d) + β e(k) = 0.   (13)

Taking the z-transform of (13), we get

[z^(d+1) E(z) − z^(d+1) e(0) − . . . − z e(d)] − [z^d E(z) − z^d e(0) − . . . − z e(d − 1)] + β E(z) = 0.   (14)

Substituting e(0) = e(1) = . . . = e(d) = 1 into the above equation, we obtain:

E(z) = z^(d+1) / (z^(d+1) − z^d + β) = m_1 z / (z − z_1) + m_2 z / (z − z_2) + . . . + m_(d+1) z / (z − z_(d+1)),   (15)
where m_i are coefficients determined by the zeros z_i. The denominator of Eq. (15) is identical to the characteristic equation shown in Eq. (8). Thus, all zeros lie inside the unit circle if β lies in the stable region described in Theorem 1. Taking the inverse z-transform of Eq. (15), we obtain:

e(k) = m_1 z_1^k + m_2 z_2^k + . . . + m_(d+1) z_(d+1)^k.   (16)

Fig. 2. Format of ARROW-TCP packet header (fields: R_T, R_C, R_S, Link Price p_l, Fair Rate g_l, Proposed Size s_r, Window Size w_r, Flags, Unused; 32 bits wide)
Since |z_i| < 1, each component m_i z_i^k in Eq. (16) approaches zero exponentially. Moreover, the convergence rate of e(k), i.e., k_e(ε), depends exclusively on the roots z_i and is independent of the link capacity.

2) Convergence to Fairness: Again, we start by giving the definition of fairness.

Definition 2 ((1 − ε)-fairness): For a given small positive constant ε (0 ≤ ε < 1), (1 − ε)-fairness is reached in k_f(ε) steps if the system starts in the maximally unfair state and k_f(ε) is the smallest integer satisfying

∀k ≥ k_f(ε):  |x_r(k) − x_r*| / x_r* ≤ ε,  ∀r ∈ S.   (17)

Fig. 3. Illustration of the calculation of the number of bottlenecked flows
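Both convergence claims are easy to probe numerically. The sketch below iterates the error recursion (13) (equivalently, the per-flow update (11)) and checks (i) that the relative-error trajectory is identical for two very different capacities and (ii) that a newcomer joining N − 1 incumbents reaches its fair share; all parameter values are illustrative, and x* = C/N is assigned by hand rather than computed by a router:

```python
def rel_error(C, d, beta, steps, gamma=1.0):
    """Relative error e(k) = (gamma*C - y(k)) / (gamma*C) under Eq. (12)."""
    y = [0.0] * (d + 1)                       # y(0) = ... = y(d) = 0
    for _ in range(steps):
        y.append(y[-1] + beta * (gamma * C - y[-1 - d]))
    return [(gamma * C - v) / (gamma * C) for v in y]

def join_scenario(N, C, d, beta, steps):
    """A new flow (rate 0) joins N-1 flows holding all of C; iterate Eq. (11)."""
    x_star = C / N                            # common fair share, assigned by hand
    flows = [[C / (N - 1)] * (d + 1) for _ in range(N - 1)] + [[0.0] * (d + 1)]
    for _ in range(steps):
        for x in flows:
            x.append(x[-1] + beta * (x_star - x[-1 - d]))
    return [x[-1] for x in flows]

d, beta = 10, 0.03
e_small = rel_error(100e6, d, beta, 300)      # 100 Mbit/s link
e_large = rel_error(10e9, d, beta, 300)       # 10 Gbit/s link
print(max(abs(a - b) for a, b in zip(e_small, e_large)) < 1e-9)   # True

rates = join_scenario(N=10, C=100e6, d=d, beta=beta, steps=500)
print(all(abs(r - 10e6) / 10e6 < 1e-3 for r in rates))            # True
```

The first check mirrors Theorems 2 and 3: e(k) never references C, so the convergence speed cannot depend on it.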
Theorem 3: Under any consistent bottleneck assignment that does not change over time, ARROW-TCP converges to (1 − ε)-fairness exponentially, and the convergence rate is independent of the link capacity.

Proof: Consider the case where a new flow joins the network after N − 1 other flows have consumed all the link capacity. Letting e_r(k) = |x_r(k) − x_r*| / x_r*, we obtain the same equation as Eq. (13) for each source r. Analogously, we can derive the result stated in Theorem 3. We omit the detailed proof for brevity.

V. IMPLEMENTATION

We discuss the implementation details in this section.

A. Packet Header Format and Interactions between Sources and Links

The header format of an ARROW-TCP packet is illustrated in Fig. 2. The current window size is placed in the Window Size field of the packet header before the source injects a segment into the network. The first three 8-bit fields, R_T, R_C, and R_S, are expressed as hop counts from the source. The R_T field records the hop count from the source to the true bottleneck link; the R_C field carries the current hop count of the packet and is incremented by each router; and the R_S field is modified by any router that perceives its link price to be higher than that experienced by the flow at the preceding routers. A router identifies whether an incoming packet belongs to one of its bottlenecked flows by reading the R_T field in the packet header and checking whether R_T equals R_C. The router then computes, over the time interval Δ, the combined rate of the unbottlenecked flows and the number of bottlenecked flows, from which the fair rate follows according to Eq. (4). The fair rate is then fed back to the sources through the Fair Rate field of the packet.
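Under our reading of the header semantics above, a router's per-packet bookkeeping might look roughly as follows (a hypothetical sketch: the Packet class and the exact use of the Link Price field are our assumptions, not the paper's specification):

```python
# Sketch of per-packet header processing at a router.  RC is the running hop
# count, RT the hop the source currently claims as bottleneck, RS the hop of
# the highest link price seen so far.  Field handling beyond the paper's text
# is an illustrative assumption.

class Packet:
    def __init__(self, window_size, rt=0):
        self.window_size = window_size     # Window Size field (set by the source)
        self.fair_rate = float("inf")      # Fair Rate field (filled by routers)
        self.rt, self.rc, self.rs = rt, 0, 0
        self.price = -1.0                  # Link Price field (assumed semantics)

def process(pkt, my_price, my_fair_rate):
    pkt.rc += 1                            # every router increments the hop count
    if my_price > pkt.price:               # steeper congestion than any hop so far
        pkt.price = my_price
        pkt.rs = pkt.rc                    # remember where the max price occurred
    if pkt.rt == pkt.rc:                   # the flow claims this router as bottleneck
        pkt.fair_rate = min(pkt.fair_rate, my_fair_rate)
    return pkt

# A packet whose source believes hop 2 is its bottleneck, crossing 3 routers:
p = Packet(window_size=100, rt=2)
process(p, 0.0, 50e6)
process(p, 0.3, 30e6)   # highest price, and the claimed bottleneck hop
process(p, 0.1, 40e6)
print(p.rc, p.rs, p.fair_rate)   # 3 2 30000000.0
```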
B. Calculating the Number of Bottlenecked Flows

Since both sources and links operate on the same control interval Δ, and each source sends its packets in a fluid-like manner by spacing them evenly over Δ, it follows that, for any link l, the packets arriving during one interval generally originate from two consecutive intervals of their sources, as depicted in Fig. 3. For example, supposing source r is bottlenecked by link l, link l's control interval can be separated by a time demarcation such that packets arriving before and after the demarcation originate from the kth and (k + 1)th interval of source r, respectively. We use a fraction θ to represent this demarcation, which splits link l's control interval into (1 − θ)Δ and θΔ (see also Fig. 3). Since packets are sent in a fluid-like manner with the inter-packet interval given by Eq. (3), the numbers of packets arriving from source r in the (1 − θ)Δ and θΔ portions are (1 − θ)w_r(k) and θw_r(k + 1), respectively. Therefore, summing over every packet p arriving at link l during one control interval Δ,

Σ_(p∈r) 1/(Window Size carried in p) = Σ_(i=1)^((1−θ)w_r(k)) 1/w_r(k) + Σ_(j=1)^(θw_r(k+1)) 1/w_r(k + 1) = 1.   (18)

Generalizing to all N_l flows bottlenecked by link l, we have

Σ_(r∈S_l^c) Σ_(p∈r) 1/(Window Size carried in p) = Σ_(r∈S_l^c) ( Σ_(i=1)^((1−θ)w_r(k)) 1/w_r(k) + Σ_(j=1)^(θw_r(k+1)) 1/w_r(k + 1) ) = N_l.   (19)
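The counting rule can be checked on a toy packet trace (a sketch; for simplicity every packet of a flow carries the same window size, i.e., θ = 0):

```python
def count_bottlenecked(packets):
    """Eq. (19): sum the reciprocal Window Size over one control interval."""
    return sum(1.0 / w for w in packets)

# Three bottlenecked flows; each sends w packets carrying window size w,
# so every flow contributes exactly w * (1/w) = 1 to the sum.
trace = []
for w in (10, 25, 40):
    trace += [w] * w
print(round(count_bottlenecked(trace)))   # 3
```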
Eq. (19) indicates that the number of bottlenecked flows of link l can be obtained by summing the reciprocal of the Window Size value carried in each incoming packet of its bottlenecked flows over one control interval Δ.

VI. SIMULATION

The performance of ARROW-TCP is validated in the NS2 simulator [22], with XCP and RCP selected for comparison. For XCP and RCP, we use the parameters suggested in their original papers, i.e., α = 0.4, β = 0.226 for XCP and α = 0.4, β = 1.0 for RCP. For ARROW-TCP, the parameter λ tuning the control gain β is set to 0.4, and the time interval Δ = 100 ms. Unless stated otherwise, the packet size is 1000 bytes, the bottleneck link capacity in the dumbbell topology is 100 Mbps, and the buffer size is 1 Mbyte (approximately 1000 packets).

Experiment 1: Transient behavior in the presence of burst traffic under a dumbbell topology. In this experiment we examine the convergence and transient behavior of the protocols in the presence of burst traffic under a dumbbell topology. Flow 1, with an RTT of 400 ms, starts at t = 0 s, and 9 additional flows with an RTT of 10 ms join together at t = 50 s. The simulation runtime is 100 seconds. Fig. 4 presents the per-flow throughput and the total throughput of all flows. Note that the Y-axis for RCP is on a logarithmic scale because of its high capacity overshoot. The stability of XCP depends on the RTT; as a result, XCP exhibits an unstable throughput trajectory. The total sending rate of RCP overshoots the link capacity by approximately 9 times, which certainly results in a surge of packet losses during the transient. The reason is that an RCP source applies the feedback directly, while flow 1, with its long RTT, does not release its consumed bandwidth quickly enough after the other flows join the network. ARROW-TCP eliminates capacity overshooting and converges to a steady throughput monotonically, which confirms our theoretical analysis.

Fig. 4. Experiment 1: Transient behavior in the presence of burst traffic under dumbbell topology ((a) XCP, (b) RCP, (c) ARROW-TCP)

Experiment 2: Stability in the presence of mice flows under a dumbbell topology. In this experiment, we investigate the stability of elephant flows affected by randomly arriving and departing mice flows, and we examine the packet loss rate. Flow 1, with an RTT of 400 ms, and flow 2, with an RTT of 100 ms, start transmission at t = 0 s and t = 10 s, respectively. From t = 20 s, 300 mice flows join. The number of active flows at any instant is plotted in Fig. 5a. The simulation results, shown in Fig. 5, indicate that RCP and XCP suffer packet losses, measured every 100 ms, in the presence of random mice flows. Although RCP achieves a fair allocation, its capacity overshooting leads to bursts of packet losses that can reach 40%. The packet loss rate of XCP is even higher than that of RCP most of the time due to its instability. ARROW-TCP achieves satisfactory stability and fair allocation in this dynamic environment. Moreover, we observe neither packet losses nor queue build-up for ARROW-TCP over the whole simulation runtime.

Fig. 5. Experiment 2: Stability in the presence of mice flows under dumbbell topology ((a) Number of active sessions, (b) Packet loss rate, (c) XCP, (d) RCP, (e) ARROW-TCP)

Experiment 3: Stability and max-min fair allocation under a multi-bottleneck topology. In this experiment, we test the stability and fair allocation in
a multi-bottleneck topology, shown in Fig. 6a. To reduce transient packet losses, we set the router buffer size sufficiently large (at least one BDP). Flow 1, with an RTT of 202 ms, starts at t = 0 s, and flows 2-9, with the same RTT of 2 ms, start transmission after 50 seconds. The simulation results are shown in Fig. 6. As the throughput trajectories of flows 2-9 almost overlap each other for both RCP and ARROW-TCP, we use the same line type to illustrate their throughput evolutions. From Fig. 6, we can see clearly that XCP and RCP become unstable after flows 2-9 join the network. The reason is that XCP and RCP are only locally stable and their stability criteria are derived under the assumption that all flows have the same bounded delay. As a result, XCP and RCP overshoot the link capacity to different extents, queues build up, and the average RTT computed by a router may not reflect the correct delay of the flows being controlled by that router, which further worsens their stability. ARROW-TCP maintains stable window evolutions and achieves a max-min fair allocation in multi-bottleneck networks.

Fig. 6. Experiment 3: Stability and max-min fair allocation under multi-bottleneck topology ((a) multi-bottleneck topology, with Link 1 at 150 Mbps, 1 ms and Link 2 at 100 Mbps, 1 ms; (b) XCP, (c) RCP, (d) ARROW-TCP)

VII. CONCLUSION

In this paper, we proposed a novel distributed congestion control protocol, named ARROW-TCP, based on explicit rate assignment. ARROW-TCP consists of a source algorithm and a router algorithm, both of which are key mechanisms for stability and convergence. Each source uses the fair rate calculated by its bottleneck router to update its window size and employs the link price to manage its bottleneck membership. Theoretical analysis and simulation demonstrate that ARROW-TCP converges exponentially to efficiency and fairness. Moreover, ARROW-TCP outperforms XCP and RCP in terms of stability in multi-bottleneck topologies. Meanwhile, ARROW-TCP avoids capacity overshooting behavior and obtains the ideal performance of zero queueing delay and zero packet loss.

ACKNOWLEDGMENT

This work is supported in part by the National Natural Science Foundation of China under Grants 60673164 and 60873265; the Program for Changjiang Scholars and Innovative Research Team in University of China under Grant IRT0661; and the U.S. National Science Foundation CAREER Award under Grant ECS-0348694.

REFERENCES

[1] S. Deb and R. Srikant, "Global stability of congestion controllers for the Internet," in Proc. IEEE Conference on Decision and Control, 2002.
[2] L. Ying, G. E. Dullerud, and R. Srikant, "Global stability of Internet congestion controllers with heterogeneous delays," IEEE/ACM Transactions on Networking, vol. 14, no. 3, pp. 579-591, 2006.
[3] P. Ranjan, R. J. La, and E. H. Abed, "Global stability conditions for rate control with arbitrary communication delays," IEEE/ACM Transactions on Networking, vol. 14, no. 1, pp. 94-107, 2006.
[4] F. Kelly, G. Raina, and T. Voice, "Stability and fairness of explicit congestion control with small buffers," ACM SIGCOMM Computer Communication Review, vol. 38, no. 3, pp. 51-62, 2008.
[5] D. Loguinov and H. Radha, "End-to-end rate-based congestion control: convergence properties and scalability analysis," IEEE/ACM Transactions on Networking, vol. 11, no. 5, pp. 564-577, 2003.
[6] V. Misra, W. B. Gong, and D. F. Towsley, "Fluid-based analysis of a network of AQM routers supporting TCP flows with an application to RED," in Proc. ACM SIGCOMM, pp. 151-160, 2000.
[7] C. V. Hollot, V. Misra, D. F. Towsley, and W. Gong, "A control theoretic analysis of RED," in Proc. IEEE INFOCOM, pp. 1510-1519, 2001.
[8] S. Floyd, "HighSpeed TCP for large congestion windows," IETF RFC 3649, Experimental, December 2003.
[9] I. Rhee and L. Xu, "CUBIC: A new TCP-friendly high-speed TCP variant," in Proc. 3rd Workshop on Protocols for Fast Long-Distance Networks (PFLDnet), 2005.
[10] D. J. Leith and R. Shorten, "H-TCP: Protocol for high-speed long distance networks," in Proc. 2nd Workshop on Protocols for Fast Long-Distance Networks (PFLDnet), 2004.
[11] S. Bhandarkar, S. Jain, and A. L. N. Reddy, "LTCP: Improving the performance of TCP in highspeed networks," ACM SIGCOMM Computer Communication Review, vol. 36, no. 1, pp. 41-50, 2006.
[12] Y. Xia, L. Subramanian, I. Stoica, and S. Kalyanaraman, "One more bit is enough," in Proc. ACM SIGCOMM, August 2005.
[13] D. Katabi, M. Handley, and C. Rohrs, "Congestion control for high bandwidth-delay product networks," in Proc. ACM SIGCOMM, August 2002.
[14] Y. Zhang, S. R. Kang, and D. Loguinov, "Delayed stability and performance of distributed congestion control," in Proc. ACM SIGCOMM, August 2004.
[15] N. Dukkipati, M. Kobayashi, R. Zhang-Shen, and N. McKeown, "Processor sharing flows in the Internet," in Proc. IEEE IWQoS, 2005.
[16] Y. Zhang, D. Leonard, and D. Loguinov, "JetMax: Scalable max-min congestion control for high-speed heterogeneous networks," in Proc. IEEE INFOCOM, April 2006.
[17] X. Huang, C. Lin, F. Ren, G. Yang, P. D. Ungsunan, and Y. Wang, "Improving the convergence and stability of congestion control algorithm," in Proc. IEEE ICNP, 2007.
[18] S. Jain and D. Loguinov, "PIQI-RCP: Design and analysis of rate-based explicit congestion control," in Proc. IEEE IWQoS, 2007.
[19] A. Aggarwal, S. Savage, and T. Anderson, "Understanding the performance of TCP pacing," in Proc. IEEE INFOCOM, 2000.
[20] J. Aweya, M. Ouellette, and D. Y. Montuno, "Design and stability analysis of a rate control algorithm using the Routh-Hurwitz stability criterion," IEEE/ACM Transactions on Networking, vol. 12, no. 4, pp. 719-732, August 2004.
[21] Q. Wu, Automatic Control. Beijing, China: Tsinghua University Press, 1990.
[22] The VINT project, "The ns-2 network simulator," http://www.isi.edu/nsnam/ns.