NADA: A Unified Congestion Control Scheme for Low-Latency Interactive Video

Xiaoqing Zhu and Rong Pan
Cisco Systems Inc., 170 W Tasman Drive, San Jose, CA 95134
{xiaoqzhu,ropan}@cisco.com

Abstract—Low-latency, interactive media applications (e.g., video conferencing) present a unique set of challenges for congestion control. Unlike TCP, the transport mechanism for interactive media needs to adapt fast to abrupt changes in available bandwidth, accommodate sluggish responses and output rate fluctuations of a live video encoder, and avoid high queuing delay over the network. An ideal scheme should also make effective use of all types of congestion signals from the network, including packet losses, queuing delay, and explicit congestion notification (ECN) markings. This paper presents a unified approach for congestion control of interactive video: network-assisted dynamic adaptation (NADA). In NADA, the sender regulates its sending rate based on a composite congestion signal calculated and reported by the receiver, which combines both implicit (e.g., loss and delay) and explicit (e.g., ECN marking) congestion indications from the network. Via a consistent set of sender adaptation rules, the scheme can reap the full benefits of proactive, explicit congestion notifications supported by advanced queue management schemes, yet it remains equally responsive in the absence of such notifications. Extensive simulation studies show that NADA interacts well with a wide variety of queue management schemes: conventional drop-tail, random early detection (RED), the recently proposed CoDel (Controlled Delay) and PIE (Proportional Integral controller Enhanced), as well as a token-bucket-based random marking scheme designed for Pre-Congestion Notification (PCN). Furthermore, NADA reacts fast to changes over the network, allows for weighted bandwidth sharing among multiple competing video streams, and sustains a substantial share of bottleneck bandwidth when coexisting with TCP.

I. INTRODUCTION

Recent years have seen a flourishing of video conferencing services, such as Microsoft Skype^1, Google Hangout^2, and Apple Facetime^3. Their growing popularity attests to the rising user demand for interactive rich media applications. Despite decades of research efforts and advances in video coding and networking technologies, however, it remains an elusive task to support low-latency interactive video over the Internet with satisfactory quality of experience.

The design of an effective congestion control scheme for interactive media faces a set of unique challenges. First, the scheme needs to achieve both high data rates and low queuing delays at steady state. It needs to react fast to abrupt changes in network bandwidth, while accommodating sluggish responses and random fluctuations in the output rate of a live video encoder.

^1 http://www.skype.com
^2 http://www.google.com/+/learnmore/hangouts/
^3 http://www.apple.com/iphone/features/facetime.html

As an end-to-end solution, the congestion control scheme should also interact well with the wide variety of queue management schemes that may be present in a practical network. Ideally, it should further be capable of integrating all forms of congestion indications from the network — either implicit, in the form of queuing delays and packet losses, or explicit, via random early congestion notification (ECN) markings [1] — into the calculation of a target sending rate.

In this paper, we present a scheme that fulfils the aforementioned requirements: Network-Assisted Dynamic Adaptation (NADA). It responds to both implicit and explicit congestion signals in a unified manner, and effortlessly achieves weighted bandwidth sharing amongst multiple competing video streams. A stream with a higher user-specified priority or more dynamic video content will naturally acquire a greater portion of the bottleneck network bandwidth. Since the NADA sender reacts to both network delay and loss by design, it can withstand the coexistence of TCP flows sharing the same bottleneck queue.

In NADA, the receiver aggregates per-packet drops, ECN markings, and one-way-delay measurements into a composite congestion signal. It periodically reports a time-smoothed version of the congestion signal back to the sender in the form of a Real-Time Transport Control Protocol (RTCP) [2] message. Upon receipt of such feedback, the NADA sender calculates the reference rate as a function of the congestion signal value, the dynamic rate range associated with the current video content, and the user-specified priority weight of the stream. It accommodates delayed responses and output rate fluctuations of a live video encoder via a local rate shaping buffer. The size of the rate shaping buffer influences both the outgoing rate at the sender and the target rate of the live video encoder.

Extensive simulations confirm that the proposed NADA scheme works well with a wide variety of network queue management schemes. Our test scenarios span conventional drop-tail queues, random early detection (RED) [3], the recently proposed CoDel (Controlled Delay) [4] and PIE (Proportional Integral controller Enhanced) [5], as well as a random marking scheme based on a token bucket algorithm originally designed for Pre-Congestion Notification (PCN) [6]. In all tests, NADA streams react fast to time-varying network bandwidth. It is also shown that multiple competing NADA streams can share a common bottleneck in a stable manner, without introducing oscillations in the rates of individual streams.

In the following, we first review related work in Section II. We then present an overview of the NADA scheme in Section III. Sections IV and V describe the proposed receiver and sender behavior, respectively. Section VI establishes the rate of a NADA stream at equilibrium and derives a stability criterion for the end-to-end system. In Section VII, we evaluate the performance of NADA via extensive network simulations.


II. BACKGROUND AND RELATED WORK

Research on congestion control has attracted long-standing interest over the past 25 years [7]. Early works aimed at tailoring congestion control to real-time media traffic so as to satisfy two criteria: TCP-friendliness and media-friendliness. The former requires that the outgoing rate of the video stream equal that of a comparable TCP flow [3], [8]. The latter requires that the media streaming rate remain smooth, thereby avoiding the typical sawtooth pattern in the rate variations of TCP [9], [10], [11].

The recently publicized phenomenon of “bufferbloat” — the prevalent use of excessive buffering over access networks, as documented in [12] — has rekindled interest in making congestion control work again for low-latency interactive video. Delay-based congestion control schemes emerge as natural candidate solutions for avoiding excessive self-inflicted queuing [13]. Two recent proposals to the Internet Engineering Task Force (IETF) working group on RTP media congestion avoidance techniques, [14] and [15], both rely on one-way delay measurements as the primary source of congestion indication. However, it is well known that delay-based congestion control schemes tend to lose throughput significantly when competing against their loss-based counterparts [16]. Our proposal of NADA [17] resolves this coexistence issue by integrating all forms of network congestion indications — delay, loss, and marking — into one composite congestion signal in the calculation of a target sending rate.



III. SYSTEM OVERVIEW

In this section, we first introduce the key components of an end-to-end system for congestion control of interactive video. Figure 1 provides such an overview. The notations are summarized in Table I. Our proposed NADA scheme comprises the sending and receiving agents in the system. They are designed to interact with various forms of live video encoders and network queue management schemes in a unified manner.

Fig. 1. Overview of the NADA system. The live video encoder adapts its sending rate based on aggregated congestion feedback from the receiver.

• Live video encoder. It encodes the incoming raw video frames into RTP packets. The target rate for the video encoder rate control is denoted as R_v. Note that the actual output rate from the encoder, R_o, naturally falls within a dynamic range [R_min, R_max], which depends on the video scene complexity and may change over time. The value of R_o may also fluctuate randomly around the input target rate R_v. Moreover, the live video encoder can only react to changes in the target encoding rate over coarse time intervals, on the order of seconds. We designate the typical encoder reaction time as τ_v. In our design of NADA, the operation of the live video encoder is treated as a black box; the NADA sending agent is designed to interact with an arbitrary live video encoder.

• NADA sending agent. It is responsible for calculating a reference rate R_n based on the composite network congestion signal as reported by the receiver. The NADA sending agent further regulates the video outgoing rate at R_s. A rate shaping buffer is employed to absorb the instantaneous difference between the video encoder output rate R_o and the regulated sending rate R_s. The size of the rate shaping buffer L_s, together with the reference rate R_n, determines the value of the encoder target rate R_v and the sending rate R_s.

• Network node. The NADA system is designed to work with different modes of operation at the network node. The supported queuing disciplines range from drop-tail to random early detection (RED) [3] and random early marking based on a token bucket algorithm for Pre-Congestion Notification (PCN) [6]. It also works with the more recently proposed schemes, such as CoDel [4] and PIE [5], which are designed to explicitly control queuing delays.

• NADA receiving agent. The NADA receiving agent derives the one-way delay d_n of each packet from timestamps in the Real-time Transport Protocol (RTP) [2] header. It records per-packet events of losses and explicit congestion notification (ECN) markings extracted from the IP header [1]. Combining all forms of congestion signals, the receiver calculates the value of a composite congestion signal in the form of an equivalent delay, \tilde{d}_n. It periodically reports the time-smoothed value of the composite congestion signal, x_n, back to the sender via RTP Control Protocol (RTCP) messages.

In the following, we first describe how the NADA receiver aggregates various forms of congestion signals from the network. We then explain how the NADA sender reacts to periodic receiver reports of the composite congestion signal by regulating both the target rate for the live video encoder and the sending rate over the network.

IV. NADA RECEIVER BEHAVIOR

The role of the NADA receiver is fairly straightforward. It is in charge of four steps: a) monitoring per-packet one-way delay measurements, packet losses, and random marking statistics; b) aggregating all forms of congestion indication into a composite signal; c) calculating a time-smoothed value of the congestion signal; and d) sending periodic reports of the composite congestion signal back to the sender.

TABLE I. LIST OF NOTATIONS.

n            index of video packet, as subscript
d_n          measured one-way delay for the n-th packet
1^M_n        binary indicator of random marking for the n-th packet
1^L_n        binary indicator of loss of the n-th packet
\tilde{d}_n  composite congestion signal in the form of an equivalent delay
x_n          time-smoothed value of the composite congestion signal
R_min        minimum encoder output rate for the current video content
R_max        maximum encoder output rate for the current video content
τ_v          typical reaction time of encoder rate control
R_v          target rate for encoder rate control
R_o          output rate from live video encoder
R_n          reference rate based on the composite congestion signal
R_s          regulated sending rate as output of the rate shaping buffer
L_s          size of the rate shaping buffer

A. Monitoring and aggregating per-packet statistics

The receiver observes and estimates the one-way delay d_n for the n-th packet, the ECN marking event 1^M_n, and the packet loss event 1^L_n. Here, 1^M_n and 1^L_n are binary indicators: a value of 1 corresponds to a marked or lost packet, and a value of 0 indicates no marking or loss. The equivalent delay \tilde{d}_n is calculated as follows:

    \tilde{d}_n = d_n + 1^M_n d_M + 1^L_n d_L.    (1)

Here, d_M is a prescribed delay penalty corresponding to an observed ECN marking event (e.g., d_M = 200 ms); d_L is a prescribed delay penalty corresponding to an observed packet loss event (e.g., d_L = 1 second).

B. Calculating time-smoothed values

The receiver calculates a time-smoothed version of the composite congestion signal via exponential averaging:

    x_n = \alpha \tilde{d}_n + (1 - \alpha) x_{n-1}.    (2)

The weighting parameter 0 < α < 1 adjusts the level of smoothing. A larger value of α ensures faster responsiveness of the NADA rate adaptation, at the expense of reduced system stability.

C. Sending Periodic Feedback

Periodically, the receiver sends the updated value of x_n back to the sender in RTCP messages, to aid the sender in its calculation of the target rate. The size of an acknowledgement packet is typically on the order of tens of bytes, significantly smaller than the average video packet size. Therefore, the bandwidth overhead of the receiver acknowledgement stream is sufficiently low.
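To make the receiver-side computation concrete, the following Python sketch combines the per-packet aggregation of (1) with the exponential smoothing of (2). It is a minimal illustration rather than the authors' implementation: the class and method names are hypothetical, while the penalty values follow the examples in the text (d_M = 200 ms, d_L = 1 s) and the smoothing weight matches the simulation setting of Section VII (α = 0.001).

    D_M = 0.2      # delay penalty per ECN-marked packet, seconds (200 ms)
    D_L = 1.0      # delay penalty per lost packet, seconds (1 s)
    ALPHA = 0.001  # exponential smoothing weight

    class NadaReceiver:
        def __init__(self):
            self.x = 0.0  # time-smoothed composite congestion signal x_n

        def on_packet(self, one_way_delay, marked, lost):
            """Update the composite signal for the n-th packet.

            one_way_delay: d_n in seconds; marked/lost: the binary
            indicators 1^M_n and 1^L_n from Section IV-A.
            """
            # Eq. (1): fold marking and loss events into an equivalent delay.
            d_tilde = one_way_delay \
                + (D_M if marked else 0.0) \
                + (D_L if lost else 0.0)
            # Eq. (2): exponential averaging of the composite signal.
            self.x = ALPHA * d_tilde + (1.0 - ALPHA) * self.x
            return self.x

The receiver would then report self.x in its periodic RTCP feedback, as described above.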

V. NADA SENDER BEHAVIOR

Figure 2 provides a more detailed view of the NADA sender. Upon receipt of an RTCP report from the receiver, the NADA sender updates its calculation of the reference rate Rn as a function of the network congestion signal. It further adjusts both the target rate for the live video encoder Rv and the sending rate Rs over the network based on the updated value of Rn , as well as the size of the rate shaping buffer.

Fig. 2. Components of a NADA sender.

A. Rate shaping buffer

The rate shaping buffer is employed to absorb any instantaneous mismatch between the encoder output rate R_o and the regulated sending rate R_s. The size of the buffer evolves from time t − τ to time t as:

    L_s(t) = \max[0, L_s(t - \tau) + (R_o - R_s)\tau].    (3)

A large rate shaping buffer contributes to higher end-to-end delay, which may harm the performance of real-time media communications. Therefore, the sender has a strong incentive to constrain the size of the shaping buffer. As will be discussed later in Section V-C, the NADA sender can deplete the rate shaping buffer by increasing the sending rate R_s, or limit its growth by reducing the video encoder target rate R_v.

B. Calculation of Reference Rate

The sender calculates the reference rate R_n based on the composite network congestion signal from receiver RTCP reports. It first compensates for the effect of delayed observation via a linear predictor:

    \hat{x} = x(t) + \frac{x(t) - x(t - \delta)}{\delta} \tau_o.    (4)

In (4), δ denotes the interval between two consecutive received video packets. The prediction parameter τ_o is a reference time lag for the compensation step. The reference rate is then calculated as a function of \hat{x}:

    R_n = R_{\min} + w (R_{\max} - R_{\min}) \frac{x_{ref}}{\hat{x}}.    (5)

Here, R_min and R_max denote the content-dependent rate range the encoder can generate. The weight of the priority level is designated as w. The reference congestion signal x_ref is typically chosen as the expected one-way propagation delay over the path, so that the maximum rate R_max can be achieved over an empty queue. The final reference rate R_n is constrained within the dynamic range [R_min, R_max].
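As an illustration of (4) and (5), the following Python sketch computes the reference rate from two consecutive receiver reports. The function name and argument layout are our own; the guard against a vanishing congestion estimate is an added safety assumption, and the final clipping step reflects the constraint to [R_min, R_max] stated above.

    def reference_rate(x_now, x_prev, delta, tau_o, r_min, r_max, w, x_ref):
        """Compute the NADA reference rate R_n from smoothed congestion reports.

        x_now, x_prev: consecutive values x(t) and x(t - delta), in seconds;
        delta: spacing between the two observations; tau_o: prediction lag.
        Rates may be in any consistent unit (e.g., bps).
        """
        # Eq. (4): linear predictor compensating for the delayed observation.
        x_hat = x_now + (x_now - x_prev) / delta * tau_o
        x_hat = max(x_hat, 1e-9)  # guard: keep the estimate strictly positive
        # Eq. (5): higher predicted congestion (larger x_hat) => lower rate.
        r_n = r_min + w * (r_max - r_min) * x_ref / x_hat
        # Constrain R_n to the encoder's dynamic rate range.
        return min(max(r_n, r_min), r_max)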

Intuitively, a rising value of \hat{x} indicates increased network congestion and leads to a lower value of R_n via (5). The back-off in video sending rate shall, in turn, help to relieve network congestion, thereby resulting in a new equilibrium of the overall system. It can also be noted that a stream with a higher priority w or a wider dynamic rate range (R_max − R_min) will settle at a higher rate than another stream under the same network condition observing the same congestion level. As explained in Section VI, this naturally leads to weighted bandwidth sharing amongst NADA streams. Finally, the combination of w and x_ref determines the sensitivity of the rate adaptation scheme in reaction to fluctuations in the observed composite congestion signal x.

Note that the sender does not need any explicit knowledge of the queue management scheme inside the network. Rather, it reacts to the aggregation of all forms of congestion indications via the composite congestion signal x_n from the receiver in a coherent manner.

C. Updating video encoder target and sending rate

The target rate for the live video encoder is updated based on both the reference rate R_n and the rate shaping buffer size L_s, as follows:

    R_v = R_n - \beta_v \frac{L_s}{\tau_v}.    (6)

Similarly, the outgoing rate is regulated based on both the reference rate R_n and the rate shaping buffer size L_s, such that:

    R_s = R_n + \beta_s \frac{L_s}{\tau_v}.    (7)

In both (6) and (7), the first term is the rate calculated from network congestion feedback alone. The second term captures the influence of the rate shaping buffer: a large buffer nudges the encoder target rate slightly below, and the sending rate slightly above, the reference rate R_n. Intuitively, the extra rate offset needed to completely drain the rate shaping buffer within the time frame of encoder rate adaptation, τ_v, is given by L_s/τ_v. The scaling parameters β_v and β_s can be tuned to balance between the competing goals of maintaining a small rate shaping buffer and keeping the system close to the reference rate.

D. Slow-start mechanism

Finally, special care needs to be taken during the startup phase of a video stream, since it may take several round-trip times before the sender can collect statistically robust information on network congestion. We propose to regulate the reference rate R_n so that it grows at most linearly, bounded by R_ss at time t:

    R_{ss}(t) = R_{\min} + \frac{t - t_0}{T}(R_{\max} - R_{\min}).    (8)

As illustrated in Fig. 3, t_0 is the start time of the stream, and T represents the time horizon over which the slow-start mechanism is effective.

Fig. 3. Illustration of the proposed slow-start mechanism.
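The sender-side bookkeeping of (3) and (6)–(8) can be summarized in a few lines of Python. This is a hedged sketch under the fluid-model semantics above; the helper names are our own, and the default parameter values mirror those later used in Section VII (β_v = β_s = 0.1, τ_v = 1 s).

    def shaping_buffer_step(l_s, r_o, r_s, tau):
        """Eq. (3): evolve the rate shaping buffer (bits) over interval tau (s)."""
        return max(0.0, l_s + (r_o - r_s) * tau)

    def split_rates(r_n, l_s, beta_v=0.1, beta_s=0.1, tau_v=1.0):
        """Eqs. (6)-(7): derive encoder target R_v and sending rate R_s from R_n.

        A non-empty shaping buffer pushes R_v below and R_s above the
        reference rate, draining the backlog within roughly tau_v seconds.
        """
        r_v = r_n - beta_v * l_s / tau_v
        r_s = r_n + beta_s * l_s / tau_v
        return r_v, r_s

    def slow_start_cap(t, t0, horizon, r_min, r_max):
        """Eq. (8): linear ramp bounding R_n during the startup phase."""
        return r_min + (t - t0) / horizon * (r_max - r_min)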

VI. THEORETICAL ANALYSIS

In this section, we first characterize how multiple NADA streams share the rate of a bottleneck link at equilibrium. We then derive a stability criterion for the NADA congestion control scheme near its operating point, based on a fluid traffic model of the system.

A. NADA Rate at Equilibrium

At equilibrium, all streams sharing a common bottleneck experience approximately the same one-way delay d_o^4, packet loss ratio p_L, and random marking ratio p_M. The composite congestion signal can be expressed as:

    x_o = (1 - p_L) d_o + p_M d_M + p_L d_L.    (9)

According to (5), the steady-state rate R of any stream satisfies:

    \frac{R - R_{\min}}{R_{\max} - R_{\min}} = w \frac{x_{ref}}{x_o}.    (10)

Figure 4 illustrates the functional form of the relationship between R and x_o. The left-hand side of (10) can be considered the relative bandwidth of the stream. When streams bear similar propagation delays and reference delay parameters x_ref (typically chosen as the expected propagation delay over the path), it becomes clear from (10) that the ratio between the relative bandwidths of different streams is dictated by their relative weights of importance. In other words, the excess rate each NADA stream receives above its minimum requirement, R − R_min, is weighted by its dynamic rate range (R_max − R_min) together with the user-specified priority level w.

Fig. 4. Illustration of the NADA sending rate at equilibrium.

^4 For low-latency interactive video applications, we assume that self-inflicted queuing delay constitutes the majority of the one-way delay.
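To see how (10) translates into weighted sharing, the Python sketch below solves for the common congestion level x_o and the resulting per-stream rates, under the additional assumption (not stated explicitly above) that the streams exactly fill the bottleneck capacity C.

    def equilibrium_rates(streams, capacity, x_ref):
        """Per-stream equilibrium rates for streams sharing one bottleneck.

        streams: list of (w, r_min, r_max) tuples; capacity: bottleneck rate C.
        Assumes the stream rates sum to C and that all streams observe the
        same composite congestion level x_o, per Section VI-A.
        """
        # Summing Eq. (10) over all streams and equating to C yields x_o.
        excess = capacity - sum(r_min for _, r_min, _ in streams)
        weighted_range = sum(w * (r_max - r_min) for w, r_min, r_max in streams)
        x_o = x_ref * weighted_range / excess
        # Back-substitute x_o into Eq. (10) for each stream.
        return [r_min + w * (r_max - r_min) * x_ref / x_o
                for w, r_min, r_max in streams]

    # Example: four streams with weights (1, 1, 2, 2) over a 9 Mbps link,
    # using the rate range of Section VII; the result is roughly
    # 1.5, 1.5, 3.0, and 3.0 Mbps, consistent with Section VII-E.
    print(equilibrium_rates([(1, 0.1e6, 6e6)] * 2 + [(2, 0.1e6, 6e6)] * 2,
                            9e6, 0.02))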

B. Stability Criterion

We now establish the stability criterion for a single NADA stream over a single bottleneck link with capacity C. For simplicity, we carry out the derivation for the case of a drop-tail queue; the stability criterion for other types of queues can be derived following the same methodology.

Recalling (2), (4), and (5), the update of the reference rate R(t) = R_n in NADA can be expressed as differential equations in a fluid traffic model:

    \dot{q}(t) = R(t) - C;    (11)

    \dot{x}(t) = \frac{-\log(1 - \alpha)}{\delta} \left( \frac{q(t)}{C} - x(t) \right);    (12)

    \hat{x}(t) = x(t) + \dot{x}(t)\,\tau_o;    (13)

    R(t) = R_{\min} + w (R_{\max} - R_{\min}) \frac{x_{ref}}{x(t - \tilde{\tau})}.    (14)

In (11), q(t) is the bottleneck queue size and \dot{q}(t) its first-order time derivative. Following the derivations in [18], the exponential smoothing filter in (2) is expressed in the fluid form (12), where δ corresponds to the inter-packet arrival interval. In (14), \tilde{\tau} denotes the round-trip time of the network.

We introduce \tau_k = \delta / (-\log(1 - \alpha)) as the time constant of the smoothing filter (2). Linearizing the system around the equilibrium point, R = C and x_o = w (R_{\max} - R_{\min}) x_{ref} / (R_o - R_{\min}), we have:

    \delta\dot{q}(t) = \delta R;    (15)

    \delta\dot{x}(t) = \frac{1}{\tau_k} \left( \frac{\delta q}{C} - \delta x \right);    (16)

    \delta\hat{x} = \delta x + \tau_o\,\delta\dot{x}(t);    (17)

    \delta R = -\frac{w (R_{\max} - R_{\min}) x_{ref}}{x_o^2}\,\delta x(t - \tilde{\tau}).    (18)

For very low minimum rates, R_{\min} ≈ 0 and x_o ≈ w R_{\max} x_{ref} / C, so (18) can be expressed as \delta R = -(C / x_o)\,\delta x(t - \tilde{\tau}). Consequently, we obtain the open-loop transfer function in the Laplace domain:

    G(s) = -\frac{1 + \tau_o s}{(1 + \tau_k s)\, x_o s}\, e^{-s \tilde{\tau}}.    (19)

The critical frequency \omega_c of the system can be solved numerically, such that:

    |G(j\omega_c)| = \left| \frac{1 + j\omega_c \tau_o}{(1 + j\omega_c \tau_k)\, \omega_c x_o} \right| = 1.    (20)

Following Bode analysis [19], the system is stable if its phase margin is positive:

    \angle G(j\omega_c) = -90^\circ + \arctan(\omega_c \tau_o) - \arctan(\omega_c \tau_k) - \omega_c \tilde{\tau} > -180^\circ.    (21)
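The stability condition in (20)–(21) is easy to check numerically. The Python sketch below finds the unity-gain crossover frequency of (19) by bisection (the gain magnitude is monotonically decreasing in ω) and returns the phase margin; a positive result indicates stability. This is our own illustrative check, not code from the paper.

    import math

    def phase_margin_deg(x_o, tau_o, tau_k, rtt):
        """Solve |G(j w_c)| = 1 (Eq. 20), then evaluate Eq. (21) at w_c.

        All arguments are in seconds. Assumes the gain exceeds 1 at the
        lower bracket and falls below 1 at the upper bracket.
        """
        def gain(w):
            num = math.hypot(1.0, w * tau_o)
            den = math.hypot(1.0, w * tau_k) * w * x_o
            return num / den

        lo, hi = 1e-6, 1e6  # bracket for the crossover frequency, rad/s
        for _ in range(200):
            mid = math.sqrt(lo * hi)  # geometric bisection
            if gain(mid) > 1.0:
                lo = mid
            else:
                hi = mid
        w_c = math.sqrt(lo * hi)

        # Phase of G(j w_c) relative to -180 degrees (Eq. 21).
        phase = (-90.0 + math.degrees(math.atan(w_c * tau_o))
                 - math.degrees(math.atan(w_c * tau_k))
                 - math.degrees(w_c * rtt))
        return phase + 180.0  # positive => stable

    # Example with tau_o = 0.1 s, tau_k = 1 s, RTT = 50 ms, x_o = 20 ms:
    # the crossover lands near 8 rad/s with a phase margin of roughly 23 deg.
    print(phase_margin_deg(x_o=0.02, tau_o=0.1, tau_k=1.0, rtt=0.05))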

VII. PERFORMANCE EVALUATION

A. Simulation Setup

We evaluate the proposed NADA scheme within ns-2 [20], an event-driven packet-level network simulator. All simulations follow a simple dumbbell network topology, as shown in Fig. 5. The one-way propagation delay of the path is fixed at 10 milliseconds (ms). We mimic the behavior of a live video encoder with a traffic generator in ns-2. The encoder can only update its output rate as dictated by the NADA sender after a delay of τ_v = 1 s. Furthermore, the output rate of the encoder fluctuates randomly by 10% around the target rate.

Unless otherwise stated, the parameters of the NADA scheme are fixed throughout all experiments. The rate range is set at R_max = 6 Mbps and R_min = 0.1 Mbps. The reference delay is fixed at x_ref = 20 ms. The smoothing parameter at the receiver is set at α = 0.001; the prediction parameter is set at τ_o = 0.1 s. We choose the scaling factors β_s = 0.1 and β_v = 0.1 for slightly shifting the sending and video encoding target rates from the NADA reference rate.

In our experiments, we investigate the interaction of NADA with a wide variety of queue management schemes, as follows:

• Drop-tail: In a conventional drop-tail queue, packets are dropped upon arrival if the queue size exceeds its limit at that time. The queue limit is set at 100 packets for a typical packet size of 1200 bytes. This corresponds to a queuing delay of 480 ms over a 2 Mbps link when the queue is full (100 × 1200 × 8 bits = 960 Kbits; 960 Kbits / 2 Mbps = 480 ms).



• RED: In a random early detection (RED) queue, packets are dropped upon arrival with probability p. The drop probability p is calculated as a function of the time-smoothed queue length, according to [3]. In our experiments, we set the minimum threshold at 5 packets and the maximum threshold at 95 packets. The maximum dropping probability is set at 100%.



• CoDel: As a newly proposed queue management scheme to address the bufferbloat problem, CoDel is designed to explicitly control the queuing delay at a given target level [4]. Excess packets are dropped upon departure based on per-packet measurements of queuing delay from built-in timestamps. The target queuing delay is set at 20 ms in our experiments.



• PIE: As an alternative, lightweight queuing scheme to address the bufferbloat problem, PIE aims at stabilizing the queuing delay around a target level by adaptively tuning the random drop probability [5]. Unlike CoDel, packets are dropped upon arrival and no explicit timestamps are required. The target delay is also set at 20 ms in our experiments.



• PCN: In this scheme, the explicit congestion notification (ECN) bit in the IP header of each packet is marked randomly. The marking probability is calculated based on a token bucket algorithm originally designed for the Pre-Congestion Notification (PCN) standard [6]. The target link utilization is set at 90%; the marking probability grows linearly with the token bucket occupancy as it varies between 1/3 and 2/3 of the full token bucket limit. A sketch of this marking logic follows the list.
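For concreteness, here is a Python sketch of such a token-bucket marker. It is a loose interpretation of the description above rather than the PCN specification: we assume that tokens accrue at the 90% target rate, are drained by arriving packets, and that the marking probability ramps from 0 to 1 as the token level falls from 2/3 to 1/3 of the bucket limit.

    import random

    class PcnMarker:
        def __init__(self, link_bps, bucket_bits, target_util=0.9):
            self.rate = target_util * link_bps  # token fill rate, bits/s
            self.limit = bucket_bits            # full token bucket limit, bits
            self.tokens = bucket_bits
            self.last_t = 0.0

        def on_packet(self, now, pkt_bits):
            """Return True if this packet should carry an ECN mark."""
            # Refill tokens for the elapsed interval, then drain for the packet.
            self.tokens = min(self.limit,
                              self.tokens + self.rate * (now - self.last_t))
            self.last_t = now
            self.tokens = max(0.0, self.tokens - pkt_bits)
            # Piecewise-linear marking probability vs. token occupancy
            # (assumed ramp direction: fewer tokens => more marking).
            frac = self.tokens / self.limit
            if frac >= 2.0 / 3.0:
                p = 0.0
            elif frac <= 1.0 / 3.0:
                p = 1.0
            else:
                p = (2.0 / 3.0 - frac) * 3.0
            return random.random() < p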


Fig. 5. Network topology for simulation evaluation of NADA. The one-way propagation delay is fixed at 10 ms.

Fig. 6. A single NADA stream over a time-varying link. The bottleneck queue follows the conventional drop-tail discipline.

B. Single Stream over Time-Varying Link

We first consider the simple scenario of a single NADA stream over a time-varying link. The bottleneck queue follows the simple drop-tail rule. The queue limit is set at 100 packets, or, equivalently, 960 Kbits for a typical packet size of 1200 bytes. The bottleneck bandwidth changes from 4 Mbps to 2 Mbps at time t = 20 seconds, and then back to 4 Mbps at time t = 80 seconds.

Figure 6 shows traces of the video streaming rate, the measured one-way delay of the stream, and the composite congestion signal reported by the receiver. As the link rate decreases, the network congestion signal — measured in terms of one-way packet delivery delay for drop-tail queues — increases accordingly. This leads the NADA agent to reduce its sending rate, according to (5)–(7). Similarly, when link capacity recovers, decreasing one-way delay leads to an increasing sending rate. It can be noted that the NADA stream reacts fast to such abrupt changes in link capacity, resulting in only brief periods of delay bursts.

C. Reaction to Losses and Markings

We then investigate how NADA reacts to other forms of congestion signals, such as packet drops and random early markings of the explicit congestion notification (ECN) field in the IP header [1]. Figures 7 and 8 show the traces of sending rate, one-way delay, and composite congestion signal for two different queuing mechanisms at the bottleneck link: random early drops based on RED [3], and ECN markings based on a token bucket algorithm originally designed for Pre-Congestion Notification (PCN) [6].

Fig. 7. A single NADA stream over a time-varying link. The bottleneck queue follows the random early detection (RED) algorithm [3].

Fig. 8. A single NADA stream over a time-varying link. The bottleneck queue performs random early markings based on a token bucket algorithm originally designed for PCN [6]. Target utilization of the link is set at 90%.

It can be observed that the sending rate of NADA closely follows the available link capacity over both RED and PCN, and the traces of the composite congestion signal look similar to the previous case with drop-tail queuing. On the other hand, the contributing factors of the congestion signal differ across queuing schemes. During periods of the low link rate of 2 Mbps, a persistent packet drop ratio of around 1% contributes to the higher composite congestion signal and consequently the lower rate of NADA. In the case of PCN, the random marking ratio increases with decreasing link rate and leads the sending rate to stabilize at 90% of link capacity. The slight link under-utilization, by design, results in an empty queue at steady state.

Fig. 9. Packet loss ratio and one-way delay of a single NADA stream at steady state, when interacting with different queuing schemes over a bottleneck link of 2 Mbps.

Fig. 10. Traces of sending rate of multiple NADA streams, as they share a common bottleneck link of 9 Mbps. The bottleneck queue follows the random early detection (RED) algorithm [3].

D. Tradeoff Between Loss and Delay

Figure 9 summarizes the steady-state performance of NADA in terms of one-way delay and packet loss ratio over a single bottleneck link of 2 Mbps. The different data points correspond to different queuing schemes at the bottleneck: conventional drop-tail queuing, random early detection (RED) [3], CoDel [4], PIE [5], and random early markings based on a token bucket algorithm originally designed for Pre-Congestion Notification (PCN) [6]. It can be noted that the presence of early drops in RED, CoDel, and PIE leads to lower one-way delay of the video stream. Unlike RED, which increases the packet loss ratio linearly with average queuing delay, both CoDel and PIE aim at controlling the queuing delay at a target level of 20 ms. Consequently, they yield lower one-way delay of the video stream, at the expense of slightly higher packet loss ratios. By design, the early random markings in PCN lead to the lowest delay and zero packet loss at steady state, while streaming at 90% of link capacity.

E. Multiple NADA Streams

Next, we consider the scenario where multiple NADA streams share the same bottleneck link. Figure 10 shows the traces of sending rate of four NADA streams as they gradually enter a link with capacity 9 Mbps. The bottleneck queue follows the random early detection (RED) scheme. Half of the streams have a priority weight of w = 1; the other half have a priority weight of w = 2. During the period when all four streams are active, the sending rates stabilize around 1.5 Mbps and 3.0 Mbps, respectively, for the two classes of streams.

Figure 11 shows the per-stream sending rate, the one-way delay experienced by each stream, and the composite congestion signal, when the link rate varies from 6 Mbps to 12 Mbps. All streams experience the same amount of one-way delay, packet loss ratio, and consequently the same value of the composite congestion signal. The ratio between the rates of the two classes of streams consistently stays at 1:2.

Fig. 11. Four NADA streams competing over a single bottleneck link: (a) one-way delay and composite congestion signal; (b) per-stream packet loss ratio; (c) per-stream sending rate. Streams 1 and 3 have a priority weight of w = 1; Streams 2 and 4 have a priority weight of w = 2. The link rate varies between 6 Mbps and 12 Mbps. The bottleneck queue follows the random early detection (RED) algorithm [3].

F. Competing with TCP Flows

Finally, we study the behavior of NADA streams in the presence of competing TCP traffic. Figure 12 summarizes the average rate obtained by two NADA streams and a competing TCP flow over various queuing mechanisms. The bottleneck link has a capacity of 4 Mbps. Since NADA senders react to the combined congestion signal of delay, loss, and markings, they are able to sustain a substantial portion of the bottleneck link capacity in the presence of a competing TCP flow. The behavior of TCP is most aggressive when a conventional drop-tail queue is in use. All three random-dropping-based queuing schemes, RED, CoDel, and PIE, lead to similar bandwidth allocations between the NADA and TCP flows. With PCN, the high random marking ratio of around 45% leaves very little room for the TCP flow to survive. The two NADA streams, nonetheless, are able to maintain their substantial share of bandwidth with a relative ratio of 2:1.

Fig. 12. Average rate of two NADA streams and a competing TCP flow, when sharing a common bottleneck link of 4 Mbps. The two NADA streams maintain a relative ratio of 1:2 in their sending rates, across all experiments.

VIII. CONCLUSIONS AND FUTURE WORK

This paper describes a unified approach for congestion control of real-time media: NADA (network-assisted dynamic adaptation). Our design of NADA follows the key idea of integrating all forms of congestion indications — delay, loss, and marking — into a composite signal. Consequently, it remains robust in the presence of loss-based congestion control schemes. As confirmed by extensive simulation results, NADA works well with a wide variety of queue management schemes, including drop-tail, RED, CoDel, PIE, and PCN-based random marking. Both theoretical analysis and simulation studies further confirm that NADA supports weighted bandwidth sharing. The bandwidth consumed by each stream is naturally weighted by its own dynamic rate range, as dictated by the video content, as well as the user-specified level of priority.

Encouraged by these preliminary results, we plan to evaluate the performance of NADA in a testbed setting. In particular, we are interested in studying how NADA interacts with an online encoder rate control process, over time-varying video contents captured in a real-world conferencing setting. We will also explore mechanisms for automatically tuning the parameters of NADA according to network load at steady state, content characteristics, and the loss-resiliency of the video stream.

REFERENCES

[1] K. K. Ramakrishnan, S. Floyd, and D. Black, “The addition of explicit congestion notification (ECN) to IP,” RFC 3168 (Proposed Standard), Sep. 2001.
[2] H. Schulzrinne, S. Casner, R. Frederick, and V. Jacobson, “RTP: A transport protocol for real-time applications,” RFC 3550 (Standard), Jul. 2003.
[3] S. Floyd and V. Jacobson, “Random early detection gateways for congestion avoidance,” IEEE/ACM Transactions on Networking, vol. 1, no. 4, pp. 397–413, Aug. 1993.
[4] K. Nichols and V. Jacobson, “A modern AQM is just one piece of the solution to bufferbloat,” ACM Queue, May 2012.
[5] R. Pan, P. Natarajan, C. Piglione, M. S. Prabhu, V. Subramanian, F. Baker, and B. VerSteeg, “PIE: A lightweight control scheme to address the bufferbloat problem,” in Proc. IEEE International Conference on High Performance Switching and Routing (HPSR'13), Taipei, Taiwan, Jul. 2013.
[6] M. Menth, F. Lehrieder, B. Briscoe, P. Eardley, T. Moncaster, J. Babiarz, A. Charny, X. J. Zhang, T. Taylor, K.-H. Chan, D. Satoh, R. Geib, and G. Karagiannis, “A survey of PCN-based admission control and flow termination,” IEEE Communications Surveys and Tutorials, vol. 12, no. 3, pp. 357–375, Jul. 2010.
[7] V. Jacobson, “Congestion avoidance and control,” in ACM Conference on Communications Architectures, Protocols and Applications (SIGCOMM'88), vol. 18, Stanford, CA, USA, Aug. 1988, pp. 157–173.
[8] S. Floyd, M. Handley, J. Pahdye, and J. Widmer, “TCP friendly rate control (TFRC): Protocol specification,” RFC 5348 (Proposed Standard), Sep. 2008.
[9] B. Wang, J. Kurose, P. Shenoy, and D. Towsley, “Multimedia streaming via TCP: An analytic performance study,” in Proc. ACM Multimedia (MM'04), New York, NY, USA, Oct. 2004.
[10] Z. Wang, S. Banerjee, and S. Jamin, “Media-friendliness of a slowly-responsive congestion control protocol,” in Proc. 14th International Workshop on Network and Operating Systems Support for Digital Audio and Video (NOSSDAV'04), Cork, Ireland, 2004, pp. 82–87.
[11] J. Yan, K. Katrinis, M. May, and B. Plattner, “Media- and TCP-friendly congestion control for scalable video streams,” IEEE Transactions on Multimedia, vol. 8, no. 2, pp. 196–206, Apr. 2006.
[12] J. Gettys, “Bufferbloat: Dark buffers in the Internet,” IEEE Internet Computing, vol. 15, no. 3, pp. 95–96, May 2011.
[13] J. Wang, D. X. Wei, and S. Low, “Modelling and stability of FAST TCP,” in Proc. 24th IEEE INFOCOM, Miami, FL, USA, Mar. 2005.
[14] H. Lundin, S. Holmer, and H. Alvestrand, “A Google congestion control algorithm for real-time communication,” Internet-Draft (Informational), Feb. 2013.
[15] P. O'Hanlon, “Congestion control algorithm for lower latency and lower loss media transport,” Internet-Draft (Informational), Apr. 2013.
[16] Ł. Budzisz, R. Stanojevic, A. Schlote, F. Baker, and R. Shorten, “On the fair coexistence of loss- and delay-based TCP,” IEEE/ACM Transactions on Networking, vol. 19, no. 6, pp. 1811–1824, Dec. 2011.
[17] X. Zhu and R. Pan, “NADA: A unified congestion control scheme for real-time media,” Internet-Draft (Informational), Mar. 2013.
[18] C. Hollot, V. Misra, D. Towsley, and W.-B. Gong, “A control theoretic analysis of RED,” in Proc. IEEE International Conference on Computer Communications (INFOCOM'01), Anchorage, AK, USA, Apr. 2001.
[19] G. Franklin, J. D. Powell, and A. Emami-Naeini, Feedback Control of Dynamic Systems. NJ, USA: Prentice Hall, 2006.
[20] “The network simulator ns-2,” http://www.isi.edu/nsnam/ns/.
