RACK: a time-based fast loss recovery
draft-ietf-tcpm-rack-02
Yuchung Cheng, Neal Cardwell, Nandita Dukkipati (Google)
IETF 98: Chicago, March 2017
What’s RACK (Recent ACK)?
- Key idea: time-based loss inference (not packet or sequence counting)
  - If a packet is delivered out of order, then packets sent chronologically before it are either lost or reordered
- Wait RTT/4 before retransmitting, in case the unacked packet is merely delayed
  - RTT/4 is empirically determined (more later on making it adaptive)
- Conceptually, RACK arms a (virtual) timer on every packet sent; the timers are updated by the latest RTT measurement

[Timing diagram: after the SYN / SYN-ACK / ACK handshake the sender transmits P1 and P2; a SACK of P2 arrives, so the sender expects the ACK of P1 by then, waits RTT/4 in case P1 was reordered, retransmits P1, and finally receives the ACK of P1/P2]
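The key idea above can be sketched in a few lines of Python. This is illustrative only; `Packet` and `rack_detect_loss` are invented names for this example, not the Linux implementation:

```python
# Sketch of RACK's time-based loss detection: a packet is marked lost
# if it was sent more than a reordering window (reo_wnd, ~RTT/4)
# before the most recently sent packet known to be delivered.
from dataclasses import dataclass

@dataclass
class Packet:
    seq: int          # sequence number
    sent_time: float  # when the packet was (re)transmitted, in seconds
    sacked: bool = False
    lost: bool = False

def rack_detect_loss(packets, reo_wnd):
    """Return the sequence numbers newly marked lost."""
    delivered = [p for p in packets if p.sacked]
    if not delivered:
        return []
    # RACK tracks the send time of the most recently sent delivered packet.
    rack_time = max(p.sent_time for p in delivered)
    newly_lost = []
    for p in packets:
        if not p.sacked and not p.lost and p.sent_time + reo_wnd < rack_time:
            p.lost = True            # eligible for fast retransmit
            newly_lost.append(p.seq)
    return newly_lost
```

For example, if P1 was sent well before P2 and only P2 is SACKed, P1 is declared lost once the gap exceeds `reo_wnd`; a packet sent within the window is left alone, which is the "wait RTT/4 in case it was merely reordered" step.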
Tail Loss Probe (TLP)
- Problem
  - Tail drops are common in request/response traffic
  - Tail drops lead to timeouts, which are often 10x longer than fast recovery
  - 70% of losses on Google.com are recovered via timeouts
- Goal
  - Reduce the tail latency of request/response transactions
- Approach
  - Convert RTOs into fast recovery
  - Solicit a DUPACK by retransmitting the last packet after 2 SRTTs
  - Requires RACK to trigger fast recovery

[Timing diagram: after the handshake (SYN / SYN-ACK + P0 / ACK) the sender transmits P1 and P2, both lost; after 2 SRTTs it sends a TLP (a retransmit of P2), the SACK of P2 starts RACK recovery of the tail loss, P1 is retransmitted, and the ACK of P1/P2 completes the exchange]
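The TLP approach can be sketched the same way (hypothetical helper names; the probe timeout of roughly 2 SRTTs is capped by the RTO, since firing later than the RTO would defeat the purpose):

```python
# Sketch of the Tail Loss Probe decision (illustrative only, not the
# Linux implementation). Times are in seconds.

def tlp_pto(srtt, rto):
    """Probe timeout: about 2 smoothed RTTs, never beyond the RTO."""
    return min(2 * srtt, rto)

def on_pto_fire(unacked):
    """When the probe timeout fires with no new ACKs, retransmit the
    *last* unacked packet. Its SACK/DUPACK lets RACK detect any
    earlier tail losses via fast recovery instead of an RTO."""
    if not unacked:
        return None
    return unacked[-1]  # the probe: highest-sequence outstanding packet
```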
What’s new since the last IETF (Nov)
1. Fully implemented in Linux 4.10
   a. On by default
   b. Reduced the number of loss recovery heuristics from 9 to 4: RACK, TLP, F-RTO, and DupThresh (RFC6675) remain; FACK, Early Retransmit (RFC5827), Thin-Dupack (RFC4653), and Forward Retransmit were removed
   c. Deployed in Google TCP
2. -02 draft
   a. New experiments on reordering window length and on removing DupThresh
   b. New text on interacting with congestion control
Exp: RACK+TLP vs DupACK threshold

Diffs compared to RFC6675 (DupThresh):

                               RACK     RACK + TLP   RACK + TLP + RFC6675 (Linux)
  Time in loss recovery       -0.5%       -24.0%       -24.1%
  %RTO reduced                -5.4%       -25.8%       -23.7%
  Retrans. rate (incl. TLP)    1.3%         1.5%         1.5%

4-way experiment at one Google DC in Europe for a week in 2017; ~1.5B flows sampled.
- The RFC6675-only retransmit rate is 1.3%
- RACK + TLP reduces recovery latency by 24% and may replace the DupThresh approach
How RACK interacts with congestion control

RACK influences congestion control indirectly:
- Congestion control (Reno/RFC5681):
  - On fast recovery, cwnd is reduced to ssthresh
  - On RTO, cwnd resets to 1
- By reducing RTOs, RACK + TLP speeds up fast recovery and avoids cwnd resets
  - Rationale: cwnd should only reset if the entire flight is lost and the ACK clock is also lost

Example: Reno congestion control with cwnd=20. Send 10 packets, all of which are dropped.

             Events                           Recovery time   cwnd when recovery ends
  Standard   RTO, then slow start             RTO + 4*RTT     1*2*2 = 4
  RACK+TLP   Fast recovery (PRR slow start)   2*RTT + 4*RTT   20/2 = 10
Next: smarter reordering window
- Current window is max(1ms, min_RTT/4)
  - Too low: high spurious retransmits if reordering exceeds the window
  - Too high: 1ms is too high inside a data center (RTT < 100us)
  - But = DupThresh
- Progressively phase out the DupThresh approach
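The current window choice is just a floored quarter of the minimum RTT; as a one-line sketch (function name invented, units in seconds):

```python
# Current RACK reordering window: min_RTT/4 with a 1 ms floor.
# Illustrative only; units are seconds.

def reo_wnd(min_rtt):
    return max(0.001, min_rtt / 4)
```

For a data-center path with RTT below 100 us the floor dominates, so the window is 1 ms, which is the "too high" case noted above.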
Conclusion

Vision: make TCP resilient and efficient under reordering and loss with one algorithm
- Better load balancing (e.g. multi-path, flowlets)
- Faster forwarding (e.g. parallel forwarding, wireless link-layer optimization)
- A simpler transport with a time-based approach

RACK is now the key loss recovery mechanism in Linux

Work in progress:
1. Optimize the reordering window for high reordering and/or low RTT
2. Pure time-based recovery by completely retiring the DupThresh approach