IEEE CQR 2012 May 17, 2012 - San Diego, CA
An Efficient and Scalable Engine for Large Scale Multimedia Overlay Networks
Pier Luca Montessoro Department of Electric, Managerial and Mechanical Engineering (DIEGM) University of Udine Italy
Stefan Wieser Laszlo Böszörmenyi
Institute of Information Technology (ITEC) Klagenfurt University Austria
Motivations
• Increasing demand for multimedia streaming and remote storage
• No control over the network infrastructure and limited cooperation from ISPs
• To make overlays a feasible solution we must provide:
  • Scalability
  • Flexibility through dynamic self-organization
  • Performance (fast packet forwarding)
Montessoro, Wieser, Böszörmenyi, CQR 2012, San Diego, CA
Architecture
[Figure: layered architecture. Overlay network layer: ON nodes A–F, with host A as sender and host F as receiver; ACK messages carry reservation info and collected indexes. REBOOK layer: resource reservation and keepalive messages exchanged along the path host A – node B – node D – host F. DLDS layer: DLDS tables and forwarding tables in the intermediate nodes. Physical network layer: physical links, ON virtual links and the flow route.]
REBOOK/DLDS
• REBOOK: deterministic, dynamic, per-flow resource reservation
  • It IS NOT another reservation protocol
  • It IS a distributed algorithm for efficient status-information handling within intermediate nodes
• DLDS (Distributed Linked Data Structure): the enabling algorithm

References:
• REBOOK: a deterministic, robust and scalable resource booking algorithm (JONS 2010)
• A novel algorithm for dynamic admission control of elastic flows (FITCE 2011)
• Distributed Linked Data Structures for Efficient Access to Information within Routers (ICUMT 2010)
• Efficient Management and Packets Forwarding for Multimedia Flows (JONS 2012)
DLDS (Distributed Linked Data Structure)
During setup: "pointers" collection
• store resource reservation information in routers AND
• keep track of pointers (memory addresses or indexes in tables) along the path
Afterwards:
• use the pointers to access status information without searching
→ constant cost to access the per-flow resource reservation info
→ virtual-circuit performance for packet forwarding
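The setup-then-direct-access idea can be sketched as follows. This is an illustrative model, not the actual REBOOK/DLDS code: each router stores the flow's reservation in a table and reports the index it used, and afterwards that index gives constant-cost access with no search. All names here are assumptions.

```python
class Router:
    def __init__(self, name):
        self.name = name
        self.reservations = []          # Resource Reservation Table

    def setup(self, flow_id, bandwidth):
        """Store reservation info; return the table index (the 'pointer')."""
        self.reservations.append({"flow": flow_id, "bw": bandwidth})
        return len(self.reservations) - 1

    def access(self, index):
        """Constant-cost access through the collected index -- no search."""
        return self.reservations[index]

# Setup phase: the reservation message walks the path and the ACK
# collects one index per router.
path = [Router("R1"), Router("R2"), Router("R3")]
collected = [(r, r.setup(flow_id=7, bandwidth=2_000_000)) for r in path]

# Data phase: per-flow state is reached directly through the indexes.
for router, idx in collected:
    info = router.access(idx)       # O(1), virtual-circuit-like cost
    assert info["flow"] == 7
```

With a hash or search-based lookup the per-packet cost would grow with the number of flows; here it stays one array access per router.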
Resource reservation and pointer collection
[Figure: a resource reservation message travels from the server (sender) through routers R1–R4 to the client (receiver), requesting bandwidth (req=2) and recording what each link can grant (res=2 on a 2 Mb/s link, res=1 on a 1 Mb/s link). Each router stores the reservation in its Resource Reservation Table; the resource reservation ACK message carries the table indexes back along the path. NOTE: in this context, ON nodes act as software routers.]
Fast packet forwarding
[Figure: a data packet carries its IP destination plus an RR index. At each router (R1–R4) the RR index gives direct access to the Forwarding Table row (Destination, Output Port, Local Index, Next Index) and to the Resource Reservation Table (Reservation Info); the Next Index is the same flow's index at the next router.]
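A minimal sketch of the forwarding scheme in the figure, under the assumption (illustrative names, not the actual implementation) that each router rewrites the packet's RR index with the Next Index valid at the next hop, so forwarding costs one array access per router:

```python
class FwdRouter:
    def __init__(self):
        self.table = []                 # Forwarding Table rows

    def add_entry(self, out_port, next_index):
        """Install a flow entry; return its local index."""
        self.table.append({"out_port": out_port, "next_index": next_index})
        return len(self.table) - 1

    def forward(self, packet):
        """One direct table access: read the port, rewrite the index."""
        entry = self.table[packet["rr_index"]]      # direct access, O(1)
        packet["rr_index"] = entry["next_index"]    # index for next hop
        return entry["out_port"]

# Two-hop example: R1's entry points at R2's entry for the same flow.
r1, r2 = FwdRouter(), FwdRouter()
idx2 = r2.add_entry(out_port=5, next_index=None)    # last hop
idx1 = r1.add_entry(out_port=3, next_index=idx2)

pkt = {"rr_index": idx1, "payload": b"data"}
port1 = r1.forward(pkt)     # 3; pkt now carries R2's index
port2 = r2.forward(pkt)     # 5
```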
A Few Problems
• Route changes, disappearing flows, faults of end nodes or routers
  • High-speed consistency check
  • Low-priority table cleanup process
• Need to dynamically change assigned resource amounts
  • Partial release
  • Distributed control function for optimality and fairness
Does it work?
[Figure: test topology — senders Snd1/Snd3/Snd5/Snd7 and receivers Rcv1/Rcv3/Rcv5/Rcv7 connected through routers Rtr1–Rtr7 over links of capacity 650; each sender starts 10 UDP flows with Rmin=15 and Rreq=25. One link is down between T1 and T2, causing route changes at T1 and T2. Plots: total packet rate per sender (0–300) and number of booked flows per sender node (0–12) over time.]
Does it work? (cont'd)
[Figure: topology with routers R0–R3, senders 0–3 and receivers 0–3. For each router, a plot over time compares per-packet "direct access" against "lookup" (normalized scale 0–1).]
Flocks: Flexible and Self-Organizing Overlay
• Gossip-based protocol
• Uses "Interest" to build the overlay
  • Idea: interested nodes "stick" together
  • Interest in neighbours computed locally
• QoS estimation and QoS-aware routing
  • QoS estimate to any node, fast and scalable to large networks
• Topology aggregation generates a hierarchical representation for scalability
  • Nodes only need a small local view
  • O(log n) runtime complexity → scalable

References:
• Flocks: Interest-based construction of overlay networks (MMEDIA 2010)
• Decentralized topology aggregation for QoS estimation in large overlay networks (NCA 2011)
Combined Overlay
• Why combine Flocks with REBOOK?
  • Flocks provides a flexible and scalable overlay
  • REBOOK keeps track of bandwidth allocation and avoids overcommitting links
  • DLDS enables fast, O(1) routing-table access and higher performance once routes are established
Evaluation: Overlay Setup
• Overlay with 160 Flock-REBOOK nodes
  • Nodes only have a small local view of 4 other nodes
  • Two random groups connected by a bottleneck
  • Within each group: interest in high bandwidth
• Artificial bottleneck crossed by all flows
  • Alternative paths lie outside the local view
  • Nodes must refer to aggregated information
  • REBOOK tracks allocated resources
• New reservations use the path with the highest estimated bandwidth
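The admission rule above ("use the path with the highest estimated bandwidth") can be read as a widest-path choice: a path's estimate is its bottleneck, the minimum residual bandwidth over its links. The sketch below is an assumption for illustration; link names and values are invented, not taken from the Flocks/REBOOK code.

```python
def bottleneck(path, residual):
    """Estimated bandwidth of a path = minimum residual bandwidth of its links."""
    return min(residual[link] for link in path)

def pick_path(candidate_paths, residual):
    """Choose the candidate path with the highest estimated bandwidth."""
    return max(candidate_paths, key=lambda p: bottleneck(p, residual))

# Residual (unreserved) bandwidth per link, e.g. from REBOOK's bookkeeping.
residual = {"a-b": 400, "b-d": 650, "a-c": 300, "c-d": 650}
paths = [("a-b", "b-d"), ("a-c", "c-d")]

best = pick_path(paths, residual)   # ("a-b", "b-d"): bottleneck 400 > 300
```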
Experimental Results: Admitted Flows
• Admitted flows increased significantly
• Overlay avoids overcommitting links
• Discovers alternative paths with QoS estimation
[Plot: active flows (0–80) over simulation time, without vs. with REBOOK and QoS estimation.]
Experimental Results: Admitted Flows
• Bandwidth per flow increases as well
• Overlay uses alternative paths before REBOOK needs to reduce resources
[Plot: average reserved bandwidth per flow (0–7000) over simulation time, without vs. with REBOOK and QoS estimation.]
Experimental results (cont'd)
Resource reservation in a small congested network
[Plots over simulation time: amount of reserved resource (0–1200); number of flows (0–50), split into partially reserved and fully reserved; reserved resource amount in routers (0–25000).]
Performance

Activity                       CPU time
Resource reservation setup     200 ns per flow
Keepalive message handling     100 ns per flow
Resource reservation release   25 ns per flow
Forwarding table access        10.6 ns per packet
Conclusion
• Flocks provides a flexible and scalable overlay for multimedia delivery
• The hierarchical aggregation and QoS estimation of Flocks allow discovering alternative paths using small, local views
• REBOOK gives overlays a scalable way to keep track of resource utilization
• REBOOK and DLDS enable efficient, fast packet forwarding for overlays
Thank you! www.montessoro.it
Flocks: Interests
• Each node has two sets of name-value pairs
  • May differ from node to node, and over time
  • Used to rank neighbours
• Property-Set ("what I have")
  • Shared with neighbours: virtual, uncertain, inheritable
• Interest-Set ("what I want")
  • Not shared, evaluated locally
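The local ranking described above can be sketched as follows. The scoring function is an assumption for illustration (the slides do not specify it): a node scores each neighbour's shared Property-Set against its own private Interest-Set and ranks neighbours accordingly, so interested nodes "stick" together.

```python
def interest(interest_set, neighbour_properties):
    """Count how many local interests the neighbour's properties satisfy."""
    return sum(1 for name, wanted in interest_set.items()
               if neighbour_properties.get(name) == wanted)

# Local, private Interest-Set of this node ("what I want").
my_interests = {"bandwidth": "high", "content": "video"}

# Property-Sets received from neighbours ("what I have"; shared, uncertain).
neighbours = {
    "n1": {"bandwidth": "high", "content": "video"},
    "n2": {"bandwidth": "low",  "content": "video"},
    "n3": {"bandwidth": "high", "content": "audio"},
}

# Rank neighbours by locally computed interest, best match first.
ranking = sorted(neighbours,
                 key=lambda n: interest(my_interests, neighbours[n]),
                 reverse=True)
# ranking[0] == "n1" (matches both interests)
```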
FLOCKS PERFORMANCE: TRAFFIC
[Plot: average incoming traffic per node, in kbit/s sent per node (0–100), over simulation time 0–1200 s; phases marked: Stabilization, Failure, Churn, Recovery. Stefan Wieser, Laszlo Böszörmenyi — Decentralized topology aggregation for QoS estimation.]
PERFORMANCE: QOS
Measures closeness to the global optimum.
[Plot: fraction of bandwidth with optimal link placement (0.0–1.0) over simulation time 0–1200 s; phases marked: Stabilization, Failure, Churn, Recovery.]