ALIEN Tutorial on Advanced Technologies in OpenFlow Networks H. Woesner* U. Toseef§ M. Sune* Ł. Ogrodowczyk# J. Matias¶ R. Doriguzzi Corin† E. Jacob¶ B. Belter#, K. Pentikousis§ *BISDN, §EICT, #PSNC, ¶EHU, †CREATE-NET
EWSDN 2014, Budapest, Hungary 1 September 2014
List of Topics
• Introduction to SDN experimental facilities in Europe (H. Woesner)
• The ALIEN HAL architecture (U. Toseef)
• The ALIEN HAL implementation on Cavium Octeon (M. Suñé)
• The ALIEN HAL implementation on EZappliance (Ł. Ogrodowczyk)
• The ALIEN HAL implementation on DOCSIS (J. Matias)
• Designing and orchestrating experiments on ALIEN devices (R. Doriguzzi Corin)
• AAA framework in ALIEN (U. Toseef)
• Experimentation experience and results (E. Jacob)
• Summary and conclusions (B. Belter)
Introduction to SDN experimental facilities in Europe: OFELIA and ALIEN Hagen Woesner
[email protected]
EWSDN Workshop, Budapest, Hungary 1 September, 2014
An example of OpenFlow facility: the OFELIA testbed “The OFELIA project (www.fp7-ofelia.eu) offers a Pan-European testbed to the research community for conducting experiments in an OpenFlow-enabled wide-area network.”
OFELIA is dead. Long live OFELIA!
• EU FP7 project from October 2010 to November 2013
  – Part of the FIRE initiative (Future Internet Research and Experimentation)
  – OpenFlow experimentation was central to OFELIA
    • OpenFlow 1.0 eventually became an obstacle
• During the lifetime of the project
  – SDN took off
    • OpenFlow is still an essential part of SDN, but not the only one
  – NFV only started at the end of 2012
• OFELIA islands are committed to operation until the end of 2014 (some until 2015)
  – Then, where do we go from here?
What should be experimented with for the Future Internet?
– Predictions are always difficult, especially about the future
Things we did (at least partially) right in OFELIA
• Free access to everyone (from early on in the project)
  – No cumbersome contracts etc.
  – Low entrance barriers
• Start from existing open-source software projects
  – Expedient + Opt-In SFA control framework
  – FlowVisor-based network virtualization
  – To date, no clearly better alternative has emerged
    • FlowSpace Firewall, MPLS underlay?, ONOS (complete NATting?)
• Distributed architecture
  – When OFELIA fails today, it is in the centralized parts (DNS, mail, etc.)
• Open Calls
  – New partners, new ideas, more active contributions
Things we did wrong in OFELIA
• Inter-island experimentation based on a centralized structure
  – The hub in Ghent is OK for experimentation, but we should have designed for fully meshed operation
• Beyond the NEC IP8800 and some other L2 switches, no hardware was available beyond OF 1.0
• "OCF in a box" existed too long as an idea only
  – GitHub is just one part of the story
  – Be user-friendly, not just developer-friendly
• Did anyone really care about the Future Internet when it came to OpenFlow?
  – By then, OpenFlow was standardized by the ONF, with few connections between the two
• Connection to university teaching
  – We started a couple of times to build a curriculum; this went nowhere
  – Some chances were missed, but that's life
The ALIEN HAL Architecture
Umar Toseef
[email protected]
EWSDN Workshop, Budapest, Hungary 1 September, 2014
Outline
• Motivation
• Hardware Abstraction Layer (HAL)
• HAL Architecture
• Cross-Hardware Platform Layer
  – Network Management
  – OpenFlow Endpoint
  – Virtualization
• Hardware Specific Layer
• HAL Interfaces
• Northbound Interfaces
• HAL Hardware Integration Models
• HAL Implementation
Motivation
• The OpenFlow protocol is an SDN enabler
• Implementation and support of OpenFlow is challenging due to:
  – Rapid evolution of the protocol
  – Diverse network hardware platforms
  – Legacy network equipment
• Limitations of the OpenFlow protocol:
  – Lack of support for NPUs and general CPU architectures
  – The processing framework supports only "stateless" operation
  – Virtual ports are out of scope
  – Developed for Ethernet-based networks
Hardware Abstraction Layer (HAL)
• A piece of software that enables:
  – OpenFlow support on non-OpenFlow-capable network elements
  – Abstraction of underlying hardware complexities
  – Integration of legacy network devices in an OpenFlow deployment
  – Easy upgrading to new OpenFlow protocol versions
• Compatible with a range of network platforms and technologies
  – E.g., optical devices, point-to-multipoint devices, programmable platforms
HAL Architecture
• Modular design approach
  – Extensibility
  – Module reusability
• Split of function and logic:
  1. Cross-Hardware Platform Layer
  2. Hardware-Specific Layer
• Abstraction of device-specific features from the control-plane logic
• Support for independent plugins
Cross-Hardware Platform Layer
• Hardware-agnostic software component
• Handles node abstraction, virtualization and management mechanisms
• Management Plane
  – Presents a unified abstraction of the physical platform
  – Supports plugins for management and configuration
• Control Plane
  – The OpenFlow Endpoint encapsulates the control-plane functionality
  – Connectivity between the OpenFlow Endpoint and the controllers
  – Manages the forwarding state down to the platform drivers
Cross-Hardware Platform Layer: Network Management
• Management interface to configure HAL-capable devices:
  – Connection setup to the controller
  – Support for multiple controllers
  – Connection interruption handling
  – Switch and controller certificate configuration for each controller
  – Queue parameter configuration
  – Switch port configuration
  – Capability discovery
  – Configuration of the switch datapath ID
Cross-Hardware Platform Layer: OpenFlow Endpoint
• Establishes a connection channel to the OpenFlow controller
• Abstracts the OpenFlow protocol version
• Implements the OpenFlow-specific session negotiations
• Hosts the OpenFlow pipeline, which implements the OpenFlow tables
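For illustration, a minimal sketch of the version handshake such an endpoint performs (based only on the public OpenFlow wire format; this is not the actual ROFL/xDPd code): each side sends an OFPT_HELLO advertising its highest supported version and, without version bitmaps, the two sides settle on the lower of the advertised values.

    /* Minimal sketch of OpenFlow HELLO-based version negotiation.
       Assumption: plain negotiation without version bitmaps.           */
    #include <stdint.h>

    #define OFPT_HELLO 0

    struct ofp_header {
        uint8_t  version;   /* 0x01 = OF1.0, 0x03 = OF1.2, 0x04 = OF1.3 */
        uint8_t  type;      /* OFPT_HELLO during the handshake          */
        uint16_t length;    /* total message length, network byte order */
        uint32_t xid;       /* transaction id, echoed in replies        */
    };

    static uint8_t negotiate_version(uint8_t ours, uint8_t peers)
    {
        /* Both sides settle on the lower of the two advertised versions. */
        return (ours < peers) ? ours : peers;
    }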
Packet Flow Through OpenFlow Pipeline
Cross-Hardware Platform Layer: Virtualization
• A software plugin module
• Allows multiple parallel experiments to execute on the same physical substrate
• Supports a distributed slicing architecture
• Protocol-version agnostic
• Minimizes the latency overhead
Hardware Specific Layer
• Handles the diversity of network platforms and their communication protocols
• Provides a unified northbound interface to the upper layer
• Requires a different implementation for each hardware platform
• Three key modules:
  1. Discovery
     • Collects the information required to initialize the CHPL
  2. Orchestration
     • Sends configuration and control commands to the device hardware components
  3. Translation
     • Translates OpenFlow-based data and action models to device-specific protocol syntax and semantics
Hardware-Specific Layer Interfaces
• Two common interfaces are exposed towards the Cross-Hardware Platform Layer:
  1. Abstract Forwarding API (AFA)
     – Provides interfaces for management, configuration and event notifications
     – Supports closed-box platforms
  2. Hardware Pipeline API
     – Allows execution of generic C/C++ code
     – Expedites the hardware driver implementation
     – A low-level interface to network packet operations, memory management, etc.
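As a rough, hypothetical sketch of what such a driver-facing interface can look like (the names and signatures below are illustrative only, not the actual AFA symbols), a hardware-specific driver would expose a set of C entry points that the Cross-Hardware Platform Layer calls for discovery, flow manipulation and packet I/O, plus a callback for asynchronous events:

    /* Illustrative, hypothetical AFA-style driver interface (not the real API). */
    #include <stdint.h>
    #include <stddef.h>

    typedef struct flow_entry flow_entry_t;   /* opaque match+actions from the CHPL */

    typedef struct hsl_driver_ops {
        /* Discovery: bring up the platform and report its ports. */
        int  (*init)(void);
        int  (*get_num_ports)(void);

        /* Orchestration/translation: map OpenFlow state onto the hardware. */
        int  (*flow_add)(uint8_t table_id, const flow_entry_t *entry);
        int  (*flow_remove)(uint8_t table_id, const flow_entry_t *entry);
        int  (*packet_out)(unsigned out_port, const uint8_t *frame, size_t len);

        /* Event notification back to the CHPL (e.g. packet-in, port status). */
        void (*register_packet_in_cb)(void (*cb)(unsigned in_port,
                                                 const uint8_t *frame, size_t len));
    } hsl_driver_ops_t;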
HAL Northbound Interfaces
1. OpenFlow Protocol Interface
   – Connects an ALIEN device to OpenFlow controllers
   – Provides an interface to configure and manage an ALIEN device
   – Agnostic to the OpenFlow protocol version
   – Transport over TCP/TLS
2. JSON-RPC Interface
   – Used between the Virtualization GW and the VA
   – NETCONF/OF-CONFIG
   – JSON-RPC 2.0
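For orientation, messages on this interface follow the standard JSON-RPC 2.0 request/response shape sketched below; the method and parameter names are hypothetical, not the actual ALIEN virtualization API:

    /* Sketch of a JSON-RPC 2.0 request as it could appear on the
       virtualization interface (method/params are illustrative only). */
    #include <stdio.h>

    int main(void)
    {
        const char *request =
            "{ \"jsonrpc\": \"2.0\","
            "  \"method\": \"example.addSlice\","      /* hypothetical method */
            "  \"params\": { \"dpid\": \"10:00:00:00:00:00:00:02\", \"vlan\": 700 },"
            "  \"id\": 1 }";

        /* A response echoes the same "id" and carries either a "result"
           member or an "error" object, as defined by JSON-RPC 2.0.       */
        printf("%s\n", request);
        return 0;
    }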
HAL Hardware Integration Models
1. Built-in Model
   – The whole HAL implementation runs inside the network device
   – Suitable for fully programmable devices
2. Proxy Model
   – The HAL implementation partially runs on a separate machine that is an integral part of the device
   – Suitable for closed-box platforms
3. Orchestrator Model
   – An extension of the proxy model
   – The HAL exposes a group of devices as a single network device
   – Suitable for platforms composed of tightly coupled elements, such as DOCSIS
HAL Implementation
An overview of Cavium OCTEON® network processor support for xDPd Marc Suñé
[email protected]
EWSDN Workshop, Budapest, Hungary 1 September, 2014
Outline
1. Introduction
2. Architecture
3. Fast path
4. Slow path
5. Future directions
1. Introduction: motivation
OCTEON network processors:
• designed for fast packet processing
• highly programmable devices (OF extensions!)
• hardware accelerators for common packet-processing tasks: I/O, checksum calculation, packet classification, encryption…
How to bring OpenFlow to these devices? Using xDPd's framework.
[Pictured hardware: 2x OCTEON CN5860 (EMERSON 9305), OCTEON CN5230 (DELL SDP)]
1. Introduction: xDPd
"The eXtensible Datapath daemon (xDPd) is a multi-platform, multi-OpenFlow-version, open-source datapath, built with a focus on performance and extensibility."
Architecture (see figure): a platform (hw) agnostic part, re-used across all platforms, and a platform-specific part, which each platform needs to provide.
1. Introduction: Cavium OCTEON® brief overview
• Optimized hardware architecture for packet processing
• Up to 64 cnMIPS cores
• Cores can run:
  – GNU/Linux OS
  – Simple Executive Standalone mode (SE-S) or bare metal
• A number of HW accelerators
• Programmable in C using the OCTEON SDK
(* from www.cavium.com)
1. Introduction: xdpd-octeon
Second/third generation (v0.4/v0.5) of xDPd's support for the OCTEON network processor family. Supports the entire OCTEON NP family.
Implemented and tested using:
• DELL PowerConnect 7024
• OCTEON CN5230 in a DELL SDP module with a 10G XAUI port
2. xdpd-octeon architecture I
The basic architecture:
• One of the cores* runs the GNU/Linux OS and xdpd's main process. It manages the overall device, runs the OpenFlow endpoint(s) and controls the I/O cores.
• The rest of the cores (N-1) run the OF fast path bare-metal (Simple Executive Standalone, SE-S mode).
• The mgmt core and the I/O cores exchange data via a shared memory chunk allocated at bootstrap time, in the so-called bootmem memory.
• The key component is ROFL-pipeline, a platform-agnostic OF pipeline written in C.
* (can be increased at will)
2. xdpd-octeon architecture II
[Figure: the mgmt core runs the platform (hw) agnostic part, re-used across all platforms, plus the OCTEON-specific driver as part of the GNU/Linux process; the N-1 I/O cores run the OCTEON-specific fast path.]
3. Fast path: Stand Alone (SE-S)
All the fast-path cores run the same routine (pseudo-code):

    //Abstract rofl-pipeline packet
    datapacket_t pkt;
    //Work-queue entry handed over by the hardware scheduler (POW)
    cvmx_wqe_t *buf;

    while(keep_working == true){
        //Get a packet from the POW
        buf = cvmx_pow_work_request_sync(WAIT);

        //Classify the packet and fill the abstract packet structure
        classify(&pkt, buf);

        //Instruct rofl-pipeline to process the pkt through the OF pipeline
        of_process_packet_pipeline(pkt);
    }
4. Slow path: OCTEON driver
The OCTEON xDPd driver is in charge of:
• Background tasks: maintaining ROFL-pipeline timers (entry expirations)
• Handling PKT_IN: using tag switching from the SE-S cores
• PKT_OUT / FLOWMOD+BUFFER_ID events coming from the OF controller
• Flow table and group table manipulation: add flowmod, add groupmod, table statistics...
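A rough sketch of how such a driver's background thread might be structured (the two helper functions are placeholders standing in for the corresponding ROFL-pipeline timer handling and the packet-in hand-over path; they are not real API names):

    /* Illustrative background-task loop of the slow path (placeholder helpers). */
    #include <stdbool.h>
    #include <unistd.h>

    static volatile bool keep_running = true;

    /* Placeholder: expire flow entries whose idle/hard timeouts have elapsed. */
    static void process_pipeline_expirations(void) { /* ... */ }

    /* Placeholder: drain packet-in events handed over by the SE-S cores
       (via tag switching) and forward them to the OpenFlow endpoint.       */
    static void drain_packet_in_events(void) { /* ... */ }

    void slow_path_background_loop(void)
    {
        while (keep_running) {
            process_pipeline_expirations();
            drain_packet_in_events();
            usleep(1000);   /* run the housekeeping roughly every millisecond */
        }
    }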
5. Future directions & questions
• Use HW checksum offload • Increase the usage of HW flow classification routines • Exploit the potential of input queues and flow classification. • Increase performance for encap/decap tasks • OF extensions for encrypted tunnels using HW acceleration units
5. Future directions & questions
Any questions? ALIEN project: http://www.fp7-alien.eu/ xDPd: http://www.xdpd.org, https://github.com/bisdn/xdpd ROFL: https://github.com/bisdn/rofl-core
Mailing lists: xdpd (at) bisdn.de, rofl (at) bisdn.de
Marc Suñé, Andreas Köpsel, Victor Alvarez, Tobias Jungel
The ALIEN HAL Implementation on EZappliance Łukasz Ogrodowczyk
[email protected]
EWSDN Workshop, Budapest, Hungary 1 September, 2014
What is EZappliance?
• Compact hardware platform based on the EZchip NP-3 network processor
• Evaluation system for easy development and deployment of new, efficient network applications
• Produced by EZchip Technologies from Israel (http://www.ezchip.com)
EZappliance – physical box overview
• EZappliance = NP-3 network processor + control CPU embedded system (800 MHz CPU, 512 MB RAM, 256 MB FLASH)
  → a complete data plane and control plane solution
• [Figure: 30 Gbps data-plane interfaces and a 2 Gbps link shown; only the 1G ports are used]
HAL architecture adaptation for EZappliance
HAL implementation for EZappliance based on xDPd/ROFL framework
HAL implementation for EZappliance – software design plan
[Figure: common part for all ALIEN devices (based on xDPd and the ROFL framework), middleware with an API to the NP-3, and microcode in the NP-3 processor]
HAL software deployment
• xDPd/ROFL-based part
  – Language: C++
  – License: Mozilla Public License 2.0
  – HAL interface used: AFA
• Middleware with API to NP-3 (EZdriver library)
  – Language: C/C++
  – License: proprietary (NDA)
• Microcode in the NP-3 processor
  – Language: EZchip assembler
  – License: proprietary (NDA)
OpenFlow pipeline over TOPs in NP-3
[Figure: mapping of the OpenFlow pipeline onto the NP-3 Task Optimized Processors. TOP parse performs frame parsing and builds the search keys; TOP search I/II look for a matching flow entry in the OpenFlow table(s) held in search memory; TOP resolve chooses the actions (e.g. destination port) and handles go-to-next-OF-table; TOP modify applies the frame modifications and updates the OF counters in statistics memory. Frames reside in frame memory (FMEM); the traffic manager (TM) queues (statically configured queue mapping) and a loopback path carry frames between passes, while packet-in and packet-out travel via the control path.]
OpenFlow pipeline in NP-3 – development 1/2
• EZchip Microcode Development Environment (MDE)
  – GUI for microcode building, simulation and debugging
OpenFlow pipeline in NP-3 – development 2/2
• Specialized assembler language for NP-3 programming
• Frames are represented as a series of bytes in frame memory and accessed through pointers
• The NP-3 programmer is responsible for frame handling in each Task Optimized Processor
• The OpenFlow pipeline code for NP-3 is publicly available only as a binary file (proprietary code, NDA)
OpenFlow table realization – search structures in NP-3

  Structure type        | Key size [B] | Result size [B]
  Direct Access Tables  | 1-4          | 8, 16, 32, 64
  Hash Tables           | 1-48         | 8, 16, 32, 64, 96
  Trees                 | 1-16, 38     | 1, 3, 8, 16, 32, 64, 96
  Linked: Hash + Tree   | 1-38         | 8, 16, 32, 64, 96
  Linked: Tree + Hash   | 4            | 8, 16, 32, 64

• The OpenFlow table is implemented using a hash table
  – Resides in the expensive 256 KB TCAM (Ternary Content Addressable Memory)
• Nearly deterministic access (12 clock cycles)
• OpenFlow 1.0 requires a 32-byte key size
• ~3500 OpenFlow 1.0 entries in the basic setup
• Limitations:
  – OpenFlow 1.2 requires at least 57 bytes (!!!)
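To make the key-size arithmetic concrete, the sketch below packs the OpenFlow 1.0 match fields into a single hash key; the layout is illustrative (it is not the ALIEN microcode's), and the packed sum of 33 bytes matches the roughly 32-byte figure above, comfortably within the 48-byte hash-key limit, whereas the additional OF 1.2 match fields push the key past it.

    /* Illustrative packing of the OF 1.0 match fields into one hash key.
       Layout is an assumption for illustration, not the NP-3 microcode's. */
    #include <stdint.h>

    #pragma pack(push, 1)
    struct of10_match_key {
        uint16_t in_port;        /*  2 bytes */
        uint8_t  dl_src[6];      /*  6 bytes */
        uint8_t  dl_dst[6];      /*  6 bytes */
        uint16_t dl_vlan;        /*  2 bytes */
        uint8_t  dl_vlan_pcp;    /*  1 byte  */
        uint16_t dl_type;        /*  2 bytes */
        uint8_t  nw_tos;         /*  1 byte  */
        uint8_t  nw_proto;       /*  1 byte  */
        uint32_t nw_src;         /*  4 bytes */
        uint32_t nw_dst;         /*  4 bytes */
        uint16_t tp_src;         /*  2 bytes */
        uint16_t tp_dst;         /*  2 bytes */
    };
    #pragma pack(pop)

    _Static_assert(sizeof(struct of10_match_key) == 33,
                   "OF 1.0 match fields sum to 33 bytes when packed");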
EZappliance – OpenFlow functionalities
• OpenFlow v1.0 implemented with one flow table
• OpenFlow functionality tested using the OFtest tool
• 3x EZappliances used during the live demonstration "Streaming on demand in OF networks" (@FIA 2014 in Athens, @TNC 2014 in Dublin)
• No control-plane benchmarks have been done yet
Software repositories publicly available on GitHub
• https://github.com/fp7-alien/xDPd-for-EZappliance
  – Manuals, instructions, description of the components
ALIEN HAL for EZappliance – list of references
• Deliverables available on http://www.fp7-alien.eu/:
  – D2.2: Specification of Hardware Abstraction Layer
  – D2.3: Report on Implementation of the Common Part of an OpenFlow Datapath Element and the Extended FlowVisor
  – D3.1: Hardware platforms and switching constraints
  – D3.2: Specification of hardware specific parts
  – D3.3: Final Prototypes of Hardware Specific Parts
• Publications, articles:
  – D. Parniewicz, R. Doriguzzi Corin, et al., "Design and implementation of an OpenFlow hardware abstraction layer", Proc. SIGCOMM DCC 2014, Chicago, USA, August 2014.
  – L. Ogrodowczyk, B. Belter, et al., "Hardware abstraction layer for non-OpenFlow capable devices", Proc. TERENA Networking Conference, ISBN 978-90-77559-24-6, Dublin, Ireland, May 2014.
  – B. Belter, A. Binczewski, et al., "Hardware Abstraction Layer on EZchip NP3", Proc. IEICE ICTF 2014, Poznan, Poland, May 2014.
The ALIEN HAL Implementation on DOCSIS Jon Matias
[email protected]
EWSDN Workshop, Budapest, Hungary 1 September, 2014
Outline
• DOCSIS Platform
• HAL architecture for DOCSIS (proxy based)
• HAL implementation for DOCSIS
• Demonstrations
• Software repository (GitHub)
DOCSIS Platform
• Physical overview
  – Data Over Cable Service Interface Specification (DOCSIS) is a family of specifications developed by Cable Television Laboratories (CableLabs)
  – Three main elements in a DOCSIS system:
    • Cable Modem Termination System (CMTS)
    • Hybrid fiber-coaxial (HFC) infrastructure
    • Cable modem (CM)
  – DOCSIS infrastructure deployed at the UPV/EHU laboratory:
    • One Cisco uBR7246VXR Universal Broadband Router (CMTS)
    • Twelve Cisco EPC3825 cable modems (CM)
DOCSIS Platform abstracted as a single OpenFlow device
• The set of elements is abstracted as a single OF device
  – ALien Hardware INtegration Proxy
HAL architecture for DOCSIS Platform (Proxy based)
HAL implementation for DOCSIS Platform based on xDPd/ROFL framework
HAL implementation for DOCSIS Platform Software design plan
Demonstration of the HAL implementation for the DOCSIS platform
• Future Internet Assembly (FIA 2014) in Athens (March 2014)
  – HAL implementation for DOCSIS
• TERENA Networking Conference (TNC 2014) in Dublin (May 2014)
  – DOCSIS ALIEN integration in OFELIA
Software repositories publicly available on GitHub
• https://github.com/fp7-alien/alien-DOCSIS
ALIEN HAL for DOCSIS Platform – list of references
• Deliverables available on http://www.fp7-alien.eu/:
  – D2.2: Specification of Hardware Abstraction Layer
  – D2.3: Report on Implementation of the Common Part of an OpenFlow Datapath Element and the Extended FlowVisor
  – D3.1: Hardware platforms and switching constraints
  – D3.2: Specification of hardware specific parts
  – D3.3: Final Prototypes of Hardware Specific Parts
• Publications, articles:
  – D. Parniewicz, R. Doriguzzi Corin, et al., "Design and implementation of an OpenFlow hardware abstraction layer", Proc. SIGCOMM DCC 2014, Chicago, USA, August 2014.
  – L. Ogrodowczyk, B. Belter, et al., "Hardware abstraction layer for non-OpenFlow capable devices", Proc. TERENA Networking Conference, ISBN 978-90-77559-24-6, Dublin, Ireland, May 2014.
  – V. Fuentes, J. Matias, et al., "Integrating complex legacy systems under OpenFlow control: The DOCSIS use case", EWSDN 2014, Budapest, Hungary, September 2014.
Designing and orchestrating experiments on ALIEN devices Roberto Doriguzzi Corin
[email protected]
EWSDN Workshop, Budapest, Hungary 1 September, 2014
Outline
• Introduction
• Experiment design and orchestration in OFELIA
• The time-based approach
• The distributed slicing approach
• Conclusions
Introduction
During the previous sessions we have learnt that:
• The HAL enables ALIEN non-OpenFlow devices to "speak" OpenFlow
• With the HAL, ALIEN devices can support different versions of the OpenFlow protocol:
  – OpenFlow v1.0, v1.2 and v1.3.2
  – OpenFlow v1.0 with optical extensions
• With the HAL, both the data plane and the control plane can be extended with new fields
An example of OpenFlow facility: the OFELIA testbed “The OFELIA project (www.fp7-ofelia.eu) offers a Pan-European testbed to the research community for conducting experiments in an OpenFlow-enabled wide-area network.”
The OFELIA Control Framework (OCF)
• Expedient connects to different aggregate managers and provides a GUI
• Opt-In/FOAM: aggregate managers for OF resources
• FlowVisor slices the flowspace (the VLAN field is used to achieve isolation between experiments)
We can leverage the OFELIA experience
Configuration and management of experiments on ALIEN HW also need:
• A user interface where users can register and configure their experiments
• Software that allows the management of experiments (create, approve, reject, delete, list, etc.)
• A configurable resource manager that connects the ALIEN resources to the users' OpenFlow controllers:
  – The resource manager must support OpenFlow versions beyond v1.0: let experimenters use all versions of the protocol, or even customize the protocol (protocol agnostic?)
Two different approaches
Main goal: try to replace FlowVisor as the resource manager with something suitable for our hardware
1. Do not inspect the protocol: just remove the slicing process and allow only one experiment at a time
2. Enhance the HAL and push the slicing process down to the datapath level (directly on the switches)
Finally, we need to build/adapt orchestration software around the chosen approach.
The Time-Based approach
The OFELIA Control Framework
TB Plugin: Time-Based plugin for Expedient
TBAM: Time-Based Aggregate Manager
OFGW: OpenFlow Gateway
The Time-Based Control Framework
Time-Based approach considerations
PROS:
(i) With this approach the control protocol is not inspected; therefore it supports any SDN-enabled device (even non-OpenFlow).
(ii) The user can access the devices during the experiment for monitoring or configuration purposes.
CONS:
(i) No sharing mechanism, therefore only one experiment at a time is allowed.
(ii) The OFGW represents a single point of failure in the architecture: a failure of the OFGW would bring down the running experiment.
The distributed slicing approach
The OFELIA Control Framework
The Control Framework adapted to the distributed slicing
VAO: Virtualization Agent Orchestrator
VA: Virtualization Agent (plug-in for the HAL)
Distributed slicing considerations
PROS:
(i) Multiple concurrent experiments are allowed at the same time.
(ii) Since the slicing process is performed at the datapath level, single points of failure are avoided.
(iii) Easier integration within the current OCF.
CONS:
(i) The distributed slicing mechanism depends on the HAL implementation, for both the hardware platforms and the versions of the OpenFlow protocol supported.
Conclusions
• Managing experiments for OpenFlow > 1.0 is not straightforward
• The current tools (FlowVisor) and orchestration software (OCF) are not suitable
• We followed two different approaches:
  – The time-based approach (pragmatic, more flexible on the control channel, but with no sharing capabilities)
  – The distributed slicing approach (allows simultaneous experiments, but is tied to the HAL implementation)
AAA Framework in ALIEN
Umar Toseef
[email protected]
EWSDN Workshop, Budapest, Hungary 1 September, 2014
Outline
• ALIEN Control Framework
• ClearingHouse services
• Authentication and authorization
• User registration and interactions
• Member roles
• Privilege delegation
ALIEN Control Framework
TBAM: Time-Based Aggregate Manager
LDAP: Lightweight Directory Access Protocol
GAPI: GENI AM API
FAPI: Federation API
VPN: Virtual Private Network
ClearingHouse services
• Member Authority
• Slice Authority
• Service Registry
• Project Service
• Logging Service
Authentication and Authorization
• Authentication using certificates
  – A certificate:
    • Asserts: public key ↔ subject
    • Is issued & digitally signed by a CA
    • Has a limited validity period
• Authorization using credentials
  – Credentials:
    • Provide the owner with permissions on a target object
    • Are issued & digitally signed by a CA
    • Have a limited validity period
    • Can be delegated
User Registration Workflow
Authentication
User Interaction with TBAM using Expedient
ClearingHouse – member roles
• Context: Slice / Project
• Roles: Lead, Admin, Member, Auditor
Privilege delegation
• Delegation
• Speaks-for
• Speaks-as
Experimentation Experience And Results
Eduardo Jacob
[email protected]
EWSDN Workshop, Budapest, Hungary 1 September, 2014
Outline
• Motivation
• Experimentation planning over OFELIA
• Integration of elements in OFELIA
• Integrated experiment
• Conclusions
Motivation
• "In-lab" testing of developments is sometimes not sufficient to assess their functionality. It is not only about passing OFTest successfully.
• Stock applications should run unmodified over standard controllers managing HAL-equipped devices.
• Hidden bugs (or misunderstood OF features) are detected and corrected.
• Integration in OFELIA is of dual interest:
  – OFELIA is a well-known environment to test the new developments.
  – It lays the basis for making new hardware available to experimenters under an OF control paradigm.
• A two-phase experimental testing was decided:
  – OFELIA integration and individual functional testing for every ALIEN device.
  – An integrated experiment where different OFELIA islands and equipment are chosen.
• It is difficult to describe the full set of experiments run during the project:
  – We will describe the EZappliance and DOCSIS OFELIA integration and individual testing.
  – We will describe one integrated experiment: the CONET-based experiment, which involves resources from different islands and the DOCSIS and EZappliance hardware.
Experimentation planning in OFELIA
• Integration in OFELIA is by itself a challenge
  – A very different task depending on whether the partner was originally involved in OFELIA or not:
    • OFELIA partners (EICT, UNIVBRIS, CREATE-NET)
    • Others (PSNC, PUT, UCL, DELL-FORCE, UPV/EHU)
  – The OFELIA project is finished, but the infrastructure is maintained on a best-effort basis.
  – Not every implementation is able to present itself as a full Ethernet OpenFlow switch (Octeon SDP/Dell, L0 switch).
Experimentation planning in OFELIA (II)
• Several possibilities to integrate ALIEN resources in OFELIA (from easy to complex):
  – The local resources are exposed by a remote island (PUT, UCL, DELL/FORCE); the equipment uses tagged frames. (Demonstrated at FIA Athens 2014.)
  – Create an island with OCF, connected to another island with a shared data plane (common VLAN); an ALIEN slice in each island (PSNC, UPV/EHU). (Demonstrated at TNC 2014.)
  – Create an island with OCF, with credentials to manage remote resources with local aggregate managers; a common slice in all involved islands (PSNC, UPV/EHU).
• Sharing the data plane involves setting the same VLAN for the interconnection.
• Normally the control plane is carried over a direct connection from the island to the OFELIA central hub at iMinds.
  – For increased availability, a direct control-plane connection between islands has been used (UPV/EHU).
Experimentation planning in OFELIA (II) – OFELIA for ALIEN experiments
[Figure: map of the islands involved (UCL, PUT, PSNC, UNIVBRIS, I2Cat, UPV/EHU, DELL/FORCE) connected to the OFELIA control-plane hub (OFC) at iMinds via VPN-based control connections.]
Integration of elements in OFELIA
• After connectivity to OFELIA is established, the integration is achieved in a two-step process (just to reduce the number of things that could be failing, escaping from the under-determined system of equations):
  – 1st phase: individual integration in OFELIA and functional testing.
    • To make sure that the individual setup is compatible with OFELIA at the functional level.
  – 2nd phase: project-wide experiment.
    • The developments are tested in OFELIA with a common application.
Integration of elements in OFELIA
• There are several experiments that involve either switches or VMs (controller, application servers or clients) from different islands.

  Platform (OFELIA integration) | Type   | Application chosen for functional test | Islands involved
  NetFPGA (PUT)                 | Type 1 | Firewall                               | PUT, PSNC
  EZappliance (PSNC)            | Type 2 | Video streaming                        | PSNC, UNIVBRIS
  Octeon SDP (BISDN / FORCE)    | Type 1 | Ethernet packet modification           | DELL, EHU
  L0 Switch (UNIVBRIS)          | Type 3 | Lightpath establishment                | UNIVBRIS
  DOCSIS (UPV/EHU)              | Type 3 | Learning switch                        | EHU, I2CAT
  GEPON (UCL)                   | Type 1 | Performance testing                    | UCL, UNIVBRIS
Detail of an experiment: PSNC – EZappliance – Video on Demand
• Resources involved:
  – PSNC island:
    • 1 EZappliance device exposed as an OF switch
    • 1 server machine for deploying virtual machines (i.e. a POX OpenFlow controller)
  – UNIVBRIS island:
    • 2 NEC OpenFlow switches
    • 1 server machine for deploying virtual machines (i.e. web server and video streaming server)
PSNC – EZappliance Video on Demand – workflow
• Step 0 (proactive web access)
  – The OF controller installs flow entries for the web-access transport service (allows any client connected to EZappliance nodes to access the web server).
  – The flow entries are statically defined in the OpenFlow controller application code (predefined data-plane topology knowledge).
  – The web-access transport service is bidirectional and must support ARP, ICMP and TCP traffic.
• Step 1 (interacting with the web server)
  – The user can view and navigate the web server page(s).
• Step 2 (user requests a movie stream)
  – 2.0) The user clicks the 'Play' button within a video page.
  – 2.1) The video player generates an RTSP request for the video streaming server (destination TCP port 554).
  – 2.2) The RTSP request is forwarded to the OF controller in a packet-in event.
  – 2.3) The OF controller ignores the RTSP request; the streaming client cannot connect.
• Step 3 (user requests a network configuration)
  – 3.0) The user clicks the 'Configure network' button and the HTML page requests a TCP session with 10.0.0.200 (a non-existing IP address used as "signalling" to the OF controller that it must take an action).
  – 3.1) The client PC sends an ARP request for 10.0.0.200.
  – 3.2) The ARP request is forwarded to the OF controller by a packet-in event.
  – 3.3) The OF controller installs flow entries enabling RTSP sessions between the client and the video streaming server.
• Step 4 (user requests a movie stream)
  – 4.0) The user clicks the 'Play' button.
  – 4.1) The video player generates an RTSP request for the video streaming server (destination TCP port 554).
  – 4.2) The RTSP request is forwarded by the network devices to the video streaming server.
  – 4.3) The video streaming server sends an RTSP response to the client.
  – 4.4) The RTSP response is forwarded to the client and the RTSP session is established.
• Step 5 (video stream is sent)
  – 5.0) Based on the RTSP session, the video streaming server starts sending RTP messages carrying the video content.
  – 5.1) The first network node generates a packet-in with that RTP packet.
  – 5.2) The OF controller recognizes the destination IPv4 address and destination UDP port, and sends the proper flow entries in a reactive mode to the network devices.
  – 5.3) The video streaming packets are sent through the network to the client and the video is displayed.
• Step 6 (user stops the video)
  – 6.0) The user clicks the 'Stop' button.
  – 6.1) The video player generates an RTSP request for the video streaming server (destination TCP port 554).
  – 6.2) The RTSP request is forwarded by the network devices to the video streaming server.
  – 6.3) The video streaming server sends an RTSP response to the client.
  – 6.4) The RTSP response is forwarded to the client and the RTSP session is ended.
• Step 7 (user clears the network)
  – 7.0) The user clicks the 'Deconfigure network' button and the HTML page requests a TCP session with 10.0.0.201 (a non-existing IP address used as "signalling" to the OF controller that it must take an action).
  – 7.1) The client PC sends an ARP request for 10.0.0.201.
  – 7.2) The ARP request is forwarded to the OF controller by a packet-in event.
  – 7.3) The OF controller uninstalls the flow entries for the RTSP and RTP packets sent in the data plane between the client and the video streaming server.
PSNC – EZappliance Video on Demand – conclusions
• Successful testing of:
  – OpenFlow 1.0
  – Flow entry match on:
    • Ethernet type (recognize IPv4 and ARP)
    • Source IPv4 address (also carried within ARP when OF 1.0 control is used)
    • Destination IPv4 address (also carried within ARP when OF 1.0 control is used)
    • IP protocol (recognize TCP, UDP and ICMP)
    • Source TCP port
    • Destination TCP port
    • Destination UDP port
  – Supported actions:
    • Forward to a port
    • Drop
  – Flow entry add
  – Flow entry remove
  – Packet-in
  – Packet-out
Detail of an experiment: UPV/EHU – DOCSIS – Learning Switch
• Resources involved:
  – EHU island:
    • 1 DOCSIS ALIEN device exposed as an OF1.0 switch
    • 1 NEC IP8800 OF1.0 switch
    • 1 VM end host (data plane) deployed in the server behind the DOCSIS platform
  – I2Cat island:
    • 2 OF switches
    • 1 VM end host (data plane)
    • 1 VM OpenFlow POX controller (control plane)
• UPV/EHU is linked to I2Cat via a Layer-2 1 Gbps link provided by the RedIRIS experimental network using Q-in-Q.
UPV/EHU – DOCSIS Learning Switch – topology
[Figure: data-plane topology on VLAN 700. EHU side: the DOCSIS ALIEN switch (DPID 10:00:00:00:00:00:00:02) and the NEC switch (DPID 10:00:00:00:00:00:00:01) connect the client VM Bilbao (10.216.64.16/24, data-plane address 192.168.0.2/16). I2Cat side: OF switches 00:10:00:00:00:00:00:02 and 00:10:00:00:00:00:00:04 reach the server VM bcn on Verdaguer (10.216.12.41/24, data-plane address 192.168.64.1/16) and the POX controller (10.216.12.121/24). FlowVisor instances at EHU and I2Cat carry the OF control traffic over the OFELIA control plane.]
UPV/EHU – DOCSIS Learning Switch – workflow
• Step 0 (data-plane application starts)
  – The end host at EHU launches the application (E2E data plane): ping.
  – The end host sends an ARP request to obtain the destination MAC address.
• Step 1 (ARP request -> flooding)
  – The ARP request packet generates a packet-in message (OF) to the controller.
  – The learning switch learns the SRC MAC address (source end host).
  – The controller sends a packet-out to flood the ARP request.
  – The ARP request floods the data plane.
• Step 2 (ARP reply -> flowmod)
  – The ARP request packet reaches the target host at i2cat.
  – The target end host generates the ARP response packet.
  – The ARP response packet generates a packet-in message (OF) to the controller.
  – The learning switch learns the SRC MAC address (destination end host).
  – The controller sends a flowmod to install an exact flow entry for the ARP reply packet.
  – The ARP reply from the buffer is sent to the appropriate output port.
• Step 3 (ICMP echo request -> flowmod)
  – The ARP reply packet reaches the source host at EHU.
  – The end host at EHU sends the ICMP echo request to reach the end host at i2cat.
  – The ICMP echo request generates a packet-in message (OF) to the controller.
  – The controller sends a flowmod to install an exact flow entry for ICMP (EHU->i2cat).
  – The ICMP echo request from the buffer is sent to the appropriate output port.
• Step 4 (ICMP echo reply -> flowmod)
  – The ICMP echo request reaches the destination host at i2cat.
  – The end host at i2cat sends the ICMP echo reply to the end host at EHU.
  – The ICMP echo reply generates a packet-in message (OF) to the controller.
  – The controller sends a flowmod to install an exact flow entry for ICMP (i2cat->EHU).
  – The ICMP echo reply from the buffer is sent to the appropriate output port.
• Step 5 (ping at the terminal)
  – The ICMP echo reply reaches the source host at EHU.
  – The ping data is displayed in the terminal.
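For reference, the MAC-learning decision taken in Steps 1-4 can be summarised by the controller-agnostic sketch below (the experiment used a stock POX learning-switch application; this C fragment, including the table size and the flood constant, is purely illustrative):

    /* Illustrative learning-switch decision logic (not the POX code). */
    #include <stdint.h>
    #include <string.h>

    #define MAX_HOSTS  256
    #define PORT_FLOOD 0xffff            /* pseudo-port: flood out of all ports */

    struct mac_entry { uint8_t mac[6]; uint16_t port; int valid; };
    static struct mac_entry table[MAX_HOSTS];

    static void learn(const uint8_t src[6], uint16_t in_port)
    {
        for (int i = 0; i < MAX_HOSTS; i++)
            if (table[i].valid && memcmp(table[i].mac, src, 6) == 0) {
                table[i].port = in_port;           /* refresh known source MAC */
                return;
            }
        for (int i = 0; i < MAX_HOSTS; i++)
            if (!table[i].valid) {                 /* learn a new source MAC   */
                memcpy(table[i].mac, src, 6);
                table[i].port = in_port;
                table[i].valid = 1;
                return;
            }
    }

    /* On every packet-in: learn the source, then flood if the destination is
       unknown (Step 1) or install an exact-match flow entry towards the
       learned output port (Steps 2-4).                                       */
    static uint16_t handle_packet_in(const uint8_t src[6], const uint8_t dst[6],
                                     uint16_t in_port)
    {
        learn(src, in_port);
        for (int i = 0; i < MAX_HOSTS; i++)
            if (table[i].valid && memcmp(table[i].mac, dst, 6) == 0)
                return table[i].port;              /* -> flowmod + forward     */
        return PORT_FLOOD;                         /* -> packet-out with flood */
    }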
Learning switch experiment UPV/EHU – DOCSIS – Ping [screenshots]
UPV/EHU – DOCSIS Learning Switch – conclusions
• Successful testing of:
  – OpenFlow 1.0
  – Flow entry match on:
    • In_port
    • Ethernet type (recognize IPv4 and ARP)
    • Source IPv4 address
    • Destination IPv4 address
    • IP protocol (TCP)
    • Source TCP port
    • Destination TCP port
  – Supported actions:
    • Forward to a port / flood
    • Drop
  – Flow entry add
  – Flow entry remove
  – Packet-in
  – Packet-out
Integrated experiment
• Application chosen: an OpenFlow-aware CCN solution
  – The CONET project, developed by UNIROMA in OFELIA
  – See:
    • [CONET1] L. Veltri, G. Morabito, S. Salsano, N. Blefari-Melazzi and A. Detti, "Supporting Information-Centric Functionality in Software Defined Networks", SDN'12 Workshop on Software Defined Networks (ICC 2012), Ottawa, Canada, 2012.
    • [CONET2] N. Blefari-Melazzi, A. Detti, G. Morabito, S. Salsano, and L. Veltri, "Information Centric Networking over SDN and OpenFlow: Architectural Aspects and Experiments on the OFELIA Testbed", http://arxiv.org/abs/1301.5933, January 2013.
[Figure: CONET (version 1), from [CONET1]]
Introduction to CCN – CONET support in OpenFlow
[Figure: CONET packet formats.
 – Clean-slate CONET packet: CONET header (NID, CSN), payload header, CONET payload.
 – Evolutionary IPv4 CONET packet: IP header with an IP CONET option carrying NID and CSN, payload header, CONET payload.
 – IPv4 CONET packet for an OF subnet: IP header, IP CONET option (NID, CSN), plus a TAG mapped into the src_port/dst_port of a UDP-like datagram, payload header, CONET payload.]
• Each chunk is identified by an ICN-ID and a CSN
• The ICN-ID and CSN are carried in the "IP options" field, but OpenFlow cannot read IP options
• Therefore, (ICN-ID, CSN) is mapped to a tag carried as the (src, dst) ports of a UDP-like datagram
[Salsano, E.; Personal Communication]
Introduction to CCN – sequence of operations
[Figure: message sequence among content client/BN, OpenFlow switch/IN, OpenFlow controller, cache server and content server/BN. The first Interest C.1 is forwarded to the content server, which returns Data C.1; the OpenFlow controller installs a FlowMod so that the following Data chunks (C.2 … C.n) also reach the cache server, which stores content C. When the content is requested again, the Interests are answered with the cached Data directly from the cache server.] [CONET2]
Introduction to CCN – testbed at OFELIA
[Figure: CONET testbed at OFELIA (VLAN 16). A Floodlight OpenFlow controller (10.216.12.88) drives two OpenFlow switches via JSON. Attached end systems: a content client/BN (192.168.1.23 CONET-C, 192.168.1.17 plain IP), a cache server, and a content server/BN (10.216.12.86; 192.168.1.8 CONET-S, 192.168.1.9 plain IP). Management addresses 10.216.12.96 and 10.216.12.84 appear on the client and cache-server side; the VMs are hosted on the March and Rodoreda VM servers (10.216.12.83) and monitored by an Icinga management server.] [CONET2]
CONET experiment over the ALIEN slice
• An OFELIA slice, the ALIEN slice (VLAN 700), is created
  – Involving resources from several OFELIA islands: EHU, i2cat, iMinds, UNIVBRIS and PSNC.
  – The ALIEN slice includes computational resources (e.g. end nodes, CONET nodes and the OpenFlow controller), ALIEN hardware platforms, and OpenFlow devices from the pool of resources provided by OFELIA.
  – The CONET scenario includes a content client, a content server and a cache server. These CCN end nodes are deployed in different OFELIA islands to demonstrate the proper integration of the ALIEN islands (currently part of OFELIA) in the experiment.
CONET experiment over the ALIEN slice – resources used
• EHU island:
  – 1 DOCSIS ALIEN device exposed as an OF1.0 switch:
    • DPID 10:00:00:00:00:00:00:02, VLAN 700, ports 12 and 21.
  – 1 NEC IP8800 OF1.0 switch:
    • DPID 10:00:00:00:00:00:00:01, VLAN 700, ports 5, 6 and 19. Port 19 is connected to the i2cat island.
  – 2 VMs deployed from 2 different computation resources:
    • 2 CONET content clients to request CCN content.
• I2cat island:
  – 4 OF switches:
    • DPID 00:10:00:00:00:00:00:01, VLAN 700, ports 2, 3 and 11. Port 11 is connected to the iMinds island.
    • DPID 00:10:00:00:00:00:00:02, VLAN 700, ports 1, 4, 12 and 13. Port 13 is connected to the EHU island.
    • DPID 00:10:00:00:00:00:00:03, VLAN 700, ports 1 and 9. Port 9 is connected to the UNIVBRIS island.
    • DPID 00:10:00:00:00:00:00:04, VLAN 700, ports 2 and 12.
  – 2 VMs deployed from 2 different computation resources:
    • 1 CONET cache server to cache CCN content.
    • 1 CONET server (server 1) to provide CCN content.
  – 1 VM OpenFlow CONET controller (10.216.12.121).
• iMinds island:
  – 1 OF switch: DPID 01:00:00:00:00:00:00:FF, VLAN 700, ports 3 and 5. Port 3 is connected to the i2cat island; port 5 is connected to the UNIVBRIS island.
• UNIVBRIS island:
  – 3 OF switches:
    • DPID 05:00:00:00:00:00:00:02, VLAN 700, ports 5, 9 and 17. Port 9 is connected to the PSNC island.
    • DPID 05:00:00:00:00:00:00:03, VLAN 700, ports 7 and 16. Port 7 is connected to the i2cat island.
    • DPID 05:00:00:00:00:00:00:04, VLAN 700, ports 6 and 9. Port 9 is connected to the iMinds island.
• PSNC island:
  – 1 OF switch: DPID 11:00:00:00:00:00:00:01, VLAN 700, ports 11 and 17. Port 11 is connected to the UNIVBRIS island.
  – 1 VM deployed from computation resources:
    • 1 CONET server (server 2) to provide CCN content.
CONET experiment over the ALIEN slice – resources used (topology)
[Figure: ALIEN slice topology on VLAN 700 across PSNC, UNIVBRIS, iMinds, i2cat and EHU, with the ALIEN EZappliance switch (PSNC), the ALIEN DOCSIS switch (EHU) and the OFELIA OF switches listed above, each island behind its FlowVisor (PSNC, EHU, I2CAT, iMinds, Bristol). End systems: CCN server 2 at PSNC (10.216.65.11/24, 192.168.64.2/16), CCN server at i2cat (10.216.12.41/24), CCN cache server at i2cat (10.216.12.37/24, 192.168.64.1/16), CCN cache server 2 at UNIVBRIS (10.216.22.74/24, 192.168.64.100/16), CCN client 2 at EHU (10.216.64.16/24, 192.168.0.2/16), the CCN controller (10.216.12.121) and the CCN MRTG monitor (10.216.12.31).]
CONET experiment over the ALIEN slice – test workflow
The test performed distinguishes two phases. In the first phase, the client requests a content, which is provided by the server; while the content travels from the server to the client, the cache server caches the same content. In the second phase, the client requests the same content again and, this time, the cache server provides it.
• Step 0 (CONET client and server are configured)
  – The CONET client and the CONET server must be properly configured.
  – The CONET client must configure the server associated with each content.
• Step 1 (CONET client requests some content from the CONET server)
  – The CONET client requests a content (i.e. executes a command at the terminal).
  – The client's configuration resolves which CONET server is associated with that content.
  – The CONET client sends the request to the appropriate server. Each content chunk is requested independently and generates a different request packet.
• Step 2 (CONET server provides the content to the CONET client)
  – The CONET server receives the client's requests (one per chunk) and sends the content chunks back to the CONET client.
  – The whole content is sent in chunks from the server to the client.
• Step 3 (CONET cache server caches the content)
  – While the content chunks pass through the DPID with an associated CONET cache server, the content chunks are duplicated and the copy is sent to the cache server.
  – Actually, step 2 and step 3 are simultaneous.
• Step 4 (CONET client requests the same content again)
  – Once the complete content is received at the client, the CONET client requests the same content again (i.e. executes the same command at the terminal).
  – Based on the client's configuration, the CONET server is resolved.
  – The CONET client sends the request to the associated server.
  – The requests from the client (one per chunk) reach the first CONET cache server in the path to the server and do not progress further (i.e. the server is not aware of any new request).
• Step 5 (CONET cache server provides the content to the CONET client)
  – The CONET cache server receives the client's requests and sends the content chunks back to the CONET client.
  – The chunks already cached are sent from the cache server to the client.
  – If a certain chunk has not been cached yet, the client's request progresses up to the server, and the server provides the chunk.
CONET experiment over the ALIEN slice – information flow
• The following graphics show the data-plane traffic (incoming traffic in green and outgoing in blue) at the three most relevant elements in this test: the CONET client at the EHU island, the CONET server at the PSNC island, and the CONET cache server at the i2cat island.
  – A client behind the DOCSIS ALIEN device in the EHU island requests a content (a 100 K file) from a server located in the PSNC island. [Graph: data-plane traffic of the CONET client at the EHU island]
  – The content is provided by the server at the PSNC island, while a cache server located in the i2Cat island caches this content. [Graph: data-plane traffic of the CONET server at the PSNC island]
  – The same client in the EHU island requests the same content and, this time, the content is provided by the cache server in the i2Cat island. [Graph: data-plane traffic of the CONET cache server at the i2cat island]
CONET experiment over ALIEN slice Detail of the flowmod that redirects the content requests to the CONET cache server
Conclusions for the integrated experiment
• Equipment upgraded with the ALIEN HAL can be successfully integrated under an OpenFlow control plane.
• The possibility to define extensions (and use them) is related to the control framework (e.g. virtualized).
• The influence of OFELIA on performance measurements needs to be fully assessed in every use case.
• Features available in the new hardware are sometimes difficult to expose in a FlowVisor-based environment
  – e.g. the QoS-aware service flows available in DOCSIS can be used either with embedded signalling in the Ethernet or IP header (FV-compatible) or with specific extensions.
General conclusions
• Deploying applications over OFELIA is not trivial.
  – Help from the OFELIA team was needed (UNIVBRIS, iMinds and I2Cat OFELIA teams: thank you!).
  – Manual approval of flowspaces: a change in any part of an island needs re-approval of the whole setup.
  – Nevertheless, it is a good way to have an OF infrastructure available for experimentation.
  – The dependency on OF1.0 is very limiting.
• The HAL approach is suitable.
  – HAL-modified devices previously not available for OF control can now be fully integrated under an OF controller.
  – It supports different types of hardware and has been demonstrated to work with stock applications.
Summary and Conclusions
Bartosz Belter and Kostas Pentikousis
[email protected],
[email protected]
EWSDN Workshop, Budapest, Hungary 1 September, 2014
What’s the vision?
•
ALIEN ambition is to develop an OpenFlow based programmable network architecture over non-OpenFlow capable hardware – ALIEN by providing a novel concept of Hardware Abstraction Layer enables non-OpenFlow platforms (aka “alien hardware”) to participate in network experiments and behave as standard Open Flow switch to control and management layer residing on top of the physical infrastructure
What do we offer?
• The Hardware Abstraction Layer (HAL):
  – decouples hardware-specific control and management from the network-node abstraction mechanism (i.e. OpenFlow)
  – hides the device complexity as well as technology- and vendor-specific features from the control plane.
• A reference implementation of the HAL
  – A framework for the development of hardware drivers for various network devices
  – Hardware drivers are developed using unified interfaces
• Implementations of the HAL hardware-specific parts for the ALIEN platforms:
  – EZappliance, Cavium OCTEON, NetFPGA, Layer-0 switch, Dell/Force10 switch, GEPON and DOCSIS
• ALIEN advances to the OFELIA Control Framework
• Detailed reports from experiments performed over OFELIA and ALIEN hardware
Further Reading
• Start here:
  – D. Parniewicz, et al., "Design and Implementation of an OpenFlow Hardware Abstraction Layer", Proc. SIGCOMM DCC 2014, Chicago, USA, August 2014.
  – L. Ogrodowczyk, et al., "Hardware Abstraction Layer for non-OpenFlow capable devices", Proc. TERENA Networking Conference, Dublin, Ireland, May 2014.
• Then continue here:
  – Deliverables
  – Software
• Demos @ EWSDN 2014
www.fp7-alien.eu
Acknowledgement This work was conducted within the framework of the FP7 ALIEN project, which is partially funded by the Commission of the European Union under grant agreement no. 317880