An OpenFlow Implementation for Network Processors

Marc Suñé†, Victor Alvarez†, Tobias Jungel†, Umar Toseef∗, Kostas Pentikousis∗

†BISDN GmbH, ∗EICT GmbH
{marc.sune, victor.alvarez, tobias.jungel}@bisdn.de, {umar.toseef, k.pentikousis}@eict.de

Abstract—OpenFlow is catalyzing the deployment of software-defined networking (SDN) technologies around the globe. In practice, however, compatibility issues hinder the deployment of an OpenFlow control plane on a number of network platforms. The FP7 ALIEN project addresses this problem by introducing a Hardware Abstraction Layer (HAL) which enables OpenFlow capabilities on legacy network elements. This paper presents the implementation of the HAL on programmable network platforms with multi-core CPUs and summarizes the implementation experience gained in the process.

I. INTRODUCTION

The efforts to realize OpenFlow functionality on legacy network devices are greatly hindered by the heterogeneity of platforms and their architectures. The FP7 ALIEN project set out to design, specify, and implement a solution to this problem by introducing a Hardware Abstraction Layer (HAL) [1]. In doing so, ALIEN provides a solution that adds OpenFlow protocol support to network elements that do not have it natively. As a result, said elements, which we will refer to as "ALIEN devices" in the remainder of this paper, can be integrated into an OpenFlow deployment. As a first step, we expect to see ALIEN devices introduced in SDN experimental facilities such as OFELIA [2], and then, at a later stage, in carrier-grade environments [3].

In a nutshell, the HAL comprises a hardware-agnostic (or cross-hardware platform) layer and a hardware-specific part. Due to space considerations we refer interested readers to [4][5][1] for further details. The remainder of this paper presents the implementation details of the hardware-specific part of the HAL, taking programmable network platforms as a reference.

II. SOFTWARE ARCHITECTURE

The Revised OpenFlow Library (ROFL; see http://roflibs.org) is one of the main building blocks of the implementation presented here. The key motivation for introducing ROFL is to ease the development of OpenFlow control applications, controller frameworks, and datapath elements; it can be used to build any kind of OpenFlow-enabled software. ROFL consists of three different libraries: ROFL-common, ROFL-pipeline and ROFL-HAL.

ROFL-common provides basic support for the OpenFlow protocol, including protocol parsers and message mangling, and can be used to build OpenFlow Endpoints (OFEs). The OFEs map the OpenFlow protocol wire representation to a set of C++ classes. Based on the design requirements, an OFE can be incorporated either in a datapath element or in an OpenFlow controller. This enables these endpoints to handle the OpenFlow control connections to any controller or datapath element. In practice, ROFL-common hides the details of the respective protocol version and provides a clean and easy-to-use API to software developers. Currently, ROFL supports endpoints for three OpenFlow versions, namely 1.0, 1.2, and 1.3.
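To illustrate the role of an OFE, the following sketch shows how an application might consume version-neutral events after the endpoint has parsed the wire message. All class, struct, and field names are our own assumptions for illustration and do not correspond to the actual ROFL-common API.

```cpp
// Illustrative only: a hypothetical handler interface, not the ROFL-common API.
#include <cstdint>
#include <iostream>

// A version-agnostic view of a FlowMod, as an OFE might expose it after
// parsing the wire message (field names are assumptions).
struct flow_mod_event {
    uint8_t  of_version;   // 0x01 = OF1.0, 0x03 = OF1.2, 0x04 = OF1.3
    uint8_t  table_id;
    uint16_t priority;
    uint64_t cookie;
};

// The application (datapath or controller side) derives from a handler base
// class and reacts to already-parsed, version-neutral events.
class endpoint_handler {
public:
    virtual ~endpoint_handler() = default;
    virtual void handle_flow_mod(const flow_mod_event& ev) = 0;
};

class my_datapath_handler : public endpoint_handler {
public:
    void handle_flow_mod(const flow_mod_event& ev) override {
        // Version negotiation and wire parsing happened inside the endpoint;
        // the handler only sees the normalized event.
        std::cout << "FlowMod for table " << int(ev.table_id)
                  << " (wire version 0x" << std::hex << int(ev.of_version) << ")\n";
    }
};
```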

In order to implement OpenFlow on ALIEN devices, a necessary and fundamental building block is an OpenFlow pipeline, written in ANSI C, that can be integrated into any hardware platform. ROFL-pipeline addresses this point. The OpenFlow pipeline is useful in several ways. First, it can be used as a data model of the forwarding plane of an OpenFlow switch. Second, it can serve as a data model and state management library, maintaining the state of the installed flowMod and groupMod entries, associated timers, statistics, and so on; this enables platform-specific code to capture events (e.g., flowMod insertion and removal) and then use other APIs to configure ASICs or other device resources. Third, ROFL-pipeline can be employed as a data model, state manager and software OpenFlow packet processing library, which uses packet processing APIs to process packets in software or hybrid (i.e., hardware-cum-software) OpenFlow datapath elements. Furthermore, the ROFL OpenFlow pipeline supports specific matching algorithms (e.g., for flowMod lookup) that can be defined on a per-table and per-logical-switch basis, such as, for instance, Layer-3 optimized matching.

ROFL-HAL implements the interface, referred to as the Abstract Forwarding API (AFA) in [1], between the hardware-independent Control and Management Module (CMM) and the hardware-dependent platform drivers (see Figure 1).

The eXtensible DataPath daemon (xDPd; see http://xdpd.org) is a framework for developing OpenFlow/SDN datapath elements that uses ROFL. Its architectural design makes it easily extensible to support new forwarding devices and platforms, as well as new OpenFlow versions and extensions. xDPd supports OpenFlow versions 1.0, 1.2, and 1.3. Currently, xDPd is available on several hardware platforms, including user-space GNU/Linux (gnu-linux), GNU/Linux Intel DPDK (gnu-linux-dpdk), Cavium Octeon (octeon), Broadcom (bcm), EZchip NP-3 (ezappliance), NetFPGA-1G (netfpga1g) and NetFPGA-10G (netfpga10g). Although much of the code is open and available, certain parts for each of these platforms may be subject to hardware vendor license limitations.

One of the features of xDPd is the creation of multiple Logical Switch Instances (LSIs). LSIs are created either through a configuration file, which is processed at start-up time, or dynamically through an API. Each network interface has to be exclusively assigned to a single LSI. This is a simple form of slicing and a basic mechanism to realize virtualization.

The hardware-independent part of xDPd contains the Control and Management Module (CMM). The CMM uses ROFL, specifically ROFL-common, to implement the OFEs; this allows the LSIs to run different protocol versions in parallel. In order to achieve this, the CMM needs to bind the proper OpenFlow endpoint version to the LSI and to translate the wire protocol OpenFlow values into the internal data model of the pipeline. xDPd's configuration and management interfaces are exposed through a native C++ API, which can ultimately be consumed by xDPd's plugins. As a result, xDPd can be extended to provide further interfaces to configuration and management entities.
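The event-driven use of ROFL-pipeline described above, where the pipeline acts as a data model and state manager that notifies platform code about flowMod changes, can be sketched as follows. The structures and function names are assumptions made for illustration and are not the actual ROFL-pipeline API.

```cpp
// Illustrative sketch of the "pipeline as data model + event source" pattern.
// Names are assumptions, not the real ROFL-pipeline API.
#include <cstdint>

// Minimal stand-in for a flow entry kept by the pipeline state manager.
struct flow_entry {
    uint8_t  table_id;
    uint16_t priority;
    uint64_t cookie;
    // match fields, instructions, timers, counters ... omitted
};

// Hooks a platform driver would register so that it is notified when the
// hardware-independent side installs or removes state, and can then push the
// corresponding configuration into an ASIC/NPU through its own SDK calls.
struct platform_hooks {
    void (*on_flow_added)(const flow_entry* e);
    void (*on_flow_removed)(const flow_entry* e);
};

// Hypothetical platform-specific implementation.
static void octeon_flow_added(const flow_entry* e) {
    // Translate the generic entry into device-specific tables/registers here.
    (void)e;
}
static void octeon_flow_removed(const flow_entry* e) {
    (void)e;
}

// The driver would hand these hooks to the pipeline at switch creation time,
// e.g. pipeline_register_hooks(lsi, &hooks); (hypothetical call).
static const platform_hooks hooks = { octeon_flow_added, octeon_flow_removed };
```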

Fig. 1. Software architecture

III. HAL IMPLEMENTATION ON OCTEON

This section briefly describes how the HAL has been implemented for the Cavium OCTEON Programmable Network Platform (PNP). PNPs represent a class of network equipment containing a re-programmable hardware unit (NPU or FPGA) that can be adapted to a wide range of network processing tasks, such as packet switching, routing, network monitoring, firewall protection, deep packet inspection, and load balancing, just to name a few. The Cavium OCTEON family offers a variety of multi-core MIPS64 processor boards especially targeted at network packet processing duties. Equipped with 1 to 48 cnMIPS cores on a single chip along with other hardware acceleration units (port I/O, cryptography, DFA, and so on), they constitute a highly versatile software-programmable network platform.

The architecture of the implementation is as follows: a single MIPS core, called the management core, runs a standard Cavium GNU/Linux OS which is employed to run xDPd (i.e., the OFE, CMM, plug-ins, etc.) as well as the OCTEON-specific driver. The remaining cores are devoted to fast-path packet processing. These cores run on bare metal, that is, in standalone mode (Simple Executive Standalone, SE-S) without any operating system. They are optimized to execute a specific compiled binary program, the OpenFlow pipeline processing, in "single-thread" mode, i.e., without thread context switches (no OS). The management core deals with the configuration of the fast-path rules in the OpenFlow pipeline, as well as with the communication with the OpenFlow controller via the OpenFlow protocol.

The OCTEON platform uses a shared memory region, allocated at boot time in the so-called bootmem area, that is jointly accessed by the management core and the rest of the I/O cores (SE-S). This area is used to share the data structures of ROFL-pipeline. In general, the management core has write access to it (e.g., to add/delete flows) while the processing cores have read-only access. The processing cores use this state to process packets continuously, as illustrated in Figure 1. It should be noted that the actual packet flow goes through the SE-S cores exclusively, except in the event of a packet_in. Inside the OCTEON processor itself, another API is used to access the specific functions and registers of the hardware accelerators. This API is called the Simple Executive API (SEAPI), or HAL in the OCTEON User Manual (not to be confused with the ALIEN-specified HAL). The Linux core implements a pipeline that is a logical representation of the SE-S cores; no packet actually passes through it, other than packet_outs.
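The split between the Linux management core (single writer) and the SE-S fast-path cores (read-only) over the shared bootmem region can be sketched as follows. This is a much-simplified illustration: the names and the flow-table layout are assumptions, the actual shared state is ROFL-pipeline's, and on OCTEON the region would be obtained from a named bootmem block through the Cavium SDK rather than declared statically as shown here.

```cpp
// Simplified sketch of the shared state between the Linux management core
// (writer) and the SE-S fast-path cores (readers). Real code must place this
// structure in the boot-time shared memory region and use the pipeline's own
// locking/versioning; the layout below is an assumption for illustration.
#include <atomic>
#include <cstdint>

struct shared_flow_entry {
    uint16_t priority;
    uint64_t match_bitmap;   // placeholder for the real match structure
    uint64_t action_bitmap;  // placeholder for the real instruction set
};

struct shared_pipeline_state {
    std::atomic<uint32_t> generation;  // bumped by the writer on every change
    uint32_t              num_entries;
    shared_flow_entry     entries[1024];
};

// Management core (single writer): install a flow and publish the change.
void mgmt_add_flow(shared_pipeline_state* s, const shared_flow_entry& e) {
    s->entries[s->num_entries] = e;
    s->num_entries++;
    s->generation.fetch_add(1, std::memory_order_release);
}

// SE-S core (read-only): consult the shared state for every received packet.
const shared_flow_entry* fastpath_lookup(const shared_pipeline_state* s,
                                         uint64_t pkt_match) {
    for (uint32_t i = 0; i < s->num_entries; ++i)
        if ((s->entries[i].match_bitmap & pkt_match) == s->entries[i].match_bitmap)
            return &s->entries[i];
    return nullptr;  // table miss: would trigger a packet_in to the controller
}
```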

IV. CONCLUSION

This paper described both the hardware-agnostic and the hardware-specific parts of the network processor port to xDPd and OpenFlow. The hardware-agnostic part has been built mainly using (i) the Revised OpenFlow Library (ROFL), which provides a foundation for the development of OpenFlow controllers and datapath elements, and (ii) the eXtensible DataPath daemon (xDPd), which allows the development of platform-specific drivers for a variety of devices. xDPd supports extensions through plug-in modules, e.g., for device configuration, management and virtualization. This paper also presented the implementation and deployment details of the HAL on the OCTEON platform, where the management core hosts the common part and a portion of the platform-specific code, while the remaining standalone cores are used to perform fast packet processing. This implementation validates in practice the feasibility of the HAL architecture design and its specification, as developed in the FP7 ALIEN project, for programmable network platforms.

V. ACKNOWLEDGEMENT

This work was conducted within the framework of the FP7 ALIEN project, which is partially funded by the Commission of the European Union under grant agreement no. 317880.

REFERENCES

[1] U. Toseef (Ed.) et al., "Report on Implementation of the Common Part of an OpenFlow Datapath Element and the Extended FlowVisor," available at http://www.fp7-alien.eu, 2014.
[2] M. Suñé et al., "Design and Implementation of the OFELIA FP7 Facility: The European OpenFlow Testbed," Computer Networks, Special Issue on Future Internet Testbeds, 2013.
[3] W. John et al., "Research Directions in Network Service Chaining," in IEEE SDN4FNS, Nov. 2013, pp. 1-7.
[4] D. Parniewicz et al., "Design and Implementation of an OpenFlow Hardware Abstraction Layer," in Proceedings of the ACM SIGCOMM Workshop on Distributed Cloud Computing (DCC), Aug. 2014.
[5] L. Ogrodowczyk et al., "Hardware Abstraction Layer for Non-OpenFlow Capable Devices," in Proceedings of the TERENA Networking Conference (TNC), May 2014.
