On the deployment of an open-source, 5G-aware evaluation testbed
Luis Tomás Bolivar1, Christos Tselios2, Daniel Mellado Area1 and George Tsolis2
1 Red Hat Inc., Spain, {ltomasbo, dmellado}@redhat.com
2 Citrix Systems Inc., Delivery Networks Business Unit, Patras, Greece, {name.surname}@citrix.com
Abstract—This paper focuses on the design and deployment of a virtualized evaluation testbed, based solely on open-source software components, to support the full extent of features and functional requirements of dominant 5G architectural concepts such as Network Function Virtualization (NFV) and Multi-access Edge Computing (MEC). To meet the elevated standards of performance and versatility set by existing 5G architectural approaches, we introduce a complex, yet fully customizable platform capable of supporting all prerequisites of contemporary deployments, paired with backwards compatibility for pre-existing applications and workflows through specific component integration. The most important contribution of the proposed architectural blueprint, however, is that it alleviates the thorny issues of rapid service instantiation and increased networking performance requirements by supporting both container and virtual machine (VM) orchestration. By extending its cornerstone element, Kuryr, to leverage certain OpenStack networking attributes, using ManageIQ as the orchestration module for both containers and VMs, and aligning with the principles of both NFV and MEC, the proposed evaluation testbed is considered a solid candidate platform for conducting real-world solution testing, without the licensing premium.
Index Terms—Cloud Computing, Virtualization, Containers, SDN, 5G, MEC.
I. INTRODUCTION
The forthcoming 5G Networking Era is often considered by analysts and researchers in both industry and academia as the commencement of the hyper-connected society [1], where billions of users, devices and sophisticated machines will be capable of exchanging information and data. Impending telecommunication architectures should blend a multitude of novel and exciting technologies, ranging from radio, transport and IP networks to industrial sector verticals and applications. The necessity for such a complex ecosystem calls for a substantial increase in connected devices and data rates (10-100 times the existing figures), less than 1 millisecond end-to-end over-the-air latency, coverage and availability increases reaching 100% and 99.99% respectively, 1000 times larger throughput, real-time information processing and transmission, significantly lower network management operation and energy consumption costs and, last but not least, seamless integration of all current wireless technologies [2], [3] following user-centric principles [4], [5]. Architectural flexibility is amongst the prime requirements of 5G, as it allows the seamless operation of different building
blocks originating from existing open-source implementations and legacy infrastructure paradigms, now converted to functional schemes. In order to achieve this objective, two convoluted and equally important challenges need to be addressed sufficiently: interoperability and federation. The heterogeneous nature of the participating technologies and network segments creates a highly complex environment, in which the intended functionality verification can be accomplished only through extensive interoperability experiments, each with a clearly defined testing methodology, executed on the appropriate testbed. However, due to the large number of available frameworks, solutions and standards, even determining the most suitable one for experimental purposes has become a challenging task. The thorny issue of proper component selection entails a more radical approach. Instead of selecting building blocks that are already compromised by their individual inability to meet the demanding standards of 5G, it is necessary to follow a combinatorial path to success by interconnecting software elements whose strong points complement each other's weaknesses, into a highly competitive amalgam that will match the expected degree of functionality for an end-to-end testbed. Given the fact that integrating proprietary solutions would provide leverage to certain players against others, the ideal testing environment must be kept open, highly available and accessible by everyone. Various stakeholders may either engage in experimentation with their technology using elements of an open and available architecture, or even replicate whole parts of it and integrate them into their proposed solutions [6]. The benefit of an open-source solution is that any accumulated practical experience will be documented; therefore information and guidelines on best practices for 5G verticals, or enhancements of specific network segments, will become available and benefit the entire community. In this paper, we outline a testbed architecture that consists purely of open-source software and extends the functionality of its individual components and techniques towards developing an efficient playground for testing many well-known networking concepts such as Network Function Virtualization [7], Software-Defined Networking [8] and Multi-access Edge Computing [9], [10]. The rest of the paper is organized as follows: Section II presents our motivation along with some similar attempts, while Section III presents an
overview of the proposed architecture and its components. Section IV presents certain contributions that enhance the characteristics of the proposed solution and finally, Section V draws all major conclusions and summarizes the paper.
II. MOTIVATION AND PREVIOUS WORK
Traditional networks have always relied on physical appliances such as routers, switches and load balancers, all implemented on dedicated hardware and optimized for specific tasks [11]. However, the advent of 5G will introduce a series of integral changes to currently deployed infrastructure, the most important being the conversion of all services to software functions, even those located in the radio access network. Despite the profound complexity this task involves, recent developments and real-life deployments in various sectors of the 5G research spectrum provide the necessary technology, system building blocks and architectural concepts towards assembling and operating such a platform as an interconnected, distributed ecosystem. At the core of this ecosystem lies the Network Function Virtualization (NFV) paradigm, which aims to port physical appliances to software, creating Virtualized Network Functions (VNF) and thus replacing proprietary hardware with commercial off-the-shelf (COTS) general-purpose equipment. Together with Multi-access Edge Computing (MEC) [9], an emerging concept which brings computational resources closer to the end user equipment and thus reduces network delay and latency, NFV is considered the backbone of next-generation networking. It is therefore essential for any 5G-aware evaluation testbed to support these functional paradigms efficiently in its core design.
A. ETSI NFV Architecture
One of the most complete architecture frameworks for NFV was proposed by the ETSI NFV Industry Specification Group (ISG) and is presented in Figure 1. In this approach, certain building blocks are clearly described, such as the management and orchestration (MANO) module, which includes the NFV Orchestrator (NFVO), the VNF Manager (VNFM) and the Virtualized Infrastructure Manager (VIM). Every virtual computational, storage and networking resource resides in the Network Function Virtualization Infrastructure (NFVI) layer, which provides the necessary interfaces to every complementary entity. The ETSI architecture has gained significant momentum and there are currently several projects developing ETSI-compliant functions and components, especially for the NFV MANO stack. Notable examples of such attempts are Open Source MANO (OSM) [12], Open Baton [13] and ManageIQ [14]. Complementary to the work being done in these initiatives, there are relevant results brought to the NFV arena by diverse well-established upstream open-source projects such as OpenStack [15], KVM [16] and Open vSwitch [17]. Other standards-defining organizations like OASIS [18] and open communities such as OPNFV [19] do provide solutions for NFV Orchestrators, NFVI and VIM, some of them even showcased [20], but the holistic approach of ETSI leaves limited space for defining something disruptive. This endeavor
will integrate IT and networking by having large network segments rely on software only, while Network Services (NS) can be deployed on top of such a softwarized infrastructure by VNF composition and chaining, allowing immense flexibility, efficiency, accurate resource utilization provisioning and vast modularity.
Fig. 1: ETSI NFV Reference Architecture [21]
B. Multi-access Edge Computing
Multi-access Edge Computing (MEC) is an emerging network deployment concept expected to tackle network delay and increased network latency by introducing the concepts of cloud computing to the mobile network ecosystem. It can be seen as a cloud server with increased computational and storage resources, operating at the edge of the mobile network, rapidly handling demanding tasks while adhering to strict delay requirements. This approach abolishes the traditional network operation model, in which the network backend oversees everything through dedicated hardware with little or no flexibility.
Fig. 2: MEC reference architecture defined by ETSI
MEC standardization activities proceed at a steady pace, with draft specifications already released by ETSI's dedicated industry specification group, ISG MEC, in a series of documents such as [22], [23], [24] and [25]. The main purpose of the first one is to ensure that the exact same terminology is used in all ETSI specifications related to MEC. Figure 2 presents the reference architecture as described by ETSI in [23], consisting of the various functional blocks together with the necessary communication interfaces between them. The aforementioned functional blocks may not always represent physical nodes in the mobile network, but rather software entities running on top of the virtualized infrastructure. The virtualized infrastructure is perceived as a physical datacenter sliced through hypervisor software [26], on which the VMs representing functional entities run. From an operational perspective, one may safely assume that the majority of ETSI NFV components will be re-utilized in the MEC reference architecture, since both platforms are virtualization-dependent. It is therefore expected that a system-, server- or application-level management and orchestration entity should be available in all MEC deployments as well. MEC can be exploited either by a user-enabled application located directly in the user equipment, or by third-party customers (such as commercial enterprises) via a customer-facing service (CFS) portal. Both the UE and the CFS portal interact with the MEC system through the MEC system management level, which includes a user application lifecycle management (LCM) proxy for mediating the requests, the operations support system (OSS) of the mobile operator, and the mobile edge orchestrator. The latter provides the core functionality in the MEC system management level, since it maintains an overall view of the available computing/storage/network resources and the MEC services. In this respect, the mobile edge orchestrator allocates the virtualized MEC resources to the applications that are about to be initiated, depending on the applications' requirements (e.g., latency); a toy illustration of such a placement decision is sketched at the end of this subsection. Furthermore, the orchestrator also flexibly scales the available resources for already running applications [23]. The cornerstone of the proposed architecture is the mobile edge host/server, an entity that contains the mobile edge platform and a virtualization infrastructure which provides compute, storage, and network resources for the mobile edge applications. The virtualization infrastructure includes a data plane that executes the traffic rules received by the mobile edge platform, and routes the traffic among applications, services, local and external networks. The mobile edge platform, on the other hand, is responsible for the following functions:
• offering an environment where the mobile edge applications can discover, advertise, consume and offer mobile edge services
• receiving traffic rules from the mobile edge platform manager, applications, or services, and instructing the data plane accordingly
• hosting mobile edge services
• providing access to persistent storage and additional information
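To make the orchestrator's role above more concrete, the following toy Python sketch illustrates a latency- and capacity-aware placement decision of the kind just described. It is purely illustrative: the EdgeHost fields, the place_app policy and all values are our own simplifications and do not correspond to any ETSI-defined interface.

```python
from dataclasses import dataclass

@dataclass
class EdgeHost:
    name: str
    latency_ms: float   # expected latency from the served base station (illustrative)
    free_vcpus: int
    free_mem_mb: int

def place_app(hosts, max_latency_ms, vcpus, mem_mb):
    """Pick an edge host that satisfies the application's latency and resource needs."""
    feasible = [h for h in hosts
                if h.latency_ms <= max_latency_ms
                and h.free_vcpus >= vcpus
                and h.free_mem_mb >= mem_mb]
    if not feasible:
        return None  # a real orchestrator would queue, reject or scale out instead
    # Among feasible hosts, prefer the one with the lowest latency.
    return min(feasible, key=lambda h: h.latency_ms)

hosts = [EdgeHost("edge-1", 2.0, 8, 16384), EdgeHost("edge-2", 0.8, 2, 4096)]
chosen = place_app(hosts, max_latency_ms=1.0, vcpus=2, mem_mb=2048)
print(chosen.name if chosen else "no suitable edge host")  # -> edge-2
```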
III. PROPOSED ARCHITECTURE OVERVIEW
Multi-access Edge Computing advocates the deployment of virtualized network services at remote access networks, placed next to base stations and aggregation points and running on x86 commodity servers. In other words, its task is to enable services running at the edge of the network, so that these services can benefit from higher bandwidth and lower latency. This approach consequently creates several new challenges. On one hand, the network services' reaction to current conditions (for instance, spikes in the amount of traffic handled by specific VNFs/NSs) needs to be rapid, and the application lifecycle management, including instantiation, migration, scaling and so on, must be quick enough to provide a good overall user experience. On the other hand, the amount of available resources at the edge is notably limited when compared to central data centers. Therefore, these resources must be used efficiently, which necessitates careful virtualization overhead estimation and planning (both time-wise and resource-wise). Based on the current status of the available upstream components (OpenStack, Kubernetes, OSM and ManageIQ), as well as the newly-introduced 5G performance requirements, we strongly believe that VMs may not always be the proper approach for all the needs of such a complicated and performance-oriented ecosystem. Other solutions such as unikernel VMs and/or containers should also be available in the platform and instantiated when necessary; we therefore forecast a mixed container and VM deployment scenario, at least for the next few years. Note that even though there is high interest in moving more functionality to containers over the coming years, the priority so far is still set on new applications rather than legacy ones, and not all applications will or can be migrated at the same time. On top of that, despite the fact that VMs and containers have a lot in common, certain differences also exist. These technologies should be seen as complementary, rather than competing ones. For example, VMs can be a perfect environment for running containerized workloads, providing a more secure environment for running containers with strict multi-tenancy support, as well as higher flexibility and even improved fault tolerance. This is commonly referred to as nested containers and is already fairly common practice even in production environments, where Kubernetes [27] or OpenShift [28] instances run on top of OpenStack VMs. Consequently, with this blend of VMs and containers already operational in datacenters all over the globe, any 5G testbed design must follow a similar approach, ensuring that the best features of both technologies are incorporated into the proposed orchestration framework. Figure 3 illustrates the various software components of the proposed 5G-aware evaluation testbed architecture along with their actuation level.
Fig. 3: Proposed 5G-Aware Evaluation Testbed Architecture
A. VIM Option: OpenStack, Kubernetes and Kuryr
OpenStack and Kubernetes are the most commonly used VIMs for VMs and containers, respectively. In addition, OpenShift is another well-known framework for container management, leveraging Kubernetes' power to introduce additional DevOps functionality. When providing a common infrastructure for both VMs and containers, the problem is not simply how to increase or slice the available computational resources, regardless of whether these are VMs or containers, but also how to efficiently connect these computational resources among themselves and to the users. In other words, networking is an attribute of paramount importance and needs to be carefully addressed in the early stages of the testbed platform design. Regarding VMs in OpenStack, the Neutron project already has a very rich ecosystem of plug-ins and drivers which provide the necessary networking solutions and services, like load-balancing-as-a-service (LBaaS), virtual-private-network-as-a-service (VPNaaS) and firewall-as-a-service (FWaaS). By contrast, when it comes to networking in the container ecosystem, there is no standard networking API and implementation, therefore each solution tries to reinvent the wheel, overlapping with other existing solutions. This is especially true in hybrid environments including blends of containers and VMs. As an example, OpenStack Magnum [29] had to introduce abstraction layers for different libnetwork drivers depending on the Container Orchestration Engine (COE). Therefore, there is a need to further advance container networking and its integration in the OpenStack environment. To accomplish this, we have used and worked on a recent OpenStack project named Kuryr, which tries to leverage the abstraction and all the hard work previously done in Neutron, its plug-ins and services. In a nutshell, Kuryr aims to be the "integration bridge" between the two communities, container and VM networking, avoiding the situation where each Neutron plug-in or solution needs to find and close all existing gaps independently. Kuryr allows mapping the container networking abstraction to the Neutron API, enabling consumers to choose the vendor and keep one high-quality API free of vendor lock-in, which in turn facilitates bringing container and VM networking together under a single API (a minimal sketch of this single-API usage follows the list below). Overall, this approach allows:
• A single community-sourced networking method regardless of having containers, VMs or both deployed in the datacenter
• Leveraging OpenStack vendor support experience in the container space
• A quicker path to Kubernetes & OpenShift for Neutron networking users
• Smooth workload transition to containers/microservices
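The following fragment is a minimal sketch of that single-API view, written against the openstacksdk library: the same Neutron network and subnet back both a port meant for a VM and a port that Kuryr would bind to a container, so features such as security groups or floating IPs apply uniformly. The cloud name, network names and CIDR are placeholder assumptions, not values required by Kuryr.

```python
import openstack  # openstacksdk

# Credentials are taken from clouds.yaml / environment; "testbed" is a placeholder cloud name.
conn = openstack.connect(cloud="testbed")

# One Neutron network and subnet shared by VM and container workloads.
net = conn.network.create_network(name="hybrid-net")
subnet = conn.network.create_subnet(network_id=net.id, name="hybrid-subnet",
                                    ip_version=4, cidr="10.10.0.0/24")

# A port that Nova would attach to a VM ...
vm_port = conn.network.create_port(network_id=net.id, name="vm-port")

# ... and a port that Kuryr would hand to a container/pod: same API, same network,
# so Neutron services (security groups, floating IPs, LBaaS, ...) cover both.
container_port = conn.network.create_port(network_id=net.id, name="container-port")

print(vm_port.id, container_port.id)
```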
B. VNFMs
To manage VM-based VNFs, our intention is to utilize and extend already available OpenStack components. We plan to use HEAT [30] to perform the deployment actions through templates; these templates will be managed by another OpenStack component designed for that end, named Mistral [31], in order to adapt to the given workflows. Besides this, the Tacker OpenStack component [32] will be used to handle VNF and NS descriptor onboarding, translating the descriptors from TOSCA [33] to HEAT templates that will be processed by the corresponding entity to execute the requested deployment actions. As for container-based VNFs, there are different options. Kubernetes itself already provides certain VNF management functionality. On top of that, for Kubernetes deployments running on OpenStack VMs, it is possible to make use of Magnum, which provides extra capabilities for container management. On the other hand, OpenShift already offers additional container management functionality on top of Kubernetes that can be used through its API. Moreover, as OpenShift leverages Kubernetes functionality, all management capabilities currently available in Kubernetes/Magnum can be used on OpenShift deployments too. In addition, there is some initial work (mostly design and first steps) on extending Tacker to also allow Kubernetes as a VIM, also using Kuryr as the integration layer between containers and VMs1.
1 https://github.com/openstack/tacker-specs/blob/master/specs/queens/Kubernetesas-VIM.rst
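As a hedged illustration of this template-driven approach, the snippet below uses the openstacksdk orchestration proxy to launch a deliberately minimal HEAT stack describing a single-VM VNF. The template body, stack name and parameter values are placeholders of ours, not descriptors produced by Tacker or used in the testbed, and the exact proxy arguments may vary slightly between openstacksdk versions.

```python
import openstack

conn = openstack.connect(cloud="testbed")  # placeholder cloud name

# Minimal illustrative HEAT template for a single-VM VNF (placeholder values).
template = {
    "heat_template_version": "2016-10-14",
    "parameters": {
        "image": {"type": "string"},
        "flavor": {"type": "string"},
    },
    "resources": {
        "vnf_vm": {
            "type": "OS::Nova::Server",
            "properties": {
                "image": {"get_param": "image"},
                "flavor": {"get_param": "flavor"},
            },
        },
    },
}

# In the testbed this call would be driven by Tacker/Mistral after translating
# the TOSCA descriptor into a HEAT template.
stack = conn.orchestration.create_stack(
    name="demo-vnf",
    template=template,
    parameters={"image": "cirros", "flavor": "m1.small"},
)
print(stack.id, stack.status)
```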
C. NFVOs
Last but not least, at the upper level it is possible to utilize a variety of different NFV orchestrators. Due to the lack of a complete end-to-end orchestration solution at the moment, different options have been carefully evaluated. Given that the usage of open-source software was a prime consideration for the design of the proposed 5G-aware evaluation testbed, two of the most promising solutions were deployed, namely OSM and ManageIQ. OSM appears to have significant momentum paired with large support from the NFV community, however its container orchestration capabilities are non-existent at the moment. This is the prime reason we decided to further explore and extend ManageIQ, the only orchestrator capable of dealing with different VM and container providers, which, as mentioned before, is a necessity for 5G deployments in the foreseeable future. In addition, ManageIQ allows multiple deployments to be handled easily from a single operational point. This is without a doubt a major asset towards deploying a 5G testbed with inherent support for different cloud deployments at the network edge besides the core. Moreover, ManageIQ provides enough flexibility to include new orchestration actions by relying on Ansible [34] playbooks to trigger/execute new configuration and/or deployment actions.
IV. NOVEL CONTRIBUTIONS TOWARDS A 5G-AWARE ARCHITECTURE
Upon presenting the components at each hierarchy level (VIM, VNFM, and NFVO), together with the "glue" between VMs and containers (Kuryr), it is important to highlight the different deployment options. As shown2, VMs and containers can be deployed in a side-by-side or in a nested manner. Some applications (MEC Apps) or VNFs may need really fast scaling or spawning responses and therefore must be executed directly on dedicated bare metal deployments. In this case, these elements will run inside containers to take advantage of their easy portability and lifecycle management, unlike old-fashioned bare metal installations and configurations. Other applications and VNFs may not require such fast scaling or spawning times but depend on higher network performance (latency, throughput), while still needing to retain the flexibility given by containers or VMs, expose specific interfaces [35] and exceed certain network traffic thresholds for testing purposes [36], thus requiring a VM with SR-IOV or DPDK support. Finally, there may be other applications or VNFs that benefit from extra manageability, consequently taking advantage of running in nested containers, with stronger isolation (and thus improved security), and where some extra information about the status of the applications is known (for both the hosting VM and the nested containers). This approach also allows other types of orchestration actions over the applications. One example is the functionality provided by the Magnum OpenStack project [29], which allows installing Kubernetes on top of the OpenStack VMs, as well as performing some extra orchestration actions over the containers deployed through the virtualized infrastructure.
2 https://ltomasbo.wordpress.com/2017/01/24/superfluidity-containers-andvms-deployment-for-the-mobile-network-part-2
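For the SR-IOV option just mentioned, a possible provisioning flow with the openstacksdk is sketched below: a Neutron port is created with the 'direct' vNIC type (requesting an SR-IOV virtual function) and handed to Nova at boot time. The network, image and flavor names are placeholders, and SR-IOV must of course be enabled on the underlying compute hosts for this to work.

```python
import openstack

conn = openstack.connect(cloud="testbed")  # placeholder cloud name

net = conn.network.find_network("provider-sriov")  # placeholder SR-IOV provider network

# vnic_type 'direct' asks Neutron/Nova to back this port with an SR-IOV virtual function.
sriov_port = conn.network.create_port(
    network_id=net.id,
    name="vnf-sriov-port",
    binding_vnic_type="direct",
)

# Boot the VNF VM with the SR-IOV port attached (image and flavor are placeholders).
server = conn.compute.create_server(
    name="vnf-sriov-vm",
    image_id=conn.compute.find_image("vnf-image").id,
    flavor_id=conn.compute.find_flavor("m1.large").id,
    networks=[{"port": sriov_port.id}],
)
server = conn.compute.wait_for_server(server)
print(server.status)
```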
A. Kuryr
As stated before, Kuryr is the glue between VMs and containers. It is a recent OpenStack project that tries to leverage the abstraction and all the hard work previously done in Neutron, its plug-ins and services, and use them to provide production-grade networking for container use cases. It was designed with a two-fold objective: a) make use of Neutron functionality in container deployments; and b) be able to connect both VMs and containers in hybrid deployments. Thanks to Kuryr, all additional Neutron features such as security groups or floating IPs can be applied directly to any container port, with the Kuryr service operating as an intermediary between the Docker or Kubernetes network service and the Neutron server. One of the main advantages of Kuryr is that it provides a way to avoid the double encapsulation present in current nested deployments, for example when containers run inside VMs deployed on OpenStack: one encapsulation for the Neutron overlay network and another one on top of that for the container network (e.g., a flannel overlay). This creates an overhead that needs to be eliminated for the 5G deployment scenario which any real-world project should focus on supporting. There were some gaps regarding nested container support in Kuryr that we have addressed during the design and development phases of the software components for the proposed evaluation testbed. We have extended Kuryr to leverage the new TrunkPort functionality provided by Neutron (also known as VLAN-aware VMs), so that subports can be attached and later bound to the containers inside the VMs, with Kuryr interacting with the Neutron server. This enables better isolation between containers co-located in the same VM, even if they belong to the same subnet, as the network traffic will belong to different (local) VLANs. To render Kuryr capable of working in a nested environment, a few modifications and extensions were necessary. These modifications have also been contributed to the Kuryr upstream branches, both for Docker and Kubernetes/OpenShift support, in particular:
• (Docker) https://review.openstack.org/#/c/402462/
• (Kubernetes) https://review.openstack.org/#/c/410578/
The way containers are connected to the external Neutron subnets using the Trunk port functionality is the following: the VM where the containers are deployed is booted with a Trunk Port. Then, for each container created inside the VM, a new port is created (as in the bare metal case) and attached as a subport to the VM's trunk port, therefore providing a different encapsulation (VLAN) for the different containers running inside the VM. They also differ from the VM's own traffic, which leaves the VM untagged. Note that the subports do not have to be on the same subnet as the host VM, thus allowing containers both in the same and in different Neutron subnets to be created in the same VM. To include this support for nested containers, a few changes were made to the two main Kuryr components, the Kuryr-Controller and the Kuryr Container Networking Interface (Kuryr-CNI). The Kuryr-Controller is the service in charge of the interactions with the OpenShift/Kubernetes API server, as well as with Neutron. Meanwhile, Kuryr-CNI is the module in charge of the network binding for containers and pods at each worker node, therefore one Kuryr-CNI instance must be present in each one of them. As for the Kuryr-Controller, one of the main changes is related to how the ports that will later be utilized by the containers are created. Instead of simply issuing a request towards Neutron asking for a new port, two additional steps must be performed once the port is created:
1) Obtain the VLAN ID to be used for encapsulating the containers' traffic inside the VM.
Fig. 4: Kuryr Sequence Diagram: Nested case
2) Call Neutron to attach the created port to the VM's trunk port, using VLAN as the segmentation type together with the previously obtained VLAN ID. This way, the port is attached as a subport of the VM and can later be used by the container (a minimal sketch of these two steps is given right after this list).
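A hedged sketch of these two calls, expressed with the openstacksdk rather than Kuryr's internal code, is shown below. The network and trunk names are placeholders, and the VLAN ID is hard-coded here only for illustration; in reality Kuryr allocates it per VM.

```python
import openstack

conn = openstack.connect(cloud="testbed")  # placeholder cloud name

# Port that the container inside the VM will use (network name is a placeholder).
net = conn.network.find_network("pod-net")
port = conn.network.create_port(network_id=net.id, name="pod-port")

# Step 1: obtain a VLAN ID for the container's traffic inside the VM
# (fixed here for illustration; Kuryr keeps track of free VLAN IDs per VM).
vlan_id = 101

# Step 2: attach the port as a subport of the VM's trunk, using VLAN segmentation.
trunk = conn.network.find_trunk("vm-trunk")  # the trunk the VM was booted with
conn.network.add_trunk_subports(
    trunk,
    [{"port_id": port.id,
      "segmentation_type": "vlan",
      "segmentation_id": vlan_id}],
)
```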
Moreover, the modifications at the Kuryr-CNI target the new way of binding the containers to the network. Instead of being added to the OvS (br-int) bridge, containers are connected to the VM's virtual network interface (vNIC) on the specific VLAN provided by the Kuryr-Controller (subport). The interaction process between Kuryr and the other Kubernetes/OpenShift and Neutron components is depicted in the sequence diagram of Figure 4. Similarly to Kubelet [37], the Kuryr-Controller watches the OpenShift/Kubernetes API server. When a user request for creating a pod reaches the API server, a notification is sent to both Kubelet and the Kuryr-Controller. The Kuryr-Controller then interacts with Neutron to create a Neutron port that will later be used by the container. It requests the port creation from Neutron and, upon obtaining the VLAN ID to be used, also asks Neutron to attach the port to the VM's trunk port. Once this process is concluded, it notifies the API server with the information about the created port through a pod(vif) annotation. Then, the Kuryr-Controller waits for the Neutron server to notify it that the status of the port has changed to active, to finally re-annotate the pod(vif) with the active status information and notify the API server about it. In the meantime, when Kubelet receives the notification about the pod creation request, it calls the Kuryr-CNI to handle the local bindings between the container and the network. Kuryr-CNI waits for the notification containing the port information and then initiates the necessary actions to attach the container to the Neutron subnet. These actions consist of creating a virtual ethernet device and attaching one of its ends to the VM's vNIC on the specified VLAN ID (instead of attaching it to the OvS bridge as in the bare metal case), while leaving the other end for the pod. Once the notification about the port being active arrives, Kuryr-CNI finishes its task and the Kubelet component creates a container with the provided virtual ethernet device end and connects it to the Neutron network.
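To illustrate the watch-and-annotate pattern just described, the following sketch uses the official Kubernetes Python client to watch pod events and look for a VIF annotation, roughly mimicking what the Kuryr-Controller and Kuryr-CNI coordinate on. The annotation key used here ('openstack.org/kuryr-vif') is our assumption about Kuryr's naming and may differ between Kuryr versions; this is not Kuryr code.

```python
from kubernetes import client, config, watch

config.load_kube_config()  # or config.load_incluster_config() when running inside a pod
v1 = client.CoreV1Api()

w = watch.Watch()
# Watch pod events across all namespaces, as the Kuryr-Controller does.
for event in w.stream(v1.list_pod_for_all_namespaces, timeout_seconds=60):
    pod = event["object"]
    annotations = pod.metadata.annotations or {}
    # Assumed annotation key carrying the Neutron VIF details written by the controller.
    vif = annotations.get("openstack.org/kuryr-vif")
    if event["type"] == "MODIFIED" and vif is not None:
        print(f"pod {pod.metadata.name}: VIF annotation present, CNI can bind the port")
```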
B. ManageIQ
One of the most powerful capabilities of ManageIQ is self-service, which allows an administrator to maintain a catalog of requests that can be ordered by regular users. Such requests include, but are not limited to, actions needed for provisioning a single VM, a container or an entire application stack. The process starts with the system administrator creating a "service bundle", which is a collection of "service items". Each service item is a specific action that ManageIQ knows how to create/handle. The exact order in which items in a bundle are provisioned is specified by the administrator, in what is known as the state machine. Services typically require some amount of input. For example, if the request is to provision a VM, then a typical question would be the size of the memory and the disk. This information can be requested from the user through a dialog, which can be created using a built-in dialog editor. Once the service bundle and the dialog are created, they need to be associated with an "entry point" in the ManageIQ workflow engine (called "Automate"). The entry point defines the process to provision the bundle. With the bundle definition, dialog, and entry point, the request can be published in a service catalog, which then enables users to order the service. We have used this capability to add support for running Ansible playbooks on container deployments, in our case Kubernetes/OpenShift. By enabling the execution of playbooks located in git repositories, we provide an easy way to onboard new lifecycle management actions. It is straightforward to update, include or extend the different playbooks that take care of the various VM/container/application lifecycle actions, such as creation, termination, scaling or any other configuration actions; it is as simple as using the common git commit and git push commands. In our ManageIQ catalog item, we have defined a dialog that takes, on the one hand, the Ansible playbook to execute (usually a site.yml file) and, on the other hand, the provider where it should be executed, the only requirements being to have Ansible installed at the ManageIQ appliance and passwordless SSH towards the provider's master nodes. In Figure 5, we can see that there are different playbooks to create, delete and scale the application for a given GitHub repository. Note that there may be several of them, as many as desired, each initiating a whole series of orchestration actions that interact with possible additional third-party components when attached to the interfaces of our 5G-aware evaluation testbed as a whole.
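To give a flavor of the playbook-driven lifecycle actions described above, the sketch below triggers one such playbook from Python using the ansible-runner library, roughly what the ManageIQ catalog item does behind the scenes. The playbook name, inventory host, variables and paths are placeholders of ours, not the actual testbed configuration.

```python
import ansible_runner  # pip install ansible-runner

# Execute a lifecycle-management playbook (e.g. scaling an application) against a
# provider's master node, similarly to what the ManageIQ catalog item triggers.
result = ansible_runner.run(
    private_data_dir="/tmp/lcm-run",   # scratch directory for the run
    playbook="site.yml",               # playbook cloned from the git repository
    inventory="openshift-master.example.com ansible_user=cloud-user",  # inline INI inventory (placeholder host)
    extravars={"app_name": "demo-app", "replicas": 3},  # placeholder variables consumed by the playbook
)
print(result.status, result.rc)
```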
V. CONCLUSIONS
The plethora of available software solutions facilitating near-identical processes can sometimes be frustrating. Especially when deploying a performance-oriented testing environment for the experimental evaluation of complex networking behavior, any limitation introduced by the integrated components may compromise the overall architecture end-to-end. This paper presented a specific architectural blueprint of a virtualization-based, 5G-aware evaluation testbed, relying solely on open-source software components rather than proprietary ones. Taking into consideration the actual performance requirements of 5G, we have implemented additional functionality in the aforementioned components, rendering them capable of handling a super-set of features in the most efficient manner. In addition, we have committed these improvements to the open-source community, thus extending publicly available software repositories. The obvious benefit of our proposed solution is the fact that all individuals may obtain access to the source code of the described software packages at zero cost, deploy these packages in private or publicly available infrastructure without any hassle derived from possible licensing limitations, improve the code to suit their needs and evaluate a wide spectrum of services, architectural approaches and platforms recently proposed by the various standardization bodies to be integrated in the upcoming 5G networking architecture. Through this paper, we have presented a holistic 5G-aware testing environment which can be easily deployed by researchers and scientists to facilitate their solution evaluation in a near-standardized platform that follows the strict guidelines set by the leading authorities of the 5G Networking Era.
ACKNOWLEDGMENT
This work is partly supported by the European Commission under the auspices of the Superfluidity Project, Horizon 2020 Research and Innovation action (Grant Agreement No. 671566). The views and opinions expressed are those of the authors and do not necessarily reflect the official position of Citrix Systems Inc.
REFERENCES
[1] O. Vermesan and P. Friess, Building the Hyperconnected Society: Internet of Things Research and Innovation Value Chains, Ecosystems and Markets, ser. River Publishers Series in Communications. River Publishers, 2015. [Online]. Available: https://books.google.gr/books?id=itHVCQAAQBAJ
[2] N. Panwar, S. Sharma, and A. K. Singh, "A Survey on 5G: The Next Generation of Mobile Communication," CoRR, vol. abs/1511.01643, 2015. [Online]. Available: http://arxiv.org/abs/1511.01643
[3] G. Bianchi, E. Biton, N. Blefari-Melazzi, I. Borges, L. Chiaraviglio, P. Cruz Ramos, P. Eardley, F. Fontes, M. J. McGrath, L. Natarianni et al., "Superfluidity: a flexible functional architecture for 5G networks," Transactions on Emerging Telecommunications Technologies, vol. 27, no. 9, pp. 1178–1186, 2016.
[4] C. Tselios and G. Tsolis, "On QoE-awareness through Virtualized Probes in 5G Networks," in Computer Aided Modeling and Design of Communication Links and Networks (CAMAD), 2016 IEEE 21st International Workshop on, 2016, pp. 1–5.
[5] C. Tselios, I. Politis, K. Birkos, T. Dagiuklas, and S. Kotsopoulos, "Cloud for multimedia applications and services over heterogeneous networks ensuring QoE," in Computer Aided Modeling and Design of Communication Links and Networks (CAMAD), 2013 IEEE 18th International Workshop on, Sept 2013, pp. 94–98.
[6] I. Politis, C. Tselios, A. Lykourgiotis, and S. Kotsopoulos, "On optimizing scalable video delivery over media aware mobile clouds," in 2017 IEEE International Conference on Communications (ICC), May 2017, pp. 1–6.
[7] R. Jain and S. Paul, "Network virtualization and software defined networking for cloud computing: a survey," IEEE Communications Magazine, vol. 51, no. 11, pp. 24–31, November 2013.
[8] D. Kreutz, F. Ramos, P. Esteves Verissimo, C. Esteve Rothenberg, S. Azodolmolky, and S. Uhlig, "Software-defined networking: A comprehensive survey," Proceedings of the IEEE, vol. 103, no. 1, pp. 14–76, Jan 2015.
[9] P. Mach and Z. Becvar, "Mobile edge computing: A survey on architecture and computation offloading," IEEE Communications Surveys Tutorials, vol. PP, no. 99, pp. 1–1, 2017.
[10] T. Taleb, K. Samdanis, B. Mada, H. Flinck, S. Dutta, and D. Sabella, "On multi-access edge computing: A survey of the emerging 5G network edge cloud architecture and orchestration," IEEE Communications Surveys Tutorials, vol. 19, no. 3, pp. 1657–1681, thirdquarter 2017.
[11] C. Tselios, I. Politis, V. Tselios, S. Kotsopoulos, and T. Dagiuklas, "Cloud Computing: A Great Revenue Opportunity for Telecommunication Industry," in FITCE Congress (FITCE), 51st, 6, Poznan, Poland, 2012.
[12] ETSI ISG, "Open Source MANO (OSM)," [Online]. Available: http://www.osm.etsi.org/.
[13] OpenBaton, "OpenBaton OpenStack Driver," [Online]. Available: http://openbaton.github.io.
[14] "ManageIQ - Discover, Optimize and Manage your Hybrid IT," [Online]. Available: http://manageiq.org/.
[15] "OpenStack," [Online]. Available: https://www.openstack.org/.
[16] "Kernel Virtual Machine," [Online]. Available: http://www.linux-kvm.org.
[17] "Open vSwitch," [Online]. Available: http://openvswitch.org/.
[18] "OASIS," [Online]. Available: https://www.oasis-open.org/.
[19] The Linux Foundation, "Open Platform for NFV (OPNFV)," [Online]. Available: https://www.opnfv.org.
Fig. 5: ManageIQ: Ansible Playbooks execution
[20] The Linux Foundation - OPNFV, "Pharos Project," [Online]. Available: https://www.opnfv.org/community/projects/pharos.
[21] ETSI GS NFV 002, "Network Functions Virtualisation (NFV): Architectural Framework," V.1.2.1, December 2014.
[22] ETSI GS MEC 001, "Mobile Edge Computing (MEC): Terminology," V.1.1.1, March 2016.
[23] ETSI GS MEC 003, "Mobile Edge Computing (MEC): Framework and Reference Architecture," V.1.1.1, March 2016.
[24] ETSI GS MEC 004, "Mobile Edge Computing (MEC): Service Scenarios," V.1.1.1, March 2016.
[25] ETSI GS MEC 005, "Mobile Edge Computing (MEC): Proof of Concept Framework," V.1.1.1, March 2016.
[26] C. Tselios and G. Tsolis, "A survey on software tools and architectures for deploying multimedia-aware cloud applications," in Lecture Notes in Computer Science: Algorithmic Aspects of Cloud Computing. Springer International Publishing, 2016, vol. 9511, pp. 168–180.
[27] The Linux Foundation, "Kubernetes: Production-Grade Container Orchestration," [Online]. Available: https://kubernetes.io/.
[28] Red Hat Inc., "OpenShift: Container Application Platform," [Online]. Available: https://www.openshift.com/.
[29] OpenStack, "Magnum: API for Container Orchestration Engines," [Online]. Available: https://wiki.openstack.org/wiki/Magnum.
[30] ——, "Heat: OpenStack Orchestration," [Online]. Available: https://wiki.openstack.org/wiki/Heat.
[31] ——, "Mistral: Workflow as a Service," [Online]. Available: https://wiki.openstack.org/wiki/Mistral.
[32] ——, "Tacker: OpenStack NFV Orchestration," [Online]. Available: https://wiki.openstack.org/wiki/Tacker.
[33] ——, "TOSCA-Parser," [Online]. Available: https://wiki.openstack.org/wiki/TOSCA-Parser.
[34] Red Hat Inc., "Ansible: Simple IT Automation," [Online]. Available: https://www.ansible.com/.
[35] I. Prevezanos, C. Tselios, A. Angelou, M. McGrath, R. Mekuria, V. Tsogkas, and G. Tsolis, "Evaluating hammer network traffic simulator: System benchmarking and testbed integration," in GLOBECOM 2017 - 2017 IEEE Global Communications Conference, Dec 2017, pp. 1–6.
[36] I. Prevezanos, A. Angelou, C. Tselios, A. Stergiakis, V. Tsogkas, and G. Tsolis, "Hammer: A real-world end-to-end network traffic simulator," in 2017 IEEE 22nd International Workshop on Computer Aided Modeling and Design of Communication Links and Networks (CAMAD), June 2017, pp. 1–6.
[37] The Linux Foundation, "kubelet: Reference Documentation," [Online]. Available: https://kubernetes.io/docs/admin/kubelet/.