FUTURE COMMUNICATION ARCHITECTURE FOR MOBILE CLOUD SERVICES
Acronym: MobileCloud Networking
Project No: 318109
Integrated Project
FP7-ICT-2011-8
Duration: 2012/11/01 - 2015/09/30

D3.2 Infrastructure Management Foundations – Components First Release

Type: Prototype
Deliverable No: 3.2
Workpackage: WP3
Leading partner: INTEL
Author(s): Thijs Metsch (Editor), list of authors overleaf
Dissemination level: RE
Status: Draft
Date: 29 April 2014
Version: 1.0

Copyright © MobileCloud Networking Consortium 2012 - 2015

List of Authors

Monica Branco (INOV)
Giuseppe Carella (FhG)
Luis Cordeiro (ONE)
Marius Corici (FhG)
Desislava Dimitrova (UBERN)
Andy Edmonds (ZHAW)
Alex Georgiev (CS)
Andre Gomes (UBERN/ONE)
Peter Gray (CS)
Luigi Grossi (TI)
Atoosa Hatefi (Orange)
Sina Khatibi (INOV)
Jakub Kocur (TUB)
Giada Landi (NXW)
Cláudio Marques (ONE)
Thijs Metsch (INTEL)
Julius Mueller (TUB)
David Palma (ONE)
Dominique Pichon (Orange)
Bruno Sendas (PTIN)
João Soares (PTIN)
Bruno Sousa (ONE)
Lucio Studer Ferreira (INOV)

List of reviewers:

Santiago Ruiz (STT)
Florian Antonescu (SAP)
Paolo M. Comi (ITALTEL)


Versioning and contribution history

Version | Description | Contributors
0.1 | Initial draft | INTEL
0.2 | Collection of material | TI, PTIN, NEC, INTEL, CS, NXW, ONE, UTWENTE, TUB, INOV, UBERN, ZHAW, Fraunhofer, Orange
0.3 | First complete draft | INTEL
0.4 | Version for peer review | INTEL
0.5 | Changes based on peer review | TI, PTIN, NEC, INTEL, CS, NXW, ONE, UTWENTE, TUB, INOV, UBERN, ZHAW, Fraunhofer, Orange
0.6 | Version for GA review | INTEL
0.7 | Final editing after peer review | INTEL
1.0 | Final version ready for submission | INTEL


Executive Summary

This deliverable presents the first prototype implementations of the infrastructure foundations. These foundations were defined as architectural artefacts in the previous deliverable (D3.1 2013), so this deliverable can be seen as a continuation of the work presented there. The infrastructural foundation of the MobileCloud Networking project can be roughly split into five parts, which correspond to the related Tasks in the work package.

"ActiveServer=" should be changed from localhost to the desired server address(es) for active pushing. This can be a comma-separated list of addresses; only these servers are allowed to do active metering with the configured agent. "Hostname=" should be commented out so that the agent registers itself using the native hostname of the VM. A custom name could be defined here as well, but in most cases the automatic hostname extraction should take care of this. Note that the agent has to be restarted for changes to take effect. Custom user parameters can be configured in a plain-text file in "/etc/zabbix/zabbix_agent.d/". They become usable after the next agent restart and serve as keys for items defined on the server side.

2.6.3.2.3 Data extraction

Frontend
Data extraction through the PHP frontend works rather intuitively. The simplest way is to use the "Graphs" or "Latest Data" sections. However, it is also possible to define custom, more complex graphs, maps and screens. This way of extraction follows a WYSIWYG approach, so it is not really suitable for automating configurations, but it is convenient for getting a simplified overview.

JSON-API
The Zabbix API is a web-based API shipped as part of the web frontend. The frontend supports the use of remote HTTP requests to call the API. It is based on JSON-RPC, so all commands sent have to be JSON encoded. This way of data extraction and modification is suited for use by third-party software.

Notifications
There are many possible notification types (e-mail, SMS, instant messenger or custom alert scripts) that can be triggered by defined events and conditions. These range from mails/SMS to locally/remotely executed scripts and SSH execution.
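To illustrate the JSON-RPC style of access described above, the following minimal Python sketch logs in to a Zabbix frontend and fetches the latest item values of one monitored host. The URL, credentials and host name are placeholders, and parameter names can vary slightly between Zabbix versions, so this is a sketch rather than a definitive client.

import json
import urllib.request

ZABBIX_API = "http://zabbix.example.org/zabbix/api_jsonrpc.php"  # placeholder URL

def call(method, params, auth=None, req_id=1):
    """Send a single JSON-RPC 2.0 request to the Zabbix frontend."""
    payload = {"jsonrpc": "2.0", "method": method, "params": params, "id": req_id}
    if auth:
        payload["auth"] = auth
    req = urllib.request.Request(
        ZABBIX_API,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json-rpc"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))["result"]

# Log in (credentials are placeholders) and fetch items of one monitored host.
token = call("user.login", {"user": "Admin", "password": "zabbix"})
items = call("item.get", {"host": "maas-vm-1", "output": ["key_", "lastvalue"]}, auth=token)
for item in items:
    print(item["key_"], item["lastvalue"])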

2.6.4 Documentation of the code

In order to correctly support the code developed for the Common Monitoring Management System, a Git repository was created to maintain sources and documentation: https://git.mobile-cloud-networking.eu/monitoring/MaaS


The documentation was structured in such a way that all developers can easily generate the corresponding documentation of their code. For this, pydoc and Sphinx were used for the code developed in Python, while Javadoc was considered for the Java programming language. Table 14 shows where the source code can be found and how documentation can be accessed:

Table 14 MaaS documentation
SubComponent | Reference | Documentation
MaaS | https://git.mobile-cloud-networking.eu/MaaS | See README.md file.

2.6.5 Third parties and open source software

Table 15 shows the used third-party software packages:

Table 15 MaaS dependencies
Name | Description | Reference | License
Urllib2 | URL library | http://urllib3.readthedocs.org/ | MIT
ConfigParser | Configuration Parser | http://docs.python.org/py3k/library/configparser.html | MIT
SocketIO | Socket package | https://github.com/invisibleroads/socketIO-client | MIT
Threading2 | Threading library | http://github.com/rfk/threading2 | MIT
Pika | RabbitMQ library | https://pika.readthedocs.org | MPL v1.1 and GPL v2.0 or newer
Development

The Common Monitoring Management System (CMMS) is based on Zabbix, an open-source monitoring software released under the GPL license. For the ZCP the software libraries listed in Table 15 are required.

2.6.6 Installation, Configuration, Instantiation

Please find all installation and configuration information for the source code under the following Git repository:


https://git.mobile-cloud-networking.eu/monitoring/MaaS

2.6.7 Roadmap

All tasks covered in Task 3.3 have been transformed into JIRA issues in order to follow the SCRUM methodology as closely as possible (TMAAS 2014). A project entitled TMAAS has been instantiated, which is visible to the whole MCN consortium. All issues are managed in monthly sprints, and backlogs store future work items for upcoming sprints. The most relevant upcoming topics are summarized as follows:

• Integration of further MCN service monitoring adapters into MaaS, e.g. for EPCaaS, IMSaaS and RANaaS.

• Integration of MaaS with other MCN services consuming monitoring data, such as AaaS, RCBaaS and SLAaaS.

• Completing the MCN life cycle phases for MaaS.

• Validation and tests of the monitoring system.

2.6.8 Research works and algorithms

This section presents additional research work which has not been presented in previous sections. The integration of Ceilometer into Zabbix has been discussed in the MCN consortium and two main approaches have been identified. The final approach has been presented under section 4.3, but a second promising approach has been designed, which is outlined in the following.

2.6.8.1 Ceilometer into Zabbix Integration Approach

Monitoring and metering are essential parts of providing scalable and reliable services within Mobile Cloud environments. While many third-party monitoring solutions already exist, OpenStack brings its own system, Ceilometer. Even though it is highly integrated with its own environment, it still lacks essential features (e.g. tested and powerful alarms/actions) and differs in some ways from other mainstream monitoring solutions (in this example: Zabbix). However, some OpenStack-based cloud platform solutions might want to continue with Ceilometer as an OpenStack built-in monitoring solution. The challenge is to integrate Ceilometer properly into Zabbix, to get the best of both worlds. This document points out one possible way to achieve a seamless integration of Ceilometer into Zabbix. In particular, Ceilometer's REST API is used to access Ceilometer's data, and Zabbix-agent user parameters are used to make them natively usable by the Zabbix server.

2.6.8.1.1 Main Differences in the Ceilometer/Zabbix Setup Regarding Data Gathering

In Ceilometer, the nova-node component gets monitored by an agent, which meters the instances via information provided by the used hypervisor. Data from multiple sources (agents, API pushes, event bus) are gathered by the collector and then added to the database. This database can only be accessed through the API provided by the ceilometer-API server. In Zabbix, instances and physical nodes get monitored by agents running directly on the node. Server and agent communicate to exchange the meters to be monitored and the metering data created. Data from single sources, the agents, gets directly pushed or pulled into the Zabbix server and its database. Other sources (e.g. SNMP support) are also possible but independent of this approach.

Data can be queried through the API and is accessible over the web frontend.

2.6.8.1.2 Integration Approach

The goal is to integrate data collected by Ceilometer directly into Zabbix; in other words, to provide compatibility for already implemented custom meters or OpenStack-related data, which is more easily accessible through Ceilometer than Zabbix. This task should be automated so that it is transparent to the metering consumer and well integrated. The idea is to pipe Ceilometer metrics directly into the Zabbix monitoring process. This is achieved by defining custom user parameters for the related Zabbix agent, which are in fact request calls to the Ceilometer API. The received metrics will then be pushed/pulled to the Zabbix server just like native parameters. These user parameters can be created after the deployment, "on the fly", although this requires a restart of the agent.

2.6.8.1.3 Benefits of this Solution

This approach is rather straightforward. It only uses the Zabbix and Ceilometer APIs and a script, which will add the required user parameters on deployment and register them in Zabbix. Direct measurements on top of Ceilometer (without passing through Zabbix) for very fast notifications of the subscriber are still possible.

2.6.8.1.4 Problems Identified

The most obvious problem with this approach is the overhead it creates. Using two monitoring systems and two databases will overlap in terms of features to some degree. Consistent authentication is also problematic. Both Zabbix and Ceilometer use token-based authentication. While Ceilometer uses the existing OpenStack Keystone authentication system, Zabbix brings its own. Access to both systems is needed, so two authentications have to be performed, where the credentials may differ. A consistent nomenclature is a significant feature of well-integrated systems. Using the same IDs and names for nodes, tenants and users in both systems is highly recommended. This is not the default behaviour though and may be hard to achieve consistently.


2.6.8.1.5 Flow Diagrams for the Integration Approach

Figure 24 Flow diagram

Figure 24 shows the general flow:

1. Ceilometer's collector farms the metrics (via the compute agent, the event bus, etc.) and publishes them into the database.

2. The user parameter in the Zabbix-agent configuration is in fact a call to the Ceilometer REST API, requesting the wanted metric. A Keystone token has to be acquired for authentication purposes.

3. The requested metric gets published via the Zabbix agent to the Zabbix server and will be processed/evaluated there.
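The following Python sketch illustrates step 2 of this flow: a small helper that could be wired into a Zabbix user parameter, obtains a Keystone token and reads the latest sample of one Ceilometer meter for a given resource. The endpoint URLs, tenant name and credentials are placeholders, and the request bodies follow the Keystone v2.0 and Ceilometer v2 APIs of that period, so field names may differ on other releases.

import json
import sys
import urllib.request

KEYSTONE = "http://controller:5000/v2.0/tokens"   # placeholder endpoints
CEILOMETER = "http://controller:8777/v2"

def get_token():
    """Authenticate against Keystone v2.0 and return a token id."""
    body = {"auth": {"tenantName": "admin",
                     "passwordCredentials": {"username": "admin", "password": "secret"}}}
    req = urllib.request.Request(KEYSTONE, data=json.dumps(body).encode("utf-8"),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))["access"]["token"]["id"]

def latest_sample(meter, resource_id):
    """Return the most recent sample value of a meter for one resource."""
    query = "?q.field=resource_id&q.op=eq&q.value=" + resource_id
    req = urllib.request.Request(CEILOMETER + "/meters/" + meter + query,
                                 headers={"X-Auth-Token": get_token()})
    with urllib.request.urlopen(req) as resp:
        samples = json.loads(resp.read().decode("utf-8"))
    return samples[-1]["counter_volume"] if samples else "unsupported"

if __name__ == "__main__":
    # Could be registered as, e.g., UserParameter=ceilometer[*],/usr/local/bin/ceilometer_key.py $1 $2
    print(latest_sample(sys.argv[1], sys.argv[2]))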

2.6.9 Conclusions and Future work

The Monitoring-as-a-Service (MaaS) support service provides distributed infrastructure monitoring data collection and exposure towards other MCN services. As part of the first release, a basic prototype has been realized which follows the MCN service life cycle deployment and provisioning model. Zabbix (Zabbix 2013) has been selected as a well-established open source monitoring tool, as an outcome of D3.1 in M12. A generic Monitoring Adapter has been specified and examples have been provided for other MCN service owners to implement specific monitoring adapters. Two approaches for the integration of Ceilometer into Zabbix have been specified, of which one has been implemented as part of the first prototype. Future work will elaborate further on the first release of MaaS and improve the life cycle stages for MaaS. Analytics-as-a-Service (AaaS) is expected to be tightly integrated into MaaS.

2.7 Analytics-as-a-Service

The Analytics service is not part of the M18 prototype. Future work will include the detailed architecture, prototype implementation and first algorithms to analyse data. This data and these traces should be collected from the services which are being cloud-enabled on the MCN architecture in the deliverables of M18. Hence an earlier starting point for the development of the analytics service was not planned. Also note that the Analytics service was not envisioned during the writing of the (DoW 2012).

2.8 Cloud Controller

The following sub-sections describe the sub-components which make up the Cloud Controller (CC). Most of the work consists of new developments contributed out of Task 3.4.

2.8.1 Definition and Scope

The CC provides the signalling and management interfaces to enable the control planes. These will be used by the instances of SM and SO. The Cloud Controller will support the SO's end-to-end provisioning and life cycle management needs of services in Mobile Cloud Networking. It will provide both atomic and support services required for realising those SO needs. The main MCN architectural entity that interacts most with the CC is the SO, which is responsible for service instance creation (including orchestration). The CC is a logical entity, consisting of multiple sub-components that abstract underlying technology choices. It is copyright © 2013-2015 by Intel Performance Learning Solutions Ltd, Intel Corporation and licensed under an Apache 2.0 license.

2.8.2 High-level design

The diagram in Figure 25 shows a high-level overview of the components provided in M18. The SOs developed for the services are deployed by using the Northbound RESTful API (which is based on the OCCI standards described in (Nyren et al. 2011)). Once the SO instances are running they make use of the Service Development Kit (SDK) to interact with the modules of the CC. The information model to call the SDK and the sub-modules of the CC is defined by a service template. This template consists of the Service Template Graph (STG) and the Infrastructure Template Graph (ITG) as defined in (D3.1 2013).

Figure 25 High level overview of CC

Three main development activities have been carried out over the last sprints:

• As of M18 the Northbound API is based on the OCCI standard and is implemented as a basic service interface.

• An SDK which supports basic functionalities for deploying and provisioning services was realized.

• For easy development the CC can be automatically set up and provisioned in a set of Virtual Machines using the Vagrant tool (Vagrant 2014).

Those three main sub-components are highlighted in the next sections. The focus for this deliverable was to get a basic CC up and running which supports the deployment and provisioning of basic services.

2.8.3 Low-level design

For each part of the CC some more details are provided in the next sections. The SDK and API are implemented using the Python programming language. Hence the class diagrams will only show the classes and not necessarily all code functions which are not object-oriented. The service templates can be presented either in AWS CloudFormation (AWS 2013) or Heat Orchestration Template (HOT) (Alex Henneveld 2013) format. For interactions between the components please refer to the sequence diagrams described in (D3.1 2013, p. 102). All code has been tested and the lowest test coverage is currently at 99%. As coding standards, the same standards as those for OpenStack apply. Test coverage is mandatory and the style is defined and enforced by tools as defined in (van Rossum et al. 2001).

2.8.3.1 Development environment

The development environment consists of three Virtual Machines (VMs). One VM runs OpenShift Origin for hosting the SO instances. The other two VMs contain a minimalistic OpenStack installation with support for IaaS, SDN and basic storage enabled. All three VMs are defined in a Vagrantfile and are provisioned using Puppet configuration files on the fly. The advantage of taking this approach is that not only can a certain software deployment configuration be reproduced on the developer's machine, the very same deployment configuration can also be placed on the actual testbeds.

2.8.3.2 Service Development Kit

The following UML diagram in Figure 26 shows the basic structure of the Service Development Kit. The Deployer is used to deploy services using OpenStack Heat. To abstract from the underlying technologies an adapter pattern has been chosen. The same principle applies to the authentication service, where OpenStack Keystone is used and abstracted from. The services implementation in the SDK enables access to all services available in the Cloud Controller. It is currently realized through OpenStack Keystone and represents the Design Module of the CC.


Figure 26 Service Development Kit UML class diagram
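The adapter idea can be illustrated with the hedged Python sketch below. Class and method names are illustrative and do not necessarily match the real SDK; the Heat call is expressed against the OpenStack orchestration REST API, and the endpoint and token are placeholders.

import abc
import json
import urllib.request

class Deployer(abc.ABC):
    """Abstract deployer; concrete adapters hide the orchestration technology."""

    @abc.abstractmethod
    def deploy(self, name, template):
        """Deploy a service template and return an identifier for the stack."""

class HeatDeployer(Deployer):
    """Adapter mapping the generic deploy call onto the OpenStack Heat REST API."""

    def __init__(self, endpoint, token):
        self.endpoint = endpoint   # e.g. http://heat:8004/v1/<tenant-id> (placeholder)
        self.token = token

    def deploy(self, name, template):
        body = {"stack_name": name, "template": template, "parameters": {}}
        req = urllib.request.Request(
            self.endpoint + "/stacks",
            data=json.dumps(body).encode("utf-8"),
            headers={"X-Auth-Token": self.token, "Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read().decode("utf-8"))["stack"]["id"]

def get_deployer(kind, **kwargs):
    """Factory used by an SO; further adapters (e.g. CloudFormation) could be added here."""
    if kind == "heat":
        return HeatDeployer(**kwargs)
    raise ValueError("unknown deployer: " + kind)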

2.8.3.2.1 Sample SO

To verify the overall architecture and the integration of the SDK with the development environment shown above, a sample SO was developed. For more details on SOs see section 3.1. It takes a simple service described in the AWS CloudFormation template language and deploys it using the SDK. The SO itself is deployed through the Northbound API. The sample SO exposes a simple OCCI-like interface itself, which can be accessed by the SM. It will be used to trigger the deployment, provisioning and disposal operations. The UML class diagram in Figure 27 shows the sample SO.

Figure 27 Sample SO UML class diagram (hidden details)

The interface of the sample SO is implemented in the Application class. See the screenshot in Figure 28, which shows the Northbound API of the CC in the larger Firefox window, the SO instance interface in the smaller Firefox window (which shows the stack id) and the listing of the stack on top of OpenStack in the lower SSH window (note the stack id from the SO).


Figure 28 Screenshot of integrated SO/SDK & CC

All this integration work has been done on top of the development environment described in section 2.8.6.1. Deployment was done using an emulated SM which would use the Northbound API to instantiate the Sample SO. Once done only authentication tokens need to be given to the SO through its own interface. Then deployment & provisioning as well as disposal and retrieval of the state can be triggered as shown in the screenshot above.
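A rough sketch of what such an emulated SM interaction could look like is shown below. All URLs, resource paths, the OCCI category scheme and the header conventions are placeholders invented for illustration; the concrete rendering is defined by the Northbound API and SO implementations.

import urllib.request

CC_NORTHBOUND = "http://cc.example.org:8888"   # placeholder Northbound API endpoint

def http(method, url, body=None, headers=None):
    """Tiny helper around urllib for the calls below."""
    req = urllib.request.Request(url, data=body.encode("utf-8") if body else None,
                                 headers=headers or {}, method=method)
    with urllib.request.urlopen(req) as resp:
        return resp.status, dict(resp.getheaders()), resp.read().decode("utf-8")

# 1. The emulated SM asks the Northbound API to instantiate the sample SO
#    (hypothetical OCCI category; the real scheme and term are defined by the CC).
status, headers, _ = http("POST", CC_NORTHBOUND + "/app/", headers={
    "Category": 'app; scheme="http://example.org/occi/app#"; class="kind"',
})
so_location = headers.get("Location", "")

# 2. Hand the Keystone token to the SO through its own interface (path is illustrative).
http("PUT", so_location + "/credentials", body="token=<keystone-token>",
     headers={"Content-Type": "text/plain"})

# 3. Trigger deployment/provisioning; disposal and state retrieval work the same way.
http("POST", so_location + "?action=deploy")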

2.8.3.3 Northbound API

The Northbound API is implemented using the pyssf OCCI implementation (see section 2.8.5 for references). Hence the central piece is the implementation of a registry which is passed to pyssf's OCCI implementation. The following UML diagram in Figure 29 shows a high-level class diagram.


Figure 29 CC Northbound Interface UML class diagram

Backends for dealing with application types (AppBackend) and their interconnects (ServiceLink) are set up by the registry and connected to the OCCI core model in the registry. Two classes of the OCCI core model are the Application and Resource templates.

2.8.4 Documentation of the code

Table 16 shows where the source code can be found and how documentation can be accessed:

Table 16 CC documentation
SubComponent | Reference | Documentation
Development environment | https://git.mobile-cloud-networking.eu/cloudcontroller/mcn_cc | Documentation can be found in the README.md file.
Service Development Kit | https://git.mobile-cloud-networking.eu/cloudcontroller/mcn_cc_sdk | Documentation can be found in the README.md file. Documentation of the samples (sample SO) can be found in the README.md in the respective sub-directories. API documentation can be created by running "make html" in the doc sub-directory.
Northbound API | https://git.mobile-cloud-networking.eu/cloudcontroller/mcn_cc_api | Detailed documentation can be created by running "make html" in the doc sub-directory.

2.8.5 Third parties and open source software

Table 17 shows the used third-party software packages:


Table 17 CC dependencies
Name | Description | Reference | License
Runtime:
OpenStack | IaaS solution | http://www.openstack.org | Apache 2.0
OpenShift Origin | PaaS solution | http://openshift.github.io/ | Apache 2.0
pyssf | Provides an OCCI-compatible RESTful interface | http://github.com/tmetsch/pyssf | LGPL
Development:
Vagrant | Vagrant provides easy to configure, reproducible, and portable development environments | http://vagrantup.com/ | MIT
mox | Mox is a mock object framework for Python testing | https://code.google.com/p/pymox/ | Apache 2.0

2.8.6 Installation, Configuration, Instantiation

The following sub-sections deal with the installation and configuration procedures for each sub-component.

2.8.6.1 Development environment

Clone the SCM repository and run

$ git submodule init
$ git submodule update
$ vagrant up

to bring up the development environment Virtual Machines. More detailed installation instructions can be found in the SCM repository of the Cloud Controller – see the previous section.

2.8.6.2 Service Development Kit

Not applicable, as SO instances will make use of this sub-component directly. It is installed once an SO instance is created.

2.8.6.3 Northbound API

Configuration can be done in the file etc/defaults.cfg. Once done, the service can be brought up by running runme.py.


2.8.7 Roadmap

The following sprints up to the next deliverable (in M27), as defined in (TCLOUD 2014), will focus on:

• Supporting runtime operations of large-scale services.

• In-depth support and management of interconnected resources which are exposed through Software Defined Networking principles.

• Integration with other services such as the Monitoring and SLA services.

• Advancing the Service Development Kit with more features to make the orchestration of large-scale services as simple as possible.

2.8.8 Research works and algorithms

2.8.8.1 Description of network Quality of Service in service templates

An extract of a template that specifies network QoS parameters following the proposed model is shown in the following listing, where the new structures are the OS::Neutron::qos, OS::Neutron::qos_param and OS::Neutron::classifier resources.

heat_template_version: 2014-03-25
resources:
  test_network_1:
    type: OS::Neutron::Net
    properties:
      name: test_network_1
      qos_id: { get_resource: qos_1 }
  qos_1:
    type: OS::Neutron::qos
    properties:
      qos_parameter: { get_resource: qos_p1 }
      qos_parameter: { get_resource: qos_p3 }
  qos_2:
    type: OS::Neutron::qos
    properties:
      qos_parameter: { get_resource: qos_p2 }
  qos_p1:
    type: OS::Neutron::qos_param
    properties:
      type: rate_limit
      policy: 1024 kbps
      classifier: { get_resource: classifier_c2 }
  qos_p2:
    type: OS::Neutron::qos_param
    properties:
      type: delay
      policy: 2 ms
      classifier: { get_resource: classifier_c1 }
  qos_p3:
    type: OS::Neutron::qos_param
    properties:
      type: delay
      policy: 4 ms
  classifier_c1:
    type: OS::Neutron::classifier
    properties:
      type: destinationIf
      policy: { get_resource: host2_port }
  classifier_c2:
    type: OS::Neutron::classifier
    properties:
      type: L3_protocol
      policy: udp
  test_subnet_1:
    type: OS::Neutron::Subnet
    properties:
      network_id: { get_resource: test_network_1 }
      name: test_subnet_1
      cidr: 5.0.0.0/24
      gateway_ip: 5.0.0.1
      allocation_pools:
        - start: 5.0.0.100
          end: 5.0.0.200
  host1:
    type: OS::Nova::Server
    properties:
      name: host1
      image: cirros-0.3.1-x86_64-uec
      flavor: m1.nano
      networks:
        - port: { get_resource: host1_port }
  host2:
    type: OS::Nova::Server
    properties:
      name: host2
      image: cirros-0.3.1-x86_64-uec
      flavor: m1.nano
      networks:
        - port: { get_resource: host2_port }
  host3:
    type: OS::Nova::Server
    properties:
      name: host3
      image: cirros-0.3.1-x86_64-uec
      flavor: m1.nano
      networks:
        - port: { get_resource: host3_port }
  host1_port:
    type: OS::Neutron::Port
    properties:
      network_id: { get_resource: test_network_1 }
      fixed_ips:
        - subnet_id: { get_resource: test_subnet_1 }
      qos_id: { get_resource: qos_2 }
  host2_port:
    type: OS::Neutron::Port
    properties:
      network_id: { get_resource: test_network_1 }
      fixed_ips:
        - subnet_id: { get_resource: test_subnet_1 }
  host3_port:
    type: OS::Neutron::Port
    properties:
      network_id: { get_resource: test_network_1 }
      fixed_ips:
        - subnet_id: { get_resource: test_subnet_1 }

In this example we have three hosts (host1, host2, host3) that are connected to a single network, each of them with a network interface (host1_port, host2_port, host3_port). At the network level, we have defined a QoS resource qos_1, so that the maximum acceptable delay between any ports attached to the network is 4 ms (see the qos_p3 resource) and the UDP traffic is limited to 1024 kbps on every port (see the qos_p1 resource and the associated classifier classifier_c2). On the other hand, since we need a more restrictive value for the maximum delay between host1 and host2, we have defined an additional QoS resource qos_2, which refers to the QoS parameter resource qos_p2. Here the classifier classifier_c1 specifies that the QoS requirement needs to be enforced only for the subset of traffic that is destined to the network interface of host2. Finally, the QoS resource qos_2 is attached to host1_port. The resulting topology is shown in Figure 30.


Figure 30 Example of network topology with QoS parameters

The sample values included in the classifier resources of the previous example are related to a specific port resource or to the value of a field in the IP packet header (in this case the protocol field). This can be generalized to any value (or combination) of the fields in L2, L3 and L4 headers.

2.8.9 Conclusions and Future work

The work described above represents the initial steps of enabling basic SOs to deploy and provision services. With the help of tests and sample implementations of the Service Orchestrator the basic functionalities could be verified. Also, considerable effort has been put into realizing the integration of all the separate components into the overall logical entity of the CC. The following sprints will focus on enhancing the deployment and provisioning phases based on inputs from the services. In addition, future work will be done on runtime management and the management of networks through SDN.

2.9 Service Graph Editor

The Service Graph Editor – or StgEditor – is being developed out of Task 3.4 to help Service Owners/Developers. It helps them to easily generate templates which are compatible with the information model the CC uses, as defined in (D3.1 2013).

2.9.1 Definition and Scope

The StgEditor is a desktop tool that enables the graphical editing of MCN Service Template Graphs and aims at automating the generation of the corresponding Infrastructure Template Graphs. From an architectural standpoint the StgEditor is part of the SM and is used by the MCN-SP to manipulate and customize STGs. Nevertheless it is service independent and for this reason considered a common component. The StgEditor provides a GUI for the manipulation of a graph that represents an STG, where each node and edge can be clicked to inspect and possibly modify its parameters. It eventually builds an ITG template file to be handled by the SO for the deployment, provisioning and run-time phases. Though it operates at the SM level it is actually a plain infrastructure-related tool.


2.9.2 High-level design

The StgEditor provides a GUI tool for the composition and customization of STGs and the corresponding ITGs. The adopted graphical model for the STG is a conventional graph where the nodes represent Service Instance Components (SICs) and the edges represent interfaces. The StgEditor shows a palette of template nodes and template edges that are selected by the user and then dragged onto a project whiteboard to become node and edge instances, as shown in Figure 31.

Figure 31 StgEditor screenshot

The palette appears on the left-hand side of the window and is made of a tab for the collection of the available SIC templates and a tab for the Interface templates. The user selects a template by clicking on it and can then drag an instance onto the work area on the right. The edge endpoints can be stretched to connect one node to the other. Each node and edge is marked with a name that mnemonically shows the role of the SIC or interface. The nodes are also distinguished by the icon that is used to graphically represent them. In the following, the term node will be used interchangeably with the SIC it represents, and the term edge with the related interface. Each SIC and Interface in the STG project can be double-clicked to open a configuration pop-up. The configuration pop-up is made of a set of tabs, where the first one holds some general common information, including the name of the instance. The second tab holds a number of parameters that can be edited to customize the instance. The third one holds the definition of the exposed Interface End Points. Once the editing of the STG is completed, the user can save the STG project in a file for further editing at a later time. If the editing is completed, the user can select the Generate ITG template command from a menu and save the result in a HOT template file. The means for uploading the ITG template file onto the SO has not yet been defined.


2.9.3 Low-level design

The StgEditor is being implemented as a desktop application written in Java with a few open source libraries. The architecture is that of a typical Java Swing application, i.e. event driven, where events are generated by a variety of sources, either related to graphical operations (e.g. drag and drop) or graph-related operations (e.g. Graph Node Created). When the StgEditor is launched the palette definition files for SICs and Interfaces are read and decoded. The SIC templates and the Interface templates are defined in JSON files stored in a configurable directory. At the end of the start-up procedure the actual palette for STG editing is displayed on the left-hand side of the StgEditor window. At the time this deliverable is edited, two types of definition files are implemented: one for the STGNode, i.e. a SIC template, and the other for the STGEdge. Each STGNode template file defines a ClassName to identify the SIC, a textual description, the path to the file of the icon image, the path to a file containing the definition of the ITG resources associated with the SIC, and the definition of the endpoints of the interfaces to be terminated by the SIC. The STGEdge template file is quite similar to the STGNode one. Differences are the lack of an icon file, as edges are drawn as lines, and the presence of just two endpoints, as each edge always connects up to two STGNodes. Figure 32 shows how the different components of the model are combined to yield the expected result. The uppermost level shows two SICs interconnected by an interface. This abstract model is represented in the StgEditor GUI by means of a graph having one node (StgNode) for each SIC and an edge (StgEdge) for the interface. Each node and edge is characterized by a number of parameters specific to each SIC and Interface, and by a snippet of HOT code that describes the relevant ITG components. The StgEditor user, e.g. an MCN Service Designer, can edit these parameters and assign values to fit the specific application. Once the STG editing has been completed, the StgEditor uses the configured parameters to customize the HOT snippets. All the customized snippets are then combined together to produce the integrated HOT template file that can be passed to the Service Orchestrator and enter its service life cycle.

Figure 32 StgEditor information flow
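As an illustration of what such a palette definition might contain, the Python sketch below loads a hypothetical STGNode JSON entry. The field names (class_name, description, icon, itg_snippet, endpoints) are made up for the example and only mirror the kinds of information listed above; they are not the actual file format of the StgEditor.

import json

# Hypothetical STGNode palette entry mirroring the fields described above.
SAMPLE_STG_NODE = """
{
  "class_name": "MaaS_SIC",
  "description": "Monitoring service instance component",
  "icon": "icons/maas.png",
  "itg_snippet": "snippets/maas_resources.hot",
  "endpoints": ["monitoring_if", "mgmt_if"]
}
"""

def load_palette_entry(text):
    """Decode one palette entry and check that the expected fields are present."""
    entry = json.loads(text)
    for field in ("class_name", "description", "icon", "itg_snippet", "endpoints"):
        if field not in entry:
            raise ValueError("palette entry is missing field: " + field)
    return entry

print(load_palette_entry(SAMPLE_STG_NODE)["class_name"])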

2.9.4 Documentation of the code

Table 18 shows where the source code can be found and how documentation can be accessed:


Table 18 StgEditor documentation
Sub-Component | Reference | Documentation
STG editor framework | https://github.com/jgraph/jgraphx/tree/master/docs | Documentation can be found within the "api" and "manual" directories
STG Editor User Manual | https://git.mobile-cloud-networking.eu/cloudcontroller/stg_editor/UserManual_v0.doc | User Manual for the Stg Editor (living document)
Stg Editor prototype | https://git.mobile-cloud-networking.eu/cloudcontroller/stg_editor/Windows | How to run the Stg Editor prototype

2.9.5 Third parties and open source software

Table 19 shows the third-party dependencies of the StgEditor.

Table 19 StgEditor dependencies
Architecture component | Software Name | Reference | License
STG editor framework | jgraphx | https://github.com/jgraph/jgraphx | BSD
Yaml coding/decoding | SnakeYaml | http://www.snakeyaml.org | Apache 2.0
JSON coding/decoding | Json-lib | http://json-lib.sourceforge.net | Apache 2.0

2.9.6 Installation, Configuration, Instantiation

The StgEditor is provided as a jar file plus a number of jar libraries. A configuration file is available for customizing the location for the palette definition files, icon images and HOT snippets, and other possible customizations. Before launching the StgEditor these files should be properly populated.

2.9.7 Roadmap

At M18 a first working prototype will be demonstrated, capable of generating actual HOT templates out of SIC-based graphs. At M21 the StgEditor is expected to be completed with a number of examples.

2.9.8 Conclusions and Future work

The StgEditor is ongoing work that aims at exploiting GUI techniques to ease the complexities of MCN service instantiation. Future work will focus on challenging the implementation by coding the STG nodes and edges to represent the MCN Services being developed.

2.10 Database-as-a-Service

A Database-as-a-Service offers storage capabilities to SO and SIC instances. This service is built on existing technologies and delivered out of Task 3.4.


2.10.1 Definition and Scope

SO instances as well as SICs should have access to on-demand storage. This feature is delivered through a Database-as-a-Service.

2.10.2 High-level design

SO instances can access storage to be themselves fault tolerant and highly available. Therefore the Northbound API of the CC allows for attaching storage solutions such as MongoDB, PostgreSQL and MySQL. SICs can access storage through the Database-as-a-Service offered through OpenStack Trove (Trove 2014). The databases themselves are launched in Virtual Machines and can then be accessed through their native interfaces. Supported databases include Percona, MySQL, MongoDB, Cassandra, Redis, CouchDB, Memcache and VoltDB.

2.10.3 Low-level design

For OpenStack Trove's design manuals please refer to: https://wiki.openstack.org/wiki/TroveArchitecture

For OpenShift Origin's design manuals please refer to: http://openshift.github.io/documentation/oo_cartridge_guide.html#mariadb

2.10.4 Documentation of the code

Table 20 shows where the source code can be found and how documentation can be accessed:

Table 20 Database-as-a-Service documentation
SubComponent | Reference | Documentation
OpenStack Trove | http://docs.openstack.org/developer/trove/ | https://wiki.openstack.org/wiki/Trove
OpenShift Cartridges | https://www.openshift.com/developers/technologies | For example, the MongoDB documentation can be found here: https://www.openshift.com/developers/mongodb

2.10.5 Third parties and open source software

Table 21 shows the used third-party software packages:

Table 21 Database-as-a-Service dependencies
Name | Description | Reference | License
Runtime:
OpenStack | IaaS solution | http://www.openstack.org | Apache 2.0
OpenShift Origin | PaaS solution | http://openshift.github.io/ | Apache 2.0

2.10.6 Installation, Configuration, Instantiation

OpenShift Origin comes with support for databases such as MongoDB and PostgreSQL out of the box. They only need to be configured in the respective environments. For example, MongoDB has been configured for the MCN CC development environment and is available out of the box. Instances of the database can be attached to SO instances using the Northbound API of the CC. OpenStack Trove is available on the OpenStack cloud stack and is therefore tightly integrated. Through the Trove API it can easily be consumed as a Database-as-a-Service. When requesting a new database instance, the reference endpoint is returned, including the authentication tokens required to access the data.
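As an illustration, the hedged Python sketch below requests a new database instance directly from the Trove REST API. The endpoint, tenant id, token and flavor reference are placeholders, and the body layout follows the OpenStack database API v1.0, so details may differ between releases.

import json
import urllib.request

TOKEN = "<keystone-token>"                          # placeholder credentials
TROVE = "http://controller:8779/v1.0/<tenant-id>"   # placeholder endpoint

def create_db_instance(name, flavor_ref, volume_gb):
    """Ask Trove for a new database instance; the response carries its reference."""
    body = {"instance": {"name": name,
                         "flavorRef": flavor_ref,    # id of a Nova flavor (placeholder)
                         "volume": {"size": volume_gb},
                         "databases": [{"name": "so_state"}]}}
    req = urllib.request.Request(
        TROVE + "/instances",
        data=json.dumps(body).encode("utf-8"),
        headers={"X-Auth-Token": TOKEN, "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))["instance"]

instance = create_db_instance("so-state-db", "2", 2)
print(instance["id"], instance["status"])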

2.10.7 Roadmap

The only open item is to test the integration of OpenStack Trove with OpenStack Heat supporting the full orchestration, which will be completed once OpenStack Icehouse is released.

2.10.8 Conclusions and Future work

This concludes the initial development/configuration of storage solutions for both SO instances and Service Component Instances. The work on integrating OpenStack Heat with Trove by the OpenStack community will be closely monitored and tested. No immediate further work is planned.

2.11 Radio Access Network-as-a-Service

The following sections cover work done on the Radio Access Network conducted by Task 3.5.

2.11.1 Definition and Scope

The task consists of constructing the design elements for a system that can be used to manage a Radio Access Network (RAN) for an organization that specializes in providing on-demand RAN to customers, either directly to Enterprise End Users (EEU), e.g., a Mobile Network Operator (MNO), or via intermediate actors, called Mobile Cloud Network Service Providers (MCNSP) in the MCN terminology. The organization offers RAN to its customers with a given pricing, in a variety of geographical areas, for a certain duration, with some target traffic loads to be supported with specific Radio Access Technologies (RAT) and with certain guarantees specified by a Service Level Agreement (SLA).

2.11.2 High-level design

Figure 33 shows the high-level system functional view for the current release. The current implementation aims at designing a basic end-to-end configuration using the MCN framework. For this reason we chose to implement a scenario with a single stakeholder providing both the Radio Access Network and the core network. As a result, a single instance of the SO controls both the LTE RAN and the EPC. Please also note that the base station implements the S1 interface with the core network and the interface with the traffic generators. The next RANaaS release will integrate a dedicated RANaaS SM and an emulated base station implementing the radio layers using the OpenAirInterface open source project.


Figure 33 Architecture reference model for M18 prototype (components: Service Orchestrator, Cloud Controller, configuration storage, the RANaaS instance with a per-RAT BBU pool and an L3 eNB, a Traffic Generator, MME/S-GW and DNS; legend: A = Agent, BBU = Base Band Unit, DNS = Domain Name System, eNB = evolved NodeB, L3 eNB = Layer 3 eNB, MME = Mobility Management Entity, RAT = Radio Access Technology, S-GW = Serving Gateway)

To complete the demo for the M18 prototype, OpenEPC's eNodeB was virtualized and used with the OpenEPC solution. In addition, performance analyses have been conducted using the OpenAirInterface.

2.11.3 Low-level design

The current release of the RANaaS focuses mainly on two user stories, as also shown in Figure 34:

• As an EPC and RAN Provider I am able to deploy and subsequently provision an end-to-end mobile network made up of simplified LTE base stations and an Evolved Packet Core, in order to be able to test the end-to-end connection between traffic generators and IP service end points (e.g., a web server).

• As an EPC and RAN Provider I am able to inject traffic into a deployed RAN made up of simplified LTE base stations, in order to be able to test end-to-end connectivity and test algorithms, e.g., network function placement.

Figure 34 Use-case diagram of RANaaS for M18 (actors: RAN + EPC Provider, Cloud Controller, Service Orchestrator; use cases: Authentication, Manage Service Requests, Design RAN + EPC, Deploy RAN + EPC, Provision RAN + EPC, Manage RAN + EPC, File Editor, Manage User Generator)

2.11.3.1 OpenEPC's eNodeB

The eNodeB which is part of OpenEPC is a key component for the M18 prototype release. The eNodeB design consists of separate elements with multiple functionalities that have to cooperate within one entity. Figure 35 depicts a general overview of the construction of the eNodeB emulator and its main components. Besides protocols enabling standard communication with other network elements, the most important role in the eNodeB emulator model is played by the highest logical block, called "eNodeB". It contains the majority of the component's logic, which has to fulfil multiple different tasks ensuring proper operation of the node.


Figure 35 eNodeB architecture (modules: eNodeB logic, addressing, S1AP, X2AP, SCTP, GTP, routing, routing_gtpu, routing_raw, NAS, MySQL, console)

Figure 36 shows the interfaces between the eNodeB, the traffic generator and the OpenEPC instance.

Figure 36 Interfaces between User Generator, eNodeB and SGW (protocol stacks: Application, GTP-U (GPRS Tunnelling Protocol), UDP, IP, MAC and Ethernet on the User Generator and eNodeB emulator sides; LTE-Uu emulation between the User Generator and the eNodeB emulator; S1-U between the eNodeB emulator and the SGW-U)

With this setup it is possible to inject traffic through a User Generator via the eNodeB to the SGW of the EPC. The User Generator works on a per-"virtual"-user basis. For each virtual user the following phases can be identified:

• Attachment

• Traffic generation

• Detachment

The message exchange during all of these procedures is presented in Figure 37. The communication between the User Generator and the eNodeB is based on simple text commands sent via a regular TCP/IP connection. The commands are received and processed by an agent listening on the specific IP address and TCP port configured by the SO.

Figure 37 User generator and eNodeB
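The text-command interface could be driven with a small TCP client along the lines of the following sketch. The agent address and the command strings (attach/generate/detach) are purely illustrative, since the actual command syntax of the OpenEPC-based agent is not documented here.

import socket

AGENT = ("192.0.2.10", 5000)   # placeholder IP address and TCP port configured by the SO

def send_command(command):
    """Send one text command to the User Generator agent and return its reply."""
    with socket.create_connection(AGENT, timeout=5) as sock:
        sock.sendall((command + "\n").encode("ascii"))
        return sock.recv(4096).decode("ascii").strip()

# Illustrative per-virtual-user phases: attachment, traffic generation, detachment.
for cmd in ("attach user1", "generate user1 udp 64kbps 30s", "detach user1"):
    print(cmd, "->", send_command(cmd))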

In order to provide dynamic configuration management for cloudified network functions, a novel small-size Element Management System (EMS) Agent was developed, which resides in each instance of a cloudified network function. In order to enable dynamic configuration of the specific network function, the particular EMS Agent controls the instance's active state. The EMS Agent receives commands from an external site (the Service Orchestrator), usually over the management network.


The communication between the SO and eNodeB-Config is realized via a REST-based API, as shown in Figure 38. Each command sent to the REST API corresponds to a script located directly on the network function's instance – these are called service adapter scripts in the following sections.

Figure 38 Configuration management via EMS Agent (the Service Orchestrator calls the REST API exposed by an HTTP server on the instance, which applies the configuration to the L3 eNB, a generic EPC binary)

Although these service adapter scripts are individual to each service, they are organized in a common way. Some of the scripts are related only to configuration information regarding the particular network function unit. Other scripts are executed whenever dependencies among network functions exist.

Table 22 API for the eNB management
Name | Function
preinit | Takes care of basic configuration (such as network configuration) related not to the service functionality provided through this service instance but to the running VM itself.
install | Installs the component itself, applies options etc.; anything which is not dependent on other service instances or services.
relation-joined | Takes care of everything that needs to be done to establish a connection between dependent components.
relation-departed | Removes the established connection.
start | Starts a service instance.
stop | Stops a service instance.

Please note that whenever a relation between network function units is established (joined) or dismantled (departed), corresponding scripts are executed on each side of the created/deleted relation.
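A minimal sketch of such an agent is shown below: a small HTTP server that maps each REST command onto the service adapter script of the same name. The paths, the script location and the framework-free implementation are assumptions for illustration only; the actual EMS Agent is not described at this level of detail here.

import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer

SCRIPT_DIR = "/opt/ems/scripts"   # placeholder location of the service adapter scripts
COMMANDS = {"preinit", "install", "relation-joined", "relation-departed", "start", "stop"}

class EmsHandler(BaseHTTPRequestHandler):
    """Maps POST /<command> onto the service adapter script of the same name."""

    def do_POST(self):
        command = self.path.strip("/")
        if command not in COMMANDS:
            self.send_response(404)
            self.end_headers()
            return
        result = subprocess.run([SCRIPT_DIR + "/" + command], capture_output=True)
        self.send_response(200 if result.returncode == 0 else 500)
        self.end_headers()
        self.wfile.write(result.stdout)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), EmsHandler).serve_forever()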

2.11.4 Documentation of the code

Table 23 shows the documentation of the RANaaS.

Table 23 RANaaS documentation
SubComponent | Reference | Documentation
OpenEPC | http://www.openepc.net | Licensees can retrieve the documentation of OpenEPC.

2.11.5 Third parties and open source software

2.11.5.1 OpenAirInterface

OpenAirInterface (Eurecom 2013) is an open-source hardware/software development platform developed by Eurecom as an emulator for the LTE RAN (Nikaein 2012). It combines simulation or emulation of the physical layer with emulation of the MAC and higher layers, but there are also versions allowing work with actual PHY-layer equipment. Currently, Eurecom is working on adding an EPC to the emulator, which is expected to be released in 2014. It provides several emulators with corresponding profiling tools: dlsim and ulsim implement only the physical layer processing; oaisim implements the complete stack, generating a given number of UEs and an eNB, providing their IP addresses and enabling traffic injection using traffic source generators – such as ORG, IPERF or DITG – that is processed exactly as on real equipment, the processing executing all layers of the UE and eNB protocol stacks.

OAI has been installed on several machines: in particular on CloudSigma ones, a public cloud provider of virtual machines (VMs) supported by a shared physical infrastructure, and on a VM at the University of Bern, where the requirements towards the physical infrastructure are lower and the server has higher specs. OAI is used to profile, for specific allocated radio resources and traffic/service usage, the computation resources needed by the various BBU components (PHY cell, UP, CP) to satisfy the LTE 3GPP requirements in terms of latency.

Several limitations and errors have been identified which do not yet allow the use of all of OAI's stated potential. In particular, it does not support multi-core processing (although it is multi-threaded), strongly limiting its performance. The operation with traffic sources also presents several errors that require improvements to OAI. These faults are expected to be solved in the mid term, in order to run the intended evaluations and enable final conclusions on the feasibility of eNBs running in the cloud. Several improvements have been made in the code and submitted as contributions to the open source community, in order to enable the profiling of processing resources.

The OpenAir Scenario Descriptor (OSD) is a configuration dataset composed of four main parts, which represent the basic description of the Open Air Interface (OAI) emulation platform. It is part of the OAI emulation methodology for describing scenarios using the XML format. This allows repeatable and controlled experimentation to be executed, without having to run simulations on the command line and set parameters manually. As the parameter set was rather limited, more parameters required for the experimentation work were defined, implemented and contributed back to the OAI community.

The OpenAirInterface Traffic Generator (OTG) is a tool used for the generation of realistic application traffic for the performance evaluation of emerging networking architectures. It accounts for conventional traffic but also for the traffic characteristics of applications such as M2M and online gaming. Hence, OTG is capable of generating mixed human and machine type traffic patterns.

Table 24 shows the used third-party software packages:

Table 24 RANaaS dependencies

Name | Description | Reference | License
OpenEPC | Includes eNodeB | http://www.openepc.net | Fraunhofer proprietary
OpenAirInterface | OAI | http://www.openairinterface.org | GPL
Development

2.11.6 Installation, Configuration, Instantiation

The following sections briefly describe the configurations needed for the RANaaS prototype of M18.

2.11.6.1 OpenEPC's eNodeB

Figure 39 describes how the eNodeB service unit is instantiated and configured. It assumes that in the initial state a VM holding the OpenEPC eNodeB binary, the eNodeB configuration scripts, and additionally the EMS is provided internally. When the eNodeB service unit is instantiated, the only running component expected is the REST-based API.


Figure 39 eNodeB configuration (the Orchestrator issues the REST commands preinit, install, dns-relation-joined, mme-relation-joined, user-generator-relation-joined and start towards the eNB, after which the eNB and the MME exchange the 3GPP standard signalling S1AP: S1 Setup Request / S1 Setup Response)

An overview of execution order for starting a User Generator instance is shown in Figure 40.

Figure 40 Configuration of User Generator (the Orchestrator issues the REST commands preinit, install, enodeb-relation-joined and start towards the User Generator)

After the User Generator and eNodeB services have been successfully configured, the required relations have been created between the VMs and the elements are operational, the procedure of attaching multiple virtual users to the base station can be initiated. Further information on the installation and usage of the eNB and the User Generator can be found in section 2.11.5.

2.11.7 Roadmap

As also detailed in (TRAN 2014), the sprints up to M21 will focus on:

• Integrating the MCN framework with a RANaaS Service Manager and Service Orchestrator,

• Implementing the RANaaS service lifecycle using the OpenAirInterface emulator as the base station,

• Further work on traffic generation with OpenAirInterface,

• Integration of monitoring.

For M27 the sprints will be dedicated to:

• The integration with other MCN support services, e.g., RCB (Rating, Charging and Billing) and SLA (Service Level Agreement), and

• Work on the business aspects of the RAN service management.

2.11.8 Research works and algorithms

Task 3.5 is working on multiple research topics in parallel, as detailed in the next sections.

2.11.8.1 Performance analysis of eNodeB for porting to the cloud

A range of tests to determine the computational needs of RANaaS concerning the execution of the RAN functionality on cloud-based infrastructure was described in the internal report (Dimitrova 2014). A central focal point is also the set of requirements put on the physical infrastructure by the strict processing delay budget of the RAN. The conducted tests are organized in four categories:

1. Test group A: Tests aiming to determine the dependency of the processing time for the PHY layer functionality – depends on the configuration (and load) of the radio interface.

2. Test group B: Tests aiming to determine the dependency of the processing time for the MAC and higher layers functionality – depends on the number of end users.

3. Test group C: Tests aiming to determine the statistical boundaries of the processing time given execution in a (public) cloud.

4. Test group D: Tests aiming to determine the dependency of the processing time on the configuration of the hardware platform.

The current OAI profiling of the PHY layer does not support multi-threaded operation and thus did not allow us to test the impact of the number of cores on the processing of the PHY layer. It is worth noting, however, that not all PHY operations can be run in parallel, because they are sequential in nature, i.e., each operation depends on the output of the previous one. Therefore, multiple cores will only help where the signal processing can be done in parallel. In order to investigate the potential gains of multi-core platforms we have kept an open communication channel with Eurecom to investigate the feasibility of making the code multi-threaded, and in parallel we will look into the details of the signal processing chain to identify which PHY processing steps could be parallelized, if any.

Profiling of the higher layer stack has a bigger chance to benefit from multi-threading and thus from increasing the number of cores. This is due to the fact that processing of the higher layers is done in parallel for the different users and therefore allows parallelism of the processing. To test the impact of the number of cores on the processing of higher layers, the profiling software (and the implementation of the eNB emulation) should support multi-threading. This is an open issue with the current version of OAI and is currently under investigation in collaboration with Eurecom in the scope of open source contributions.


Given that part of the eNodeB processing may not qualify for optimization due to the sequential execution of the functionality, an alternative to improve the performance is to use CPUs with higher speeds. Although we conducted initial control tests, which gave promising results, the impact of the CPU speed could not be tested thoroughly due to infrastructure limitations. The physical infrastructure of our CloudSigma partners is based on CPUs with equal parameters and does not allow the definition of an increasing-CPU-speed scenario. As CloudSigma is in the process of upgrading its infrastructure, we target test repetition on new, faster processors. Alternatives to the physical infrastructure are being investigated as well. To allow for a more comprehensive evaluation of the processing needs of the RAN components, monitoring tools were deployed on the machines where OAI is running. A monitoring schedule was set to periodically trigger execution of the OAI profiling and in parallel run the monitoring tools, collecting information on the CPU, RAM and other hardware resources used. The schedule is set to span both working days and weekends to also allow the evaluation of fluctuations in the processing times caused by the sharing of the physical hardware infrastructure.

2.11.8.2 Fronthaul Solutions

For building a fronthaul solution it is mandatory to take into account three interdependent requirement types: technical aspects, business aspects and regulation constraints. Detailed fronthaul requirements and available technical solutions are partially described in (D3.1 2013). Based on these requirements, detailed in the internal report (Pizzinat 2013), four work paths have been identified:

1. Single-fiber optical distribution network (ODN): as a first step this could be realized by means of Coarse Wavelength Division Multiplexing-like bidirectional transceivers. Then, this could be done on a dedicated ODN or a shared ODN (Fiber to the …).

2. Low-cost colorless transceivers: colorless transceivers could be used to suppress the inventory issue.

3. Greenfield or brownfield with coexistence: this item refers to NGPON2 scenarios with a coexistence element that allows NGPON2 coexistence with previous GPON or XGPON generations on the existing ODN.

4. Without management capability or with PtP Encapsulation Method (PEM) to fit the required fronthaul requirements and provide O&M: the first option consists of fronthaul in wavelength overlay over the ODN, the second option consists of using NGPON2 interfaces for CPRI transport over the NGPON2 frame.

2.11.8.3 Virtualization of Radio Resources

One of the key characteristics of RAN-as-a-Service (RANaaS) is the capability to provide multi-tenancy. These tenants, which are Virtual Network Operators, are served elastically, on demand and simultaneously over the same physical infrastructure, according to their Service Level Agreements (SLAs). The objective of the virtualisation of radio resources is to realise virtual wireless links, i.e., to share the limited radio resources (e.g., spectrum) among the EEUs while providing RAN instance isolation, ease of use (i.e., network element abstraction), and multi-RAT (Radio Access Technique) support.

Virtual Radio Resource Management (VRRM) is a statistical decision problem, in which decisions have to be taken under uncertainty. Within the internal report (Kocur 2013), statistical models have been set up and evaluated, and first results have been achieved.
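As a toy illustration only (this is not the VRRM model of the internal report; the VNO names, services and weights below are hypothetical), the following snippet shows the basic idea of splitting an aggregated virtual capacity among the services of several VNOs according to SLA-derived weights:

def allocate_capacity(total_mbps, service_weights):
    """Toy proportional split of an aggregated virtual capacity among services.

    service_weights maps (vno, service) -> SLA-derived weight; all names are
    illustrative placeholders.
    """
    total_weight = sum(service_weights.values())
    return {key: total_mbps * weight / total_weight
            for key, weight in service_weights.items()}

if __name__ == "__main__":
    demo = {("VNO-A", "voice"): 3.0, ("VNO-A", "video"): 2.0, ("VNO-B", "data"): 1.0}
    print(allocate_capacity(600.0, demo))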

2.11.8.4 Radio and Cloud Resources Management

C-RAN and RANaaS enable radio resources to be allocated elastically and on demand. The objective is to design a Radio and Cloud Resources Manager (RCRM) that satisfies, on demand, the users of a Mobile Network Operator (MNO) dynamically requesting services. In order to support a given offered traffic load (which varies both geographically and temporally), the requested radio and cloud (processing and storage) resources will be adequately configured, considering the available fibre resources between RRHs and DCs. A key aspect is the quantification and profiling of the relationship between the processing needs at the BBU and a given radio resource usage. This processing must be decomposed into load-independent and load-dependent components, which can be scaled according to needs. This may prove valuable when deciding how to deploy and split the functional components of a BBU onto Virtual Machines (VMs). Various traffic scenarios are to be studied, as described in (Ferreira et al. 2013a), considering both geographic and temporal variations. Upper and lower processing boundaries will be established, and analytic expressions to model the system behaviour are being proposed and worked on in the internal report (Ferreira et al. 2013b).
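A minimal numerical sketch of this decomposition is given below; the coefficients are placeholders and not profiled values from (Ferreira et al. 2013b):

def bbu_processing_estimate(load_fraction, p_static=1.5, p_per_load=6.0):
    """Estimate the BBU processing need (arbitrary units).

    p_static   - load-independent part (cell-level processing, always present)
    p_per_load - load-dependent part, scaled by the fraction of used radio resources
    Both coefficients are placeholders for the values obtained through profiling.
    """
    return p_static + p_per_load * load_fraction

if __name__ == "__main__":
    for load in (0.1, 0.5, 1.0):
        print(load, bbu_processing_estimate(load))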

2.11.9 Conclusions and Future work

The document shows the first step towards the design of a system that offers RAN-as-a-Service on demand, in a variety of geographical areas and for a specific duration of time, to support target traffic loads with a given RAT, pricing and specific SLA guarantees. It is supported by a service manager and a service orchestrator that enable a large variety of use cases. RANaaS is designed to expose the existing RANs in terms of service offer (covered geographical area, technologies, supported traffic profiles and SLAs). It enables customers to add, delete or modify RAN items, paying per usage. A high-level design is proposed, identifying RANaaS system actors and use cases, such as managing the RANaaS catalog, the customer's RAN, and orchestration use cases. These evidence the flexibility and potential of the RANaaS concept. An API has been developed that enables the deployment and management of RAN-as-a-Service. In its current state, it is able to deploy and manage a set of Layer-3 eNBs according to the proposed design. It is based on the OpenEPC software, and a service orchestrator that controls the eNBs has been developed. As future work, the RANaaS application will include a dedicated SM and SO with LTE base stations to enable scenarios with radio constraints. This will enable scenarios where the associated radio resources shall be dimensioned according to the expected traffic load and the needed radio resources, in order to satisfy specific SLAs. A variety of open source software is used within this activity. OpenAirInterface (OAI) is a software-based LTE eNodeB, running on CloudSigma machines. It has initially been used to profile the computation resources needed for various traffic and radio resource usages. It shall be used in a second stage to replace the already developed Layer-3 eNB in order to emulate a realistic eNB, implementing the entire eNB stack.

To support it, open source code for traffic generation has been used from OAI, the so-called OTG (OpenAirInterface Traffic Generator), as well as IPERF and D-ITG (Distributed Internet Traffic Generator), which are currently used for specific purposes. The goal is to be able to inject realistic traffic, either from realistic traffic generators or from real end-user equipment running real applications.

Several research topics are addressed around this activity. Using OAI, a study was presented on the variation of the processing time and its dependence on the physical infrastructure. It shows that RAN processing in the cloud should be done with care, as the processing time presents large fluctuations, due to the sharing of the physical hardware infrastructure, which proved unable to give processing guarantees. The study shall be extended to various resource-usage scenarios, and solutions to these issues will be researched. Limitations of OAI in terms of multi-core support, and several needed improvements still to be implemented, prevent final conclusions at the current stage on the viability of supporting well-performing eNBs in the cloud.

The design of a Radio and Cloud Resources Manager (RCRM) is proposed. To support a given offered traffic load (which varies both geographically and temporally), the requested radio and cloud resources shall be adequately configured. Several radio resources can be dynamically allocated, such as the set of active micro- and macro-cell RRHs, the available RATs, the number of frequency carriers per RRH and the associated bandwidth size. On the other hand, the processing and storage needs of BBUs vary dynamically with the load of the associated RRH. Computing resources supporting the instantiation of BBUs may be scaled up or down (increasing or decreasing processing and storage capacity) according to the needs, which requires support for the seamless migration of BBUs from one DC to another. As future work, its implementation and performance evaluation shall be assessed.

To support the RRH-BBU fronthaul, several solutions are evaluated in terms of technical requirements, costs and technical solutions on fiber, with four long-term options being presented, ranging from a single fiber network, low cost colorless transceivers, and greenfield or brownfield with coexistence, to operation without management capability. An example taken from a deployment in Brittany highlights antenna and central office locations and the interconnecting links. This shall be used as a reference scenario to be evaluated at a further stage.

The concept of virtual radio resources is also proposed, together with a model for the management of these resources. The virtualisation of radio resources solution aggregates and manages all available resources. Virtual Network Operators do not have to deal with physical radio resources at all. Instead, they ask for wireless connectivity in the form of capacity per service. The services of VNOs are provided by the required virtual resources based on the contract and the defined SLA (Service Level Agreement). The virtualisation approach leads to a more efficient and flexible V-RAN. The details of the proposed model for the management of virtual radio resources, including its relation with physical resource managers, the estimation of network capacity, and the allocation of data rate to each service of each VNO, have been described. A practical heterogeneous cellular network is considered as a case study.
The initial numerical results indicate an increase in resource usage efficiency of up to 6%. These results show how the virtual radio resource management allocates capacity to the various services of different VNOs when they have different SLAs and priorities.

3 Service enablement

The following two sections describe the generic Service Orchestrator (SO) and the generic Service Manager (SM). Although not originally intended to be delivered out of WP3, they were developed for integration and testing purposes for M18.

3.1 Generic Service Orchestrator

The SO is defined in (D2.2 2013). It is again a domain-specific component and hence in the hands of the service owners to implement for their services. However, to test the integration of the foundations defined in WP3, a first generic implementation is provided by WP3.

3.1.1 Definition and Scope

The service orchestrator is the component in the MCN architecture that is responsible for the creation of a tenant's service instance. For each tenant's service instance request, a service orchestrator is instantiated and managed by the service manager. To aid the management of the SOs instantiated by the service manager, the SO has a lightweight interface for the sole use of the service manager. This interface is currently a very early prototype and its definition is still undergoing iterations.

3.1.2 High-level design

As defined in (D2.2 2013 p. 30) the SO contains two functional blocks: “The Decision block (SOD) is responsible for interacting with “external” entities, e.g. Support Services or SM and take decisions on the run-time management of the SICs […] The Execution block (SOE) is responsible for enforcing the decisions towards the CC”

3.1.3 Low-level design

The UML class diagram in Figure 41 shows the SO. To implement a service orchestrator, a developer needs to implement the ServiceOrchestratorExecution and ServiceOrchestratorDecision interfaces. The classes that implement these interfaces then provide the means to create an instance of the relevant service.

Figure 41 SO UML class diagram

The Application class is the entry point and represents the SO's interface towards the SM.
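A hedged sketch of what a concrete implementation could look like is given below; the class skeletons and method names (deploy, provision, dispose) are assumptions derived from the lifecycle described in section 3.1.3.1 and do not reproduce the actual SDK interfaces:

# Illustrative sketch only - the real interface definitions live in the MCN SDK;
# class, method and attribute names here are assumptions based on the lifecycle
# described in this section (deploy, provision, runtime management, disposal).

class MyServiceOrchestratorDecision(object):
    """Decision part (SOD): talks to support services / the SM and takes decisions."""

    def __init__(self, execution):
        self.execution = execution

    def start(self):
        # A trivial policy: immediately ask the execution part to deploy.
        self.execution.deploy()


class MyServiceOrchestratorExecution(object):
    """Execution part (SOE): enforces the decisions towards the Cloud Controller."""

    def __init__(self, service_template):
        self.service_template = service_template
        self.stack_id = None

    def deploy(self):
        # In a real SO this would hand the service template to the CC via the SDK.
        self.stack_id = "stack-0"  # placeholder identifier

    def provision(self, config=None):
        pass  # push runtime configuration to the service instance components

    def dispose(self):
        self.stack_id = None  # tear the service instance down again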

3.1.3.1 SOs and SO Bundles

Service orchestrators are deployed within a service manager as code (Python in the prototype case) and supporting files. For now, no assumptions are made about the structure of the bundle. Figure 42 shows a possible structure.

Figure 42 Structure of a SO bundle

The service manager knows where to find this service orchestrator bundle as it will read the location from the service manager configuration file. Within the bundle the following details might be stored:

 Any kind of data needed to perform the management of Service Instances. This can include the service template, configuration files or whatever else is needed.

 The logic for deploying, provisioning, runtime management and disposal of services in the form of Python code (so.py).

 Configuration files which tell the CC to install dependencies such as the SDK, and to set environment variables to use the right design module endpoint (support directory).

3.1.4 Documentation of the code

Table 25 shows where the source code can be found and how its documentation can be accessed:

Table 25 SO documentation

SubComponent | Reference | Documentation
SO | https://git.mobile-cloud-networking.eu/cloudcontroller/mcn_cc_sdk/tree/master/misc/sample_so | See README.md file.
Deployment code of sample SO | https://git.mobile-cloud-networking.eu/cloudcontroller/mcn_cc_sdk/tree/master/misc/cc_deploy | See README.md file.

3.1.5 Third parties and open source software

Table 26 shows the third-party software packages used:

Table 26 SO dependencies

Name | Description | Reference | License
pyssf | OCCI implementation (development) | http://github.com/tmetsch/pyssf | LGPL

3.1.6 Installation, Configuration, Instantiation

The sample service orchestrator can be run by following the steps in the README.md file of cc_deploy, as referenced in section 3.1.4. It basically deploys the sample SO once it has installed the dependencies and set the right environment variables (the endpoint of the design module).

3.1.7 Roadmap

The following sprint, up to M21, will cover some basic changes to the generic sample SO. The Application class will be extracted and put into the SDK. This will generalise the interface of SO instances. This interface will be built upon the OCCI specification. No other actions are planned.

3.1.8 Conclusions and Future work

The SO presented here was used to test the overall integration of all components. It will be used for automated testing in the future. Fully fledged implementations of the SOs will be carried out in the respective work packages (WP4, WP5).

3.2 Generic Service Manager

The SM is defined in (D2.2 2013) and represents one of the key components of the overall architecture. Because SMs and SOs are domain-specific, they should be implemented by the Service Owners of the higher-level services such as EPC (WP4) and IMS (WP5). However, to ensure an integrated environment, WP3 has provided a first implementation for integration purposes.

3.2.1 Definition and Scope

The service manager is the first point of entry for EEUs. At the service manager the EEU can, through the MCN service lifecycle, request service instances. The service manager provides the EEU with the simple operations of creating, deleting, updating and describing service instances.

3.2.2 High-level design

In order to aid integration and interoperation, the SM exposes and offers the management of service types and instances through the OCCI specification. In this particular case, the core OCCI specification is used to allow service providers to represent their service offer as an OCCI type, better known as an OCCI Kind. Importantly, adopting OCCI provides the means to discover, through the SM, which service types are offered by the service provider.
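As an illustration of such discovery (assuming the SM exposes the standard OCCI query interface at "/-/" as defined by the OCCI HTTP rendering; the endpoint URL is hypothetical):

import requests  # third-party HTTP client, assumed available

SM_ENDPOINT = "http://sm.example.mobile-cloud-networking.eu:8888"  # hypothetical

def discover_service_types():
    """List the OCCI Kinds (service types) that a service manager offers."""
    # The OCCI HTTP rendering defines '/-/' as the query/discovery interface.
    resp = requests.get(SM_ENDPOINT + "/-/", headers={"Accept": "text/occi"})
    resp.raise_for_status()
    # Categories (Kinds, Mixins, Actions) are returned in the 'Category' header.
    return resp.headers.get("Category", "")

if __name__ == "__main__":
    print(discover_service_types())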

3.2.3 Low-level design

The UML class diagram in Figure 43 shows a brief overview of a generic SM.

Figure 43 SM UML class diagram

To implement a service manager, a developer needs to simply define their service as an OCCI Kind, as shown below. Once done, the service manager is ready to be executed, run, and to serve requests from EEUs. Figure 44 shows an example of a service manager implementation. This is the only code that a developer needs to write to have a basic service manager (without AAA or SLA support) ready for operation.

Figure 44 Sample SM

In the code, a service type is defined. A service type is what gives the service its signature. Service types are implemented as OCCI Kinds and have a set of metadata that describes them. The key metadata points to note are listed below (a schematic example follows the list):

 identifier: the identifier is in fact a combination of the first two parameters. When combined, this provides a unique identifier for the type. In the example code above, the identifier would be: http://schemas.mobile-cloud-networking.eu/occi/sm#epc

 title: this provides some human-readable text briefly describing the service.

 attributes: these are a set of attribute names that can either be read-only (immutable) or read/write (mutable) by the EEU. The details of the exact semantics are covered in the OCCI core specification.
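The snippet below is a schematic, plain-Python rendering of these metadata fields; it deliberately avoids relying on the exact pyssf API, and all attribute names other than the scheme and term taken from the example above are hypothetical:

# Schematic representation of an OCCI Kind's metadata for a service type.
SCHEME = "http://schemas.mobile-cloud-networking.eu/occi/sm#"
TERM = "epc"

epc_service_type = {
    # scheme + term -> http://schemas.mobile-cloud-networking.eu/occi/sm#epc
    "identifier": SCHEME + TERM,
    "title": "An example EPC service type.",
    "attributes": {
        # attribute name      -> mutability as seen by the EEU (names are hypothetical)
        "mcn.endpoint.api":     "immutable",   # read-only, set by the provider
        "mcn.service.password": "mutable",     # read/write for the EEU
    },
}

if __name__ == "__main__":
    print(epc_service_type["identifier"])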

3.2.3.1 SM and Service Bundles

Having a service manager run and operate is not enough to create service instances. The key aspect here is the service orchestrator. As described in (D2.2 2013), it is the service orchestrator that is responsible for the creation of the EEUs' service instances. The concept of the service orchestrator bundle, also described in D2.3, is supported by the service manager. Currently, the service orchestrator is deployed along with a service manager. It sits within a directory named "bundle" relative to the service manager implementation ("demo_service_manager.py" in Figure 45).

Figure 45 SO bundle structure

The service manager knows where to find this service orchestrator bundle as it will read the location from the service manager configuration file. For more details on the configuration of the service manager please see the README.md in the service manager code repository.

3.2.4 Documentation of the code

Table 27 shows where the source code can be found and how its documentation can be accessed:

Table 27 SM documentation

SubComponent | Reference | Documentation
SM | https://git.mobile-cloud-networking.eu/cloudcontroller/mcn_sm/tree/initial_sm_impl | See README.md file.

3.2.5 Third parties and open source software

Table 28 shows the third-party software packages used:

Table 28 SM dependencies

Name | Description | Reference | License
pyssf | OCCI implementation (development) | http://github.com/tmetsch/pyssf | LGPL

3.2.6 Installation, Configuration, Instantiation

All configuration of the service manager is carried out through etc/sm.cfg. There are three sections in this configuration file (a sketch of how these values could be read follows the list):

 general: this configuration section is used by the code under the namespace of mcn.sm.

 service_manager: this configuration section is related to the service manager that the developer implements.

o port: the port number on which the service manager listens.

o bundle_location: this is where your service orchestrator bundle is located. Currently only file path locations are supported.

 cloud_controller: this configuration section is related to the configuration of the cloud controller's APIs.

o nb_api: the URL to the North-bound API of the CloudController.
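The following sketch shows how such a configuration could be read with Python 3's standard configparser; the option names come from the list above, while the INI layout and the fallback values are assumptions:

# Minimal sketch, assuming etc/sm.cfg is a plain INI file with the sections and
# option names listed above; the fallback values are purely illustrative.
from configparser import ConfigParser

def load_sm_config(path="etc/sm.cfg"):
    cfg = ConfigParser()
    cfg.read(path)
    return {
        "port": cfg.getint("service_manager", "port", fallback=8888),
        "bundle_location": cfg.get("service_manager", "bundle_location",
                                   fallback="./bundle"),
        "nb_api": cfg.get("cloud_controller", "nb_api",
                          fallback="http://localhost:8888"),
    }

if __name__ == "__main__":
    print(load_sm_config())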

3.2.7 Roadmap

The upcoming sprints will, as defined in (SM 2014), focus on the following user stories, to be delivered by M27 and M30:

 Integration of the support services that support both the technical and the business service manager.

 Separation of the SM into BSM and TSM components.

 Implementation of the BSM-to-BSM components to support inter-SM communications.

 Implementation of asynchronous request processing to improve the perceived responsiveness of the SM.

 Extension of the administration capabilities of the SM (e.g. remote upload of service bundles).

3.2.8 Conclusions and Future work

Although this first implementation of the SM has been provided by work package 3 for now, it should be handed over to the work packages with more domain knowledge. Still, the implementation presented here ensured that work package 3 could verify that the "foundations" are integrated and functional.

4 Conclusions

As the title of the deliverable suggests, the first prototypes of all components delivered out of WP3 were shown, except for the analytics service, which is planned for future releases. This is an important milestone for the further work, as from now on the basic infrastructural foundations are available to all other services within the project. Each of the previous sections has its own conclusions and outlook on future work. For the overall work done in WP3, the authors want to emphasise that more work will be carried out to integrate the different components. Future steps will therefore also focus on integrating the components even more tightly using the CC and the corresponding Service Development Kit. These two parts are the key architectural artefacts which bind all the different services and components together. Verification of the components has been done using installations in the testbed, unit testing, and running systems. This is a key outcome, as we can verify up-and-running code that has been developed within the project and by external communities. With that, WP3 has also left the more theoretical work on architectures and moved into the verification of the previous work items. Also note that the work done here might influence, or already has influenced, external communities. This includes contributions to projects such as OpenStack as well as contributions to, and influence on, standards such as OCCI, which is widely used in the foundations of MCN.

5 Terminology

AAA – Access, Authorisation, Accounting
AaaS – Analytics-as-a-Service
API – Application Programming Interface
BBU – Base Band Unit
CC – Cloud Controller
DNS – Domain Name Service
DNSaaS – Domain Name Service-as-a-Service
E2E – End to End
EEU – Enterprise End User
EMS – Element Management System
EPC – Evolved Packet Core
HTTP – Hypertext Transfer Protocol
ITG – Infrastructure Template Graph
LB – Load Balancing
LBaaS – Load Balancing-as-a-Service
MaaS – Monitoring-as-a-Service
MCNSP – Mobile Cloud Networking Service Provider
MNO – Mobile Network Operator
OAI – OpenAirInterface
OCCI – Open Cloud Computing Interface
ODN – Optical Distribution Network
RAN – Radio Access Network
RANaaS – Radio Access Network-as-a-Service
RANP – Radio Access Network Provider
RAT – Radio Access Technique
SCM – Source Code Management
SI – Service Instance
SIC – Service Instance Component
SLA – Service Level Agreement
SLAaaS – Service Level Agreement-as-a-Service

SM – Service Manager
SO – Service Orchestrator
STG – Service Template Graph
TCP – Transmission Control Protocol
UML – Unified Modelling Language
VRRM – Virtual Radio Resource Management

References

3GPP. (2013) LTE; Evolved Universal Terrestrial Radio Access (E-UTRA) and Evolved Universal Terrestrial Radio Access Network (E-UTRAN); Overall description; Stage 2 (3GPP TS 36.300 version 11.5.0 Release 11), http://www.etsi.org/deliver/etsi_ts/136300_136399/136300/11.05.00_60/ts_136300v110500p.pdf
Alex Heneveld. (2013) CAMP, TOSCA, and HEAT, http://de.slideshare.net/alexheneveld/201304specscamptoscaheatbrooklyn
AWS. (2013) AWS CloudFormation, http://aws.amazon.com/en/cloudformation/
Azodolmolky, S., Nejabati, R., Escalona, E., Jayakumar, R., Efstathiou, N., and Simeonidou, D. (2011) Integrated OpenFlow-GMPLS Control Plane: An Overlay Model for Software Defined Packet over Optical Networks, presented at ECOC, pp. 1–3
Ceilometer. (2013) Project Ceilometer OpenStack, https://wiki.openstack.org/wiki/Ceilometer
D2.2. (2013) Overall Architecture Definition, MobileCloud Networking Project
D3.1. (2013) Infrastructure Management Foundations – Specifications & Design for MobileCloud framework, MobileCloud Networking Project
Dimitrova, D. (2014) Performance analysis of eNodeB for porting to the cloud, MobileCloud Networking, https://svn.mobile-cloud-networking.eu/svn/mcn/WP3/T3.5_WirelessCloud/Deliverables/D3.2/MCN-WP3-UBern-D3.2_performance.docx
DNSAAS. (2014) DNSaaS Jira - Issues & Roadmap, https://jira.mobile-cloud-networking.eu/browse/DNSAAS
DoW. (2012) Description of Work, Mobile-Cloud Networking
ETSI. (2013) ETSI GS NFV 002: Network Functions Virtualisation (NFV): Architectural Framework, http://www.etsi.org/deliver/etsi_gs/NFV/001_099/002/01.01.01_60/gs_NFV002v010101p.pdf
Eurecom. (2013) Open Air Interface (OAI), www.openairinterface.org/
Ferreira, L., Branco, M., and Correia, L. M. (2013a) Traffic Generation, MobileCloud Networking, https://svn.mobile-cloud-networking.eu/svn/mcn/WP3/T3.5_WirelessCloud/Deliverables/D3.2/MCN-WP3-INOV-09101-Traffic_Generation_for_D3.2.docx
Ferreira, L., Branco, M., and Correia, L. M. (2013b) Radio and Cloud Resources Management, MobileCloud Networking, https://svn.mobile-cloud-networking.eu/svn/mcn/WP3/T3.5_WirelessCloud/Deliverables/D3.2/MCN-WP3-INOV-09002-RCRM_for_D3.2.docx
Kocur, J. (2013) Installation, configuration and instantiation of eNB, MobileCloud Networking, https://svn.mobile-cloud-networking.eu/svn/mcn/WP3/T3.5_WirelessCloud/Deliverables/D3.2/MCN-WP3-TUB-installation-eNB.docx

Linux Foundation. (2013) OpenDaylight - An Open Source Community and Meritocracy for Software Defined Networking, Linux Foundation, http://www.opendaylight.org/publications/opendaylight-open-source-community-and-meritocracy-software-defined-networking
Linux Foundation. (2014) OVSDB OpenStack Guide, https://wiki.opendaylight.org/view/OVSDB:OVSDB_OpenStack_Guide
NEC. (2013) NEC Neutron Plugin, https://wiki.openstack.org/wiki/Neutron/NEC_OpenFlow_Plugin
Neutron. (2014) OpenStack Neutron QoS, https://wiki.openstack.org/wiki/Neutron/QoS
Nikaein, N. (2012) OAI emulation platform, Eurecom, http://svn.eurecom.fr/openair4G/trunk/targets/DOCS/oaiemu.doc
Nyren, R., Edmonds, A., Papaspyrou, A., and Metsch, T. (2011) Open Cloud Computing Interface - Core, Open Grid Forum, http://ogf.org/documents/GFD.183.pdf
ONESource. (2014) DNSaaS installation instructions, https://wiki.mobile-cloud-networking.eu/wiki/DNSaaS_Implementation
ONF. (2014a) ONF Software Defined Networking, https://www.opennetworking.org/sdn-resources/sdn-definition
ONF. (2014b) ONF Optical Transport WG, https://www.opennetworking.org/working-groups/optical-transport
ONF. (2014c) ONF Configuration and Management, https://www.opennetworking.org/working-groups/configuration-management
OpenFlow Switch Specifications, version 1.4.0. (2014) ONF Foundation, https://www.opennetworking.org/images/stories/downloads/sdn-resources/onf-specifications/openflow/openflow-spec-v1.4.0.pdf
OpenStack Neutron. (n.d.) https://wiki.openstack.org/wiki/Neutron
Pizzinat, A. (2013) Fronthaul Solutions, MobileCloud Networking
Van Rossum, G., Warsaw, B., and Coghlan, N. (2001) Style Guide for Python Code, Python Software Foundation, http://legacy.python.org/dev/peps/pep-0008/
SM. (2014) Service Manager Jira - Issues & Roadmap, https://jira.mobile-cloud-networking.eu/browse/SM
Sousa, B. (2014) Towards a High Performance DNSaaS Deployment, presented at the 6th International Conference on Mobile Networks and Management
TCLOUD. (2014) Cloud Controller Jira - Issues & Roadmap, https://jira.mobile-cloud-networking.eu/browse/TCLOUD
TMAAS. (2014) MaaS Jira - Issues & Roadmap, https://jira.mobile-cloud-networking.eu/browse/TMAAS

TNET. (2014) Intra DC connectivity Jira - Issues & Roadmap, https://jira.mobile-cloud-networking.eu/browse/TNET
TPERF. (2014) Performance Jira - Issues & Roadmap, https://jira.mobile-cloud-networking.eu/browse/TPERF
TRAN. (2014) RAN Jira - Issues & Roadmap, https://jira.mobile-cloud-networking.eu/browse/TRAN
Trema. (2014) Sliceable switch tutorial, https://github.com/trema/apps/wiki/sliceable_switch_tutorial
Trove. (2014) OpenStack Trove, https://wiki.openstack.org/wiki/Trove
Vagrant. (2014) Vagrant, http://www.vagrantup.com/
Zabbix. (2013) Project Zabbix, http://www.zabbix.com/
