Network Revolution - Software Defined Networking and Network Function Virtualisation playing their part in the next Industrial Revolution

Diarmuid Ó Briain1,2, David Denieffe1, Yvonne Kavanagh1 and Dorothy Okello2

1 GameCORE Research Centre, Department of Aerospace, Mechanical and Electronic Engineering, Institute of Technology, Carlow, Ireland
2 Department of Electrical and Computer Engineering, College of Engineering, Design, Art and Technology, Makerere University, Kampala, Uganda

Symposium on Transformative Digital Technologies - Kampala 2016
January 28, 2016

Abstract

Networking and telecommunications have so far been spared the major changes that have occurred in computing over the last decade. The Information Technology (IT) world has transformed, first with virtualisation, then the cloud: instantly adaptable and elastic computing. Software Defined Networking (SDN) and Network Function Virtualisation (NFV) are about to bring about instantly adaptable and elastic networking in the same way. SDN is being realised in the data centre today and is about to take the stage in the Wide Area Network (WAN). SDN is the extraction of the control functions from networking equipment hardware, leaving the hardware with only data plane functionality. The control plane functions are migrated as software functions to be run on standard industry hardware or, more often than not, on server instances located on cloud platforms. NFV is a separate but complementary technology that replaces existing functions, typically found on specialised hardware, with virtualised versions of the same functions. These virtualised functions can be delivered on virtual Customer Premises Equipment (vCPE) devices that provide virtualisation locally and/or in concert with cloud based functions at the data centre. This revolution will create the appearance of infinite capacity and permit the expansion of the current scientific, informatics and engineering boundaries to create a Cloud Integrated Network (CIN). CIN, the Internet of Things (IoT) and AuGmented Intelligence (AuGI) will come together in the future to create the perfect storm that will transform human existence in a third industrial revolution [1].

1 Introduction

Over the last ten years or so the landscape in computing has changed dramatically with the Cloud, large-scale data centres and virtualisation. Over the last few years networks have increased in speed and there has been a convergence on Ethernet as the standard for all links, to the point that the difference between Local Area Network (LAN), Metropolitan Area Network (MAN) and Wide Area Network (WAN) has diminished dramatically. What has not changed in that time, however, are the core switching and routing functions, which are generally delivered on hardware based stand-alone devices that are self sufficient in terms of the data they switch or route and the control necessary to make that happen. In a bid to outdo each other and maintain advantage in the market, companies like Cisco, Juniper and HP have loaded their devices with features. Over time this has resulted in network devices that rely on aged protocols like Border Gateway Protocol (BGP) to communicate, and networks with levels of header encapsulation that eat into the Maximum Transfer Unit (MTU) size of the packets. This layering of abstractions on top of other abstractions is not conducive to Network Management, where traffic patterns are decided within each layer independently. It is not uncommon for a packet to arrive at an Internet Service Provider (ISP) network with a Virtual LAN (VLAN) tag, the ISP adding another VLAN tag before passing the packet to an upstream ISP who adds a MultiProtocol Label Switching (MPLS) header as it is switched across their IP network. While the underlying networks have converged towards the all-Ethernet / all-Internet Protocol (IP) model, the number of services has increased rapidly. In the past ISPs provided Internet access in the form of Broadband and possibly layered a voice service on top, either as a circuit switched, out of band telephone line or as a Voice over Internet Protocol (VoIP) service with some packet priority mechanism to give Quality of Service (QoS). In more recent years this service set is increasingly being supplemented with TeleVision over IP (TVoIP), which more often than not requires a separate Set Top Box (STB) for its provision. Network resilience is an important characteristic for ISP network designers, yet duplication of service paths may give the appearance of redundancy where there is in fact none. Consider, for example, a tier three ISP taking service from two independent ISPs, one a tier two provider and the second an incumbent tier one provider, all to ensure path resilience for a customer. The tier one ISP provides an MPLS circuit at the customer end and drops that off at a data centre in a major city; the tier two ISP provides a Network Termination Unit (NTU) at the customer end and drops the traffic off at another data centre in a major city. However, there is generally a limit on the number of actual fibre paths between major cities, and the circuit supplied by the tier two ISP may in fact have a portion passing through the tier one network. From the perspective of the ISP providing the service to the customer there is the appearance of separate paths, yet a strategic failure of a single fibre bundle could expose this.

2 Route Control Platform

A good historical starting point is the Route Control Platform (RCP), proposed [2] in 2004 with the support of AT&T and followed by a design and implementation proposal [3] the following year. These papers proposed and designed a phased approach to solving the problems of convergence and route looping, as well as the difficulties with traffic engineering, that BGP and Internal BGP (iBGP) pose on large networks. As demonstrated in figure 2, the proposal was for an Autonomous System (AS) to use an RCP server to oversee the routing for all iBGP routers by feeding their routing tables directly and preventing the routers from sharing routes between each other. In this way the RCP controller has an overarching view of the routing picture within the AS, treating the AS as a single logical entity, while each router has just to take care of forwarding packets based on the injected routes. Initially the RCP would interact with other ASs using BGP, but as these other ASs implemented the RCP platform a newer, more efficient inter-AS protocol could evolve. Research based on RCP in the AT&T labs produced the Intelligent Route Service Control Point (IRSCP) [4] route control architecture, an RCP implementation. As SDN has evolved over the last few years it has triggered a revisiting of RCP, with a view to uplifting the concept to take advantage of the new SDN architecture developments [5].

Figure 1: SDN Timeline.

Figure 2: Route Control Platform.

3 100 x 100 Clean-slate Project

The United States (US) National Science Foundation's (NSF) Directorate for Computer and Information Science and Engineering (CISE) decided to stimulate innovative thinking in research into the future of Internet Architectures, without the constraints of today's networks [6]. Funding from this initiative from 2003 until 2005 released researchers at a number of US Universities from the boundaries created by existing Internet design decisions, while taking advantage of the benefit of hindsight and the lessons already learned, in a drive to deliver 100 Mb/s to 100 million US households (100 x 100).

3.1 4D Architecture

From this, the idea that the Control and Data planes should be separated was a theme that evolved through the Clean Slate 4D [7] approach, which proposed an architecture with a separate decision plane responsible for management and control, a dissemination plane to control communications from the control entity to the routing devices, a discovery plane to monitor traffic and changes within the topology, plus a data plane to handle the actual traffic. This 4D architecture allows for the direct control of data plane resources by an abstracted decision plane.

4 Internet Clean-Slate Design

A new clean-slate collaborative inter-disciplinary research programme funded by industrial partners was launched at Stanford University in 2006 [8]. This programme was designed to "focus on unconventional, bold, and long-term research that tries to break the network's ossification". The programme was broken into areas:

• Network Architecture
• Accommodating Heterogeneous Applications
• Accommodating Heterogeneous Physical Layers
• Security
• Economics and Policy

Within this project Dr. Martin Casado of the High Performance working group developed Ethane [9]. The outcome of this research was a system, demonstrated in figure 3, that separated the control and data planes, with a controller governed by a policy managing communication between end-hosts, to the point where there are no connections without explicit permission. It also proposed specialised Ethane switches with data-paths managed by a flow table, each flow table entry consisting of a matching Header linked to a corresponding Action (a toy model of this match-action lookup is sketched after the list of principles below). Ethane was designed around three fundamental principles:

• The network should be governed by policies declared over high-level names.
• Network routing should be policy-aware.
• The network should enforce a strong binding between a packet and its origin.

Figure 3: Ethane.
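As an illustration of that match-action model, the following self-contained Python sketch walks a flow table in priority order and returns the action of the first matching entry. The field names and values are invented for illustration; they are not taken from Ethane.

```python
# Toy flow table: an ordered list of (header match, action) pairs.
# The first entry whose fields all match the packet decides the action.
FLOW_TABLE = [
    ({"dl_type": 0x0800, "nw_dst": "10.0.0.2"}, "output:2"),  # IPv4 to host 2
    ({"dl_type": 0x0806}, "output:flood"),                    # flood ARP
    ({}, "drop"),                                             # table-miss: drop
]

def lookup(packet: dict) -> str:
    """Return the action of the first entry whose match fields are all
    present in the packet with the same values."""
    for match, action in FLOW_TABLE:
        if all(packet.get(field) == value for field, value in match.items()):
            return action
    return "drop"  # unreachable while the table ends with a catch-all entry

print(lookup({"dl_type": 0x0800, "nw_dst": "10.0.0.2"}))  # -> output:2
print(lookup({"dl_type": 0x0800, "nw_dst": "10.0.0.9"}))  # -> drop
```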

5 OpenFlow and Open vSwitch

One issue that Ethane raised was the need for hardware with access to the flow tables directly from a controller. This led to further work which resulted in the development of OpenFlow [10], a simple protocol that a controller can use over a secure channel (Transport Layer Security (TLS) over TCP/6633) to modify the flow table in a supporting switch; this channel is a South Bound Interface (SBI). Further work on this, and an initial specification [11] in 2008 for a virtual switch daemon (vswitchd) produced for the GNU/Linux kernel, evolved into the Open virtual Switch (OvS) [12] project, now an Open Source project under the Apache 2 license which has reached v2.4 (as of April 2015). The overall SDN architecture is demonstrated in figure 4, with the Data and Control planes linked by OpenFlow and services offered to the Application plane via a RESTful Application Program Interface (API). OpenFlow itself has also evolved, coming under the management of the Open Networking Foundation (ONF) [13], founded in 2011 for the promotion and adoption of SDN through open standards development; it has reached version 1.5.1 [14] (as of April 2015).

Figure 4: SDN Architecture.
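Every OpenFlow session begins with an exchange of HELLO messages over that TCP/6633 channel. The sketch below is a minimal Python illustration of the controller side of the handshake, assuming plain TCP rather than TLS and a hypothetical switch connecting in; it is not a complete OpenFlow endpoint.

```python
import socket
import struct

# Every OpenFlow message begins with an 8-byte header:
# version (uint8), type (uint8), length (uint16), xid (uint32).
# OFPT_HELLO is message type 0; version 0x01 identifies OpenFlow 1.0.
OFP_VERSION_1_0 = 0x01
OFPT_HELLO = 0

def ofp_hello(xid: int = 1) -> bytes:
    # '!BBHI' packs the header fields in network byte order.
    return struct.pack("!BBHI", OFP_VERSION_1_0, OFPT_HELLO, 8, xid)

# The switch opens the channel to the controller, so the controller side
# listens on TCP/6633 (a production deployment would wrap the socket in TLS).
with socket.create_server(("0.0.0.0", 6633)) as server:
    conn, addr = server.accept()
    with conn:
        version, mtype, length, xid = struct.unpack("!BBHI", conn.recv(8))
        print(f"switch {addr[0]}: message type {mtype}, OpenFlow 0x{version:02x}")
        conn.sendall(ofp_hello(xid))  # answer with our own HELLO
```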


6 SDN Controller development

Now that a standard SBI existed, the evolution of controllers, as well as work on a North Bound Interface (NBI), became important. Network Operating System (NOX) [15], a C++ based first generation controller [16], was developed by Nicira Networks and donated to the research community. A Python version of the NOX Controller called POX [17] was developed for rapid development and prototyping. Another Python based SDN Controller is RYU [18] (Japanese: flow), available under the Apache 2.0 license; it has OpenStack integration and supports OpenFlow 1.0 – 1.4 plus the Nicira extensions. RYU has a Web Server Gateway Interface (WSGI) and, using this function, it is possible to create a REST (RESTful) API [19], a useful NBI with which to link other systems or browsers in an application tier. Project Floodlight [21], a commercial grade Java SDN Controller developed by Big Switch Networks, evolved from a Java based research SDN Controller called Beacon [20]. This project code is also Apache 2 licensed and, like RYU, it has a RESTful API. The other big SDN Controller is a Linux Foundation collaborative project called OpenDaylight (ODL) [22], developed in Java. The latest version of the platform, designated Helium, is a follow-on from the first release of ODL, called Hydrogen. This project was designed to take advantage of existing Linux Foundation projects, with integration with OpenStack as well as developments in high availability, clustering and security. The ODL OpenFlow plugin supports OpenFlow versions 1.0 and 1.3. As with RYU and Project Floodlight, an application tier is made possible through a RESTful API, as well as an Authentication, Authorisation and Accounting (AAA) AuthN filter.
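To ground this, the sketch below is a minimal RYU application following RYU's standard OpenFlow 1.3 application conventions: when a switch connects, it installs a lowest-priority table-miss flow that punts unmatched packets to the controller. The class name is chosen here for illustration; treat this as a starting-point sketch rather than a complete controller.

```python
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3


class TableMissController(app_manager.RyuApp):
    """Install a table-miss flow entry on every switch that connects."""

    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_features(self, ev):
        dp = ev.msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        match = parser.OFPMatch()              # empty match: match everything
        actions = [parser.OFPActionOutput(ofp.OFPP_CONTROLLER,
                                          ofp.OFPCML_NO_BUFFER)]
        inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
        # Priority 0, so any more specific flow entry overrides this one.
        dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                      match=match, instructions=inst))
```

Saved as, say, `table_miss.py` (an illustrative filename), it would be launched with `ryu-manager table_miss.py`.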

6.1 NBI Developments

As SDN evolves it has become apparent that new NBI mechanisms are required to meet the diverse applications that will call on the SDN Controller. The Frenetic Project [23] raises the level of abstraction for programming SDNs through the development of simple, reusable, high-level abstractions and efficient runtime systems that automatically generate and install the corresponding low-level rules on SDN switches. Pyretic [24] is a Frenetic Project implementation embedded in Python.
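The core idea is that policies are first-class values that compose. The fragment below is not Pyretic's actual API but a self-contained toy model of the same style, where a predicate guards a policy with `>>` and `|` combines policies in parallel; a runtime system like Pyretic's would compile such a policy into low-level flow table rules.

```python
# Toy model of Frenetic/Pyretic-style policy composition (illustrative only,
# not the real Pyretic API). A policy maps a packet to a set of output ports.

class Policy:
    def __init__(self, fn):
        self.fn = fn                                  # packet -> set of ports

    def __call__(self, pkt):
        return self.fn(pkt)

    def __or__(self, other):
        # Parallel composition: apply both policies, union their outputs.
        return Policy(lambda pkt: self(pkt) | other(pkt))

def match(**fields):
    """Predicate: `match(...) >> policy` applies the policy only to packets
    whose fields carry the given values; everything else gets no ports."""
    class Guard:
        def __rshift__(self, policy):
            return Policy(lambda pkt: policy(pkt)
                          if all(pkt.get(k) == v for k, v in fields.items())
                          else set())
    return Guard()

def fwd(port):
    return Policy(lambda pkt: {port})

# HTTP out port 2, SSH out port 1; anything else matches neither and is dropped.
policy = (match(tp_dst=80) >> fwd(2)) | (match(tp_dst=22) >> fwd(1))
print(policy({"tp_dst": 80}))   # -> {2}
print(policy({"tp_dst": 443}))  # -> set() (drop)
```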

7 SS7 in the telephony industry

The changes being witnessed in the migration from traditional networking to SDN are analogous to the changes in the telephony industry in the late 1970s and 80s. Telephony switches were interconnected with R2 [25] Channel Associated Signalling (CAS), typically on E1 links (G.732 [26] multiplexing with G.704 [27] framing). Like the Internet today, the telephone switches of that era communicated both the signalling and bearer channels over the same physical links.

Figure 5: Signalling System No. 7 (SS7).

In the 1970s Signalling System No. 7 [28] Common Channel Signalling (CCS) was developed to separate the signalling from the bearer channels. This released the control from the telephone switches, allowing the operators to deliver richer, centralised services known as Intelligent Network (IN) [29] services. Referring to figure 5, the links between the switches carry the bearer channels, the voice equivalent of the Data Plane, and are called Inter Machine Trunks (IMT). From a signalling perspective each switch contains an entity called a Service Switching Point (SSP), which performs the call processing by interacting with the connected SS7 Signal Transfer Points (STP). These STPs act as SS7 routing devices, passing SS7 messages between SSPs, Service Control Points (SCP) and other STPs. The SCPs offer telephony services on the IN. SS7 is, in effect, a Control Plane while IN is a network of telephony services.

8 Network Function Virtualisation

At the SDN & OpenFlow World Congress in Darmstadt, Germany in October 2012, a group of Tier 1 service providers launched an initiative called NFV [30]. These operators could see that virtualisation and cloud computing could change the way services are delivered on networks, through the consolidation and virtualisation of network equipment on industry standard high volume servers, as can be seen in the NFV concept in figure 6. Functions could be migrated to centralised virtualised infrastructure, with the facility to also push virtualisation of functions right out to the end user premises.


Figure 6: Network Function Virtualisation (NFV).

While SDN and NFV are complementary they are not as yet interdependent, and they can therefore be operated either together or independently. Obviously, moving functions that were heretofore based on specialist hardware presents a number of challenges, such as:

• the portability of functions to a virtualised system and their interoperability with existing infrastructure.
• the performance trade-off between standards based hardware and specialised, function specific hardware.
• the interaction of the Management and Network Orchestration (MANO) of the distributed functions with the network, using the benefits of automation to achieve the transformational aspects of NFV.
• the integration of functions into the overall NFV ecosystem and their coexistence with legacy systems.
• the security and stability challenges that have evolved as a result of cloud computing and virtualisation; further security challenges will evolve from this new networking system.

The benefits of NFV, however, make the case for migration so compelling that without doubt it will form the core of services offered by service providers well into the future. Hardware-based appliances have a specific life, which is getting shorter and shorter with the rapid pace of development, and they need regular replacement. This complicates maintenance procedures and customer support with no financial benefit to the service provider. NFV will transform the design of the network by implementing these functions in software, many of them processed centrally, thereby allowing their operation to be migrated and backed up as needed. This will reduce equipment costs and reduce power consumption, thanks to power management features in standard servers and storage, while eliminating the need for specific hardware. Services can be scaled up and down in a similar fashion to that provided by cloud services today. The IT MANO mechanisms familiar today in cloud services will facilitate the automatic installation and scaling of capacity by building Virtual Machines (VM) or containers to meet demand. In this way traffic patterns and service demand can be met in an automated and managed fashion. As a result the service provider can increase the speed to market of existing NFVs and also decrease the time it takes to innovate new services and deliver them on the virtualised infrastructure.

Figure 7: NFV Ecosystem.

Figure 7 shows the overall NFV ecosystem [31]. The underlying infrastructure is collectively called the Network Function Virtualisation Infrastructure (NFVI) and it consists of three domains: Network, Compute and Hypervisor/Virtualisation. The Network Domain consists of islands of switches with SDN Controllers, or a traditional routed and switched network. The computing hardware and storage necessary to support the upper layers form the Compute Domain. The final domain in the NFVI is the Hypervisor/Virtualisation Domain, which contains the virtualisation hypervisors and VMs. This can be built using existing hypervisors like KVM, Xen and VMware, or using container technology like Docker. These NFVI domains are managed by a Virtual Infrastructure Manager (VIM). A Virtual Network Function Manager (VNFM) controls the building of individual Virtual Network Functions (VNF) on the VMs. MANO performs the overall management of the VIM, VNFM and Operations Support Systems (OSS) / Business Support Systems (BSS), allowing the service provider to quickly deploy and scale VNF services and to provide and scale resources for VNFs. This system reduces administrator workloads and removes the need for manual administration tasks. It also offers APIs and other tooling extensions to integrate with existing environments.
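As a concrete taste of the Hypervisor/Virtualisation Domain, the sketch below uses the libvirt Python bindings (libvirt being one of the tools named above) to boot a VM that could host a VNF. The domain name, sizing and disk path are invented for illustration, and a real VIM would drive many such calls through an orchestration layer rather than a one-off script.

```python
import libvirt  # Python bindings for the libvirt virtualisation API

# Minimal domain description for a VM that could host a VNF; the name,
# memory/vCPU sizing and disk image path are illustrative assumptions.
DOMAIN_XML = """
<domain type='kvm'>
  <name>vnf-router-01</name>
  <memory unit='MiB'>1024</memory>
  <vcpu>1</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/vnf-router-01.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='default'/>
    </interface>
  </devices>
</domain>
"""

conn = libvirt.open("qemu:///system")    # connect to the local KVM hypervisor
try:
    dom = conn.createXML(DOMAIN_XML, 0)  # define and start the VM in one step
    print(f"started {dom.name()}, id {dom.ID()}")
finally:
    conn.close()
```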

8.1 Providing NFV to the customer

Figure 8 demonstrates the benefit that ubiquitous high speed broadband gives to the service provider: the ability to supply a vCPE [31] at the customer premises, upon which VNFs can be offered.

Figure 8: Virtual CPE (vCPE).

Current services that can be converted into NFV style services include:


• Router.
• Session Border Controller (SBC).
• Load Balancer.
• Network Address Translation (NAT).
• Home Gateway (HG).
• Application Acceleration.
• Traffic Management.
• Firewall.
• Deep Packet Inspection (DPI).
• Bulk Encryption.
• Content Caching.
• Session Initiation Protocol Gateway (SIP-GW).

This, however, is just the beginning; these services already exist on traditional deployment mechanisms. The fact that virtualisation will now be available in the vCPE at the customer premises means that a service provider can deploy new services not yet envisaged, and deploy services on a trial basis, all without equipment changes.

8.2 NFV Standards

After the initial white paper from the Darmstadt, Germany Call for Action in 2012 it was decided to form an Industry Specification Group (ISG) under the European Telecommunications Standards Institute (ETSI). Phase 1 of this group was to "drive convergence on network operator requirements for NFV to include applicable standards, where they already exist, into industry services and products to simultaneously develop new technical requirements with the goal of stimulating innovation and fostering an open ecosystem of vendors" [32]. The group issued a progress White Paper in October 2013 [31] and a final paper in October 2014 [33], which drew attention to the second release of ETSI NFV ISG documents [34], subsequently published in January 2015. December 2014 was considered to be the end of phase 1 and phase 2 was launched. This saw some reorganisation of the ISG NFV working groups, to focus less on requirements and more on adoption. The key areas addressed include:

• Stability, Interoperability, Reliability, Availability, Maintainability.
• Intensified collaboration with other bodies.
• Testing and validation to encourage interoperability and solidify implementations.
• Definition of interfaces.
• Establishment of a vibrant NFV ecosystem.
• Performance and assurance considerations.
• Security.

8.3 Open Platform NFV

The Linux Foundation established a Collaborative Project called 'Open Platform NFV (OPNFV)' in October 2014 [35]. The project intent is to provide a Free and Open-Source Software (FOSS) platform for the deployment of NFV solutions that leverages investment from a community of developers and solution providers. The initial focus of the OPNFV will be the NFVI and VIM. In reality this means the OPNFV will focus on building interfaces between existing FOSS projects like those listed below. Creating these interfaces between what are essentially existing elements, to create a functional reference platform, will be a major win for the technology and will certainly contribute to the goals of phase 2 of the ETSI NFV ISG.

• Virtual Infrastructure Management: OpenStack, Apache CloudStack, ...
• Network Controller and Virtualisation Infrastructure: OpenDaylight, ...
• Virtualisation and hypervisors: KVM, Xen, libvirt, LXC, ...
• Virtual forwarder: OvS, Linux bridge, ...
• Data-plane interfaces and acceleration: Data Plane Development Kit (DPDK), Open Dataplane (ODP), ...
• Operating System: GNU/Linux, ...

9 Ongoing research

SDN is at an early stage of development. The Open Networking Research Center (ONRC) [36] at UC Berkeley and Stanford University has been created to help realise the potential of SDN.


The IETF has a Software-Defined Networking Research Group (SDNRG) [37] with the stated goal of identifying approaches that can be defined, deployed and used in the near term, as well as identifying future research challenges. The IETF also has a Network Function Virtualisation Research Group (NFVRG) to focus on research problems associated with NFV-related topics and to build a research community to address them. The Linux Foundation believes that, with the projects it already has in place, it is in a perfect position to bring these together as a new project, Open Platform NFV (OPNFV), to accelerate NFV [35]. Dr. James Kempf of Ericsson believes that NFV and SDN have traversed the peak of inflated expectations and are starting down the trough of disillusionment [38]. However, he considers the OPNFV initiative of the Linux Foundation to be a complementary effort to the existing OpenDaylight and OpenStack projects. He believes that there is a lot of work yet to be done before reaching the slope of enlightenment, and considers that SDN will be confined to the data centre for some time to come.

10 Conclusion

The networking industry did not change significantly during the last decade; innovation was confined to port speed increases and the migration to an all-Ethernet environment. At the customer end, the roll-out of ubiquitous broadband is progressing steadily and service providers have migrated their cores from Asynchronous Transfer Mode (ATM) to all-IP networks. Over the same period there has been a revolution in computing with the roll-out of cloud based services driven by advances in virtualisation. The innovation that brought about the cloud is about to enter networking and telecommunications in the form of SDN and NFV. Initial penetration of SDN has started in the data centres, where the cost base of traditional Ethernet switches and routers has driven the adoption of the new technology. As phase 2 of the NFV ISG provides working solutions over the next two years, it is expected that they will be adopted by service providers who can then deliver new services cost effectively. This will provide a better and more functionally rich service to their customers and, particularly in the case of SMEs, the ability to offload existing functions to their provider rather than managing them within their own IT departments. What of the future, and how will these revolutionary networking paradigms, together with developments in IoT, AuGI and even Artificial Intelligence (AI), shape it? What can be said is that they will certainly transform it, and most likely result in a third industrial revolution. Whether the transformation will ultimately mean that the perfect storm augments human intelligence [1] or, as warned by Stephen Hawking and others, poses a risk that an increasingly interconnected computing world combined with AI systems exceeding human intelligence could replace human intelligence in a near-future singularity event [39], remains to be seen.

References

[1] M. K. Weldon, The Future X Network: A Bell Labs Perspective. CRC Press, 2015.

[2] N. Feamster, H. Balakrishnan, J. Rexford, A. Shaikh, and J. Van Der Merwe, "The case for separating routing from routers," in Proceedings of the ACM SIGCOMM workshop on Future directions in network architecture, pp. 5–12, ACM, 2004.

[3] M. Caesar, D. Caldwell, N. Feamster, J. Rexford, A. Shaikh, and J. van der Merwe, "Design and implementation of a routing control platform," in Proceedings of the 2nd conference on Symposium on Networked Systems Design & Implementation - Volume 2, pp. 15–28, USENIX Association, 2005.

[4] P. Verkaik, D. Pei, T. Scholl, A. Shaikh, A. C. Snoeren, and J. E. Van Der Merwe, "Wresting Control from BGP: Scalable Fine-Grained Route Control," in USENIX Annual Technical Conference, pp. 295–308, 2007.

[5] C. E. Rothenberg, M. R. Nascimento, M. R. Salvador, C. N. A. Corrêa, S. Cunha de Lucena, and R. Raszuk, "Revisiting routing control platforms with the eyes and muscles of software-defined networking," in Proceedings of the first workshop on Hot topics in software defined networks, pp. 13–18, ACM, 2012.

[6] National Science Foundation (NSF), "NSF Future Internet Architecture Project," June 2015.

[7] A. Greenberg, G. Hjalmtysson, D. A. Maltz, A. Myers, J. Rexford, G. Xie, H. Yan, J. Zhan, and H. Zhang, "A clean slate 4D approach to network control and management," ACM SIGCOMM Computer Communication Review, vol. 35, no. 5, pp. 41–54, 2005.

[8] N. McKeown and B. Girod, "Clean slate design for the Internet," Whitepaper, Apr. 2006.

[9] M. Casado, M. J. Freedman, J. Pettit, J. Luo, N. McKeown, and S. Shenker, "Ethane: Taking control of the enterprise," in ACM SIGCOMM Computer Communication Review, vol. 37, pp. 1–12, ACM, 2007.

[10] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker, and J. Turner, "OpenFlow: enabling innovation in campus networks," SIGCOMM Comput. Commun. Rev., vol. 38, no. 2, pp. 69–74, 2008.

[11] B. Heller, "OpenFlow switch specification version 1.0.0 (wire protocol 0x01)," December 2009.

[12] "Open vSwitch."

[13] "Open Networking Foundation (ONF)."


[14] Open Networking Foundation (ONF), "OpenFlow Switch Specification, Version 1.5.1 (Protocol version 0x06)," Mar. 2015.

[15] "NOX · GitHub."

[16] N. Gude, T. Koponen, J. Pettit, B. Pfaff, M. Casado, N. McKeown, and S. Shenker, "NOX: towards an operating system for networks," ACM SIGCOMM Computer Communication Review, vol. 38, no. 3, pp. 105–110, 2008.

[17] "POX · GitHub."

[18] RYU project team, "RYU SDN Framework - Ryubook 1.0 documentation."

[19] T. Fredrich, RESTful Service Best Practices. Pearson eCollege, 2012.

[20] D. Erickson, "The Beacon OpenFlow controller," in Proceedings of the second ACM SIGCOMM workshop on Hot topics in software defined networking, pp. 13–18, ACM, 2013.

[21] Big Switch Networks, "Project Floodlight."

[22] Linux Foundation, "The OpenDaylight Platform."

[23] N. Foster, A. Guha, M. Reitblatt, A. Story, M. J. Freedman, N. P. Katta, C. Monsanto, J. Reich, J. Rexford, and C. Schlesinger, "Languages for software-defined networks," Communications Magazine, IEEE, vol. 51, no. 2, pp. 128–134, 2013.

[24] J. Reich, C. Monsanto, N. Foster, J. Rexford, and D. Walker, "Modular SDN programming with Pyretic," Technical Report of USENIX, 2013.

[25] ITU-T, "Q.400–Q.490: Specifications of Signalling System R2," 1988.

[26] ITU-T, "G.732: Characteristics of primary PCM multiplex equipment operating at 2048 kbit/s," 1988.

[27] ITU-T, "G.704: Synchronous frame structures used at 1544, 6312, 2048, 8448 and 44 736 kbit/s hierarchical levels," 1998.

[28] ITU-T, "Q.700: Introduction to CCITT Signalling System No. 7," 1994.

[29] ITU-T, "Q.1200: General series Intelligent Network Recommendation structure," 1999.

[30] M. Chiosi, D. Clarke, P. Willis, A. Reid, J. Feger, M. Bugenhagen, W. Khan, M. Fargano, C. Cui, and H. Deng, "Network functions virtualisation: An introduction, benefits, enablers, challenges and call for action," in SDN and OpenFlow World Congress, pp. 22–24, 2012.

[31] ETSI, "Network Functions Virtualisation (NFV); Use Cases," V1.1.1, Oct. 2013.

[32] ISG on NFV, "ISG NFV Proposal," Oct. 2012.

[33] ETSI, "Network Functions Virtualisation - Network Operator Perspectives on Industry Progress," Updated White Paper, 2014.

[34] ETSI, "Network Functions Virtualisation."

[35] Linux Foundation, "OPNFV - An open platform to accelerate NFV," Oct. 2014.

[36] ONRC, "Open Networking Research Center (ONRC)."

[37] IETF, "IRTF Software-Defined Networking Research Group (SDNRG)."

[38] J. Kempf, "NFV and SDN: Has the Hype Curve Peaked?," Jan. 2014.

[39] S. Hawking, S. Russell, M. Tegmark, and F. Wilczek, "Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence - but are we taking AI seriously enough?'," The Independent, 2014.
