OpenFlow Supporting Inter-Domain Virtual Machine Migration

Bochra Boughzala*, Racha Ben Ali†, Mathieu Lemay‡, Yves Lemieux† and Omar Cherkaoui*
* University of Quebec at Montreal. Email: [email protected], [email protected]
† Ericsson, Montreal, Canada. Email: [email protected], [email protected]
‡ Inocybe, Canada. Email: [email protected]
Abstract—Today, Data Center Networks (DCNs) are being re-architected in order to alleviate several emergent issues related to server virtualization and new traffic patterns, such as the limitation of bisection bandwidth and workload migration. However, these new architectures will remain either proprietary or hidden inside administrative domains, and interworking protocols will remain in the standardization process for longer than the usually required time to market. Therefore, interworking cloud DCNs to provide federated clouds is a very challenging issue that can potentially be alleviated by a software-defined networking (SDN) approach such as OpenFlow. In this paper, we propose a network infrastructure as a service (IaaS) software middleware solution based on OpenFlow in order to abstract the DCN architecture specificities and instantly interconnect DCNs. As a proof of concept, we implement an experimental scenario dealing with virtual machine migration. Then, we evaluate the network setup and the migration delay. The use of the IaaS middleware allows automating these operations. OpenFlow solves the problem of interconnecting heterogeneous Data Centers and its implementation offers interesting delay values.
I. INTRODUCTION

Emergent Data Center Networks (DCNs) are based on specific architectures and topologies which make them hard to interwork and interconnect. In fact, traditional DCNs are usually re-architected to alleviate several issues raised by the introduction of server virtualization and the emergence of new application traffic patterns in the clouds. Among these issues we cite the limitation of the bisection bandwidth, its inefficient utilization by spanning tree protocols, and the challenging support of live migration of virtual machines. Therefore, traditional architectures, topologies and protocols are redesigned to alleviate these issues. For instance, a scale-out architecture of commodity switches with a fat-tree topology is able to provide full bisection bandwidth when properly combined with a multipath protocol such as Equal Cost Multi-Path (ECMP). Furthermore, an identifier/locator split addressing topology provides: (1) a scalable addressing scheme for a high number of virtual machines (VMs) by summarizing hierarchical locators of physical nodes, and (2) agile VM migration by simply remapping the VM identifier to its new locator.
However, the specificities of these solutions are either proprietary, remain hidden inside administrative domains, or take a long time to be standardized, which is a major obstacle to rapidly deploying innovative services. One example is mobile thin-client applications that require a very low delay; in that case, inter-domain VM migration to the cloud domain nearest to the user is the solution that minimizes latency and provides a good user experience for the thin client. However, this solution cannot be implemented if several specifically re-architected cloud DCNs cannot interwork for that purpose. Another example is cloud bursting during unplanned traffic surges, where the excessive workload can be migrated to underutilized clouds in different time zones, or to clouds located in regions with low energy costs. This is often referred to as "follow the sun and follow the wind". Consequently, providing federated clouds by interworking and properly interconnecting cloud DCNs will enable a plethora of new innovative services. One attractive solution that we adopt in this paper is the design of a network infrastructure as a service (IaaS) software middleware to provide this interworking. In fact, our software-based approach relies on OpenFlow software-defined networking (SDN) in order to provide the flexibility required to adapt to DCN specificities with a very rapid deployment between administrative domains. Furthermore, DCN equipment vendors usually provide proprietary closed solutions that are targeted to provide optimal performance within the same administrative domain, without any interworking with other cloud domains built using equipment from other vendors. Therefore, it is critical to abstract DCN resources in order to interconnect them even though they are very diverse and hidden from external domains. Using OpenFlow, we propose a generic solution for this abstraction based on a generic IaaS framework. We evaluate our solution using a test-bed experimentation with two DCN topologies inspired by already proposed architecture models. To describe the Data Center topology and its internal architecture, Clos models and their special instance, the fat-tree, are usually used. Although each model solves a given problem, all of these designs share the same purpose of introducing new connectivity features and optimizing Data Center performance,
scalability and, in some cases, energy consumption (e.g., ElasticTree). In most fat-tree topology models, there are at least three levels and the number of ports of each switch is usually denoted by the fat-tree parameter "k". Furthermore, in all these models the DCN is usually a multi-rooted tree. Switches with different port densities and speeds compose the different levels of each tree. Core switches are placed as root nodes, aggregate switches are placed at the level below the roots, and at the lowest level of the tree we find the top-of-rack (ToR) switches that are directly connected to the racks of physical servers. On top of this general topology model, several DCN schemes were proposed for interconnecting network elements within the Data Center, each satisfying a different set of requirements. PortLand [1] and VL2 [2] are among the most popular DCN schemes referenced in the literature. PortLand [1] uses a fat-tree as its DCN topology. In this fat-tree, illustrated in Figure 1, link redundancy, and therefore aggregate bandwidth, increases as the root is approached from the aggregate level.
Fig. 1. Fat Tree
PortLand aims to avoid switch configuration by modifying all switches to forward traffic based on position-dependent pseudo-MAC (PMAC) headers. By addressing the VMs in this topology with the PMAC pattern pod.position.port.vmid, longest prefix matches can be used to reduce the forwarding state in switches. However, VMs do not need to know about PMACs, since traditional actual MACs (AMACs) are translated to PMACs, and vice versa, at the ToR switches performing this MAC rewriting. A centralized fabric manager maintains the PMAC-to-AMAC mappings and a global connectivity matrix. VL2 [2] is based on a Clos topology with multi-rooted trees. The roots of the trees are called intermediate switches. All switches in VL2 are kept unmodified. Valiant Load Balancing (VLB) and Equal Cost Multi-Path (ECMP) based on IP-in-IP encapsulation are used to balance the traffic fairly over the available links. VL2 also introduces host agents and a directory system acting as a centralized network manager and controller. In these DCN schemes, the scale-out topology is not always energy efficient; therefore ElasticTree [3], illustrated in Figure 2, was proposed to automatically switch links off and on depending on network usage. A power consumption reduction can be achieved using this scheme; however, this has to be traded off against performance. In [4], the authors provide a platform called Ripcord for rapidly prototyping, testing, and comparing different DCN schemes. Ripcord provides a common infrastructure and a set of libraries allowing quick prototyping of new schemes.
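As a concrete illustration of the PMAC addressing described above for PortLand, the following minimal Python sketch encodes a pod.position.port.vmid tuple into a 48-bit pseudo-MAC and keeps the AMAC-to-PMAC mapping that a ToR switch (or the fabric manager) would maintain. The byte layout chosen here (16 bits for the pod, 8 bits each for position and port, 16 bits for the VM id) is our own illustrative assumption, not the exact PortLand encoding.

```python
# Illustrative PMAC helper; the field widths below are an assumption,
# not the exact PortLand bit layout.
POD_BITS, POS_BITS, PORT_BITS, VMID_BITS = 16, 8, 8, 16

def encode_pmac(pod, position, port, vmid):
    """Pack pod.position.port.vmid into a 48-bit pseudo-MAC string."""
    value = (((pod << POS_BITS | position) << PORT_BITS | port)
             << VMID_BITS | vmid)
    raw = value.to_bytes(6, "big")
    return ":".join(f"{b:02x}" for b in raw)

class ToRMacRewriter:
    """Keeps the AMAC<->PMAC mappings a ToR switch would apply."""
    def __init__(self):
        self.amac_to_pmac = {}
        self.pmac_to_amac = {}

    def register_vm(self, amac, pod, position, port, vmid):
        pmac = encode_pmac(pod, position, port, vmid)
        self.amac_to_pmac[amac] = pmac
        self.pmac_to_amac[pmac] = amac
        return pmac

# Example: a VM with actual MAC 52:54:00:12:34:56 in pod 1, position 2,
# switch port 3, VM id 7.
tor = ToRMacRewriter()
print(tor.register_vm("52:54:00:12:34:56", pod=1, position=2, port=3, vmid=7))
```

Because the pod and position fields occupy the most significant bits, a single longest-prefix match on them covers every VM behind one ToR, which is the point of the PMAC scheme.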
Fig. 2. Elastic Tree
Considering the different requirements of different service providers, heterogeneous DCN architectures (VL2, PortLand, ElasticTree, etc.) will co-exist in different DCNs, making their interconnection hard. Therefore, in our work we provide some directions for establishing connectivity between heterogeneous DCNs adopting different designs. In our approach, we identify the common characteristics of these heterogeneous DCN schemes and abstract them to a generic level so that they can be easily represented using high-level policy rules that are translated to low-level OpenFlow rules. These rules are pushed, preferably proactively, to the DCN elements to build the inter-DCN connectivity. This abstraction of connectivity, regardless of topologies, OSI layers and their related protocols, network vendors, etc., is achieved thanks to an OpenFlow-based DCN model described in the next section.

A. OpenFlow-based DCNs

OpenFlow (OF) [5] is a practical approach that opens the forwarding plane hardware resources (the forwarding table in OF 1.0) of different vendors' switches to control by a remote, separate OF controller. The communication between the controller and the switch forwarding plane is done using the OpenFlow protocol over a secure TCP channel. Since DCN schemes are designed to bypass existing control plane protocols (simple layer-2 forwarding with spanning tree loop avoidance), OpenFlow appears to be an attractive and easy solution for prototyping and implementing the new connectivity features of these DCN schemes. Besides, since the OF controller is centralized, it has a global view of the whole network, and therefore end-to-end forwarding paths, either optimized or customized for a specific requirement, can be easily established. For instance, in order to support a VM migration between two heterogeneous DCNs without connectivity interruption, specific forwarding paths can be re-routed by directly pushing the suitable OF rules into the suitable switch forwarding tables. In this case, we assume that these direct OF rules related to inter-DCN VM migration do not conflict with existing rules related to the specific DCN scheme connectivity; this can be ensured using a specific policy rule conflict resolver. A feature that is usually missing in DCN schemes is the isolation of the DCN connectivity between multiple tenants of the cloud. OpenFlow, using a Flowvisor [6], is able to provide basic virtualization of the DCN, which can be sliced into different flowspaces. Furthermore, in a virtualization context, OpenFlow-enabled forwarding elements can also include software (possibly hardware-accelerated) virtual switches at the hypervisor level.
Therefore, an OpenFlow-based DCN is composed of the following elements.
- The OpenFlow controller can be hosted on a separate server reachable over an IP network in the control plane. The controller dictates the behavior of the OF switches connected to it by populating their flow tables, either proactively when establishing basic DCN connectivity or reactively when a new flow arrives. In particular, in the reactive mode, when a packet arrives at an OF switch that has not yet installed an OF rule matching that packet, the packet is sent to the controller, which decides what to do with it. The controller then pushes the OF rule into the appropriate switches so that subsequent packets of the same flow are not sent to the controller again (a sketch of this exchange follows this list). The concept of a flow has a very wide definition and its granularity spans from a very fine-grained flow, such as a particular TCP connection, to a coarse-grained flow, such as a VLAN or an MPLS label-switched path. A widely used open-source OF controller is NOX [7].
- The OpenFlow Switches (OFS) are the clients of the controller. They join the OF network by connecting to their controller over a secure TCP channel and exchanging Hello messages. The OpenFlow protocol specifies the message exchanges between the OFS and the OF controller. To maintain connectivity in the absence of network traffic, the controller and the switch exchange an 'echo request' and an 'echo reply' every 5 seconds. For every new arriving flow of packets, the switch encapsulates the first packet of the flow inside an OpenFlow message (called packet-in) and sends it to the controller. The controller responds with 'packet-out' and 'flow-mod' messages to set up the flow entry corresponding to this flow of packets.
- The OpenFlow Virtual Switches (OVS) [8] are software virtual switches that reside in the hypervisor space and understand the OpenFlow protocol. Instead of connecting physical machines, they connect virtual machines on the same hypervisor. A widely used open-source virtual switch is Open vSwitch, which targets open-source hypervisors such as QEMU, KVM or Xen.
- The Flowvisor [6] acts as an OF proxy server between the switches and multiple controllers, each controlling a specific flowspace of the network. Slices can be partitioned using flowspaces and controlled by different controllers. We may also define an overlapping flowspace for a single controller that monitors part of, or the whole, physical network. In the case of a sliced network, each slice or virtual network has its own controller.
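To make the reactive packet-in / flow-mod exchange described above concrete, the following Python sketch shows the control logic only. It is written against a deliberately minimal, hypothetical controller interface (the send_flow_mod and send_packet_out callbacks are placeholders, not the NOX API): the controller learns MAC locations from packet-in events and installs a forwarding entry so that the rest of the flow is handled in the data plane.

```python
# Minimal reactive learning-switch logic; send_flow_mod and send_packet_out
# are hypothetical placeholders for a real controller API.
FLOOD = -1                      # pseudo output port meaning "flood"
mac_table = {}                  # (dpid, mac) -> port where that MAC was seen

def handle_packet_in(dpid, in_port, src_mac, dst_mac,
                     send_flow_mod, send_packet_out):
    """Called by the controller for each packet-in from switch `dpid`."""
    # Learn where the source MAC lives on this switch.
    mac_table[(dpid, src_mac)] = in_port

    out_port = mac_table.get((dpid, dst_mac), FLOOD)
    if out_port != FLOOD:
        # Install a flow entry so the switch forwards the rest of the flow
        # itself (flow-mod), then release the buffered packet (packet-out).
        match = {"in_port": in_port, "dl_src": src_mac, "dl_dst": dst_mac}
        send_flow_mod(dpid, match, actions=[("output", out_port)],
                      idle_timeout=10)
    send_packet_out(dpid, in_port, actions=[("output", out_port)])
```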
OpenFlow thus appears to be an attractive and flexible way to define the forwarding logic of switches in DCNs using different schemes such as VL2 and PortLand. However, it may reveal some scalability limitations due to the huge number of data plane forwarding rules that have to be established. Therefore, the complexity in terms of the number of forwarding rules is evaluated in a later section of this paper. This OpenFlow environment is evaluated through an experimentation based on several topologies with different levels of complexity. We evaluate OpenFlow by using it on a set of activated switches representing a Data Center built on a tree topology.

B. Virtual Machine Migration

The interest of virtualization technologies lies in the ability to perform several operations that make resource management more flexible. Virtual machine migration is the cloud operation we focus on. There are, however, other virtual-machine-based operations, such as cloning a virtual machine to avoid repeating the same installations, or restoring a virtual machine when an incident happens. A virtual machine (VM) migration can be performed inside the same data center or between remote Data Centers [9]. This operation mainly aims at ensuring load balancing and optimizing resource usage; these two goals are Data-Center-oriented. Since the VM is running as a server, it is providing a service; another reason justifying VM migration is therefore to be as close as possible to the clients in order to offer a better quality of service by reducing the delay. In this case the objective of migrating a VM is client-oriented. There are cases where the VM migration can be critical, for example when the VM is running an HTTP server with several TCP connections or when it is streaming a video sequence. In such a situation we have to keep it running with the same IP address while migrating, so as not to lose the established user connections. We also have to buffer its context on the source physical machine and send it to the target one. Moving VMs raises the VM location issue. Solutions have been proposed to locate the VMs, and the most common property of these solutions is the use of a mapping system between a fixed address and a variable address. In VL2, these two types of address are the AA (Application Address) and the LA (Location Address). In PortLand, the AMAC (Actual MAC) and the PMAC (Positional Pseudo MAC) are used to resolve the VM location issue, and a mapping table is maintained at the fabric manager, a centralized controller of the network. Note that VL2 handles the problem at layer 3 with IP-in-IP encapsulation, while PortLand handles it at layer 2 based on MAC addresses. However, in both implementations the fixed address is used to identify the VM and the variable address is used to locate the VM, since it is able to move.
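Both schemes boil down to a directory that maps a fixed identifier to a variable locator, and migration simply updates that mapping. A minimal Python sketch of this idea follows (generic identifier/locator names; this is neither the VL2 directory system nor the PortLand fabric manager API):

```python
# Generic identifier-to-locator directory; migration only remaps the locator,
# so established connections keyed on the identifier survive the move.
class LocationDirectory:
    def __init__(self):
        self._locator_of = {}          # fixed id (AA / AMAC) -> locator (LA / PMAC)

    def register(self, vm_id, locator):
        self._locator_of[vm_id] = locator

    def resolve(self, vm_id):
        return self._locator_of[vm_id]

    def migrate(self, vm_id, new_locator):
        old = self._locator_of[vm_id]
        self._locator_of[vm_id] = new_locator
        return old, new_locator        # callers can re-route traffic accordingly

directory = LocationDirectory()
directory.register("vm-42", "dc1/pod1/rack3")
print(directory.resolve("vm-42"))                # dc1/pod1/rack3
print(directory.migrate("vm-42", "dc2/pod0/rack1"))
print(directory.resolve("vm-42"))                # dc2/pod0/rack1
```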
C. Our Approach

We aim to abstract the Data Center structure in order to be able to perform inter-Data-Center operations. For example, migration becomes easier to perform since it is not constrained by any specificity of how the Data Centers are built. Based on rules, we can set the network configuration to establish the required connectivity. In this implementation we simply exploit the advantages of OpenFlow: its lightness, simplicity and flexibility. We will show this configuration flexibility: no matter whether we are using PortLand or VL2, we can define the appropriate rules for establishing the connectivity required for an inter-Data-Center operation launched on demand. Our solution uses OpenFlow with the abstraction structure required to be generic and independent of how the Data Center works internally. Even if we do not know how the Data Center is built, we must be able to discover the appropriate rules in an easy way. To define these rules, we use a network control application based on the IaaS framework. The IaaS application creates the OpenFlow rules to establish the network connectivity for a specific operation. The switches receive these rules and update their flow tables accordingly; obviously, they must be able to understand and execute the rules sent by the IaaS application. In order to make the OpenFlow rules easier and faster to discover, we design a resource description that contains all the resources residing in the Data Center: virtual machines, OpenFlow switches, Open vSwitches, hosts (the hypervisors hosting the virtual machines), etc. The Data Center topology is derived from this description: it lists all the links connecting each pair of interfaces of all the Data Center devices. We define two levels of rules, global rules and specific rules (a sketch of this translation follows the list below).
- Global rules are topological; they are expressed in the Data Center resource description. They describe the network structure and are related to the Data Center topology. An example of a global rule is to define how many levels of switches are involved in connecting one server to another. When an operation is started, these rules are used to generate the specific rules that will be translated into valid OpenFlow rules (i.e., flow entries) to make this operation work. The global rules are therefore high-level rules that we have to learn so as not to depend on the implementation rules (such as VL2 or PortLand) of a specific Data Center. The global rules are topologically related to the Data Center and they are used to create the specific rules.
- Specific rules describe how operations will be settled in hardware in the Data Center. These rules are the ones instantiated in the network as OpenFlow rules. The OpenFlow rules are then pushed into the appropriate switches that will handle the traffic generated by the launched operation.
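The following Python sketch illustrates, under our own simplifying assumptions, how such a translation could look: a global rule ("connect server A to server B") is expanded, using the topology from the resource description, into specific per-switch rules, which are finally emitted as OpenFlow-style match/action entries. The data structures and function names are illustrative, not the actual IaaS middleware API.

```python
# Illustrative translation from a global rule to per-switch OpenFlow-style
# entries; the resource description format and helper names are assumptions.
topology = {
    # switch -> {neighbor: local_port}
    "tor1": {"h1": 1, "h2": 2, "core": 3},
    "tor2": {"h3": 1, "h4": 2, "core": 3},
    "core": {"tor1": 1, "tor2": 2},
}

def find_path(src_host, dst_host):
    """Return the switch path between two hosts (hard-coded 2-level tree)."""
    src_tor = next(s for s, nbrs in topology.items() if src_host in nbrs)
    dst_tor = next(s for s, nbrs in topology.items() if dst_host in nbrs)
    return [src_tor] if src_tor == dst_tor else [src_tor, "core", dst_tor]

def global_to_specific(src_host, dst_host, src_mac, dst_mac):
    """Expand one global connectivity rule into bidirectional flow entries."""
    path = find_path(src_host, dst_host)
    entries = []
    for i, switch in enumerate(path):
        next_hop = dst_host if i == len(path) - 1 else path[i + 1]
        prev_hop = src_host if i == 0 else path[i - 1]
        entries.append((switch, {"dl_dst": dst_mac},
                        ("output", topology[switch][next_hop])))
        entries.append((switch, {"dl_dst": src_mac},
                        ("output", topology[switch][prev_hop])))
    return entries

# Example: connectivity between h1 and h3 crosses 3 switches -> 6 entries.
for entry in global_to_specific("h1", "h3",
                                "00:00:00:00:00:01", "00:00:00:00:00:03"):
    print(entry)
```

With four hosts and this two-level tree, the expansion yields the same counts reported in Section III: six entries for a host pair crossing the core and two for a pair attached to the same edge switch.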
II. IAAS AS DISTRIBUTED CONTROLLER

In our architecture, high-level rules are translated into OpenFlow rules and pushed to data plane elements. In this context, NOX, a widely used OpenFlow controller, can be used to push rules down to all data plane elements. However, NOX is based on a centralized control logic requiring direct connections to data plane elements in order to build global network views. For this reason, maintaining and controlling the huge number of network states of the large number of data plane elements that scale out the Data Center Network can be overwhelming for the performance of a control plane based solely on NOX. Therefore, we propose to use a distributed middleware framework based on an IaaS design that can control the data plane elements either directly, using the OF protocol, or indirectly, by carefully generating and parameterizing several NOX controllers. In either case, the IaaS middleware distributes the control logic of all (OF-based) data plane elements over multiple controllers. Each controller is responsible for a dynamically adjustable partition of data plane elements and may use appropriate aggregation to provide fewer, generalized network states when details are not required.

A. The size of the OpenFlow rule space

OpenFlow can be used to abstract the control of the data plane of multiple network components. However, it faces some challenging limitations regarding the scalability of its generic and flexible flow-based forwarding, since it is based on matching a large number of packet fields of multiple protocols at different layers (a twelve-tuple in OF 1.0, and more in OF 1.1). More specifically, the OpenFlow protocol supports two types of data-plane-level OpenFlow rules: (1) exact match rules and (2) wildcard rules. Exact match rules are usually pushed by the controller into the SRAM of the data plane and the lookup is performed by applying efficient hash functions. However, worst-case lookup speed can be very poor due to hash collisions, accentuated by the huge number of exact match rules that are added for each L4 'micro-flow' connection. Worse yet, the lookup performance in a virtualized Data Center with multiple VMs per server is much more challenging. For instance, a highly virtualized server hosting up to 10 VMs will multiply by 10 the average number of concurrent flows per server. Therefore, the number of exact match rules in the aggregate and core switches will be extremely high, significantly jeopardizing data plane forwarding performance. In contrast, wildcard rules sacrifice the flexibility of the fine granularity of exact match rules by matching only specific bits of the twelve-tuple fields. Since the whole flow space can be defined using a few wildcard rules, these rules scale well in aggregate and core switches. Wildcard rules are pushed by the controller into the TCAM, which provides fast one-clock-cycle lookups. However, due to its size limitation, its cost and its power consumption, the use of the TCAM has to be optimized carefully. Furthermore, as said before, wildcard rules sacrifice the flow granularity of OpenFlow and therefore limit the flexibility of implementing some new data center connectivity features such as multi-path or QoS. In fact, as in [10], we may want to wildcard the source address so as to use only 10% of the TCAM space that per-source/destination rules would use. However, this prevents benefiting from load balancing of flows with different source addresses over multiple paths.
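A back-of-the-envelope Python sketch of the trade-off discussed above, under assumed parameter values (10 VMs per server, 20 servers per rack, 16 racks, 50 concurrent flows per VM; these numbers are illustrative, not measurements from this paper): exact match state grows with the number of micro-flows crossing a switch, while a destination-prefix wildcard rule set only grows with the number of racks.

```python
# Rough rule-space comparison; all parameter values are illustrative assumptions.
vms_per_server = 10      # "v" in the paper's notation
servers_per_rack = 20
racks = 16
flows_per_vm = 50        # "f": average concurrent L4 micro-flows per VM

# Exact match: one entry per active micro-flow crossing an aggregate/core
# switch; pessimistically assume every flow leaves its rack.
exact_entries = racks * servers_per_rack * vms_per_server * flows_per_vm

# Wildcard: one destination-prefix entry per rack (locator-based forwarding),
# regardless of how many VMs or flows sit behind each prefix.
wildcard_entries = racks

print(f"exact-match entries: {exact_entries}")   # 160000
print(f"wildcard entries:    {wildcard_entries}")  # 16
```

The wildcard set fits comfortably in TCAM but, as noted above, gives up per-flow control such as multi-path load balancing.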
Fig. 3. InterCloud Virtual Machine Migration: a distributed IaaS layer (one IaaS engine per site) instantiates, activates and reconfigures a NOX OpenFlow controller in each OpenFlow-based data center; the two data centers (core routers, aggregate switches, edge/ToR switches) are interconnected through core routers over an MPLS backbone or the Internet, supporting both intra-cloud and inter-cloud VM migration.
Besides the OF data plane scalability/flexibility trade-off, the control plane of OF is assumed to be logically centralized, which raises another scalability issue regarding the number of network states maintained for the large number of data plane elements controlled by the same logically centralized controller. In contrast, distributed control planes require a lot of state to be synchronized across many devices. Therefore, as in traditional hierarchical-routing-based packet networks (OSPF- or IS-IS-based, for instance), partitioning and aggregation are among the keys to a scalable control plane. Each OF controller maintains the network states of a well-defined OF domain. Partitioning into multiple OF domains depends on several design requirements that we try to partially automate in our distributed IaaS controller, so that, for instance, the virtual network connecting servers frequently involved in long-distance VM migrations is confined to the same OF domain. The reason is to re-establish connectivity between the migrating VM and its correspondent nodes as quickly as possible using the centralized controller, rather than waiting for the propagation of distributed state across the involved devices. Moreover, specific QoS treatment can be applied to the migrated flows.
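As a sketch of how the distributed IaaS layer could partition the data plane into OF domains and assign one controller per domain (switch names, domain labels and the grouping policy are illustrative assumptions, not the middleware's actual logic):

```python
# Illustrative partitioning of data plane elements into OF domains,
# each handled by its own controller instance; all names are assumptions.
from collections import defaultdict

switches = {
    # dpid -> (site, role)
    "of:0001": ("dc1", "core"), "of:0002": ("dc1", "aggregate"),
    "of:0003": ("dc1", "tor"),  "of:0011": ("dc2", "core"),
    "of:0012": ("dc2", "aggregate"), "of:0013": ("dc2", "tor"),
}

# Policy: keep each site in its own OF domain, except that ToR switches
# hosting frequently migrated VMs are grouped into a shared "migration" domain.
migration_tors = {"of:0003", "of:0013"}

def assign_domain(dpid, site):
    return "migration" if dpid in migration_tors else site

domains = defaultdict(list)
for dpid, (site, _role) in switches.items():
    domains[assign_domain(dpid, site)].append(dpid)

controllers = {domain: f"nox-{domain}:6633" for domain in domains}
for domain, members in domains.items():
    print(domain, "->", controllers[domain], members)
```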
B. IaaS controller to distribute OF rules

Infrastructure as a Service (IaaS) is generally defined by NIST as the capability provided to the customer to provision processing, storage and networking, on which the customer is able to deploy and run software, including operating systems and applications. The customer does not manage the underlying cloud infrastructure but has control over operating systems, storage and deployed applications, and possibly limited control over selected networking components. The two major technologies that enable IaaS cloud computing are virtualization and elastic computing, but they are generally considered at the server level only. In our work, we extend these technologies to the network level, thus providing network virtualization and elastic networking based on related work.
III. EXPERIMENTATION AND RESULTS

We first measure the time required to perform a VM migration and then the time required to set up the network. To perform a VM migration we need two hypervisors with access to a storage server where the virtual machine images and their configuration files are available. We used two Xen hypervisors with an NFS server. The time required to migrate a virtual machine already running on one hypervisor to the destination hypervisor is 40 seconds.

For setting up the OpenFlow-based data center, we implement our solution and evaluate it using Mininet, a Linux-based virtual machine which provides a scalable platform for emulating OpenFlow networks via lightweight virtualization. Using Mininet we can create networks on a single physical machine: we can activate as many OpenFlow switches as we want, and we can also generate hosts and link them to the switches. We also define the topology on which the network is built. We tried multiple strategies with different topologies. We rely on our generic resource description and on our IaaS application, which activates NOX instances with the required components and automatically defines the OpenFlow rules and pushes them to the switches. The input of the IaaS application is a resource description file containing all the Data Center devices; the topology is described in this file as the set of links existing between the various pieces of equipment.

In the first topology, we generate an OpenFlow network built on two levels of switches. The network contains four hosts and three switches: two aggregate switches and a core router. The lowest switch level is based on Open vSwitches. At the beginning, the network is not configured and the hosts cannot reach each other. To establish the connectivity between two hosts passing through the core router, we have to push 6 rules to the switches in order to forward the packets in both directions. Connecting two hosts without passing through the core router implies that the hosts are linked to the same aggregate switch; in this case, we have to push only 2 rules to the involved switch. So, to configure full network connectivity, we need to insert 28 rules (four cross-switch host pairs at 6 rules each plus two same-switch pairs at 2 rules each). Since we aim to establish only the connectivity required between the two hypervisors involved in the migration (the source hypervisor and the target hypervisor), we assume that the network is already configured and we only have to push the appropriate rules to connect two specific physical machines. In that case we have to push only 6 rules.
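The two-level topology of this first experiment can be reproduced with a short Mininet script such as the following sketch, which assumes a local Mininet installation and an external (e.g., NOX) controller listening on 127.0.0.1:6633; switch and host names are our own.

```python
#!/usr/bin/env python
"""Two-level test topology: 1 core switch, 2 edge Open vSwitches, 4 hosts."""
from mininet.topo import Topo
from mininet.net import Mininet
from mininet.node import RemoteController

class TwoLevelTopo(Topo):
    def build(self):
        core = self.addSwitch('s1')                       # core router
        for i in (1, 2):                                  # two edge switches
            edge = self.addSwitch('s%d' % (i + 1))
            self.addLink(edge, core)
            for j in (1, 2):                              # two hosts per edge
                host = self.addHost('h%d' % ((i - 1) * 2 + j))
                self.addLink(host, edge)

if __name__ == '__main__':
    # Rules are pushed by the external controller (assumed on localhost),
    # so the switches stay empty until the IaaS application installs entries.
    net = Mininet(topo=TwoLevelTopo(),
                  controller=lambda name: RemoteController(
                      name, ip='127.0.0.1', port=6633))
    net.start()
    net.pingAll()   # fails until connectivity rules are installed
    net.stop()
```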
Fig. 4. Two-levels based network (one core OpenFlow switch, two Open vSwitches, and the attached VMs)

TABLE I. THE NUMBER OF FLOW ENTRIES INSTALLED IN A 2-LEVEL HIERARCHICAL TOPOLOGY

Switch Type  | Number of Instances | Number of Entries
Core         | 1                   | 2
Aggregate    | 0                   | 0
Top of Rack  | 2                   | 4
We launched the application and measured the time required for the switches to apply the recently pushed rules. At the end of this experiment, we obtained 16 seconds.
Fig. 5. Three-levels based network (one core OpenFlow switch, two aggregate OpenFlow switches, four Open vSwitches, and the attached VMs)

TABLE II. THE NUMBER OF FLOW ENTRIES INSTALLED IN A 3-LEVEL HIERARCHICAL TOPOLOGY

Switch Type  | Number of Instances | Number of Entries
Core         | 1                   | 2
Aggregate    | 2                   | 4
Top of Rack  | 4                   | 4
In the second case, we increase the complexity of the network. We built a network based on a topology with 3 levels of switches. The network contains 8 hosts and 7 switches organized as follows: 4 top-of-rack switches (Open vSwitches), 2 aggregate switches and a core router. If we want to connect two hosts passing through the 3 levels of switches, we have to define 10 rules for just one flow of packets. To configure the whole network we need to define 128 rules. Using our IaaS application to automate the network setup, we pushed the rules and it took 26 seconds to establish the connectivity between two specific hosts.

The rules instantiated in the flow tables are injected in a proactive way, so the connectivity is preconfigured to support the migration operation. By using a centralized controller we can be more agile, since it reacts faster: instead of waiting for the machine to migrate to its destination and send ARP packets, which adds latency, we act proactively with the OpenFlow instructions and the IaaS application. Mainly, we are pushing flow entries, i.e., a set of specific rules. These rules are independent of the VL2 or PortLand implementation: they abstract the way the migration is handled. Note that virtual machine migration is not only inter-Data-Center, it can also be intra-Data-Center, but the rules are abstract enough to handle both types of operation. We note that the number of rules depends on the number of switches and the number of servers. The main problem we can encounter is having invalid or conflicting rules. The time it takes to establish the connectivity grows with the number of rules.

We determined how long it takes to update the flow tables in the two cases above, together with the number of rules in each case. We measured the time required to push the rules into the involved switches and give the ratio of the network setup duration to the migration duration (the average of 17/40 and 26/40 is about 0.54). The table below summarizes the results we obtained through our experimentation. Note that in the two cases we have k = 3, with:
- k: number of ports per switch
- n: number of servers
- m: number of switches
- l: number of levels of switches
- r: number of rules pushed to connect two specific hosts
- v: average number of VMs per server
- t: setup time (in seconds)
TABLE III. RESULTS COMPARISON

Parameter | Topology #1 | Topology #2
n         | 4           | 8
m         | 3           | 7
l         | 2           | 3
r         | 6           | 10
v         | 2           | 2
t         | 17          | 26
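One pattern worth noting (our own observation, not stated explicitly above): a bidirectional path that climbs l switch levels traverses 2l - 1 switches and needs two entries per switch, which matches the r values of Table III. A tiny check in Python:

```python
# Check that r = 2 * (2*l - 1) reproduces the per-pair rule counts of Table III.
# This closed form is our own observation, inferred from the two data points.
table_iii = {"Topology #1": {"l": 2, "r": 6}, "Topology #2": {"l": 3, "r": 10}}

for name, row in table_iii.items():
    predicted = 2 * (2 * row["l"] - 1)   # switches on the path, both directions
    print(name, predicted, "==", row["r"], predicted == row["r"])
```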
IV. FORMULATION

In order to formulate the problem, we define some parameters. The list below presents the main parameters describing a Data Center Network:
- f: average number of flows per VM
- v: average number of VMs per server
- k: number of ports per switch (PortLand)
- s: average number of servers per ToR edge switch (= k/2 for PortLand)
- de: number of ports per (ToR) edge switch (2 uplink + s downlink) for VL2
- da: number of ports per aggregate switch (da/2 uplink + da/2 downlink) for VL2
- di: number of ports per intermediate (core) switch (1 Internet uplink + di downlink) for VL2
- ns: total number of servers
- ne: number of (ToR) edge switches
- na: number of aggregate switches
- ni: number of intermediate (core) switches
- Ee: average number of entries in each (ToR) edge switch
- Ea: average number of entries in each aggregate switch
- Ei: average number of entries in each intermediate (core) switch
TABLE IV. ANALYTICAL FORMULATION OF THE AVERAGE NUMBER OF ENTRIES IN EACH SWITCH (PORTLAND VS. VL2)

   | VL2                        | PortLand
ns | s·da·di/4                  | k^3/4
ne | da·di/4                    | k^2/2
na | di                         | k^2/2
ni | da/2                       | k^2/4
Ee | 2·v·s                      | 2(k^2/2 - 1) + v·k/2
Ea | ni + ne = da/2 + da·di/4   | 2(k^2/2 - 1) + v·k^2/4
Ei | ne = da·di/4               | k^2/2 - 1
If we consider the second topology, we can determine the following parameter values: s = 2, v = 2, da = 3, di = 2 and k = 3. Based on these values, we obtain the following results:
TABLE V. NUMERICAL APPLICATION OF THE FORMULATION

   | VL2  | PortLand
ns | 3    | 6.75
ne | 1.5  | 4.5
na | 2    | 4.5
ni | 1.5  | 2.25
Ee | 8    | 10
Ea | 3    | 11.5
Ei | 1.5  | 3.5
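The numbers in Table V follow directly from the formulas of Table IV; a short Python check reproduces them without assuming anything beyond the stated parameters:

```python
# Evaluate the Table IV formulas with s=2, v=2, da=3, di=2, k=3 (Table V).
s, v, da, di, k = 2, 2, 3, 2, 3

vl2 = {
    "ns": s * da * di / 4,
    "ne": da * di / 4,
    "na": di,
    "ni": da / 2,
    "Ee": 2 * v * s,
    "Ea": da / 2 + da * di / 4,
    "Ei": da * di / 4,
}
portland = {
    "ns": k**3 / 4,
    "ne": k**2 / 2,
    "na": k**2 / 2,
    "ni": k**2 / 4,
    "Ee": 2 * (k**2 / 2 - 1) + v * k / 2,
    "Ea": 2 * (k**2 / 2 - 1) + v * k**2 / 4,
    "Ei": k**2 / 2 - 1,
}

for key in vl2:
    print(f"{key}: VL2 = {vl2[key]}, PortLand = {portland[key]}")
```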
V. CONCLUSION

Data Centers are huge and complex networks and their configuration is even more complex. However, this task can be simplified thanks to the OpenFlow protocol and the IaaS framework. We addressed the need to make the configuration rules for Data Center interconnection generic, and we showed that we are able to configure an OpenFlow Data Center on the fly, regardless of its topology. In this paper, we proposed an OpenFlow-based solution for interconnecting remote Data Centers. In our study, we defined an OpenFlow solution that appears to be a good fit for inter-Data-Center connectivity, enabling inter-cloud operations.
It offers an effective abstraction of each Data Center's internal configuration. We showed that the solution, in addition to being generic, is feasible and takes into account the real constraints of an inter-cloud operation. We presented an experimental scenario that demonstrates the feasibility of this solution in enabling inter-Data-Center virtual machine migration and in enhancing cloud-based services. Setting up the OpenFlow rules in the network takes about 20 seconds while the virtual machine migration requires 40 seconds; this ratio is interesting since it shows that the network setup takes about half of the duration required to perform the VM migration.

REFERENCES

[1] R. Niranjan Mysore, A. Pamboris, N. Farrington, N. Huang, P. Miri, S. Radhakrishnan, V. Subramanya, and A. Vahdat, "PortLand: a scalable fault-tolerant layer 2 data center network fabric," ACM SIGCOMM Computer Communication Review, vol. 39, no. 4, pp. 39–50, 2009.
[2] A. Greenberg, J. Hamilton, N. Jain, S. Kandula, C. Kim, P. Lahiri, D. Maltz, P. Patel, and S. Sengupta, "VL2: a scalable and flexible data center network," ACM SIGCOMM Computer Communication Review, vol. 39, no. 4, pp. 51–62, 2009.
[3] B. Heller, S. Seetharaman, P. Mahadevan, Y. Yiakoumis, P. Sharma, S. Banerjee, and N. McKeown, "ElasticTree: saving energy in data center networks," in Proceedings of the 7th USENIX Conference on Networked Systems Design and Implementation. USENIX Association, 2010, pp. 17–17.
[4] B. Heller, D. Erickson, N. McKeown, R. Griffith, I. Ganichev, S. Whyte, K. Zarifis, D. Moon, S. Shenker, and S. Stuart, "Ripcord: a modular platform for data center networking," in Proceedings of the ACM SIGCOMM 2010 Conference. ACM, 2010, pp. 457–458.
[5] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker, and J. Turner, "OpenFlow: enabling innovation in campus networks," ACM SIGCOMM Computer Communication Review, vol. 38, no. 2, pp. 69–74, 2008.
[6] R. Sherwood, G. Gibb, K. Yap, G. Appenzeller, M. Casado, N. McKeown, and G. Parulkar, "FlowVisor: a network virtualization layer," Technical Report OpenFlow-TR-2009-1, Stanford University, 2009.
[7] N. Gude, T. Koponen, J. Pettit, B. Pfaff, M. Casado, N. McKeown, and S. Shenker, "NOX: towards an operating system for networks," ACM SIGCOMM Computer Communication Review, vol. 38, no. 3, pp. 105–110, 2008.
[8] B. Pfaff, J. Pettit, T. Koponen, K. Amidon, M. Casado, and S. Shenker, "Extending networking into the virtualization layer," in Proc. HotNets, October 2009.
[9] F. Hao, T. Lakshman, S. Mukherjee, and H. Song, "Enhancing dynamic cloud-based services using network virtualization," in Proceedings of the 1st ACM Workshop on Virtualized Infrastructure Systems and Architectures. ACM, 2009, pp. 37–44.
[10] A. Tavakoli, M. Casado, T. Koponen, and S. Shenker, "Applying NOX to the datacenter," in Proc. HotNets, October 2009.