Synchronization Cost of Multi-Controller Deployments in Software-Defined Networks

Fouad Benamrane¹, Francisco J. Ros², and Mouad Ben Mamoun¹

¹ LRI, Faculty of Sciences of Rabat, Mohammed V University, Rabat, Morocco
E-mail: [email protected], [email protected]
² Department of Information and Communications Engineering, University of Murcia, Murcia, E-30100, Spain
E-mail: [email protected]

Abstract: The logically centralized control plane of Software-Defined Networks (SDN) must be physically distributed among multiple controllers for performance, scalability, and fault-tolerance reasons. However, this means that the network state must be synchronized among the different controllers to provide control applications with a consistent network view. In this paper, we explore this synchronization cost in a widespread SDN controller platform by emulating real wide-area network topologies. Our results show that the synchronization delay is lower than typical restoration times in currently deployed networks, which demonstrates the feasibility of multi-controller deployments from a network latency viewpoint. To achieve this, the controllers use a substantial amount of bandwidth that may be traded off for synchronization delay when the deployment scenario allows it.

Keywords: Software-Defined Networks, Controller Placement, Synchronization, Performance Evaluation.

Biographical notes: Fouad Benamrane received his B.Sc. degree (2009) in physics from the Faculty of Sciences of Rabat (FSR), Morocco, and his M.Sc. degree (2011) in telecommunication sciences from the National School of Applied Sciences of Fez (ENSAF), Morocco. He is currently a Ph.D. candidate at the FSR. His current research focuses on the performance and scalability of software-defined networks and the OpenFlow protocol.

Francisco J. Ros received his B.Sc. (2004), M.Sc. (2007, 2009), and Ph.D. (2011) degrees from the University of Murcia, Spain. He works as a researcher and adjunct professor in the Department of Information and Communications Engineering (DIIC). Dr. Ros has participated in several research projects focused on communication networks, is a technical committee member of several conferences, and regularly serves as a reviewer for major journals in the field. In addition, in 2012 he assisted the European Commission as a remote project evaluator and panel expert. His main research interests include software-defined networks, network function virtualization, cloud computing, and mobile networks.

Mouad Ben Mamoun is currently a Professor in the Department of Computer Science at Mohammed V University, Rabat, Morocco. He received his M.Sc. and Ph.D. degrees in Computer Science from the University of Versailles, France, in 1998 and 2002, respectively. His research interests concern the performance evaluation of networks.
1. Introduction

In recent years we have witnessed the rise of Software-Defined Networks (SDN) (Nunes et al., 2014) as a means of providing the networking industry with the flexibility and agility that it traditionally lacks. The main proposition behind SDN consists of centralizing the control plane of the network in a separate controller that is in charge of the different data plane devices. By contrast, traditional network nodes (switches, routers, and middleboxes) integrate both planes within the same closed box, which offers only the functionality that the vendor has assembled. The SDN revolution lies in the ability of the central controller to expose the network state through open Application Programming Interfaces (APIs). In this way, network applications on top of such a controller can learn about the state of the network and take the actions they consider appropriate to change the overall behavior. These northbound APIs enable network programmability and
therefore a brand new ecosystem for network management. In addition, the controller interfaces with the data plane by means of southbound protocols implemented at the various network nodes, as depicted in Fig. 1. After the emergence of SDN in mobile, enterprise, and datacenter networks, new extensions such as Software-Defined Storage (SDS), Software-Defined Cloud (SDC), and Software-Defined Security have recently been proposed to bring this revolution to other areas. Although the controller in charge of a network domain is a logically centralized construct, it must be physically distributed in order to meet performance (Benamrane et al., 2015), scalability, and fault tolerance constraints. To illustrate these constraints, let us consider a large network with a single controller responsible for all forwarding elements. Communication with the farthest node might incur excessive latency, thereby limiting the
responsiveness of the control plane. In addition, a single controller might not be enough to service all nodes in the network: it would become a bottleneck in terms of processing power, memory, or input/output bandwidth. Furthermore, the controller itself might fail and therefore leave the network without a brain. Note that even when the controller is operational, some of the network nodes might get disconnected from it due to failures in links or in other nodes along the path between the two. Distributed controllers are also beneficial for the application areas mentioned earlier, such as security, cloud, and storage. In Software-Defined Security, security is managed at a higher (application) level via policies and objects instead of dedicated hardware. In fact, having multiple controllers rather than a single one reduces the risk of attacks on the control plane, whether internal violations originating from the data plane or external attacks that target the controller directly. In addition, the emerging Software-Defined Cloud (SDC) (Buyya et al., 2014) proposes solutions for web, mobile, and enterprise applications. It uses cloud computing to provide services and resources to customers in a distributed model (Zhao et al., 2015). However, distributed computing support (Frattolillo and Landolfi, 2011) is still lacking when an application is partitioned over multiple clouds owned by multiple service providers. By using multiple SDN controllers in a multi-cloud architecture, service providers could synchronize policies and data to achieve high availability of the shared application. The same advantages apply to SDS, where the physical storage hardware is decoupled from the management and control software. By carefully placing multiple controllers within an SDN domain, the aforementioned concerns can be alleviated. In particular, one can minimize the node-controller delay (Heller et al., 2012), guarantee certain reliability thresholds (Ros and Ruiz, 2014), or find a particular trade-off among several metrics (Hock et al., 2013). However, multi-controller deployments bring these benefits at a cost. In order to retain the (logically) centralized network view along with the programmability features, local databases within controllers must synchronize their content across the remaining set of controllers. Such databases contain a variety of information, including the nodes that comprise the network topology (e.g., their type, capabilities, ...), the links among nodes (e.g., their type, bandwidth, usage statistics, ...), and the paths that have been established within the network, to name a few. For instance, when a switch connects to a controller, the latter extends its network view with such information. Given that the network state has changed, the remaining controllers must be notified so that the network view remains consistent and control applications can act upon it (for example, a routing application recomputes network paths after the topology change). In this paper we explore the burden generated by such inter-controller synchronization. We focus on two key metrics regarding synchronization cost, namely delay and overhead. We emulate several
wide-area network topologies from the Internet Topology Zoo (Knight et al., 2011). Each network node connects to the controllers so as to guarantee five nines of reliability (Ros and Ruiz, 2014). Therefore, these placements focus on reliability and not on other metrics like reduced inter-controller latency. We find that, even in such a case, the synchronization latency can be kept below typical restoration targets of common networks using a widely supported SDN controller platform. This comes at the expense of a substantial usage of network bandwidth, but it might be traded off for synchronization delay in scenarios where this is appropriate. Although more optimized solutions can be developed, our results suggest the feasibility of multi-controller deployments in SDN. The remainder of this paper is organized as follows. Section 2 reviews previous works on multi-controller deployments and highlights their main results. Our emulation setup and the experiments we perform are detailed in Section 3. The results of these experiments are thoroughly analyzed in Section 4. Finally, Section 5 concludes the paper and outlines some lines for further investigation.
2. Related Work

The physical distribution of the control plane in SDN has been addressed in different works. Levin et al. explore the state distribution trade-offs between strongly consistent and eventually consistent synchronization models (Levin et al., 2012), given that network state is logically centralized but physically distributed among different controllers. While strongly consistent models ensure that all controllers perceive the same network view, they limit responsiveness in the control plane during the process of achieving consensus. On the other hand, eventually consistent models improve liveness at the cost of possibly incorrect application behavior due to different network views at the various controllers. The Onix distributed control platform (Koponen et al., 2010) provides applications with flexibility to partition the network state and apply different storage mechanisms with distinct features. In order to reduce processing and synchronization overhead among controllers, Onix and other approaches like Kandoo (Hassas Yeganeh and Ganjali, 2012) propose to organize them into hierarchical layers. A different philosophy is followed by HyperFlow (Tootoonchian and Ganjali, 2010), which defines a publish/subscribe framework to selectively propagate network events across controllers. OpenDaylight (OpenDaylight, 2013) also supports multi-controller deployments through its cluster mode. Internally, it relies on an Infinispan (www.Infinispan.org) distributed datastore to keep a consistent view of the network state across controllers. Hence, it is the SDN controller used in our evaluation. The controller placement issue has been addressed in a few papers. The first study on the subject is due to Heller et al. (2012). They introduce the controller placement problem and explore the trade-offs when optimizing for minimum latency between nodes and controllers. They find that, in most topologies, average latency and worst-case latency cannot both be optimized. In some cases, one controller is enough to meet common network expectations with respect to
communication delay (but obviously not fault tolerance). Additional controllers provide diminishing returns in most topologies. Yao et al. build upon this work to formulate the capacitated controller placement problem (Yao et al., 2014). They account for the different capacities of controllers, so that the number of nodes that connect to a controller cannot generate a load that the controller is unable to handle. However, they assume that each node connects to just one controller, which is not appropriate for highly reliable deployments. The works reviewed so far ignore the penalty that must be paid for inter-controller synchronization. The Pareto-Optimal Controller Placement (POCO) framework (Hock et al., 2013) extends Heller et al.'s work (Heller et al., 2012) to consider additional aspects other than network latency. In particular, the whole solution space for placements of k controllers is analyzed with respect to node-controller latency, controller-controller latency, load balancing among controllers, and resilience in arbitrary double link or node failure scenarios. By generating all possible placements for k controllers, the trade-offs among the former metrics are explored. However, all latency metrics in the study employ distance (number of hops) as a surrogate for delay. To minimize flow setup time and communication overhead, Bari et al. (2013) propose a framework that solves the Dynamic Controller Provisioning Problem (DCPP) and dynamically adjusts the number of controllers in the network based on changing network conditions. From these studies, however, we cannot conclude whether common expectations regarding synchronization delay can be met in multi-controller deployments. This paper tries to shed some light on this particular question.
3. Evaluation Environment and Experiment Description
3.1. Evaluation environment

In order to evaluate the synchronization cost involved in a multi-controller SDN deployment, we emulate different wide-area network topologies on a physical workstation. It features a 2.67 GHz Intel Xeon processor (eight cores) and 14 GB of RAM, on which we spawn several virtual machines through the VirtualBox (www.VirtualBox.org) hypervisor. Each of these virtual machines features one virtual CPU and 1 GB of RAM. One of them runs the Mininet network emulator (Lantz et al., 2010), so that we can create the SDN topologies under consideration (Subsection 3.2). Each network node is emulated by an Open vSwitch (Pfaff et al., 2009), to which one virtual host is attached. Our SDN controller of choice is OpenDaylight (OpenDaylight, 2013), given that it supports multi-controller deployments through clustering and is actively maintained by a large community. Each instance of OpenDaylight (Hydrogen release) runs within its own virtual machine. A full control mesh is established among controllers, meaning that each instance sets up a connection
with every other controller in the cluster for synchronization purposes. Connectivity between network nodes within the Mininet virtual machine and the corresponding controllers is achieved by means of a virtual switch that interconnects all virtual machines. All virtual machines use the Network Time Protocol (NTP) to synchronize their clocks, since this is needed to gather meaningful delay measurements in the experiments described in the following subsection. Fig. 2 summarizes the whole evaluation environment.
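As an illustrative sketch only (not the exact scripts used in the paper), the following Python snippet shows how Mininet switches can be attached to two remote OpenDaylight instances. The controller IP addresses, the OpenFlow port, and the small linear topology are placeholder assumptions.

#!/usr/bin/env python
"""Sketch of the emulation setup of Subsection 3.1: Open vSwitch nodes in
Mininet attach to two remote OpenDaylight instances running in separate
virtual machines. IP addresses and the toy 3-switch linear topology are
placeholders, not the paper's actual configuration."""

from mininet.net import Mininet
from mininet.node import RemoteController, OVSSwitch
from mininet.topo import LinearTopo

# Hypothetical management addresses of the controller virtual machines.
CONTROLLER_IPS = ['192.168.56.101', '192.168.56.102']

def build_network():
    # One host attached to each switch, as in the evaluation environment.
    net = Mininet(topo=LinearTopo(k=3, n=1), switch=OVSSwitch,
                  controller=None, build=False, autoSetMacs=True)
    # Register every remote controller; each switch then connects to all of
    # them (the controller-to-controller mesh is set up by the cluster itself).
    for i, ip in enumerate(CONTROLLER_IPS):
        net.addController('c%d' % i, controller=RemoteController,
                          ip=ip, port=6633)
    net.build()
    net.start()
    return net

if __name__ == '__main__':
    net = build_network()
    net.pingAll()   # sanity check: triggers packet_in/flow_mod exchanges
    net.stop()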
3.2. Experiment description

In the evaluation setup described above, we emulate six different topologies from the Internet Topology Zoo (Knight et al., 2011). Let us assume that every network node is located at a facility where a controller can be instantiated. Instead of evaluating arbitrary multi-controller placements, we consider solutions to the problem of achieving high reliability in the southbound interface between nodes and controllers (Ros and Ruiz, 2014). In other words, the placements analyzed here guarantee that every switch is able to find an operational path to at least one controller with high likelihood. The size and number of controllers that correspond to each network topology are shown in Table 1. As shown there, we distinguish two sets of scenarios. In the first one, we consider topologies of increasing size that achieve high reliability with just two SDN controllers. In this way we can measure the impact of synchronization as more nodes become part of the network. In the second set of topologies, we focus on placements with a higher number of controllers. We do not evaluate placements with more than eight controllers because, according to a previous study (Ros and Ruiz, 2014), 75% of 124 topologies from the Internet Topology Zoo require fewer than ten controllers to achieve five nines of reliability. Hence, with these sets of scenarios we try to cover representative cases that might be found in software-defined wide-area networks. In order to gather statistically meaningful measurements, we run ten independent experiments per scenario. For each run, we start all controller instances and let them form a cluster. Later on, we launch the Mininet topology so that switches can connect to controllers according to the evaluated placement. OpenFlow (OpenFlow, 2012) is used as the southbound protocol between switches and controllers. After all nodes have established OpenFlow connections, we inject new traffic flows into the network. For this, we fix one of the emulated hosts as a server and the remaining hosts iteratively start a TCP connection with it. Given that each new connection originates from a different client, network switches cannot match the incoming packets against any entry in their flow tables. Therefore, a switch issues a packet_in message so that the controller can install a new flow entry for the TCP connection by means of a flow_mod message. Hence, OpenDaylight instances reactively install forwarding rules in the network.
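For illustration, the traffic-injection step might be scripted as in the sketch below, which uses iperf as a stand-in TCP traffic generator (the paper does not state which tool generates the connections); the helper name, server host, port, and pacing interval are assumptions.

import time

def inject_tcp_flows(net, server_name='h1', port=5001, gap=1.0):
    """Hypothetical helper: fix one emulated host as a TCP server and let
    every other host open a new TCP connection to it, one at a time. Each
    new source misses in the switches' flow tables, so a packet_in reaches
    a controller, which answers with a flow_mod (reactive installation)."""
    server = net.get(server_name)
    server.cmd('iperf -s -p %d &' % port)   # TCP server accepting the flows
    for host in net.hosts:
        if host is server:
            continue
        # Short unidirectional TCP transfer towards the fixed server host.
        host.cmd('iperf -c %s -p %d -t 1 &' % (server.IP(), port))
        time.sleep(gap)   # pacing, so individual synchronization bursts stay visible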
This experiment generates several events that provoke changes in the network state. Each of these changes must be synchronized with the remaining controllers through the clustering mechanism implemented within OpenDaylight, whose performance has been improved in recent years (Li et al., 2009). Among all types of events, we consider three. First, new switch events occur as nodes connect to their controllers; this effectively changes the network state by adding nodes to the topology view of the controllers. Second, when we inject TCP flows into the network, controllers learn about the existence of the data source (and possibly also the destination) via new host events. These are triggered as the controller receives a packet_in containing a packet whose source or destination was unknown so far, so the topology view is further augmented. Finally, new flow events occur when the controller installs a new flow entry on a switch and updates the network view accordingly. We focus on two different metrics to evaluate the synchronization cost among SDN controllers:
Synchronization delay: Time from when a controller generates an event until a different controller becomes aware of that same event. This metric depends on the amount of information to be exchanged between controllers, the available bandwidth, the propagation delay, and the processing time at the controller.
Synchronization overhead: Aggregated data rate employed by synchronization messages among controllers. This accounts for all bidirectional traffic among controllers.

The synchronization delay is a key metric in a distributed system, including distributed network controllers (Levin et al., 2012). Under a strongly consistent model, high delays limit the responsiveness of the overall system. In the case of eventually consistent models, the synchronization delay determines the time during which an inconsistent network view is provided to applications (possibly provoking incorrect behavior during that interval). Therefore, in both cases we benefit from low synchronization delays. The contribution of the propagation delay to the synchronization delay between two controllers is almost fixed and caused by the distance (and devices) that bits traverse along the path. Assuming that hardware devices achieve latencies within a factor of two of the propagation delay over fiber (Cheshire, 1996), we compute the propagation delay for every pair of controllers in the evaluated scenarios. The processing time at the controller depends on the load and hardware capacity of the device, so we might reduce it by scaling out (load balancing a cluster of controllers) or scaling up (upgrading the hardware platform). The synchronization delay also depends on the bandwidth of the path between controllers. In this paper we do not artificially limit the available bandwidth for synchronization, in order to explore the maximum data rate required for the experiments we perform. This gives us an approximate upper bound on the synchronization overhead. When this value is high but the synchronization delay remains low, both metrics can be traded off by provisioning inter-controller paths with lower bandwidths.
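As a small worked example of the delay components discussed above, the following sketch estimates the propagation contribution from the controller-to-controller distance, using the factor-of-two-of-fiber rule of thumb (Cheshire, 1996), and derives the synchronization delay from a pair of NTP-aligned event timestamps. The numeric inputs are illustrative, not measurements from the paper.

# Illustrative computation of the delay components; the distance and the
# timestamps below are made-up inputs, not values measured in the paper.

SPEED_OF_LIGHT_KM_PER_S = 299792.458   # in vacuum
FIBER_FACTOR = 2.0 / 3.0               # light in fiber travels at ~2/3 of c
HARDWARE_PENALTY = 2.0                 # "within a factor of two of the fiber"

def propagation_delay_ms(distance_km):
    """Estimated one-way propagation delay between two controllers (ms)."""
    fiber_speed = SPEED_OF_LIGHT_KM_PER_S * FIBER_FACTOR
    return HARDWARE_PENALTY * distance_km / fiber_speed * 1000.0

def synchronization_delay_ms(t_event_origin_s, t_event_remote_s):
    """Time from event generation at one controller until another
    (NTP-synchronized) controller becomes aware of it, in milliseconds."""
    return (t_event_remote_s - t_event_origin_s) * 1000.0

# Controllers roughly 1000 km apart: propagation component of about 10 ms.
print(propagation_delay_ms(1000.0))
# Remote controller records the event 8 ms after the origin: 8 ms sync delay.
print(synchronization_delay_ms(10.000, 10.008))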
The following section analyzes and discusses the results of the experiments performed over the two sets of topologies shown in Table 1.
4. Synchronization Cost Analysis

In this section we provide figures for specific runs of the experiments described above, along with aggregated results of the ten runs per evaluated scenario. We compare the obtained delay metrics with common latency expectations for the control plane (Heller et al., 2012). Hence, we consider two typical targets in current networks:
Ring protection (50 msec): Target restoration time of a SONET ring, from fault detection until traffic flows in the opposite direction of the ring.
Shared-mesh restoration (200-250 msec): Target restoration time of a shared mesh, from fault detection until traffic flows through a new path.
By comparing the obtained synchronization delays with these baselines, we can get an idea of how feasible it is to deploy a distributed control plane in SDN from a latency viewpoint. The remainder of this section discusses this issue in scenarios with two and k > 2 controllers.
4.1. Analysis of two-controller placements

By evaluating a logically centralized control plane comprised of two physically distributed controllers, we can set a performance baseline for synchronization. In our experiments, we can distinguish different phases regarding synchronization overhead, namely cluster bootstrapping, event synchronization, and database maintenance (Fig. 3(b), 3(d), 3(f)). In the bootstrapping phase, both controllers exchange information to create the cluster. This phase is not very bandwidth-intensive for two-controller scenarios, peaking at 400 Kbps or less. Obviously, there is no network state to synchronize at this moment, since switches have not yet connected to any controller. Therefore, the overhead so far is kept below a reasonable threshold. As different events occur, the network state changes and must be synchronized among controllers. In our experiments these events occur in bursts. The higher the number of consecutive events, the higher the related overhead. In the Gridnet scenario (Fig. 3(a)) we find spikes near 1 Mbps when the nine nodes connect to the controllers (Fig. 3(b)). The bandwidth requirement is lower for other types of events, in particular when new hosts are discovered and new flows installed in switches. In such cases, synchronization overhead peaks at about 300-600 Kbps. Between periods of events we find database maintenance phases with low overhead, always well below 100 Kbps. These are devoted to checking the consistency of the distributed network state. Since the EliBackbone topology (Fig. 3(c)) features more nodes, there are more events in every burst. That is, more switches connect to controllers, more hosts are discovered, and more flows get installed. Except for the bootstrapping phase, the latter makes
the synchronization overhead increase with respect to the former topology, as observed in Fig. 3(d). In general, the same trend can be observed in the results (Fig. 3(f)) of the BtNorthAmerica network (Fig. 3(e)). All performance figures show that the synchronization delay is kept very low, even in an emulated environment like ours. This means that we can trade off delay for overhead in order to accommodate lower bandwidth availability and still meet typical delay expectations in many scenarios. As can be seen in Fig. 3, the synchronization delay for several events is dominated by the propagation delay. Transmission and processing delays are negligible in such cases. In others, they have a significant impact, but the net result is that the synchronization delay remains below the ring protection reference in most cases. This result is further corroborated when we account for the ten independent experiments we perform per scenario. Fig. 4 shows that only a few outliers exceed the ring protection delay, and in all cases they are well below the mesh restoration target. This figure also reflects the impact that larger event bursts have on synchronization delay. As we have more nodes (equivalent to more events in our setup), there is a higher likelihood that a few of them are affected by the higher load at the controller. However, the vast majority of state changes can be synchronized quickly (note that whiskers in the figure extend up to the 99th percentile of samples).
4.2. Analysis of k-controller placements

In this subsection we focus on scenarios with more than two controllers. They generate a larger number of events than in the two-controller case, so for the sake of readability we only show aggregated results (Fig. 5) instead of providing details of specific runs. Fig. 5(a) provides the aggregated overhead for the three evaluated scenarios (note that the y-axis is in logarithmic scale). The 75th percentile of the data samples requires less than 300, 600 and 800 Kbps for the HiberniaUk, Noel and Goodnet scenarios, respectively. However, we can find peaks due to event bursts that reach 4, 6 and 8 Mbps, respectively. With these results, we can conclude that in our experiments each controller generates at most about 1 Mbps (possibly more, but less than 2 Mbps) of synchronization overhead. As we saw in the previous subsection, such a synchronization rate might be reduced at the expense of increasing the associated delay. Although we use emulated networks within a single server, Fig. 5(b) shows that many events are synchronized below the ring protection target. Those that do not meet it are still well below the mesh restoration reference. Therefore, in a real setup we can expect much lower delays, so distributing the control plane of an SDN should not be an issue from a delay viewpoint. The OpenDaylight clustering mode thus proves feasible in terms of delay and overhead, and provides a shared datastore that can be fault tolerant, consistent, and replicated. However, there are some weaknesses, especially regarding remote notifications and fine-grained sharing between multiple controllers; a dedicated east-west interface could resolve these kinds of issues.
5. Conclusion

Distributed controller platforms are necessary to provide the control layer of programmable SDNs with performance, scalability, and fault tolerance guarantees. Distributing the network state while keeping a logically centralized view of it involves maintaining synchronized databases among controllers. In this paper we explore such synchronization cost, since it is key to understanding the limitations and feasibility of SDNs in practice. We emulate real wide-area network topologies and evaluate the synchronization cost in a widely supported SDN controller platform. Although we use an emulated testbed (with limited resources for controllers) and an early release of the controller, we find that the synchronization delay is very low in most cases. In fact, it is lower than typical restoration times in currently deployed networks. To achieve this, the controllers use a substantial amount of bandwidth that can be traded off for synchronization latency when the deployment scenario allows it. Overall, our work suggests the feasibility of multi-controller SDN deployments from a latency viewpoint. In cases where the controller load poses a threat to processing delay, the system can still be scaled up or out to reduce the overload. However, we expect to see further enhancements in distribution strategies in several areas. In particular, reducing the inter-controller overhead and improving tolerance to network partitions (while keeping simple abstractions for programmers) are important topics that deserve further investigation.
References

Nunes, B.A.A., Mendonca, M., Nguyen, X.-N., Obraczka, K., Turletti, T., 2014. A Survey of Software-Defined Networking: Past, Present, and Future of Programmable Networks. IEEE Commun. Surv. Tutor. 16, 1617–1634.
Benamrane, F., Ben Mamoun, M., Benaini, R., 2015. Performances of OpenFlow-Based Software-Defined Networks: An Overview. J. Netw. 10.
Heller, B., Sherwood, R., McKeown, N., 2012. The Controller Placement Problem, in: Proceedings of the First Workshop on Hot Topics in Software Defined Networks, HotSDN ’12. ACM, New York, NY, USA, pp. 7–12.
Ros, F.J., Ruiz, P.M., 2014. Five Nines of Southbound Reliability in Software-defined Networks, in: Proceedings of the Third Workshop on Hot Topics in Software Defined Networking, HotSDN ’14. ACM, New York, NY, USA, pp. 31–36.
Hock, D., Hartmann, M., Gebert, S., Jarschel, M., Zinner, T., Tran-Gia, P., 2013. Pareto-optimal resilient controller placement in SDN-based core networks, in: 2013 25th International Teletraffic Congress (ITC), pp. 1–9.
Knight, S., Nguyen, H.X., Falkner, N., Bowden, R., Roughan, M., 2011. The Internet Topology Zoo. IEEE J. Sel. Areas Commun. 29, 1765–1775.
Buyya, R., Calheiros, R.N., Son, J., Dastjerdi, A.V., Yoon, Y., 2014. Software-Defined Cloud Computing: Architectural Elements and Open Challenges, in: 2014 International Conference on Advances in Computing, Communications and Informatics (ICACCI). IEEE, pp. 1–12.
Zhao, X., Wang, X., Xu, H., Wang, Y., 2015. Cloud Data Integrity Checking Protocol from Lattice. Int. J. High Perform. Comput. Netw. 8, 167–175.
Frattolillo, F., Landolfi, F., 2011. Parallel and Distributed Computing on Multidomain Non-Routable Networks. Int. J. High Perform. Comput. Netw. 7, 63–73.
Bari, M.F., Roy, A.R., Chowdhury, S.R., Zhang, Q., Zhani, M.F., Ahmed, R., Boutaba, R., 2013. Dynamic Controller Provisioning in Software Defined Networks, in: 2013 9th International Conference on Network and Service Management (CNSM), pp. 18–25.
Levin, D., Wundsam, A., Heller, B., Handigol, N., Feldmann, A., 2012. Logically Centralized? State Distribution Trade-offs in Software Defined Networks, in: Proceedings of the First Workshop on Hot Topics in Software Defined Networks. ACM, pp. 1–6.
Koponen, T., Casado, M., Gude, N., Stribling, J., Poutievski, L., Zhu, M., Ramanathan, R., Iwata, Y., Inoue, H., Hama, T., Shenker, S., 2010. Onix: A Distributed Control Platform for Large-scale Production Networks, in: Proceedings of the 9th USENIX Conference on Operating Systems Design and Implementation, OSDI’10. USENIX Association, Berkeley, CA, USA, pp. 1–6.
Hassas Yeganeh, S., Ganjali, Y., 2012. Kandoo: A Framework for Efficient and Scalable Offloading of Control Applications, in: Proceedings of the First Workshop on Hot Topics in Software Defined Networks, HotSDN ’12. ACM, New York, NY, USA, pp. 19–24.
Tootoonchian, A., Ganjali, Y., 2010. HyperFlow: A Distributed Control Plane for OpenFlow, in: Proceedings of the 2010 Internet Network Management Conference on Research on Enterprise Networking. USENIX Association, pp. 3–3.
Yao, G., Bi, J., Li, Y., Guo, L., 2014. On the Capacitated Controller Placement Problem in Software Defined Networks. IEEE Commun. Lett. 18, 1339–1342.
Lantz, B., Heller, B., McKeown, N., 2010. A Network in a Laptop: Rapid Prototyping for Software-defined Networks, in: Proceedings of the 9th ACM SIGCOMM Workshop on Hot Topics in Networks, Hotnets-IX. ACM, New York, NY, USA, pp. 19:1–19:6.
Pfaff, B., Pettit, J., Koponen, T., Amidon, K., Casado, M., Shenker, S., 2009. Extending networking into the virtualization layer, in: 8th ACM Workshop on Hot Topics in Networks (HotNets-VIII), New York City, NY, October 2009.
OpenDaylight - An Open Source Community and Meritocracy for Software-Defined Networking, n.d.
OpenFlow - Open Networking Foundation, n.d. URL https://www.opennetworking.org/sdn-resources/openflow.
Li, K.-C., Hsu, C.-H., Wen, C.-H., Wang, H.-H., Yang, C.-T., 2009. A Dynamic and Scalable Performance Monitoring Toolkit for Cluster and Grid Environments. Int. J. High Perform. Comput. Netw. 6, 91–99.
Cheshire, S., 1996. It’s the Latency, Stupid. URL http://rescomp.stanford.edu/~cheshire/rants/Latency.html.
Oracle VM VirtualBox, n.d. URL https://www.virtualbox.org/.
Infinispan Homepage, n.d. URL http://infinispan.org/.
Network          Nodes   Controllers
Gridnet            9          2
EliBackbone       20          2
BtNorthAmerica    36          2
HiberniaUk        15          4
Noel              19          6
Goodnet           17          8

Table 1: Evaluated network topologies and placements.
Figure 1: Architecture of software-defined networks.
Figure 2: Emulation-based evaluation environment.
Figure 3: Synchronization delay of different events (bootstrapping, event synchronization, and database maintenance) with different topologies, using two controllers in each experiment. Panels: (a) Gridnet topology, (b) Gridnet performance, (c) EliBackbone topology, (d) EliBackbone performance, (e) BtNorthAmerica topology, (f) BtNorthAmerica performance.
Figure 4: Delay boxplots of two-controller placements. Whiskers extend to the 99th percentile; outliers are plotted as independent points.
Figure 5: Boxplots of k-controller placements: (a) overhead, (b) delay. Whiskers extend to the 99th percentile; outliers are plotted as independent points.