Article

Dynamic Traffic Scheduling and Congestion Control across Data Centers Based on SDN

Dong Sun 1, Kaixin Zhao 2, Yaming Fang 3 and Jie Cui 3,*

1 Experimental Management Center, Henan Institute of Technology, Xinxiang 453000, China; [email protected]
2 Department of Computer Science and Technology, Henan Institute of Technology, Xinxiang 453003, China; [email protected]
3 School of Computer Science and Technology, Anhui University, Hefei 230039, China; [email protected]
* Correspondence: [email protected]
Received: 20 June 2018; Accepted: 7 July 2018; Published: 9 July 2018
Abstract: Software-defined networking (SDN) and data center networks (DCNs) are receiving considerable attention and eliciting widespread interest from both academia and industry. When the traditional shortest-path routing protocol is used among multiple data centers, congestion frequently occurs in the shortest path link, which may severely reduce the quality of network services due to long delay and low throughput. The flexibility and agility of SDN can effectively ameliorate this problem. However, the utilization of link resources across data centers is still insufficient and has not yet been well addressed. In this paper, we focused on this issue and proposed an intelligent approach of real-time processing and dynamic scheduling that could make full use of the network resources. The traffic among the data centers could be classified into different types, and different strategies were proposed for these types of real-time traffic. Considering the prolonged occupation of the bandwidth by malicious flows, we employed the multilevel feedback queue mechanism and proposed an effective congestion control algorithm. Simulation experiments showed that our scheme exhibited favorable feasibility and demonstrated a better traffic scheduling effect and a great improvement in bandwidth utilization across data centers.

Keywords: data center; dynamic scheduling; congestion control; OpenFlow; software-defined networking
1. Introduction

Big data has become one of the hottest topics in academia and industry. With the development of big data, the amount of data from different sources such as the Internet of Things, social networking websites, and scientific research is increasing at an exponential rate [1]. The scale of data centers has gradually expanded. Meanwhile, an increasing amount of data is being transmitted in data center networks, and traffic exchange among the servers in data centers has also been growing fast. The most direct results may be a low utilization ratio, congestion problems, service latency, and even DDoS attacks [2]. Data centers are always interconnected through wide area networks [3]. When traditional routing protocols are used in data center networks, flows are forced to preempt the shortest path to be routed and forwarded, which might leave the shortest path link under full load while new flows are still competing for it and other links are under low load. Without shunting flows, the link would easily be congested and unable to provide normal network services. The most direct, but expensive, solution is to remold and upgrade the network. However, on account of the scale, complexity, and heterogeneity of current computer networks, traditional approaches to configuring the network devices, monitoring and optimizing network performance, identifying and solving network problems, and planning network growth would become nearly impossible and inefficient [4].
Software-defined networking (SDN) is one of the most notable forms of computer networking. It is receiving considerable attention from academic researchers, industry researchers, network operators, and some large and medium-sized networking enterprises [5]. SDN is considered a promising modality to rearchitect our traditional network [6]. The core idea of SDN is to decouple the control plane from the data plane to achieve flexible network management, efficient network operation, and low-cost maintenance through software programming [7]. Specifically, infrastructure devices merely execute packet forwarding depending on the rules installed by the SDN controller [8]. In the control plane, the SDN controller can oversee the topology of the underlying network and provide an agile and efficient platform for the application plane to implement various network services. In this new network paradigm, innovative solutions for realizing specific and flexible services can be implemented quickly and efficiently in the form of software and deployed in real networks with real-time traffic. In addition, this paradigm allows for the logically centralized control and management of network devices in the data plane in accordance with a global and simultaneous network view and real-time network information. Compared with traditional networks, it is much easier to develop and deploy applications in SDN [9].

This paper concentrated on the problem of data center traffic management and attempted to avoid congestion to make full use of the bandwidth resources. We proposed a new solution based on SDN for traffic engineering in data center networks by developing two modules on top of an open source SDN controller called Floodlight. The traffic among the data centers can be classified into different types, and different strategies are adopted upon the real-time changes of the link states. The dynamic traffic scheduling can thus take full advantage of the network resources. Given the prolonged occupation of the bandwidth by malicious flows, we proposed an effective congestion control algorithm by adopting a multilevel feedback queue mechanism. Considering a large-scale IP network with multiple data centers, the basic components are shown in Figure 1a. With a traditional routing protocol, all flows from S1 and S2 to D would traverse or congest paths using (A, B, C) without using (A, D, E, C). In contrast, the scheme DSCSD in this paper can route flows from S1 and S2 to D to select different paths according to their types and the real-time link information (Figure 1b), and the congestion can also be well controlled.
Figure 1. Flows that preempt the shortest path (a) can be shunted into two different paths (b) with DSCSD.
The main contributions of this paper are as follows. First, we proposed a new approach for traffic scheduling that could route a newly arrived flow based on the real-time link information, and also dynamically schedule flows on the link. Better than traditional approaches and the SDN-based scheme with a threshold value, we could improve the efficiency of the link by utilizing the shortest paths. Second, we innovatively adopted a multilevel feedback queue mechanism of congestion control suitable for different types of flows that could realize anomaly detection by preventing malicious flows from occupying the bandwidth for a long time.

The remainder of the paper is organized as follows: In Section 2, we review the traditional network and SDN used in data centers. Then, we elaborate on the design and implementation of our proposed scheme, DSCSD (Dynamic Scheduling and Congestion control across data centers based on SDN), in Section 3.
The performance evaluation of DSCSD is presented in Section 4. Finally, Section 5 concludes the paper.

2. Related Work

This section reviews recent studies on traffic engineering in data centers, which served as our research background and theoretical foundation. We discuss this from two aspects: data centers based on a traditional network, and data centers based on SDN together with their challenges. The first aspect explains the development of distributed data centers and the limitations of the traditional network protocols used in current data centers; these limitations underline the demand for our proposed research. The other aspect shows some recent research progress on SDN-based data centers.

2.1. Data Center Based on Traditional Network

In the traditional data center model, the traffic is mainly generated between the server and the client, with a low proportion of east–west traffic among the data centers [10]. With the extensive use of the Internet and the integration of the mobile network, the Internet of Vehicles [11], cloud computing, big data, and other new generations of network technology have come into being to deal with massive-scale data and large-scale distributed data centers, bringing about the accelerated growth of east–west traffic exchanged among servers, for example, in the Google File System (GFS) [12], the Hadoop Distributed File System (HDFS) [13], and the Google framework MapReduce [14]. However, when the traditional shortest-path routing protocols are used in current data centers, congestion will frequently occur in the shortest path link, and it may further reduce the quality of network services due to long delay and low throughput. Traffic scheduling and congestion control are important technologies to maintain network capacity and improve network efficiency.

Traditional networks have some inherent defects. The main disadvantages can be summarized as follows. First, there is no global coordinative optimization: each node independently implements the traffic control strategy, which can only achieve a local optimum. Moreover, there is no dynamic and self-adaptive adjustment: the predefined strategies in routers cannot meet the frequently changing demands of business flows. In addition, traditional networks find it difficult to achieve effective and accurate control of every network device: the configurations of network devices are numerous and diverse, and the commands are so complicated that it is very difficult to find the network errors caused by configurations. Consequently, it is of great urgency to figure out how to effectively manage and control network traffic, which has pushed network architects to take advantage of SDN to address these problems in data centers.

2.2. Data Center Based on SDN

A higher level of visibility and a fine-grained control of the entire network can be achieved in the SDN paradigm. The SDN controller is able to program infrastructural forwarding devices in the data plane to monitor and manage network packets passing through these devices in a fine-grained way. Therefore, we can use the SDN controller to implement the periodic collection of these statistics. Furthermore, we can also obtain a centralized view of the network status for the SDN applications via open APIs and notify the up-level applications of a change in the network in real time [15]. Data centers are typically composed of thousands of high-speed links such as 10 G Ethernet.
Conventional packet capture mechanisms such as the switch port analyzer and port mirroring are infeasible from a cost and scale perspective because a stupendous number of physical ports would be needed. Adopting SDN-based approaches to data center networks has therefore become the focus of current research [1]. Tavakoli et al. [16] first applied SDN to data center networks by using NOX (the first SDN controller) to efficiently actualize addressing and routing of the typical data center networks VL2 and PortLand. Tootoonchian and Ganjali [17] proposed HyperFlow, a new distributed control plane for OpenFlow, in which multiple controllers cooperate with each other to make up for the low scalability of a single controller while the advantage of centralized management is retained. The cooperation of distributed controllers can realize the expansion of the network and conveniently manage network devices [18–20].
Koponen et al. [21] presented Onix, on which the SDN control plane could be implemented as a distributed system. Benson et al. [22] proposed MicroTE, a fine-grained routing approach for data centers, after finding that data center networks based on a traditional network lack multi-path routing and an overall view of workloads and routing strategies. Hindman et al. [23] presented Mesos, a platform for fine-grained resource sharing in data centers. Curtis et al. [24] proposed Mahout to handle elephant flows (i.e., large flows), which are detected in the socket buffers of the end hosts. Due to bursts of traffic and equipment failures, congestion and failure frequently occur, and large flows and small flows are always mingled in data center networks. Kanagavelu et al. [25] proposed a flow-based edge-to-edge rerouting scheme: when congestion appears, it focuses on rerouting the large flows to alternative links, the reason being that shifting short-lived small flows among links would additionally increase the overhead and latency. Khurshid et al. [26] provided VeriFlow, a layer between a software-defined networking controller and the network devices that serves as a network debugging tool to find faulty rules issued by SDN applications and anomalous network behavior. Tso and Pezaros [27] presented the Baatdaat flow scheduling algorithm, which uses spare data center network capacity to mitigate the performance degradation of heavily utilized links. This ensures real-time dynamic scheduling, which can avoid congestion caused by instantaneous flows. Li et al. [28] proposed a traffic scheduling solution based on a fuzzy synthetic evaluation mechanism, in which the path can be dynamically adjusted according to the overall view of the network.

In summary, most of the current approaches to traffic engineering in data center networks based on SDN take one of two alternatives. The first common approach is based on a single centralized controller, with the attendant problems of scalability and reliability. The other is based on multiple controllers, but the cooperation of multiple controllers and the consistent update mechanism have not been well addressed thus far [29]. Our proposed scheme is an attempt that first applies the multilevel feedback queue mechanism to traffic scheduling and congestion control across data centers based on SDN, and it can also provide a new solution to congestion caused by the prolonged occupation of malicious flows.

3. The Design and Implementation of DSCSD

Usually, the links among data centers are highly likely to be reused, and bursts of instantaneous traffic lead to the instability and dynamicity of the network. Traffic scheduling and congestion control among multiple data centers can be well addressed with DSCSD by taking advantage of the flexibility and agility of SDN. Data centers are always interconnected through wide area networks. The network traffic is not constant at all times and has an unbalanced distribution; the traffic during peak hours can reach twice the average link load. However, traditional routing protocols route and forward flows according to the shortest-path algorithm, without shunting flows away to balance the link load. To satisfy the requirement of bandwidth during peak hours, we have no choice but to purchase 2–3× bandwidth as well as upgrade to large-scale routing equipment.
As a result, the costs of network operation, administration, and maintenance increase sharply, and bandwidth is wasted in ordinary times. To meet the requirements of network services, the average link utilization rate of wide area networks is kept at only 30–40% [30]. Moreover, when some malicious flows prolong their occupation of links, the resulting congestion might bring about interruptions in the transmission of services [31–34]. In this paper, we propose DSCSD, a scheme of dynamic traffic scheduling and congestion control with a multilevel feedback queue mechanism that can address the above-mentioned problems to some extent. The following sections explain the design in detail and its implementation process.
3.1. System Model

Our scheme was designed and innovated on the basis of data center networking. In the following presentation, we briefly describe the components of the system model in Figure 2. (i) Data center servers. PC1, PC2, and PC3 are three individual data centers, and the flows from PC1 to PC3 take priority over the flows from PC2 to PC3. (ii) OpenFlow switches. With the five switches, two distinct paths are formed; the path (S1, S4, S5) is the shortest path, leaving (S1, S2, S3, S5) the second shortest. (iii) SDN controller. We selected the open source OpenFlow controller Floodlight as the centralized controller; it can be used to control the behavior of the OpenFlow switches by adding, updating, and deleting flow table entries in the switches.
Figure 2. System model.
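To make the system model concrete, the following is a minimal Mininet sketch of the Figure 2 topology. It is an illustrative reconstruction rather than the authors' setup script: the controller address and port, the 4 Mbit/s per-link bandwidth, and the host attachment points are assumptions.

#!/usr/bin/env python
"""Minimal Mininet sketch of the Figure 2 system model (illustrative assumptions only)."""
from mininet.net import Mininet
from mininet.node import RemoteController, OVSSwitch
from mininet.link import TCLink
from mininet.cli import CLI

def build():
    net = Mininet(controller=None, switch=OVSSwitch, link=TCLink, autoSetMacs=True)
    # Floodlight is assumed to listen locally; use port 6633 for older releases.
    net.addController('c0', controller=RemoteController, ip='127.0.0.1', port=6653)

    # Data center servers: PC1 sends high priority traffic, PC2 low priority, PC3 receives.
    pc1, pc2, pc3 = net.addHost('pc1'), net.addHost('pc2'), net.addHost('pc3')

    # Five OpenFlow switches: (s1, s4, s5) is the shortest path,
    # (s1, s2, s3, s5) the second shortest.
    s1, s2, s3, s4, s5 = [net.addSwitch('s%d' % i) for i in range(1, 6)]

    bw = 4  # Mbit/s per link, an assumed value
    for a, b in [(s1, s4), (s4, s5),             # shortest path
                 (s1, s2), (s2, s3), (s3, s5)]:  # second shortest path
        net.addLink(a, b, bw=bw)
    # Host attachment points are assumptions based on the figure.
    net.addLink(pc1, s1, bw=bw)
    net.addLink(pc2, s1, bw=bw)
    net.addLink(pc3, s5, bw=bw)

    net.start()
    CLI(net)
    net.stop()

if __name__ == '__main__':
    build()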
3.2. Dynamic Traffic Scheduling

We classified the flows among multiple data centers into different types. As for data duplication flows, we set a lower priority for them, while the other flows with high quality requirements had a higher priority. The higher priority flows traverse the shortest path, and the lower priority flows can select a path depending on the real-time link states. The concrete link states on the shortest path are categorized into the following four cases; a brief code sketch of the resulting decision logic is given after the list.

State 1: The time when a low priority flow arrives. In this state, we still have several different possibilities. First, if bandwidth remains on the shortest path (S1, S4, S5), then the flow will directly select this path. If the shortest path (S1, S4, S5) is fully occupied by a high priority flow, then it will have no choice but to be transmitted on the second shortest path (S1, S2, S3, S5). The last possibility is that this flow should go into the congestion control with multilevel feedback queues, given that no path has bandwidth to be used.

State 2: The time when a high priority flow arrives. In this state, we also have several different possibilities. First, if bandwidth remains on the shortest path (S1, S4, S5), then the flow will directly select this path. If the shortest path (S1, S4, S5) is fully occupied by a high priority flow, then it will have no choice but to be transmitted on the second shortest path (S1, S2, S3, S5). The last possibility is that this flow should go into the congestion control with multilevel feedback queues, given that no path has bandwidth to be used.

State 3: The time when a high priority flow is transmitting on the link. Due to the high priority, we do nothing with it.

State 4: The time when a low priority flow is transmitting on the link. From State 1, we can learn that the low priority flow can select either of the two paths, adapting to data fluctuation. Therefore, if there is no available bandwidth on the shortest path link when a new flow with high priority arrives at this point, the low priority flow will vacate the shortest path link and be scheduled to the second shortest path link. Otherwise, if the new flow has an equally low priority and no subsequent low priority flow needs transmitting, the shortest path is allotted to the newly arrived flow.

According to the aforementioned analysis of the link states, we can realize dynamic traffic scheduling to improve the utilization of bandwidth resources.
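The following Python fragment condenses the four states into a single path-selection routine. It is a sketch of the decision logic only, not the authors' Floodlight module; the Flow class and the helpers residual_bw and reroute are our own assumed names.

# Sketch of the Section 3.2 path selection logic (illustrative, not the authors' code).
from dataclasses import dataclass

SHORTEST = ('s1', 's4', 's5')
SECOND = ('s1', 's2', 's3', 's5')

@dataclass
class Flow:
    demand: float          # Mbit/s requested by the flow (assumed attribute)
    high_priority: bool
    path: tuple = ()

def residual_bw(path, link_stats):
    """Spare bandwidth of a path = minimum spare bandwidth over its hops (assumed helper);
    link_stats maps (switch, switch) pairs to spare Mbit/s."""
    return min(link_stats[hop] for hop in zip(path, path[1:]))

def reroute(flow, new_path):
    """Placeholder: a real controller would re-install flow entries here (assumed helper)."""
    flow.path = new_path

def schedule_new_flow(flow, link_stats, active_low_priority):
    """Choose a path for a newly arrived flow, following States 1, 2 and 4."""
    if residual_bw(SHORTEST, link_stats) >= flow.demand:
        return SHORTEST                              # room left on the shortest path
    if flow.high_priority:
        for victim in active_low_priority:
            if victim.path == SHORTEST:              # State 4: a low priority flow vacates
                reroute(victim, SECOND)
                return SHORTEST
    if residual_bw(SECOND, link_stats) >= flow.demand:
        return SECOND                                # fall back to the second shortest path
    return None  # no capacity anywhere: enqueue into the multilevel feedback queues (Section 3.3)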
3.3. Congestion Control with Multilevel Feedback Queues

As part of DSCSD, dynamic traffic scheduling has an obvious effect on the classification and diversion of flows. However, when malicious flows exist, congestion might still occur in the shortest path link due to the long-term occupation of limited link resources and the burst of network instantaneous traffic. The algorithm of congestion control with multilevel feedback queues not only provides a queuing service for congested flows, but also settles the problem of vicious prolonged occupation. The specific description is as follows.

In the initial phase of the scheme, we define two multilevel feedback queues to store the flows waiting to be scheduled: one multilevel feedback queue for low priority flows and the other for high priority flows. Then, in each feedback queue, we define three sub-queues and give the highest priority to flow waiting queue 1, the second highest priority to flow waiting queue 2, and the lowest priority to flow waiting queue 3. As shown in Figure 3, the transmission times of these flow waiting queues differ after being scheduled: sub-queue 1 has the time t, sub-queue 2 has the time 2t, which is twice the time of sub-queue 1, and sub-queue 3 has the time 3t, which is triple the time of sub-queue 1. For the pseudocode of the algorithm, see Algorithm 1.

Algorithm 1: The Algorithm of Multilevel Feedback Queue

Input: G: topology of the data center network; Factive: set of active flows; Fsuspend: set of suspended flows; Fnew: a new flow; Ffirst: the flow at the head of Fsuspend.
Output: {}: scheduling state and path selection of each flow in G.

When a new flow arrives:
1   if (e.path = IDLEPATH) then
2       Factive ← Factive + Fnew;
3   end
4   else Fsuspend ← Fsuspend + Fnew;
5   while (Fsuspend ≠ ∅) do
6       if (e.path = IDLEPATH) then
7           if (Fsuspend1 ≠ ∅) then
8               Select the flow at the head of Fsuspend1;
9               Fsuspend1 ← Fsuspend1 − Ffirst;
10              Factive ← Factive + Ffirst;
11              Transmission time of Ffirst is t;
12          end
13          else if (Fsuspend2 ≠ ∅) then
14              Select the flow at the head of Fsuspend2;
15              Fsuspend2 ← Fsuspend2 − Ffirst;
16              Factive ← Factive + Ffirst;
17              Transmission time of Ffirst is 2t;
18          end
19          else
20              Select the flow at the head of Fsuspend3;
21              Fsuspend3 ← Fsuspend3 − Ffirst;
22              Factive ← Factive + Ffirst;
23              Transmission time of Ffirst is 3t;
24          end
25  return {};
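As a companion to Algorithm 1, the sketch below shows one plausible realization of a multilevel feedback queue with the time quanta t, 2t, and 3t of Figure 3. Class and function names are our own assumptions, and the example values at the bottom are hypothetical.

from collections import deque

class MultilevelFeedbackQueue:
    """Three sub-queues with time quanta t, 2t and 3t (cf. Figure 3 and Algorithm 1)."""
    def __init__(self, t):
        self.quanta = [t, 2 * t, 3 * t]
        self.subqueues = [deque(), deque(), deque()]

    def enqueue(self, flow, level=0):
        """New suspended flows enter flow waiting queue 1 (level 0) by default."""
        self.subqueues[level].append(flow)

    def dequeue(self):
        """Take the flow at the head of the highest-priority non-empty sub-queue
        and return it with its transmission time quantum."""
        for level, queue in enumerate(self.subqueues):
            if queue:
                return queue.popleft(), self.quanta[level]
        return None, 0

def on_idle_path(active_flows, suspended_high, suspended_low):
    """When the path becomes idle, schedule a suspended flow;
    the high priority feedback queue is served before the low priority one."""
    for mfq in (suspended_high, suspended_low):
        flow, quantum = mfq.dequeue()
        if flow is not None:
            active_flows.add(flow)
            return flow, quantum
    return None, 0

# Hypothetical example with t = 20 s, the value used in Test 2 of Section 4.
if __name__ == '__main__':
    high, low = MultilevelFeedbackQueue(t=20), MultilevelFeedbackQueue(t=20)
    low.enqueue('flow-c3')
    low.enqueue('flow-c4')
    print(on_idle_path(set(), high, low))   # -> ('flow-c3', 20)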
Figure 3. Multilevel feedback queue.
4. Experiment Results and Performance Analysis

The SDN controller selected in our scheme is the free open source Floodlight, which runs in the Eclipse environment on the Ubuntu system. The virtual switch uses the open source Open vSwitch 2.3.0. The virtual network was created by Mininet in Ubuntu 14.04.

We implemented DSCSD on top of the Floodlight v1.2 controller in our simulation experiments. Floodlight is free and open source, and modules can be freely added and deleted, which provides much convenience for our test. In our experimental setup, the virtual switch was created by Open vSwitch, and Floodlight was chosen as the SDN controller. We added two modules, initflowtable and trafficmanage, on Floodlight. The module initflowtable was used to initialize the variables with values and install some flow tables for preprocessing. The main function of our scheme lies in the module trafficmanage, which monitors the Packet-in and Flow-removed messages of the OpenFlow protocol and records the installed flow tables as well as link information including congestion and bandwidth. When a new flow arrives or a flow needs to be scheduled, the module trafficmanage can achieve accurate and real-time detection and control.

As shown in Figure 4, we used several virtual hosts to simulate three data centers; h1 and h2 were the servers of two individual data centers, and c1, c2, c3, c4, and c5 were collectively used as another data center. Certainly, we assigned virtual IP addresses and MAC addresses for them. We also used the flows from any one of h1 and h2 to any one of c1, c2, c3, c4, and c5 to simulate the interconnected flows among the data centers. In accordance with the system model in Figure 2, the path (S1, S4, S5) was the shortest path, leaving (S1, S2, S3, S5) the second shortest. Here, we conducted two tests to verify the effectiveness and performance of DSCSD.

Test 1: Verify the effectiveness of DSCSD. We used Mininet to set a 4 M-bandwidth for both the shortest path and the second shortest path, and we also utilized the tool iperf to simulate four flows from h1 to c1 and c2, and from h2 to c3 and c4. Each flow required a 2 M-bandwidth. We discussed and analyzed the performance of the traditional network and DSCSD. Then, we chose a span under the selfsame condition to test the loss tolerance and real-time bandwidth of these flows. Some of these data are presented in Table 1. Here, we adopted four symbolic notations. The notation "h1–c1" means the traffic from h1 to c1. By analogy, "h1–c2" means the traffic from h1 to c2; "h2–c3" means the traffic from h2 to c3; and "h2–c4" means the traffic from h2 to c4. We also calculated the overall link utilization of both cases, as shown in Figure 5.
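For reference, the Test 1 traffic can be generated inside Mininet roughly as follows. Only the four 2 M flows and their endpoints come from the paper; the use of UDP, the 10 s duration, and the log file paths are assumptions made for illustration.

# Illustrative Test 1 traffic generation inside an already started Mininet network
# (host names follow Figure 4; addresses, UDP, and the 10 s duration are assumptions).
def run_test1(net):
    h1, h2 = net.get('h1', 'h2')
    c1, c2, c3, c4 = net.get('c1', 'c2', 'c3', 'c4')

    # One iperf UDP server per receiver, logging per-second bandwidth and loss.
    for receiver in (c1, c2, c3, c4):
        receiver.cmd('iperf -s -u -i 1 > /tmp/iperf_%s.log &' % receiver.name)

    # Four 2 Mbit/s flows for 10 s: h1->c1, h1->c2, h2->c3, h2->c4.
    h1.cmd('iperf -c %s -u -b 2M -t 10 &' % c1.IP())
    h1.cmd('iperf -c %s -u -b 2M -t 10 &' % c2.IP())
    h2.cmd('iperf -c %s -u -b 2M -t 10 &' % c3.IP())
    h2.cmd('iperf -c %s -u -b 2M -t 10 &' % c4.IP())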
Figure 4. Experimental topology.
Table 1. Real-time bandwidth comparison. (Bandwidth recorded per second over 0–10 s for the four flows h1–c1, h1–c2, h2–c3, and h2–c4, under the traditional network and under DSCSD: with DSCSD every flow sustains roughly 1.94–1.95 Mbps, whereas in the traditional network the flows h1–c1 and h2–c4 collapse to only tens to a few hundred kbps.)
Figure 5. Overall link utilization comparison.

In Test 1, it held that the loss tolerance of the four flows in a traditional network was high, with two of them close to 100%. As for the overall link utilization, the average link utilization of our scheme was kept at 97%, while that of the traditional network was just 48%. This comparison verified the high improvement of link utilization obtained by using DSCSD.

Test 2: Congestion control with multilevel feedback queues. In this test, we still used the simulation of Test 1. The difference was that we were only interested in the shortest path (S1, S4, S5) and set a 5 M-bandwidth. The time t was 20 s. Then, we let c5 send UDP flows to h1 with a 3 M-bandwidth consistently, while c1, c2, c3, and c4 sent UDP flows to h2 with a 1 M-bandwidth. Here, we divided Test 2 into two cases: real-time bandwidth without congestion control queues and real-time bandwidth with congestion control queues; then we discussed and analyzed the effectiveness of congestion control in these two cases. The real-time bandwidths in the two cases are respectively shown in Figures 6 and 7.
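Under the same assumptions as the Test 1 sketch above, the Test 2 traffic pattern can be reproduced roughly as follows; the continuous 3 M flow from c5, the 1 M flows from c1–c4, and the arrival of c3 and c4 at about 20 s follow the text, while the flow durations and other details are illustrative.

import time

# Illustrative Test 2 traffic pattern (assumed host names; timings approximate the text).
def run_test2(net):
    h1, h2 = net.get('h1', 'h2')
    c1, c2, c3, c4, c5 = net.get('c1', 'c2', 'c3', 'c4', 'c5')

    h1.cmd('iperf -s -u > /tmp/iperf_h1.log &')
    h2.cmd('iperf -s -u > /tmp/iperf_h2.log &')

    c5.cmd('iperf -c %s -u -b 3M -t 120 &' % h1.IP())   # continuous 3 M flow from c5
    c1.cmd('iperf -c %s -u -b 1M -t 120 &' % h2.IP())
    c2.cmd('iperf -c %s -u -b 1M -t 120 &' % h2.IP())

    time.sleep(20)                                       # t = 20 s: the normal flows arrive
    c3.cmd('iperf -c %s -u -b 1M -t 120 &' % h2.IP())
    c4.cmd('iperf -c %s -u -b 1M -t 120 &' % h2.IP())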
Figure 6. Real-time bandwidth without congestion control queues.
Figure 7. Real-time bandwidth with congestion control queues.
At 10 s, the path (S1, S4, S5) has been saturated with the flows from c5, c1, and c2. At 20 s, we received the request of the normal flows from c3 and c4. At this time, if we do not use congestion control queues (case 1), the flow from c5 with high priority directly affects the quality of the transmission of the other flows, which can be observed in Figure 6. In contrast, we noticed the comparison in Figure 7: if we used congestion control queues (case 2), when we received the request of the normal flows from c3 and c4, the flow from c5 was scheduled into the congestion control queues. Once bandwidth remains on the path, the flow from c5 will be rescheduled to the path to be transmitted. With this mechanism, we can provide a new solution to control the congestion caused by the prolonged occupation of malicious flows.

As shown in Figure 8, we demonstrated the comparison of the link delay in the two aforementioned cases. We can easily draw the conclusion that by using the congestion control queue, the congestion caused by malicious flows can be well addressed with a relative reduction in link delay.
Figure 8. Comparison of the link delay.
5. Conclusions and Future Work

In this paper, we focused on the problem of traffic scheduling and congestion control across data centers and aimed to provide an approach that could greatly improve link utilization. To realize this goal, we designed DSCSD, a dynamic traffic scheduling and congestion control scheme across data centers based on SDN. The moment a flow arrives, it relies on the combination of traffic parameters and link information to select paths. Furthermore, it can achieve real-time dynamic scheduling to avoid congestion caused by bursts of instantaneous traffic, and can also balance the link loads. Compared with traditional approaches, the experiments and analysis showed an obvious effect on the classification and diversion of flows, thereby improving the link utilization across data centers. Better than the SDN-based scheme with a threshold value, the real-time monitoring and dynamic scheduling of the shortest paths were fully reflected. Meanwhile, we innovatively adopted the mechanism of a multilevel feedback queue for congestion control, which is suitable for different types of flows and can implement anomaly detection by preventing malicious flows from chronically occupying the bandwidth. Our proposed DSCSD scheme can be easily deployed to existing data center networks that suffer from low utilization of link resources, such as the data centers of live video streaming platforms and online video platforms.

Although our scheme solved the traffic scheduling problem efficiently, there are still some limitations. First, our scheme is based on SDN, so DSCSD is not applicable in traditional network environments or a hybrid network environment. Second, more fine-grained and flexible hierarchical control might be helpful to further enhance the experimental results. In addition, we did not take into account the issue of energy saving in the traffic scheduling and congestion control of data center networks. Accordingly, we would like to enrich DSCSD with energy management in the future.

Author Contributions: D.S. conceived and designed the system model and algorithm; K.Z. was responsible for literature retrieval and chart making; Y.F. performed the experiments, analyzed the data, and wrote the paper; J.C. designed the research plan, conceived the algorithm model, and polished the paper.

Funding: This research was funded by the National Natural Science Foundation of China (grant number 61502008), the Key Scientific Research Project of Henan Higher Education (grant number 16A520084), the Natural Science Foundation of Anhui Province (grant number 1508085QF132), and the Doctoral Research Start-Up Funds Project of Anhui University.

Conflicts of Interest: The authors declare no conflict of interest.
References

1. Cui, L.; Yu, F.R.; Yan, Q. When big data meets software-defined networking: SDN for big data and big data for SDN. IEEE Netw. 2016, 30, 58–65.
2. Lan, Y.L.; Wang, K.; Hsu, Y.H. Dynamic load-balanced path optimization in SDN-based data center networks. In Proceedings of the 10th International Symposium on Communication Systems, Networks and Digital Signal Processing, Prague, Czech Republic, 20–22 July 2016; pp. 1–6.
3. Ghaffarinejad, A.; Syrotiuk, V.R. Load Balancing in a Campus Network Using Software Defined Networking. In Proceedings of the Third GENI Research and Educational Experiment Workshop, Atlanta, GA, USA, 19–20 March 2014; pp. 75–76.
4. Xia, W.; Wen, Y.; Foh, C.; Niyato, D.; Xie, H. A Survey on Software-Defined Networking. IEEE Commun. Surv. Tutor. 2015, 17, 27–51.
5. Nunes, A.; Mendonca, M.; Nguyen, X.; Obraczka, K.; Turletti, T. A Survey of Software-Defined Networking: Past, Present, and Future of Programmable Networks. IEEE Commun. Surv. Tutor. 2014, 16, 1617–1634.
6. Lin, P.; Bi, J.; Wang, Y. WEBridge: West–east bridge for distributed heterogeneous SDN NOSes peering. Secur. Commun. Netw. 2015, 8, 1926–1942.
7. Sezer, S.; Scott-Hayward, S.; Chouhan, P.K.; Fraser, B.; Lake, D.; Finnegan, J.; Viljoen, N.; Miller, M.; Rao, N. Are we ready for SDN? Implementation challenges for software-defined networks. IEEE Commun. Mag. 2013, 51, 36–43.
8. McKeown, N.; Anderson, T.; Balakrishnan, H.; Parulkar, G.; Peterson, L.; Rexford, J.; Shenker, S.; Turner, J. OpenFlow: Enabling innovation in campus networks. ACM SIGCOMM Comput. Commun. Rev. 2008, 38, 69–74.
9. Kim, H.; Feamster, N. Improving network management with software defined networking. IEEE Commun. Mag. 2013, 51, 114–119.
10. Greenberg, A.; Hamilton, J.; Maltz, D.A.; Patel, P. The cost of a cloud: Research problems in data center networks. ACM SIGCOMM Comput. Commun. Rev. 2008, 39, 68–73.
11. Cheng, J.; Cheng, J.; Zhou, M.; Liu, F.; Gao, S.; Liu, C. Routing in Internet of Vehicles: A Review. IEEE Trans. Intell. Transp. Syst. 2015, 16, 2339–2352.
12. Ghemawat, S.; Gobioff, H.; Leung, S.T. The Google file system. In Proceedings of the Nineteenth ACM Symposium on Operating Systems Principles, Bolton Landing, NY, USA, 19–22 October 2003; pp. 29–43.
13. Shvachko, K.; Kuang, H.; Radia, S.; Chansler, R. The Hadoop Distributed File System. In Proceedings of the IEEE 26th Symposium on Mass Storage Systems and Technologies, Incline Village, NV, USA, 3–7 May 2010; pp. 1–10.
14. Dean, J.; Ghemawat, S. MapReduce: Simplified Data Processing on Large Clusters. In Proceedings of the 6th Conference on Symposium on Operating Systems Design & Implementation, San Francisco, CA, USA, 6–8 December 2004; pp. 137–150.
15. Ali, S.T.; Sivaraman, V.; Radford, A.; Jha, S. A Survey of Securing Networks Using Software Defined Networking. IEEE Trans. Reliab. 2015, 64, 1–12.
16. Tavakoli, A.; Casado, M.; Koponen, T.; Shenker, S. Applying NOX to the Datacenter. In Proceedings of the Eighth ACM Workshop on Hot Topics in Networks (HotNets-VIII), New York, NY, USA, 22–23 October 2009.
17. Tootoonchian, A.; Ganjali, Y. HyperFlow: A distributed control plane for OpenFlow. In Proceedings of the Internet Network Management Conference on Research on Enterprise Networking, San Jose, CA, USA, 27 April 2010; p. 3.
18. Yu, Y.; Lin, Y.; Zhang, J.; Zhao, Y.; Han, J.; Zheng, H.; Cui, Y.; Xiao, M.; Li, H.; Peng, Y.; et al. Field Demonstration of Datacenter Resource Migration via Multi-Domain Software Defined Transport Networks with Multi-Controller Collaboration. In Proceedings of the Optical Fiber Communication Conference, San Francisco, CA, USA, 9–13 March 2014; pp. 1–3.
19. Zhang, C.; Hu, J.; Qiu, J.; Chen, Q. Reliable Output Feedback Control for T-S Fuzzy Systems with Decentralized Event Triggering Communication and Actuator Failures. IEEE Trans. Cybern. 2017, 47, 2592–2602.
20. Zhang, C.; Feng, G.; Qiu, J.; Zhang, W. T-S Fuzzy-Model-Based Piecewise H∞ Output Feedback Controller Design for Networked Nonlinear Systems with Medium Access Constraint. Fuzzy Sets Syst. 2014, 248, 86–105.
21. Koponen, T.; Casado, M.; Gude, N.S.; Stribling, J.; Poutievski, L.; Zhu, M.; Ramanathan, R.; Iwata, Y.; Inoue, H.; Hama, T.; et al. Onix: A distributed control platform for large-scale production networks. In Proceedings of the USENIX Symposium on Operating Systems Design and Implementation, Vancouver, BC, Canada, 4–6 October 2010; pp. 351–364.
22. Benson, T.; Anand, A.; Akella, A.; Zhang, M. MicroTE: Fine grained traffic engineering for data centers. In Proceedings of CoNEXT, Tokyo, Japan, 6–9 December 2011.
23. Hindman, B.; Konwinski, A.; Zaharia, M.; Ghodsi, A.; Joseph, A.D.; Katz, R.; Shenker, S.; Stoica, I. Mesos: A Platform for Fine-Grained Resource Sharing in the Data Center. In Proceedings of the 8th USENIX Conference on Networked Systems Design and Implementation, San Jose, CA, USA, 25–27 April 2012; pp. 429–483.
24. Curtis, A.R.; Kim, W.; Yalagandula, P. Mahout: Low-overhead datacenter traffic management using end-host-based elephant detection. In Proceedings of the 2011 IEEE INFOCOM, Shanghai, China, 10–15 April 2011; pp. 1629–1637.
25. Kanagavelu, R.; Mingjie, L.N.; Mi, K.M.; Lee, B.; Francis; Heryandi. OpenFlow based control for re-routing with differentiated flows in Data Center Networks. In Proceedings of the 18th IEEE International Conference on Networks, Singapore, 12–14 December 2012; pp. 228–233.
26. Khurshid, A.; Zou, X.; Zhou, W.; Caesar, M.; Godfrey, P.B. VeriFlow: Verifying network-wide invariants in real time. ACM SIGCOMM Comput. Commun. Rev. 2012, 42, 467–472.
27. Tso, F.P.; Pezaros, D.P. Baatdaat: Measurement-based flow scheduling for cloud data centers. In Proceedings of the 2013 IEEE Symposium on Computers and Communications (ISCC), Split, Croatia, 7–10 July 2013.
28. Li, J.; Chang, X.; Ren, Y.; Zhang, Z.; Wang, G. An Effective Path Load Balancing Mechanism Based on SDN. In Proceedings of the IEEE 13th International Conference on Trust, Security and Privacy in Computing and Communications, Beijing, China, 24–26 September 2014; pp. 527–533.
29. Li, D.; Wang, S.; Zhu, K.; Xia, S. A survey of network update in SDN. Front. Comput. Sci. 2017, 11, 4–12.
30. Jain, S.; Kumar, A.; Mandal, S.; Ong, J.; Poutievski, L.; Singh, A.; Venkata, S.; Wanderer, J.; Zhou, J.; Zhu, M.; et al. B4: Experience with a globally-deployed software defined WAN. ACM SIGCOMM Comput. Commun. Rev. 2013, 43, 3–14.
31. Alizadeh, M.; Atikoglu, B.; Kabbani, A.; Lakshmikantha, A.; Pan, R.; Prabhakar, B.; Seaman, M. Data center transport mechanisms: Congestion control theory and IEEE standardization. In Proceedings of the 46th Annual Allerton Conference on Communication, Control, and Computing, Urbana-Champaign, IL, USA, 23–26 September 2008; pp. 1270–1277.
32. Duan, Q.; Ansari, N.; Toy, M. Software-defined network virtualization: An architectural framework for integrating SDN and NFV for service provisioning in future networks. IEEE Netw. 2016, 30, 10–16.
33. Zhong, H.; Fang, Y.; Cui, J. Reprint of "LBBSRT: An efficient SDN load balancing scheme based on server response time". Future Gener. Comput. Syst. 2018, 80, 409–416.
34. Shu, R.; Ren, F.; Zhang, J.; Zhang, T.; Lin, C. Analysing and improving convergence of quantized congestion notification in Data Center Ethernet. Comput. Netw. 2018, 130, 51–64.

© 2018 by the authors.
Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).