A Rate Allocation Framework for Multi-Class Services in Software-Defined Networks

Minh-Thuyen Thi · Thong Huynh · Mikio Hasegawa · Won-Joo Hwang

Minh-Thuyen Thi
Department of Information and Communications System, Inje University, 197 Inje-ro, Gimhae, Gyeongnam, Korea
E-mail: [email protected]

Thong Huynh
Department of Electrical Engineering, Tokyo University of Science, 6-3-1 Niijyuku, Katsushika-ku, Tokyo 125-8585, Japan
E-mail: [email protected]

Mikio Hasegawa
Department of Electrical Engineering, Tokyo University of Science, 6-3-1 Niijyuku, Katsushika-ku, Tokyo 125-8585, Japan
E-mail: [email protected]

Won-Joo Hwang (corresponding author)
Department of Information and Communications Engineering, HSV-TRC, Inje University, 197 Inje-ro, Gimhae, Gyeongnam, Korea
E-mail: [email protected]

Abstract Software Defined Networking (SDN) is a network architecture with a programmable control plane (e.g., controllers) and a simple data plane (e.g., forwarders). One of the popular SDN protocols/standards is OpenFlow, for which researchers have recently proposed several quality-of-service (QoS) mechanisms. However, existing proposals for rate allocation have limitations in network scalability and in the support of multi-class services. In the literature, rate allocation is commonly formulated within the framework of Network Utility Maximization (NUM). Nevertheless, multi-class services are rarely considered in that framework, since they make the formulated NUM nonconvex and prevent its subgradient-based algorithm from converging. In this paper, we propose a scalable QoS rate allocation framework for OpenFlow that supports multi-class services. The convergence issue in the algorithm of our NUM-based framework is resolved by an admission control scheme. Network scalability is improved by our decentralized algorithms, which can run
on multiple parallel controllers. Extensive simulation and emulation results are provided to evaluate the performance of our method.

Keywords OpenFlow · inelastic flow · network utility maximization

1 Introduction

Software Defined Networking (SDN) supports the implementation of new management schemes and policies at its programmable control plane; its data plane only has to follow the forwarding rules issued by that control plane. SDN is a promising architecture for supporting quality-of-service (QoS) compared to other architectures such as IntServ (Integrated Services) and DiffServ (Differentiated Services) [1], [2]. OpenFlow is an SDN protocol/standard whose specification has begun to include simple QoS support [3]. One advantage of SDN/OpenFlow over other architectures is its openness to the implementation of new QoS mechanisms. These mechanisms matter because, through them, OpenFlow can attract wider attention and adoption. As shown in [2] and [4], few existing works target QoS support in SDN/OpenFlow. Moreover, recent studies of QoS support in OpenFlow, especially QoS rate allocation, still have limitations: some do not investigate the scalability of the centralized controller, while others do not consider multi-class services and admission control ([5], [6], [7], [8]).

Based on network utility maximization (NUM) [9], we propose a QoS rate allocation framework with decentralized algorithms for multi-class services in OpenFlow-based networks. At the same time, we utilize OpenFlow to support this framework in two ways: admission control and the reduction of algorithm execution time. We deal with three main issues: the scalability of OpenFlow's centralized control, admission control for multi-class services, and the processing resources required to remove the time-scale separation assumption.

First, the algorithms of our framework have a decentralized design that improves network scalability. In OpenFlow-based networks, scalability is limited by the capacity of the centralized controller, so we propose to run the algorithms on multiple parallel controllers. When more controllers are used, the network can handle a larger number of active flows and a higher degree of flow dynamics. The evaluation results show that our decentralized algorithms improve the scalability of an emulated OpenFlow network.

Second, our framework considers two classes of services: elastic and inelastic. The two are rarely considered together in a single NUM-based rate allocation formulation, partly because with inelastic traffic the problem becomes nonconvex and its subgradient-based rate allocation algorithm does not always converge [10], [11]. Our framework considers both services, and to deal with the convergence issue of the subgradient-based algorithm we propose an admission control scheme. Note that admission control is a natural approach for inelastic traffic [12]. The author in [12] shows that the performance of an inelastic flow drops significantly if its rate falls below a
critical level, while admission control can help avoid that situation. We propose that the centralized controllers of OpenFlow serve as central authorities performing admission control.

Third, OpenFlow can be utilized to reduce the computation time of a NUM-based subgradient iterative algorithm. This reduction contributes to removing the time-scale separation assumption that is typically adopted in NUM-based rate allocation [13]. That assumption holds that network flows are long-lived and that, whenever the number of flows in the network changes, the data rates of the flows are re-computed instantaneously. In a realistic setting, however, flows arrive and depart dynamically, and rate allocation algorithms, especially distributed ones, may not have enough time to converge [14], [15], [16]. We reduce computation time in two ways. First, the centralized computation of OpenFlow reduces the convergence time of subgradient iterative algorithms by avoiding synchronization delay and by controlling a dynamic step size. Second, we propose a heuristic to formulate reduced-NUM, a reduced version of the original NUM with a smaller problem size. The experiment results show that our algorithms significantly reduce computation time in comparison to other algorithms.

Our contributions are as follows.

– We present a QoS rate allocation framework for elastic and inelastic traffic in OpenFlow-based networks, and formulate reduced-NUM to decrease the time needed to find solutions (Section 3 and Section 4).
– We propose algorithms and present a decentralized design to run these algorithms on multiple parallel controllers, improving network scalability (Section 4).
– We provide extensive simulation and emulation results that can support the adoption and deployment of our scheme in practical OpenFlow-based networks (Section 5).

2 Related Works

Recently, there have been several studies of QoS rate allocation in SDN/OpenFlow. The authors in [5] proposed QoS Application Programming Interfaces (APIs) that allow network administrators to set rate allocation policies for different kinds of flows. In [6], QoS APIs allow end hosts to request guaranteed minimum bandwidth. Seddiki et al. [7] proposed a framework that classifies traffic and performs rate shaping according to user-specified flow priorities. The authors in [8] studied the allocation of link bandwidth and flow tables in SDN; these resources are allocated to achieve proportionally fair bandwidth allocation and minimum delay. However, the works in [5], [6], [7] and [8] depend on centralized control, and scalability is not investigated. Multi-class services and admission control are not considered in [8]. Moreover, in [5], [6] and [7], all flows are constantly controlled (e.g., by rate limiting); this approach requires a large amount of processing resources, especially when the number of active flows is large. In our work, we
propose a decentralized QoS rate allocation framework for multi-class services in OpenFlow-based networks. In our approach, instead of controlling the flows constantly, the controllers find rate allocation solutions and notify the flows' senders; those senders then control their own sending rates.

In this paper, we address three main issues: the scalability of the OpenFlow centralized controller, admission control for multi-class services, and the time-scale separation assumption.

There are concerns about the scalability of the centralized controller in SDN/OpenFlow [17]. The authors in [17] show that in large-scale networks (i.e., with a large number of forwarders or a high level of flow dynamics), the central controller can hardly handle all incoming requests. That concern is confirmed by a number of studies that propose support for multiple parallel controllers in OpenFlow deployments ([18], [19], [20], [1]), since such support has not been standardized in the OpenFlow specification [3], [20]. Motivated by this issue, we provide a decentralized design so that our algorithms can run on those parallel controllers, sharing computational tasks and improving scalability.

For NUM-based rate allocation with multi-class services, a number of studies exist. Unlike our work, these studies consider non-OpenFlow networks and adopt the time-scale separation assumption. The authors in [10] and [11] show that the rate allocation formulation becomes nonconvex when it contains both elastic and inelastic utility functions. Because of this nonconvexity, subgradient-based iterative rate allocation algorithms do not always converge. To deal with the convergence problem caused by nonconvexity, Tychogiorgos et al. [21] showed that heuristic algorithms are necessary; examples of research using heuristic algorithms are [10], [11], and [22]. In this paper, we propose a heuristic admission control algorithm to deal with this convergence issue.

For the time-scale separation assumption, there are mathematical results in [23] and [13]. In [23], the authors show that network stability can be achieved without the time-scale separation assumption, provided that the average amount of traffic is within the network stability region. In [13], Lin et al. show that in the presence of flow-level dynamics, network stability can be achieved by a large class of congestion control algorithms. These studies differ from ours in that they consider only elastic traffic and mostly present mathematical results, whereas we provide algorithms and evaluation results.
3 System Model

3.1 System Model

We apply the NUM framework to the set of active flows S. The problem is to maximize the total network utility, subject to the link capacity constraints:
\[
\begin{aligned}
\max \quad & \sum_{s \in S} U_s(x_s) \\
\text{subject to} \quad & \sum_{s:\, l \in L(s)} x_s \le c_l, \quad \forall l \in L, \\
& x_s \ge 0, \quad \forall s \in S,
\end{aligned}
\tag{1}
\]

where U_s(x_s) is the utility function of flow s with rate x_s, c_l is the capacity of link l, and L(s) is the set of links used by s. Table 1 summarizes the main notation used in this paper.
Table 1: Notation

    s           a network flow
    S           set of active flows in the network
    l           a network link
    L           set of all links
    x_s         data rate of flow s
    U_s(x_s)    utility function of flow s
    c_l         capacity of link l
    α           fairness parameter of the framework
    B           set of flows' priority levels
    b_s         priority level of flow s, b_s ∈ B
    L(s)        set of links used by flow s
    s̃           a newly arriving flow which needs flow-initiation from controllers
    S̃(s̃)        set of flows which share at least one common link with L(s̃) (including s̃)
    L̃(s̃)        set of links used by the flows in S̃(s̃)
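To make the formulation concrete, the following is a minimal sketch (ours, not part of the paper's method) that solves a toy instance of (1) with SciPy: two elastic flows sharing one 10-Mbps link, with the common logarithmic utilities U_s(x_s) = w_s log(1 + x_s) of proportional fairness [9]. All weights and capacities here are illustrative.

    # Toy instance of problem (1); numbers are illustrative only.
    import numpy as np
    from scipy.optimize import minimize

    w = np.array([1.0, 2.0])      # illustrative flow weights
    c = 10.0                      # capacity c_l of the single shared link

    def total_utility(x):
        # sum_s w_s * log(1 + x_s), a standard elastic utility [9]
        return np.sum(w * np.log1p(x))

    res = minimize(
        lambda x: -total_utility(x),               # maximize = minimize negative
        x0=np.array([1.0, 1.0]),
        bounds=[(0.0, None)] * 2,                  # x_s >= 0
        constraints=[{"type": "ineq",              # c_l - sum_s x_s >= 0
                      "fun": lambda x: c - x.sum()}],
    )
    print(res.x)   # expected roughly [3, 7]: the heavier flow gets more rate

The closed-form check: at the optimum, w_1/(1 + x_1) = w_2/(1 + x_2) with x_1 + x_2 = 10, which gives x = (3, 7).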
3.2 Multi-Class Services Framework

A concave utility function is widely used for all kinds of traffic in order to avoid nonconvexity. However, as shown in the work of Shenker [12], a sigmoidal utility function reflects the characteristics of inelastic traffic more accurately. Elastic traffic (e.g., file transfer) tolerates rate degradation, while inelastic traffic (e.g., video streaming) requires the data rate to exceed a preferred value for normal operation. We use two different forms of utility functions for the two kinds of traffic: a concave function for elastic flows and a sigmoidal function for inelastic flows. Moreover, the utility function of each flow has a priority parameter that reflects the QoS class of that flow. We assume that the utility functions of elastic flows have the following logarithmic form [24]:
\[
U_s(x_s) = \frac{b_s}{\alpha}\,\log\!\left(\frac{\alpha}{b_s}\,x_s + 1\right), \quad b_s \in B,
\tag{2}
\]
Fig. 1: Elastic flows and their marginal utility: (a) utility functions of elastic flows under two values of α (α = 6 and α = 30); (b) marginal utility of elastic flows under the same two values of α.
and the utility functions of inelastic flows have the following sigmoidal form [25], [10]:

\[
U_s(x_s) = \frac{1}{1 + e^{-\frac{\alpha}{b_s}(x_s - d_s)}},
\tag{3}
\]
where b_s denotes the priority of flow s, b_s ∈ B, and B is the set of all priority levels. Flows with higher priority have larger b_s. The value d_s at the inflection point of the inelastic utility curve (Fig. 2a) is the aforementioned preferred value; for multimedia flows, d_s is set based on the coding rate. The fairness parameter α controls the trade-off between fairness (among the allocated rates) and efficiency (the aggregate utility). This trade-off is the effect of the marginal utility [24], [26], which is the derivative of the utility function with respect to the data rate. Fig. 1b and Fig. 2b illustrate the marginal utilities of the curves in Fig. 1a and Fig. 2a, respectively. In the case of elastic traffic, consider an example of two flows with utility functions as in Fig. 1a. Provided that they have different b_s, their allocated data rates are influenced by the marginal utility effect. With small α, the derivatives of the two utility functions approach
zero slowly and at very different paces (Fig. 1b). The rate allocation algorithm then tends to favor the flow with the higher marginal utility in order to achieve a higher aggregate utility, and fairness degrades. In contrast, with a larger α, we obtain a lower aggregate utility and higher fairness. In the case of inelastic traffic, a similar effect occurs, as shown in Fig. 2a and Fig. 2b. After formulating the model of multi-class services, our problem keeps the same form as (1), except that U_s(x_s) becomes a flow-specific utility function.

Fig. 2: Inelastic flows and their marginal utility: (a) utility functions of inelastic flows under two values of α (α = 6 and α = 30), with inflection point d_s; (b) marginal utility of inelastic flows under the same two values of α.
4 Solution Procedure

In our network, when a new flow s̃ arrives, the controllers first find its routing solution, then compute the rate allocation for s̃ and the other flows. To find the routing solution, the controllers maintain the link cost $\sum_{s:\, l \in L(s)} x_s^* - c_l$ for each link l, where x*_s is the allocated rate of a flow s ∈ S. The controllers use Dijkstra's algorithm to find the shortest path L(s̃) for s̃ based on those costs. When a flow departs, the controllers do not execute any algorithm; they only update S. The OpenFlow protocol supports this design by providing flow management and the collection of network state information: topology information is gathered using the Link-Layer Discovery Protocol [27], while link capacity is collected by requesting port descriptions from forwarders [3].
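A rough sketch of this routing step follows (ours, not the paper's pseudocode). Since Dijkstra's algorithm requires non-negative weights, negative costs on under-loaded links are clamped to zero here, an assumption the paper does not spell out.

    # Dijkstra over per-link costs sum_{s: l in L(s)} x*_s - c_l.
    import heapq

    def route(adj, src, dst):
        """adj: {node: [(neighbor, load_minus_capacity), ...]}; dst reachable."""
        dist, prev = {src: 0.0}, {}
        heap = [(0.0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == dst:
                break
            if d > dist.get(u, float("inf")):
                continue                             # stale queue entry
            for v, cost in adj.get(u, []):
                nd = d + max(0.0, cost)              # clamped link cost
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(heap, (nd, v))
        path = [dst]
        while path[-1] != src:                       # backtrack the path L(s~)
            path.append(prev[path[-1]])
        return path[::-1]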
4.1 Reduced-NUM

After finding the routing solution L(s̃), the controllers compute the rate allocation using a subgradient projection algorithm. Since our NUM formulation has sigmoidal utility functions, the dual objective function may not be differentiable everywhere [10]; hence, we cannot use a gradient-based method. To reduce the computation time of this algorithm, we formulate reduced-NUM, a reduced version of (1). The reduced-NUM formulation only includes the flows that share at least one common link with L(s̃), instead of all active flows S. We use S̃(s̃) to denote that set of link-sharing flows (including s̃).

As shown in [16], for a NUM-based framework to be implemented in a practical network, sub-optimality is inevitable. Our reduced-NUM also leads to a sub-optimal solution: by the definitions of S̃(s̃) and S, we always have S̃(s̃) ⊆ (S ∪ {s̃}) and therefore |S̃(s̃)| ≤ |S| + 1.

The reduced-NUM formulation has the following form:

\[
\begin{aligned}
\max \quad & \sum_{s \in \tilde{S}(\tilde{s})} U_s(x_s) \\
\text{subject to} \quad & \sum_{s:\, l \in L(s)} x_s \le c_l, \quad l \in \tilde{L}(\tilde{s}), \\
& x_s \ge 0, \quad s \in \tilde{S}(\tilde{s}),
\end{aligned}
\tag{4}
\]

where L̃(s̃) is the set of links used by the flows in S̃(s̃).

In the subgradient projection method, the computation of each x_s, s ∈ S̃(s̃), is given as:

\[
x_s(\lambda^s) = \arg\max_{x_s} \{U_s(x_s) - x_s \lambda^s\},
\tag{5}
\]

where λ_l is the Lagrange multiplier associated with link l and $\lambda^s = \sum_{l \in L(s)} \lambda_l$. The value of λ_l at iteration t + 1 is:

\[
\lambda_l(t+1) = \left[\lambda_l(t) - \gamma(t)\left(c_l - \sum_{s \in \{S \setminus \tilde{S}(\tilde{s})\}:\, l \in L(s)} x_s^* - \sum_{s \in \tilde{S}(\tilde{s}):\, l \in L(s)} x_s(\lambda^s(t))\right)\right]^+, \quad l \in \tilde{L}(\tilde{s}),
\tag{6}
\]

where [a]^+ = max{0, a} and γ(t) is the step size at iteration t.
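A minimal sketch of iterations (5)-(6) on reduced-NUM (4), with the utilities (2)-(3), is shown below. Two simplifications are ours, not the paper's procedure: the per-flow maximization (5) is done by grid search (the sigmoidal case has no simple closed form), and capacities are treated as residual, i.e., the fixed rates x*_s of flows outside S̃(s̃) are assumed already subtracted.

    import numpy as np

    def utility(x, kind, b, alpha=8.0, d=2.0):
        if kind == "elastic":                                  # Eq. (2)
            return (b / alpha) * np.log((alpha / b) * x + 1.0)
        return 1.0 / (1.0 + np.exp(-(alpha / b) * (x - d)))    # Eq. (3)

    def solve_reduced_num(flows, routes, cap, iters=500, gamma0=0.1):
        """flows: {s: (kind, b_s)}; routes: {s: set of links}; cap: {l: c_l}."""
        lam = {l: 0.0 for l in cap}
        grid = np.linspace(0.0, max(cap.values()), 400)        # candidate rates
        for t in range(1, iters + 1):
            x = {}
            for s, (kind, b) in flows.items():
                lam_s = sum(lam[l] for l in routes[s])         # path price
                x[s] = grid[np.argmax(utility(grid, kind, b) - grid * lam_s)]
            for l in cap:                                      # Eq. (6), projected
                load = sum(x[s] for s in flows if l in routes[s])
                lam[l] = max(0.0, lam[l] - (gamma0 / t) * (cap[l] - load))
        return x, lam

With only elastic flows the iterates converge; with sigmoidal utilities the rates of some flows may keep oscillating, which is exactly the case handled by the admission control scheme of the next subsection.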
4.2 Admission Control Algorithm

Problem (4) is nonconvex since it has both concave and nonconcave utility functions [11]. When this problem is solved by the subgradient projection method, the algorithm may not converge. When the algorithm does not converge, the allocated rates of the inelastic flows oscillate between two values: the lower one is zero and the upper one is positive; different flows have different upper values. In our admission control scheme, after running the algorithm for a specified time, if the algorithm converges, the rate allocation solution is found and we admit s̃. If the algorithm does not converge, we perform a capacity check: for the flows that oscillate, we check whether their upper values satisfy the capacity constraint. If the constraint is satisfied, s̃ is admitted and its allocated rate is its upper value; otherwise, we reject s̃. This check avoids the case in which the algorithm does not converge and s̃ is rejected even though a feasible solution exists. Algorithm 1 summarizes our admission control heuristic.

The check requires the link capacities, the set of flows in S̃(s̃), and their allocated rates; these data are maintained and updated by the controllers. Note that the check is conducted only on the links of L(s̃), instead of all the links in L, because if we admit s̃, the data rates of the flows in S̃(s̃) will decrease due to bandwidth sharing, and therefore congestion can only occur on the links of L(s̃).

Algorithm 1 Admission control algorithm
  vector x̃ ← output of the subgradient algorithm for (4)
  if x̃ converges to one vector then
    return admit s̃ with x̃ as the rate allocation
  else
    vector x⁻ ← x̃ (for flows which oscillate, take their upper values)
    for link l in L(s̃) do
      if $\sum_{s:\, l \in L(s)} x_s^- > c_l$ then
        return reject s̃
      end if
    end for
    return admit s̃ with x⁻ as the rate allocation
  end if
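For concreteness, a Python transcription of Algorithm 1's decision step follows. Detecting oscillation by comparing the last two iterates of each rate is our simplification; the paper only states that non-convergent rates oscillate between two values.

    def admit_new_flow(history, routes, links_new, cap, tol=1e-3):
        """history: {s: (x_s at t-1, x_s at t)}; links_new: the links of L(s~);
        returns (admitted, rates or None)."""
        if all(abs(a - b) <= tol for a, b in history.values()):
            return True, {s: h[1] for s, h in history.items()}   # converged: admit
        upper = {s: max(h) for s, h in history.items()}          # upper values
        for l in links_new:                                      # check only L(s~)
            if sum(r for s, r in upper.items() if l in routes[s]) > cap[l]:
                return False, None                               # reject s~
        return True, upper                                       # admit with upper values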
4.3 Decentralized Design

Our algorithms run on multiple parallel controllers to share computational tasks and improve scalability. We cluster the network, and each controller is responsible for the computational tasks in one cluster. Fig. 3 illustrates our network, in which each controller belongs to a cluster of forwarders.

Fig. 3: Clustered network and control messages

Every controller has two levels of control over the forwarders: monitoring and executing the rate allocation. Specifically, each controller can
monitor and communicate with all forwarders. However, for the tasks of routing and rate allocation, each controller is responsible only for the forwarders in its cluster. When a new flow arrives at a forwarder, only the controller of the same cluster executes the routing and rate allocation algorithms. These tasks are performed locally at that controller. The global data it needs to maintain and update are the network topology, the link capacities, the set of active flows S, and their allocated rates x*_s. Each cluster contains no more forwarders than its controller can handle rate allocation for. For networks of different sizes, a sufficient number of controllers is used; however, the total number of forwarders must be less than or equal to the maximum quantity that a controller can monitor.

The control message exchanges in our network are shown in Fig. 3. The edge forwarder of a flow is the first forwarder at which the flow arrives. In the default control message exchange mechanism of OpenFlow, when a new flow arrives at its edge forwarder, this forwarder requests flow-initiation from the controller (M1 in Fig. 3). This controller executes the routing and rate allocation algorithms, then sends the routing results to all forwarders. We propose to add the following control messages (see the sketch below): (M2) in addition to sending the routing results to the forwarders, the controller also sends these results to the other controllers and, at the same time, sends the allocation results to the other controllers and the edge forwarders; (M3) the edge forwarders send the allocation result of each flow to the flow's source (each source then adjusts its transmission rate); (M4) when a flow departs, its edge forwarder notifies the controllers. The control messages (M2) and (M4) allow the controllers to keep up-to-date information on the active flows in S. We assume that the transmission rate of each flow (elastic or inelastic) can be adjusted by its source upon receiving the allocation result.
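The following is a hypothetical sketch of the message flow M1-M4 of Fig. 3. Every class, method, and field name here is illustrative, not part of the OpenFlow specification.

    class ClusterController:
        def __init__(self, peers):
            self.peers = peers              # the other parallel controllers
            self.active_flows = {}          # global view: flow -> rate x*_s

        def sync(self, flow, rate):        # receiving side of M2 / M4
            if rate is None:
                self.active_flows.pop(flow, None)   # peer reported a departure
            else:
                self.active_flows[flow] = rate      # peer reported an allocation

        def on_flow_initiation(self, flow, allocate):   # M1 from edge forwarder
            rate = allocate(flow)           # routing + Algorithm 1, run locally
            self.active_flows[flow] = rate
            for peer in self.peers:         # M2: keep peers' S and x* up to date
                peer.sync(flow, rate)
            return rate                     # M3: edge forwarder -> flow source

        def on_flow_departure(self, flow):  # M4 from edge forwarder
            self.active_flows.pop(flow, None)
            for peer in self.peers:
                peer.sync(flow, None)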
5 Evaluation

OPNET Modeler^1 is used to simulate and evaluate our algorithms in comparison with others. In the emulation, we verify the performance of our algorithms in the Mininet emulator [28].
5.1 Simulation

We evaluate our framework on three aspects: (i) performance with respect to network dynamics, (ii) the fairness-efficiency trade-off, and (iii) optimality. Our framework (the Framework for Rate allocation in OpenFlow-based networks, FRO) is compared with Optimization Flow Control (OFC) [29], Rate Allocation for Inelastic Flows (RAIF) [11], and Rate Control for Multi-class Services (RCMS) [10]. Routing is not considered in these three works, so they use the same routing algorithm as ours. Moreover, since OFC is proposed only for elastic traffic, it is simulated without inelastic flows.

The default settings of our simulation are as follows.

– Hardware models: The controllers and forwarders are created from a new node model, and the links from a new link model. The bandwidth of the links between forwarders is 20 Mbps; between controllers and forwarders it is 10 Mbps. The delay of each link is 1 ms.
– Topology: We generate a random topology with 300 forwarders and 600 links. In OPNET, the topology type is set to mesh/randomized, with a minimum of 1 and a maximum of 3 connections per node. The default number of controllers is 4.
– Traffic: For simplicity, the network flows are from forwarder to forwarder instead of host to host. Flow arrivals follow a Poisson process with a default arrival rate of 120 flows/s. Flow size is exponentially distributed, with a mean of 8 Mb for the file transfer service (i.e., elastic flows) and a mean duration of 8 seconds for the multimedia streaming service (i.e., inelastic flows); the traffic of each streaming flow is generated for 8 seconds and then terminated. The coding rate d_s of the inelastic flows is 80 Kbps. The ratio between the numbers of elastic and inelastic flows is 3:1.
– Other parameters: We define the flow-initiation time as the interval from the time a flow arrives at its edge forwarder to the time it receives its allocated rate or a rejection message. The default value of the fairness parameter is α = 8. Controllers generate the step-size sequence according to the rule γ(t) = γ₀/t, with constant γ₀ > 0. We use Jain's index [30] to measure fairness (a short sketch of both rules is given below).
^1 OPNET Modeler. OPNET Tech., Inc. [Online]. Available: http://www.opnet.com
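For reference, Jain's fairness index [30] and the diminishing step-size rule are transcribed below; the example rate vectors are illustrative.

    import numpy as np

    def jain_index(rates):
        # J(x) = (sum x)^2 / (n * sum x^2); equals 1 for a perfectly even allocation
        x = np.asarray(rates, dtype=float)
        return x.sum() ** 2 / (len(x) * np.sum(x ** 2))

    def step_size(t, gamma0=0.1):          # gamma(t) = gamma_0 / t, gamma_0 > 0
        return gamma0 / t

    print(jain_index([1.0, 1.0, 1.0, 1.0]))   # 1.0: perfectly fair
    print(jain_index([4.0, 1.0, 1.0, 1.0]))   # ~0.645: skewed allocation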
Fig. 4: Flow-initiation time

Table 2: Flow-initiation time

                           OFC       RCMS      RAIF      FRO
    At 160 active flows    239 ms    305 ms    291 ms    35 ms
    At 960 active flows    816 ms    982 ms    915 ms    66 ms
5.1.1 Performance with respect to network dynamics

Fig. 4 depicts the measured flow-initiation time versus the number of active flows. Note that in the simulation, the number of active flows is the product of the arrival rate and the flow duration. In the figure, FRO has a significantly lower flow-initiation time than the other schemes because it uses centralized computation and it solves the reduced problem (4) instead of the original one (1). Centralized computation reduces convergence time by avoiding synchronization delay when updating (5)-(6) and by controlling the dynamic step size of the subgradient algorithm. Table 2 shows the flow-initiation time at two extreme network states: 160 and 960 active flows. When the number of active flows increases, the flow-initiation time of FRO grows more slowly than that of the others. At 960 active flows, the flow-initiation time of FRO is about 12.4, 13.9, and 14.9 times lower than that of OFC, RAIF, and RCMS, respectively.

Fig. 5 illustrates the admission rate for different values of the arrival rate. Note that OFC is not plotted because that scheme considers neither inelastic traffic nor admission control. The figure shows that at a low arrival rate, the three methods admit nearly 100% of flows. However, when the arrival rate increases, the admission rates of RAIF and RCMS drop sharply.
Fig. 5: Admission rate
Fig. 6: Mean number of active flows
At an arrival rate of 120 flows/s, the admission rates of RAIF and RCMS are 9.33% and 10.08%, respectively, while the admission rate of FRO is 99.05%. The network is configured such that when a controller is busy executing flow-initiation for a flow, other flows arriving at that controller have to wait; waiting flows are rejected if they wait longer than a threshold. FRO rejects flows mainly through the capacity check in the rate allocation algorithm, while RAIF and RCMS reject mostly due to long waiting times. Fig. 6 depicts the number of active flows in the network at an arrival rate of 120 flows/s. For the same reason as in Fig. 5, FRO admits and handles a significantly larger number of active flows than the two other schemes.
Fig. 7: The deviation between the FRO and the optimal solution. The lowest deviation is 0.21%, the highest is 3.33%.
5.1.2 Sub-optimality

We refer to the solution of problem (1) as the optimal allocation and use it to evaluate our scheme's sub-optimality. To find this solution, our algorithm and rate allocation heuristic are used to generate a set of locally optimal solutions, and a search is then performed over this set. We measure the standard deviation between the allocated rates of our solution and the optimal ones, with the optimal rates serving as the reference (mean) values.

The main difference between (4) and (1) is the disparity between |S̃(s̃)| and |S|. The ratio |S̃(s̃)|/|S| affects the sub-optimality and also the flow-initiation time. This ratio tends to increase if the network has more active flows and fewer links: with a small number of links, there are few available routing paths, so each path tends to be shared by a larger number of flows, i.e., |S̃(s̃)| is larger. In this case, we achieve high optimality (Fig. 7) but the flow-initiation time is also high (Fig. 8). On the contrary, when |S̃(s̃)|/|S| is small, both optimality and flow-initiation time are low.
5.1.3 Fairness and efficiency trade-off

We run this simulation under three scenarios: number of priority levels |B| = 1, |B| = 4, and |B| = 8. The impact of α on the fairness of the data rates is shown in Fig. 9. For all three scenarios, fairness grows as α increases. Moreover, when the number of priority levels is large, the achieved rates of the flows tend to deviate from one another, lowering fairness.

Fig. 10 illustrates the change of the aggregate utility for different values of α. The standard deviation among priority levels is denoted dev(B). In the figure, both the priority deviation and the aggregate utility are normalized. This simulation includes three scenarios: dev(B) = 1, dev(B) = 0.5, and dev(B) = 0.2.
Fig. 8: The flow-initiation time. The lowest flow-initiation time is 31 ms, the highest is 92 ms.
Fig. 9: Fairness and α
We can see that, due to the marginal utility effect, the aggregate utility drops when α increases. Moreover, when dev(B) is large, the aggregate utility is high and decreases sharply: a larger dev(B) leads to a higher deviation of marginal utilities, which means the marginal utility effect is stronger, making the aggregate utility higher and its drop sharper. On the contrary, when dev(B) is small, the aggregate utility is low and drops slowly.
5.2 Implementation

For emulation, we use Mininet version 2.0 with the POX controller and Open vSwitch version 1.4.3. The emulator runs on a Linux-based workstation with a Core i7 2.93-GHz processor and 4096 MB of memory.
Fig. 10: Aggregate utility and α
The emulator is set up as follows.

– Topology: Based on the parametrized topologies in Mininet, we write a customized script and use it to generate a random network (a sketch of such a script appears below). Following the reference topologies in [28], the number of forwarders is set to 100 and the number of links to 200. In the topology graph, the vertex degree of each forwarder is in [2, 15].
– Hardware models: Each link has a bandwidth of 20 Mbps and a delay of 1 ms. The loss ratio of the links is 0% and the maximum queue size is 1000. These values are specified in the topology-generating script. The flow entries in the flow tables [3] have a timeout of 60 seconds.
– Traffic: Each forwarder is connected to one host that generates network flows to other hosts; this traffic generation is supported by the Scapy program^2. Flow arrivals follow a Poisson process with an arrival rate ranging from 20 to 80 flows/s. The coding rate d_s of the inelastic flows is 60 Kbps. The flow size, traffic ratio, and algorithmic parameters have the same values as in the simulation.

5.2.1 Performance with respect to network dynamics

We run the emulator for five scenarios: with 1, 2, 4, 6, and 8 controllers. The flow-initiation times of those scenarios are shown in Fig. 11. Similar to the simulation results, the flow-initiation time grows when the number of active flows increases. Note that the 4-controller scenario has the same settings as the FRO curve in Fig. 4.
^2 Scapy. [Online]. Available: http://www.secdev.org/projects/scapy/
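A sketch of such a topology-generating script is shown below. The link options follow Mininet's TCLink API as configured above; the random-graph logic is a simplification of ours (it does not enforce the [2, 15] degree bound), and the Topo.build style is that of newer Mininet releases rather than version 2.0.

    #!/usr/bin/env python
    import random
    from mininet.net import Mininet
    from mininet.topo import Topo
    from mininet.link import TCLink

    LINK_OPTS = dict(bw=20, delay='1ms', loss=0, max_queue_size=1000)

    class RandomTopo(Topo):
        def build(self, n_switches=100, n_links=200):
            switches = [self.addSwitch('s%d' % i) for i in range(n_switches)]
            for i, sw in enumerate(switches):   # one traffic host per forwarder
                self.addLink(self.addHost('h%d' % i), sw, **LINK_OPTS)
            for _ in range(n_links):            # random forwarder-to-forwarder links
                a, b = random.sample(switches, 2)
                self.addLink(a, b, **LINK_OPTS)

    if __name__ == '__main__':
        net = Mininet(topo=RandomTopo(), link=TCLink)   # requires root to start
        net.start()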
Fig. 11: Flow-initiation time
Fig. 11 also shows that, at the network scale of this emulation, the multiple-controller design outperforms the single-controller design. This network needs around four controllers; beyond that number, the performance does not improve considerably.
5.2.2 Control Message Overhead

The overhead is measured as the average amount of control messages per second. Fig. 12 illustrates the overhead of the five scenarios versus the number of active flows. We keep the flow size fixed and adjust the arrival rate in order to change the number of active flows. In our algorithms, the control messages are generated by the events of flow arrivals and departures; therefore, a higher number of active flows implies that such events occur more frequently, making the overhead larger. The results also show a trade-off between the control message overhead and the processing capacity, which is represented by the number of controllers: the overhead is large when the number of controllers is high, due to the proposed control messages (M2) and (M4). In Fig. 12, from the 1-controller scenario to the 8-controller scenario, the rate at which the overhead increases becomes progressively lower. This can be explained by the fact that the growth of the control message (M2) is affected not only by the number of controllers (Section 4.3) but also by the number of forwarders.
Fig. 12: Control message overhead

Table 3: Number of controllers for the first network: 60 forwarders and 60 links

    Arrival (flows/s)   Flow size: 8    16    24    32
    40                             3     3     4     5
    80                             5     5     6     6
    120                            6     7     8     9
5.2.3 Number of controllers

We measure the required number of controllers that provides acceptable performance with respect to the rejection rate. Following [31], the threshold on the maximum rejection rate is set to 0.05. Acceptable performance is defined as follows: when running a scenario, the rejection rate must be lower than or equal to the threshold for more than 99.5% of the simulation time. Tables 3, 4, and 5 show the required numbers of controllers for three different network sizes. The second network has the same number of forwarders but more links than the first one, while the third network has the same number of links but more forwarders than the second one. The number of required controllers is denoted n_c.

The results in the three tables show that n_c grows with the arrival rate and also with the flow size. Moreover, when two scenarios have the same product of arrival rate and flow size, the scenario with the larger arrival rate requires an equal or higher n_c. For instance, in Table 3, for the scenario (arrival : flow size) = (80 : 8), n_c = 5, while for the scenario (40 : 16), n_c = 3.
Table 4: Number of controllers for the second network: 60 forwarders and 200 links

    Arrival (flows/s)   Flow size: 8    16    24    32
    40                             1     1     1     1
    80                             2     2     2     2
    120                            2     3     3     4
Table 5: Number of controllers for the third network: 100 forwarders and 200 links

    Arrival (flows/s)   Flow size: 8    16    24    32
    40                             1     2     2     2
    80                             3     3     4     5
    120                            4     5     6     6
With a higher arrival rate, there is a greater chance that a flow will be rejected; hence, more controllers are required to guarantee acceptable performance. The results also suggest that the overall computational complexity is affected more by rate allocation than by routing. Note that the first network requires the most controllers while the second network requires the fewest. Given that the second network has more links than the first while the number of forwarders is the same, the routing computation in the second network has a higher complexity. However, the second network also has more routing paths than the first, which makes the number of link-sharing flows |S̃(s̃)| lower. Therefore, compared to the first network, routing in the second network requires higher computational complexity, but rate allocation requires less. We conclude that rate allocation affects the overall computational complexity more than routing does.
6 Conclusion

We propose a QoS rate allocation framework for multi-class services in OpenFlow-based networks. At the same time, we utilize OpenFlow to support rate allocation in two ways: the reduction of computation time and admission control. Evaluation results show that our algorithms significantly reduce computation time compared to other algorithms. Moreover, with a decentralized design, our algorithms can run on multiple parallel controllers, improving network scalability; i.e., the controllers can handle more active flows and a higher level of flow dynamics. In the evaluation, we also measure the appropriate number of controllers that provides acceptable network performance. This measurement and the other evaluation results in this paper can serve as references for network administrators when designing their networks.
Note that our formulation and evaluation address flow-level dynamics. Our framework cannot be applied directly to packet-level dynamics, in which packets arrive in bursts. For future research, we plan to develop a framework that considers packet-level dynamics; with such a framework, we can provide experiment results under a more realistic traffic model.

Acknowledgements This work was supported under the framework of the international cooperation program managed by the National Research Foundation of Korea (2014K2A2A4001678).
References

1. H. E. Egilmez and A. M. Tekalp. Distributed QoS architectures for multimedia streaming over software defined networks. IEEE Transactions on Multimedia, 16(6):1597-1609, Oct 2014.
2. Fei Hu, Qi Hao, and Ke Bao. A survey on software-defined network and OpenFlow: From concept to implementation. IEEE Communications Surveys & Tutorials, 16(4):2181-2206, Fourth quarter 2014.
3. Open Networking Foundation (ONF), Palo Alto, CA, USA. OpenFlow switch specification v1.3.0. [Online]. Available: https://www.opennetworking.org/ Accessed: 2015.
4. B. Sonkoly, A. Gulyas, F. Nemeth, J. Czentye, K. Kurucz, B. Novak, and G. Vaszkun. On QoS support to Ofelia and OpenFlow. In 2012 European Workshop on Software Defined Networking (EWSDN), pages 109-113, Oct 2012.
5. Wonho Kim, Puneet Sharma, Jeongkeun Lee, Sujata Banerjee, Jean Tourrilhes, Sung-Ju Lee, and Praveen Yalagandula. Automated and scalable QoS control for network convergence. In Proceedings of INM/WREN'10, pages 1-1, Berkeley, CA, USA, 2010. USENIX Association.
6. Andrew D. Ferguson, Arjun Guha, Chen Liang, Rodrigo Fonseca, and Shriram Krishnamurthi. Participatory networking: An API for application control of SDNs. SIGCOMM Comput. Commun. Rev., 43(4):327-338, August 2013.
7. M. Said Seddiki, Muhammad Shahbaz, Sean Donovan, Sarthak Grover, Miseon Park, Nick Feamster, and Ye-Qiong Song. FlowQoS: QoS for the rest of us. In Proceedings of the Third Workshop on Hot Topics in Software Defined Networking (HotSDN '14), pages 207-208, New York, NY, USA, 2014. ACM.
8. Feng Tao, Bi Jun, and Wang Ke. Allocation and scheduling of network resource for multiple control applications in SDN. China Communications, 12(6):85-95, June 2015.
9. F. P. Kelly, A. Maulloo, and D. Tan. Rate control in communication networks: Shadow prices, proportional fairness and stability. Journal of the Operational Research Society, 49(3):237-252, 1998.
10. Jang-Won Lee, Ravi R. Mazumdar, and Ness B. Shroff. Non-convex optimization and rate control for multi-class services in the internet. IEEE/ACM Transactions on Networking, 13(4):827-840, August 2005.
11. P. Hande, Shengyu Zhang, and Mung Chiang. Distributed rate allocation for inelastic flows. IEEE/ACM Transactions on Networking, 15(6):1240-1253, Dec 2007.
12. S. Shenker. Fundamental design issues for the future internet. IEEE Journal on Selected Areas in Communications, 13(7):1176-1188, Sept 1995.
13. Xiaojun Lin, N. B. Shroff, and R. Srikant. On the connection-level stability of congestion-controlled communication networks. IEEE Transactions on Information Theory, 54(5):2317-2338, May 2008.
14. Xiaojun Lin and N. B. Shroff. The impact of imperfect scheduling on cross-layer rate control in wireless networks. In Proceedings of IEEE INFOCOM 2005, volume 3, pages 1804-1814, March 2005.
15. Mung Chiang, S. H. Low, A. R. Calderbank, and J. C. Doyle. Layering as optimization decomposition: A mathematical theory of network architectures. Proceedings of the IEEE, 95(1):255-312, Jan 2007.
16. Tian Lan, Xiaojun Lin, Mung Chiang, and R. B. Lee. Stability and benefits of suboptimal utility maximization. IEEE/ACM Transactions on Networking, 19(4):1194-1207, Aug 2011.
17. S. H. Yeganeh, A. Tootoonchian, and Y. Ganjali. On scalability of software-defined networking. IEEE Communications Magazine, 51(2):136-141, February 2013.
18. Amin Tootoonchian and Yashar Ganjali. HyperFlow: A distributed control plane for OpenFlow. In Proceedings of INM/WREN'10, pages 3-3, Berkeley, CA, USA, 2010. USENIX Association.
19. Pingping Lin, Jun Bi, and Hongyu Hu. ASIC: An architecture for scalable intra-domain control in OpenFlow. In Proceedings of the 7th International Conference on Future Internet Technologies (CFI '12), pages 21-26, New York, NY, USA, 2012. ACM.
20. Dan Marconett and S. J. B. Yoo. FlowBroker: A software-defined network controller architecture for multi-domain brokering and reputation. Journal of Network and Systems Management, 23(2):328-359, 2015.
21. G. Tychogiorgos, A. Gkelias, and K. K. Leung. Utility-proportional fairness in wireless networks. In 2012 IEEE 23rd International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), pages 839-844, Sept 2012.
22. G. Tychogiorgos, A. Gkelias, and K. K. Leung. Towards a fair non-convex resource allocation in wireless networks. In 2011 IEEE 22nd International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), pages 36-40, Sept 2011.
23. Kexin Ma, R. Mazumdar, and Jun Luo. On the performance of primal/dual schemes for congestion control in networks with dynamic flows. In Proceedings of IEEE INFOCOM 2008, April 2008.
24. Kae Won Choi, Wha Sook Jeon, and Dong Geun Jeong. Efficient load-aware routing scheme for wireless mesh networks. IEEE Transactions on Mobile Computing, 9(9):1293-1307, Sept 2010.
25. Maryam Fazel and Mung Chiang. Network utility maximization with nonconcave utilities using sum-of-squares method. In Proceedings of the 44th IEEE Conference on Decision and Control (CDC-ECC '05), pages 1867-1874, 2005.
26. Li Chen, Bin Wang, Li Chen, Xin Zhang, and Dacheng Yang. Utility-based resource allocation for mixed traffic in wireless networks. In 2011 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), pages 91-96, April 2011.
27. J. Kempf, E. Bellagamba, A. Kern, D. Jocha, A. Takacs, and P. Skoldstrom. Scalable fault management for OpenFlow. In 2012 IEEE International Conference on Communications (ICC), pages 6606-6610, June 2012.
28. Bob Lantz, Brandon Heller, and Nick McKeown. A network in a laptop: Rapid prototyping for software-defined networks. In Proceedings of the 9th ACM SIGCOMM Workshop on Hot Topics in Networks (Hotnets-IX), pages 19:1-19:6, New York, NY, USA, 2010. ACM.
29. S. H. Low and D. E. Lapsley. Optimization flow control, I: Basic algorithm and convergence. IEEE/ACM Transactions on Networking, 7(6):861-874, Dec 1999.
30. Raj Jain, Dah-Ming Chiu, and William Hawe. A quantitative measure of fairness and discrimination for resource allocation in shared computer systems. DEC Research Report TR-301, 1984.
31. S. Weber and G. de Veciana. Rate adaptive multimedia streams: Optimization and admission control. IEEE/ACM Transactions on Networking, 13(6):1275-1288, Dec 2005.
Author Biographies

Minh-Thuyen Thi received his B.S. degree in Computer Science from Vietnam National University, Ho Chi Minh City, Vietnam, in 2011. He received
his M.Eng. degree in Information and Communications from Inje University, Gimhae, Republic of Korea, in 2014. He is currently working toward the Ph.D. degree at the Department of Information and Communications System at Inje University, Korea. His research interests are software-defined networking, device-to-device communication, and network optimization.

Thong Huynh received the M.E. and Ph.D. degrees from the Department of Information and Communications Engineering, Inje University, Korea, in 2012 and 2015, respectively. He received his bachelor's degree in Electronics and Telecommunications from the Excellent Engineering Training Program, Ho Chi Minh City University of Technology, Vietnam, in 2010. He is currently a postdoctoral fellow at the Department of Electrical Engineering, Tokyo University of Science, Japan. His research interests are in the fields of wireless communication, wireless sensor networks, LTE, and 5G networks.

Mikio Hasegawa received the B.Eng., M.Eng., and Dr.Eng. degrees from the Tokyo University of Science, Tokyo, Japan, in 1995, 1997, and 2000, respectively. Currently, he is a Professor in the Department of Electrical Engineering, Tokyo University of Science. His research interests include mobile networks, cognitive radio networks, optimization algorithms, and chaos theory and its applications. Dr. Hasegawa has served as a Secretary of the Chapter Operation Committee for the IEEE Japan Council. He has also served as an Associate Editor of the IEICE Transactions on Communications, an Associate Editor of IEICE Communications Express, and a Chair of the IEICE Technical Committee on Complex Communication Sciences.

Won-Joo Hwang received the Ph.D. degree in Information Systems Engineering from Osaka University, Japan, in 2002. He received his B.S. and M.S. degrees in Computer Engineering from Pusan National University, Pusan, Republic of Korea, in 1998 and 2000. Since September 2002, he has been an assistant professor at Inje University, Gyeongnam, Republic of Korea. His research interests are in network optimization and cross-layer design. He is a member of the IEICE and the IEEE.