Heterogeneous Multi-domain Network Virtualization with End-to-end Differentiated Service Provisioning and Virtual Network Organization

Xiaoyuan Cao, Noboru Yoshikane, Takehiro Tsuritani, Itsuro Morita
KDDI R&D Laboratories, 2-1-15 Ohara, Fujimino, Saitama, 356-8502 Japan
{xi-cao, yoshikane, tsuri, morita}@kddilabs.jp
Abstract: Hierarchical network virtualization is demonstrated on a heterogeneous multi-domain network testbed, based on SDN/Openflow and two-level Flowvisor. Differentiated services with various sets of QoS requirements are provisioned in the end-to-end optimally organized virtual networks. OCIS codes: (060.4250) Networks; (060.4254) Networks, combinatorial network design; (060.4256) network optimization
1. Introduction
Network virtualization (NV) [1] enables the co-existence of multiple isolated virtual networks (VNs) by slicing and restructuring a shared physical infrastructure, and therefore provides the flexibility to accommodate various differentiated services and applications. The diversification of network switching and transport technologies enriches the network environment, while at the same time making it more difficult for Service Providers (SPs) to deliver end-to-end (E2E) services across multiple heterogeneous domains [2, 3]. SPs need to create virtual E2E networks by synthesizing networking resources from multiple infrastructure providers (InPs). However, resources from heterogeneous network environments can be virtualized in vastly different forms. For instance, in an electrical IP network, resources are synthesized at a much finer granularity, while in a WDM network resources come in a coarse form and are switched based on wavelengths/wavebands. Optimally organizing and orchestrating the various virtual networks together so as to fully utilize the network resources is quite challenging. It is even more demanding to enable E2E differentiated service provisioning with a certain quality of service (QoS) on top of the synthesized multi-domain virtual networks, which is a significant issue for network virtualization in the future Internet and a key point of competition for SPs [4]. In particular, one service may have multidimensional QoS requirements to fulfill: an on-demand video service, for instance, requires not only low delay but also high bandwidth and high transmission quality. Furthermore, as traffic travels through heterogeneous network domains, different kinds of QoS requirements should apply. Specifically, electrical and optical networks have different definitions of and emphases on QoS, which must be considered piecewise along the path.
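Such a per-domain, multidimensional QoS requirement can be pictured as a small feasibility check. The following sketch is purely illustrative (the metric names, dimensions, and thresholds are our assumptions, not part of the demonstrated system): each domain checks only the QoS dimensions that apply to it, e.g. hop count as a delay proxy in an electrical domain and received optical power as a quality proxy in an optical domain.

```python
# Sketch: per-domain feasibility check for a multidimensional QoS
# requirement. Each domain evaluates only the dimensions relevant to
# it. All field names and bounds are illustrative assumptions.

def feasible(path_metrics, requirement):
    """True if every required QoS dimension meets its bound."""
    checks = {
        "max_hops":  lambda m, b: m["hops"] <= b,       # delay proxy (E-domain)
        "min_power": lambda m, b: m["power_dbm"] >= b,  # quality proxy (O-domain)
    }
    return all(checks[k](path_metrics, bound)
               for k, bound in requirement.items())

# E.g. a video-like service: low delay in the E-domain,
# high transmission quality in the O-domain.
e_req = {"max_hops": 2}
o_req = {"min_power": -20.0}

print(feasible({"hops": 2, "power_dbm": -15.0}, e_req))  # True
print(feasible({"hops": 2, "power_dbm": -25.0}, o_req))  # False
```

Applying the electrical-domain and optical-domain requirement sets separately to the sub-paths of one E2E route is what "piecewise" evaluation means here.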
In light of this, we demonstrate heterogeneous multi-domain network virtualization with E2E differentiated service provisioning and VN organization, based on centralized software-defined networking (SDN) and Openflow technology [5], which provides cost-effective and flexible control over the VNs. Flowvisor [6] is used as the virtualization tool for its advantages and convenience in an Openflow-enabled SDN approach. Flowvisor virtualizes the network by first creating multiple slices to be managed by separate controllers, and then creating different flowspaces that associate packets of a particular type in the network with specific slices. Virtualization is deployed hierarchically in the network, i.e. intra-domain virtualization plus inter-domain virtualization, the latter providing a border-node/connection view to the network Orchestrator at the top level. Services with different sets of QoS requirements are accommodated in multiple E2E VNs synthesized and organized from the heterogeneous network domains.

2. Testbed setup with hierarchical multi-domain virtualization
We set up a three-domain network as shown in Fig. 1, including two electrical networks (E-domain I and II) comprising several commercial Openflow switches (OFS, Nodes 1-6) connected with GbE electrical interfaces, and one optical WDM network (O-domain) comprising optical cross connects (OXC, Nodes A-F) based on all-optical switching, with corresponding Openflow Agents (for protocol and message exchange between Openflow Controllers/Flowvisors and the OXCs) [7]. In this experiment, the border links between the electrical networks and the optical network were constructed, for simplicity, with a few wavelengths generated by DWDM SFP modules on the border OFSs. Each domain was virtualized using an intra-domain Flowvisor and managed by a set of Controllers, where each Controller was in charge of one virtual electrical network (VEN) or virtual optical network (VON) [8] created from a slice of the physical network.
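The slice/flowspace mechanism can be modeled minimally as a priority-ordered lookup from packet-header matches to slices, where the winning entry decides which Controller sees the packet. This is a toy model for illustration only, not the actual Flowvisor API; the port numbers and slice names are assumptions.

```python
# Minimal model of Flowvisor-style slicing: flowspace entries map
# packet-header fields to slices; the highest-priority match wins.
# Illustrative only -- not the real Flowvisor interface.

def matches(flowspace, packet):
    """A flowspace entry matches if every specified field agrees."""
    return all(packet.get(k) == v for k, v in flowspace["match"].items())

def slice_for(packet, flowspaces):
    """Return the slice (i.e. controller) that should receive this packet."""
    for fs in sorted(flowspaces, key=lambda f: -f["priority"]):
        if matches(fs, packet):
            return fs["slice"]
    return None  # unmatched traffic is not delivered to any slice

flowspaces = [
    {"match": {"in_port": 9},  "priority": 10, "slice": "VEN1"},  # e.g. Video
    {"match": {"in_port": 10}, "priority": 10, "slice": "VEN2"},  # e.g. Big Data
]

print(slice_for({"in_port": 9, "dl_type": 0x0800}, flowspaces))  # -> VEN1
```

The same lookup, applied once per Flowvisor level, is what makes the hierarchical (intra-domain plus inter-domain) virtualization composable.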
The network was further virtualized by an inter-domain Flowvisor, where only the border nodes and the corresponding interconnecting links were visible to the Supreme Controllers at
the top level and the Orchestrator. Traditional multi-domain virtualization usually takes one of two extreme approaches: either the Orchestrator sees the full network topology, or each domain is represented by a single virtual node. Our hierarchical approach instead exposes the border nodes of each domain to the Orchestrator as representatives of that domain: the limited number of border nodes forms a topology that the Orchestrator can easily handle while still carrying the information necessary for interconnecting neighboring domains, which improves NV scalability.

3. Differentiated service provisioning and VN organization
To evaluate VN setup, three traffic flows with different QoS requirements for transmission delay and quality were generated at the client side to emulate three types of services (On-demand Video service, Big Data Analysis & Backup service [9], and Web Browse & Application service), as shown in Table 1. One Erbium-doped fiber amplifier (EDFA) was placed before OXC-A for optical signal amplification in order to differentiate the transmission quality between paths starting from OXC-A and OXC-B. The VN setup process is shown in Fig. 1 and proceeds as follows:
Step I: VN request. The Operator sent the Orchestrator a VN request indicating the set of QoS requirements for each service. According to the destination IP address, the Orchestrator requested the path computation element (PCE) [10] at the top level to calculate several preliminary E2E routes interconnected by border nodes.
Step II: Intra-domain VN calculation. The Orchestrator informed the PCE in each domain to calculate VNs between the pairs of border nodes chosen in Step I under intra-domain QoS requirements; e.g. for the Video service, low delay (fewer hops) was required in the E-domains and high transmission quality (a path with the EDFA) was required in the O-domain.
Within each domain, the PCE calculated several candidate VENs or VONs that could satisfy the QoS requirements and replied to the Orchestrator with the available border nodes and the corresponding resource utilization.
Step III: VN organization and inter-domain VN setup. Given the candidate VENs and VONs, the Orchestrator and the top-level PCE made the final decision as to which candidates should be used and how they should be connected. Here, a novel approach providing VN convergence and divergence mechanisms according to QoS requirements was applied for VN organization. As Table 1 shows, the Video and Big Data services had the same transmission quality requirement. Although they had different QoS requirements in E-domain I and were accommodated in different VENs there, accommodating them separately in the O-domain would degrade resource utilization and increase the management burden. In such circumstances, it is reasonable and beneficial to converge multiple VNs together and/or diverge one apart afterwards. In this example, VEN1 and VEN2 were bundled into the same VON1 for transmission in the O-domain and split into VEN4 and VEN5 in E-domain II. After this, the network represented by the border nodes was sliced by the inter-domain Flowvisor and interconnected into multiple E2E VNs managed by different Supreme Controllers at the top level.
Step IV: Intra-domain VEN and VON setup. According to the E2E VNs, each domain was sliced using its intra-domain Flowvisor, and multiple VENs or VONs controlled by different Controllers were set up. The final VN setup of our demonstration is shown in Table 1.
Step V: Path setup and transmission. After VN setup, transmission was initiated and paths were found within the VENs and VONs sequentially based on Openflow control, where Packet-in messages were forwarded from the OFSs and border OXC Agents to the corresponding Controllers, and Flow_Mod messages were sent out to set up the flowtables [11].
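The convergence decision of Step III can be sketched as grouping services by their shared O-domain QoS requirement, so that services with identical optical-domain needs share one VON. This is a simplified illustration: the grouping rule and VON numbering are our assumptions, with only the service names and quality levels taken from Table 1.

```python
# Sketch of VN convergence: services whose optical-domain QoS
# requirements coincide are carried in one shared VON rather than
# one VON each. Simplified model of the paper's Step III decision.

from collections import defaultdict

def converge(services):
    """services: {service name: O-domain quality requirement}.
    Returns {VON id: [services sharing that VON]}."""
    groups = defaultdict(list)
    for name, quality in services.items():
        groups[quality].append(name)
    # VON numbering here is arbitrary (sorted by quality label).
    return {f"VON{i}": members
            for i, (_, members) in enumerate(sorted(groups.items()), start=1)}

vons = converge({"Video": "High", "Big Data": "High", "Web": "Medium"})
print(vons)  # -> {'VON1': ['Video', 'Big Data'], 'VON2': ['Web']}
```

The inverse operation (divergence) is simply splitting one such group back into per-service VENs at the far border, as VON1 splits into VEN4 and VEN5 in E-domain II.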
Fig. 1: Heterogeneous multi-domain network virtualization and VN-Transmission setup process.

As shown in the Flow_Mod messages of Fig. 2 (a), Video and Big Data traffic coming from different input ports (P.9 and P.10) on OFS_3 were converged onto the same output port P.49 and diverged apart on OFS_5 (from port
P.51 to P.17 and P.18 separately). Also, as shown in the Flow_Mod messages and optical spectra in Fig. 2 (b, d), paths were set up along A-C-E (within VON1) for the Video and Big Data traffic with higher optical power (higher transmission quality), and along B-D-F (within VON2) for the Web traffic.
VN management: The VNs are managed hierarchically from the top level down, i.e. when a change is required for a certain service, the Orchestrator first checks whether the current E2E VN needs adjustment (specifically, border-node and interconnection changes); each domain then adjusts its VEN/VON setup according to the change from the top level. For instance, if the transmission quality requirement of the Big Data service is no longer as high as that of the Video service, the Orchestrator would decide that it should no longer be converged with the Video service. The Orchestrator would therefore reselect and reorganize the previously calculated and recorded candidate VENs and VONs, choosing border OXC node B for this service instead of A, and OFS_4 accordingly. The bottom level would then make the corresponding adjustments by selecting a new VEN from Node 1 to 4 and a new VON from Node B to E (shown in Fig. 2 (c)). Thus, the new E2E VN would be Node 1-2-4--B-D-E--5.

Table 1: E2E differentiated service accommodation and VN setup

Service   | Traffic Pattern   | Delay         | Quality | E-domain I | O-domain | E-domain II | E2E VN
Video     | Continuous bursts | Low           | High    | VEN1       | VON1     | VEN4        | VN1
Big Data  | Continuous stream | Medium        | High    | VEN2       | VON1     | VEN5        | VN2
Web       | Random packets    | Low to Medium | Medium  | VEN3       | VON2     | VEN6        | VN3
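The reactive path setup of Step V can be sketched as a toy model of the OpenFlow Packet-in/Flow_Mod exchange. This is not tied to any particular controller framework, and the classes, ports, and route table below are illustrative assumptions (the port numbers loosely follow the convergence on OFS_3 described above).

```python
# Toy model of reactive OpenFlow path setup: on a table miss, the
# switch raises a Packet-in; the slice's Controller looks up the next
# hop inside its VN and installs a flow entry (the Flow_Mod).

class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = {}  # match (here just in_port) -> out_port

    def packet_in(self, controller, in_port):
        """Forward the packet; ask the controller on a table miss."""
        if in_port not in self.flow_table:
            controller.handle_packet_in(self, in_port)  # Packet-in
        return self.flow_table[in_port]

class Controller:
    """One Controller per VN slice, holding that slice's routes."""
    def __init__(self, routes):
        self.routes = routes  # (switch name, in_port) -> out_port

    def handle_packet_in(self, switch, in_port):
        out_port = self.routes[(switch.name, in_port)]
        switch.flow_table[in_port] = out_port  # the Flow_Mod

# Video (P.9) and Big Data (P.10) converge onto P.49 on OFS_3.
ctrl = Controller({("OFS_3", 9): 49, ("OFS_3", 10): 49})
ofs3 = Switch("OFS_3")
print(ofs3.packet_in(ctrl, 9), ofs3.packet_in(ctrl, 10))  # -> 49 49
```

In the demonstration this exchange runs once per domain, slice by slice, with the border OXC Agents translating the same messages for the optical nodes.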
4. Conclusion
We have demonstrated hierarchical NV for a multi-domain network using two-level virtualization via Flowvisor. E2E VNs were provisioned and managed for differentiated services across the heterogeneous multi-domain network. The VNs were optimally organized through network convergence and divergence in the different network domains to improve resource utilization and operation efficiency. This work is partially supported by the Ministry of Internal Affairs and Communications (MIC) "STRAUSS" project, Japan.
5. References
[1] N. Chowdhury and R. Boutaba, "A survey of network virtualization," Computer Networks, 54(5), pp. 862-876 (2010).
[2] Q. Duan, "Modeling and Analysis of End-to-End Quality of Service Provisioning in Virtualization-Based Future Internet," 19th International Conference on Computer Communications and Networks (ICCCN), 2010.
[3] D. Siracusa, et al., "Edge-To-Edge Virtualization and Orchestration in Heterogeneous Transport Networks," 2013 IEEE SDN for Future Networks and Services (SDN4FNS), 2013.
[4] I. Ayadi, et al., "QoS-based network virtualization to future networks: An approach based on network constraints," 4th International Conference on the Network of the Future (NOF), 2013.
[5] The Open Networking Foundation homepage for SDN and Openflow. [Online]. Available: http://www.opennetworking.org/
[6] R. Sherwood, et al., "Can the production network be the testbed?," Proc. of USENIX OSDI, 2010.
[7] L. Liu, et al., "Field Trial of an OpenFlow-Based Unified Control Plane for Multilayer Multi-granularity Optical Switching Networks," J. Lightwave Technol., 31(4), pp. 506-514 (2013).
[8] A. Pages, et al., "Virtual network embedding in optical infrastructures," 14th International Conference on Transparent Optical Networks (ICTON), 2012.
[9] S. Kaisler, et al., "Big Data: Issues and Challenges Moving Forward," IEEE 46th Hawaii International Conference on System Sciences, 2013.
[10] A. Farrel, et al., "A Path Computation Element (PCE)-Based Architecture," IETF RFC 4655 (2006).
[11] B. Heller, "Openflow switch specification, version 1.0.0," Dec 2009. [Online]. Available: http://www.openflow.org/documents/openflowspec-v1.0.0.pdf
Fig. 2: Experimental results for path setup and VN organization (Flow_Mod messages and optical spectra, optical power [dBm] vs. wavelength [nm], of the Video & Big Data, Web, and new Big Data signals).