Software-Defined Networking of Linux Containers

Cosmin Costache¹, Octavian Machidon¹, Adrian Mladin¹, Florin Sandu¹, Razvan Bocu¹
¹ "Transilvania" University of Brasov, Department of Electronics and Computers, Brasov, Romania

Abstract — Today's IT organizations that act as service providers are under increasing pressure to keep up with the continuously growing demand for IT services. By shifting from interactive, manual processes to automated self-deployment of resources, providers can deliver on-demand services more efficiently. Virtualized resources, particularly the virtual machines and containers that are the focus of the present paper, make the infrastructure transparent to the final user, are easier to configure and can seamlessly migrate to another host in real time, preserving process status. Linux containers represent an emerging technology for fast and lightweight process virtualization. Because containers share the operating system kernel and therefore require fewer resources to run, a higher density of containers can be achieved on the same host than with other virtualization solutions such as hardware virtualization or para-virtualization. The paper presents a solution that enables the on-demand provisioning of Linux containers using Software-Defined Networking, a flexible approach that treats even control-level resources "as a Service".

Keywords — SDN; Linux containers; virtualization; Software-Defined Networking

I. INTRODUCTION

The rise of cloud services and the increasing number of mobile personal devices, such as smartphones, tablets and notebooks, accessing these services are putting considerable stress on the IT and network infrastructure. The ability to provision user services on demand and to optimize resource allocation has become a priority for many service providers.

In this paper we present a flexible solution based on lightweight Linux containers that enables the on-demand provisioning of user applications and services. The applications run inside isolated containers that can be started and interconnected on demand. After a container starts, its virtual network interface is connected to a virtual switch instance. The networking between the containers is configured dynamically by adding or updating flow definitions in the virtual switches.

The paper is structured as follows: section II explains the concept of software-defined networking, while section III covers the main virtualization solutions. Our solution for the software-defined networking of Linux containers is presented in section IV, and the conclusions in section V.

II. SOFTWARE DEFINED NETWORKING

Software-Defined Networking (SDN) is a new architectural concept that aims to decouple the network control and forwarding functions [1]. This separation makes the network control layer programmable. Another key feature of SDN is the use of open protocols for the communication between the network elements and the controller [2].

A typical SDN architecture consists of three layers. The top layer is the application layer, which includes the applications delivering services; these applications interact with the SDN controller, which facilitates automated network management. At the bottom is the physical network layer, composed of plain network elements. The network elements are simplified and concentrate only on the forwarding functions; all the decisions, route calculations and policies are implemented in the controller [3]. Figure 1 shows the typical SDN architecture.

In an SDN environment the controller is the central point, providing an abstract, centralized view of the entire network. The most common protocol used for the communication between the controller and the network elements (switches) is the OpenFlow protocol. There are several commercial and open-source SDN controllers; for the current research we have chosen OpenDaylight, a Java-based controller providing enterprise-grade performance.

Fig. 1. The SDN Architecture

Software-defined networking is an emerging architecture model that is well suited to the dynamic nature of today's applications [4]. By exposing the network control through APIs, the network services are abstracted and the network itself becomes programmable [5]. The method presented here leverages the capability to control and program the network through the SDN controller in order to interconnect Linux containers.

III. VIRTUALIZATION SOLUTIONS

Virtualization can be described generically as the separation of a service request from the underlying physical delivery of that service [6]. In computer virtualization, an additional layer called the hypervisor is typically added between the hardware and the operating system. The hypervisor layer is responsible both for sharing the hardware resources and for enforcing mandatory access control rules based on the available hardware resources. There are three types of virtualization: full virtualization, para-virtualization and operating-system-level (OS-level) virtualization. The following sub-sections present the concepts used by each virtualization model.

A. Full virtualization

Full virtualization is designed to provide a total abstraction of the underlying hardware, creating a complete virtual system for the guest operating system [7]. The hypervisor monitors the hardware resources and mediates between the guest operating systems and the underlying hardware. With this model, no modifications are needed in the guest OS, and each virtual machine is independent and unaware of other virtual machines running on the same physical hardware. One advantage of this technique is the decoupling of the software (OS) from the underlying hardware; its performance, however, is lower than that of bare hardware because of the hypervisor mediation. The most common full virtualization solutions are provided by VMware, Microsoft and Oracle.

B. Para-virtualization

The para-virtualization technique requires modifications to the guest operating systems running inside the virtual machines. This method still uses a hypervisor for shared access to the underlying hardware, but integrates virtualization-aware code into the operating system itself. As a result, the guest operating systems are aware that they are executing on top of a hypervisor, which allows them to interact more directly with the host system's hardware. This leads to higher performance and less virtualization overhead; the primary advantage of para-virtualization is the reduction of the performance penalty observed in full virtualization.

C. OS-level virtualization

OS-level virtualization does not require an additional hypervisor layer. Instead, the virtualization capabilities are part of the host operating system (OS), which virtualizes servers on top of itself. The overhead produced by hypervisor mediation is eliminated, enabling near-native performance.
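The isolation primitives behind OS-level virtualization can be observed directly on any modern Linux host. As a brief illustration (our addition, not part of the original test setup), the kernel namespaces of the current process and the available control-group subsystems can be listed from a shell:

ls -l /proc/self/ns     # one entry per namespace type (net, pid, mnt, uts, ipc, ...)
cat /proc/cgroups       # control-group subsystems used for per-group resource limits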

Kernel-based Virtual Machine (KVM) is an open-source hypervisor that provides enterprise-class performance for running Windows or Linux guest virtual machines on x86 hardware. A widely used alternative is OpenVZ [8]. Linux containers (LXC) represent a different method of OS-level virtualization: they allow multiple isolated Linux systems (containers) to run on a single host operating system. The host kernel provides process isolation and performs resource management. This means that, even though all the containers run under the same kernel, each container is a virtual environment with its own file system, processes, memory, devices, etc.

In the research presented here, we used Docker, an open-source platform for the management of Linux containers. Docker containers can be seen as extremely lightweight virtual machines that allow code to run in isolation from other containers. A Docker container boots extremely fast, making it a strong candidate for on-demand provisioning scenarios.

IV. SDN FOR LINUX CONTAINERS

Containers have been part of the IT environment for a long time, but their use instead of virtual machines is a novel approach [9]. Linux containers are lighter and provide better performance than classical virtual machines: a full virtual machine can take several minutes to provision, whereas a container can be instantiated and started in seconds. Because containers do not run on top of a hypervisor, the applications they host achieve performance close to bare metal. This paper presents a method to easily interconnect containers running on different virtual machines; it is also possible to isolate the interconnected containers into VLANs.

The test environment is composed of three virtual machines running Linux, each hosting multiple containers. Because only one physical machine was available, we used virtual machines to simulate a network topology with three nodes. All the virtual machines run on a Linux host using the Kernel-based Virtual Machine (KVM); the host OS is a 64-bit Ubuntu distribution (12.04 LTS). The virtual machines run a Linux OS based on the Ubuntu 14.04 distribution, each with 1 GB of RAM allocated. Because Docker containers are lightweight, each virtual machine hosts multiple Docker containers. Additionally, on each virtual machine we installed a virtual switch module; after creation, all the containers on a virtual machine are attached to its local switch. The switches are linked using GRE (Generic Routing Encapsulation) tunnels. For the virtual switch we have chosen the open-source software Open vSwitch. To simplify the test environment, we configured static IP addresses on each virtual machine, as sketched below. To enable the communication between containers located on different virtual machines, we created GRE tunnels between the three Open vSwitch instances.
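The static addressing itself is not detailed further in the paper; on an Ubuntu 14.04 guest it would typically be set in /etc/network/interfaces, along these lines (a sketch for VM1, with the gateway address assumed to be the KVM host):

# /etc/network/interfaces on VM1
auto eth0
iface eth0 inet static
    address 192.168.122.101
    netmask 255.255.255.0
    gateway 192.168.122.1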

The tunnel configuration is depicted in Fig. 2: each Open vSwitch instance is connected through GRE tunnels to its peers on the other virtual machines. On each virtual switch we create a bridge corresponding to the actual network interface, called "tun-br" (tunnel bridge). This bridge carries a TEP (Tunnel End Point) interface, which is assigned an IP address.

Fig. 2. The GRE tunnel configuration between the VMs

The bridges are created in the vSwitch from the command-line interface using the following commands (for simplicity we present only the configuration of one virtual machine; the configuration of the other two is done in a similar manner):

ovs-vsctl add-br tun-br
ovs-vsctl add-port tun-br tep0 -- set interface tep0 type=internal

To connect two virtual machines, a GRE tunnel has to be created between them. To hold the GRE ports, an additional bridge, called "sdn-br", is created on each virtual machine. The following commands exemplify the GRE tunnel creation on VM1 (the allocated IP addresses for the VMs are 192.168.122.101 for VM1, 192.168.122.102 for VM2, etc.; the tunnels on VM1 therefore point at its two peers):

ovs-vsctl add-br sdn-br
ovs-vsctl set bridge sdn-br stp_enable=true
ovs-vsctl add-port sdn-br gre1 -- set interface gre1 type=gre options:remote_ip=192.168.122.102
ovs-vsctl add-port sdn-br gre2 -- set interface gre2 type=gre options:remote_ip=192.168.122.103
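For completeness, a sketch of the mirror configuration on VM2 (our reconstruction, with VM3 assumed at 192.168.122.103 and an illustrative 192.168.200.0/24 subnet for the TEP addresses):

# On VM2 (192.168.122.102): same bridges, tunnels pointing at VM1 and VM3
ovs-vsctl add-br tun-br
ovs-vsctl add-port tun-br tep0 -- set interface tep0 type=internal
ovs-vsctl add-br sdn-br
ovs-vsctl set bridge sdn-br stp_enable=true
ovs-vsctl add-port sdn-br gre1 -- set interface gre1 type=gre options:remote_ip=192.168.122.101
ovs-vsctl add-port sdn-br gre2 -- set interface gre2 type=gre options:remote_ip=192.168.122.103
# Assign the TEP address and bring the interface up
ip addr add 192.168.200.102/24 dev tep0
ip link set tep0 up

Enabling STP on sdn-br matters here: the full mesh of GRE tunnels between the three switches forms a loop that would otherwise cause broadcast storms.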

After all the GRE tunnels are created, the next step is to create the Docker containers on each virtual machine. This step requires Docker to be already installed and configured on the machine; we skip the installation steps, as they are not the focus of this paper. The content and runtime configuration of a container are stored as a template, called a Docker image, in a repository. A Docker image can be downloaded from a public or private repository.

For our test scenario we have configured a private Docker repository and made it available on the network. The repository has been populated with several pre-configured Docker containers; to facilitate search and retrieval, each container has an associated unique ID. A Docker image can be retrieved from the repository using the pull command:

docker pull <image>

After being retrieved from the repository, the application packaged in the container can be executed using the run command:

docker run <image>

We can attach at any time to a running container using the attach command:

docker attach <container>

If the containers are started from scripts, they can be assigned to variables for easy handling. As an example:

C1=$(docker run -d -n=false -t -i ubuntu /bin/bash)
docker attach $C1
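When pulling from a private repository, the image name is prefixed with the registry address; a hypothetical session (the registry address and image name are our assumptions, not from the original setup) could look like:

docker pull 192.168.122.1:5000/ubuntu-app
C2=$(docker run -d -n=false -t -i 192.168.122.1:5000/ubuntu-app /bin/bash)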

When a container is started, the Docker daemon automatically assigns it MAC and IP addresses and attaches it to the default docker0 bridge. An example with two containers connected to the default bridge is shown in Fig. 3.
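The automatically assigned addresses can be read back with docker inspect, for example:

docker inspect --format '{{ .NetworkSettings.IPAddress }}' <container>

Addresses assigned on docker0 are reachable only within the local host, which is why the containers are re-attached manually in the next step.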

Fig. 3. Docker container configuration on VM1

Because we want the containers to communicate over the configured GRE tunnels, we have to attach them to the previously created bridge sdn-br, and we would also like to assign their MAC and IP addresses manually. For this purpose we developed a container-configuration script with the following syntax:

ovsattach.sh <bridge> <container> <ip>/<mask> <mac> [VLAN]

Usage example:

C1=$(docker run -d -n=false -t -i ubuntu /bin/bash)

sudo ./ovsattach.sh sdn-br $C1 1.0.0.3/24 00:00:00:00:00:03 20

To simplify the syntax, the container has been assigned to the variable C1. The same steps were performed on the other virtual machines to add their containers.
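The listing of ovsattach.sh is not included in the paper. A minimal sketch of what such a script might do, modeled on the ovs-docker utility distributed with Open vSwitch (the veth naming and the eth1 device inside the container are our assumptions), is given below:

#!/bin/bash
# ovsattach.sh <bridge> <container> <ip/mask> <mac> [vlan]  (hypothetical sketch)
BRIDGE=$1; CONTAINER=$2; IPADDR=$3; MAC=$4; VLAN=$5

# Find the container's network namespace through its init process PID
PID=$(docker inspect --format '{{ .State.Pid }}' "$CONTAINER")
mkdir -p /var/run/netns
ln -sf /proc/$PID/ns/net /var/run/netns/$PID

# Create a veth pair; the host end attaches to the OVS bridge,
# optionally tagged with the requested VLAN
ip link add veth-h-$PID type veth peer name veth-c-$PID
ovs-vsctl add-port "$BRIDGE" veth-h-$PID ${VLAN:+tag=$VLAN}
ip link set veth-h-$PID up

# Move the container end into the container's namespace and configure it
ip link set veth-c-$PID netns $PID
ip netns exec $PID ip link set veth-c-$PID name eth1
ip netns exec $PID ip link set eth1 address "$MAC"
ip netns exec $PID ip addr add "$IPADDR" dev eth1
ip netns exec $PID ip link set eth1 up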

The final network configuration is depicted in Fig. 4. It is composed of three virtual machines, each having an Open vSwitch instance and up to three Docker containers.

Fig. 4. Full network configuration
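At this point, connectivity between containers on different virtual machines can be verified from inside any container, for instance (assuming a peer container on another VM was attached with the address 1.0.0.4):

docker attach $C1
# from the shell inside the container:
ping -c 3 1.0.0.4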

After all the virtual switches were configured, we installed an SDN controller on the host machine. We have chosen the OpenDaylight SDN controller because it is a mature product providing enterprise-grade performance. All the virtual switches running inside the virtual machines were registered with this controller. After registration the switches became controller-aware, and all the configurations previously done directly on the switches through the command-line interface can now be performed through the GUI provided by the OpenDaylight controller. The registration of a virtual switch with the SDN controller is done from the command line as follows:

ovs-vsctl set-controller sdn-br tcp:192.168.122.1:6633

In this example we assume that the controller runs on the host machine with the IP address 192.168.122.1 and listens on port 6633. After the switches are registered with the controller, all packets received by a switch that do not match any entry in its flow table are sent to the controller, which takes the appropriate decision: it can insert a rule into the flow table of the switch or drop the packet.

Besides the GUI, the controller exposes a set of APIs that can be used to configure the flows between containers automatically. This allows SDN applications to dynamically provision containers and configure the data flows in response to user requests. Using scripts, we can now instantiate Docker containers on any of the available virtual machines and, at the same time, use the APIs provided by the SDN controller to configure the underlying network to interconnect the containers. This creates the premises for the dynamic provisioning of user services [10].
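The entries that the controller installs are ordinary OpenFlow rules. For illustration only (this is not the controller's actual output, and the port number is assumed), an equivalent rule directing IP traffic for container C1 toward its switch port could be added and inspected by hand:

# Illustrative flow: forward IP traffic destined to C1 (1.0.0.3) out port 2
ovs-ofctl add-flow sdn-br "priority=100,ip,nw_dst=1.0.0.3,actions=output:2"
ovs-ofctl dump-flows sdn-br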

V. CONCLUSIONS

In certain resource-allocation scenarios, a good alternative to classical virtual machines is represented by containers, which use specific capabilities already available in the operating system: per-process reservation of processing and storage ("control groups"), sharing of binary files and libraries, etc. We have extended this concept of grouping to services, which can be configured to run in isolation inside a container instantiated on demand. Due to the lightweight nature of Linux containers, the service provider can reach a higher density of containers with the same resources than with other virtualization solutions.

Our research covered the SDN-based control of the data flow in bridges, which can be done remotely and even granted to third parties (service customers). The control of the bridges was performed via an Open vSwitch database that can be accessed in the Cloud. Recently, OpenStack embedded Docker control, proving the real interest in and good prospects of Linux containers, e.g. their remote instantiation in scalable SDN.

ACKNOWLEDGMENT

This paper is supported by the Sectoral Operational Programme Human Resources Development (SOP HRD), ID134378, financed by the European Social Fund and by the Romanian Government.

REFERENCES

[1] Nadeau T., Gray K., "SDN – Software Defined Networks", O'Reilly, 2013, ISBN 978-1-449-34230-2
[2] Nygren A., Pfaff B., Lantz B., Heller B., "OpenFlow Switch Specification", version 1.3.3, ONF, Open Networking Foundation, September 27, 2013
[3] Jain R., Paul S., "Network virtualization and software defined networking for cloud computing: a survey", IEEE Communications Magazine, vol. 51, no. 11, pp. 24-31
[4] Bakshi K., "Considerations for Software Defined Networking (SDN): Approaches and use cases", IEEE Aerospace Conference, 2013, pp. 1-9
[5] ONF, Open Networking Foundation, "Software-Defined Networking: The New Norm for Networks", white paper, April 13, 2012. Retrieved June 2014
[6] Buyya R., Yeo C.S., Venugopal S., Broberg J., Brandic I., "Cloud Computing and emerging IT platforms: Vision, hype, and reality for delivering computing as the 5th utility", Future Generation Computer Systems, Volume 25, June 2009
[7] VMware white paper, "Understanding Full Virtualization, Paravirtualization, and Hardware Assist", www.vmware.com/resources/techresources/1008, Retrieved May 2014
[8] Kolyshkin K., "Virtualization in Linux", OpenVZ Technical Report, September 2006, http://download.openvz.org/doc/openvz-intro.pdf, Retrieved May 2014
[9] Bardac M., Deaconescu R., Florea A.M., "Scaling Peer-to-Peer Testing with Linux Containers", The 9th RoEduNet Conference, Sibiu, Romania, June 2010
[10] Xavier M., Neves M., Rossi F., Ferreto T., Lange T., De Rose C., "Performance evaluation of container-based virtualization for high performance computing environments", Parallel, Distributed and Network-Based Processing (PDP), 2013 21st Euromicro International Conference, 2013, pp. 233-240