Live Migration for Multiple Correlated Virtual Machines in Cloud-based Data Centers

Gang Sun1, Dan Liao1,2, Dongcheng Zhao1, Zichuan Xu3, Hongfang Yu1
1 Key Lab of Optical Fiber Sensing and Communications (Ministry of Education), University of Electronic Science and Technology of China, China
2 Institute of Electronic and Information Engineering in Dongguan, UESTC, China
3 Research School of Computer Science, The Australian National University, Canberra, Australia
Abstract: With the development of cloud computing, virtual machine migration is emerging as a promising technique for saving energy, enhancing resource utilization, and guaranteeing Quality of Service (QoS) in cloud datacenters. Most existing studies on virtual machine migration, however, address the migration of a single virtual machine. Although there is some research on migrating multiple virtual machines, it usually does not consider the correlations among these virtual machines. In practice, in order to save energy and maintain system performance, cloud providers often need to migrate multiple correlated virtual machines, or even an entire virtual datacenter (VDC) request. In this paper, we focus on the efficient online live migration of multiple correlated VMs in VDC requests, with the goal of optimizing migration performance. To solve this problem, we propose an efficient VDC migration algorithm (VDC-M). We use the US-wide NSF network as the substrate network to conduct extensive simulation experiments. Simulation results show that the performance of the proposed algorithm is promising in terms of the total VDC remapping cost, the blocking ratio, the average migration time and the average downtime.

Key words: Virtual Machines; Migration; Datacenter; Cloud Computing
1 INTRODUCTION

With the development of cloud computing, datacenter virtualization technologies are attracting more and more attention from both industry and academia. Cloud providers use datacenter virtualization to divide physical datacenter resources into virtual datacenters (VDCs) so that the physical resources can be shared, with each VDC providing services to its users. A VDC is usually a collection of virtual resources (e.g., virtual machines (VMs), virtual switches, virtual routers and virtual links). Meanwhile, a VDC request usually carries Service Level Agreements (SLAs) with the service provider, so the SLAs (e.g., transmission delays, resource demands, etc.) must be met while mapping the VDC request onto the physical datacenter. However, as user demands increase, a single datacenter may be unable to provide sufficient resources. The concept of federating multiple datacenters to provision a VDC request has therefore been proposed, and cloud service providers now provision virtualized resources to users across multiple datacenters [1-3]. Furthermore, with the growing consumption of power and physical resources (e.g., CPU, storage, memory and bandwidth) in datacenters, saving energy and improving resource utilization have become hot topics in both industry and academia. In addition, to avoid SLA violations caused by a lack of network resources, researchers have proposed various VM migration methods for single or multiple datacenters [4-9]. These migration methods allow operators to choose an appropriate migration strategy to save energy, improve resource utilization and avoid SLA violations.

To reduce the migration time of virtual machines and accelerate the migration process, the authors of [6] modeled the multiple-VM migration problem and proposed an efficient scheduling method for migrating multiple virtual machines in the cloud. Although this scheduling method can migrate multiple virtual machines, it does not consider the correlations among them. In practice, however, in order to save energy and maintain system performance, cloud providers usually need to migrate multiple correlated virtual machines or an entire VDC request. For example, in [7, 8], Walter Cerroni et al. studied the problem of migrating a group of cooperating VMs, proposed serial and parallel strategies for migrating multiple VMs, and compared the migration time and downtime of the two strategies; however, they did not study resource allocation or provisioning for VM migration. In [9], the authors also explicitly considered the correlations among multiple virtual machines and proposed an algorithm to remap virtual machines, compute migration paths and schedule migration bandwidth across multiple datacenters; however, the main goal of that algorithm is to compute migration paths and schedule migration bandwidth rather than to remap the VMs. Therefore, to optimize the migration performance of multiple correlated virtual machines or a VDC migration request, a new migration algorithm is needed.

In this paper, we study the problem of migrating multiple correlated virtual machines across multiple datacenters. Since multiple correlated virtual machines belong to the same VDC request, we treat the whole VDC request, or a subset of its virtual machines, as a single migration request. We propose a live online migration algorithm, VDC-M, to migrate the VDC migration request among multiple datacenters. Finally, we use the US-wide NSF network as the substrate network to conduct extensive simulations for evaluating the performance of our proposed algorithm.
Simulation results show that the performance of the proposed algorithm is promising in terms of the total VDC remapping cost, the blocking ratio, the average migration time and the average downtime.

The remainder of the paper is organized as follows. Section 2 discusses related work. Section 3 formulates the problem studied in this work. Section 4 presents our heuristic algorithm, and Section 5 evaluates and analyzes the simulation results. Section 6 concludes the paper.
2 RELATED WORK

2.1 Single VM Migration

In order to reduce the probability of violating SLAs, cloud providers may need to guarantee quality of service (QoS) by migrating virtual machines; they can also save energy and improve resource utilization by migrating and consolidating virtual machines. There is a body of research on the offline and online single virtual machine migration problem [10-20]. Because of survivability requirements, researchers have shifted their attention to live migration, and several existing studies propose strategies to minimize the migration time and the downtime. For example, the authors of [10-12] proposed and implemented online or live migration strategies that minimize the virtual machine downtime. In [13], the authors proposed a performance analysis model of the single virtual machine migration process to analyze the migration time and the downtime of virtual machines. In [14], the authors studied how much bandwidth should be provisioned to guarantee the total migration time and downtime of a live VM migration. Beyond optimizing and measuring the migration time and the downtime, researchers have also proposed migration algorithms with other goals. In [15], the authors proposed an energy-efficient virtual machine scheduling algorithm that reduces the total energy consumed by the cloud and also supports dynamic voltage and frequency scaling. In [16], to reduce the overall cost and energy consumption, the authors proposed an approach that performs defragmentation with low energy cost and few SLA violations. In [17], to maximize energy efficiency and the utilization of cloud resources while guaranteeing service level agreements, the authors proposed a virtual machine migration and consolidation algorithm. In [18], to guarantee QoS and reduce expenses, the authors proposed an autonomic management model that efficiently schedules cloud resources. In [19], to enhance the survivability of virtual machines, the authors proposed a virtual machine backup and migration algorithm: when the original VM fails, the application/task is migrated to the backup VM and continues to run. In [20], the authors proposed a virtual machine migration algorithm to reduce the network load caused by the migration.
These studies usually target the single virtual machine migration problem, and most address a single datacenter; they are therefore not suitable for migrating multiple VMs among multiple datacenters.

2.2 Multiple VMs Migration

With the development of virtual machine migration technologies, some researchers have recently tried to solve the multiple-VM migration problem. In [6], the authors analyzed and modeled the multiple virtual machines migration problem and proposed a scheduling method that reduces the migration time of the virtual machines and thus accelerates the migration process. In [21], the authors migrated applications from one location to another; since an application is usually a set of multiple correlated virtual machines, they migrated each application as a whole and measured the migration time and downtime. In [22], the authors conducted experiments with Xen 3.3.0 and measured the migration time and other related parameters when a job is migrated from one sub-cluster to another. In [23], the authors proposed a method to manage virtual machine cluster migration: when the amount of available resources of the source physical machines hosting the virtual machines approaches a threshold, the virtual machine cluster is migrated to new physical machines. In [24], to avoid potential SLA violations, the authors proposed iAware, a lightweight interference-aware live migration strategy for virtual machines. Although the algorithms proposed in [6, 21-24] are suitable for multiple-VM or VM-cluster migration problems, they do not consider the correlations among the virtual machines. In practice, cloud providers usually need to migrate multiple correlated virtual machines or an entire VDC request to achieve their optimization goals. In [7, 8], the authors studied the problem of migrating groups of cooperating virtual machines and proposed serial and parallel migration strategies for multiple correlated VMs, but they did not study resource allocation or provisioning for multiple VMs during the migration process. The research in [9] addressed migrating multiple correlated virtual machines across multiple datacenters; although the authors proposed an algorithm to remap virtual machines, find migration paths and schedule migration bandwidth, the main goal of that algorithm is to find migration paths and schedule migration bandwidth rather than to remap the virtual machines. In short, there are few migration algorithms for VDCs or multiple correlated VMs, and especially few that remap the VDC migration request. It is therefore essential to propose the VDC-M algorithm, which addresses the online migration of a VDC, or of multiple correlated virtual machines, among multiple datacenters.
3 PROBLEM STATEMENT AND FORMULATION

3.1 Problem Statement

We study the problem of migrating multiple correlated virtual machines in multiple datacenters. We consider a scenario in which multiple correlated virtual machines need to be migrated from their source servers to new destination servers. Multiple correlated virtual machines form a set of virtual machines connected by virtual links. Since they belong to one VDC request, these virtual machines constitute either the whole VDC request or a subset of its virtual machines; in this work, we therefore refer to a set of multiple correlated virtual machines as a VDC migration request. Specifically, given a physical infrastructure (interchangeably called the substrate network in this paper) composed of multiple datacenters at different locations interconnected by a core network, a VDC migration request to be migrated, and the original mapping of the VDC request, the problem is how to efficiently migrate the correlated virtual machines of the VDC request to the destination physical servers such that the remapping cost, the blocking ratio, the total migration time and the total downtime are minimized, while all of the migration constraints are satisfied.
3.2 VDC Migration Request

We model a VDC migration request as an undirected weighted graph $G_V = (N_V, E_V)$, where $N_V = \{VM_1, VM_2, \ldots, VM_i, \ldots, VM_n\}$ denotes the set of virtual machines, with $n$ the number of virtual machines, and $E_V = \{e_1, e_2, \ldots, e_{|E_V|}\}$ denotes the set of virtual links, with $|E_V|$ the number of virtual links. We define $MC = \{C_N, C_E, C_D, V_N, B_N, M_N^O, M_E^O\}$ as the migration constraints, where $C_N = \{\varepsilon(VM_1), \varepsilon(VM_2), \ldots, \varepsilon(VM_n)\}$ represents the server resource demands; $C_E = \{x_1, x_2, \ldots, x_{|E_V|}\}$ denotes the bandwidth requirements, with $x_i$ the bandwidth demand of virtual link $e_i \in E_V$; $V_N = \{V_1, V_2, \ldots, V_n\}$ represents the memory sizes of all VMs; $B_N = \{B_1, B_2, \ldots, B_n\}$ represents the migration bandwidth requests of all VMs; $M_N^O = \{M^O(VM_1), M^O(VM_2), \ldots, M^O(VM_n)\}$ denotes the original mapping records of the VMs (where $M^O(VM_i)$ represents the original server hosting virtual machine $VM_i$); and $M_E^O$ represents the original mapping records of the virtual links. We use $C_D$ to denote the maximum transmission delay constraints of the substrate paths hosting the virtual links. An example of a migration request is shown in Fig. 1. The numbers in the rectangles next to the virtual machines represent the server resource requirement, the memory size and the migration bandwidth requirement, and the numbers next to the virtual links represent the link resource demand and the tolerable transmission delay.

Fig. 1 A VDC migration request (figure omitted; five VMs, VM1-VM5, hosted in datacenters A, B and C, with per-VM labels such as 1/0.8/1 and per-link labels such as 0.6/3)

3.3 Substrate Network

Similarly, we model the substrate network as an undirected weighted graph $G_S = (N_S, R_S, E_S)$, where $N_S$ and $R_S$ represent the set of physical servers and the set of physical routers and switches, and $E_S$ represents the set of substrate links. We define $SC = (C_E, C_N, C_R)$ as the substrate network resource constraints, where $C_E$ represents the attributes of the substrate links, which include the bandwidth capacity $c(e_s)$, the delay $d(e_s)$ and the per-unit link resource cost $p(e_s)$; $C_N$ represents the attributes of the substrate servers, which include the per-unit server resource cost $p(n_s)$ and the server resource capacity $c(n_s)$; and $C_R$ denotes the attributes of the substrate routers and switches. An example of a substrate network is shown in Fig. 2.
Fig. 2 An example of a substrate network (figure omitted; datacenters A-D interconnected by a core network)
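To make the model concrete, the following sketch shows one way to encode $G_V$ and $G_S$ as annotated graphs. It is our illustration rather than the paper's implementation: the attribute names (cpu, mem, mig_bw, cap, delay, price) are assumptions, and the $G_V$ values loosely mirror the labels in Fig. 1.

# Sketch (ours, not the paper's code): a VDC migration request G_V and a
# substrate network G_S as annotated undirected graphs. Attribute names
# are our own; the G_V values loosely mirror the labels in Fig. 1.
import networkx as nx

# G_V: each VM carries (server demand eps, memory size V_i, migration bandwidth B_i)
G_V = nx.Graph()
G_V.add_node("VM1", cpu=1.0, mem=0.8, mig_bw=1.0)
G_V.add_node("VM2", cpu=2.0, mem=0.9, mig_bw=1.0)
G_V.add_node("VM3", cpu=2.0, mem=1.0, mig_bw=1.0)
# virtual links carry (bandwidth demand x_i, tolerable delay)
G_V.add_edge("VM1", "VM2", bw=0.6, delay=3)
G_V.add_edge("VM2", "VM3", bw=0.5, delay=2)

# G_S: servers carry capacity c(n_s) and unit cost p(n_s); substrate links
# carry capacity c(e_s), delay d(e_s) and unit cost p(e_s)
G_S = nx.Graph()
G_S.add_node("s1", cap=10.0, price=1.0)
G_S.add_node("s2", cap=10.0, price=1.0)
G_S.add_node("s3", cap=10.0, price=1.0)
G_S.add_edge("s1", "s2", cap=40.0, delay=1, price=1.0)
G_S.add_edge("s2", "s3", cap=40.0, delay=1, price=1.0)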
3.4 VDC Migration

A VDC migration can be divided into two phases. The first phase reallocates resources for the VDC migration request, i.e., remaps the VDC migration request. The second phase finds migration paths, allocates bandwidth resources on these paths, and then migrates each virtual machine from its source server to its destination server. In the remapping process, we need to allocate a different substrate server and the required computing resources to each virtual machine of the VDC migration request, as well as map each virtual link of the request. The VDC remapping procedure can be formulated as follows.

(1) Virtual machine remapping:

$$M_N^R : (N_V, C_N) \to (N_{S1}, C_{N1}),$$
$$M^R(VM_i) \in N_{S1}, \quad \forall VM_i \in N_V,$$

$$R(M^R(VM_i)) \ge \varepsilon(VM_i), \quad \forall VM_i \in N_V,$$

where $N_{S1} \subset N_S$, $C_{N1}$ denotes the server resources allocated to the VDC migration request, and $M_N^R = \{M^R(VM_1), M^R(VM_2), \ldots, M^R(VM_n)\}$ represents the remapping records of the virtual machines in the VDC migration request. $M^R(VM_i)$ denotes the new server hosting virtual machine $VM_i$, and $R(M^R(VM_i))$ denotes the available resources of the new server. In the virtual machine remapping process, the VMs must be assigned to different physical servers.

(2) VDC link remapping:

$$M_E^R : (E_V, C_E) \to (P^1, C_{E1}),$$

$$M^R(e_i) = p_{e_i}, \quad \forall e_i \in E_V,\ p_{e_i} \in P^1,$$

$$B(p_{e_i}) = \min_{e_s \in p_{e_i}} b(e_s), \quad \forall p_{e_i} \in P^1,$$

$$B(p_{e_i}) \ge x_i, \quad \forall p_{e_i} \in P^1,$$

$$D(p_{e_i}) = \sum_{e_s \in p_{e_i}} d(e_s), \quad \forall p_{e_i} \in P^1,$$

$$D(p_{e_i}) \le C_D, \quad \forall p_{e_i} \in P^1,$$

where $M_E^R = \{M^R(e_1), M^R(e_2), \ldots, M^R(e_{|E_V|})\}$ represents the remapping records of the links in the VDC migration request, and $P^1 \subset P$, where $P$ is the set of end-to-end substrate paths, each of which is a subset of $E_S$. $C_{E1}$ represents the allocated link resources, $M^R(e_i)$ denotes the new substrate path hosting virtual link $e_i$, $B(p_{e_i})$ denotes the available bandwidth of substrate path $p_{e_i}$, and $D(p_{e_i})$ represents its delay.

In the migrating process, we compute a migration path and allocate migration bandwidth to move each virtual machine from its source server to its destination server. This process can be denoted as:

$$F : (M_N^R, M_N^O, B_N) \to (P^2),$$

$$B(p_{VM_i}) \ge B_i, \quad \forall p_{VM_i} \in P^2,$$

where $P^2$ represents the set of migration paths of all virtual machines, and $p_{VM_i}$ represents the migration path of virtual machine $VM_i$. After computing the migration path of each virtual machine, we migrate all virtual machines of the VDC migration request. Figure 3 shows an example of remapping and migrating a VDC migration request: the request is originally mapped onto datacenters A and C (shown in black dashed boxes) and remapped onto datacenters B and D (shown in red solid boxes), and the migration paths of VM1, VM2 and VM3 are denoted $p_{VM_1}$, $p_{VM_2}$ and $p_{VM_3}$, respectively. Note that this example does not fully show the link mappings of the VDC request.

Fig. 3 Example of a VDC migration (figure omitted; VM1-VM3 are moved from datacenters A and C to datacenters B and D along migration paths $p_{VM_1}$, $p_{VM_2}$, $p_{VM_3}$)
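As a sanity check of the constraints above, the sketch below (our illustration; vm_map, link_map and the attribute names are hypothetical and follow the earlier sketch) verifies that a candidate remapping satisfies the server-capacity, bandwidth and delay constraints of this section.

# Sketch (ours): check a candidate remapping against the constraints of
# Section 3.4. vm_map maps each VM to a substrate server; link_map maps
# each virtual edge (u, v) to a substrate path given as a node list.
def remapping_feasible(G_V, G_S, vm_map, link_map, delay_cap):
    # VM remapping: each VM on a distinct server with enough capacity,
    # i.e., R(M^R(VM_i)) >= eps(VM_i)
    if len(set(vm_map.values())) < len(vm_map):
        return False
    for vm, ns in vm_map.items():
        if G_S.nodes[ns]["cap"] < G_V.nodes[vm]["cpu"]:
            return False
    # Link remapping: B(p_ei) >= x_i and D(p_ei) <= C_D for every path
    for (u, v), path in link_map.items():
        hops = list(zip(path, path[1:]))   # consecutive substrate links
        if not hops:                       # endpoints co-located: skip
            continue
        if min(G_S.edges[e]["cap"] for e in hops) < G_V.edges[u, v]["bw"]:
            return False
        if sum(G_S.edges[e]["delay"] for e in hops) > delay_cap:
            return False
    return True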
Next, according to the migration bandwidth requirements and the original memory sizes, we can calculate the migration time and the downtime.

3.5 VDC Migration Time

We use the pre-copy strategy to migrate a single virtual machine and the parallel migration strategy for multiple virtual machines. The migration time and downtime of a single virtual machine are calculated as in Ref. [8]. The migration time of the i-th VM (i = 1, 2, ..., n) is:

$$T_{i,mig} = \sum_{j=1}^{n_i+1} T_{i,j} = \frac{V_i}{B_i} \cdot \frac{1 - r_i^{n_i+1}}{1 - r_i}, \qquad (1)$$

$$n_i = \min\{\lceil \log_{r_i}(V_{th}/V_i) \rceil,\ n_{max}\}, \qquad (2)$$

where $n_i$ is the actual number of pre-copy iterations and the migration stops at the $(n_i+1)$-th round; after $n_i$ iterations, the virtual machine is shut down. $V_i$ denotes the original memory size of the i-th VM, $V_{th}$ is the threshold for stopping the iteration, $n_{max}$ is the maximum number of iterations, and $B_i$ denotes the transmission rate. We define the ratio of the dirtying rate to the transmission rate as $r_i = PD/B_i$, where $D$ and $P$ are the memory page dirtying rate and the memory page size, respectively. The start time of the downtime, $T^{start}_{i,down}$, can be calculated as:

$$T^{start}_{i,down} = \sum_{j=1}^{n_i} T_{i,j} = \frac{V_i}{B_i} \cdot \frac{1 - r_i^{n_i}}{1 - r_i}. \qquad (3)$$

The end time of the downtime, $T^{end}_{i,down}$, is the sum of the migration time and the intrinsic time $T_{res}$:

$$T^{end}_{i,down} = \sum_{j=1}^{n_i+1} T_{i,j} + T_{res} = \frac{V_i}{B_i} \cdot \frac{1 - r_i^{n_i+1}}{1 - r_i} + T_{res}. \qquad (4)$$

Therefore, the downtime of the i-th virtual machine is:

$$T_{i,down} = \frac{V_i}{B_i} r_i^{n_i} + T_{res}. \qquad (5)$$
In this paper, we focus on migrating a VDC migration request consisting of multiple correlated virtual machines. To improve efficiency, we adopt the parallel migration strategy to migrate multiple VMs; during the migration process, each virtual machine has its own independent migration bandwidth. Therefore, the migration time of the whole VDC migration request is the migration time of the virtual machine that completes its migration last:
$$T_{VDC,mig} = \max\{T_{i,mig}\} = \max\left\{\frac{V_i}{B_i} \cdot \frac{1 - r_i^{n_i+1}}{1 - r_i}\right\},\ i = 1, 2, \ldots, n. \qquad (6)$$

The start time of the downtime of the VDC migration request, $T^{start}_{VDC,down}$, is the shutdown time of the virtual machine that is shut down first:

$$T^{start}_{VDC,down} = \min\{T^{start}_{i,down}\} = \min\left\{\frac{V_i}{B_i} \cdot \frac{1 - r_i^{n_i}}{1 - r_i}\right\},\ i = 1, 2, \ldots, n. \qquad (7)$$

The end time of the downtime of the VDC migration request, $T^{end}_{VDC,down}$, is the time at which the last virtual machine completes its migration:

$$T^{end}_{VDC,down} = \max\{T^{end}_{i,down}\} = \max\left\{\frac{V_i}{B_i} \cdot \frac{1 - r_i^{n_i+1}}{1 - r_i} + T_{res}\right\},\ i = 1, 2, \ldots, n. \qquad (8)$$

Therefore, the downtime of the VDC migration request can be calculated as in Equation (9):

$$T_{VDC,down} = T^{end}_{VDC,down} - T^{start}_{VDC,down} = \max\left\{\frac{V_i}{B_i} \cdot \frac{1 - r_i^{n_i+1}}{1 - r_i} + T_{res}\right\} - \min\left\{\frac{V_i}{B_i} \cdot \frac{1 - r_i^{n_i}}{1 - r_i}\right\},\ i = 1, 2, \ldots, n. \qquad (9)$$
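Equations (1)-(9) reduce to a few lines of arithmetic. The following sketch (ours, with illustrative defaults for $V_{th}$, $n_{max}$ and $T_{res}$ that are not from the paper) computes the per-VM times of Equations (1)-(5) and the VDC-level times of Equations (6)-(9), assuming $r_i < 1$.

# Sketch (ours): the pre-copy timing model of Eqs. (1)-(9). Values for
# V_th, n_max and T_res are illustrative defaults, not from the paper.
import math

def vm_times(V, B, r, V_th=0.01, n_max=30, T_res=0.02):
    """Eqs. (1)-(5) for one VM: V memory size, B migration bandwidth,
    r = P*D/B the dirtying/transfer ratio (assumed < 1)."""
    n = min(math.ceil(math.log(V_th / V, r)), n_max)      # Eq. (2)
    t_mig = (V / B) * (1 - r ** (n + 1)) / (1 - r)        # Eq. (1)
    t_down_start = (V / B) * (1 - r ** n) / (1 - r)       # Eq. (3)
    t_down_end = t_mig + T_res                            # Eq. (4)
    t_down = (V / B) * r ** n + T_res                     # Eq. (5)
    return t_mig, t_down_start, t_down_end, t_down

def vdc_times(vms):
    """Eqs. (6)-(9) for a VDC migrated in parallel; vms = [(V, B, r), ...]."""
    per_vm = [vm_times(V, B, r) for (V, B, r) in vms]
    t_vdc_mig = max(t[0] for t in per_vm)                 # Eq. (6)
    t_start = min(t[1] for t in per_vm)                   # Eq. (7)
    t_end = max(t[2] for t in per_vm)                     # Eq. (8)
    return t_vdc_mig, t_end - t_start                     # Eq. (9)

For example, vdc_times([(1.0, 1.0, 0.3), (0.8, 1.0, 0.3)]) returns the migration time and downtime for a two-VM request under these assumed parameters.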
4 HEURISTIC ALGORITHM

We propose the VDC migration (VDC-M) algorithm for the online migration of multiple VDC requests that arrive dynamically. The algorithm first remaps the VDC migration request, then finds migration paths and allocates bandwidth resources to migrate the virtual machines from the source servers to the destination servers. Without loss of generality, we assume that the VDC requests arrive according to a Poisson process. In the VDC-M algorithm, all arrived VDC migration requests are first buffered in a queue, denoted ArrivedVDC, and we define ExpiredVDC as the set of expired VDC migration requests. Each VDC migration request in the ArrivedVDC queue is remapped and migrated one by one. Some requests in ArrivedVDC may be blocked due to a lack of resources; we define VDCblo as the set of blocked migration requests. The main steps of the VDC-M algorithm are remapping the VDC, computing migration paths and allocating bandwidth on the migration paths.
When a VDC migration request arrives, the VDC-M algorithm first calls the VDC remapping (RM) procedure to remap the request, then calls the VDC routing and bandwidth allocation (RBA) procedure to compute migration paths, and finally allocates bandwidth on the migration paths and migrates the VDC. The pseudo-code of the proposed migration algorithm is shown in Algorithm 1.

Algorithm 1: VDC Migration (VDC-M)
Input: 1. Substrate network GS = (NS, RS, ES) and resource constraints SC = (CE, CN, CR); 2. VDC request queue ArrivedVDC.
Output: Migration cost M_cost^total and the set of blocked VDCs, VDCblo.
1: Initialization: let M_cost^total = 0 and VDCblo = ∅;
2: while ArrivedVDC ≠ ∅ do
3:   Update the set of expired VDC migration requests ExpiredVDC, release the occupied resources according to ExpiredVDC, then let ExpiredVDC = ∅;
4:   Call the RM procedure to remap the first VDC migration request VDC1 in ArrivedVDC;
5:   if a remapping solution M^R is found for VDC1 then
6:     Call the RBA procedure to migrate VDC1;
7:     if VDC1 is migrated successfully then
8:       M_cost^total = M_cost^total + M_C^R; update the substrate network;
9:     else
10:      VDCblo = VDCblo ∪ {VDC1};
11:    end if
12:  end if
13:  ArrivedVDC = ArrivedVDC \ {VDC1};
14: end while
15: return M_cost^total, VDCblo.
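A compact rendering of Algorithm 1's control flow might look as follows. This is our sketch, and rm, rba and release_expired are hypothetical callables standing in for the procedures described in this section, not the authors' code.

# Sketch (ours) of Algorithm 1: process arrived VDC migration requests one
# by one, remapping with RM and migrating with RBA. `rm`, `rba` and
# `release_expired` are hypothetical callables, not the authors' code.
from collections import deque

def vdc_m(substrate, arrived, rm, rba, release_expired=lambda net: None):
    total_cost, blocked = 0.0, []        # line 1: M_cost^total = 0, VDCblo = {}
    queue = deque(arrived)
    while queue:                         # line 2
        release_expired(substrate)       # line 3: free resources of expired VDCs
        vdc = queue.popleft()            # take VDC1 (removed from queue, line 13)
        solution = rm(substrate, vdc)    # line 4: remapping solution M^R, or None
        if solution is not None:         # line 5
            if rba(substrate, vdc, solution):   # line 6: compute paths and migrate
                total_cost += solution["cost"]  # line 8: add M_C^R to the total
            else:
                blocked.append(vdc)      # line 10: migration blocked
    return total_cost, blocked           # line 15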
The RM procedure, shown as Procedure 1, remaps a VDC migration request: it finds the destination server for each VM and reallocates resources for each VM and each virtual link before the migration. Procedure 1 generates multiple candidate remapping solutions, from which the solution with the minimum cost is selected as the final remapping solution, denoted M^R. The solution M^R consists of the remapping cost M_C^R, the remapping records of the virtual machines M_N^R, and the remapping records of the links M_E^R. In the RM procedure, we define Con(VMi) = |Adj(VMi)| as the connectivity degree of virtual machine VMi in the VDC request, i.e., the number of virtual machines connected to VMi. We first mark all servers used in the original mapping record M_N^O as unavailable.

Procedure 1: VDC Remapping (RM)
Input: 1. Substrate network GS = (NS, RS, ES) and resource constraints SC = (CE, CN, CR); 2. VDC request GV = (NV, EV) and migration constraints MC = (CN, CE, CD, VN, BN, M_N^O, M_E^O).
Output: Remapping solution M^R = {M_C^R, M_N^R, M_E^R}.
1: Store all available physical servers in UMN_S;
2: Sort the VMs in NV in descending order of connectivity degree Con(VMi) = |Adj(VMi)| and store them in MVM_V; take the first VM, VM1, from MVM_V;
3: for each ns ∈ UMN_S do
4:   Let M_C^R = 0, M_N^R = ∅, M_E^R = ∅; find the minimal-cost path p(M^O(VM1) → ns); calculate the server resource cost Cost(VM1 → ns) and the migrating cost MCost(p(M^O(VM1) → ns)) according to Equations (11) and (14); U_S = UMN_S \ {ns}, M^R(VM1) = ns, N = MVM_V \ {VM1};
5:   while N ≠ ∅ do
6:     Select a VMi ∈ N that is adjacent to the already remapped VMs;
7:     for each nk ∈ U_S do
8:       Find the minimal-cost path p(M^O(VMi) → nk); calculate and record MCost(p(M^O(VMi) → nk)) and Cost(VMi → nk) according to Equations (14) and (11); R(VMi → nk) = ∅;
9:       for each ei ∈ Ei do
10:        Find the minimal-cost path p_ei; compute and record Cost(p_ei) according to Equation (12); R(VMi → nk) = R(VMi → nk) ∪ {p_ei};
11:      end for
12:      Calculate and record MMCost(VMi → nk) according to Equation (13);
13:    end for
14:    Find the remapping solution for VMi with the minimal remapping-and-migrating cost MMCost(VMi → nk); N = N \ {VMi}, M_N^R = M_N^R ∪ {M^R(VMi) = nk}, M_E^R = M_E^R ∪ R(VMi → nk), M_C^R = M_C^R + MMCost(VMi → nk), U_S = U_S \ {nk};
15:  end while
16:  M_p = M_p ∪ {M_C^R, M_N^R, M_E^R};
17: end for
18: Select the remapping solution M^R with the minimal cost from the set of candidate remapping solutions M_p;
19: return M^R.

The remapping cost of each VM is computed according to Equation (10):

$$VMCost(VM_i \to n_k) = Cost(VM_i \to n_k) + \sum_{e_i \in E_i} Cost(p_{e_i}), \qquad (10)$$

where $n_k$ denotes a substrate server, $e_i$ represents a virtual link connecting $VM_i$ and an already remapped $VM_j$, and $E_i \subset E_V$ represents the set of all virtual links that connect $VM_i$ with remapped VMs. The server resource cost $Cost(VM_i \to n_k)$ and the bandwidth cost $Cost(p_{e_i})$ are defined as follows:

$$Cost(VM_i \to n_k) = p(n_k)\,\varepsilon(VM_i), \qquad (11)$$

$$Cost(p_{e_i}) = \sum_{e_s \in p_{e_i}} p(e_s)\,x_i, \qquad (12)$$

where $p_{e_i}$ is the substrate path connecting $n_k$ and $n_s$ (the substrate server hosting virtual machine $VM_j$). The remapping-and-migrating cost of each VM is computed according to Equation (13):

$$MMCost(VM_i \to n_k) = VMCost(VM_i \to n_k) + a \cdot MCost(p(M^O(VM_i) \to n_k)), \qquad (13)$$

where $a$ balances the weight, or importance, of $VMCost(VM_i \to n_k)$ against $MCost(p(M^O(VM_i) \to n_k))$, and $MCost(p(M^O(VM_i) \to n_k))$ denotes the cost of migrating virtual machine $VM_i$:

$$MCost(p(M^O(VM_i) \to n_k)) = \sum_{e_s \in p(M^O(VM_i),\,n_k)} p(e_s)\,B_i, \qquad (14)$$

where $p(M^O(VM_i) \to n_k)$ represents the path for migrating $VM_i$ from the original server $M^O(VM_i)$ to the new server $n_k$.

The routing and bandwidth allocation (RBA) procedure computes migration paths and allocates bandwidth on them. It finds a migration path and allocates migration bandwidth according to the original mapping records, the remapping records of each virtual machine, and the bandwidth requirements, and then calculates the migration time and the downtime of the VDC migration request. The detailed RBA procedure is shown in Procedure 2.

Procedure 2: VDC Routing and Bandwidth Allocation (RBA)
Input: 1. Substrate network GS = (NS, RS, ES) and resource constraints SC = (CE, CN, CR); 2. VDC request GV = (NV, EV) and migration constraints MC = (CN, CE, CD, VN, BN, M_N^O, M_E^O); 3. Remapping solution M^R = {M_C^R, M_N^R, M_E^R}.
Output: Migration time T_VDC,mig and downtime T_VDC,down.
1: Initialization: let T_VDC,down^start = 0, T_VDC,down^end = 0, T_i,mig = 0, T_i,down^start = 0, T_i,down^end = 0, T_VDC,mig = 0, T_VDC,down = 0;
2: Compute the minimal-cost path p(M^O(VMi), M^R(VMi)) for each VMi;
3: When the migration paths of all VMs have been found, migrate all VMs in parallel;
4: Calculate T_i,mig, T_i,down^start and T_i,down^end of each VMi according to Equations (1), (3) and (4), respectively;
5: Calculate T_VDC,mig, T_VDC,down^start, T_VDC,down^end and T_VDC,down according to Equations (6), (7), (8) and (9), respectively;
6: return T_VDC,mig, T_VDC,down.
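To illustrate Equations (10)-(14), the sketch below (ours) computes the remapping-and-migrating cost of placing a VM on a candidate server, using Dijkstra over per-unit link costs. The graph attributes follow the earlier sketches, and weighting only the migration cost by a in Equation (13) is our reading of the text, not a confirmed detail.

# Sketch (ours): the cost terms of Eqs. (10)-(14) for placing VM i on a
# candidate server nk. Graph attributes follow the earlier sketches; the
# weighting of the migration cost by `a` in Eq. (13) is our reading.
import networkx as nx

def path_unit_cost(G_S, src, dst):
    # cheapest sum of per-unit link prices between two substrate nodes
    return nx.shortest_path_length(G_S, src, dst, weight="price")

def mm_cost(G_V, G_S, vm, nk, placed, orig_server, a=0.5):
    # Eq. (11): server resource cost p(nk) * eps(VM_i)
    cost_vm = G_S.nodes[nk]["price"] * G_V.nodes[vm]["cpu"]
    # Eq. (12), summed over links to already-placed neighbors (Eq. (10))
    cost_links = sum(
        path_unit_cost(G_S, nk, placed[v]) * G_V.edges[vm, v]["bw"]
        for v in G_V.neighbors(vm) if v in placed)
    vm_cost = cost_vm + cost_links                         # Eq. (10)
    # Eq. (14): migration cost along the path from the original server
    mig = path_unit_cost(G_S, orig_server, nk) * G_V.nodes[vm]["mig_bw"]
    return vm_cost + a * mig                               # Eq. (13)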
5 SIMULATION RESULTS

We evaluate the performance of the proposed algorithm by conducting extensive simulations. In this section, we first introduce the simulation environment and then describe the performance metrics used in our simulations. Finally, we present our main simulation results and analysis.

5.1 Simulation Environment

In our simulations, we use the US-wide NSF network as the substrate network, with seven datacenters attached to it, as shown in Fig. 4. Each datacenter has a fat-tree topology [25], as shown in Fig. 5. To broaden the applicability of our algorithm, we use randomly generated VDC migration requests rather than a two-layer tree topology.

Fig. 4 The US-wide NSF network (figure omitted; seven datacenters, A through G, attached to the NSF backbone)
In the scenario of limited resource capacity, the computing resource capacity of the substrate servers follows a uniform distribution from 8 to 10 units. We follow Ref. [26] in setting the bandwidth capacities of the physical links inside each datacenter: the bandwidth capacity of the physical links directly connecting the servers is 10 Gbps, and that of the router-to-router physical links is 40 Gbps (as shown in Fig. 5). The bandwidth capacity of each core network link is 100 Gbps.

Fig. 5 The fat-tree topology (figure omitted; 10 Gbps server links and 40 Gbps router-to-router links)

Without loss of generality, we assume that: i) the per-unit server resource cost and the per-unit link bandwidth cost are both 1 unit; ii) the transmission delay of each core network link is 1 unit; iii) intra-datacenter links have no transmission delay. Furthermore, we assume that the VDC migration requests arrive following a Poisson process; the number of virtual machines in each VDC migration request is varied among 6, 12, 24 and 36; the server resource demand of each VDC migration request follows a uniform distribution U(1.3, 1.5), U(0.6, 0.8), U(0.2, 0.4) or U(0.1, 0.3); and the link resource requirement of each VDC migration request also follows one of these uniform distributions. The migration bandwidth of each virtual machine is 1 Gbps, and the transmission delay constraint of each link is 4 time units.

Most research on the VM migration problem considers only single-VM migration, and the multiple-VM migration algorithms that do exist do not consider the correlations among the virtual machines; these existing algorithms are therefore not directly comparable with our approach. The authors of [24] studied the multiple-VM migration problem, but with an objective different from ours. For a fair comparison, we extend and modify the algorithm proposed in [24] as follows: i) we replace its objective with ours; ii) we add a virtual-link remapping process. The extended algorithm is denoted VDC-SM in our simulation results.

5.2 Performance Metrics

We use the following metrics to evaluate the performance of our proposed algorithm. In the scenario of unlimited resource capacity, we measure the total remapping and migrating cost, the total migrating cost, the total remapping cost, the VDC remapping cost in core network, the VDC remapping cost in datacenter, the average migration time and the average downtime. In the scenario of limited resource capacity, we additionally measure the blocking ratio.

(1) The total VDC remapping and migrating cost: the total cost of using substrate network resources for remapping and migrating all VDC migration requests:

$$M_{cost}^{total} = \sum_{|ArrivedVDC|} M_C^R, \qquad (15)$$

where $M_C^R$ represents the cost of using substrate network resources for remapping and migrating a VDC request.

(2) The total VDC migrating cost: the total cost of using substrate network resources for migrating all VDC migration requests:

$$M_{cost}^{migrate} = \sum_{|ArrivedVDC|} M_{migrate}, \qquad (16)$$

where $M_{migrate}$ denotes the cost of using substrate network resources for migrating a VDC request.

(3) The total VDC remapping cost: the total cost of using substrate network resources for remapping the VDC migration requests:

$$M_{cost}^{remap} = \sum_{|ArrivedVDC|} M_{remap}, \qquad (17)$$

where $M_{remap}$ represents the cost of using substrate network resources for remapping a VDC migration request. Note that the total VDC remapping and migrating cost is the sum of the total VDC remapping cost and the total VDC migrating cost.

(4) The VDC remapping cost in core network: the cost of using core network resources for remapping the VDC migration requests:

$$M_{cost}^{core} = \sum_{|ArrivedVDC|} M_{Core}, \qquad (18)$$

where $M_{Core}$ represents the cost of using core network resources for remapping a VDC migration request.

(5) The VDC remapping cost in datacenter: the cost of using intra-datacenter resources for remapping all VDC migration requests:

$$M_{cost}^{datacenter} = \sum_{|ArrivedVDC|} M_{Datacenter}, \qquad (19)$$

where $M_{Datacenter}$ represents the cost of using intra-datacenter resources for remapping a VDC request. Note that the total VDC remapping cost is the sum of the VDC remapping cost in core network and the VDC remapping cost in datacenter.

(6) The average migration time: the migration time of each VDC migration request is given by Equation (6); the average migration time is:

$$T_{mig}^{average} = \frac{\sum_{|ArrivedVDC|} T_{VDC,mig}}{|ArrivedVDC|}. \qquad (20)$$

(7) The average downtime: the downtime of each VDC migration request is given by Equation (9); the average downtime is:

$$T_{down}^{average} = \frac{\sum_{|ArrivedVDC|} T_{VDC,down}}{|ArrivedVDC|}. \qquad (21)$$

(8) The blocking ratio: the ratio of the number of blocked VDC migration requests to the number of total arrived VDC migration requests:

$$P_b = \frac{|VDC_{blo}|}{|ArrivedVDC|}, \qquad (22)$$

where $|VDC_{blo}|$ and $|ArrivedVDC|$ represent the numbers of blocked and total arrived VDC migration requests, respectively.
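These aggregate metrics are straightforward to accumulate from per-request records. A minimal sketch (ours, with hypothetical record fields; note that we normalize the time averages by the number of served requests, whereas Equations (20)-(21) write the sum over all arrived requests):

# Sketch (ours): accumulate the metrics of Eqs. (15)-(22) from per-request
# records. Field names are hypothetical; averages are taken over the
# served requests here, while Eqs. (20)-(21) sum over ArrivedVDC.
def summarize(records):
    served = [r for r in records if not r["blocked"]]
    n_served = max(len(served), 1)          # avoid division by zero
    return {
        "total_cost": sum(r["remap_cost"] + r["mig_cost"] for r in served),  # Eq. (15)
        "mig_cost": sum(r["mig_cost"] for r in served),                      # Eq. (16)
        "remap_cost": sum(r["remap_cost"] for r in served),                  # Eq. (17)
        "core_cost": sum(r["core_cost"] for r in served),                    # Eq. (18)
        "dc_cost": sum(r["dc_cost"] for r in served),                        # Eq. (19)
        "avg_mig_time": sum(r["t_mig"] for r in served) / n_served,          # Eq. (20)
        "avg_downtime": sum(r["t_down"] for r in served) / n_served,         # Eq. (21)
        "blocking_ratio": 1 - len(served) / max(len(records), 1),            # Eq. (22)
    }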
5.3 Simulation Results and Analysis

Fig. 6 The total VDC remapping and migrating cost as a function of the traffic load, for different numbers of virtual machines in the VDC (figure omitted; panels (a)-(d) compare VDC-M and VDC-SM for n = 6, 12, 24 and 36)
Fig. 6 compares the total VDC remapping and migrating costs of the VDC-M and VDC-SM algorithms, where the number of virtual machines (i.e., n) is varied among 6, 12, 24 and 36. From Fig. 6, we can see that the total VDC remapping and migrating cost of the VDC-M algorithm is lower than that of the VDC-SM algorithm. This is because the VDC-M algorithm generates multiple candidate remapping solutions and selects the one with the minimum cost as the final remapping solution, and it produces more candidate solutions than the VDC-SM algorithm does. Therefore, the VDC-M algorithm is more likely to achieve a lower remapping and migrating cost than the VDC-SM algorithm.
Fig. 7 The total VDC migrating cost as a function of the traffic load, for different numbers of virtual machines in the VDC (figure omitted; panels (a)-(d) compare VDC-M and VDC-SM for n = 6, 12, 24 and 36)
Fig. 7 depicts the total VDC migrating costs of the VDC-M and VDC-SM algorithms when the number of virtual machines is varied among 6, 12, 24 and 36. From Fig. 7, we can see that the total VDC migrating cost of the VDC-M algorithm is lower than that of the VDC-SM algorithm. This is because the VDC-M algorithm considers the migration cost of each VM during the remapping process, whereas the VDC-SM algorithm does not; furthermore, the VDC-M algorithm provides more candidate remapping solutions than VDC-SM does. Therefore, the migrating cost of the VDC-M algorithm is lower than that of the VDC-SM algorithm.
Fig. 8 The total VDC remapping cost as a function of the traffic load, for different numbers of virtual machines in the VDC (figure omitted; panels (a)-(d) compare VDC-M and VDC-SM for n = 6, 12, 24 and 36)
Fig. 9 The VDC remapping cost in core network as a function of the traffic load, for different numbers of virtual machines in the VDC (figure omitted; panels compare VDC-M and VDC-SM for n = 12, 24 and 36)
Fig. 8 compares the total VDC remapping costs of the VDC-M and VDC-SM algorithms, where the number of virtual machines is varied among 6, 12, 24 and 36. Fig. 8 shows that the total VDC remapping cost of the VDC-M algorithm is lower than that of the VDC-SM algorithm. This is because the VDC-M algorithm generates more candidate remapping solutions than the VDC-SM algorithm does, and is therefore more likely to achieve a lower remapping cost. From Figs. 6, 7 and 8, we can see that the VDC-M algorithm simultaneously reduces the total VDC remapping cost and the total VDC migrating cost, and thus yields a lower total VDC remapping and migrating cost.
Fig. 9 shows the VDC remapping cost in core network of the VDC-M and VDC-SM algorithms when the number of virtual machines varies from 12 to 36. When the number of virtual machines is 6 and each datacenter has 16 servers (as shown in Fig. 5), all virtual machines can be provisioned by one datacenter; the VDC remapping cost in core network is then zero, so this scenario is absent from Fig. 9. From the results, we can see that the VDC remapping cost in core network of the VDC-M algorithm is lower than that of the VDC-SM algorithm. This is because the VDC-M algorithm remaps the virtual machines to the same datacenter as much as possible, which reduces the resource consumption in the core network.
Fig. 10 The VDC remapping cost in datacenter as a function of the traffic load, for different numbers of virtual machines in the VDC (figure omitted; panels compare VDC-M and VDC-SM for n = 12, 24 and 36)

Figure 10 compares the VDC remapping cost in datacenter of the VDC-M and VDC-SM algorithms, where the number of virtual machines is 12, 24 or 36. When the number of virtual machines is 6 and the number of servers per datacenter is fixed at 16, all virtual machines can be accommodated by one datacenter, so the VDC remapping cost in core network is zero; in that case, the intra-datacenter remapping costs of the two compared algorithms are the same as those shown in Fig. 8(a). Figure 10 shows that the VDC remapping cost in datacenter of the VDC-M algorithm is lower than that of the VDC-SM algorithm when the number of virtual machines varies from 12 to 36. This is because the VDC-M algorithm generates more candidate remapping solutions than the VDC-SM algorithm does. The VDC-M algorithm is therefore beneficial not only for reducing the VDC remapping cost in core network but also for lowering the VDC remapping cost in datacenter, which results in a lower total VDC remapping cost.

Figure 11 shows the average migration time of the VDC-M and VDC-SM algorithms under different VM memory sizes; for example, V~U(0.5, 1) means that the VM memory size follows the uniform distribution U(0.5, 1). From Figure 11, we can see that the VDC-M algorithm achieves a significantly lower average migration time than the VDC-SM algorithm under the same VM memory size. The VDC-M algorithm migrates the VMs in parallel, so its migration time is the maximum migration time over all virtual machines, whereas the VDC-SM algorithm migrates the VMs serially, so its migration time equals the sum of the migration times of all virtual machines. Furthermore, when the number of virtual machines is fixed, the average migration time of both algorithms increases with the virtual machine memory size, because a larger memory size leads to a larger total amount of memory to transfer and thus more time to finish the migration.

Fig. 11 The average migration time as a function of the number of virtual machines in the VDC (figure omitted; panels (a) and (b) show VDC-M and VDC-SM under V~U(0.5,1), U(0.7,1), U(0.9,1) and V = 1)
Fig. 12 The average downtime as a function of the number of virtual machines in the VDC (figure omitted; panels (a) and (b) show VDC-M and VDC-SM under V~U(0.5,1), U(0.7,1), U(0.9,1) and V = 1)

Fig. 12 shows the simulation results on the average downtime of the compared algorithms, VDC-M and VDC-SM. From the figures, we can see that the average downtime of the VDC-M algorithm is much lower than that of the VDC-SM algorithm when the number of virtual machines and the virtual machine memory size (denoted as V in Fig. 11) are the same for both. This is because the VDC-M algorithm uses the parallel migration strategy to migrate the virtual machines in the VDC request, whereas the VDC-SM algorithm adopts the serial migration strategy. Furthermore, for a fixed virtual machine memory size, the downtime of the VDC-M algorithm grows very slowly with the number of virtual machines, or even remains stable, whereas the average downtime of the VDC-SM algorithm increases with the number of virtual machines. For a fixed number of virtual machines, the downtime of the VDC-M algorithm decreases as the virtual machine memory size increases, whereas the downtime of the VDC-SM algorithm moves in the opposite direction. These differences arise because the two algorithms use different migration strategies, which result in different downtimes.

Fig. 13 The blocking ratio as a function of the traffic load, for different numbers of virtual machines in the VDC (figure omitted; panels (a)-(d) compare VDC-M and VDC-SM for n = 6, 12, 24 and 36)

Fig. 13 illustrates the blocking ratios of the VDC-M and VDC-SM algorithms under various numbers of virtual machines. From Fig. 13, we see that the blocking ratio of the VDC-M algorithm is much lower than that of the VDC-SM algorithm. This is because the VDC-M algorithm achieves a much better remapping solution than the VDC-SM algorithm does, which reduces resource consumption and results in a lower blocking ratio in the long run.
6 CONCLUSION AND DISCUSSION
In this paper, we study the problem of online live migration for multiple correlated VMs (i.e., VDC migration requests). We devise an efficient algorithm, VDC-M, for solving this
problem. In the VDC-M algorithm, we consider the correlations between the VMs included in the VDC migration request and treat these correlated VMs as a whole rather than separately. The algorithm first remaps the VDC migration request, then computes migration paths and allocates bandwidth resources for migrating the virtual machines from the source servers to the destination ones. We use the US-wide NSF network as the substrate network to conduct extensive simulations evaluating the performance of our proposed algorithm. The experimental results show that our approach outperforms the benchmark algorithm, VDC-SM, in terms of the VDC remapping cost, the blocking ratio, the average migration time and the average downtime. However, to reduce the blocking ratio and the VDC remapping and migrating cost, the VDC-M algorithm tries to generate as many candidate solutions as possible, which leads to a higher processing time; the time efficiency of our approach is therefore lower than that of the VDC-SM algorithm. Moreover, VMs are migrated in parallel in our proposed VDC-M algorithm, whereas the VDC-SM algorithm employs the serial migration strategy and migrates the VMs one by one. Therefore, with a fixed-bandwidth migration path for each VM, at a given moment during the migration our algorithm can impose a heavier network traffic load than the VDC-SM algorithm does.
ACKNOWLEDGEMENT This research is partially supported by the National Grand Fundamental Research 973 Program of China under grant (2013CB329103), Natural Science Foundation of China grant (61271171, 61571098), China Postdoctoral Science Foundation (2015M570778), the Fundamental Research Funds for the Central Universities (ZYGX2013J002), Guangdong Science and Technology Project (2012B090400031, 2012B090500003, 2012B091000163), and National Development and Reform Commission Project.
REFERENCES
[1] Hao Jin, Deng Pan, Jing Xu, and Niki Pissinou. Efficient VM Placement with Multiple Deterministic and Stochastic Resources in Datacenters. IEEE GLOBECOM, pp. 2505-2510, 2012.
[2] Ahmed Amokrane, Mohamed Faten Zhani, Rami Langar, Raouf Boutaba, and Guy Pujolle. Greenhead: Virtual Datacenter Embedding across Distributed Infrastructures. IEEE Transactions on Cloud Computing, 1(1), 2013.
[3] Xiaolin Xu, Hai Jin, Song Wu, and Yihong Wang. Rethink the storage of virtual machine images in clouds. Future Generation Computer Systems, 50, pp. 75-86, 2015.
[4] Davide Adami, Stefano Giordano, Michele Pagano, and Simone Roma. Virtual Machines Migration in a Cloud Datacenter Scenario: An Experimental Analysis. IEEE ICC, pp. 2578-2582, 2013.
[5] Mahdi Aiash, Glenford Mapp, and Orhan Gemikonakli. Secure Live Virtual Machines Migration: Issues and Solutions. IEEE International Conference on Advanced Information Networking and Applications Workshops, pp. 160-165, 2014.
[6] Zhenzhong Zhang, Limin Xiao, Xianchu Chen, and Junjie Peng. A Scheduling Method for Multiple Virtual Machines Migration in Cloud. Lecture Notes in Computer Science, 8147, pp. 130-142, 2013.
[7] Walter Cerroni. Multiple Virtual Machine Live Migration in Federated Cloud Systems. IEEE INFOCOM Workshop on Cross-Cloud Systems, pp. 25-30, 2014.
[8] Franco Callegati and Walter Cerroni. Live Migration of Virtualized Edge Networks: Analytical Modeling and Performance Evaluation. IEEE Workshop on Software Defined Networks for Future Networks and Services (SDN4FNS 2013), pp. 1-6, 2013.
[9] Tusher Kumer Sarker and Maolin Tang. Performance-driven Live Migration of Multiple Virtual Machines in Datacenters. IEEE International Conference on Granular Computing (GrC), pp. 253-258, 2013.
[10] Citrix Systems, Inc. Xen Hypervisor. http://www.xenproject.org/developers/teams/hypervisor.html.
[11] Microsoft, Inc. Windows Server 2012 R2. http://www.microsoft.com/en-us/server-cloud/hyper-v-server.
[12] VMware, Inc. VMware vSphere Hypervisor. http://www.vmware.com/products/vsphere-hypervisor.
[13] Hamzeh Khazaei, Jelena Misic, and Vojislav B. Misic. Performance of an IaaS Cloud with Live Migration of Virtual Machines. IEEE GLOBECOM, pp. 2289-2293, 2013.
[14] Jiao Zhang, Fengyuan Ren, and Chuang Lin. Delay Guaranteed Live Migration of Virtual Machines. IEEE INFOCOM, pp. 574-582, 2014.
[15] Youwei Ding, Xiaolin Qin, Liang Liu, and Taochun Wang. Energy efficient scheduling of virtual machines in cloud with deadline constraint. Future Generation Computer Systems, 50, pp. 62-74, 2015.
[16] K. Sunil Rao and P. Santhi Thilagam. Heuristics based server consolidation with residual resource defragmentation in cloud data centers. Future Generation Computer Systems, 50, pp. 87-98, 2015.
[17] Liuhua Chen and Haiying Shen. Consolidating Complementary VMs with Spatial/Temporal-awareness in Cloud Datacenters. IEEE INFOCOM, pp. 1033-1041, 2014.
[18] Mohamed Mohamed, Mourad Amziani, Djamel Belaïd, Samir Tata, and Tarek Melliti. An autonomic approach to manage elasticity of business processes in the Cloud. Future Generation Computer Systems, 50, pp. 49-61, 2015.
[19] Jielong Xu, Jian Tang, Kevin Kwiat, Weiyi Zhang, and Guoliang Xue. Enhancing Survivability in Virtualized Data Centers: A Service-Aware Approach. IEEE Journal on Selected Areas in Communications, 31(12), pp. 2610-2619, 2013.
[20] Daochao Huang, Yangyang Gao, Fei Song, Dong Yang, and Hongke Zhang. Multi-objective Virtual Machine Migration in Virtualized Data Center Environments. IEEE ICC, pp. 3699-3704, 2013.
[21] Eric Keller, Soudeh Ghorbani, Matt Caesar, and Jennifer Rexford. Live Migration of an Entire Network (and its Hosts). The 11th ACM Workshop on Hot Topics in Networks (HotNets-XI), pp. 109-114, 2012.
[22] Muhammad Atif and Peter Strazdins. Adaptive parallel application resource remapping through the live migration of virtual machines. Future Generation Computer Systems, 37, pp. 136-161, 2014.
[23] Chao-Tung Yang, Jung-Chun Liu, and Kuan-Lung Huang. A method for managing green power of a virtual machine cluster in cloud. Future Generation Computer Systems, 37, pp. 26-36, 2014.
[24] Fei Xu, Fangming Liu, Linghui Liu, Hai Jin, Bo Li, and Baochun Li. iAware: Making Live Migration of Virtual Machines Interference Aware in the Cloud. IEEE Transactions on Computers, 63(12), pp. 3012-3025, 2014.
[25] Shuo Fang, Renuga Kanagavelu, Bu-Sung Lee, Chuan Heng Foh, and Khin Mi Mi Aung. Power-efficient Virtual Machine Placement and Migration in Data Centers. IEEE International Conference on Green Computing and Communications and IEEE Internet of Things and IEEE Cyber, Physical and Social Computing, pp. 1408-1413, 2013.
[26] Mohammad Alizadeh, Shuang Yang, Milad Sharif, Sachin Katti, Nick McKeown, Balaji Prabhakar, and Scott Shenker. pFabric: Minimal Near-Optimal Datacenter Transport. ACM SIGCOMM, 43(4), pp. 435-446, 2013.
Gang Sun is an associate professor at University of Electronic Science and Technology of China (UESTC). He received his Ph.D. degree in Communication and Information Engineering in 2012 from UESTC. His research interests are in the area of network virtualization, datacenter networking and cloud computing.
Dan Liao is an associate professor at University of Electronic Science and Technology of China (UESTC). He received his Ph.D. degree in Communication and Information Engineering in 2007 from UESTC. His research interests are in the area of next generation network, wired and wireless computer communication networks and protocols.
Dongcheng Zhao is pursuing his Master degree in Communication and Information System at University of Electronic Science and Technology of China. His research interests include cloud computing and datacenter networking.
Zichuan Xu received his ME degree and BSc degree from Dalian University of Technology in China in 2011 and 2008, both in Computer Science. He is currently pursuing his PhD study in the Research School of Computer Science at the Australian National University. His research interests include cloud computing, wireless sensor networks, routing protocol design for wireless networks, algorithmic game theory, and optimization problems.
Hongfang Yu received her B.S. degree in Electrical Engineering in 1996 from Xidian University, her M.S. degree and Ph.D. degree in Communication and Information Engineering in 1999 and 2006 from University of Electronic Science and Technology of China, respectively. From 2009 to 2010, she was a Visiting Scholar at the Department of Computer Science and Engineering, University at Buffalo (SUNY). Her research interests include network survivability and next generation Internet, cloud computing etc.