2013 8th International Conference on Communications and Networking in China (CHINACOM)

Virtual Machine Live Migration for Pervasive Services in Cloud-Assisted Vehicular Networks

Rong Yu¹, Yan Zhang², Huimin Wu¹, Periklis Chatzimisios³, and Shengli Xie¹
¹ Guangdong University of Technology, China, email: {yurong, huiminwu, eeshlxie}@ieee.org
² Simula Research Laboratory, Norway, email: [email protected]
³ Alexander Technological Educational Institute of Thessaloniki, Greece, email: [email protected]

Abstract—The physical resources of vehicles and roadside infrastructures in vehicular networks are stringently constrained. Applying mobile cloud computing technology can significantly improve the utilization of these scarce physical resources. In the newly emerged paradigm of cloud-assisted vehicular networks, however, vehicle mobility poses a significant challenge to the continuity of cloud services. This paper proposes efficient Virtual Machine (VM) live migration mechanisms to address this problem. In particular, a selective dirty page transfer strategy is designed to enhance the efficiency of data transfer in VM live migration. In addition, an optimal resource reservation scheme is proposed to ensure sufficient physical resources at the target cloud site, so that migration dropping is significantly reduced. Simulations demonstrate the efficiency of the two proposed mechanisms.
Index Terms—Mobile cloud computing, vehicular network, virtual machine migration, dirty page transfer, resource reservation.

I. INTRODUCTION

In the era of the Internet of Things (IoT) [1], vehicular networks evolve from traditional vehicular ad hoc networks (VANETs) [2] towards the new paradigm of the Internet of Vehicles (IoV) [3]. In IoV, smart sensors and actuators are installed in vehicles and roadside infrastructures for data collection and decision execution, and advanced wireless communication technologies are applied for inter-networking and information exchange. Short-range wireless communication technology, such as Dedicated Short Range Communications (DSRC), is specifically designed for Vehicle-to-Vehicle (V2V) and Vehicle-to-Roadside (V2R) communications. IEEE 802.11p, also called Wireless Access in Vehicular Environments (WAVE) [4], is currently the most popular international standard for DSRC. Long-range wireless communication technology, such as 3G or 4G cellular technology, is exploited for remote connections between the vehicular/roadside units and the data center. Cognitive radio, as a revolutionary communication technology [5][6], could be leveraged to enrich spectrum access strategies and enhance radio resource utilization in VANETs [7][8]. By efficiently integrating these heterogeneous wireless communication technologies, IoV constitutes a fundamental information platform and an indispensable part of Intelligent Transport Systems (ITS).

A practical difficulty hinders the development of vehicular networks. On the one hand, the volume of data transmitted by the vehicular and roadside sensors and actuators is growing explosively and will reach a magnitude of 10 ∼ 100 Tb per day in the next few years. This tremendous amount of data includes a variety of structured and unstructured data, some of which must be processed rapidly for fast decision making. On the other hand, every single vehicle or roadside infrastructure in IoV is heavily constrained in physical resources. Due to practical requirements on the size, weight and cost of the hardware systems, vehicles and roadside units generally have very limited computation and storage capabilities.

To overcome this difficulty, we consider a cloud-assisted vehicular network architecture [9], in which cloud computing is exploited as a key enabling technology to organize all the physical resources in vehicular networks. However, deploying clouds in vehicular networks still faces a number of challenges.







∙ Continuity of Cloud Services: Vehicle mobility leads to dynamic changes in network topology. Frequent topology changes may interrupt ongoing cloud services. Accordingly, cloud resource management should be carefully designed.
∙ Quality of Cloud Services: Due to the complicated vehicular communication environment, wireless channels may suffer from signal fading. The unstable channel quality may severely degrade the quality of cloud services.
∙ Accessibility of Cloud Services: Vehicular networks are essentially hybrid networks that integrate different types of wireless mobile communication systems. The cloud architecture should preferably provide interfaces that support the different access modes.
∙ Security and Privacy of Cloud Services: In cloud-assisted vehicular networks, important and private information is transmitted, stored and processed in vehicles and roadside infrastructures, which are vulnerable to attacks. Efficient security mechanisms are needed to protect cloud services in vehicular networks.

In this paper, we consider the deployment of clouds in vehicles, roadside infrastructures and ITS central server clusters, namely, the vehicular cloud, the roadside cloud and the central cloud, respectively. In this paradigm, vehicles are allowed to access these three types of cloud sites for services. Due to vehicle mobility, a cloud service has to shift from one cloud to another to remain uninterrupted. Our motivation is to design efficient mechanisms to ensure the continuity of cloud services. We propose Virtual Machine (VM) live migration as a key technique towards pervasive services in


978-1-4799-1406-7 © 2013 IEEE

Fig. 1. VM live migration scenarios: (1) inter-roadside-cloud migration, (2) across roadside-central cloud migration, (3) across central-vehicular cloud migration.

vehicular networks.

The remainder of the paper is organized as follows. Section II describes the main scenarios and the general procedure of VM live migration in vehicular networks. Section III proposes a selective dirty page transfer strategy to improve migration efficiency. A resource reservation approach is proposed and analyzed in Section IV. The paper is concluded in Section V.



II. VM LIVE MIGRATION IN CLOUD-ASSISTED VEHICULAR NETWORKS



A Virtual Machine (VM) is a simulation of a real or abstract machine. In the literature, VM techniques have mainly been studied in computer networks [12]. In a recent study [13], VM migration is considered for dynamic resource management in cloud environments. In [14], VM replication and scheduling are intelligently combined for VM migration across wide area network environments. The study in [15] conducts a number of interesting experiments to compare several resource management schemes for VM migration. However, there are few studies on VM resource management in mobile cloud environments. In [16], the cloudlet is discussed and customized for mobile computing environments.

A. VM Live Migration Scenarios

VM live migration refers to the process by which an operating VM is transferred, along with its applications, across different physical machines. As shown in Fig. 1, in cloud-assisted vehicular networks, vehicles obtain cloud services from roadside infrastructures, ITS central servers and other vehicles. As a vehicle moves along the road, it has to hand off to different infrastructures, and the cloud service has to shift from one cloud site to another at the same time. Since the VM is the fundamental entity that carries out cloud services, VM live migration is a preferred means to guarantee service continuity. In a VM migration, the VM image has to be copied from the source to the destination cloud site. In cloud-assisted vehicular networks, VM live migration has several scenarios, as explained below.
∙ Inter-Roadside-Cloud Migration: In Fig. 1 (case 1), when vehicle A moves from the radio range of RSU-1 to that of RSU-2, a VM migration is needed. Guest VM-A should be transferred between the two roadside cloudlets to resume its service.
∙ Across Roadside-Central Cloud Migration: In Fig. 1 (case 2), when vehicle A moves out of the radio range of RSU-2, no roadside cloud is available any more, only the central cloud. In this case, guest VM-A has to be migrated from the roadside cloud to the central cloud. After that, vehicle A resumes its service by accessing the central cloud over cellular wireless communications.
∙ Across Central-Vehicular Cloud Migration: In Fig. 1 (case 3), when vehicle A moves into the radio range of another vehicle, say vehicle B, vehicle A has the opportunity to transfer its VM to vehicle B. After migration, vehicle A resumes its service by accessing the vehicular cloud over V2V communications.
∙ Across Roadside-Vehicular Cloud Migration: This scenario, not shown in Fig. 1, is similar to the across roadside-central cloud one, except that the destination vehicle should be connected to the source RSU so that a data link exists for the VM migration.
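The four scenarios above amount to a target-selection rule driven by the connectivity the vehicle currently sees. A minimal sketch of such a rule follows; the preference order (roadside cloud first, neighbor vehicle next, central cloud last) and all names are illustrative assumptions, not something the paper specifies:

```python
from enum import Enum, auto

class Migration(Enum):
    INTER_ROADSIDE = auto()         # RSU -> RSU
    ROADSIDE_TO_CENTRAL = auto()    # RSU -> ITS central cloud
    CENTRAL_TO_VEHICULAR = auto()   # central cloud -> neighbor vehicle
    ROADSIDE_TO_VEHICULAR = auto()  # RSU -> neighbor vehicle

def pick_migration(next_rsu_in_range: bool,
                   neighbor_vehicle_in_range: bool,
                   served_by_rsu: bool) -> Migration:
    """Choose a migration scenario from the vehicle's current connectivity.

    The preference order here is an assumption: stay on roadside clouds
    when possible, fall back to a neighbor vehicle, and use the central
    cloud as the last resort.
    """
    if next_rsu_in_range:
        return Migration.INTER_ROADSIDE
    if neighbor_vehicle_in_range:
        return (Migration.ROADSIDE_TO_VEHICULAR if served_by_rsu
                else Migration.CENTRAL_TO_VEHICULAR)
    return Migration.ROADSIDE_TO_CENTRAL
```

In practice the decision would also weigh the expected contact time with the candidate target, but the branching structure stays the same.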

B. VM Live Migration Flow Chart

A VM live migration involves a number of interactions among the end-user vehicle and the source and target cloud sites. We consider the pre-copy approach [10] and design the procedure of VM live migration in cloud-assisted vehicular networks illustrated in Fig. 2.


Fig. 2. VM live migration procedure: message sequence among the source cloud site, the end-user vehicle and the target cloud site, with time instants t1 to t9 marking the migration request, resource re-allocation, memory transfer, dirty page transfer, service halt and service resume.

1) RSS Threshold Detection: The end-user vehicle periodically monitors the Received Signal Strength (RSS) from both the source and the target cloud sites. As soon as the difference between the RSS of the two cloud sites reaches a preset threshold, the end-user vehicle sends a migration request to the target cloud site.
2) VM & Network Resource Re-allocation: Upon receiving the VM migration request, the target cloud site performs VM and network resource re-allocation to decide whether the migration is acceptable. If the migration is admitted, a wireless link and a certain amount of cloud resources are assigned to the migrated VM. If the migration is denied, the end-user vehicle has to seek another target cloud site.
3) VM Data Transfer: After the end-user triggers the migration, the source cloud site starts to transfer the VM data to the target cloud site. The process has two steps: first the VM memory, the main body of the VM, is transferred, and then the VM memory dirty pages. During the VM memory transfer, dirty pages are generated because the VM service is still running. It generally takes a number of phases to transfer the dirty pages, because new dirty pages may be generated while the current dirty pages are being transferred.
4) VM Service Halt: Once the last set of dirty pages is small enough, the source cloud site temporarily suspends the VM service, finishes the transfer of the last dirty pages, and announces the completion of the VM live migration.
5) VM Service Resume: Upon VM migration completion, the end-user vehicle requests the target cloud site to set up a new radio link, through which the VM service continues.
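Step 1 of the procedure reduces to a hysteresis test on the two RSS readings. A sketch, in which the 3 dB margin is an illustrative assumption (the paper only requires a preset threshold):

```python
def should_request_migration(rss_source_dbm: float,
                             rss_target_dbm: float,
                             threshold_db: float = 3.0) -> bool:
    """RSS Threshold Detection: the vehicle requests a migration once
    the target site's RSS exceeds the source site's RSS by a preset
    margin. The margin acts as hysteresis, avoiding ping-pong
    migrations between two sites of similar signal strength."""
    return (rss_target_dbm - rss_source_dbm) >= threshold_db
```

A real implementation would average the RSS over a short window before applying the test, so that fast fading does not trigger spurious requests.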

In the above procedure, the total migration time spans from t4 to t9, and the VM downtime spans from t6 to t9 (see Fig. 2). The interval from t5 to t6 is consumed by the dirty page transfer.

III. SELECTIVE DIRTY PAGE TRANSFER FOR VM LIVE MIGRATION

A. Selective Dirty Page Transfer Strategy

In the entire process of VM data transfer, the transfer of dirty pages is essential to both the total migration time and the VM downtime. In the most popular VM live migration strategy, used in Xen [11], dirty pages are completely copied to the target cloud site in each round of transfer. This action is repeated until one of the following conditions is satisfied:
∙ Fewer than 50 pages were dirtied in the last round of transfer.
∙ 30 rounds of transfer have been carried out.
∙ The total amount of transferred data is 3 times the memory size of the migrated VM.
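The three stop conditions can be expressed as a single predicate evaluated after each pre-copy round. A sketch with the thresholds listed above (parameter names are ours):

```python
def xen_stop(dirty_last_round: int, rounds_done: int,
             bytes_sent: int, vm_memory_bytes: int) -> bool:
    """Stop the iterative pre-copy under the Xen-style default rules:
    fewer than 50 pages dirtied in the last round, 30 rounds already
    done, or 3x the VM memory size already transferred."""
    return (dirty_last_round < 50
            or rounds_done >= 30
            or bytes_sent >= 3 * vm_memory_bytes)
```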

This strategy is inefficient in the sense that most of the previously transferred pages have to be retransferred later; some of them may be transferred many times. Existing research has indicated that pages in a memory are accessed with different frequencies, which implies that pages should be transferred in a specifically designed sequence. Motivated by this observation, we propose the selective dirty page transfer strategy, whose main features are described below.




∙ In each round of dirty page transfer, only a fixed number of pages are copied to the target site.
∙ The pages to be transferred are selected from all dirty pages by evaluating their dirtied rates.
∙ The dirtied rates of pages are dynamically updated according to the actual observations.
The details of the selective dirty page transfer strategy are explained in the following subsection, where we formulate the problem in an optimal stopping framework and discuss an efficient solution.
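The first two features above boil down to picking, in each round, the K dirty pages least likely to be dirtied again; the third is an online update of the per-page dirtied rates. A sketch of both operations (the EWMA update and its smoothing factor are illustrative assumptions; the paper only requires that rates be updated from actual observations):

```python
import heapq

def select_pages(dirty: set, rate: list, K: int) -> list:
    # Transfer the K dirty pages with the smallest dirtied rates:
    # these are the least likely to need retransmission later.
    return heapq.nsmallest(K, dirty, key=lambda i: rate[i])

def update_rates(rate: list, dirtied_now: set, beta: float = 0.2) -> None:
    # EWMA update of the per-page dirtied rates from the latest
    # observation (1 if the page was dirtied this stage, else 0).
    for i in range(len(rate)):
        obs = 1.0 if i in dirtied_now else 0.0
        rate[i] = (1 - beta) * rate[i] + beta * obs
```

`heapq.nsmallest` keeps the selection at O(n log K) per round, which matters when the VM has tens of thousands of pages.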

B. Optimal Stopping Formulation and Solution

Consider a VM with I memory pages, and let p_i (i = 1, 2, ..., I) denote the i-th page in the memory. There are two types of pages: clean pages and dirty pages. Dirty pages are further classified into selected dirty pages for transfer and remaining dirty pages. Let Ω_c, Ω_s and Ω_d denote the sets of clean pages, selected dirty pages and remaining dirty pages, respectively. The entire process of dirty page transfer is divided into a series of stages, indexed by n = 1, 2, .... In each stage, which takes a time duration t_s, a fixed number K of dirty pages are transferred. Let x_n^i denote the state of the i-th page at the n-th stage (i.e., at the beginning of the n-th round of data transfer). Here, x_n^i ∈ {0, 1}, where 0 stands for clean and 1 for dirty. The state vector of the VM memory at the n-th stage is represented by X_n = (x_n^1, x_n^2, ..., x_n^I). The state space is denoted by 𝒳 and has size 2^I. If the dirty page transfer iteration stops at a certain stage, the VM is suspended and the remaining dirty pages are copied to the target site. Let Y_n^dt and Y_n^mt, respectively, denote the VM downtime and the total migration time if the dirty page transfer stops right before the n-th stage. Let w denote the data transfer rate in pages per second. We have

$$Y_n^{dt}(X_1, X_2, \cdots, X_n) = \frac{|\Omega_{d,n}| + |\Omega_{s,n}| - K}{w} \quad (1)$$

$$Y_n^{mt}(X_1, X_2, \cdots, X_n) = n t_s + Y_n^{dt} \quad (2)$$

where |Ω| represents the number of elements in set Ω.

The VM downtime and the total migration time are two essential metrics for VM live migration, and different VM services have different requirements on them. For instance, in storage-intensive applications the total migration time should be reduced to prevent additional memory occupation at both the source and destination sites, while in real-time applications the downtime should be strictly limited to satisfy Quality of Service (QoS) provisioning. To balance the downtime and migration time costs, we define the cost function for stopping the dirty page transfer right before the n-th stage as

$$Y_n(X_1, X_2, \cdots, X_n) = \alpha Y_n^{dt} + (1 - \alpha) Y_n^{mt} \quad (3)$$

where 0 < α < 1 is a constant preset by the migrated VM.

If the selected dirty pages are transferred at a certain stage, some of the clean pages will become dirty. The dirtied rate of a given page is defined as the probability that the clean page becomes dirty within one stage; the dirtied rate of the i-th page is denoted by r_i. To select the pages for transfer, we sort the dirty pages by their dirtied rates. At the n-th stage, the K pages with the smallest dirtied rates are included into the set Ω_{s,n}. Let Ω_{d,n}^{new} denote the set of newly dirtied pages in stage n. We have

$$E\{|\Omega_{d,n}^{new}|\} = \sum_{p_i \in \Omega_{s,n} \cup \Omega_{c,n}} r_i \quad (4)$$

and then

$$E\{Y_{n+1}^{dt}(X_1, X_2, \cdots, X_n)\} = \frac{|\Omega_{d,n}| + E\{|\Omega_{d,n}^{new}|\} - K}{w} \quad (5)$$

$$E\{Y_{n+1}^{mt}(X_1, X_2, \cdots, X_n)\} = (n+1) t_s + E\{Y_{n+1}^{dt}\} \quad (6)$$

Substituting (5) and (6) into (3) yields

$$E\{Y_{n+1}(X_1, X_2, \cdots, X_n)\} = (n+1)(1-\alpha) t_s + \frac{|\Omega_{d,n}| + \sum_{p_i \in \Omega_{s,n} \cup \Omega_{c,n}} r_i - K}{w} \quad (7)$$

The optimal cost function V(n) is derived by

$$V(n) = \min\bigl\{Y_n(X_1, \cdots, X_n),\; E\{V_{n+1}(X_1, \cdots, X_n)\}\bigr\} \quad (8)$$

where

$$E\{V_{n+1}(X_1, X_2, \cdots, X_n)\} = \sum_{X \in \mathcal{X}} V_{n+1}(X_1, X_2, \cdots, X_n, X_{n+1} = X) \quad (9)$$
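Equations (1), (3) and (7) combine into the one-step-look-ahead test discussed next in this section: stop the iteration as soon as the expected cost of running one more stage exceeds the cost of stopping now. A sketch (argument names are ours, and the equation reconstruction above is assumed):

```python
def stop_transfer(n: int, n_dirty_rem: int, n_selected: int,
                  expected_new_dirty: float, K: int, w: float,
                  t_s: float, alpha: float) -> bool:
    """One-step-look-ahead rule: stop right before stage n if
    E{Y_{n+1}} > Y_n.

    n_dirty_rem        = |Omega_{d,n}|   (remaining dirty pages)
    n_selected         = |Omega_{s,n}|   (pages selected this stage)
    expected_new_dirty = sum of r_i over Omega_{s,n} u Omega_{c,n}, Eq. (4)
    """
    y_dt = (n_dirty_rem + n_selected - K) / w               # Eq. (1)
    y_n = (1 - alpha) * n * t_s + y_dt                      # Eqs. (2)-(3)
    e_y_next = ((n + 1) * (1 - alpha) * t_s
                + (n_dirty_rem + expected_new_dirty - K) / w)  # Eq. (7)
    return e_y_next > y_n
```

Note that only the current stage's dirty-page counts and the summed dirtied rates are needed, which is what makes the rule cheap compared with backward recursion over the 2^I-sized state space.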

For a practical VM live migration, the total migration time is generally restricted to a predefined value, so there exists a maximum number of stages before which the iteration of dirty page transfer must be stopped. As a consequence, the optimal stopping problem discussed here is a finite horizon problem, which, in principle, could be solved using a backward recursion approach. However, the size of the state space in our problem is large (e.g., for a VM memory of 256 MB and 4 KB per page, there are 65536 pages in total) and the state transition matrix is too huge for the backward recursion algorithm to be carried out efficiently. Instead, we adopt the one-step-look-ahead approach to approximate the optimal solution of the formulated optimal stopping problem. The resulting sub-optimal dirty page transfer strategy is to compare the cost function Y_n with the expected cost E{Y_{n+1}} at each stage, and to stop the transfer iteration if E{Y_{n+1}} > Y_n.

To evaluate the proposed selective dirty page transfer strategy, we carry out a simulation comparing the Xen strategy with ours. In the simulation, the migrated VM has a memory of 256 MB with 4 KB per page, i.e., 65536 pages in total. The transfer rate is 10 MBps. As shown in Fig. 3, the selective dirty page transfer strategy has a prominent advantage over the Xen-based transfer strategy. Specifically, in the first 10 seconds, the amount of transferred dirty pages in the selective transfer strategy (14 K) is 3.2 times that in the Xen-based strategy (4 K).

Fig. 3. The amount of remaining dirty pages in VM memory (selective transfer vs. Xen-based transfer).

IV. OPTIMAL RESOURCE RESERVATION FOR VM LIVE MIGRATION

A. Resource Reservation Scheme

The proposed VM migration procedure (see Fig. 2) involves resource re-allocation at the target cloud site. If the resources of the target cloud site are already intensively occupied, then after VM migration and resource re-allocation some of the VMs may not have sufficient resources and may not even be able to resume their services. To avoid resource over-commitment, the target cloud site has to deny the VM migration so as to maintain the services of the existing VMs. In this case, the cloud service of the vehicle requesting the VM migration is said to be dropped.

To reduce service dropping, we propose a resource reservation scheme in which a small portion of the cloud site resources is reserved exclusively for migrated VMs, not for local VMs. Since there are dedicated resources for VM migration, the dropping rate of cloud services is significantly decreased. In the proposed scheme, resources are divided into two categories: reserved resources and common resources. Let C_r and M_r denote the reserved resources, and C_c = C - C_r and M_c = M - M_r the common resources, in computation and storage, respectively. A VM arrival refers to the event that a VM is created, either as a new local VM or as a migrated VM. A VM departure refers to a request for VM deletion, either because a VM service ends or because a VM migrates to another cloud site. The resource reservation scheme operates in the following cases.





∙ Local VM arrival: When there is a request for creating a new local VM, resource allocation is carried out, e.g., using the proposed game-theoretic allocation scheme. Since a part of the resources is reserved, the local VMs can only share the common resources, i.e., C_c and M_c in computation and storage, respectively. If the resource allocation result satisfies all existing VMs, the new local VM is admitted; otherwise, it is blocked.
∙ Local VM departure: The resource allocation is also performed when the service of one of the local VMs ends or migrates to another cloud site.
∙ Migrated VM arrival: Upon a request of VM migration, the target cloud site re-allocates resources. In this case, the reserved resources are also taken into account; specifically, the existing local VMs and the migrated VM share the total amount of resources. After



Fig. 4. Dropping rate R_d in terms of the local VM arrival rate, for reservation with R_b^c = 0.01, R_b^c = 0.02, and no reservation.

re-allocation, if all the VMs (including the migrated VM) are satisfied, the VM migration is approved; otherwise, the VM migration request is rejected.
∙ Migrated VM departure: The resource re-allocation also happens when the service of a migrated VM ends or migrates to another cloud site.
Note that, if there are no migrated VMs in a cloud site, the resource allocation can only use the common resources; the reserved resources are conserved for future use upon VM migration.
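The two arrival cases above boil down to two admission tests: local VMs are checked against the common resources only, migrated VMs against the full capacity. A simplified sketch with fixed per-VM demands (the paper's scheme re-runs a full allocation rather than this plain capacity check, so treat it as an approximation):

```python
def local_vm_admissible(demand_c: int, demand_m: int,
                        local_c_used: int, local_m_used: int,
                        C_c: int, M_c: int) -> bool:
    # Local VM arrival: new local VMs may only draw on the common
    # resources C_c = C - C_r and M_c = M - M_r.
    return (local_c_used + demand_c <= C_c and
            local_m_used + demand_m <= M_c)

def migrated_vm_admissible(demand_c: int, demand_m: int,
                           total_c_used: int, total_m_used: int,
                           C: int, M: int) -> bool:
    # Migrated VM arrival: the reserved resources are also available,
    # so the test is against the full capacity C and M.
    return (total_c_used + demand_c <= C and
            total_m_used + demand_m <= M)
```

The asymmetry between the two tests is exactly what protects migrated VMs: a burst of local arrivals can exhaust C_c and M_c, yet a migrating VM can still be admitted out of the reserved headroom.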

B. Optimal Resource Reservation

We consider K classes of VMs. Let c_k and m_k represent the amount of resources required by the k-th class of VMs in computation and storage, respectively. Let n_k^l and n_k^g denote the numbers of local and migrated VMs of class k, respectively. Suppose that the arrivals and departures of both local and migrated VMs follow Poisson processes. The system state transition is then formulated as a continuous-time Markov process. Let n^l = (n_1^l, ..., n_k^l, ..., n_K^l) and n^g = (n_1^g, ..., n_k^g, ..., n_K^g). The system state is represented by s = (n^l, n^g). Given the arrival and departure rates of new and migrated VMs, the steady-state probability matrix Π is derived from a 2K-dimensional Markov chain model. Let R_b and R_d denote the blocking rate and the dropping rate, respectively. A new local VM is blocked if the total amount of resources required by the local VMs (including the new one) exceeds the common resources, i.e.,

$$\sum_{k=1}^{K} n_k^l c_k > C_c \quad \text{or} \quad \sum_{k=1}^{K} n_k^l m_k > M_c.$$

A migrated VM is dropped if the total amount of resources required by all VMs (including the migrated one) exceeds the total resources, i.e.,

$$\sum_{k=1}^{K} (n_k^l + n_k^g) c_k > C \quad \text{or} \quad \sum_{k=1}^{K} (n_k^l + n_k^g) m_k > M.$$

Let R_b^c denote the constraint on the blocking rate. The optimal amount of reserved resources is derived by solving the following optimization problem.

544

$$\min_{C_r, M_r} \; R_d(C_r, M_r) \quad \text{s.t.} \quad R_b(C_r, M_r) \le R_b^c \qquad (10)$$
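Problem (10) has a small two-dimensional decision space, so once R_b and R_d can be evaluated for a candidate reservation it can be solved by exhaustive search. A sketch, assuming a callable `rates(C_r, M_r) -> (R_b, R_d)` supplied by the Markov chain model or by simulation (the toy estimator in the test is purely illustrative):

```python
def optimal_reservation(rates, C: int, M: int, Rb_max: float, step: int = 1):
    """Solve problem (10) by exhaustive search over the reservation grid:
    minimise the dropping rate R_d subject to R_b <= Rb_max.

    `rates` is assumed to be any callable (C_r, M_r) -> (R_b, R_d),
    e.g. backed by the 2K-dimensional Markov chain of Section IV-B.
    Returns (R_d, C_r, M_r) for the best feasible reservation, or
    None if no reservation satisfies the blocking-rate constraint.
    """
    best = None
    for C_r in range(0, C + 1, step):
        for M_r in range(0, M + 1, step):
            R_b, R_d = rates(C_r, M_r)
            if R_b <= Rb_max and (best is None or R_d < best[0]):
                best = (R_d, C_r, M_r)
    return best
```

For coarse capacities like the 50/100 units used in the evaluation below, the grid has only a few thousand points, so even a slow Monte Carlo estimator for `rates` keeps the search tractable.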

Fig. 4 shows the performance comparison with and without resource reservation. The total resources of the roadside cloud are 50 units in computation and 100 units in storage. Two classes of VMs are considered. Class-1 VMs run computation-type applications and require 20 units of computation and 15 units of storage. Class-2 VMs run storage-type applications and require 10 units of computation and 40 units of storage. The results show that the dropping rate of migrated VMs is significantly reduced with resource reservation, which demonstrates the efficiency of the proposed mechanism in protecting VM migration. In particular, the dropping rate is decreased by over 60% at a low VM arrival rate and by nearly 20% at a high VM arrival rate.

V. CONCLUSION

The application of mobile cloud computing to traditional VANETs gives rise to the new paradigm of cloud-assisted vehicular networks. The high mobility of vehicles poses a significant challenge for providing uninterrupted cloud services in such networks. VM live migration is a preferred means to shift cloud services from a source cloud site to a target cloud site. To improve the efficiency of dirty page transfer in VM live migration, this paper proposes a selective dirty page transfer strategy, which is shown to enhance the dirty page transfer rate by 3.2 times. In addition, to reduce migration dropping, an optimal resource reservation scheme is designed, which is demonstrated to decrease the dropping rate by up to 60%.

ACKNOWLEDGEMENT

This research is partially supported by programs of the NSFC (grant no. U1035001, U1201253, 61203117), the Opening Project of the Key Lab. of Cognitive Radio and Information Processing (GUET), Ministry of Education (grant no. 2011KF06), project 217006 funded by the Research Council of Norway, the European Commission FP7 Project EVANS (grant no. 2010-269323), and the European Commission COST Actions IC0902, IC0905 and IC1004.

REFERENCES

[1] ITU Strategy and Policy Unit (SPU), ITU Internet Reports 2005: The Internet of Things, Geneva, International Telecommunication Union (ITU), 2005.
[2] Y. Liu, J. Niu, J. Ma, L. Shu, and T. Hara, "The Insights of Message Delivery Delay in VANETs with a Bidirectional Traffic Model", Journal of Network and Computer Applications, Elsevier, 2012.
[3] M. Miche and T. Bohnert, "The Internet of Vehicles or the Second Generation of Telematic Services", ERCIM News, vol. 77, pp. 43-45, 2009.
[4] R. A. Uzcategui and G. Acosta-Marum, "WAVE: A Tutorial", IEEE Communications Magazine, 47(5), pp. 126-133, May 2009.
[5] S. Xie, Y. Liu, Y. Zhang, and R. Yu, "A Parallel Cooperative Spectrum Sensing in Cognitive Radio Networks", IEEE Transactions on Vehicular Technology, vol. 59, no. 8, pp. 4079-4092, 2010.
[6] R. Yu, Y. Zhang, Y. Liu, S. Xie, L. Song, and M. Guizani, "Secondary Users Cooperation in Cognitive Radio Networks: Balancing Sensing Accuracy and Efficiency", IEEE Wireless Communications Magazine, vol. 19, no. 2, April 2012, pp. 2-9.
[7] T. Wang, L. Song, and Z. Han, "Coalitional Graph Games for Popular Content Distribution in Cognitive Radio VANETs," to appear, IEEE Transactions on Vehicular Technology.
[8] T. Wang, L. Song, Z. Han, and B. Jiao, "Popular Content Distribution in CR-VANETs with Joint Spectrum Sensing and Channel Access," to appear, IEEE Journal on Selected Areas in Communications.
[9] R. Hussain, J. Son, H. Eun, S. Kim, and H. Oh, "Rethinking Vehicular Communications: Merging VANET with Cloud Computing", in Proc. IEEE 4th International Conference on Cloud Computing Technology and Science, pp. 606-609, 2012.
[10] C. Clark, K. Fraser, S. Hand, J. G. Hansen, E. Jul, C. Limpach, I. Pratt, and A. Warfield, "Live Migration of Virtual Machines," in Proc. USENIX Symposium on Networked Systems Design and Implementation (NSDI '05), Berkeley, CA, USA, 2005, pp. 273-286.
[11] P. Barham, B. Dragovic, K. Fraser, S. Hand, T. Harris, A. Ho, R. Neugebauer, I. Pratt, and A. Warfield, "Xen and the Art of Virtualization," in Proc. 19th ACM Symposium on Operating Systems Principles, pp. 164-177, Bolton Landing, NY, USA, 2003.
[12] M. Rosenblum and T. Garfinkel, "Virtual Machine Monitors: Current Technology and Future Trends", Computer, 38(5), pp. 39-47, May 2005.
[13] M. Mishra, A. Das, P. Kulkarni, and A. Sahoo, "Dynamic Resource Management Using Virtual Machine Migrations", IEEE Communications Magazine, vol. 50, no. 9, pp. 34-40, 2012.
[14] S. K. Bose, S. Brock, R. Skeoch, and S. Rao, "CloudSpider: Combining Replication with Scheduling for Optimizing Live Migration of Virtual Machines across Wide Area Networks", in Proc. International Symposium on Cluster, Cloud and Grid Computing, pp. 13-22, 2011.
[15] K. Ye, X. Jiang, D. Huang, J. Chen, and B. Wang, "Live Migration of Multiple Virtual Machines with Resource Reservation in Cloud Computing Environments", in Proc. IEEE CLOUD, July 2011.
[16] M. Satyanarayanan, P. Bahl, R. Caceres, and N. Davies, "The Case for VM-based Cloudlets in Mobile Computing", IEEE Pervasive Computing, 8(4), 2009.
