A User Profile-Aware Policy-Based Management Framework for Greening the Cloud

2014 IEEE Fourth International Conference on Big Data and Cloud Computing

Fadi Alhaddadin, William Liu and Jairo A. Gutiérrez
School of Computer and Mathematical Sciences, Auckland University of Technology, Auckland, New Zealand
Emails: [email protected]; {william.liu, jairo.gutierrez}@aut.ac.nz

Abstract— Cloud computing technology is gaining great popularity due to the utility-oriented information services it offers worldwide. The pay-as-you-go elasticity of cloud computing allows the hosting of pervasive applications for various end-users, including consumers and those in the scientific and business domains. Although cloud technology returns great benefits and offers numerous advantages, data centers consume significant amounts of electricity to operate; hence they incur high operational costs and have a harmful impact on the environment in the form of significant carbon footprints and emissions. In this paper, we propose a User Profile-Aware Policy Switching (UPAPS) management framework that exploits and differentiates user profiles so as to achieve better power efficiency and optimized resource management while still guaranteeing the quality of service of cloud services. The proposed UPAPS framework enables more flexibility in resource management. The simulation results show significant improvements for the cloud infrastructure in terms of power efficiency and resource management while fulfilling user satisfaction in terms of service quality and cost. In addition, the proposed framework has proved its ability to effectively manage a group of heuristic algorithms for virtual machine allocation by employing new architectural components, namely a User Profile-Aware Policy Switching (UPAPS) unit and a User Service Profile (USP) unit, with consideration of both customer requirements and energy efficiency from the cloud service provider's perspective.


Keywords: Power Efficiency, Policy-Based Network Management, Energy Efficient Cloud Computing, User Profile Aware Policy Switching, Quality of Service, Policy Conflicts

I. INTRODUCTION

Cloud computing technology represents a different method for architecting and remotely managing computer resources. It facilitates a computing-as-a-service model in which computing resources are made available as a utility service [1-2]. It makes available as many resources as the user demands on a pay-as-you-go basis, which distinguishes it from traditional computing models in which enterprises have to invest significantly to implement their own IT infrastructures [3-4]. However, the spread of cloud computing has led to the establishment of large-scale data centers that comprise thousands of computing nodes and consume massive amounts of energy. Surprisingly, the main reason behind the huge power consumption for which the cloud is responsible is not only the quantity of computing resources but rather the inefficient usage of these resources. The study in [5], which collected data from more than 5000 production servers over a period of six months, showed that although servers are usually not idle, their utilization rarely approaches 100%; most of the time they operate at 10-50% of their full capacity. This leads to extra expenses for over-provisioning and thus results in a greater total cost of acquisition (TCA). Keeping servers underutilized therefore contributes greatly to inefficiency from the perspective of power consumption, which is considered a critical problem in the field of cloud computing today. In order to overcome the issue of underutilized computing resources, the virtualization approach has been widely deployed: it allows cloud providers to create multiple Virtual Machine (VM) instances on a single physical server with the goal of improving resource utilization. However, the accuracy and efficiency of cloud management is very important, as aggressive consolidation of virtual machines can lead to performance degradation and violation of the service level agreement (SLA) established between cloud providers and their customers [6]. Therefore, this paper presents a new approach that aims at reducing the power consumption in data centers without affecting the QoS, via a User Profile-Aware Policy-Based Management framework.

The remainder of this paper is organized as follows: after a general description of the energy efficiency issue of cloud computing in Section II, Section III presents the proposed framework and explains how it addresses current limitations. Section IV presents extensive case studies and results analysis, while Section V concludes the paper and discusses future work.

II. BACKGROUND

A. Related Work

Energy efficiency is considered one of the most challenging issues in the field of cloud computing. The increasing demand for cloud computing infrastructures and services has become a major environmental issue due to the amount of power they require. A recent work [7] on cloud computing has identified a need for collaboration among servers, communication networks, and power networks in order to reduce the total power consumption of the ICT equipment in a cloud computing environment. To achieve a stable collaboration among servers, communication networks and power networks, it is essential to design and employ a steady and satisfactory management scheme that assures the required collaboration within the network. In 2002, Verma conducted research on policy-based management systems and their suitability for network administration purposes [8]. Verma's study aimed to demonstrate how network management can be simplified using a policy-based management system. It also intended to identify the main framework-related issues encountered, and those that need to be considered, when developing policy-based management systems, such as the critical issue of policy conflicts. In 2005, Agrawal, Lee, and Lobo from the IBM T. J. Watson Research Center conducted a study [9] which

provided an overview of how the Policy Management for Autonomic Computing (PMAC) platform works and manages networked systems. They demonstrated the concept and the technical details of the management model on networked systems. The outcomes of the study revealed the ability of policy-based network management to reduce the burden on human administrators by providing systematic means to create, modify, distribute, and enforce policies for managed resources. Based on these existing solutions, we find it necessary to develop a new policy-based management framework that can effectively handle computing resources in order to obtain sufficient levels of energy efficiency in the cloud infrastructure and also overcome the energy-related issues of cloud computing.

B. Service Level Agreement (SLA) Metrics

The Quality of Service (QoS) is a very important matter in the fields of networking and cloud computing; satisfying and maintaining the QoS requirements is therefore an essential issue for the management of any cloud-based service. The QoS is normally delivered in the context of an agreed Service Level Agreement (SLA), which can be specified in terms of characteristics such as minimum throughput, availability, or maximum response time delivered by the system. SLA characteristics vary across applications, and for cloud computing it is necessary to define workload-independent metrics that can be used to evaluate the performance of the virtual machines. In our work, we use two major SLA metrics proposed in [6]. The first metric is the SLA violation Time per Active Host (SLATAH), the percentage of time during which active hosts have experienced a CPU utilization of 100%:

\mathrm{SLATAH} = \frac{1}{N} \sum_{i=1}^{N} \frac{T_{s_i}}{T_{a_i}}    (1)

where N is the number of hosts, T_{s_i} is the total time during which host i has experienced a utilization of 100% leading to an SLA violation, and T_{a_i} is the total time during which host i is in the active state. The second metric is the Performance Degradation due to VM Migration (PDM):

\mathrm{PDM} = \frac{1}{M} \sum_{j=1}^{M} \frac{C_{d_j}}{C_{r_j}}    (2)

where M is the number of virtual machines, C_{d_j} is the estimate of the performance degradation of virtual machine j caused by migration (estimated as 10% of the CPU utilization in MIPS during all migrations of VM j), and C_{r_j} is the total CPU capacity requested by virtual machine j during its lifetime. Since the SLATAH and PDM metrics have similar importance in characterizing the level of SLA violation by the infrastructure, they can be combined into a single SLA Violation (SLAV) metric:

\mathrm{SLAV} = \mathrm{SLATAH} \cdot \mathrm{PDM}    (3)
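To make the metrics concrete, the following is a minimal Python sketch of how SLATAH, PDM and SLAV could be computed from per-host activity times and per-VM degradation estimates. The record field names are illustrative assumptions, not taken from [6] or from CloudSim.

```python
# Minimal sketch of the SLA metrics of [6]; data structures are hypothetical.

def slatah(hosts):
    """Mean fraction of active time each host spent at 100% CPU (Eq. 1)."""
    return sum(h["t_saturated"] / h["t_active"] for h in hosts) / len(hosts)

def pdm(vms):
    """Mean performance degradation due to VM migration (Eq. 2)."""
    return sum(vm["cpu_degraded_mips"] / vm["cpu_requested_mips"] for vm in vms) / len(vms)

def slav(hosts, vms):
    """Combined SLA violation metric (Eq. 3)."""
    return slatah(hosts) * pdm(vms)

# Example with made-up numbers: two hosts and two VMs observed over one day.
hosts = [{"t_active": 86400, "t_saturated": 1200},
         {"t_active": 86400, "t_saturated": 300}]
vms = [{"cpu_requested_mips": 2660.0, "cpu_degraded_mips": 26.6},
       {"cpu_requested_mips": 1860.0, "cpu_degraded_mips": 9.3}]
print(f"SLATAH={slatah(hosts):.4f}, PDM={pdm(vms):.4f}, SLAV={slav(hosts, vms):.6f}")
```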

C. Existing Heuristic Algorithms for VM Consolidation

The study in [6] proposed a set of adaptive heuristics for energy-efficient and dynamic consolidation of VMs by analyzing historical and real-time data on VM resource usage and performance. The authors proposed five host overloading and utilization detection algorithms: the Static Threshold VM allocation policy (THR), Inter Quartile Range (IQR), Median Absolute Deviation (MAD), Local Regression (LR), and Local Regression Robust (LRR). In addition, three VM selection algorithms were developed: Minimum Migration Time (MMT), Random Selection (RS), and Maximum Correlation (MC).

Table 1 Comparison of various algorithm combinations and their performance [6]

Table 1 summarizes the characteristics of these different algorithm combinations (median values). The combinations differ in the efficiency of the consolidation algorithms in terms of SLA violations and power consumption. The dynamic algorithms NPA and DVFS perform significantly better than the other allocation policies in terms of SLA violations; however, they lead to greater power consumption than all the consolidation algorithms. According to the results in Table 1, LR-MMT-1.2 produces the best trade-off between power consumption and SLA violation, owing to its relatively small number of VM migrations, while the quality of this trade-off decreases down the ranking: LRR-MMT-1.2 is the second best combination, followed by MAD-MMT-2.5. In our work, we have selected the top three allocation policies, LR, LRR and MAD, together with one dynamic algorithm (DVFS), as sample policies to demonstrate how the proposed policy-switching framework works and to validate its effectiveness in overcoming the issue of usage inefficiency in cloud systems.

D. Issues with the Current Systems

Cloud computing allows users to pay for what they actually use based on time, storage capacity, or other usage measures agreed between the cloud provider and its users. However, the current implementation of cloud computing is not yet efficient due to the underutilization of computing resources, which causes extra energy consumption and therefore increases the operating cost of the cloud. It can also be a disadvantage for cloud clients. Clients may have very dynamic needs for different cloud services, demanding different QoS at different times. For example, a user may require high QoS during business hours due to the need for real-time voice and video communications, whereas outside business hours only the email facility is needed and the QoS has little impact on their business operation. This means that, at present, clients need to purchase the highest available QoS throughout the day even though they do not need the same QoS after business hours.

III. THE PROPOSED UPAPS FRAMEWORK

In response to the issue of power consumption caused by inefficient usage of computing resources in a cloud environment, we propose a new policy-switching framework that tackles this issue and provides an effective trade-off between energy consumption and QoS by considering users' dynamic requirements. The proposed framework involves a dynamic user profile-aware policy management approach that can flexibly manage cloud resources according to finer, granular user requirements. The main goal is to reduce the extra energy consumption caused by the inefficient usage of computing resources without aggressively impacting the QoS provided to users, while exploiting user profiles to differentiate the QoS demanded and delivered. It also aims at improving the cloud infrastructure in a business context by reducing the operating costs of the cloud infrastructure and by facilitating more flexible options for users to better manage their cloud service usage costs.

A. Policy-Based Network Management Framework

Policy-Based Network Management (PBNM) is a promising solution for managing heterogeneous network resources. It addresses the requirements for providing flexible and dynamic management, and also deals with the escalating size and complexity of modern networked systems. Figure 1 below illustrates the PBNM architecture that we apply in our framework.

Figure 1 PBNM framework architecture

As shown in Figure 1, the PBNM framework consists of four main components, namely: policy management tools, a policy repository, policy decision points, and policy enforcement points. Policy management tools are used by the administrator to define the policies to be enforced within the network. In our system model, the policy management tool is also accessible to users, to some extent, so that they can configure their dynamic service profiles.

B. User-Profile-based Differentiated Service Architecture

In order to implement the above framework, we propose a new architectural component called the User Service Profile (USP), a database that contains instructions a user can choose for managing their profile. User Service Profiles (USPs) refer to the sequence of policies to be used according to time, workload, or other metrics, decided by users within the available options offered by the cloud provider. For example, a user can decide to purchase Service A during business hours (from 8am to 5pm) and switch to Service B for the rest of the day; these requirements are coded into the system by users via a user interface supplied by the cloud service provider.

The policy repository component stores all policies produced by the policy management tool in the system. All USPs are also stored in the policy repository. The cloud system deals with each user according to their profile: each rule in a USP has a policy code in the repository to comply with. For example, at 5pm the system automatically invokes a policy from the repository, to be enforced by the policy enforcement point, that switches the user from Service A to Service B as in the previous example, and at 8am another policy switches the user back from Service B to Service A. Figure 2 shows an example of USP specifications.

Figure 2 Examples of User Service Profiles

To enable the policy-switching functionality, we also propose the Policy Switching Based on Time (PSBT) algorithm presented in Figure 3. The metric used in the proposed algorithm is time; however, the same algorithm can be adapted to other metrics such as workload or utilization. The scheduler invokes the method that lists all USPs. In steps 2 to 4, the scheduler sets the parameters to be used as metrics. The key parameters in the proposed PSBT are the start time and the end time: the start time indicates the beginning of a particular time window for a particular service scheme according to the USP of each cloud user, while the end time indicates when that time window finishes.
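Since Figures 2 and 3 are not reproduced in this text, the following is a minimal Python sketch of how a USP could be represented as a list of time-window rules and how a PSBT-style scheduler could select the policy to enforce at a given time. All class, field, and policy names are illustrative assumptions, not part of the published framework.

```python
from dataclasses import dataclass
from datetime import time

@dataclass
class UspRule:
    """One rule of a User Service Profile: a service scheme bound to a time window."""
    start: time          # start of the time window
    end: time            # end of the time window
    policy_code: str     # policy in the repository to enforce for this window

# Hypothetical USP matching the example in the text: Service A 8am-5pm, Service B otherwise.
usp = [
    UspRule(start=time(8, 0), end=time(17, 0), policy_code="SERVICE_A_DVFS"),
    UspRule(start=time(17, 0), end=time(8, 0), policy_code="SERVICE_B_LR_MMT_1.2"),
]

def psbt_select(usp_rules, now):
    """PSBT-style selection: return the policy whose time window contains 'now'."""
    for rule in usp_rules:
        if rule.start <= rule.end:                 # window within a single day
            active = rule.start <= now < rule.end
        else:                                      # window wrapping past midnight
            active = now >= rule.start or now < rule.end
        if active:
            return rule.policy_code
    return None

print(psbt_select(usp, time(9, 30)))   # -> SERVICE_A_DVFS
print(psbt_select(usp, time(22, 0)))   # -> SERVICE_B_LR_MMT_1.2
```

In a full system the selected policy code would be handed to the policy enforcement point rather than printed.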

Figure 3 Policy Switching Based on Time (PSBT)

C. An example of the proposed framework

Here we illustrate an example of a cloud service provider that has two major clients, a university user and a bank user, as shown in Figure 4 below. The cloud provider has two architectural units that are part of the policy-based management framework, namely a policy management tool

and a repository; both are connected to the main policy decision point (PDP), which belongs to the cloud provider's system. This is also called the mother PDP, and its main role is to decide which policies are to be invoked from the policy repository and enforced in further steps. Each client, in order to manage its own network, has an internal PDP acting as a gateway to the cloud service provider. The client's main PDP is connected to the mother PDP in order to communicate with the cloud system and receive policies to be further distributed to the internal PDPs. In the example, the bank receives policies from the mother PDP through its local main PDP, and the main local PDP in the bank has another policy repository, created internally by the bank, to store policies on the local system. The main local PDP forwards policies to another PDP within the local system that is responsible for distributing these policies to users according to their USPs. Each user has an assigned Policy Enforcement Point (PEP) and receives resources according to the policies. A user here can be a local network or a set of computers connected within one network, which could, for example, form part of a particular branch of the bank. The policies enforced on each user are decided by the PDP according to the user's USP.
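A compact sketch of this policy distribution chain, with entirely hypothetical class and policy names, might look as follows; it only illustrates the mother PDP / local PDP / PEP hierarchy described above.

```python
# Hypothetical sketch: mother PDP (provider) -> client's main PDP -> internal PDP -> per-user PEP.

class PEP:
    def __init__(self, user):
        self.user = user

    def enforce(self, policy):
        print(f"PEP[{self.user}]: enforcing policy '{policy}'")

class PDP:
    def __init__(self, name, repository=None, parent=None):
        self.name, self.repository, self.parent = name, repository or {}, parent

    def decide(self, policy_code):
        # Use the local repository if the policy is stored there, otherwise ask the parent PDP.
        if policy_code in self.repository:
            return self.repository[policy_code]
        return self.parent.decide(policy_code) if self.parent else None

mother = PDP("cloud-provider", repository={"SERVICE_A": "DVFS", "SERVICE_B": "LR_MMT_1.2"})
bank_main = PDP("bank-main", parent=mother)            # client's gateway PDP
bank_branch = PDP("bank-branch-1", parent=bank_main)   # internal distribution PDP

pep = PEP("branch-1-network")
pep.enforce(bank_branch.decide("SERVICE_A"))  # -> PEP[branch-1-network]: enforcing policy 'DVFS'
```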

Figure 4 Example of the proposed system architecture

As seen in Figure 4, each user has a PEP and a USP in order to be part of the system. Each device runs an instance of the PEP and its corresponding USP. The PEP reads instructions from the USP and communicates with the PDP accordingly. The PDP retrieves the relevant policies from the policy repository and responds to the PEP, supplying it with the policies to be enforced for that particular user. To illustrate how the proposed framework can handle different clients with different requirements, and how the policy system can be scaled flexibly, we create a second client, the university network, which is configured with two PDPs beneath the main local PDP to reflect the reality that a university usually has more than one operating environment or platform running.

IV. CASE STUDIES

A. Simulation Setup

We have chosen CloudSim [10] for the simulation studies; the toolkit is widely used by the cloud research community, permits the modeling of virtualized environments, and supports on-demand resource provisioning and management. Similar to the parameters used in [6], the simulations run a data center comprising 800 heterogeneous physical nodes, half of them HP ProLiant ML110 G4 servers and the other half HP ProLiant ML110 G5 servers. The CPU frequency of each server is mapped onto MIPS ratings: 1860 MIPS per core for the HP ProLiant ML110 G4 servers and 2660 MIPS per core for the HP ProLiant ML110 G5 servers. The network bandwidth of each server is 1 GB/s.
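As a compact summary of this setup (not actual CloudSim code), the host configuration used in the simulations can be captured as follows; all values are taken from the text above.

```python
# Summary of the simulated data center described above; not CloudSim API calls.
HOST_TYPES = {
    "HP ProLiant ML110 G4": {"count": 400, "mips_per_core": 1860, "bandwidth_gb_per_s": 1},
    "HP ProLiant ML110 G5": {"count": 400, "mips_per_core": 2660, "bandwidth_gb_per_s": 1},
}
assert sum(h["count"] for h in HOST_TYPES.values()) == 800  # 800 heterogeneous physical nodes
```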

B. Single Policy Model

The first set of simulation runs involved a cloud scenario using a single heuristic algorithm aimed at reducing the power consumed by the data center. Three heuristic algorithms and one dynamic algorithm were involved, and each algorithm was run for one full simulated day. The results showed that the dynamic algorithm (DVFS) outperformed the heuristic algorithms in terms of the SLAV metric. On the other hand, the energy consumption of DVFS over one full simulated day was 803.91 kWh, which is much higher than the energy consumed by the other heuristic algorithms.

Figure 5 Results of simulating three heuristic algorithms

The results indicated zero SLA violations caused by DVFS, while the heuristic algorithms produced violations due to their aggressive VM consolidation. Figure 5 shows the performance of the other three heuristic algorithms in terms of energy consumption and SLAV. As seen in Figure 5, the power consumption varied among the algorithms: LR_MMT_1.2 produced the highest energy consumption, 116.71 kWh, followed by LRR_MMT_1.2 with 116.48 kWh, and finally MAD_MMT_1.2 with 114.27 kWh. The efficiency of each heuristic algorithm does not rely only on power consumption but also on the SLAV metric, since the reduction in energy consumption comes from the consolidation the algorithms perform on the virtual machines. From Figure 5 it can be noticed that there is an inverse relationship between power consumption and the SLAV metric, which is explained by the aggressive consolidation of virtual machines that lowers energy consumption. For instance, MAD_MMT_2.5, with an SLAV of 0.524%, appears to produce the best results in terms of energy consumption, while in fact the reason behind this energy efficiency is the high SLA violation caused by aggressive consolidation. Therefore, none of these algorithms can be an effective solution when used on its own, due to the SLAV each algorithm causes. Moreover, although the DVFS algorithm does not cause any violation of the SLA, the energy consumption it incurs is relatively high. Applying the DVFS algorithm on its own would also put cloud users in the position of paying for the highest QoS all the time, even when they do not actually need it.

C. Multiple Policies Model

In this simulation study, we propose a cloud scenario in which a cloud provider offers four service schemes to users, each priced differently according to its QoS level. Service A has the highest QoS due to its use of the DVFS policy. Service B has the second highest QoS and uses the LR_MMT_1.2 policy. Service C, with lower QoS, is generated using LRR_MMT_1.2, and finally Service D, with the lowest QoS, is implemented by the MAD_MMT_2.5 policy. Table 2 below presents the assumed price model and the associated price per hour for each scheme.

Table 2 Price plan for service schemes
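Since Table 2 itself is not reproduced in this text, the hourly prices that can be recovered from the cost calculations later in this section are summarized below; the price of Service C does not appear in the text and is left unspecified.

```python
# Hourly prices recoverable from the cost calculations later in the paper (NZ$ per hour).
SERVICE_SCHEMES = {
    "A": {"policy": "DVFS",        "price_nzd_per_hour": 0.34},
    "B": {"policy": "LR_MMT_1.2",  "price_nzd_per_hour": 0.27},
    "C": {"policy": "LRR_MMT_1.2", "price_nzd_per_hour": None},  # not stated in the text
    "D": {"policy": "MAD_MMT_2.5", "price_nzd_per_hour": 0.13},
}
```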

The scenario represents a university department manager who requires cloud services for the department's offices. He decides to reduce ICT costs by choosing service schemes according to the department's real needs. The service schemes chosen by the manager are not fully dependent on the opening hours of the department, but rather on working hours and workload. The requirements include three service schemes, scheduled according to both working hours and workload. The department requires the highest available QoS during business hours, due to the need for fast internet connections for the voice and video communications that usually take place during working hours. After business hours, given the nature of university work, there is always a possibility that some staff are working overtime to finish tasks when needed, which means that the workload of the department network is unpredictable outside business hours. For that case, service schemes are scheduled according to the number of running computers in the department: when more than 3 computers are running, the workload is considered high; otherwise it is low.

Current system model

According to the current system model, to satisfy the user's requirements, service scheme "A" has to be selected during business hours and also after business hours whenever the workload is high. Its associated DVFS policy is therefore used, which consumes energy at a rate of 803.91 kW. The total energy consumed in a whole day is 803.91 kW × 24 hours = 19293.84 kWh, and the associated cost is 0.34 NZ$ per hour × 24 hours = 8.16 NZ$. However, the current system does not support such dynamic requirements, because only one service scheme can be provided throughout the day, and therefore no flexibility of requirements can be handled.

The Proposed UPAPS system

To satisfy the client's requirements under the proposed framework, these requirements can be defined in the client's USP as a set of policies for switching. This set of policies plays a key role in scheduling the service schemes provided to the client. According to the requirements of the department, the service scheme required during business hours is scheme "A", while after business hours a set of policies is needed that can automatically handle the scheduling and switching processes without violating the required QoS. This can be done by adopting the policy-based management framework: policies are switched from one to another depending on the workload predefined by the client.

The following presents the set of policies included in the USP predefined by the department manager, where N is the number of PCs running in a particular time window:
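The original policy listing is not reproduced in this text; the following is a plausible reconstruction, inferred from the requirements above and from the derived scenario later in this section (scheme A when three or more PCs are running after hours, scheme B when at least one is running, scheme D otherwise). The thresholds are assumptions, not the authors' exact rules.

```python
from datetime import time

def department_policy(now, n_running_pcs):
    """Hypothetical USP rules for the university department (thresholds assumed)."""
    business_hours = time(8, 0) <= now < time(17, 0)
    if business_hours:
        return "A"                 # highest QoS during business hours
    if n_running_pcs >= 3:
        return "A"                 # high after-hours workload
    if n_running_pcs >= 1:
        return "B"                 # low after-hours workload
    return "D"                     # department network idle

print(department_policy(time(18, 30), 3))  # -> A (overtime by three staff)
print(department_policy(time(20, 30), 1))  # -> B (manager working alone)
print(department_policy(time(23, 0), 0))   # -> D (no computers running)
```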

Figure 6 below presents an example of the client's USP in which the set of policies above is included in order to satisfy the requirements of the user.

Figure 6 USP of the University Department Case

Policy Conflicts and Resolutions

Policy conflicts occur when the conditions of two or more policies in the system are satisfied at the same time. The PBNM framework sometimes encounters policy conflicts, which can be a barrier to the adoption of such a management framework. In this scenario, policies were set up and designed to be enforced according to two metrics, time and workload, and therefore no policy conflicts occur. However, for larger-scale configurations there is always a potential for policy conflicts, due to the need to satisfy more requirements and the demand for a greater number of metrics in various applications. For example, assume that the policy set in the university scenario was designed differently, in a way that service scheme "B" was to be provided outside business hours unless there is workload on the network, with other service schemes provided according to the size of the workload on the network. In this case, in order to overcome the issue of policy conflicts, we need to define a unique priority value for each policy: for example, service "A" has priority "3", service "B" has priority "2", and service "D" has priority "1". Therefore, outside business hours service "D" should be provided, but when the workload starts to increase, the policy activated is the one associated with the service that has the higher priority value. Prioritizing policies in the policy-based management framework helps to avoid policy conflicts and therefore to obtain the best combination of policies that such a management framework can offer.
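A minimal sketch of this priority-based resolution, assuming each candidate policy carries the priority values given above, could look as follows.

```python
# Priority-based conflict resolution: among policies whose conditions are all satisfied,
# enforce the one with the highest priority (values as assumed in the example above).
PRIORITIES = {"A": 3, "B": 2, "D": 1}

def resolve(candidate_schemes):
    """Return the service scheme to enforce when several policies match simultaneously."""
    if not candidate_schemes:
        return None
    return max(candidate_schemes, key=lambda s: PRIORITIES[s])

# Outside business hours both the default rule (D) and a workload rule (B) may match:
print(resolve(["D", "B"]))   # -> B, the higher-priority policy wins
```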

In our case studies, we further create a scenario derived from the university case to validate the proposed system in terms of energy consumption and costs. We assume the following: from 5pm to 8pm, three computers were being used by three staff members performing overtime work; from 8pm to 9pm, the manager was doing some work in the department using one computer; after 9pm, the department network was not occupied and none of the office computers were running. Under this scenario, the after-hours workload of the department can be summarized as follows:

• 3 hours of service scheme "A"
• 1 hour of service scheme "B"
• 11 hours of service scheme "D"

Then the total energy consumption for each of the above time windows is calculated as follows:

Service A: 803.91 kW × 3 hours = 2411.73 kWh
Service B: 116.71 kW × 1 hour = 116.71 kWh
Service D: 55.08 kW × 11 hours = 605.88 kWh

Therefore, the total energy consumed after working hours is 2411.73 + 116.71 + 605.88 = 3134.32 kWh, rather than the previous 12058.65 kWh (i.e., 803.91 kW × 15 hours with the single scheme "A"); the energy saving is about 74% compared with the single-policy case. In addition, the total cost of this scenario is calculated as follows:

Service A: 0.34 NZ$ per hour × 3 hours = 1.02 NZ$
Service B: 0.27 NZ$ per hour × 1 hour = 0.27 NZ$
Service D: 0.13 NZ$ per hour × 11 hours = 1.43 NZ$

Therefore, the total cost of the cloud service after working hours is 1.02 + 0.27 + 1.43 = 2.72 NZ$, rather than the previous 5.10 NZ$ (0.34 NZ$ per hour × 15 hours); the cost saving is about 47% compared with the single-policy case.
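The arithmetic behind these savings can be checked with a short script; the per-hour energy figures are those used above (the 55.08 kW rate for Service D is taken directly from the text).

```python
# Recompute the after-hours energy and cost savings reported above.
hours  = {"A": 3, "B": 1, "D": 11}                      # 5pm-8am split by scheme
energy = {"A": 803.91, "B": 116.71, "D": 55.08}         # kW per scheme (from the text)
price  = {"A": 0.34,   "B": 0.27,   "D": 0.13}          # NZ$ per hour per scheme

upaps_energy = sum(energy[s] * hours[s] for s in hours)  # 3134.32 kWh
upaps_cost   = sum(price[s]  * hours[s] for s in hours)  # 2.72 NZ$
single_energy = 803.91 * 15                              # 12058.65 kWh (scheme A for all 15 h)
single_cost   = 0.34 * 15                                # 5.10 NZ$

print(f"energy saving: {1 - upaps_energy / single_energy:.0%}")  # ~74%
print(f"cost saving:   {1 - upaps_cost / single_cost:.0%}")      # ~47%
```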

It can be seen that there is a significant reduction in both energy consumption (74%) on the cloud service provider side and service cost (47%) on the users' side when comparing these results with the single-policy scenario. This preliminary case study confirms the effectiveness of the proposed framework in terms of energy efficiency and user satisfaction, making it a win-win solution to the problem of underutilization of cloud computing resources.

V. CONCLUSIONS AND FUTURE WORK

The contribution of this work is to advance cloud computing systems in two ways. Firstly, it recognizes that Policy-Based Management techniques can be used to reduce a cloud system's energy consumption, thus contributing to the development of a strong and competitive cloud services industry built on more sustainable designs. Secondly, it allows additional benefits to be realized from the pay-as-you-go elasticity of cloud computing, because under the proposed framework users can design their own service schemes and pay for exactly what they need. The simulation results have shown that the adoption of a policy-based management framework works effectively towards reducing the energy consumption of cloud systems. The results also showed that the policy-switching approach using the proposed PSBT algorithm works efficiently for both cloud service providers and cloud users. In future work, we aim to enhance the user profile-aware policy-based management framework to handle more complex cases with increasing demands and finer-grained user needs.

VI. REFERENCES

[1] T. B. Winans and J. S. Brown, "Cloud Computing: A Collection of Working Papers," Deloitte Consulting LLP, New York, pp. 1-27.
[2] E. A. Fischer and P. M. Figliola, "Overview and Issues for Implementation of the Federal Cloud Computing Initiative: Implications for Federal Information Technology Reform Management," Congressional Research Service, 2013.
[3] P. Mell and T. Grance, "The NIST Definition of Cloud Computing: Recommendations of the National Institute of Standards and Technology," 2011.
[4] M. Mishra, A. Das, P. Kulkarni and A. Sahoo, "Dynamic Resource Management Using Virtual Machine Migrations," in Cloud Computing: Networking and Communication Challenges, 2012.
[5] L. A. Barroso and U. Hölzle, "The Case for Energy-Proportional Computing," Computer, pp. 33-37, 2007.
[6] A. Beloglazov and R. Buyya, "Optimal Online Deterministic Algorithms and Adaptive Heuristics for Energy and Performance Efficient Dynamic Consolidation of Virtual Machines in Cloud Data Centers," Concurrency and Computation: Practice and Experience, pp. 1-24, 2011.
[7] S.-i. Kuribayashi, "Reducing Total Power Consumption Method in Cloud Computing Environments," International Journal of Computer Networks & Communications, pp. 69-84, 2012.
[8] D. C. Verma, "Simplifying Network Administration Using Policy-Based Management," IEEE Network, pp. 20-26, 2002.
[9] D. Agrawal, K.-W. Lee and J. Lobo, "Policy-Based Management of Networked Computing Systems," IEEE Communications Magazine, pp. 69-75, 2005.
[10] R. N. Calheiros, R. Ranjan, A. Beloglazov, C. A. F. De Rose and R. Buyya, "CloudSim: A Toolkit for Modeling and Simulation of Cloud Computing Environments and Evaluation of Resource Provisioning Algorithms," Software: Practice and Experience, vol. 41, no. 1, pp. 23-50, January 2011.