2014 Fourth International Conference on Advanced Computing & Communication Technologies
CLOUD COMPUTING: A NEW AGE OF COMPUTING

Gurjeet Singh Gill, Student, ITM University, Gurgaon-122017 ([email protected])
Aditya Wadhwa, Student, ITM University, Gurgaon-122017 ([email protected])
Aman Jatain, Assistant Professor, ITM University, Gurgaon-122017 ([email protected])
ABSTRACT
The demand for computing and the Internet has been increasing since the day of their invention. The continuous growth in the need for computing infrastructure and Internet frameworks has created a large demand for computing experts, and in the present day large hardware deployments accompany the software that runs on them. Although there has been significant progress in reducing the size of hardware and increasing the capability of software (Moore's Law), on a global scale the need for large hardware installations, and for software available as an easily consumed service, has yet to be met. The concept of cloud computing was introduced as a solution to this problem.

1. INTRODUCTION
Internet services now connect almost every part of industry. Software and applications, together with the hardware they require, are a need of the growing industry, and the emergence of new companies every year increases the demand for new levels of technology. There are 2.07 billion Internet users today [3], and handling every customer's requests takes time and resources. The increasing need for software has put pressure on producing better hardware. Cloud computing helps by integrating systems to provide services: it makes resources inter-accessible, and it helps industry and the Internet adapt to each other by giving access to the available resources with minimal hardware installation. Cloud computing lets us view the system as "give me what I require", i.e. asking the cloud for services as they are needed, whether distant or immediate, without installing large hardware platforms. Moreover, the cloud provides an interconnected environment for sharing and accessing the required resources and services. The question, then, is how cloud computing can prove to be of great significance to our world.

2. BEGINNING OF CONCEPT
The concept began in the 1950s, when schools and industry started using large-scale mainframes. Mainframes were installed in what was called a "server room", and multiple users accessed them through dumb terminals (used only for access). Mainframes were costly to buy and maintain, so dedicating one to a single user was not economical, but the shared, multi-user environment made them viable. In the 1970s, IBM took the initiative and introduced what it called "virtual machines", which allowed an installed mainframe to host several virtual environments on a single node, taking the existing mainframe environment to a further level by allowing several computing environments to work in sync [2]. In the 1990s, telecommunication companies offered virtual private networks (VPNs): point-to-point connections with quality of service at nominal cost. The cloud symbol was used to refer to the connection between the provider and the system. The increasing popularity of computer systems led researchers to think about time-sharing systems, which could make more optimal use of applications, platforms and infrastructure. In the 2000s, Amazon adopted cloud computing and began offering it to customers as utility computing through Amazon Web Services (AWS) in 2006. In 2008, Eucalyptus appeared as the first open-source platform for deploying private clouds; the same year, OpenNebula, developed for a European research project, became the first open-source software for deploying private and hybrid clouds. In 2011 IBM introduced the IBM SmartCloud framework, and new offerings continue to appear to this day [3].

Figure 2.1: Overview of Cloud

3. METHODOLOGY
This review surveyed the available literature using a systematic approach. We analysed the major research databases for computer science (such as Science Direct), covering the work done so far mainly by Google and IBM, using the search terms: cloud, elastic computing, IaaS (Infrastructure as a Service), PaaS (Platform as a Service), SaaS (Software as a Service) and XaaS (Everything as a Service). The search was limited to the period from 2005 to 2009, since the clouds studied were all launched after October 2005.
Among the companies that launched cloud offerings publicly, Amazon was first with EC2 (Elastic Compute Cloud) in August 2006, and Google launched its App Engine in April 2008. According to Google Trends, the term "cloud" started to become popular in 2007, as illustrated in Figure 3.1.

Figure 3.1: Graph depicting the popularity of "cloud", year by year

The searches of the five target databases returned a large number of papers. The abstracts and titles of these papers were read, and for consistency it was decided to use only peer-reviewed papers for the review. A limited number of non-peer-reviewed publications were also included, i.e. well-cited definitions or workshop abstracts discussing the challenges encountered during research, where they were relevant but not matched by comparable peer-reviewed work. Furthermore, papers with unrelated titles or summaries, and those that focused mainly on e-science and High Performance Computing, were left out of the review, as these areas are not within its core focus. The references of some highly cited papers were checked, but no additional papers were found to be important enough to add under the criteria above. This resulted in a total of more than 20 publications being selected for review. The papers were divided into three categories based on their core focus:

1) General introductions,
2) Technological aspects of cloud computing, and
3) Organizational aspects.

The third category is discussed elsewhere. Papers providing general introductions to cloud computing are referenced throughout this paper. The technological category was further divided into papers dealing with lessons from related technologies; protocols, standards and interfaces; techniques for modelling; new use-cases arising through cloud computing; and building clouds. Table 3.1 provides an overview of the papers reviewed and their categories; as it shows, the majority of the papers were published in 2009.

Table 3.1: Overview of the papers reviewed (the majority were published in 2009)

4. CLOUD INFRASTRUCTURE
Cloud infrastructure is a combination of several pieces of hardware in a joint environment, with accessibility to each connected device. The following are the basic constituents of a cloud:

1) Database: A combination of servers and software that keeps the database healthy and intact while data is stored and manipulated. It keeps records of the incoming and serviced clients using the system, and multiple copies are maintained for consistency of user data.
2) Servers: Servers link the host and the customer and keep the system intact; they maintain the connectivity and data flow for incoming requests.
3) Devices: Laptops, smartphones, printers and other computing devices access the cloud, completing the overall connectivity.
Figure 4.1: Types of Services

1) Software as a Service (SaaS): The software is made available through a web browser, and the user accesses it over the web. At this level of the cloud, users have no control over the infrastructure or the host computer. Google Docs is an effective example of SaaS.
2) Platform as a Service (PaaS): This is the application-development level, at which developers use the programming languages and tools supported by the PaaS provider. PaaS operates at a higher level of abstraction, so the developer does not have to care about the infrastructure. Google App Engine and Microsoft Azure are examples of PaaS.
3) Infrastructure as a Service (IaaS): This is a low level of abstraction that allows users to access the infrastructure through virtual machines, with resources such as storage, memory and processing power available for executing tasks. It is the most flexible level of service, allowing users to run their own programs on the operating system; cost and maintenance are further advantages.

4.1 Types of clouds:
1) Private cloud: A cloud that is the possession of a single organisation or person; it is operated by the owner or by a third party.
2) Public cloud: A cloud that can be used by the general public; public clouds require large investment because they are owned by large corporations.
3) Community cloud: A cloud shared by several organisations, with properties that satisfy the needs of all of them.
4) Hybrid cloud: A combination of the clouds given above. The constituent clouds can be managed individually, but data and applications pass through the hybrid cloud. Bursting can also take place in a hybrid cloud, which may allow a private cloud to spill over into a public one.

Figure 4.2: Types of Cloud [5]
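To make the IaaS model concrete, the following is a minimal sketch of on-demand self-service provisioning against Amazon EC2 using the boto3 Python SDK (a present-day SDK, not one discussed in this paper); the region, image id and instance type are illustrative placeholders, not details taken from the text.

# Minimal IaaS sketch: acquire a virtual machine on demand, then release it.
# Assumes AWS credentials are configured; the ImageId below is hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-12345678",   # placeholder machine image
    InstanceType="t2.micro",  # small instance size
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("Provisioned:", instance_id)

# Pay-per-use: release the resource once the task is finished.
ec2.terminate_instances(InstanceIds=[instance_id])

At this level the user controls the operating system and everything above it, but not the physical infrastructure, which matches the division of responsibility described above.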
5. DEFINITIONS
There has been much discussion in industry about what cloud computing actually means. The term seems to come from computer-network diagrams that show the Internet as a cloud. Most of the major IT companies and market-research firms, such as IBM, Gartner, Forrester Research and Sun Microsystems, have published whitepapers that attempt to define the term. These discussions are now converging, and a common definition is starting to emerge. The US NIST (National Institute of Standards and Technology) has developed a definition that widely covers the commonly agreed concepts of cloud computing. The US-NIST working definition summarises cloud computing as:
"a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g. networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction"

The NIST definition is among the clearest and most comprehensive definitions of cloud computing and is widely referenced in US government documents and projects. It describes cloud computing as having five essential characteristics, three service models, and four deployment models. The essential characteristics are:

Broad network access: the resources mentioned previously can be reached over a network from multiple kinds of devices, such as laptops, mobile phones or tablets.
On-demand self-service: computing resources, including processing power, storage and virtual machines, can be acquired and used at any time without human interaction with the cloud service provider.
Rapid elasticity: a user can quickly acquire more resources from the cloud by scaling out, and can scale back in by releasing those resources once they are no longer required.
Measured service: resource usage is metered using appropriate metrics, such as storage usage, CPU hours or bandwidth usage.
Resource pooling: cloud service providers pool their resources, which are then shared by multiple users. This is referred to as multi-tenancy, where, for example, a physical server may host several virtual machines belonging to different users.

Definitions seen in other research papers: Vaquero et al. [6] and Youseff et al. [7] concur with the NIST definition to a significant extent. For example, Vaquero et al. studied 22 definitions of cloud computing and proposed the following definition: "Clouds are a large pool of easily usable and accessible virtualized resources (such as hardware, development platforms and/or services). These resources can be dynamically re-configured to adjust to a variable load (scale), allowing also for an optimum resource utilization. This pool of resources is typically exploited by a pay-per-use model in which guarantees are offered by the Infrastructure Provider by means of customized SLAs."
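The "rapid elasticity" and "measured service" characteristics above, like the variable-load scaling in Vaquero et al.'s definition, can be summarised in a short, provider-agnostic sketch; the thresholds and the stubbed monitoring call are our assumptions, not part of either definition.

# Sketch of metered usage driving elastic scaling; thresholds are illustrative.
def average_utilisation(instances):
    # Measured service: monitoring reports a load figure per instance (stub).
    return sum(i["cpu"] for i in instances) / len(instances)

def autoscale(instances, provision, release,
              scale_out_above=0.80, scale_in_below=0.30):
    load = average_utilisation(instances)
    if load > scale_out_above:
        instances.append(provision())      # acquire more resources on demand
    elif load < scale_in_below and len(instances) > 1:
        release(instances.pop())           # release what is no longer required
    return instances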
6. RELATED TECHNOLOGIES THAT AFFECT IT
The remainder of this paper reviews research on the technological aspects of cloud computing. It starts with lessons to be learnt from related fields of research. Next, standards and interfaces in cloud computing, as well as interoperability between different cloud systems, are explained. Then, techniques for designing and building clouds are summarised, including advances in management software, hardware provisioning, and simulators developed to evaluate design decisions and cloud-management choices. This is rounded off by presenting new use-cases that have become possible through cloud computing.

Voas and Zhang [8] identify cloud computing as the next computing paradigm, following on from mainframes, PCs, networked computing, the Internet and grid computing. These developments are likely to have effects as profound as the move from mainframes to PCs had on the ways in which software is developed and deployed. One of the reasons that prevented grid computing from being widely used was the lack of virtualization, which made jobs dependent on the underlying infrastructure; this gradually resulted in unnecessary complexity that hindered wider adoption [9].

Ian Foster, one of the experts of grid computing, compared cloud computing with grid computing and concluded that although the details and technologies of the two differ, their vision is essentially the same [10]: to provide computing as a utility in the same way that other public utilities, such as gas and electricity, are provided. In fact, the dream of utility computing has been around since the 1960s, advocated by the likes of John McCarthy and Douglas Parkhill. For illustration, the influential mainframe operating system Multics had a number of design goals that are remarkably similar to the aims of current cloud-computing providers: remote terminal access, continuous operation (initially inspired by electricity and telephone services), reliable file systems that users trust to store their only copy of files, scalability, information-sharing controls, and the ability to support different programming environments [11].

Foster et al. [10] compared and contrasted cloud computing with grid computing. They regard cloud computing as an evolved version of grid computing, in that it answers the new requirements of today's time: the existence of low-cost virtualization and the expense of running clusters. IT has evolved greatly in the 15 years since grid computing was invented, and at present it operates on a much larger scale that enables fundamentally different approaches. Foster et al. see similarities between the two concepts in their vision and architecture; they see a relation between the concepts in the programming model ("MapReduce is only yet another parallel programming model") and the application model (though clouds are not appropriate for HPC applications that require special interconnects for efficient multi-core scaling); and they explain fundamental differences in the business model, security, resource management, and abstractions. Foster et al. find that in many of these fields there is scope for both the cloud and grid research communities to learn from each other's findings, and they highlight the need for open protocols in the cloud, something grid computing adopted in its early days. Finally, Foster et al. believe that neither the electric nor the computing grid of the future will look like the traditional electric grid; instead, for both grids they see a mix of micro-production (alternative energy or grid computing) and large utilities (large power plants or data centres).

It is therefore unsurprising that many people compare cloud computing to mainframe computing. It should be noted, however, that although many of the ideas are common to both, the user experience of cloud computing is the opposite of mainframe computing: mainframe computing limited people's freedom by restricting them to a very rigid environment, whereas cloud computing expands their freedom by giving them self-service access to a variety of services and resources.
Turning to market-oriented cloud computing: in follow-on work from their market-oriented grid computing and market-oriented utility computing papers, Buyya et al. [12] describe their research on market-oriented resource allocation and their Aneka resource broker. When resources are scarce, not all service requests are of equal importance, and a resource broker can regulate the supply and demand of resources at a market equilibrium. For illustration, a batch job might preferably be processed when the resource price is low, whereas a critical live service request would need to be processed at any price. Aneka, commercialised through Manjrasoft, is a service broker that mediates between consumers and providers by buying capacity from the providers and subleasing it to the consumers. However, such resource trading requires the availability of ubiquitous cloud platforms with limited resources, and it stands in contrast to the desire for simple pricing models.

Since cloud computing delivers IT as a service, cloud researchers can also learn from service-oriented architecture (SOA). In fact, the very first paper to introduce PaaS [13] described it as an artefact of joining infrastructure provisioning with the principles of SaaS and SOA. Since then, little academic work has been published in the field of PaaS, so our understanding of it to date comes from developments in industry, in particular from the two major vendors, Google App Engine and Force.com. Sedayao [14] built a monitoring tool using SOA services and principles, and describes the experience of building a robust distributed application out of unreliable parts, along with the implications for cloud computing. As a design goal for distributed-computing scenarios such as cloud computing, Sedayao proposes that services should behave like "indistinguishable routers in a network", and that any service using other cloud services needs to verify input and have hold-down periods before determining that a service is down [14]. Zhang and Zhou [15] analyse the convergence of SOA and virtualisation for cloud computing, present seven architectural principles, and derive ten interconnected architectural modules; these build the foundation for their IBM cloud usage model, proposed as the Cloud Computing Open Architecture (CCOA). Vouk [9] describes cloud computing from a SOA perspective and presents the Virtual Computing Laboratory (VCL) as an implementation of a cloud. VCL is an "open source implementation of a secure production-level on-demand utility and service oriented technology for wide-area access to solutions based on virtualised resources, including computational, storage and software resources" [9]; in this respect, VCL can be categorised as an IaaS-layer service.
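Sedayao's "hold-down period" suggestion above can be sketched as follows; the class name and the 60-second default are our illustrative choices, not values from [14].

import time

class ServiceProbe:
    # Track failures of a dependent service and only declare it down after
    # they have persisted for a hold-down period, tolerating transient faults.
    def __init__(self, health_check, hold_down_seconds=60):
        self.health_check = health_check   # callable: True if service healthy
        self.hold_down = hold_down_seconds
        self.first_failure = None

    def is_down(self):
        if self.health_check():
            self.first_failure = None      # healthy again: reset the timer
            return False
        if self.first_failure is None:
            self.first_failure = time.time()
        return time.time() - self.first_failure >= self.hold_down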
Napper and Bientinesi [16] performed an experiment comparing the potential performance of Amazon's cloud with that of the most powerful, purpose-built high-performance computers (HPC) in the Top500 list, solving scientific calculations with the LINPACK benchmark. They found that the performance of individual nodes in the cloud is similar to that of HPC nodes, but that there is a severe loss in performance when using multiple nodes, even though the benchmark was expected to scale linearly. Comparing AMD and Intel, the AMD instances scaled significantly better than the Intel instances, but the cost of the computations was equivalent for both types. As the achieved performance decreased exponentially in the cloud and only linearly on HPC systems (restated as an efficiency formula at the end of this section), Napper and Bientinesi [16] conclude that, despite the vast availability of resources in cloud computing, current offerings cannot compete with the supercomputers in the Top500 list for scientific computations.

In a non-peer-reviewed review of the keynote speeches of a workshop on distributed systems, Birman et al. [17] argue that the distributed-systems research agenda differs somewhat from the cloud agenda: while technologies from distributed systems are relevant for cloud computing, they are no longer core aspects of research. For illustration, they list strong synchronisation and consistency as main research topics in distributed systems. In cloud computing these remain relevant, but as the overarching design goal in the cloud is scalability, the search is now for decoupling, and thus avoiding synchronisation, rather than improving synchronisation technologies. Birman et al. [17] arrive at a cloud research agenda comprising four directions: managing the existing compute power and the loads present in the data centre; developing stable large-scale event-notification platforms and management technologies; improving virtualisation technology; and understanding how to work efficiently with a large number of low-end and faulty components.
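Returning to Napper and Bientinesi's scaling observation, one illustrative way to state it is in terms of parallel efficiency; the functional forms below are our reading of the reported trend, not formulas from [16].

\[
E(n) = \frac{T(1)}{n\,T(n)},
\]

where $T(n)$ is the benchmark runtime on $n$ nodes and $E(n) \approx 1$ for ideal scaling. The reported behaviour corresponds roughly to an efficiency that falls off linearly with $n$ on the HPC systems but like $e^{-\alpha n}$ (for some $\alpha > 0$) on the cloud nodes, so the gap widens rapidly as more nodes are added.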
7. BUILDING CLOUDS
Here we refer to work that helps in building cloud offerings. Such offerings require management software, hardware provisioning, and simulators to evaluate design and management choices. Sotomayor et al. [18] present two main tools for managing cloud infrastructures: OpenNebula, a virtual infrastructure manager, and Haizea, a resource lease manager. To manage the virtual infrastructure, OpenNebula provides a unified view of virtual resources regardless of the underlying virtualization platform; it manages the full lifecycle of the VMs; and it supports configurable resource-allocation policies, including policies for times when demand exceeds the available resources.
Sotomayor et al. argue that in private and hybrid clouds resources will be limited, in the sense that situations will occur where demand cannot be met and requests for resources will have to be prioritised, queued, pre-reserved, deployed to external clouds, or even rejected. They propose advance reservations to keep resources available for higher-priority requests that are expected to arrive shortly. This can be solved with a resource lease manager such as the proposed Haizea, together with something like a futures market for cloud-computing resources that pre-empts resource usage and puts advance resource reservations in place, so that highly prioritised demand can be served promptly. Haizea can act as a scheduling backend for OpenNebula, and together they advance on other virtual infrastructure managers by providing the ability to scale out to external clouds and support for scheduling groups of VMs, such that either the entire group of VMs is given resources or none of the group is. In combination they can provide resources best-effort, as done by Amazon EC2; by immediate provision, as done by Eucalyptus; and, in addition, through advance reservations.
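The division between advance reservations and best-effort requests can be sketched as a simple scheduling loop; this is a minimal illustration of the idea, not Haizea's actual algorithm or API.

import heapq
import itertools

class LeaseScheduler:
    # Advance reservations are held until their start time and then take
    # priority; best-effort leases only receive whatever capacity is left.
    def __init__(self, total_slots):
        self.free = total_slots
        self.best_effort = []              # FIFO of best-effort leases
        self.reservations = []             # heap of (start_time, seq, lease)
        self._seq = itertools.count()      # tie-breaker for equal start times

    def submit(self, lease, start_time=None):
        if start_time is None:
            self.best_effort.append(lease)
        else:
            heapq.heappush(self.reservations,
                           (start_time, next(self._seq), lease))

    def schedule(self, now):
        started = []
        while (self.reservations and self.reservations[0][0] <= now
               and self.free > 0):
            _, _, lease = heapq.heappop(self.reservations)
            self.free -= 1
            started.append(lease)
        while self.best_effort and self.free > 0:
            self.free -= 1
            started.append(self.best_effort.pop(0))
        return started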
Song et al. [19] have extended IBM data-centre management software to deal with cloud-scale data centres by using a hierarchical set of management servers instead of a central one. Since even simple tasks, such as discovering systems or collecting inventory, can overwhelm a single management server as the number of managed components or endpoints increases, they partition the endpoints to balance the management workload. Song et al. chose a hierarchical distribution of management components, because a centralised topology will in any possible implementation result in bottlenecks, and because P2P structures exhibit complexities that are not easy to understand. For resilience, management components have backup servers that are notified of changes by the original server; if these notifications stop arriving, the backup server takes over the original server's tasks until it returns to operation. In a study, Song et al. show that this solution scales almost linearly to 2048 managed endpoints with 8 management servers. Cloud-scale solutions, however, may need to manage a number of virtual machines one or two orders of magnitude larger, and it is left to future work to test whether the solution is practical and scales to such numbers of managed endpoints.
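The partitioning and failover behaviour described by Song et al. can be sketched in a few lines; the hash partitioning and the timeout value are our assumptions, not details of the IBM implementation.

def partition(endpoints, managers):
    # Balance the management workload by spreading endpoints over servers.
    assignment = {m: [] for m in managers}
    for e in endpoints:
        assignment[managers[hash(e) % len(managers)]].append(e)
    return assignment

def responsible_server(primary, backup, last_heartbeat, now, timeout=30.0):
    # The backup, kept up to date by change notifications, takes over when
    # the primary's notifications stop arriving.
    return backup if now - last_heartbeat > timeout else primary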
Figure 7.1: Cloud in a little more detail
Vishwanath et al. [20] describe the provisioning of shipping containers that contain the building blocks of data centres. The containers described are not serviced over their lifecycle; instead, they allow for graceful failure of components until performance degrades below a certain threshold and the entire container is replaced. To achieve this, Vishwanath et al. start by over-provisioning against the expected demand, or by putting cold nodes into the container that are only powered on once there is demand due to failure of other components. This work aims at supporting the design of shipping containers with respect to cost, reliability and performance. For reliability, Markov chains are used to calculate the expected mean time to failure over the lifecycle; for performance and cost, these Markov chains are extended into Markov reward models. The models rest on the assumption of exponential failure times and still need to be evaluated against real data. Such shipping containers could be used to sell private clouds in a box.
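As a worked illustration of the exponential-failure assumption (the series-system reading and the numbers are ours, not taken from [20]): if a component fails at a constant rate $\lambda$, then

\[
\Pr[T > t] = e^{-\lambda t}, \qquad
\mathrm{MTTF} = \int_0^\infty e^{-\lambda t}\,\mathrm{d}t = \frac{1}{\lambda},
\]

and for a container of $N$ independent components the first failure occurs at rate $N\lambda$, giving $\mathrm{MTTF}_{\text{first}} = 1/(N\lambda)$. With $N = 1000$ servers and $\lambda = 10^{-5}$ per hour, the first failure is expected after only 100 hours, which is why graceful degradation, rather than per-component repair, becomes attractive at container scale.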
Sriram [21] discusses some of the issues in scaling the size of the data centres used to provide cloud-computing services. He presents the development and initial results of a simulation tool for predicting the performance of cloud-computing data centres. The tool incorporates "normal failures": failures that occur frequently due to the sheer number of components and the expected average lifecycle of each component, and that are treated as the normal case rather than as an exception. Sriram shows that for small data centres and small failure rates the middleware protocol does not play a role, but for large data centres distributed middleware protocols scale better. CloudSim, another modelling and simulation toolkit, has been proposed by Buyya et al. [22]. CloudSim simulates the performance of consumer applications executed in the cloud: the modelled topology contains a resource broker and the data centres where the application is executed, and the simulator can then estimate the performance overhead of the cloud solution. CloudSim is built on top of a grid-computing simulator (GridSim) and looks at the scheduling of the executed application and the impact of virtualization on the application's performance. AbdelSalam et al. [23] seek to optimise change-management strategies, which are necessary for updates and maintenance, for low energy consumption in a cloud data centre. This work simply derives the actual load from the Service Level Agreements (SLAs) negotiated with current customers. AbdelSalam et al. then show that the number of servers currently required is proportional to the load, and identify the idle servers as those left available once all SLAs are fulfilled on a minimal set of servers; these are suggested as candidates for pending change-management requests. A key aspect of cloud computing is elasticity, however, which will make it difficult to estimate the load from the SLAs in place.
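A small sketch of that idea follows; the linear capacity model and the numbers are illustrative assumptions, not taken from [23].

import math

def minimum_servers(sla_loads, capacity_per_server):
    # Smallest number of servers that fulfils all negotiated SLAs.
    return math.ceil(sum(sla_loads) / capacity_per_server)

def change_candidates(total_servers, sla_loads, capacity_per_server):
    # Servers left idle once the SLAs are met on a minimal set; these can
    # absorb pending change-management requests.
    return total_servers - minimum_servers(sla_loads, capacity_per_server)

# Example: 40 units of committed load at 8 units per server -> 5 busy, 3 idle.
print(change_candidates(8, [10, 10, 10, 10], 8))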
8. SOME DRAWBACKS
Despite the level of expertise reached, some challenges are still faced today:
1. Server theft: if the server is attacked, the entire data set is compromised, along with the work of the entire workforce.
2. User level of knowledge: people working at the lower levels of the available services may not know how, and by whom, the data they store and access is handled.
3. Heavy load: overwhelming pressure on the system can cause serious distress; users working at the various levels of abstraction can face jams and access problems.
4. Cost and marketing: the main challenge is to remain cost-competitive in the market, which can prove to be an important task; the size and services of a cloud can change its cost in many ways, from small clouds to large organisational and public clouds.
Even though the cloud faces many difficulties, it remains true to the nature of the Internet and to the requirements of systems and people, and security concerns are addressed at the highest level to overcome these challenges to a large extent.
9. CONCLUSION
From this review we conclude that research on cloud computing has reached a high level in both technology and implementation. From universities to hospitals, research centres, government organisations and large private and public corporations, cloud computing is being used to improve the overall outcome, and within the upcoming scope of technology it is set to flourish. Though it took a while to reach where it is today, the level of usage is remarkable, and the ease of accessing data through a single cloud and storing the outcome in a safe, known place is becoming both the present and the future trend.
REFERENCES
[1] www.sodtechnologies.com
[2] J. Steddum, "A Brief History of Cloud Computing," blog.softlayer.com.
[3] www.wikipedia.com
[4] www.cloudcomputinginindia.com
[5] "Types of Cloud Computing: Private, Public and Hybrid Clouds," blog.appcore.com.
[6] VAQUERO, L., MERINO, L., CACERES, J. and LINDNER, M. 2009. A break in the clouds: towards a cloud definition. SIGCOMM Comput. Commun. Rev.
[7] YOUSEFF, L., BUTRICO, M. and DA SILVA, D. 2008. Toward a Unified Ontology of Cloud Computing. In Grid Computing Environments Workshop (GCE '08).
[8] VOAS, J. and ZHANG, J. 2009. Cloud Computing: New Wine or Just a New Bottle? IT Professional.
[9] VOUK, M. A. 2008. Cloud computing — Issues, research and implementations. In Information Technology Interfaces, 2008 (ITI 2008), 30th International Conference on, 31-40.
[10] FOSTER, I., ZHAO, Y., RAICU, I. and LU, S. 2008. Cloud Computing and Grid Computing 360-Degree Compared. In Grid Computing Environments Workshop (GCE '08), Austin, Texas, USA, November 2008, 1-10.
[11] CORBATÓ, F. J., SALTZER, J. H. and CLINGEN, C. T. 1972. Multics: the first seven years. In Proceedings of the May 16-18, 1972, Spring Joint Computer Conference, Atlantic City, New Jersey, 571-583.
[12] BUYYA, R., YEO, C. and VENUGOPAL, S. 2008. Market-Oriented Cloud Computing: Vision, Hype, and Reality for Delivering IT Services as Computing Utilities. In High Performance Computing and Communications, 2008 (HPCC '08), 10th IEEE International Conference on, 5-13.
[13] CHANG, M., HE, J. and LEON, E. 2006. Service-Orientation in the Computing Infrastructure.
[14] SEDAYAO, J. 2008. Implementing and operating an internet scale distributed application using service oriented architecture principles and cloud computing infrastructure. In iiWAS '08: Proceedings of the 10th International Conference on Information Integration and Web-based Applications & Services.
[15] ZHANG, L. and ZHOU, Q. 2009. CCOA: Cloud Computing Open Architecture. In Web Services, 2009 (ICWS 2009), IEEE International Conference on, 607-616.
[16] NAPPER, J. and BIENTINESI, P. 2009. Can cloud computing reach the Top500? In UCHPC-MAW '09: Proceedings of the combined workshops on UnConventional High Performance Computing Workshop plus Memory Access Workshop, 17-20.
[17] BIRMAN, K., CHOCKLER, G. and VAN RENESSE, R. 2009. Toward a cloud computing research agenda. SIGACT News, 40, 2, 68-80.
[18] SOTOMAYOR, B., MONTERO, R., LLORENTE, I. and FOSTER, I. 2009. Virtual Infrastructure Management in Private and Hybrid Clouds. IEEE Internet Computing, 13, 5, 14-22.
[19] SONG, S., RYU, K. and DA SILVA, D. 2009. Blue Eyes: Scalable and reliable system management for cloud computing. In Parallel & Distributed Processing, 2009 (IPDPS 2009), IEEE International Symposium on.
[20] VISHWANATH, K., GREENBERG, A. and REED, D. 2009. Modular Data Centers: How to Design Them?
[21] SRIRAM, I. 2009. A Simulation Tool Exploring Cloud-Scale Data Centres. In 1st International Conference on Cloud Computing (CloudCom 2009), 381-392.
[22] BUYYA, R., RANJAN, R. and CALHEIROS, R. N. 2009. Modeling and simulation of scalable Cloud computing environments and the CloudSim toolkit: Challenges and opportunities. In High Performance Computing & Simulation, 2009 (HPCS '09), International Conference on, 1-11.
[23] ABDELSALAM, H., MALY, K., MUKKAMALA, R., ZUBAIR, M. and KAMINSKY, D. 2009. Towards Energy Efficient Change Management in a Cloud Computing Environment.