Nefeli: Hint-based Execution of Workloads in Clouds
Konstantinos Tsakalozos*, Mema Roussopoulos*, Vangelis Floros†, and Alex Delis*
*Department of Informatics and Telecommunications, University of Athens, 15748, Greece
†Greek Research & Technology Network, Athens, 11527, Greece
What is Cloud Computing?

"Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction."

Delivery models:
- Software as a Service (SaaS)
- Platform as a Service (PaaS), e.g., Google App Engine
- Infrastructure as a Service (IaaS), e.g., Amazon EC2
Deployment models of Clouds

- Public Cloud: available to anyone, though not necessarily free; users' data is not publicly visible.
- Private Cloud: data and processes are managed within an organization.
- Community Cloud: controlled by a group of organizations that have shared interests.
- Hybrid Cloud: a combination of public and private clouds that interoperate.
Cloud characteristics

- Rapid Elasticity: scale up or down on demand.
- Measured Service: usage is monitored by the cloud provider.
- On-Demand Self-Service: provisioning without any human interaction.
- Ubiquitous Network Access: access over standard networks, not necessarily the public Internet.
- Resource Pooling: a multi-tenant model in which physical resources are dynamically reassigned.
Infrastructure as a Service (IaaS)

The consumer has full control over:
- processing power
- storage
- networking components (such as firewalls)
- the software stack (including the operating system and deployed applications)
Infrastructure as a Service (IaaS)

- The consumer has no information about, or control over, the cloud infrastructure beneath; everything is contained in a virtual infrastructure (virtual machines, virtual networks).
- The cloud provider collects requirements in the form of SLAs and utility functions.
- The cloud middleware ([OpenNebula], [Eucalyptus], [Nimbus], [VMWare]) considers the requirements and places the VMs on physical hosting nodes.
- As VMs consume resources (CPU, network), SLAs may fail.
- VMs are then migrated.
Related work

Autonomic systems [Kephart03][Wang07][Van09]:
1. Monitor the performance of the cloud and the virtual infrastructure.
2. Analyze the results.
3. Plan for SLA satisfaction / exploit utility functions [Kephart07].
4. Execute VM migrations.

These systems react to SLA failures, which hampers top performance.
Problem Statement

- Our goal: effective Virtual Machine deployment within the physical infrastructure of an Infrastructure-as-a-Service cloud.
- With Nefeli, users pass deployment hints while the physical infrastructure remains "cloudy".
Nefeli approach

Virtual infrastructures serve "task-flows":
- flows of data,
- task dependencies,
- task parallelization.

If we were given the task-flows, could we make educated decisions on Virtual Machine deployment?
Nefeli approach - Task-flow examples

- Parallel VMs simultaneously consuming the same resource (e.g., CPU): better deployed on separate hosting nodes.
- Sequential VMs producing excessive network traffic: better placed on the same hosting node.

The challenge: preserve the cloud abstractions. The user must describe the virtual infrastructure without any reference to, or knowledge of, the physical hosting nodes.

Constraints are predefined utility functions; the user knows their semantics, and the cloud administration knows how to exploit them. Commonly used constraints (a sketch in code follows the list):

- FavorVM: try to reserve a single hosting node for a specific VM.
- MinTraf: deploy a set of VMs on the same host so as to minimize traffic over physical network connections.
- ParVMs: try to deploy a set of VMs on separate physical nodes so that they do not compete for the same resources.
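To make the constraint mechanism concrete, here is a minimal sketch of such utilities in Python. The names follow the slide, but the deployment-profile representation and the scoring formulas are illustrative assumptions, not Nefeli's actual implementation.

```python
# A deployment profile is modelled here as a dict mapping VM id -> host id.
# The utility formulas below are illustrative assumptions, not Nefeli's own.

def favor_vm(vm, weight):
    """FavorVM: reward profiles in which `vm` has a hosting node to itself."""
    def utility(profile):
        shared = sum(1 for host in profile.values() if host == profile[vm])
        return 1.0 if shared == 1 else 0.0
    return utility, weight

def min_traf(vms, weight):
    """MinTraf: reward co-locating a set of VMs, keeping their traffic
    off the physical network."""
    def utility(profile):
        return 1.0 / len({profile[vm] for vm in vms})  # 1.0 when on one host
    return utility, weight

def par_vms(vms, weight):
    """ParVMs: reward spreading a set of VMs across distinct hosts so they
    do not compete for the same physical resources."""
    def utility(profile):
        return len({profile[vm] for vm in vms}) / len(vms)  # 1.0 when spread
    return utility, weight
```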
Task-flow example

The user/consumer employs hard or weighted (soft) constraints to provide deployment hints; a rough illustration of such a hint document follows.

[Figure: an example task-flow DAG and the corresponding XML hint document listing the VMs, their dependencies, and weighted constraints]
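Purely as an illustration, a parsed hint document might carry a VM set plus hard and weighted constraints along the following lines; all field names here are hypothetical and do not reflect Nefeli's XML schema.

```python
# Hypothetical in-memory form of a user-provided hint document.
taskflow_hints = {
    "vms": [1, 2, 3, 4, 5],
    "constraints": [
        {"type": "ParVMs",  "vms": [1, 2], "weight": 0.4},  # soft constraint
        {"type": "MinTraf", "vms": [3, 4], "weight": 0.3},  # soft constraint
        {"type": "FavorVM", "vms": [5]},                    # hard constraint
    ],
}
```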
Administration Constraints

Constraints set by the cloud administration:
- PowerSave: reduce the number of hosting nodes used for VM deployment.
- EmptyNode: offload a specific physical node.

[Figure: the user-provided XML (VM specifications and constraints) is combined with the administration constraints, hardware specifications, and deployment patterns to produce the deployment profile; layers 0 and 1 are edited as needed]
Deployment Profile

- A VM-to-host mapping is termed a deployment profile; it is the output of Nefeli.
- Each constraint is realized as a utility function that evaluates a deployment profile.
- Each deployment profile m is assigned a score:

  $Score(m) = \sum_{Const_i \in Cs} w_i \, Const_i(m)$,

  where Cs is the set of all constraints and $w_i$ are the respective weights.
- The goal of Nefeli is to find the profile with the highest score. We employ Simulated Annealing [Kirkpatrick83]; a sketch follows.
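As a sketch only: the score of a profile is a weighted sum of constraint utilities, and a generic simulated-annealing loop searches the space of VM-to-host mappings. The cooling schedule, move operator, and parameters below are assumptions, not the paper's actual settings.

```python
import math
import random

def score(profile, constraints):
    """Score(m) = sum over all constraints of w_i * Const_i(m)."""
    return sum(w * utility(profile) for utility, w in constraints)

def anneal(profile, hosts, constraints, t0=1.0, cooling=0.98, steps=5000):
    """Generic simulated annealing over VM-to-host mappings. The move
    operator reassigns one random VM to a random host (illustrative)."""
    current, best = dict(profile), dict(profile)
    temperature = t0
    for _ in range(steps):
        candidate = dict(current)
        candidate[random.choice(list(candidate))] = random.choice(hosts)
        delta = score(candidate, constraints) - score(current, constraints)
        # Accept improvements; accept worse moves with probability e^(delta/T).
        if delta > 0 or random.random() < math.exp(delta / temperature):
            current = candidate
        if score(current, constraints) > score(best, constraints):
            best = dict(current)
        temperature *= cooling
    return best
```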
Handling multiple task-flows on-the-fly

- One deployment profile serves many task-flows; the set of constraints to consider is the union of all their constraints.
- A transition between deployment profiles involves VM migrations; without live migration, this causes downtime of the virtual infrastructures.
- Given an initial deployment profile m_s, we first produce k high-scoring profiles and then pick the one whose transition from m_s requires migrating the fewest VMs (see the sketch after this list).
- We trade profile quality for swifter transitions.
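The migration-aware selection can be sketched in a few lines; obtaining the k high-scoring candidates (for example, from repeated annealing runs) is assumed:

```python
def migrations(src, dst):
    """Count the VMs whose hosting node differs between two profiles."""
    return sum(1 for vm in src if src[vm] != dst[vm])

def pick_transition(m_s, candidates):
    """Among k high-scoring candidate profiles, prefer the one that is
    cheapest to reach from the current profile m_s."""
    return min(candidates, key=lambda m: migrations(m_s, m))
```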
Event handling

Multiple task-flows run within the same virtual infrastructure, and the transition between deployment profiles is triggered by events. Taxonomy of events:

- Direct human intervention: submission or removal of task-flows, activation of constraints such as PowerSave or EmptyNode.
- Monitoring activity: in the context of the cluster, the virtual infrastructure, or an authorized third-party component.

A sketch of the resulting re-planning loop follows.
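This is a minimal sketch of event-driven re-planning, assuming hypothetical event kinds and a `replan` callback that recomputes the deployment profile; the event taxonomy names are illustrative, not Nefeli's own.

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    kind: str                      # e.g. "taskflow_submitted" (illustrative)
    constraints: list = field(default_factory=list)

def handle(event, active_constraints, replan):
    """Update the active constraint set (the union over all task-flows)
    and trigger computation of a new deployment profile."""
    if event.kind == "taskflow_submitted":
        active_constraints.extend(event.constraints)
    elif event.kind == "taskflow_removed":
        active_constraints[:] = [c for c in active_constraints
                                 if c not in event.constraints]
    replan(active_constraints)
```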
Nefeli is implemented as a thin layer between a cloud middleware and the user.

- The cloud middleware connector aims to add support for any cloud middleware.

[Figure: the Cloud Middleware Connector and extra functionalities sit on top of the cloud middleware; monitoring tools observe the VMs running on the hosting nodes]
Nefeli - Implementation Internals

[Figure: Nefeli, the cloud gateway between consumer and provider. The consumer submits a note with VM specifications, constraints, and user-monitored events; the Planner (Solver and Deployer) produces deployment profiles and pushes them through the Cloud Middleware Connector to the physical nodes, while administration constraints, infrastructure monitoring, and a notification mechanism feed back into the Planner]
Nefeli - Evaluation

Evaluation of Nefeli running on a simulated infrastructure. Deployment policies compared:
- Random
- Balanced: round-robin
- Power save: use as few hosting nodes as possible
- Nefeli
- Nefeli with power save

We measure throughput as we increase CPU performance and cloud size.

Nefeli on a real cloud infrastructure: Nefeli vs. OpenNebula, measuring execution time.
Nefeli - Montage workflow

"Montage: An Astronomical Image Mosaic Engine", California Institute of Technology, http://montage.ipac.caltech.edu

Constraints (encoded below for illustration):
- ParVMs on VMs {1, 2, 3, 4}, w = 0.30
- ParVMs on VMs {5, 6, 7, 8, 9, 10}, w = 0.30
- ParVMs on VMs {13, 14, 15, 16}, w = 0.30
- MinTraf on VMs {17, 18, 19, 20}, w = 0.50
- PowerSave, w = 0.80 (only for Nefeli+PowerSave)

[Figure: the Montage workflow DAG over VMs 1-20]
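For illustration, this constraint set can be expressed with the helper functions sketched earlier (PowerSave would need an additional utility over the number of active hosts):

```python
# Constraint set from the Montage evaluation; weights taken from the slide.
montage_constraints = [
    par_vms({1, 2, 3, 4}, 0.30),
    par_vms({5, 6, 7, 8, 9, 10}, 0.30),
    par_vms({13, 14, 15, 16}, 0.30),
    min_traf({17, 18, 19, 20}, 0.50),
]
# For the Nefeli+PowerSave runs, a PowerSave utility with w = 0.80 is added.
```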
Nefeli - Montage evaluation

We measure the throughput of the trailing node.

[Figure: throughput (KBytes/sec, 0-700) of the Random, Power, Balance, and Nefeli policies as the CPU performance scale grows from x10 to x200]
Nefeli - Montage evaluation

[Figure: increasing the number of physical hosts (3 to 10): total throughput (KBytes/sec) and throughput per active node (KBytes/sec) for the Random, Power, Balance, Nefeli, and Nefeli-power policies]
Nefeli - Real application evaluation

Application input:
- a video,
- an encoding (DVD, SVCD, VCD),
- a region (NTSC, PAL).

Video and audio encoding are performed on different nodes. The application has six different deployment profiles, one per encoding-region combination. At run time, the application informs Nefeli which profile to follow (a trivial sketch of this selection follows).
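As a trivial sketch, assuming a hypothetical naming scheme for the six profiles:

```python
# Hypothetical mapping from (encoding, region) to a named deployment profile;
# the profile names are illustrative, not taken from the paper.
PROFILES = {
    ("DVD", "NTSC"): "dvd_ntsc",   ("DVD", "PAL"): "dvd_pal",
    ("SVCD", "NTSC"): "svcd_ntsc", ("SVCD", "PAL"): "svcd_pal",
    ("VCD", "NTSC"): "vcd_ntsc",   ("VCD", "PAL"): "vcd_pal",
}

def select_profile(encoding, region):
    """Pick the deployment profile the application asks Nefeli to follow."""
    return PROFILES[(encoding, region)]
```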
Nefeli - Real application evaluation

Evaluation details:
- 3 nodes, each with 8 GB of RAM and an Intel Core 2 6600 CPU at 2.40 GHz
- connected through a 1 Gbps Ethernet switch
- live migration not available
- VM hypervisor: Xen 3.2-1
- cloud middleware: OpenNebula v1.2.0
- VMs use 512 MB of RAM and face no restriction on CPU usage
Nefeli - Real application evaluation

[Figure: execution time (sec, 0-1200) of the Split, Transcode, and Merge phases under OpenNebula (ONE) and under Nefeli]

Nefeli yields a 17% improvement.
Conclusions

- Nefeli enhances the interaction of users with IaaS clouds.
- We plan to integrate Nefeli with existing monitoring mechanisms and to add support for Eucalyptus and Nimbus.
- We will investigate alternative scheduling options.
- We will organize and better manage virtual resources for applications that necessitate massive data sets.
Related work - Autonomic Computing

[Kephart03] J. O. Kephart and D. M. Chess, "The Vision of Autonomic Computing," IEEE Computer, vol. 36, no. 1, pp. 41-50, 2003.
[Kephart07] J. O. Kephart and R. Das, "Achieving Self-Management via Utility Functions," IEEE Internet Computing, vol. 11, no. 1, pp. 40-48, 2007.
[Wang07] X. Wang, D. Lan, G. Wang, X. Fang, M. Ye, Y. Chen, and Q. B. Wang, "Appliance-Based Autonomic Provisioning Framework for Virtualized Outsourcing Data Center," in Proc. of the 4th Int. Conf. on Autonomic Computing, Washington, DC, 2007, p. 29.
[Van09] H. N. Van, F. D. Tran, and J. M. Menaud, "Autonomic Virtual Resource Management for Service Hosting Platforms," in Proc. of the 2009 ICSE Workshop on Software Engineering Challenges of Cloud Computing, Vancouver, BC, Canada, 2009, pp. 1-8.
References - Virtual Infrastructures

[OpenNebula] "OpenNebula," http://www.opennebula.org, May 2009.
[Eucalyptus] D. Nurmi, R. Wolski, C. Grzegorczyk, G. Obertelli, S. Soman, L. Youseff, and D. Zagorodnov, "The Eucalyptus Open-Source Cloud-Computing System," in 9th IEEE/ACM Int. Symposium on Cluster Computing and the Grid (CCGRID), Shanghai, China, May 2009, pp. 124-131.
[Nimbus] "Nimbus," http://workspace.globus.org/, Nov. 2009.
[VMWare] VMware, "vSphere," http://www.vmware.com/products/vsphere/, Nov. 2009.
Related work - Other

[Kirkpatrick83] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi, "Optimization by Simulated Annealing," Science, vol. 220, pp. 671-680, 1983.
[Xen03] P. Barham, B. Dragovic, K. Fraser, S. Hand, T. Harris, A. Ho, R. Neugebauer, I. Pratt, and A. Warfield, "Xen and the Art of Virtualization," in Proc. of the 19th ACM Symposium on Operating Systems Principles, Lake George, NY, October 2003, pp. 164-177.
[Montage09] "Montage: An Astronomical Image Mosaic Engine," California Institute of Technology, Pasadena, CA, http://montage.ipac.caltech.edu, 2009.
Nefeli - Montage workflow details

Two ratios are computed for each node: a) Input-to-Output and b) Cycles-to-Output.

VMs                    Input-to-Output   Cycles-to-Output
{1, 2, 3, 4}           0.5070            0.5830
{5, 6, 7, 8, 9, 10}    0.0500            29.4070
11                     0.0014            1.5000