Optimized Service Level Agreement Based Workload Balancing Strategy for Cloud Environment

B S Rajeshwari
Department of Computer Science and Engineering
B M S College of Engineering
Bangalore, India
[email protected]

Dr. M Dakshayini
Department of Information Science and Engineering
B M S College of Engineering
Bangalore, India
[email protected]

Abstract— The emerging technological demands of users call for an expanding service model that avoids the problem of purchasing and maintaining IT infrastructure and supports computation-intensive services. This has led to the development of a new computing model termed Cloud Computing. In cloud computing, the computing resources are distributed across data centers worldwide and are offered to customers on demand on a pay-as-usage basis. Currently, due to the increased usage of the cloud, there is a tremendous increase in workload. The uneven distribution of load among the servers results in server overloading and may lead to server crashes, which affects performance. Cloud computing service providers can attract customers and maximize their profit by providing Quality of Service (QoS). Providing both QoS and load balancing among the servers is one of the most challenging research issues. Hence, in this paper, a framework is designed to offer both QoS and load balancing among the servers in the cloud. The paper proposes a two-stage scheduling algorithm. The servers with different processing power are grouped into different clusters. In the first stage, a Service Level Agreement (SLA) based scheduling algorithm determines the priority of the tasks and assigns the tasks to the respective cluster. In the second stage, the Idle-Server Monitoring algorithm balances the load among the servers within each cluster. The proposed algorithm is implemented using the CloudSim simulator, and response time is used as the QoS parameter. Experimental results show that our algorithm provides better response time, waiting time, effective resource utilization and load balancing among the servers as compared to other existing algorithms.

Keywords— Service Level Agreement; Cloud Computing; Load Balancing.

I. INTRODUCTION

The Cloud Computing technique offers customers IT infrastructure such as servers, software, storage and platforms for developing applications, which can be rapidly provisioned and released as needed on a pay-as-usage basis depending upon their requirements. Rajkumar Buyya et al. [1] described cloud computing "as a type of parallel and distributed system consisting of a collection of interconnected and virtualized computers that are dynamically provisioned as computing resources based on service-level agreements established through negotiation between the service provider and customers". Primarily, cloud computing provides the following types of service models:

• Software as a Service (SaaS) model, where customers can request desired software, use it and pay only for the duration of time it was used, instead of purchasing, installing and maintaining it on their local machine. An example of SaaS is Google Docs.

• Platform as a Service (PaaS) model, where the complete resources needed to design, develop, test, deploy and host an application are provided as services without spending money on purchasing and maintaining servers, storage and software, e.g. Google App Engine.

• Infrastructure as a Service (IaaS) model, where infrastructures like a virtualized server, memory and storage are provided as services, e.g. Amazon Elastic Compute Cloud (EC2) and Amazon Simple Storage Service (S3).

Cloud computing has the following assets:

• Customers can scale the resources up and down dynamically as needed, and pay only for how much of the resources was used.

• The service provider fully manages the service. Customers no longer need to be concerned about the purchase, installation and maintenance of servers and software updates.

• No investment in servers, storage, software and licensing.

• Users can access the cloud from anywhere with an internet connection.
At present, cloud computing suffers from challenges such as security, QoS, power consumption and load balancing. As technology and consumer demands increase, there is an excessive workload, which calls for a load balancer. Balancing the load among the servers in the cloud has an important effect on performance. The uneven distribution of load among the servers results in server overloading and may lead to the crashing of servers, which degrades the performance. Load balancing is a technique that distributes the load equally among the servers and thereby avoids server overloading, server crashes and performance degradation. Load balancing is an important factor in providing good response time and effective resource utilization. Hence, an efficient load balancing technique is needed.

Another issue in cloud computing is providing a service that satisfies the service level agreement (SLA). An SLA is a contract between the service provider and the customer in delivering QoS. In cloud computing, the SLA is defined as "the cloud providers agreed to the level of performance for the certain aspects of the services with the providers" [2]. This requires that a task be scheduled such that the response time is reduced and services are provided to the customer within the scheduled time as specified in the SLA. Thus, by adhering to the SLA and achieving proper load balancing, service providers can attract customers and maximize their profit; otherwise, customers move to other service providers. Providing both QoS and load balancing among the servers is therefore one of the most challenging research issues. Hence, we have proposed a framework that schedules a task based on the SLA and also maintains load balancing among multiple servers, thus providing both good response time and effective resource utilization.

The rest of this paper is structured as follows: Section 2 presents the related work. In Section 3, the proposed algorithm and workflow model are discussed. Section 4 describes the experimental settings, Section 5 discusses the simulation results and finally, Section 6 presents the conclusion.

II. RELATED WORK

Cloud computing is a technique where groups of servers distributed in data centers allow centralized data storage and online access to computing resources or services. As requests enter, they have to be distributed equally among the servers; otherwise the result is server overloading, performance degradation and ineffective utilization of resources. An efficient load balancing technique improves the response time of tasks as well as utilizing the resources effectively.

Ivona Brandic et al. [12] discussed that, in the cloud, services are provided to the customers based on an SLA, an agreement signed between the customer and the service provider that includes the non-functional requirements of the services specified as Quality of Service (QoS).

Shu-Ching Wang et al. [4] presented two algorithms, OLB (Opportunistic Load Balancing) and LBMM (Load Balance Min-Min), under a three-level cloud computing network. At the first level, the OLB algorithm distributes incoming tasks to the service managers in arbitrary order without considering the current load of the service manager. Further, the service manager splits the task into subtasks. At the second level, the LBMM scheduling algorithm computes the execution time of each subtask on each node and assigns the subtasks to the node that takes the minimum execution time.

Shu-Ching Wang et al. [5] presented a hierarchical cloud computing network and three scheduling algorithms (BTO + EOLB + EMM). In this proposed network, servers are clustered based on processing capacity and memory capacity. At the first stage, the Best Task Order (BTO) algorithm determines the best execution order for each task. At the second stage, the Enhanced Opportunistic Load Balancing (EOLB) algorithm schedules a task to a suitable service manager. Further, the service manager splits the task into subtasks. At the third stage, the Enhanced Min-Min (EMM) algorithm schedules the subtasks to a suitable service node that takes the minimum execution time.

Meenakshi Sharma et al. [6][7] suggested the Throttled load balancer, which maintains the status (busy/idle) of all virtual machines. When a task enters, the balancer checks for an idle virtual machine (VM) and sends its virtual machine id to the data center controller. Further, the data center controller distributes the task to the identified virtual machine.

Shanti Swaroop Moharana et al. [8] discussed the Round Robin algorithm, where the first request is assigned to a virtual machine picked randomly from the set of VMs. Subsequently, the algorithm schedules the next requests to the next virtual machines in a circular fashion. Since the Round Robin algorithm schedules tasks without considering the execution time of the task and the current load of the server, there is a possibility of some servers getting heavily loaded.

Jasmin James et al. [16] presented the Weighted Active Monitoring Load Balancing (WALB) algorithm. The WALB algorithm assigns a weight to each virtual machine (VM) depending upon its processing power. When a task arrives, the algorithm finds the least loaded and most suitable VM depending upon the weight and assigns the task to the identified VM.

K C Gouda et al. [13] proposed a resource allocation model based on priority. The proposed algorithm calculates node and time priority values based on the specified conditions and checks whether the total number of requested nodes is less than the number of available nodes; if yes, it schedules the task, otherwise the task is put into a queue. If the requested resources exceed the limit, the task is rejected.
III. PROPOSED SCHEDULING METHOD

In this proposed framework, the servers with different processing power are grouped into different clusters. The proposed two-stage scheduling algorithm consists of the SLA Based Scheduling algorithm and the Idle-Server Monitoring algorithm.

A. SLA Based Scheduling Algorithm

At the first stage, when a task enters the task queue, the SLA based scheduling algorithm computes the priority of the task by considering task length, deadline and cost. Then the task is scheduled to the respective servers cluster.
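The paper does not give the exact priority formula. As a minimal illustration of this first stage, the following Java sketch assumes a simple weighted score over deadline, cost and task length; the Task and Priority types, the weights and the thresholds are assumptions made only for this sketch, not the authors' implementation.

// Illustrative first-stage priority computation (assumed formula and thresholds).
enum Priority { HIGH, AVERAGE, LOW }

class Task {
    final int id;
    final double lengthMi;     // task length, e.g. in million instructions
    final double deadlineMs;   // SLA deadline agreed with the customer
    final double costPaid;     // cost the customer agreed to pay
    long arrivalTime, startTime, finishTime;

    Task(int id, double lengthMi, double deadlineMs, double costPaid) {
        this.id = id; this.lengthMi = lengthMi;
        this.deadlineMs = deadlineMs; this.costPaid = costPaid;
    }
}

class SlaScheduler {
    // Tighter deadlines, higher cost and longer tasks raise the score (assumed weights).
    static double score(Task t) {
        double urgency = 1000.0 / Math.max(t.deadlineMs, 1.0);
        double size    = t.lengthMi / 1000.0;
        return 0.5 * urgency + 0.3 * t.costPaid + 0.2 * size;
    }

    // Thresholds are placeholders; a real deployment would calibrate them.
    static Priority classify(Task t) {
        double s = score(t);
        if (s >= 5.0) return Priority.HIGH;
        if (s >= 2.5) return Priority.AVERAGE;
        return Priority.LOW;
    }
}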
B. Idle-Server Monitoring Algorithm

At the second stage, the Idle-Server Monitoring algorithm runs within each cluster to monitor the set of servers in its cluster. The algorithm checks for any idle server in its cluster; if one is found, it assigns a task to the identified server. If a task cannot be assigned to a server, the task is put into a queue. Within each cluster, the Idle-Server Monitoring algorithm maintains the status of all the servers in a table. The Idle-Server Monitoring algorithm within the medium processing power servers cluster also checks for any idle high processing power server within the high processing power cluster; if a free server is found, it assigns its task to the identified high processing power server. Similarly, the Idle-Server Monitoring algorithm within the low processing power servers cluster checks for any free medium processing power server within the medium processing power servers cluster; if a free server is found, it assigns its task to the identified medium processing power server. By doing this, the resources are utilized effectively and the tasks are completed within the scheduled time. The system experiences no overloads and request rejection is reduced.
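The Java fragment below sketches the idle-server lookup just described, reusing the illustrative Task type from the previous sketch. The Server and Cluster classes, the status table and the fallback to the next more powerful cluster are simplified assumptions that mirror the text, not the authors' code.

// Illustrative second-stage idle-server lookup with fallback to a faster cluster.
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

class Server {
    final String name;
    boolean idle = true;
    Server(String name) { this.name = name; }
}

class Cluster {
    final List<Server> servers = new ArrayList<>();
    final Queue<Task> waiting = new ArrayDeque<>();   // tasks parked when no server is idle
    Cluster fasterCluster;                             // e.g. medium -> high, low -> medium

    // Status-table lookup: first idle server in this cluster, or null.
    Server findIdleServer() {
        for (Server s : servers) if (s.idle) return s;
        return null;
    }

    // Idle-Server Monitoring step for one task assigned to this cluster.
    void dispatch(Task t) {
        Server target = (fasterCluster != null) ? fasterCluster.findIdleServer() : null;
        if (target == null) target = findIdleServer();
        if (target != null) {
            target.idle = false;   // run the task on the identified server
            // ... execute t on target and mark the server idle again on completion ...
        } else {
            waiting.add(t);        // no idle server anywhere: queue the task
        }
    }
}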
The two-stage scheduling algorithm is described as follows:

Step 1: Group the servers into different clusters depending upon their processing power: a high processing power servers cluster, a medium processing power servers cluster and a low processing power servers cluster.

Step 2: When a task Xi arrives, the SLA based scheduling algorithm determines the priority of the task by considering its deadline, cost and task length.

Step 3: The SLA based scheduling algorithm assigns the task Xi to the respective cluster based on the calculated priority as follows:
i) If Xi is a high priority task, it is assigned to the high processing power servers cluster.
ii) If Xi is an average priority task, it is assigned to the medium processing power servers cluster.
iii) If Xi is a low priority task, it is assigned to the low processing power servers cluster.

Step 4:
a) If task Xi is assigned to the high processing power servers cluster, the Idle-Server Monitoring algorithm checks for any idle server Sj in its cluster. If found, it assigns Xi to Sj; otherwise the task is put into the queue.
b) If task Xi is assigned to the medium processing power servers cluster, the Idle-Server Monitoring algorithm first searches for an idle server Sj in the high processing power servers cluster. If found, it assigns the task Xi to the identified server Sj. Otherwise it checks for any idle server Sj in its own cluster and, if found, assigns the task to the identified server Sj. If no idle server is found, the task is put into the queue.
c) If task Xi is assigned to the low processing power servers cluster, the Idle-Server Monitoring algorithm first searches for an idle server Sj in the medium processing power servers cluster. If found, it assigns the task Xi to the identified server Sj. Otherwise it checks for any idle server Sj in its own cluster and, if found, assigns the task to the identified server Sj. If no idle server is found, the task is put into the queue.

Step 5: Steps 2 to 4 are repeated for each incoming task (a condensed code sketch of this dispatch loop is given after Fig. 1).

C. Workflow Diagram of the Proposed Two Stage Scheduling Algorithm

Fig. 1. Proposed Two Stage Scheduling
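As referenced above, a condensed driver for Steps 1 to 5 is sketched below, reusing the illustrative Task, Priority, SlaScheduler and Cluster classes from the earlier fragments; all names are assumptions made for this sketch.

// Illustrative two-stage dispatch loop (Steps 1-5), not the authors' implementation.
import java.util.EnumMap;
import java.util.Map;

class TwoStageScheduler {
    private final Map<Priority, Cluster> clusters = new EnumMap<>(Priority.class);

    TwoStageScheduler(Cluster high, Cluster medium, Cluster low) {
        // Step 1: servers are already grouped into clusters by processing power.
        medium.fasterCluster = high;   // Step 4b: medium tasks may borrow idle high-power servers
        low.fasterCluster = medium;    // Step 4c: low tasks may borrow idle medium-power servers
        clusters.put(Priority.HIGH, high);
        clusters.put(Priority.AVERAGE, medium);
        clusters.put(Priority.LOW, low);
    }

    // Steps 2-4, repeated for each incoming task (Step 5).
    void submit(Task t) {
        Priority p = SlaScheduler.classify(t);   // Step 2: SLA based priority
        clusters.get(p).dispatch(t);             // Steps 3-4: cluster choice + idle-server lookup
    }
}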
IV. EXPERIMENTAL SETUP

This section describes the setup that was used for the experimentation. The Java language is used for the implementation of the SLA based scheduling algorithm and the Idle-Server Monitoring algorithm. CloudSim was chosen for simulating the above environment because it provides emulation of a real processor. Tasks of different lengths are taken for the experimentation. Requests from different users are taken and the priority is calculated based on the SLA.
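The experiments themselves were carried out in CloudSim; the plain-Java fragment below is only a simplified, self-contained stand-in that shows how tasks of varying length, deadline and cost could be generated and how the reported metrics would be derived (waiting time as start minus arrival, response time as finish minus arrival). It reuses the illustrative Task and TwoStageScheduler sketches from Section III; the value ranges are assumptions.

// Illustrative workload generation and metric computation (not the CloudSim scenario itself).
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

class Experiment {
    public static void main(String[] args) {
        Random rnd = new Random(42);
        List<Task> tasks = new ArrayList<>();
        for (int i = 0; i < 100; i++) {
            double length   = 500 + rnd.nextInt(9500);   // task length (assumed range)
            double deadline = 50 + rnd.nextInt(450);     // SLA deadline in ms (assumed range)
            double cost     = 1 + rnd.nextInt(10);       // agreed cost units (assumed range)
            tasks.add(new Task(i, length, deadline, cost));
        }
        // ... submit each task through TwoStageScheduler and record its timestamps ...

        double totalResponse = 0, totalWaiting = 0;
        for (Task t : tasks) {
            totalWaiting  += t.startTime  - t.arrivalTime;   // waiting time
            totalResponse += t.finishTime - t.arrivalTime;   // response time
        }
        System.out.printf("avg waiting = %.2f ms, avg response = %.2f ms%n",
                totalWaiting / tasks.size(), totalResponse / tasks.size());
    }
}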
V. RESULTS AND DISCUSSION

This part shows the comparative performance of the proposed algorithm. The model is implemented using the CloudSim toolkit. The effectiveness of the proposed model is evaluated under different loads and its results are compared with two other existing algorithms, the Throttled load balancing algorithm and the Round Robin algorithm. Three metrics, response time, waiting time and resource utilization, are considered as the performance measures.

Fig. 2 describes the comparative performance of the average response time of the three algorithms for the 3 different priority tasks. In Fig. 2, the y-axis signifies average response time and the x-axis signifies the different priority tasks. From the figure, we can observe that the proposed system has improved the average response time for the average priority tasks by 9 milliseconds compared to the Round Robin algorithm and by 8 milliseconds compared to the Throttled algorithm. Similarly, the proposed system has improved the average response time for the low priority tasks by 11 milliseconds compared to the Round Robin algorithm and by 17 milliseconds compared to the Throttled algorithm. Thus, the figure shows that, without affecting the response time of high priority tasks, there is a significant improvement in the response time of average priority tasks by utilizing high processing power servers and a significant improvement in the response time of low priority tasks by utilizing medium processing power servers.
Fig. 2. Comparison of Response Time
Fig. 3 shows the comparison of the average response time of high priority tasks, average priority tasks and low priority tasks. In Fig. 3, the y-axis represents average response time and the x-axis represents the different priority tasks. From Fig. 3, we can observe that high priority tasks are faster than the average priority tasks by 5 milliseconds and average priority tasks are faster than the low priority tasks by 15 milliseconds.
Fig. 3. Comparison of Response Time of Different Priority Tasks

Fig. 4 illustrates the comparative performance on execution time for the 3 different priority tasks. In Fig. 4, the y-axis represents the execution time of each task and the x-axis represents the different tasks. Fig. 4 shows that the proposed algorithm takes less execution time for all high priority tasks, average execution time for all average priority tasks and more execution time for all low priority tasks.

Fig. 4. Comparison of Execution Time of Different Priority Tasks

Fig. 5 illustrates the comparison of the average waiting time of the three algorithms for the 3 priority tasks. In Fig. 5, the y-axis denotes average waiting time and the x-axis denotes the different priority tasks. The graph shows that the proposed algorithm has improved waiting time compared to the other two algorithms. This shows that the proposed algorithm responds to requests immediately most of the time.

Fig. 5. Comparison of Waiting Time
Fig. 6 illustrates the comparison of the resource utilization of servers under the three algorithms in all 3 clusters. From Fig. 6, we can observe that our proposed model utilizes the high processing power servers efficiently compared to the other two algorithms, whereas the load is shared among the servers in the medium processing power and low processing power clusters. The figure shows overloading of servers under the other two algorithms in the low processing power cluster. Thus our proposed algorithm utilizes the resources effectively in all the three clusters and avoids server overloading, thereby balancing the load among the servers.

Fig. 6. Comparison of Resource Utilization

VI. CONCLUSION

The main aim of the proposed algorithm is to satisfy both the SLA as well as load balancing among the servers. In this paper, a two-stage scheduling algorithm consisting of the SLA Based Scheduling algorithm and the Idle-Server Monitoring algorithm is presented and evaluated. The SLA Based Scheduling algorithm schedules the tasks to the respective cluster based on the SLA, and the Idle-Server Monitoring algorithm balances the load among the clusters as well as within each cluster. The results show that our algorithm provides good response time, reduces waiting time, utilizes resources effectively and achieves better load balancing among the servers.

REFERENCES

[1] Rajkumar Buyya, Chee Shin Yeo, Srikumar Venugopal, James Broberg, Ivona Brandic, "Cloud Computing and Emerging IT Platforms: Vision, Hype and Reality for Delivering Computing as the 5th Utility", Journal of Future Generation Computer Systems, Volume 25, Issue 6, June 2009, pp 599-616, DOI: 10.1016/j.future.2008.12.001.
[2] Rajkumar Buyya, J. Broberg, A. Goscinski, "Cloud Computing: Principles and Paradigms", New Jersey: John Wiley & Sons, Inc., 2011.
[3] Rajeshwari B S, Dr. M Dakshayini, "Comprehensive Study on Load Balancing Techniques in Cloud", International Journal of Advanced Computer Technology, Volume 3, Issue 6, June 2014, pp 900-907, ISSN: 2320-0790.
[4] Shu-Ching Wang, Kuo-Qin Yan, Wen-Pin Liao, Shun-Sheng Wang, "Towards a Load Balancing in a Three Level Cloud Computing Network", 3rd International Conference on Computer Science and Information Technology, Volume 1, 9-11 July 2010, pp 108-113, DOI: 10.1109/ICCSIT.2010.5563889.
[5] Shu-Ching Wang, Kuo-Qin Yan, Shun-Sheng Wang, Ching-Wei Chen, "A Three Phase Scheduling in a Hierarchical Cloud Computing Network", 3rd International Conference on Communication and Mobile Computing, Taiwan, 18-20 April 2011, pp 114-117, DOI: 10.1109/CMC.2011.28.
[6] Meenakshi Sharma, Pankaj Sharma, "Performance Evaluation of Adaptive Virtual Machine Load Balancing Algorithm", International Journal of Advanced Computer Science and Applications, Volume 3, Issue 2, 2012, pp 86-88, ISSN: 2156-5570.
[7] Meenakshi Sharma, Pankaj Sharma, Sandeep Sharma, "Efficient Load Balancing Algorithm in VM Cloud Environment", International Journal of Advanced Computer Science and Applications, Volume 3, Issue 1, 2012, pp 439-441, ISSN: 0976-8491 (Online), ISSN: 2229-433 (Print).
[8] Shanti Swaroop Moharana, Rajadeepan D Ramesh, Digamber Powar, "Analysis of Load Balancers in Cloud Computing", International Journal of Computer Science and Engineering, Volume 2, Issue 2, May 2013, pp 101-108, ISSN: 2278-9960.
[10] Anthony T Velte, Toby J Velte, Robert Elsenpeter, "Cloud Computing: A Practical Approach", 1st Edition, Tata McGraw Hill Publishers, 2009, ISBN: 0071626948.
[11] http://csrc.nist.gov/publications/nistpubs/800-145/SP800-145.pdf
[12] Ivona Brandic, Vincent C. Emeakaroha, Michael Maurer, Schahram Dustdar, Sandor Acs, Attila Kertesz, Gabor Kecskemeti, "LAYSI: A Layered Approach for SLA-Violation Propagation in Self-manageable Cloud Infrastructures", 34th Annual IEEE Computer Software and Applications Conference Workshops, 2010, pp 365-370, DOI: 10.1109/COMPSACW.2010.70.
[13] K C Gouda, Radhika T V, Akshatha M, "Priority Based Resource Allocation Model for Cloud Computing", International Journal of Science, Engineering and Technology Research, Volume 2, Issue 1, January 2013, pp 215-219.
[14] Bhavani B H, H S Guruprasad, "Resource Provisioning Techniques in Cloud Computing Environment: A Survey", International Journal of Research in Computer and Communication Technology, Volume 3, Issue 3, March 2015, pp 395-401.
[15] John W Rittinghouse, James F Ransome, "Cloud Computing: Implementation, Management and Security", CRC Press, 2009.
[16] Jasmin James, Bhupendra Verma, "Efficient VM Load Balancing Algorithm for a Cloud Computing Environment", International Journal on Computer Science and Engineering, Volume 4, September 2012, pp 1658-1663, ISSN: 0975-3397.