Improved Strategies for Enhanced Business Performance in Cloud-based IT Industries

T. R. Gopalakrishnan Nair 1, Vaidehi M. 2, Suma V. 3

1 Research & Industry Incubation Centre, DSI, Bangalore, India; Aramco Endowed Chair - Technology, PMU, KSA
2,3 Research & Industry Incubation Centre, DSI, Bangalore, India

[email protected], [email protected], [email protected]
Abstract

The emergence of sophisticated technologies in the IT industry has posed several challenges, such as building products through advanced technical processes, for instance Results, Approach, Deployment, Assessment and Refinement (RADAR), in a dynamic and competitive environment. The key challenge for any engineer is therefore to develop processes and products that ultimately lead to total customer satisfaction. Recent developments in technology have driven most IT industries to operate in the cloud environment because of reduced infrastructure investment and maintenance overheads. However, existing processes in the cloud lack efficient multiple-service paradigms that can provide improved business gain. Thus, it is the responsibility of every engineer to contribute effective and efficient techniques and models that can enhance business performance. The position of this paper is to present several major issues prevailing in the IT industry, such as delay time, response time and performance, which call for immediate attention if companies are to position themselves in the market. Further, this paper provides improved strategies, through efficient job scheduling and modified resource allocation techniques, to address the aforementioned issues and to enhance business performance in cloud-based IT sectors. The simulation results provided in this paper indicate the impact of the enhanced solutions incorporated in the job processing strategies. They further enable better performance of the cloud with reduced delay and response time, resulting in improved throughput. Subsequently, the job acceptance ratio with respect to time increases, thereby leading the industry towards total customer satisfaction in addition to continued sustainability in the competitive business market.

Key words: Cloud Computing, Resource Allocation, Job Scheduling, Business Performance

I Introduction

The cloud computing model is becoming more popular in the IT industry due to lower investment in infrastructure and maintenance, and hence has an impact on cost and time optimization. The cloud has created a new panorama to align IT and business goals. It is primarily a powerful computing model providing Software-as-a-Service, Infrastructure-as-a-Service and Application-as-a-Service in a virtualized environment. The enormous computing power is made possible through distributed computing and advanced communication networks. The on-demand characteristic of cloud computing, combined with the pay-as-you-go model, means that as application demand grows, so does the demand for the resources required to service the requests. The important facet that differentiates the cloud computing infrastructure is the implementation of the virtualization technique. With virtualization, the requested computing resources are provided to clients regardless of the heterogeneous environment, based on the service level agreements (SLAs) between the service providers and their clients.
In this scenario, capacity matches demand exactly as long as the application is designed properly and its architecture supports quick formation of virtual machines, low response time and minimal delay in data transfer. The model should therefore also cater for low virtual machine cost and low data transfer cost. The present cloud infrastructure gives limited consideration to supporting virtual machine formation that meets the QoS needs of clients while minimizing virtual machine costs to maximize return on investment. This computation model promises to reduce the capital and operational cost of clients, since only minimal management effort is required from users to run their applications in the cloud. Hence, anticipating how much work can be accomplished within a limited budget, and how to achieve the maximum ROI before running the application in the cloud, becomes an important problem. This issue reduces to minimizing the response and delay times and thereby improving the throughput of the computing model. The total execution time spans three phases: in the first phase the virtual machines are formed and remain idle, waiting for the scheduler to schedule the jobs in the queue; once jobs are allocated, the virtual machines start processing, which is the second phase; and the third phase is the cleanup or destruction of the virtual machines. The throughput of the computing model can be estimated as the total number of jobs executed within a time span, excluding the virtual machine formation and destruction times.

This paper is structured as follows. Section II presents the related work and Section III the challenge considered and the problem definition. Section IV defines the architecture of the scheduling model and how jobs are allocated to the various virtual machines and executed in the cloud. The performance analysis of the model is presented in Section V.

II Related Work

Due to the recent emergence of cloud computing, research in this area is still at a preliminary stage. Jiyin Li et al. (2010) have proposed a resource allocation mechanism with preemptable task execution which increases the utilization of clouds. They propose an adaptive resource allocation algorithm for cloud systems with preemptable tasks, but their approach does not address cost or time optimization [1]. Rajkumar Buyya et al. have focused on the development of energy-aware dynamic resource provisioning and allocation algorithms that maximize the return on investment (ROI) of cloud infrastructure set up by service providers; their work notes that the present state of the art in cloud infrastructure has little or no consideration for cost-optimized resource allocation that meets the QoS needs of clients and minimizes energy cost to maximize ROI [2]. Haiyang Qian and Deep Medhi have presented a multi-time-period optimization model for saving operational cost by combining two aspects: dynamic voltage/frequency scaling (DVFS) and turning servers on/off over a time horizon [3]. This approach carries the overhead of switching servers off and on in order to optimize cost. Sivadon Chaisiri et al. (2011) have proposed an optimal cloud resource provisioning (OCRP) algorithm to provision resources offered by multiple cloud providers.
They formulate the OCRP algorithm as a stochastic programming model. The algorithm can provision computing resources to be used in multiple stages, and demand and price uncertainty are handled by the algorithm [4].

III Problem Definition

Although tremendous research is being carried out in the cloud domain, it is apparent that the current scheduling techniques are not efficient. Efficient scheduling and resource allocation is a critical characteristic of cloud computing, on the basis of which the performance of the system is estimated. These characteristics have a direct impact on cost optimization, which can be obtained through improved response and processing times.

IV Design Model

To handle the aforementioned challenge, we have proposed a scheduling algorithm and compared it with the existing Round Robin scheduling in terms of response time and processing time, both of which have an impact on cost.
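For reference, the timing decomposition described in Section I can be summarized as follows (the notation is ours and simply restates that description):

T_total = T_formation + T_processing + T_cleanup,    Throughput ≈ N_jobs / T_processing

where N_jobs is the number of jobs completed in the measured span, and the virtual machine formation and destruction phases are excluded from the throughput estimate.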
[Figure 1 layout: Consumer 1, Consumer 2, ..., Consumer n submit jobs to the Primary Queue; the Job Scheduler dispatches them to VM1, VM2, ..., VMn.]
Figure 1. Equally spread current execution load in the cloud system

Figure 1 depicts the architecture model on which the proposed algorithm is implemented. Jobs are submitted by the clients to the computing system. As the submitted jobs arrive at the cloud, they are placed in the primary queue. The cloud manager estimates the job size and checks both the availability and the capacity of the virtual machines. Once the job size matches the size of an available resource (virtual machine), the job scheduler immediately allocates the identified resource to the job in the queue. Unlike the Round Robin scheduling algorithm, there is no overhead of fixing time slots to schedule the jobs periodically. The impact of the Equally Spread Current Execution (ESCE) algorithm is an improvement in response time and processing time. Because the jobs are equally spread, the complete computing system is load balanced and no virtual machines are under-utilized. Owing to this advantage, there is a reduction in the virtual machine cost and the data transfer cost, as shown in Section V.
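To make the dispatch rule concrete, the following is a minimal Python sketch of an ESCE-style scheduler. It is an illustration of the behaviour described above, not the implementation used in the simulation; the Job and VirtualMachine structures and their capacity fields are our own simplifications.

from collections import deque
from dataclasses import dataclass, field
from typing import List

@dataclass
class Job:
    job_id: int
    size: int                      # requested capacity units

@dataclass
class VirtualMachine:
    vm_id: int
    capacity: int                  # total capacity units of this VM
    load: int = 0                  # capacity units currently in use
    assigned: List[int] = field(default_factory=list)

def esce_schedule(queue: deque, vms: List[VirtualMachine]) -> None:
    """ESCE-style dispatch: take each job from the primary queue and hand it
    to the least-loaded VM whose free capacity can hold it, so the load stays
    equally spread and no VM sits idle while others are overloaded."""
    while queue:
        job = queue.popleft()
        candidates = [vm for vm in vms if vm.capacity - vm.load >= job.size]
        if not candidates:
            queue.appendleft(job)  # nothing fits right now; wait for capacity
            break
        target = min(candidates, key=lambda vm: vm.load)
        target.load += job.size
        target.assigned.append(job.job_id)

if __name__ == "__main__":
    vms = [VirtualMachine(vm_id=i, capacity=100) for i in range(3)]
    jobs = deque(Job(job_id=j, size=30) for j in range(5))
    esce_schedule(jobs, vms)
    for vm in vms:
        print(f"VM{vm.vm_id}: load={vm.load}, jobs={vm.assigned}")

Choosing the least-loaded virtual machine whose free capacity fits the job is what keeps the load equally spread; there is no fixed time slice as in Round Robin, which is the behaviour the response-time figures in Section V reflect.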
V Simulation Setup and Performance Analysis

We used the CloudAnalyst tool to evaluate the proposed algorithm; the simulation was carried out for a period of one hour. Tables 1 and 2 show the simulation set-up, covering the user bases and the data centres considered.

Table 1. User Base Configuration
UB     R/U/H   DS/R   DC
UB1    06      100    DC4
UB2    06      100    DC3
UB3    06      100    DC2
UB4    06      100    DC1
UB5    06      100    DC5

UB – User base; R/U/H – Requests per user per hour; DS/R – Data size per request; DC – Data centre.

Table 2. Data Centre Configuration
DC     No. of VMs   Memory   BW      CPV $/Hr   Mc $/s   Sc $/s   Dt $/Gb
DC1    40           1024     1000    0.1        0.05     0.1      0.1
DC2    20           512      100     0.1        0.05     0.1      0.1
DC3    50           512      10000   0.1        0.05     0.1      0.1
DC4    35           1024     1000    0.1        0.05     0.1      0.1

DC – Data centre; Mc – Memory cost; Sc – Storage cost; Dt – Data transfer cost.
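For readers who want to relate the parameters in Table 2 to the cost summaries reported below, the following Python sketch shows one plausible way such costs compose: an hourly rate per running VM plus a per-GB transfer rate, which is how we read the CPV ($/Hr) and Dt ($/Gb) columns. It is only our interpretation, not the billing logic inside CloudAnalyst, and the example values are hypothetical.

def estimate_cost(num_vms: int, hours: float, vm_rate_per_hour: float,
                  data_gb: float, transfer_rate_per_gb: float) -> dict:
    """Rough cost estimate: hourly charge per running VM plus a per-GB
    data transfer charge (our reading of CPV and Dt in Table 2)."""
    vm_cost = num_vms * hours * vm_rate_per_hour
    dt_cost = data_gb * transfer_rate_per_gb
    return {"vm_cost": round(vm_cost, 2),
            "data_transfer_cost": round(dt_cost, 2),
            "grand_total": round(vm_cost + dt_cost, 2)}

# Hypothetical example (the data volume is assumed, not taken from the
# simulation): 40 VMs at $0.1/hr for one hour and 50 GB at $0.1/GB.
print(estimate_cost(num_vms=40, hours=1.0, vm_rate_per_hour=0.1,
                    data_gb=50, transfer_rate_per_gb=0.1))
# -> {'vm_cost': 4.0, 'data_transfer_cost': 5.0, 'grand_total': 9.0}

The totals reported below depend on how many VM-hours and gigabytes the simulator actually bills, so this sketch is not expected to reproduce those exact figures.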
Overall response time summary using the Round Robin algorithm:

                                  Avg (ms)   Min (ms)   Max (ms)
Overall response time             358.03     170.11     625.12
Data centre processing time       0.30       0.4        0.75

[Figure 2 plots the per-user-base response times for UB1, UB2 and UB3.]

Figure 2. Response time with respect to the Round Robin scheduling technique

Cost (Round Robin):
Total virtual machine cost ($): 100
Total data transfer cost ($): 05
Grand total ($): 105

Data Centre   VM Cost ($)   Data Transfer Cost ($)   Total ($)
DC1           100           05                       105
Overall response time summary using the Equally Spread Current Execution Load algorithm:

                                  Avg (ms)   Min (ms)   Max (ms)
Overall response time             330.45     161.11     565.11
Data centre processing time       0.23       0.02       0.61

[Figure 3 plots the per-user-base response times for UB1, UB2 and UB3.]

Figure 3. Response time with respect to the Equally Spread Current Execution Load scheduling algorithm

Cost (ESCE):
Total virtual machine cost ($): 55
Total data transfer cost ($): 05
Grand total ($): 55

Data Centre   VM Cost ($)   Data Transfer Cost ($)   Total ($)
DC1           50            5                        55
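As a quick cross-check on the figures above (our arithmetic on the reported totals, not additional simulation output):

Cost reduction: (105 - 55) / 105 ≈ 47.6%
Average response time improvement: (358.03 - 330.45) / 358.03 ≈ 7.7%
Average data centre processing time improvement: (0.30 - 0.23) / 0.30 ≈ 23.3%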
The performance of the ESCE algorithm is depicted in Figure 3. Compared with Figure 2, both the overall response time and the data centre processing time are improved. It is also seen that the virtual machine cost and the data transfer cost under the ESCE algorithm are considerably lower than under the Round Robin algorithm. The results show that a gain of around 45-50% has been achieved using the ESCE algorithm.

VI Conclusion

The growth of the IT industry has led to the emergence of new technologies, such as cloud-based operations, for the purpose of sustainability in the silicon market. It is therefore the key challenge of every IT engineer to develop products that can enhance business performance in cloud-based IT sectors. Current strategies lack efficient scheduling and resource allocation techniques, leading to increased operational cost and hence a threat to complete customer satisfaction. This paper therefore aims at the development of enhanced strategies through improved job scheduling and resource allocation techniques to overcome the above-stated issues. Here, the Equally Spread Current Execution Load algorithm dynamically allocates resources to the jobs in the queue, leading to reduced data transfer and virtual machine formation costs. The simulation results further show a cost reduction of up to 50%, thereby leading to improved business performance and retention of total customer satisfaction.
References

[1] Jiyin Li, Meikang Qiu, Jian-Wei Niu, Yu Chen, Zhong Ming, "Adaptive Resource Allocation for Preemptable Jobs in Cloud Systems," IEEE International Conference on Intelligent Systems Design and Applications, 2010, pp. 31-36.
[2] Bahman Javadi, Ruppa K. Thulasiram, and Rajkumar Buyya, "Statistical Modeling of Spot Instance Prices in Public Cloud Environments," 2011 Fourth IEEE International Conference on Utility and Cloud Computing, IEEE Computer Society, 2011, pp. 219-228.
[3] Haiyang Qian and Deep Medhi, "Server Operational Cost Optimization for Cloud Computing Service Providers over a Time Horizon," Proceedings of the 11th USENIX Conference (Hot-ICE'11), USENIX Association, Berkeley, CA, USA, 2011.
[4] Sivadon Chaisiri, Bu-Sung Lee, and Dusit Niyato, "Optimization of Resource Provisioning Cost in Cloud Computing," IEEE Transactions on Services Computing, 2011.
[5] T. R. Gopalakrishnan Nair and Vaidehi M., "Efficient Resource Arbitration and Allocation Strategies in Cloud Computing through Virtualization," 2011 IEEE International Conference on Cloud Computing and Intelligence Systems, Beijing, China, September 2011, pp. 397-401.
[6] Shu-Ching Wang, Kuo-Qin Yan, Shun-Sheng Wang, Ching-Wei Chen, "A Three-Phases Scheduling in a Hierarchical Cloud Computing Network," 2011 Third International Conference on Communications and Mobile Computing, IEEE Computer Society, pp. 114-117.
[7] A. Oprescu and T. Kielmann, "Bag-of-Tasks Scheduling under Budget Constraints," 2010 2nd IEEE International Conference on Cloud Computing Technology and Science, IEEE Computer Society, pp. 351-359.
[8] G. Tesauro, R. Das, W. Walsh, and J. Kephart, "Utility-Function-Driven Resource Allocation in Autonomic Systems," Proceedings of the Second International Conference on Autonomic Computing (ICAC 2005), 2005, pp. 342-343.
[9] G. Tesauro, N. Jong, R. Das, and M. Bennani, "A Hybrid Reinforcement Learning Approach to Autonomic Resource Allocation," ICAC '06: Proceedings of the 2006 IEEE International Conference on Autonomic Computing, IEEE Computer Society, 2006, pp. 65-7.
[10] R. Das, J. Kephart, I. Whalley, and P. Vytas, "Towards Commercialization of Utility-based Resource Allocation," ICAC '06: IEEE International Conference on Autonomic Computing, 2006, pp. 287-290.
[11] J. Almeida, V. Almeida, D. Ardagna, C. Francalanci, and M. Trubian, "Resource Management in the Autonomic Service-Oriented Architecture," ICAC '06: IEEE International Conference on Autonomic Computing, 2006, pp. 84-92.
[12] G. Khanna, K. Beaty, G. Kar, and A. Kochut, "Application Performance Management in Virtualized Server Environments," Network Operations and Management Symposium (NOMS 2006), 10th IEEE/IFIP, 2006, pp. 373-381.