
Volume 5, Issue 02, February 2017, Pages 6192-6198, ISSN(e): 2321-7545
Website: http://ijsae.in | Index Copernicus Value: 56.65 | DOI: http://dx.doi.org/10.18535/ijsre/v5i02.01

Load Balancing Using a Switch Mechanism

Author: Prof. Arvind U. Jadhav
Computer Science & Engineering, PES College of Engineering, Aurangabad
Email: [email protected]

ABSTRACT
Load balancing makes cloud computing more efficient and improves user satisfaction, so the cloud computing environment has an important impact on load balancing performance. This paper presents a better load balancing model for the public cloud, based on the cloud partitioning concept, that uses a switch mechanism to choose different strategies for different situations. The algorithm applies game theory to the load balancing strategy, which improves efficiency in the public cloud environment.

1. INTRODUCTION
1.1 Cloud Computing
Cloud computing is now widely used across the field of computer science. Gartner's report says that the cloud will bring changes to the IT industry; as the cloud improves, it changes our lives by providing users with new types of services. Users get service from a cloud without paying attention to the details. NIST defines cloud computing as a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction. Cloud computing refers to applications and services that run on a distributed network using virtualized resources and that are accessed by common Internet protocols and networking standards. Large groups of remote servers are networked to allow centralized data storage and online access to computer services or resources. Clouds can be classified as public, private, or hybrid.

Abstraction: Cloud computing abstracts the details of system implementation from users and developers. Applications run on physical systems that are not specified, data is stored in unknown locations, administration of systems is outsourced to others, and access by users is ubiquitous.

2. LITERATURE SURVEY
Before developing the tool, it is necessary to determine the time factor, the economy, and the company's strength. The next step is to determine which operating system and language can be used for developing the tool. These considerations are taken into account before building the proposed system.

2.1 Existing System
Processing so many jobs in the cloud computing environment is a complex problem, and as the workload on the public cloud increases, load balancing becomes crucial to improve system performance and maintain stability. It is hard to predict the job arrival pattern, and the capacities of the nodes in the cloud differ. Load balancing schemes are classified as static or dynamic: static schemes do not use system information and are less complex, while dynamic schemes use system information.


Existing Algorithm
Step 1: The load balancer maintains an index table of VMs and the number of requests currently allocated to each VM. At the start, all VMs have 0 allocations.
Step 2: When a request to allocate a new VM arrives from the Data Center Controller, the load balancer parses the table and identifies the least loaded VM. If there is more than one, the first one identified is selected.
Step 3: The load balancer returns the VM ID to the Data Center Controller.
Step 4: The Data Center Controller sends the request to the VM identified by that ID.
Step 5: The Data Center Controller notifies the load balancer of the new allocation.
Step 6: The load balancer updates the allocation table, incrementing the allocation count for that VM.
Step 7: When the VM finishes processing the request and the Data Center Controller receives the response cloudlet, it notifies the load balancer of the VM de-allocation.
Step 8: The load balancer updates the allocation table by decrementing the allocation count for that VM by one.
Step 9: Continue from Step 2.

2.2 Proposed System
In the proposed system, the main controller and the balancers gather and analyze the information together, so the dynamic control has little influence on the other working nodes and on the system status, and it provides a basis for choosing the right load balancing strategy. The load balancing model aims at the public cloud, which has numerous nodes with distributed computing resources in many different geographic locations. The model therefore divides the public cloud into several cloud partitions; when the environment is very large and complex, these divisions simplify the load balancing. The cloud has a main controller that chooses the suitable partition for each arriving job, while the balancer for each cloud partition chooses the best load balancing strategy.

3. SYSTEM DEVELOPMENT
3.1 System Model
The cloud environment consists of numerous nodes and is partitioned into n clusters based on a cloud clustering technique. Our proposed model consists of a main controller that controls the load balancer in each cloud cluster. The main controller maintains all the details, including an index table with the current status information of every load balancer in each cluster. The index table consists of both static parameters (number of CPUs, processing speed, memory size, etc.) and dynamic parameters (network bandwidth, CPU and memory utilization ratio). Every load balancer refreshes its status every period T. When a job arrives at the main controller, the first step is to choose the correct cloud cluster, following this algorithm (a minimal sketch appears after the list):
1. The index table is initialized to 0 for all nodes connected to the central controller in the cloud environment.
2. When the controller receives a new request from a client, it queries the load balancer of each cluster for the job allocation.
3. The controller parses the index table to find the next available node with the least weight. If one is found, processing continues. If none is found, the index table is reinitialized to 0 and the controller parses the table again to find the next available node.
4. After the process finishes, the load balancer updates the status in the allocation table maintained by the controller.
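Here is the minimal sketch referenced above, assuming a dictionary-based index table; the class and method names, and the interpretation of "weight" as a running allocation count, are ours rather than the paper's.

```python
# Minimal sketch of the controller's index-table lookup (Section 3.1).
# Node names and the weight-as-allocation-count reading are assumptions.
class IndexTable:
    def __init__(self, nodes):
        # Step 1: every node connected to the controller starts at weight 0.
        self.weights = {node: 0 for node in nodes}

    def choose_node(self):
        # Step 3: find the next available node with the least weight.
        node = min(self.weights, key=self.weights.get)
        self.weights[node] += 1          # record the new allocation
        return node

    def release(self, node):
        # Step 4: the load balancer reports completion back to the table.
        self.weights[node] -= 1

table = IndexTable(["node-1", "node-2", "node-3"])
print(table.choose_node())   # -> "node-1" (all weights equal, first wins)
print(table.choose_node())   # -> "node-2"
```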


Figure 3.1: System Model

Proposed Algorithm: Enhanced Load Balancing Algorithm using Efficient Cloud Management System
Step 1: Initially every VM's status is 0, as all the VMs are available. The Cloud Manager in the data center maintains a data structure comprising the Job ID, VM ID, and VM Status.
Step 2: When there is a queue of requests, the Cloud Manager parses the data structure to identify the least utilized VM for allocation. If more than one VM is available, the VM with the least hop time is selected.
Step 3: The Cloud Manager updates the data structure automatically after allocation.
Step 4: The Cloud Manager periodically monitors the status of the VMs to assess the distribution of the load; if an overloaded VM is found, the Cloud Manager migrates the load of the overloaded VM to an underutilized VM.
Step 5: The decision of selecting the underutilized VM is based on the hop time; the VM with the least hop time is chosen.
Step 6: The Cloud Manager updates the data structure by modifying the entries accordingly on a periodic basis.
Step 7: The cycle repeats from Step 2.

3.2 Design Model
It is evident from our survey that none of the existing load balancing techniques is effective at avoiding deadlocks among the VMs. Hence, a load balancing algorithm that avoids deadlock by incorporating migration has been proposed. The figure depicts the design of the cloud architecture for this approach. In this design, various users submit their diverse applications to the cloud service provider through a communication channel. The Cloud Manager in the cloud service provider's data center is the prime entity that distributes the execution load among all the VMs by keeping track of the status of each VM. The Cloud Manager maintains a data structure containing the VM ID, the Job ID of the jobs to be allocated to the corresponding VM, and the VM Status, which represents the percentage of utilization, to keep track of the load distribution. The Cloud Manager allocates the resources and distributes the load as per the data structure, and it analyzes the VM status routinely to distribute the execution load evenly. In the course of processing, if any VM is overloaded, its jobs are migrated to an underutilized VM found by consulting the data structure. If more than one VM is available, the assignment is based on the least hop time. On completion of the execution, the Cloud Manager automatically updates the data structure. A sketch of this allocation and migration logic appears below.
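The following is a compact sketch of the Cloud Manager's allocation and migration logic described above. It is an illustration under assumptions: the record fields mirror Steps 1-6, but the 80% overload threshold, the hop-time values, and the helper names are ours.

```python
# Sketch of the Enhanced Load Balancing Algorithm above (Steps 1-7).
# Each record holds a VM's utilization (the "VM Status" percentage),
# an assumed hop time, and its job list; the 80% threshold is ours.
vms = {
    "vm1": {"utilization": 85, "hop_time": 4, "jobs": ["j1", "j2"]},
    "vm2": {"utilization": 20, "hop_time": 2, "jobs": ["j3"]},
    "vm3": {"utilization": 20, "hop_time": 5, "jobs": []},
}

def least_utilized_vm():
    # Step 2 / Step 5: least utilized VM, ties broken by least hop time.
    return min(vms, key=lambda v: (vms[v]["utilization"], vms[v]["hop_time"]))

def rebalance(overload_threshold=80):
    # Step 4: migrate load off any overloaded VM (bookkeeping only here).
    for name, vm in vms.items():
        if vm["utilization"] > overload_threshold and vm["jobs"]:
            target = least_utilized_vm()
            if target != name:
                job = vm["jobs"].pop()           # take one job off the hot VM
                vms[target]["jobs"].append(job)  # Step 6: structure updated

rebalance()
print(vms["vm2"]["jobs"])  # -> ['j3', 'j2']: vm2 won the hop-time tie
```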


Further, the load balancing approach can be represented using a mathematical model. Let the graph be G = (V, E), where V is the disjoint vertex set represented as V = V1 ∪ V2, in which V1 represents the set of VMs in the data center and V2 represents the set of jobs received, and E is the mapping between the two sets of vertices. The figure depicts the mathematical model.
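Written out explicitly, the bipartite model reads as follows. This is our restatement in standard notation; the edge-set condition and the single-assignment mapping are inferred from the text rather than stated in it.

```latex
% Bipartite assignment model (notation inferred from the text):
% V_1 = set of VMs in the data center, V_2 = set of jobs received.
G = (V, E), \qquad V = V_1 \cup V_2, \qquad V_1 \cap V_2 = \emptyset,
\qquad E \subseteq V_2 \times V_1 .
% An allocation is a mapping f : V_2 \to V_1 that assigns each job to
% exactly one VM, chosen by the Cloud Manager as described above.
```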

Figure 3.2.1: Cloud Architecture for Load Balancing

When a job arrives at the public cloud, the first step is to choose the right partition.

Figure 3.2.4: Job assigning strategy

The cloud partition status can be divided into three types:
Idle: when the percentage of idle nodes exceeds α, change to idle status.
Normal: when the percentage of normal nodes exceeds β, change to normal load status.
Overloaded: when the percentage of overloaded nodes exceeds γ, change to overloaded status.
The parameters α, β, and γ are set by the cloud partition balancers. The main controller has to communicate with the balancers frequently to refresh the status information.
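As a concrete reading of the three rules, here is a minimal sketch, assuming example threshold values and a default status when no threshold is crossed; the paper leaves α, β, and γ to each partition balancer.

```python
# Sketch of the partition-status rules above. The threshold values and
# the NORMAL fallback are assumptions; the text only gives the rules.
IDLE, NORMAL, OVERLOADED = "idle", "normal", "overloaded"

def partition_status(node_statuses, alpha=0.5, beta=0.6, gamma=0.3):
    """Classify a cloud partition from the statuses of its nodes."""
    n = len(node_statuses)
    ratio = lambda s: sum(x == s for x in node_statuses) / n
    if ratio(IDLE) > alpha:
        return IDLE            # idle nodes dominate the partition
    if ratio(NORMAL) > beta:
        return NORMAL          # normal-load nodes dominate
    if ratio(OVERLOADED) > gamma:
        return OVERLOADED      # too many overloaded nodes
    return NORMAL              # assumed default between thresholds

print(partition_status([IDLE, IDLE, IDLE, NORMAL]))  # -> "idle"
```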

When job i arrives at the system, the main controller queries the cloud partition where the job is located. If that partition's status is idle or normal, the job is handled locally; if not, another cloud partition that is not overloaded is found.
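The dispatch rule can be sketched as follows, reusing the status strings from the previous sketch; the partition names and the first-match choice among non-overloaded partitions are illustrative assumptions.

```python
# Sketch of the main controller's dispatch rule for an arriving job.
def choose_partition(job_home, partition_status):
    """Return the partition that should handle a job homed at job_home."""
    if partition_status.get(job_home) in ("idle", "normal"):
        return job_home                  # handle the job locally
    for name, status in partition_status.items():
        if status != "overloaded":
            return name                  # first non-overloaded partition
    return None                          # every partition is overloaded

# Example: the home partition is overloaded, so the job is re-routed.
print(choose_partition("p1", {"p1": "overloaded", "p2": "normal"}))  # -> "p2"
```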


3.3 Workload Balance for the Normal Node Status
When the cloud partition is normal, jobs arrive much faster than in the idle state and the situation is far more complex, so a different strategy is used for the load balancing. Each user wants his jobs completed in the shortest time, so the public cloud needs a method that can complete the jobs of all users with a reasonable response time. To achieve this, game theory is used.

Game theory distinguishes non-cooperative games from cooperative games. In a cooperative game there are several decision makers that cooperate in making decisions such that each of them operates at its optimum; the decision makers have complete freedom of pre-play communication to make joint agreements about their operating points. In the non-cooperative approach, the decision makers are not allowed to cooperate. Each decision maker optimizes its own response time independently of the others, and they all eventually reach an equilibrium. This situation can be viewed as a non-cooperative game among decision makers. The equilibrium is called a Nash equilibrium, and it can be obtained by a distributed non-cooperative policy. At the Nash equilibrium, a decision maker cannot receive any further benefit by changing its own decision. In non-cooperative games each decision maker makes decisions only for his own benefit; the system then reaches the Nash equilibrium, where each decision maker has made the optimized decision. The Nash equilibrium is reached when each player in the game has chosen a strategy and no player can benefit by changing his or her strategy while the other players' strategies remain unchanged.

4. PERFORMANCE ANALYSIS
4.1 Analytical Analysis
In computing, load balancing distributes workloads across multiple computing resources, such as computers, a computer cluster, network links, central processing units, or disk drives. Load balancing aims to optimize resource use, maximize throughput, minimize response time, and avoid overload of any single resource. Using multiple components with load balancing instead of a single component may increase reliability and availability through redundancy. Load balancing usually involves dedicated software or hardware, such as a multilayer switch or a Domain Name System server process.

Load balancing refers to efficiently distributing incoming network traffic across a group of back-end servers, also known as a server farm or server pool. Modern high-traffic websites must serve hundreds of thousands, if not millions, of concurrent requests from users or clients and return the correct text, images, video, or application data, all in a fast and reliable manner. To cost-effectively scale to meet these volumes, modern computing best practice generally requires adding more servers.

Load Balancing Algorithms
Different load balancing algorithms provide different benefits, and the choice of method depends on your needs (sketches of all three follow):
Round Robin – Requests are distributed across the group of servers sequentially.
Least Connections – A new request is sent to the server with the fewest current connections to clients, with the relative computing capacity of each server factored into determining which one has the least connections.
IP Hash – The IP address of the client is used to determine which server receives the request.
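To make the three policies concrete, here is a minimal sketch; the server names, the capacity weights in least-connections, and the MD5-based hash are illustrative assumptions, not any particular load balancer's implementation.

```python
import hashlib
import itertools

servers = ["s1", "s2", "s3"]

# Round Robin: hand out servers in a fixed cyclic order.
_rr = itertools.cycle(servers)
def round_robin():
    return next(_rr)

# Least Connections: pick the server with the fewest active connections
# relative to its capacity (a weighted least-connections reading).
active = {"s1": 4, "s2": 1, "s3": 2}
capacity = {"s1": 2.0, "s2": 1.0, "s3": 1.0}   # relative computing power
def least_connections():
    return min(servers, key=lambda s: active[s] / capacity[s])

# IP Hash: the same client IP always maps to the same server.
def ip_hash(client_ip):
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

print(round_robin(), least_connections(), ip_hash("192.0.2.7"))
```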
4.2 Computational Analysis
Here we consider as a distributed Web-server system any architecture consisting of multiple Web-server hosts with some mechanism to spread the incoming client requests among the servers. The nodes may be locally or geographically distributed (LAN and WAN Web-server systems, respectively). A distributed Web-server system needs to appear as a single host to the outside world, so that users need not be concerned about the names or locations of the replicated servers.


Table 4.1: Comparing the Existing and Proposed Systems

Parameter    | Existing System                                  | Our System
-------------|--------------------------------------------------|--------------------------------------------------------------------------
Scalability  | Difficult to scale due to its complex structure. | Scalable; a new server can be added at any time.
Availability | Depends on the availability of the server.       | If a server is not available, the task is handed over to another server.
Transparency | The user is unaware of the internal working.     | User involvement makes the system transparent.
Data storage | The user is unaware of where data is stored.     | The user is informed at which server the data is stored.
Flexibility  | Has flexibility issues.                          | More flexible than the existing system, because servers can be added or removed easily.

5. CONCLUSION
Cloud computing is a promising technology with a pay-as-you-go model that provides the required resources to its clients. Virtualization is one of its core characteristics: the factors that modulate business performance, such as IT resources, hardware, software, and operating systems, can be virtualized in the cloud computing platform. Because the cloud has the characteristics of distributed computing, deadlocks can occur during the resource allocation process. Load balancing is implemented in the cloud computing environment to provide on-demand resources with high availability. However, the existing load balancing approaches suffer from various overheads and fail to avoid deadlocks when more requests compete for the same resource than the available resources can service. The enhanced load balancing approach using the efficient cloud management system is proposed to overcome these limitations. The proposed approach will be evaluated in terms of response time, and also by considering the hop time and wait time during the migration process of the load balancing approach, to show that deadlocks are avoided. The proposed work thereby improves the business performance of the cloud service provider by achieving these objectives.

REFERENCES
1. Shu-Ching Wang, Kuo-Qin Yan, Wen-Pin Liao, Shun-Sheng Wang, "Towards a Load Balancing in a Three-level Cloud Computing Network", 2010 IEEE, pp. 108-113.
2. Vlad Nae, Radu Prodan, Thomas Fahringer, "Cost-Efficient Hosting and Load Balancing of Massively Multiplayer Online Games", 11th IEEE/ACM International Conference on Grid Computing, 2010, pp. 9-16.
3. Microsoft Academic Research, Cloud computing, http://libra.msra.cn/Keyword/6051/cloud-computing?query=cloud%20computing, 2012.
4. B. Adler, Load balancing in the cloud: Tools, tips and techniques, http://www.rightscale.com/info_center/white-papers/Load-Balancing-in-the-Cloud.pdf, 2012.
5. K. Nishant, P. Sharma, V. Krishna, C. Gupta, K. P. Singh, N. Nitin, and R. Rastogi, "Load balancing of nodes in cloud using ant colony optimization", in Proc. 14th International Conference on Computer Modelling and Simulation (UKSim), Cambridgeshire, United Kingdom, Mar. 2012, pp. 28-30.
6. A. Rouse, Public cloud, http://searchcloudcomputing.techtarget.com/definition/public-cloud, 2012.


7. R. Hunter, The why of cloud, http://www.gartner.com/DisplayDocument?doc_cd=226469&ref=g_noreg, 2012.

Prof. Arvind U. Jadhav is currently working as an Assistant Professor in Computer Science & Engineering, PES College of Engineering, Aurangabad. Email: [email protected]
