The Application of Bi-Level Programming with Stackelberg Equilibrium in Cloud Computing based on Simplified Swarm Optimization

Wei-Chang Yeh (a,b), Yuan-Ming Yeh (c), Li-Min Lin (b)

(a) Advanced Analytics Institute, University of Technology Sydney, Sydney NSW 2007, Australia
[email protected]
(c) Faculty of Science, University of Sydney, Sydney NSW 2006, Australia
[email protected]
(b) National Tsing Hua University, Hsinchu, Taiwan 300, R.O.C.
Abstract-Cloud computing (CC) is a configurable resource model that can be rapidly provisioned and released with minimal management effort or service provider interaction. In CC services, buyers and sellers always have strong (leader) or weak (follower) influence, and leaders have the authority to make decisions. Considering only how to reduce the buyer's cost or increase the seller's profit does not reflect the economic situation in reality. Hence, a novel model based on Stackelberg leadership is proposed and formulated by bi-level programming (BLP) to analyze such non-cooperative decision relationships in CC. A new method called bi-level simplified swarm optimization (BLSSO) is developed to solve the BLP. This BLSSO uses a dynamic regional search and guarantees global convergence. Computational results show that the proposed BLSSO technique is very competitive and satisfies a number of criteria, including the number of times the best solution is found, the average number of the earliest best solution found, and the total computational time. Thus, both the Stackelberg leadership model and the BLSSO can provide decision makers with a different view on CC.

Keywords-Artificial Intelligence, Decision support, Evolutionary computing and genetic algorithms, Applications and Expert Knowledge-Intensive Systems
I. INTRODUCTION
Cloud computing (CC) is a natural evolution of the widespread adoption of virtualization, service-oriented architecture, and autonomic and utility computing. Cloud computing integrates and computes resources via the Internet, and it thus needs powerful and high-performance computing for the virtual computing cluster. In recent years, numerous universities, vendors, and government organizations have invested in research around the topic of CC for scientific and industrial applications [1,2]. The National Institute of Standards and Technology (NIST) gives a concise and specific definition of CC as "a model for enabling convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction."
A cloud service is broadly divided into three categories: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS). It is sold on demand and fully managed by the provider such that all end-users can have as much or as little of a service as they want at any given time without needing to know about, or have control over, the technology infrastructure "in the cloud" that supports them [1,2]. Lately, efforts have been focused on providing Quality of Service (QoS) guarantees (as required by real-time interactive applications) in CC, i.e., CC must promise continuous service, stable connections, and safe sharing.

With the maturity of cloud computing technology, it can be applied to businesses and commerce, and an important issue is the economic mechanism. In order to achieve fairness in the use of resources, we consider how to meet various needs under a budget. In recent years, studies have been conducted that tackle the economic issues of cloud computing. A number of studies have investigated the need for cost cuts and effective business processes. However, there is no general consensus on cloud computing transactions.

When decision makers are in conflict with one another, a decentralized decision-making problem arises. The decentralized decision-making should be in accordance with the departments of organizations and should form a kind of hierarchical structure. The decision makers' objectives are independent and they may have some conflict of profit; therefore, multilevel programming (MLP) is needed to find a solution. Bi-level programming (BLP), a type of MLP, is a mathematical model that can be used to solve decentralized decision-making problems. The main characteristic of this model is that decision makers have their own independent objective function at each level in the hierarchy and control a selection of decision variables.

Cloud computing services need to deal with tens of thousands of jobs at the same time and return information to users in a timely manner. Therefore, the Stackelberg equilibrium solution should be found by an effective algorithm to minimize the total run-time on limited resources. We propose applying our modified simplified swarm optimization (SSO) algorithm to find the Stackelberg equilibrium solution. This algorithm uses a dynamic regional search method to
ensure global convergence, and it can independently escape from a local region to enhance the solution quality. The main objective of this study is to investigate in depth a bi-level cloud computing transaction system and to construct a resource distribution decision model. Choosing a partner is an essential process in cloud computing transaction systems, and we propose a method of systematization. A partner model is constructed to fit the customer's demands effectively; this would program the optimal cloud computing transaction system and product decisions. By supporting the decision maker, the evaluation results of the decision-making program are obtained effectively and quickly.
II. THE STACKELBERG LEADERSHIP MODEL
Game theory studies the strategic interaction among players that have alternatives to choose from [3,4,5]. Strategic interaction refers to the situation where a player's optimal choice depends on the optimal choice of other players, and vice versa. It is assumed that each player will know the equilibrium strategies of the other players, and that no player has anything to gain by changing only his/her own strategy unilaterally.
The Stackelberg leadership model is a strategic game in economics in which the leader firm moves first, after which the follower firms move sequentially [3,4,5]. If we consider the two-person game problem, the leader has the right to make the first decision, and then the follower must optimize their performance within the leader's strategy. The leader and follower have their own decision variables and objective functions, and the leader can only influence (rather than dictate) the reactions of the follower through their own decision variables, while the follower has full authority to decide how to optimize their objective function in view of the decisions of the leader.
The concept of Stackelberg equilibrium can be applied to cloud computing because vendors and service providers always play a strong or weak role. Service providers require hardware architecture platforms for information storage and computation, and they also need to provide services to users. Hardware vendors want to maximize their profit, but service providers want to minimize their operation costs; therefore, the developed model consists of a decentralized planning system in which the upper level is the leader and the lower level is the objective of the follower. We have used a cloud computing collaboration function to obtain the best resource distribution. Hardware vendors are the leaders having the power to control and determine hardware capacity. These vendors have a strong influence over the service provider's behavior. Service providers can select different service solutions for users according to their own characteristics and on the basis of the costs provided to them by different users.
The following notation is used:

n : the number of hardware vendors.
m : the number of service providers.
x_{ij} : the hardware capacity that the i-th hardware vendor provides to the j-th service provider.
p_{ij} : the hardware capacity price that the i-th hardware vendor offers to the j-th service provider.
Y_i : the total hardware capacity of the i-th hardware vendor.
C_i(Y_i) : the unit cost of the i-th hardware vendor to purchase hardware capacity Y_i.
W_i : the capacity constraint of the i-th hardware vendor.
A_j : the total hardware capacity that the j-th service provider requires.
s_{ij} : the hardware capacity unit cost that the j-th service provider orders from the i-th hardware vendor.
T_{ij} : the unit transmission cost of hardware capacity delivered from the i-th hardware vendor to the j-th service provider through the network.
h_j : the unit holding cost of the j-th service provider.

The problem and model discussed in this study can be formulated as follows:

Objective function of the leader (the i-th hardware vendor):
  Max  \sum_{j=1}^{m} p_{ij} x_{ij} - C_i(Y_i) Y_i                                   (1)

Objective function of the follower (the j-th service provider):
  Min  \sum_{i=1}^{n} \left( p_{ij} x_{ij} + s_{ij}(x_{ij}) x_{ij} + T_{ij}(x_{ij}) x_{ij} + h_j x_{ij} \right)   (2)

subject to

  \sum_{j=1}^{m} x_{ij} \le Y_i ,  i = 1, 2, ..., n                                  (3)
  Y_i \le W_i ,  i = 1, 2, ..., n                                                    (4)
  A_j \le \sum_{i=1}^{n} x_{ij} ,  j = 1, 2, ..., m                                  (5)
  0 \le Y_i ,  0 \le x_{ij} ,  i = 1, 2, ..., n ,  j = 1, 2, ..., m                  (6)

The objective functions of the leader and the follower are given by Eq. (1) and Eq. (2); Eqs. (3)-(6) are the constraints. Eq. (3) constrains the total hardware capacity provided to all service providers, because the i-th hardware vendor cannot exceed the total space of its hardware capacity. Eq. (4) constrains the total hardware capacity of the i-th hardware vendor, which cannot exceed the capacity constraint of the i-th hardware vendor. Eq. (5) constrains the hardware capacity that the hardware vendors provide to meet or exceed the demand of every service provider. Eq. (6) is a nonnegativity constraint.
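To make the leader-follower interaction in Eqs. (1)-(6) concrete, the following sketch evaluates both objective functions and checks feasibility for a small instance. The instance data (two vendors, two providers) and the constant unit costs assumed for C_i, s_ij, and T_ij are illustrative only and are not taken from the paper's experiments.

```python
import numpy as np

# Illustrative instance: n = 2 hardware vendors, m = 2 service providers.
# All numbers below are assumed for demonstration only.
p = np.array([[4.0, 5.0], [4.5, 4.0]])   # p_ij: price vendor i charges provider j
s = np.array([[0.3, 0.2], [0.25, 0.4]])  # s_ij: ordering unit cost (assumed constant in x)
T = np.array([[0.1, 0.2], [0.15, 0.1]])  # T_ij: transmission unit cost (assumed constant in x)
h = np.array([0.05, 0.07])               # h_j: holding cost of provider j
W = np.array([120.0, 100.0])             # W_i: capacity limit of vendor i
A = np.array([60.0, 70.0])               # A_j: demand of provider j
c_unit = np.array([2.0, 2.2])            # assumed unit purchase cost, C_i(Y_i) = c_unit[i]

def leader_profit(x, Y):
    """Eq. (1): revenue of each vendor minus its capacity purchase cost."""
    revenue = (p * x).sum(axis=1)        # sum_j p_ij * x_ij for each vendor i
    purchase_cost = c_unit * Y           # C_i(Y_i) * Y_i
    return revenue - purchase_cost       # one profit value per vendor (leader level)

def follower_cost(x):
    """Eq. (2): total cost of each service provider j."""
    return ((p + s + T + h[None, :]) * x).sum(axis=0)

def feasible(x, Y, tol=1e-9):
    """Constraints (3)-(6)."""
    return (np.all(x.sum(axis=1) <= Y + tol) and      # (3) vendor i cannot sell more than Y_i
            np.all(Y <= W + tol) and                  # (4) Y_i bounded by W_i
            np.all(x.sum(axis=0) >= A - tol) and      # (5) provider demand A_j must be met
            np.all(x >= -tol) and np.all(Y >= -tol))  # (6) nonnegativity

# Example: vendors commit capacity Y, providers order capacities x.
Y = np.array([110.0, 90.0])
x = np.array([[40.0, 50.0], [20.0, 20.0]])
print(feasible(x, Y), leader_profit(x, Y), follower_cost(x))
```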
III. BI-LEVEL PROGRAMMING
BLP can solve two major problems. One is special structures within mathematics, such as saddle points and maximum-minimum problems; the other is the Stackelberg equilibrium problem [3-7].
We assume that there is one leader and m followers in a decentralized two-level decision system. Let x and y_i be the decision vectors of the leader and the i-th follower, respectively, i = 1, 2, ..., m. We also assume that the objective functions (without loss of generality, all are to be maximized) of the leader and the i-th follower are F(x, y_1, ..., y_m) and f_i(x, y_1, ..., y_m), respectively, i = 1, 2, ..., m. In addition, let S be the feasible set of the control vector x of the leader, defined by

  S = \{ x \mid G(x) \le 0 \},                                                       (7)

where G is a vector-valued function of the decision vector x and 0 is the null vector. Then, for each decision x chosen by the leader, the feasible set Y of the control array (y_1, ..., y_m) of the followers depends on x, and is represented generally by

  Y(x) = \{ (y_1, y_2, ..., y_m) \mid g(x, y_1, y_2, ..., y_m) \le 0 \},              (8)

where g is a vector-valued function of x and y_i. We assume that the leader first chooses a control vector x \in S and the followers subsequently determine their control array (y_1, y_2, ..., y_m) \in Y(x). The general formulation of a BLP problem is

  \max_{x}  F(x, y_1^*, ..., y_m^*)                                                  (9)

subject to

  G(x) \le 0   (constraint of the leader),                                           (10)

where (x, y_1^*, ..., y_m^*) solves the following problems (i = 1, 2, ..., m):

  \max_{y_i}  f_i(x, y_1, ..., y_m)                                                  (11)
subject to

  g_i(x, y_1, ..., y_m) \le 0   (constraint of the i-th follower).                   (12)

The basic optimality properties of the BLP problem have been extensively discussed [5], and some properties to be stressed here are:
(1) The feasible region (S) is nonconvex, closed, and connected [6,7].
(2) If (x_1, x_2) is an extreme point of the feasible region, then (x_1, x_2) is an extreme point of S [6].
(3) Under nondegeneracy, if a BLP problem is solvable, then at least one optimal solution is an extreme point of S [6].
(4) If the first best solution is feasible, then it is an optimal solution. If the first best solution is infeasible, then there exists a boundary-feasible extreme-point optimal solution [6].
(5) The global solution of the BLP problem may not be a Pareto-optimal solution [6].
(6) The optimal solution bounds of the leader and followers can be easily found [7].

The complexity of solving a nonconvex optimization problem has been widely discussed. Jeroslow was the first to provide a proof of nondeterministic polynomial time (NP) hardness for BLP problems, in 1985 [8]. Since then, various methods for solving BLP problems have been proposed. Lee and Shih [9] and Lan [10] gave descriptions of different categories of BLP problems. Falk studied a general linear maximum-minimum problem, which is a particular case of BLP problems; his algorithm combines linear programming with the branch-and-bound technique [11]. In addition, Konno (1976) used the cutting-plane technique to solve this problem [12]. When computational complexity increases, the Karush-Kuhn-Tucker (KKT) conditions and branch-and-bound methods become less efficient, and it is therefore difficult to solve large problems [13,14]. In fact, although most problems have bi-level characteristics, they can be regarded as single-level (single-objective) programming problems when searching for solutions. The main reason why it is still difficult to solve large BLP problems is the lack of an efficient algorithm; this is the biggest obstacle for BLP problems. "Soft computing is tolerant of imprecision, uncertainty, partial truth, and approximation" [15]. Thus, in order to obtain the global optimal solution for general BLP models, we should design heuristic processes. This study designs SSO to solve the Stackelberg-Nash equilibrium in a BLP problem with multiple followers, in which information may be exchanged among followers.
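Because exact approaches scale poorly, heuristic methods typically treat Eqs. (9)-(12) as a nested search: every candidate leader decision x is scored only after each follower's problem has been solved, at least approximately. The sketch below illustrates this nested evaluation with plain random search at both levels; it is a conceptual illustration under assumed box constraints and toy objectives, not the BLSSO procedure developed in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def follower_best_response(x, f_i, y_dim, n_trials=200):
    """Approximate the i-th follower's problem (11)-(12) by random search:
    maximize f_i(x, y) over feasible y for the fixed leader decision x."""
    best_y, best_val = None, -np.inf
    for _ in range(n_trials):
        y = rng.uniform(0.0, 1.0, size=y_dim)   # assumed box-constrained follower variables
        val = f_i(x, y)
        if val > best_val:
            best_y, best_val = y, val
    return best_y

def evaluate_leader(x, F, followers, y_dim):
    """Evaluate the leader objective (9) only after every follower has reacted."""
    ys = [follower_best_response(x, f_i, y_dim) for f_i in followers]
    return F(x, ys)

def nested_random_search(F, followers, x_dim, y_dim, n_trials=100):
    """Naive bi-level search: sample leader decisions, score each one through
    the followers' best responses, and keep the best leader decision found."""
    best_x, best_val = None, -np.inf
    for _ in range(n_trials):
        x = rng.uniform(0.0, 1.0, size=x_dim)   # assumed box-constrained leader variables
        val = evaluate_leader(x, F, followers, y_dim)
        if val > best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Toy example (assumed): one leader variable, two followers with quadratic objectives.
followers = [lambda x, y: -np.sum((y - x) ** 2),
             lambda x, y: -np.sum((y - 0.5 * x) ** 2)]
F = lambda x, ys: float(np.sum(x) - sum(np.sum(y) for y in ys))
print(nested_random_search(F, followers, x_dim=1, y_dim=2))
```

The nested loop makes the computational burden of BLP explicit: every leader evaluation pays for a full lower-level optimization, which is why a fast, simple update rule such as the one in SSO is attractive for the upper and lower searches.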
IV. THE INTRODUCTION OF SSO
In this study, we propose an improved SSO to deal with bi-level problems. First, the original SSO is described below. SSO is an emerging population-based stochastic optimization method [16-19]. It belongs to the category of swarm intelligence (SI) and is also an evolutionary computation method. The advantages of SSO are simplicity (easy to implement, with just a few parameters to tune), efficiency (a fast convergence rate), and flexibility [16-19]. The proposed SSO is based on the SSO proposed by Yeh in [16]. Before discussing the proposed SSO, we formally introduce SSO in this section. In SSO, each solution is encoded as a finite-length string with a fitness value. Like most soft-computing methods, SSO is initialized with a population of random solutions inside the problem space and then searches for optimal solutions by updating generations. Let pBest P_i = (p_i1, p_i2, ..., p_iNvar) be the best solution found by the i-th solution in its own history, for i = 1, 2, ..., pop, and let gBest G = (g_1, g_2, ..., g_Nvar) be the solution with the best fitness function value among all pBests, where Nvar is the number of variables and pop is the population size. The fundamental concept of SSO is that each variable of any solution is updated to a value related to its current value, its current pBest (as a local search), the gBest (as a global search), or a random feasible value to maintain population diversity and enhance the capacity of escaping from a local optimum. The update mechanism of SSO is based only on the following simple rule after C_w, C_p, and C_g are given [16]:
  x_{ij}^{t+1} =
  \begin{cases}
    x_{ij}^{t}, & \text{if } \rho_{ij} \in [0, C_w) \\
    p_{ij},     & \text{if } \rho_{ij} \in [C_w, C_p) \\
    g_{j},      & \text{if } \rho_{ij} \in [C_p, C_g) \\
    x,          & \text{if } \rho_{ij} \in [C_g, 1)
  \end{cases}

where \rho_{ij} is a uniform random number in [0, 1) generated for the j-th variable of the i-th solution, and x is a random feasible value.
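A minimal sketch of this update step for real-valued variables is shown below; the parameter values, box bounds, and population setup are illustrative assumptions rather than the settings used in the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(42)

def sso_update(X, P, G, lower, upper, Cw=0.4, Cp=0.7, Cg=0.9):
    """One SSO generation: each variable keeps its current value, copies its
    pBest component, copies the gBest component, or jumps to a random feasible
    value, depending on where a uniform random number rho falls relative to
    the thresholds Cw < Cp < Cg."""
    pop, nvar = X.shape
    rho = rng.random((pop, nvar))
    rand_vals = rng.uniform(lower, upper, size=(pop, nvar))
    X_new = np.where(rho < Cw, X,                 # keep the current value
            np.where(rho < Cp, P,                 # move toward pBest (local search)
            np.where(rho < Cg, G[None, :],        # move toward gBest (global search)
                     rand_vals)))                 # random feasible value (diversity)
    return X_new

# Example usage on a 5-solution, 3-variable population with bounds [0, 10].
lower, upper = 0.0, 10.0
X = rng.uniform(lower, upper, size=(5, 3))
P = X.copy()                 # pBests start as the initial solutions
G = P[0].copy()              # assume the first solution is the current gBest
print(sso_update(X, P, G, lower, upper))
```

In the bi-level setting, an update of this form would be applied to the leader's and the followers' decision vectors in turn, with each pBest and the gBest refreshed after the corresponding fitness evaluations.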