Fast Quality Driven Selection of Composite Web Services

Jae-Ho Jang, Dept. of Computer Science, Yonsei Univ., Seoul, Korea, [email protected]

Dong-Hoon Shin, Dept. of Computer Science, Yonsei Univ., Seoul, Korea, [email protected]

Kyong-Ho Lee, Dept. of Computer Science, Yonsei Univ., Seoul, Korea, [email protected]
Abstract
The composite Web service selection is the process of building an executable composite Web service. Generally, the selection process involves decision making in terms of non-functional properties of Web services such as QoS requirements. Since the number of Web services is rapidly increasing and the QoS of the Web services environment changes dynamically, fast selection is important. This paper presents a fast method for quality driven composite Web services selection based on a workflow partition strategy. The proposed method partitions an abstract workflow into two sub-workflows to decrease the number of candidate services that should be considered. Mixed integer linear programming is utilized for solving the service selection problem. Experimental results show that the partition strategy performs faster than the optimal strategy, while the qualities of the selected composite Web services are not significantly different from those of the optimal one.

1. Introduction
Web services are becoming a new paradigm for future software in a loosely coupled, distributed, and heterogeneous environment. Web services enable business applications on heterogeneous platforms to interact with each other in a standardized way. Recently, with the widespread adoption of SOA (Service-Oriented Architecture), there is a growing demand to realize complex and large scale business processes by composing Web services. A composite Web service [1] is made up of multiple Web services that interact with one another and provides complex functionality, which cannot be realized by a single service. The logic of a composite Web service can be specified by an abstract workflow, where functional modules (e.g., constituent Web services) and their dependencies (e.g., flow of control)

are represented by tasks and transitions, respectively. An executable composite Web service can be formed by binding the tasks of its abstract workflow to Web services that fulfill their required functionalities. As the number of available Web services dramatically increases, many Web services with identical functionality but different quality of service (QoS) may exist. The QoS of a composite Web service is therefore determined by the QoS of its constituent Web services. The composite Web service selection is the process of building an executable composite Web service that satisfies a given QoS requirement of an abstract workflow by selecting appropriate Web services. Considering the QoS properties of composite Web services can be a key enabler of business success. For example, a process designer may want to design a large scale business process with a limited budget, or service customers may want to retrieve their results within a certain time. In the absence of an automated QoS-aware composition system, all the available services must be considered manually to meet such requests, increasing the cost and time needed for service development. QoS-aware systems can also be utilized in post-deployment processes such as monitoring and adaptation. Previously, some efforts have been made to incorporate end-to-end QoS analysis methods into the process development and management stages. Cardoso et al. [3] developed a QoS model and QoS computation algorithms for the METEOR workflow system [8]. Specifically, they were interested in the prediction of the QoS of the workflow system, and their experiment was carried out with a genomic project scenario. AgFlow, a middleware system that enables QoS-aware service composition, has been proposed by Zeng et al. [7]. In general, a QoS requirement consists of one or more constraints. A constraint may be classified as either a local constraint on an individual task or a global constraint on an entire process.

Table 1. QoS classification

Category    | Example quality criteria | Representative value | Definition
Time        | Latency                  | Maximum              | The time required for preparing a service (e.g., deployment) and the subsequent processing of a request.
Cost        | Cost                     | Maximum              | The cost involved in executing a service.
Probability | Reliability              | Minimum              | The probability that a service will successfully perform its required functions.
Probability | Availability             | Minimum              | The probability that a service will be available.
Capacity    | Throughput               | Minimum              | The number of successful service request completions in a given period.
Capacity    | Capacity                 | Minimum              | The number of concurrent requests that can be processed with guaranteed performance.

A selection process that involves more than one quality criterion and requires global optimization can be regarded as a combinatorial optimization problem, which in general cannot be solved in polynomial time. Since the Web services environment has highly dynamic QoS and the number of Web services is rapidly increasing, the fast selection of composite Web services is particularly important. This paper presents a fast method for quality driven composite Web services selection based on a workflow partition strategy. The proposed method partitions an abstract workflow into two sub-workflows to decrease the number of possible combinations of candidate services that should be considered. The QoS requirement is also decomposed for each partitioned workflow while satisfying the given QoS requirement of the entire workflow. Since the decomposition of a QoS requirement is based on heuristics, a selection might fail to find composite Web services from the partitioned workflows. To avoid such failures, the tightness of a QoS requirement is defined and the decision to partition a workflow is made according to the tightness of the QoS requirement. Finally, composite services are selected by using a mixed integer linear programming (MILP) method [2]. The remainder of this paper is organized as follows. A QoS model for composite Web services is introduced in Section 2. In Section 3, a detailed explanation of the proposed method is given. Experimental results are discussed in Section 4. Finally, Section 5 summarizes the conclusions and discusses opportunities for future work.

2. QoS model for composite Web Services Generally, QoS is expressed as a range of values. To simplify the selection problem, the worst case value of the range was regarded as the representative value of

the QoS. This also ensures that a request will be satisfied while the QoS varies within the range. For example, if the latency of a given service varies from 10ms to 20ms, its latency is regarded as 20ms. As shown in Table 1, QoS was classified into four categories and a representative value was defined for each category. The proposed method only considers the quality criteria described in Table 1. However, it is possible to accommodate other quality criteria belonging to one of these categories. The QoS of a composite Web service is computed by aggregating the QoS of its constituent Web services. The QoS information of constituent Web services can be retrieved from a QoS broker. Service providers should monitor and elicit QoS metrics and register their QoS characteristics with the broker. The monitoring and elicitation of QoS metrics may be domain- or system-dependent and is out of the scope of this paper. The proposed method defines four general workflow structures for QoS aggregation as shown in Figure 1. The sequential structure consists of tasks that are executed in sequential order, i.e., from t1 to tn. In the parallel structure, tasks are executed concurrently and their results are synchronized. The arc on the transitions indicates that the corresponding control constructs are 'and-split' or 'and-join'. On the other hand, the conditional structure is composed of 'xor-split' and 'xor-join'. Namely, only one task among t1, …, tn is executed, where pi, with $\sum_{i=1}^{n} p_i = 1$, is the probability

that task ti will be executed. A loop is eliminated by duplicating the structure as many times as its maximum execution number. It was assumed that the maximum execution number can be obtained from a past execution log. The QoS of each workflow structure is computed by applying QoS aggregation rules as shown in Table 2. Here, Time(ti), Cost(ti), Probability(ti), and Capacity(ti) represent the qualities of four categories, respectively.

Table 2. QoS aggregation rules

Structure   | Time                                  | Cost                                  | Probability                                  | Capacity
Sequential  | $\sum_{i=1}^{n} Time(t_i)$            | $\sum_{i=1}^{n} Cost(t_i)$            | $\prod_{i=1}^{n} Probability(t_i)$           | $\min_{i} Capacity(t_i)$
Parallel    | $\max_{i} Time(t_i)$                  | $\sum_{i=1}^{n} Cost(t_i)$            | $\prod_{i=1}^{n} Probability(t_i)$           | $\min_{i} Capacity(t_i)$
Conditional | $\sum_{i=1}^{n} Time(t_i) \times p_i$ | $\sum_{i=1}^{n} Cost(t_i) \times p_i$ | $\sum_{i=1}^{n} Probability(t_i) \times p_i$ | $\sum_{i=1}^{n} Capacity(t_i) \times p_i$

The aggregation rules are similar to those of the method proposed by Cardoso et al., except that qualities belonging to the capacity category are also considered. In the case of a nested structure, the aggregation rules are applied recursively from the innermost workflow structure, so that the QoS of the entire workflow can be formulated.

Figure 1. Four workflow structures.
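To make the aggregation rules of Table 2 concrete, the following Python sketch recursively evaluates the QoS of a nested workflow. The nested-tuple representation of workflows and all names in it are our own illustration of the rules, not an implementation from the paper.

```python
from functools import reduce

# A workflow node is either a leaf task dict {"time", "cost", "prob", "cap"}
# or a tuple (kind, children), kind in {"seq", "par", "cond"}.
# For "cond", children are (probability, node) pairs with probabilities summing to 1.

def aggregate(node):
    """Recursively apply the aggregation rules of Table 2 (illustrative representation)."""
    if isinstance(node, dict):                      # a leaf task: its own QoS
        return dict(node)
    kind, children = node
    if kind == "seq":
        qos = [aggregate(c) for c in children]
        return {
            "time": sum(q["time"] for q in qos),
            "cost": sum(q["cost"] for q in qos),
            "prob": reduce(lambda a, b: a * b, (q["prob"] for q in qos), 1.0),
            "cap":  min(q["cap"] for q in qos),
        }
    if kind == "par":
        qos = [aggregate(c) for c in children]
        return {
            "time": max(q["time"] for q in qos),    # the slowest branch dominates
            "cost": sum(q["cost"] for q in qos),
            "prob": reduce(lambda a, b: a * b, (q["prob"] for q in qos), 1.0),
            "cap":  min(q["cap"] for q in qos),
        }
    if kind == "cond":                              # expected values over the branches
        qos = [(p, aggregate(c)) for p, c in children]
        return {key: sum(p * q[key] for p, q in qos) for key in ("time", "cost", "prob", "cap")}
    raise ValueError(f"unknown structure: {kind}")

# Hypothetical example: t1 followed by a parallel block of two tasks.
t = lambda time, cost, prob, cap: {"time": time, "cost": cost, "prob": prob, "cap": cap}
wf = ("seq", [t(20, 5, 0.99, 10), ("par", [t(30, 3, 0.98, 8), t(25, 4, 0.97, 12)])])
print(aggregate(wf))   # {'time': 50, 'cost': 12, 'prob': ~0.941, 'cap': 8}
```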

3. The proposed method for the fast selection of composite Web Services
In this section, the proposed composite Web services selection method is described. The proposed method consists of four phases: computing the tightness of a given QoS requirement, partitioning a workflow, decomposing the QoS requirement, and selecting services based on MILP.

3.1. Computing the tightness of a QoS requirement As the first step, the tightness of a given QoS requirement is computed. The tightness represents the

restrictiveness of the QoS requirement. Intuitively, as the requested level of QoS is increased, the number of composite Web services that meet the required QoS decreases. Accordingly, the tightness is computed based on the difference between the QoS requirement and the best QoS that can be realized by the candidate services. If the given QoS requirement is too tight, the proposed method might fail to find composite Web services that meet it. Therefore, the workflow is partitioned only when the tightness is less than a threshold THtightness. Before computing the tightness of a QoS requirement, the best QoS for each quality criterion is computed. For this purpose, a temporal composite Web service is built for each quality criterion by selecting, for each task, the service that has the best quality among its candidate services. Similarly, the worst QoS can be computed by selecting the services with the worst quality. Clearly, if a requested QoS is higher than the best QoS, the requirement cannot be met. Also, if a requested QoS is lower than the worst QoS, the requirement is always satisfied. The tightness of the constraint for each quality criterion q is computed by

$$\text{TightnessOfConstraint}(q) = \frac{Const(q) - WorstQoS(q)}{BestQoS(q) - WorstQoS(q)}, \quad (1)$$

where Const(q), BestQoS(q), and WorstQoS(q) denote the given constraint, the best QoS, and the worst QoS for the quality criterion q, respectively. The tightness of a QoS requirement that consists of n constraints is computed by

$$\text{TightnessOfRequirement} = \frac{\sum_{i=1}^{n} \text{Tightness}_i / i^2}{\sum_{i=1}^{n} 1 / i^2}, \quad (2)$$

where Tightness_i is the i-th largest value among the TightnessOfConstraint(q) values. Generally, the tightness of the entire requirement is dominated by the tightest constraint. Thus, the tightness of each constraint is divided by the square of its rank to adjust its impact on the tightness of the QoS requirement. The tightness of a requirement is compared with the threshold THtightness to determine whether or not to partition the workflow. The threshold is dynamically updated based on a log file that contains records of selection failures. Further discussion of the threshold is given in Section 3.4.
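As a concrete reading of Eqs. (1) and (2), the following Python sketch computes the tightness of individual constraints and of a whole requirement. The numeric values in the example are hypothetical and serve only to illustrate the normalization and the 1/i^2 rank weighting.

```python
def tightness_of_constraint(const, best_qos, worst_qos):
    """Eq. (1): 0 when the constraint equals the worst achievable QoS, 1 when it equals the best."""
    return (const - worst_qos) / (best_qos - worst_qos)

def tightness_of_requirement(constraint_tightnesses):
    """Eq. (2): rank-weighted average; the i-th largest tightness is damped by 1/i^2."""
    ranked = sorted(constraint_tightnesses, reverse=True)
    weights = [1.0 / (i * i) for i in range(1, len(ranked) + 1)]
    return sum(t * w for t, w in zip(ranked, weights)) / sum(weights)

# Hypothetical example: a latency constraint of 120 ms (best 60 ms, worst 200 ms)
# and a cost constraint of 300 (best 150, worst 500).
t_latency = tightness_of_constraint(120, best_qos=60, worst_qos=200)   # ≈ 0.571
t_cost    = tightness_of_constraint(300, best_qos=150, worst_qos=500)  # ≈ 0.571
print(tightness_of_requirement([t_latency, t_cost]))
# The workflow is partitioned only if this value is below TH_tightness.
```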

3.2. Partitioning a workflow
Generally, composite Web services selection with global optimization considers all combinations of candidate services. Therefore, the number of candidate services largely affects the computational cost. Workflow partitioning splits the set of candidate services; thus, selecting services from the partitioned workflows is expected to be faster than selecting services from the entire workflow. In order to partition a workflow, the workflow is first transformed into a reduced sequential graph (RSG). An RSG is a graph in which all parallel and conditional structures of a workflow are reduced to single tasks, so that only sequential structures remain. An initial RSG is a directed acyclic graph of the workflow in which all loops have been eliminated by duplication (see Section 2), and each vertex stores the number of candidate services for the corresponding task. Detailed information, such as branching probabilities and 'xor/and' identifiers, is omitted in an initial RSG. A final RSG is constructed by repeatedly applying the following graph reduction rules to an initial RSG:
- Rule 1. Vertices that correspond to parallel or conditional structures are reduced to a single vertex.
- Rule 2. If vertices of sequential structures are nested within parallel or conditional structures, they are reduced to a single vertex.
- Rule 3. Nested structures are reduced starting from the innermost structure.
- Rule 4. When reducing a graph, the total number of candidate services of the graph is preserved.
The reduction stops when only a sequential structure remains in the RSG. For example, Figure 2 illustrates an initial RSG and its final RSG. The number under each vertex indicates the number of candidate services of the corresponding task; a small sketch of this reduction is given below.
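The sketch below shows one way this reduction could be carried out in Python, assuming the abstract workflow is given as a nested structure whose loops are already unrolled and whose branching probabilities are already stripped, as in an initial RSG; the representation and function names are illustrative, not the authors' implementation.

```python
def candidate_count(node):
    """Total number of candidate services in a (sub-)structure (Rule 4: counts are preserved)."""
    if isinstance(node, int):            # a leaf task, represented by its candidate count
        return node
    _, children = node
    return sum(candidate_count(c) for c in children)

def final_rsg(workflow):
    """Reduce a nested workflow ('seq'/'par'/'cond', children) to its final RSG:
    a list of (vertex_id, candidate_count) pairs, one per top-level sequential element
    (Rules 1-3 collapse each non-sequential subtree into a single vertex)."""
    kind, children = workflow
    assert kind == "seq", "the outermost structure is assumed to be sequential"
    return [(f"v{i}", candidate_count(c)) for i, c in enumerate(children, start=1)]

# Hypothetical workflow: t1 (12 candidates); a parallel block of two tasks (8 and 15); t4 (20).
wf = ("seq", [12, ("par", [8, 15]), 20])
print(final_rsg(wf))   # [('v1', 12), ('v2', 23), ('v3', 20)]
```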

Figure 2. An example of RSG construction.

The RSG ensures that parallel and conditional structures are preserved in the partitioned workflows, which simplifies the computation and decomposition of a QoS requirement. For example, the QoS of the time category of a parallel structure is determined by all of its constituent tasks, so the structure must be preserved. An RSG can be constructed in time O(n), where n is the total number of tasks in an abstract workflow. Once a final RSG is constructed, the set A = {(v, n) | v is a vertex in the RSG, n is the number of candidate services that v stores} is obtained. Subsequently, an approximate subset sum algorithm [4] is used to find a subset A′ whose number of candidate services is approximately half of the total. Set A is then partitioned into the two subsets A′ and A″ (= A − A′). An example of set partitioning is given in Figure 3, and a simple stand-in for this step is sketched below.
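The paper uses the approximate subset sum algorithm of [4] for this step; the sketch below substitutes a simple greedy heuristic that aims for roughly half of the total candidate count, which is enough to illustrate how the RSG vertices are split into A′ and A″.

```python
def partition_rsg(vertices):
    """Greedy stand-in for the approximate subset-sum step: split the RSG vertices
    (vertex_id, candidate_count) into two subsets whose candidate totals are as
    close to half of the overall total as this simple heuristic allows."""
    target = sum(n for _, n in vertices) / 2
    a_prime, a_dprime, acc = [], [], 0
    # Consider the largest vertices first; add to A' while it keeps us at or below the target.
    for v, n in sorted(vertices, key=lambda x: x[1], reverse=True):
        if acc + n <= target:
            a_prime.append((v, n))
            acc += n
        else:
            a_dprime.append((v, n))
    return a_prime, a_dprime

a1, a2 = partition_rsg([("v1", 12), ("v2", 23), ("v3", 20)])
print(a1, a2)   # A' = [('v2', 23)], A'' = [('v3', 20), ('v1', 12)]
```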

Figure 3. An example of applying a subset sum algorithm.

The goal of the proposed partitioning is to minimize the selection failures that may result from the decomposition of a QoS requirement, while reducing the problem size. Since the QoS requirement must be decomposed for each sub-workflow, the number of sub-workflows is kept to a minimum (two). Additionally, a workflow is partitioned only when the ratio in size of the two subsets is greater than or equal to a threshold THpartition, that is, $|A'| / |A''| \ge TH_{partition}$ (assuming $|A'| \le |A''|$), where $|A'|$ is the total number of candidate services of subset A′. Finally, sub-workflows are generated from the two subsets. The partitioned RSG can be constructed from

each subset by connecting its elements one after another. The order of the tasks is not important, since an RSG consists only of a sequential structure and the QoS aggregation rules for a sequential structure are composed of commutative operators. Sub-workflows are directly generated from the two RSGs as shown in Figure 4. Task ts and task tf are dummy tasks denoting the start and end of a workflow.

Figure 4. An example of generating abstract workflows.

The partitioned workflows preserve the workflow structure from which the QoS is computed. If we connect task tf of one sub-workflow to task ts of the other, the resulting workflow has the same quality as the original workflow. Therefore, the QoS of the entire workflow can be computed by

$$QoS(q) = \begin{cases} QoS'(q) + QoS''(q) & \text{if } q \in \text{Time or Cost}, \\ QoS'(q) \times QoS''(q) & \text{if } q \in \text{Probability}, \\ \min(QoS'(q), QoS''(q)) & \text{if } q \in \text{Capacity}, \end{cases} \quad (3)$$

where QoS′(q) and QoS″(q) are the QoS of the composite Web services selected from the two sub-workflows for quality criterion q.

3.3. Decomposing a QoS requirement
The composite Web services that are selected from the sub-workflows must satisfy the QoS requirement. In the previous section, the workflow was partitioned by taking into account the workflow structures and the number of candidate services. To obtain a complete selection problem, the QoS requirement must be decomposed for each sub-workflow. Since the QoS of the entire workflow can be computed by (3), the composite Web services selected from the sub-workflows must satisfy the following expression, which follows from the QoS aggregation rules (see Table 2) for the sequential structure:

$$\begin{cases} Const(q) \vDash QoS'(q) + QoS''(q) & \text{if } q \in \text{Time or Cost}, \\ Const(q) \vDash QoS'(q) \times QoS''(q) & \text{if } q \in \text{Probability}, \\ Const(q) \vDash \min(QoS'(q), QoS''(q)) & \text{if } q \in \text{Capacity}, \end{cases} \quad (4)$$

where A ⊨ B indicates that quality B satisfies constraint A. First, constraints on the sub-workflows that guarantee the given QoS requirement must be found. Let Const′(q) and Const″(q) be the constraints of the sub-workflows; then (4) is satisfied by any constraints that satisfy (5), since Const′(q) ⊨ QoS′(q) and Const″(q) ⊨ QoS″(q):

$$\begin{cases} Const'(q) + Const''(q) = Const(q) & \text{if } q \in \text{Time or Cost}, \\ Const'(q) \times Const''(q) = Const(q) & \text{if } q \in \text{Probability}, \\ Const'(q) = Const''(q) = Const(q) & \text{if } q \in \text{Capacity}. \end{cases} \quad (5)$$

Equation (5) is a sufficient but not a necessary condition for (4); hence, composite Web services satisfying the QoS requirement may exist even when (5) does not hold. Specifically, Const′(q) and Const″(q) for the capacity category are equal to Const(q), since capacity is aggregated with a minimum function. The ideal QoS decomposition would maximize the number of composite Web services satisfying the decomposed QoS requirement. In the worst case, no composite Web service that satisfies a decomposed QoS requirement may exist. To avoid selection failures resulting from the QoS decomposition, the QoS must be decomposed so that the existence of services satisfying the decomposed QoS is guaranteed (if one exists for the entire workflow). Next, the conditions that guarantee the existence of services satisfying the decomposed QoS must be found. However, it is not practical to find such QoS values exactly, since all combinations of candidate services would have to be considered. The proposed method relaxes the problem by considering each quality criterion independently; thus, the existence of services that satisfy the decomposed QoS cannot be guaranteed for a QoS requirement that involves multiple quality criteria. As mentioned before, the QoS requirement cannot be satisfied if the requested QoS is higher than BestQoS(q); thus, the following expression is always true for any satisfiable QoS requirement:

$$Const(q) \vDash BestQoS(q). \quad (6)$$

Similarly, since the proposed QoS decomposition method considers each quality criterion independently, the following expression is the necessary and sufficient condition for guaranteeing the existence of services satisfying the decomposed QoS:

$$Const'(q) \vDash BestQoS'(q) \;\text{ AND }\; Const''(q) \vDash BestQoS''(q), \quad (7)$$

where BestQoS′(q) and BestQoS″(q) are the best QoS of the sub-workflows for quality criterion q. Finally, we find the condition that satisfies both (5) and (7). The following equation holds on the same basis as (3):

$$BestQoS(q) = \begin{cases} BestQoS'(q) + BestQoS''(q) & \text{if } q \in \text{Time or Cost}, \\ BestQoS'(q) \times BestQoS''(q) & \text{if } q \in \text{Probability}. \end{cases} \quad (8)$$

Provided that (5) is satisfied, the following inequalities can be obtained from (6) and (8):

$$\begin{cases} Const'(q) + Const''(q) \ge BestQoS'(q) + BestQoS''(q) & \text{if } q \in \text{Time or Cost}, \\ Const'(q) \times Const''(q) \le BestQoS'(q) \times BestQoS''(q) & \text{if } q \in \text{Probability}. \end{cases} \quad (9)$$

Then, the following equation is a sufficient condition for (7):

$$BestQoS'(q) : BestQoS''(q) = Const'(q) : Const''(q). \quad (10)$$

Consequently, one of the constraints, Const′(q), is computed by (11), which is derived by solving the simultaneous equations consisting of (9) and (10). The other constraint, Const″(q), can then be computed by (5). When the service selection fails for the sub-workflows, the failure may be due either to an intrinsically unsatisfiable QoS requirement or to an inappropriate decomposition of the QoS requirement. Since the two cases cannot be distinguished, the failed problem is re-attempted without partitioning. The overhead of this re-selection is large; therefore, the decision whether to partition the workflow is made in advance according to the tightness of the QoS requirement (Section 3.1).

$$Const'(q) = \begin{cases} Const(q) \times \dfrac{BestQoS'(q)}{BestQoS(q)} & \text{if } q \in \text{Time or Cost}, \\[6pt] \sqrt{Const(q) \times \dfrac{BestQoS'(q)}{BestQoS''(q)}} & \text{if } q \in \text{Probability}. \end{cases} \quad (11)$$
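Under the reconstruction of Eq. (11) above (including the square-root form for the probability case, which follows from (9) and (10)), the decomposition of a single constraint can be sketched in Python as follows; the example values are hypothetical.

```python
import math

def decompose_constraint(category, const, best, best1, best2):
    """Split a global constraint Const(q) into (Const'(q), Const''(q)) following (11) and (5).
    best, best1, best2 are BestQoS(q) for the whole workflow and the two sub-workflows."""
    if category in ("time", "cost"):
        c1 = const * best1 / best               # Eq. (11), additive categories
        c2 = const - c1                         # Eq. (5): Const' + Const'' = Const
    elif category == "probability":
        c1 = math.sqrt(const * best1 / best2)   # Eq. (11), multiplicative categories
        c2 = const / c1                         # Eq. (5): Const' * Const'' = Const
    elif category == "capacity":
        c1 = c2 = const                         # Eq. (5): both sub-constraints equal Const
    else:
        raise ValueError(category)
    return c1, c2

# Hypothetical latency constraint of 200 ms, with the best achievable latency split
# 60 ms / 90 ms between the two sub-workflows (so BestQoS = 150 ms).
print(decompose_constraint("time", 200, best=150, best1=60, best2=90))   # (80.0, 120.0)
```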

3.4. Service selection
In this section, we give a detailed explanation of the service selection phase. The input of this phase is an abstract workflow, the QoS properties of the candidate services, and the QoS requirements. The output of this phase is an executable workflow, in which all tasks have corresponding concrete Web services. Since the service selection phase is independent of the partitioning phase, any approach that can address the service selection problem can be adopted. In this paper, MILP is utilized for solving the service selection problem, since our major concern is QoS requirements and constraint satisfaction is inherent to MILP. MILP finds the values of the variables that maximize (or minimize) an objective function while satisfying constraints. In contrast to some other approaches based on IP, the proposed method does not require the CPA (critical path algorithm) [5] to aggregate latency for the parallel structure. In addition, all conditional branches are considered simultaneously; thus, global constraints are guaranteed to be satisfied. The service selection problem is transformed into an MILP problem as follows. The objective function maximizes (i.e., optimizes) the QoS of the composite service in terms of the weights Wq and is defined as follows:

$$\text{Maximize: } \sum_{q} \frac{QoS(q) - WorstQoS(q)}{BestQoS(q) - WorstQoS(q)} \times W_q, \quad (12)$$

where QoS(q) is the QoS of the composite Web service for quality criterion q and Wq, with $\sum_q W_q = 1$, is the user-defined weight for quality criterion q. The objective function is based on SAW (simple additive weighting) [6], which scores a set of multiple criteria by a normalized weighted sum. MILP constraints capture the QoS of the composite Web service and the QoS requirement. Let CSij be the j-th candidate service of task ti of an abstract workflow. Then, every CSij is mapped to a binary variable sij ∈ {0,1}.

Table 3. MILP constraints for QoS aggregation

Sequential structure:
  Time:        $QoS(Time) = \sum_{i=1}^{n} QoS(Time, t_i)$   (15)
  Cost:        $QoS(Cost) = \sum_{i=1}^{n} QoS(Cost, t_i)$   (16)
  Probability: $QoS_{\log}(Probability) = \sum_{i=1}^{n} QoS_{\log}(Probability, t_i)$   (17)
  Capacity:    $QoS(Capacity) \le QoS(Capacity, t_i), \; 1 \le i \le n$   (18)

Parallel structure:
  Time:        $QoS(Time) \ge QoS(Time, t_i), \; 1 \le i \le n$   (19)
  Cost:        $QoS(Cost) = \sum_{i=1}^{n} QoS(Cost, t_i)$   (20)
  Probability: $QoS_{\log}(Probability) = \sum_{i=1}^{n} QoS_{\log}(Probability, t_i)$   (21)
  Capacity:    $QoS(Capacity) \le QoS(Capacity, t_i), \; 1 \le i \le n$   (22)

Conditional structure:
  Time:        $QoS(Time) = \sum_{i=1}^{n} QoS(Time, t_i) \times p_i$   (23)
  Cost:        $QoS(Cost) = \sum_{i=1}^{n} QoS(Cost, t_i) \times p_i$   (24)
  Probability: $QoS(Probability) = \sum_{k=1}^{b} QoS(Probability, W_k)$   (25)
  Capacity:    $QoS(Capacity) \le \sum_{j=1}^{n} QoS(Capacity, t_j) \times p_j$   (26)

The following MILP constraint enforces that only a single candidate service can be selected for each task:

$$\sum_{j=1}^{CS(i)} s_{ij} = 1, \quad 1 \le i \le n, \quad (13)$$

where CS(i) is the number of candidate services of task ti. Namely, sij is 1 if CSij is selected for task ti, and 0 otherwise. The QoS of a task ti for quality criterion q is defined as follows:

$$QoS(q, t_i) = \sum_{j=1}^{CS(i)} QoS(q, CS_{ij}) \times s_{ij}, \quad (14)$$

where QoS(q, CSij) is the QoS of CSij for quality criterion q. The QoS of the composite Web service is also captured by MILP constraints. An MILP constraint is

defined for each workflow structure. Table 3 describes the MILP constraints for the workflow structures. Some quality categories, which are computed by either a maximum or a minimum function, cannot be directly formulated as an MILP constraint. In these cases, an MILP constraint is defined for each component task of the workflow structure, as in (18), (19), and (22). Since the objective function value becomes larger as QoS(Capacity) increases, it is maximized by the MILP solver until it is bounded by the minimum value among the QoS(Capacity, ti). On the other hand, QoS(Time) is minimized until it is bounded by the maximum value among the QoS(Time, ti). Additionally, in the case of the probability category, the QoS cannot be directly captured by MILP constraints, since the aggregation rule is based on multiplication. To resolve this problem, the logarithm function is used to transform the aggregation

rule into a linear form (this idea was proposed by Zeng et al. [7]). Consequently, the MILP constraint for the QoS of a task ti with respect to the probability quality criteria is defined as follows:

$$QoS_{\log}(\text{Probability}, t_i) = \sum_{j=1}^{CS(i)} \ln(QoS(\text{Probability}, CS_{ij})) \times s_{ij}. \quad (27)$$

In the case of the capacity category, candidate services that do not satisfy the global constraint are excluded from the set of candidate services in advance, which guarantees the global constraint.

However, the logarithm function cannot be directly applied in the case of the conditional structure, since it would have to be applied to a sum of variables, such as $\ln(\sum_j p_i \times QoS(\text{Probability}, CS_{ij}) \times s_{ij})$. Therefore, every possible branch is identified and an MILP constraint is defined for each of them. For example, Figure 5 shows a workflow that has two branches, where QoS(Probability, Wk) is the QoS of the probability category for an individual branch k. As shown in Figure 5, the MILP constraint can be defined for each individual branch using (17), while the qualities of the probability category are multiplied by the branching probability. The overall QoS is defined as the sum of the QoS over all Wk, as shown in (25), where b is the total number of branches in the abstract workflow.

Figure 5. An example of the MILP constraint of the conditional structure for the probability category.

As mentioned before, the global constraints are also enforced by MILP constraints. If the workflow does not have any branches, global constraints are enforced by setting bounds on the variables that represent the QoS of the composite Web service. Otherwise, MILP constraints that enforce the global constraint are defined for all possible branches. Figure 6 shows an example of an MILP constraint that expresses the global constraint for branch W1 in Figure 5. Note that the QoS values are not multiplied by the branching probability: while the aggregation rules for the conditional structure are based on expected values, the probability must be excluded for the purpose of constraint satisfaction.

Figure 6. An example of the MILP constraint for global constraints.

When solving the problems for the sub-workflows, both problems must successfully find a solution; otherwise, the selection fails. The problems for the sub-workflows can be attempted in any order. If the first attempted problem succeeds, the QoS requirement of the remaining problem is recomputed using (5), substituting Const′(q) with the actual QoS of the successful problem. This scheme relaxes the constraints of the problem attempted later. Once the Web services for the sub-workflows are selected, the composite Web service for the entire workflow can be built by binding the Web service selected for each task of the sub-workflows to the corresponding task of the entire workflow. When the service selection fails, the tightness of the given QoS requirement is logged and THtightness is updated. THtightness is set to the tightness that permits a fraction ε, 0 ≤ ε ≤ 1, of failures based on the log. For example, for 100 log entries and ε = 0.05, THtightness is set to the fifth smallest tightness value in the log. A larger ε increases the success rate while decreasing the recall.
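As an illustration of the MILP formulation of Section 3.4, the following sketch encodes the binary selection variables of (13), the additive latency and cost aggregation, a log-transformed availability constraint in the style of (27), and the SAW objective (12) for a purely sequential workflow. It is our own example rather than the authors' implementation: it uses the open-source PuLP modeller with its bundled CBC solver instead of lp_solve, and all service data, weights, and bounds are hypothetical.

```python
# pip install pulp
import math
import pulp

# Hypothetical candidate services: services[i][j] = (latency, cost, availability).
services = [
    [(20, 9, 0.995), (35, 5, 0.990), (50, 3, 0.999)],   # candidates for task t1
    [(15, 8, 0.992), (40, 4, 0.991)],                    # candidates for task t2
    [(30, 7, 0.998), (25, 6, 0.990), (45, 2, 0.993)],    # candidates for task t3
]
# Illustrative global constraints and SAW weights.
max_latency, max_cost, min_avail = 100, 18, 0.97
weights = {"latency": 0.4, "cost": 0.4, "availability": 0.2}

prob = pulp.LpProblem("composite_service_selection", pulp.LpMaximize)

# Binary decision variables s[i][j]: 1 iff candidate j is bound to task t_i (cf. Eq. 13).
s = [[pulp.LpVariable(f"s_{i}_{j}", cat="Binary") for j in range(len(cands))]
     for i, cands in enumerate(services)]
for row in s:
    prob += pulp.lpSum(row) == 1                         # exactly one service per task

# Aggregated QoS of the sequential workflow (cf. Table 3; availability via logarithms, Eq. 27).
latency = pulp.lpSum(services[i][j][0] * s[i][j]
                     for i in range(len(services)) for j in range(len(services[i])))
cost = pulp.lpSum(services[i][j][1] * s[i][j]
                  for i in range(len(services)) for j in range(len(services[i])))
log_avail = pulp.lpSum(math.log(services[i][j][2]) * s[i][j]
                       for i in range(len(services)) for j in range(len(services[i])))

# Global constraints.
prob += latency <= max_latency
prob += cost <= max_cost
prob += log_avail >= math.log(min_avail)

# Best/worst aggregates per criterion, used to normalise the SAW objective (cf. Eq. 12).
best_lat = sum(min(c[0] for c in cands) for cands in services)
worst_lat = sum(max(c[0] for c in cands) for cands in services)
best_cost = sum(min(c[1] for c in cands) for cands in services)
worst_cost = sum(max(c[1] for c in cands) for cands in services)
best_la = sum(math.log(max(c[2] for c in cands)) for cands in services)
worst_la = sum(math.log(min(c[2] for c in cands)) for cands in services)

# SAW objective: each normalised term is 1 at the best aggregate and 0 at the worst.
prob += (weights["latency"] * (latency - worst_lat) / (best_lat - worst_lat)
         + weights["cost"] * (cost - worst_cost) / (best_cost - worst_cost)
         + weights["availability"] * (log_avail - worst_la) / (best_la - worst_la))

prob.solve(pulp.PULP_CBC_CMD(msg=False))
binding = {f"t{i+1}": next(j for j, var in enumerate(row) if var.value() > 0.5)
           for i, row in enumerate(s)}
print(pulp.LpStatus[prob.status], binding)
```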

4. Experimental results
In order to evaluate the performance of the proposed method, it was simulated and the results were compared with the optimal service selection strategy, which does not partition the workflow.

4.1. Experimental environment
The experiments were conducted in two different environments. Experiment 1 was designed to analyze the impact of the number of tasks on performance. In this experiment, workflows that have only the sequential structure were used; the number of tasks was increased from 4 to 20, and the total number of candidate services was fixed at 1000.
Figure 7. The selection speed of Experiment 1: (a) 2 constraints, (b) 3 constraints, (c) 4 constraints.

Figure 8. The selection speed of Experiment 2: (a) 2 constraints, (b) 3 constraints, (c) 4 constraints.

Experiment 2 was designed to analyze the impact of the number of candidate services. In this experiment, a workflow that has the sequential, the parallel, and the conditional structures was used. The total number of candidate services was increased from 500 to 2500. In both experiments, four quality criteria, i.e., latency, cost, availability, and reliability, were considered. Each task belongs to one of the service classes that define the QoS level, and the QoS of the candidate services was set according to the service class of the corresponding task. Table 4 shows the service classes used in both experiments.

Table 4. The service classes and their QoS level

Quality criteria | Class 1    | Class 2    | Class 3    | Class 4    | Class 5
Latency          | 0-40       | 40-80      | 80-120     | 120-160    | 160-200
Cost             | 80-100     | 60-80      | 40-60      | 20-40      | 0-20
Availability     | 0.99-0.999 | 0.99-0.999 | 0.99-0.999 | 0.99-0.999 | 0.99-0.999
Reliability      | 0.99-0.999 | 0.99-0.999 | 0.99-0.999 | 0.99-0.999 | 0.99-0.999

Requests, i.e., QoS requirements, with 2, 3, and 4 global constraints were generated for the experiments. The experimental results can be heavily affected by the characteristics of the generated QoS requirements. Since it is hard to generate fair requests, we generated 5000 requests for each case, which was considered a large enough number to show the generality of the experiment. The quality criterion for each global constraint was randomly selected from those in Table 4, and the quality level was randomly set to a value between BestQoS(q) and WorstQoS(q). THpartition and ε were set to 0.3 and 0.05, respectively, and the weight Wq for each quality criterion was 0.25. The experiments were conducted on an Intel Pentium 4 2.4GHz with 1GB RAM, using lp_solve 5.5.

4.2. Performance evaluation
The selection speed was analyzed in terms of the average binding time. Figure 7 and Figure 8 compare the selection speed of the proposed method with that of the optimal selection strategy. The results show that the selection speed of both methods increases exponentially as the number of tasks and the number of candidate services increase. However, the proposed method performs faster than the optimal selection. In particular, the difference between the two grows larger with the size of the problem; thus, the proposed method is more scalable. Additionally, we compared the QoS of the selected composite Web services. For this purpose, we evaluated the difference between the QoS values for each quality criterion, since the MILP objective function value cannot represent the detailed characteristics of multiple QoS criteria. The difference between the two cases was less than 5% for all quality criteria.

5. Conclusions and future work
This paper presents a fast method for composite Web services selection based on a workflow partition strategy. The proposed method partitions an abstract workflow into two sub-workflows and decomposes the QoS requirement for each sub-workflow. Specifically, the workflow is partitioned according to the tightness of the QoS requirement. The selection problem is solved by an MILP method. The experimental results show that the proposed method is faster than the optimal selection strategy and that the QoS of the selected composite Web services is also reasonable. In particular, the partition strategy is more scalable when the number of candidate services increases. The proposed method considers only environments in which the QoS properties of services are static and determined in advance. In real-world situations, dynamic and non-deterministic QoS properties, which may be determined by inter-service dependencies, can exist. We will concentrate our efforts on addressing such QoS properties and on evaluating the proposed method in dynamic environments where the QoS of a service varies at runtime.

Acknowledgments This research was supported by the Seoul R&BD Program(10705), Korea.

References
[1] N. Milanovic and M. Malek, "Current solutions for Web service composition," IEEE Internet Computing, Vol. 8, No. 6, pp. 51-59, Nov. 2004.
[2] M. Atallah, Algorithms and Theory of Computation Handbook, CRC Press, Boca Raton, 1999.
[3] J. Cardoso, A. Sheth, J. Miller, J. Arnold, and K. Kochut, "Quality of Service for Workflows and Web Service Processes," J. Web Semantics, Vol. 1, No. 3, pp. 281-308, 2004.
[4] O. H. Ibarra and C.-E. Kim, "Fast approximation algorithms for the knapsack and sum of subset problems," J. ACM, Vol. 22, No. 4, pp. 463-468, 1975.

[5] M. Pinedo, Scheduling: Theory, Algorithms, and Systems, 2nd ed., Prentice Hall, 2001.
[6] C. L. Hwang and K. Yoon, "Multiple Attribute Decision Making," Lecture Notes in Economics and Mathematical Systems, Vol. 186, 1981.
[7] L. Zeng, B. Benatallah, A. H. H. Ngu, M. Dumas, J. Kalagnanam, and H. Chang, "QoS-Aware Middleware for Web Services Composition," IEEE Trans. Software Engineering, Vol. 30, No. 5, pp. 311-327, 2004.
[8] K. J. Kochut, A. P. Sheth, and J. A. Miller, "METEOR Model version 3," LSDIS Lab, Dept. of Computer Science, University of Georgia, 1999.
