A robust trust model for service-oriented systems

Xing Su (a), Minjie Zhang (a), Yi Mu (a), Quan Bai (b,*)

(a) School of Computer Science and Software Engineering, University of Wollongong, Wollongong, NSW 2522, Australia
(b) School of Computing and Mathematical Sciences, Auckland University of Technology, Auckland, New Zealand

* Corresponding author. E-mail addresses: [email protected] (X. Su), [email protected] (M. Zhang), [email protected] (Y. Mu), [email protected] (Q. Bai).
Article info

Article history: Received 2 March 2012; Received in revised form 26 October 2012; Accepted 8 November 2012; Available online xxxx.

Keywords: Trust; Service selection; Open environments; Trust evaluation
Abstract

In service-oriented computing applications, service consumers and providers need to evaluate the trust levels of potential partners before engaging in interactions. The accuracy of trust evaluation greatly affects the success rate of interactions. Trust evaluation is a challenging problem in open and dynamic environments, as there is no central mediator to manage standardized evaluation criteria or reputation records. In this paper, a novel trust model, called the priority-based trust model, is presented. The model derives the trustworthiness of a service provider from designated referees and its historical performance. In addition, consumers can specify their preferred priorities, which affect the result of trust evaluations. The experimental results show that the proposed model performs better than other trust models, especially in open and dynamic environments.

© 2012 Elsevier Inc. All rights reserved.
1. Introduction

Service-oriented architecture (SOA) provides a way for service consumers and service providers to share and access services [1]. It supports loose coupling among system components (i.e., services) and dynamic interactions among service providers and service consumers (here we call them provider agents and consumer agents). In a service-oriented system, users can select the services they require and include them as parts of their workflows (e.g., many e-Science applications [2]).

Today, with the expansion of service-oriented applications, more and more researchers have realized that trust has become a crucial aspect of service-oriented systems [3–7]. However, the dynamic nature of service-oriented systems makes trust evaluation a challenging task. First, in many service-oriented systems, there is no centralized mechanism for controlling and coordinating the interactions among agents. Hence, the decision making for selecting suitable providers can only be based on information provided by partners (agents) [8], the local view of the agent, and/or the experience from previous interactions. Second, in some complex environments, a service-oriented system may contain agents with different trust evaluation criteria and selfish goals. Trust evaluation in such environments is even more complicated.

In order to tackle the challenges posed by service-oriented environments, various trust models have recently been proposed; most of these models consider reputation, experience, and other features of open environments [9–12]. Although some of these models can support decentralized trust management and individual trust evaluation [13–17], many existing models have limited capabilities in trust and preference representation, which greatly affects the accuracy of trust evaluations and makes them unsuitable for complex working environments.

To overcome some limitations of existing trust models, a novel trust model, called the Priority-Based Trust (PBTrust) model, was introduced in [18]. The PBTrust model evaluates the trustworthiness of a potential service provider from four perspectives: the provider's experience on the service, the similarity of priority distribution of attributes between the referenced service and the requested service, the suitability of the
potential provider for the requested service, and the time effectiveness of rating scores from third parties. It can give a more robust and accurate evaluation of the trustworthiness of service providers in open, dynamic environments. This paper extends our previous work on this topic, suggesting some significant improvements and presenting comprehensive experiments and evaluations.

The paper is organized as follows. Current literature in the field of trust evaluation and trust models is reviewed in Section 2. Section 3 explains the scope of this research and establishes some related definitions. The principle of the PBTrust model is presented in Sections 4 and 5. In Section 6, experimental results are presented to demonstrate the performance of the PBTrust model. Finally, the conclusion and future work of this research are presented in Section 7.

2. Related work

Trust is a term with different meanings in different domains. In this research, trust is defined as an indicator of whether an agent is willing to rely on the actions of another agent. Hence, the result of trust evaluation impacts the decision as to whether to establish interactions with a particular party.

Trust evaluation is an essential part of many recommendation systems. However, many recommendation systems have a central server (recommender) which can access, collect, and evaluate historical records from a large number of agents [19,20]. Although in recent years some researchers have also proposed distributed architectures for recommendation systems, the recommenders still need to keep and maintain connections with a number of agents in the system, along with trust-related information such as the trust criteria of different users [21,22]. In other words, the recommenders are actually providing a particular service (i.e., making recommendations) to other agents in the system. However, in distributed and dynamic environments, it is hard to ensure that recommenders can access more resources and trust information than other agents, and including and maintaining a number of recommenders in distributed environments introduces extra overhead.

Several trust evaluation approaches for distributed systems have been proposed in recent years. Sabater et al. [14,15] proposed the REGRET model, which calculates trust by considering the reputation values not only of the witnesses but also of the neighbors and the system. A major contribution of the REGRET model is that it assigns weights to different aspects of a service and considers the weighted reputation values of all aspects in the calculation of an overall service reputation. The REGRET model can also handle witness cheating problems effectively. Schmidt et al. [23] introduced a fuzzy-logic based trust evaluation model which can integrate post-interaction processes such as business interaction reviews and credibility adjustment. In their model, agents can query peer agents for their opinions about potential partners, calculate the partners' trustworthiness, and base service selection on these results. The model also considers the credibility of peer agents in trust calculations. However, a common limitation of the models proposed in [14] and [23] is that they use single values to represent the overall reputation of a service, which cannot comprehensively evaluate the performance of a service. Sensoy et al. [24] proposed a trust model that considers different attributes of a service.
In this model, the attributes of a service are represented in ontologies. From these ontologies, service providers can understand consumer requirements, while consumers can get a general understanding of the advantages and shortcomings of service providers. Tsai et al. [25] also introduced an ontological approach to trust evaluation. In addition, Tsai borrowed some concepts from Community-of-Interest systems, classified users into different roles using a Role Ontology, and then introduced four trust models for different user roles. Because of the descriptive power of ontologies, service quality and provider reputation can be described explicitly, so consumers can have a better understanding of provider candidates. The ontological approach, however, involves complicated procedures to cover various situations, which is not practical, especially in highly dynamic environments. Huynh et al. introduced the Certified Reputation (CR) model in [17]. In the CR model, the reputation information is maintained by service providers. This feature makes the CR model suitable for open and dynamic environments. However, the CR model also uses single values to represent the reputations of service providers. In addition, it is hard to obtain objective service evaluations in the CR model, as most service providers will keep only their best performance records as references.

3. Problem description and definitions

The PBTrust model was developed to overcome some limitations of existing trust models and is suitable for complex and dynamic environments. Before elaborating on the details of the model, it is necessary to define the scope of this work and provide some necessary definitions. For the meanings of abbreviations and symbols used in this paper, see Appendix A.

In general, there are two types of agents in a service-oriented system, i.e., consumer agents and provider agents. Here, we also define another role for consumer agents: to act as referees. Namely, a consumer agent can provide references about the quality of services to other consumer agents based on its previous experience. The reference for a service is kept by its provider agent. After a reference is generated by a consumer agent, it cannot be manipulated by any other agent (especially the provider agent). The meaning and format of references are defined in Definition 5.
In general, a service can be described by a number of attributes, such as price, time, quality, etc. For different requests, the priorities of the attributes of the same service can differ. In order to capture the relationship between attributes and their corresponding priorities, we propose the following formal definition of a service. Suppose there are n attributes used to describe a requested service, and each attribute is given a requested priority as a condition for completing the service. The service can then be represented by the n attributes and their corresponding priorities.

Definition 1. A Service Description (SDes) is the formal description of a service. SDes is defined as the following matrix:
$$\mathit{SDes} = \begin{pmatrix} A_1 & A_2 & A_3 & \cdots & A_n \\ W_1 & W_2 & W_3 & \cdots & W_n \end{pmatrix} \tag{1}$$

where $A_i$ indicates the $i$th attribute, $W_i$ is the priority value of $A_i$, and $\sum_{i=1}^{n} W_i = 1$.
Definition 2. The Rating Score (RS) is the degree of satisfaction of a consumer agent with a service provided by a provider agent. It is defined as an n-tuple, RS = ⟨R_1, R_2, ..., R_n⟩, where R_i indicates the rating value of the ith attribute of the service (see Definition 1). The range of R_i is [0, 100], where 0 and 100 represent the worst and the best performance for the ith attribute, respectively.

In the CR model [17], the reference of a provider can only reflect its good performances, so it is hard for a consumer to get a general view of whether the provider performs consistently on the requested service. In order to solve this problem, the concept of the service experience of a provider on a certain service is introduced in this model and defined below.

Definition 3. The Service Experience (Exp) of a provider on a service is defined as a 2-tuple, Exp = ⟨SRate, SNum⟩, where SRate indicates the success rate of the provider on this service and SNum indicates the total number of successes on the same service.

Definition 4. A Service Request (SReq) is defined as a 4-tuple, SReq = ⟨CID, SDes, RN, SThreshold⟩, where CID is the service consumer's ID, SDes describes the service as a 2-by-n matrix of the requested attributes and their priorities (see Definition 1), RN (RN > 0) is the number of references that CID requests, and SThreshold is the threshold of the success rate for a provider to qualify for providing the service. (RN and SThreshold can be assigned by a service consumer according to its preference.)

Definition 5. A Reference (Ref) is defined as a 4-tuple, Ref = ⟨RefID, SDes, RS, T⟩, where RefID is the ID of the referee, SDes (see Definition 1) is the description of the service conducted by the provider for the referee, RS indicates the rating score given by RefID for each attribute of the service (see Definition 2), and T is the time when the reference was given.

Definition 6. A Service Reply (SRep) is defined as a 3-tuple, SRep = ⟨SPID, RefSet, Exp⟩, where SPID is the ID of the service provider, RefSet is a set of references comprising several previous best references provided by different referees (the size of RefSet can be determined by consumers), and Exp is the experience indicator (see Definition 3) describing the provider's general performance on this service.

4. Basic modules in the PBTrust model

The PBTrust model consists of four modules. In this section, we give a brief introduction to these modules. As the key module in the PBTrust model, the Priority-Based Trust Calculation Module will be introduced in detail in Section 5.

4.1. The Request Module

The objective of the Request Module is to create a Service Request based on the request from a consumer. For example, suppose consumer C in an e-marketplace requests a service described by three attributes, i.e., cost, speed, and quality, with corresponding priorities (0.3, 0.5, 0.2), respectively. C requests two references, and the required success rate for a potential provider's service history should be at least 70%. Based on this request, the Request Module will generate a service description in the format of Definition 1:
$$\mathit{SDes} = \begin{pmatrix} \text{Cost} & \text{Speed} & \text{Quality} \\ 0.3 & 0.5 & 0.2 \end{pmatrix}$$
Then, a Service Request SReq will be produced based on the service description and the requirements of consumer C, in the format defined by Definition 4:
SReq = ⟨C, SDes, 2, 0.7⟩
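To make the preceding definitions concrete, the following minimal sketch encodes Definitions 1–6 and the running example as plain data structures. It is illustrative only: Python is our choice here, and all class and field names are ours rather than part of the PBTrust specification.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SDes:
    """Service Description (Definition 1): attribute names and priorities."""
    attributes: List[str]    # A_1 ... A_n
    priorities: List[float]  # W_1 ... W_n, summing to 1

@dataclass
class Exp:
    """Service Experience (Definition 3)."""
    s_rate: float  # SRate: success rate of the provider on this service
    s_num: int     # SNum: total number of successes

@dataclass
class SReq:
    """Service Request (Definition 4)."""
    cid: str            # consumer ID
    sdes: SDes          # requested attributes and priorities
    rn: int             # RN: number of references requested (RN > 0)
    s_threshold: float  # SThreshold: required success rate

@dataclass
class Ref:
    """Reference (Definition 5)."""
    ref_id: str      # referee ID
    sdes: SDes       # service conducted by the provider for the referee
    rs: List[float]  # rating score per attribute, each in [0, 100]
    t: str           # time the reference was given

@dataclass
class SRep:
    """Service Reply (Definition 6)."""
    spid: str           # provider ID
    ref_set: List[Ref]  # previous best references from different referees
    exp: Exp            # general performance on this service

# The running example from Section 4.1:
sdes = SDes(["Cost", "Speed", "Quality"], [0.3, 0.5, 0.2])
sreq = SReq("C", sdes, rn=2, s_threshold=0.7)
```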
The above example will also be used to explain the rest of the modules.

4.2. The Reply Module

When a potential provider P can offer the service based on the requirement from consumer C, P will provide the following information: its provider ID, two reference reports, and its service experience on the service, including the success rate and the total number of successes. Suppose that P has received three reference reports for its previous performances on the same service from different consumers, represented by the reference set {Ref_1, Ref_2, Ref_3}, where each element is in the format defined by Definition 5. P will pick the two best reference reports to represent its previous performance on the service, say Ref_2 and Ref_3. Suppose that the success rate of P on the service is 70% and the total number of times the service has been successfully completed is 35. The reply from P to the request from C is then as follows (see Definition 6):
SRep = ⟨P, {Ref_2, Ref_3}, ⟨0.7, 35⟩⟩
If more than one service provider can offer the requested service and intends to provide it, the Reply Module of each such provider will generate a reply for the consumer.

4.3. The Priority-Based Trust Calculation Module

This module is the core of the PBTrust model. Its main purpose is to calculate the trust values of potential providers based on reference reports from third parties, the service experience of providers, the time weights of references, and the similarities between the description of the requested service and that in the reference reports in terms of the different priorities of the attributes. These trust values help a consumer to select the most trustworthy provider for the service. The final trust value for each potential provider is produced from several calculation results covering four perspectives: the provider's experience on the service, the similarity of the priority distributions on attributes between the referenced service and the requested service, the suitability of the potential provider for the requested service, and the time relevance of rating scores from third parties. The detailed design and calculations for each perspective are introduced in Section 5.

4.4. The Evaluation Module

This module includes two components. One is the generation of a reference report from a consumer for a provider based on the performance of a completed service; the other is the updating of a provider's service experience record when a new reference becomes available for the provider.

4.4.1. Reference report generation

We use the same example as for the Request Module and the Reply Module to demonstrate how a reference report is generated in this module. After the requested service is completed, consumer C evaluates the performance of provider P on the service. The evaluation result is represented in a reference report (see Definition 5), shown as follows:
Ref = ⟨C, SDes, ⟨60, 40, 90⟩, 12/7/2008⟩
The above reference report shows the evaluation result from consumer C for the service SDes, completed on 12 July 2008. From consumer C's rating score, we can see that C was satisfied with the cost of the service (the first attribute of SDes), not satisfied with the speed of the service (the second attribute of SDes), and very satisfied with the quality of the service (the third attribute of SDes).

4.4.2. Service experience updating

Service experience updating is based on the consumer's judgement of a newly completed service from the provider. A judgement result can be either 'success' or 'fail'. Service experience Exp includes two elements, SNum and SRate (see Definition 3), which can be updated using Formulas (2) and (3):
$$\mathit{SNum}' = \begin{cases} \mathit{SNum} + 1 & \text{judgement: success} \\ \mathit{SNum} & \text{judgement: fail} \end{cases} \tag{2}$$

$$\mathit{SRate}' = \begin{cases} \dfrac{\mathit{SNum} + 1}{(\mathit{SNum}/\mathit{SRate}) + 1} & \text{judgement: success} \\ \dfrac{\mathit{SNum}}{(\mathit{SNum}/\mathit{SRate}) + 1} & \text{judgement: fail} \end{cases} \tag{3}$$

where SNum′ and SRate′ are the updated values, SNum and SRate represent the total success times and the success rate before updating, respectively, and SNum/SRate is the total number of previously completed services.
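For illustration, Formulas (2) and (3) translate directly into code. The sketch below (the function name is ours; it assumes SRate > 0, so that SNum/SRate is well defined) reproduces the update of ⟨0.7, 35⟩ discussed next:

```python
def update_experience(s_rate: float, s_num: int, success: bool):
    """Update a provider's service experience per Formulas (2) and (3).

    s_rate and s_num are SRate and SNum before updating; s_rate must be
    positive, since s_num / s_rate recovers the number of completed services.
    Returns the updated (SRate, SNum) pair.
    """
    total = s_num / s_rate + 1  # total completed services, including the new one
    if success:
        return (s_num + 1) / total, s_num + 1
    return s_num / total, s_num

print(update_experience(0.7, 35, success=True))   # (0.7058..., 36)
print(update_experience(0.7, 35, success=False))  # (0.6862..., 35)
```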
If consumer C is satisfied with the service provided by provider P, C will give the evaluation result 'success' for P on this service. In this case, Formulas (2) and (3) update the record of P's experience from ⟨0.7, 35⟩ to ⟨0.706, 36⟩. If consumer C is not satisfied with the service, C will give the evaluation result 'fail' for P on this service, and Formulas (2) and (3) update the record of P's experience from ⟨0.7, 35⟩ to ⟨0.686, 35⟩. With this updating method, the PBTrust model can not only dynamically update the service experience records of all agents in open environments, but also accumulate information showing the general performance of each agent without a central control mechanism.

5. Priority-based trust calculation

The Priority-Based Trust Calculation Module produces the reputation values of potential service providers from four perspectives: the provider's experience on the service, the similarity of the priority distributions on attributes between the referenced service and the requested service, the suitability of the potential provider for the requested service, and the time relevance of rating scores from third parties. Each perspective contributes to the final reputation value from a different view and is defined by a separate formula. This section gives a detailed introduction to this module.

5.1. Design and principle of priority-based trust calculation

In order to produce reliable and robust trust values for potential service providers, we developed the priority-based trust calculation mechanism based on the following considerations:

1. Third party references are used to derive the reputation of providers.
2. The term 'suitability' is introduced to predict the potential performance of a provider on the requested service, based on the information in a third party reference about the provider's previous performance and the new priorities requested by the consumer.
3. The similarity between the priority distribution on attributes of the service in a reference report and the priority distribution on attributes of the service requested by the consumer is also considered.
4. The timestamp of a reference report is taken into account to reduce the contribution of out-of-date references from third parties. This concept is borrowed from the CR model and counteracts the case where a service provider holds good references reflecting past good performance but has recently begun to offer poor quality service.
5. The service experience is also used in the trust calculation.
6. The influence of all rating scores from different referees is also considered.
7. Finally, the trust value of a potential provider is calculated based on the above factors.

Based on the above considerations, we developed the following formula to calculate trust values in the PBTrust model:
$$\mathit{Trust} = \mathit{EW} \times \frac{\sum_{k=1}^{\mathit{RN}} (\mathit{Sim}_k \times \mathit{SInd}_k \times \mathit{TStamp}_k)}{\mathit{RN}} \tag{4}$$
In Formula (4), EW is the experience weight of the provider; Sim_k is the similarity between the priority distribution of attributes in the service from the kth reference report and that of the requested service; SInd_k is the suitability indicator based on the kth reference's rating score and the priorities in the requested service; TStamp_k is the timestamp weight of the kth reference; and RN (RN > 0) is the number of references requested by the consumer. The detailed design of the calculation of EW, Sim_k, SInd_k, and TStamp_k in Formula (4) is introduced in the following subsections.

5.2. Experience weight calculation

The experience weight EW represents the general performance of a service provider on this service. The higher the experience weight, the bigger its contribution to the trust calculation. EW is constructed from two factors, SRate and Fsn: SRate is the success rate (see Definition 3), while Fsn is the contribution to EW from the total number of previous successful performances of the provider. EW is defined by the following formula:
$$\mathit{EW} = \mathit{SRate} \times \mathit{Fsn} \tag{5}$$
SRate is the success rate of the provider on the service (see Definition 3). As shown in Formula (6), Fsn is an increasing function that saturates exponentially and adjusts the weight of experience in the trust calculation: as SNum becomes higher, Fsn slows the growth of EW. Hence, by including Fsn, new providers (i.e., providers that have not yet provided many services) still have a chance to be selected. The coefficient λ in Formula (6) controls the speed of change of the curve and can be adjusted by users for different application domains. Fig. 1 shows the growth of Fsn when λ = 3.
$$\mathit{Fsn} = 1 - e^{-\mathit{SNum}/\lambda} \tag{6}$$
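As a quick illustration, Formulas (5) and (6) can be read directly into code (a sketch; the function name is ours, and the default λ = 3 matches Fig. 1):

```python
import math

def experience_weight(s_rate: float, s_num: int, lam: float = 3.0) -> float:
    """EW = SRate * Fsn (Formula (5)), with Fsn = 1 - exp(-SNum/lam) (Formula (6)).

    Fsn saturates toward 1 as SNum grows, so long success histories stop
    adding extra weight while newcomers still receive a non-zero EW.
    """
    fsn = 1.0 - math.exp(-s_num / lam)
    return s_rate * fsn

print(experience_weight(0.7, 35))  # ~0.7: with SNum = 35, Fsn is already ~1
print(experience_weight(0.7, 1))   # much smaller: a nearly new provider
```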
Fig. 1. Fsn’s increasing speed when λ = 3.
Fig. 2. The dot product of two vectors.
5.3. Similarity calculation

The similarity of priorities between the ith reference and the requested service expresses the extent to which the reference can reflect the provider's potential performance on the requested service. To determine this, we must compare the priorities of the requested service with those of the referenced service. In the PBTrust model, a service is described by a matrix (see Definition 1). Since the attributes in the requested service and in the referenced service appear in the same order, we can omit the attributes during the similarity calculation, so each description matrix reduces to a vector of priority values for the corresponding attributes. We can then use the dot product of the two vectors: if the angle between their directions is denoted θ, the normalized dot product of the two vectors equals cos θ. In Fig. 2, SReq represents the priority vector of the request, SRef the priority vector of the service reference, and θ the angle between them. Since all attribute priorities are non-negative numbers summing to 1, the range of θ is [0°, 90°] and the range of cos θ is [0, 1]. If θ = 0° and cos θ = 1, there is no difference between the two vectors' directions: the attribute priorities of the requested service and the referenced service are the same, so the reference can completely reflect the provider's performance on the requested service. On the other hand, if θ = 90° and cos θ = 0, the directions of the two vectors differ as much as possible: the attribute priorities of the requested and referenced services are totally different, so the reference cannot reflect the provider's performance on the requested service. With the explanation above, the similarity can be calculated by the following formula:
$$\mathit{Sim}_i = \frac{\sum_{k=1}^{n} (\mathit{CW}_k \times \mathit{RW}_k)}{\sqrt{\left(\sum_{k=1}^{n} \mathit{CW}_k^2\right) \times \left(\sum_{k=1}^{n} \mathit{RW}_k^2\right)}} \tag{7}$$
where CW_k and RW_k represent the weight of the kth attribute in the service requested by the consumer and in the referenced service of the provider, respectively.

For example, suppose there is a Service Request with three attributes, i.e., cost, speed, and quality, and the priority of each attribute is specified in the vector Req = ⟨0.3, 0.5, 0.2⟩. Suppose we have two potential providers, P1 and P2, for this request, and the references of the two providers carry the priority vectors Ref_P1 = ⟨0.3, 0.5, 0.2⟩ and Ref_P2 = ⟨0, 0, 1⟩, respectively. By Formula (7), Sim_P1 = 1 and Sim_P2 = 0.32. That means that the reference provided by P1 has a priority distribution very similar to that of the requested service, while the reference provided by P2 has a very different priority distribution from the requested service. Therefore, different weights will be assigned to the two references when calculating the trust values of P1 and P2.

5.4. Suitability indicator calculation

The purpose of the suitability indicator is to predict the potential performance of a provider on the requested service using two pieces of information: the reference rating score and the priorities of the attributes in the requested service. The suitability indicator of the ith reference can be calculated by the following formula:
$$\mathit{SInd}_i = \sum_{k=1}^{n} R_k \times \mathit{CW}_k \tag{8}$$
where CW_k represents the weight of the kth attribute in the service requested by the consumer, and R_k is the rating value for the kth attribute given by the ith referee.
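Both Sim and SInd are simple vector operations. The following sketch (function names ours) reproduces the Sim values of the example in Section 5.3 and applies Formula (8) to the rating score ⟨60, 40, 90⟩ from Section 4.4 under the requested priorities ⟨0.3, 0.5, 0.2⟩:

```python
import math

def similarity(cw, rw):
    """Cosine similarity of requested and referenced priority vectors (Formula (7))."""
    dot = sum(c * r for c, r in zip(cw, rw))
    norms = math.sqrt(sum(c * c for c in cw)) * math.sqrt(sum(r * r for r in rw))
    return dot / norms

def suitability(ratings, cw):
    """Suitability indicator: ratings weighted by requested priorities (Formula (8))."""
    return sum(r * c for r, c in zip(ratings, cw))

req = [0.3, 0.5, 0.2]
print(round(similarity(req, [0.3, 0.5, 0.2]), 2))  # 1.0  (provider P1)
print(round(similarity(req, [0.0, 0.0, 1.0]), 2))  # 0.32 (provider P2)
print(suitability([60, 40, 90], req))              # 56.0
```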
5.5. Timestamp calculation

The purpose of using a timestamp to weight the influence of a reference on the trust value is to eliminate or reduce the effect of out-of-date rating scores, depending on the value of T in the reference (see Definition 5). The timestamp calculation is borrowed from the CR model [17]. The timestamp weight of the ith reference report is calculated by the following formula:

$$\mathit{TStamp}_i = e^{-t(i)/\lambda} \tag{9}$$

In Formula (9), t(i) is the time period between the moment the ith reference was generated and the current time, and λ is a coefficient controlling the decreasing speed of the time curve, depending on the application domain [17]. Fig. 3 shows the decreasing speed of TStamp when λ = 3.

Fig. 3. The graph of Eq. (9) when λ = 3.
6. Experiments and analysis

6.1. Experiment objective and benchmark

To evaluate the performance of the PBTrust model in service selection, we designed several experiments to compare it with the CR model, one of the most highly regarded distributed trust models (see Section 2). The objective of the experiments is to test whether the PBTrust model can help a service consumer to select a more satisfactory service in distributed environments. To this end, the Satisfaction Degree (SatDegree) is defined as the comparison benchmark.

Definition 7. The Satisfaction Degree (SatDegree) measures how close a service selected by a trust model comes to the expected service of a consumer. It can be calculated using Formula (10):
$$\mathit{SatDegree} = \mathit{Sim} \times \frac{\sum_{i=1}^{n} R_i}{n \times 100} \tag{10}$$

where Sim is the similarity of the priority distributions of the referenced service and the requested service (see Formula (7)), R_i is the rating value of the ith attribute of the service, and n is the number of attributes.
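A sketch of the benchmark computation (the function name is ours). Since the expected service has Sim = 1 and a rating of 100 on every attribute, SatDegree is at most 1:

```python
def sat_degree(sim: float, ratings) -> float:
    """Satisfaction Degree (Formula (10)): similarity-weighted mean rating in [0, 1]."""
    return sim * sum(ratings) / (len(ratings) * 100)

print(sat_degree(1.0, [100, 100, 100]))  # 1.0: the consumer's ideal service
print(sat_degree(0.32, [0, 50, 100]))    # 0.16
```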
Table 1
Average similarity degrees of each provider group.

Group ID (l)   Average Similarity Degree (AveSim_l)
Group 1        0.2
Group 2        0.3
Group 3        0.4
Group 4        0.5
Group 5        0.6
Group 6        0.7
Group 7        0.8
Table 2
Differences of service and trust description in the CR model and the PBTrust model.

Similarity factor & priority. The CR model: the similarity factor is not considered; similarity factors and priority distributions do not affect trust calculations. The PBTrust model: the similarity factor is considered; trust calculations are affected by similarity factors and priority distributions.

Service description. The CR model: a service is represented as a single item without consideration of service attributes. The PBTrust model: a service is described by a matrix with multiple attributes and their priority distributions.

Description of providers' performances. The CR model: providers' performances are represented by single rating values. The PBTrust model: providers' performances are represented as vectors, where each element represents the rating value of a particular service attribute.
The performance difference between the PBTrust model and the CR model is mainly driven by the three newly introduced items in the PBTrust model, i.e., the Similarity (Sim), the Suitability Indicator (SInd), and the Experience Weight (EW). As the Timestamp (TStamp) item is borrowed directly from the CR model, it does not cause any performance difference between the two models; for this reason, we did not include the TStamp item in our experiments.

6.2. Experiment 1: evaluating the impacts of Sim and SInd

6.2.1. Experimental setting

In Experiment 1, three attributes, cost, speed, and quality, are considered for each service. We used a C++ program to randomly create seventy service providers (P1, P2, ..., P70) with different priorities and rating scores, organized into seven groups according to the average similarity degree of the providers in each group. One Service Request with the following description was included:
$$\mathit{SDes} = \begin{pmatrix} \text{Cost} & \text{Speed} & \text{Quality} \\ 0 & 0.2 & 0.8 \end{pmatrix}$$
The seventy service providers were classified into seven groups according to the similarity degrees of their reference priorities with the requested priorities. Each provider group (G_l) has a different Average Similarity Degree (AveSim_l), which indicates the average similarity degree of its group members. AveSim_l can be calculated using Formula (11):
$$\mathit{AveSim}_l = \frac{\sum_{P_i \in G_l} \mathit{Sim}_i}{|G_l|} \tag{11}$$
where P_i is a provider in group G_l, Sim_i is the similarity between P_i's reference and the requested service SDes, and |G_l| is the size of G_l. The average similarity degree of each group is shown in Table 1, and the priority distributions and rating scores of the providers can be found in Appendix B.

As Experiment 1 focuses on evaluating the impacts of Sim and SInd, we exclude the effects of EW and TStamp in this experiment by assigning the same values to all providers: Exp = ⟨100%, 100⟩ and TStamp = "30/01/2011". In addition, we set the reference number (RN) of the Service Request to 1, which means only one reference report is collected from each provider.

6.2.2. Trust value transfer function

In order to compare the two trust models, a standard service description format is required. However, there are several differences between the service descriptions of the two models, which are listed in Table 2.
Fig. 4. The Satisfaction Degrees of the services selected by the CR model and the PBTrust model in Experiment 1.
To standardize the service descriptions in the two models, we define a transfer function f(M, V) to convert trust values from the CR model to the PBTrust model. In the PBTrust model, a service S includes a number of attributes, i.e., S = (A_1, A_2, ..., A_n). In order to match the service representation in the CR model, we can treat the attributes of S as sub-services, so that S can be represented as S = (S_1, S_2, ..., S_n). Then the CR model can be used to calculate the trust values of the sub-services S_1, S_2, ..., S_n, and the overall trust value of S in the CR model is obtained as the weighted average over all sub-services. Since the CR model does not consider priorities, we assume that all attributes have equal priority values summing to 1. Based on these considerations, the trust transfer function is defined as Formula (12):
$$f(M, V) = \frac{\sum_{i=1}^{n} R_i}{n} \tag{12}$$

where M is the service description matrix of a referenced service, V is the rating vector of a service provider from the referee, R_i is the rating value of the ith attribute, and n is the number of attributes of the service.
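In code, the transfer function is just the unweighted mean of the per-attribute ratings (a minimal sketch; the name is ours):

```python
def cr_transfer(ratings) -> float:
    """f(M, V) (Formula (12)): CR-style overall trust as the plain mean rating,
    i.e. every attribute is given the same priority."""
    return sum(ratings) / len(ratings)

print(cr_transfer([60, 40, 90]))  # 63.33...: ignores the consumer's priorities
```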
In order to compare the performances of the CR model and the PBTrust model, we need to evaluate which of the services selected by the two models better satisfies the requirements of the consumer. The priority distribution in a Service Request indicates the expected priority distribution of the consumer. Although a Service Request does not state an expected rating value for each attribute, it can be assumed that a consumer always expects the highest rating value (i.e., 100) on each attribute. Therefore, for this experiment, the expected rating scores of the consumer on Cost, Speed, and Quality are (100, 100, 100).

6.2.3. Experimental results and analysis of Experiment 1

In Experiment 1, services were selected from the seven groups using the two trust models, and the Satisfaction Degrees of the selected services were calculated; the results are illustrated in Fig. 4. From Fig. 4, it can be seen that the Satisfaction Degrees of the services selected by the PBTrust model are always higher than those of the CR model. That means that, by including Sim and SInd, the PBTrust model can select better services than the CR model. In addition, the performances of the two models become closer as the Average Similarity Degree increases. This is because the CR model is more likely to select services with high Sim degrees when the average priorities of the service group are closer to the requested service priorities. However, the computational complexity of the PBTrust model (i.e., O(mn)) is higher than that of the CR model (i.e., O(1)). If both the number of attributes (m) in the service description (see Definition 1) and the number of references (RN) requested (n) by a service consumer (see Definition 4) are very large (e.g., > 1000), the CR model has lower computational overhead, while the PBTrust model still gives better selections. This result indicates that the PBTrust model is especially suitable for selecting service providers when the numbers of service attributes and references are not very high (e.g., < 1000).

6.3. Experiment 2: evaluating the impact of EW

6.3.1. Setting of the service consumer in Experiment 2

In Experiment 2, we test the impact of EW in the PBTrust model (see Eq. (5)). This experiment also includes 70 service providers, with the same reference priority values and rating values as in Experiment 1. In addition, we simulated four scenarios to evaluate the impacts of EW; in each scenario, a different EW value was assigned to each provider, and the average EW values of the seventy providers in the four scenarios are 20%, 40%, 60%, and 80%, respectively.
Fig. 5. The Satisfaction Degrees of services selected by the CR model and the PBTrust model in Experiment 2.
In Experiment 2, we also adopted SatDegree as the evaluation criterion, but modified the calculation method to take the possibility of failure into account. Formula (13) shows the calculation of SatDegree in Experiment 2; the value of SatDegree is zero when a service has failed:
$$\mathit{SatDegree} = \begin{cases} \mathit{Sim} \times \dfrac{\sum_{i=1}^{n} R_i}{n \times 100} & \text{service executed} \\ 0 & \text{service failed} \end{cases} \tag{13}$$
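Formula (13) only adds the interaction outcome to Formula (10); a sketch (the name is ours, and how 'executed' is decided, e.g. sampling success with probability SRate, is left to the simulation):

```python
def sat_degree_exp2(sim: float, ratings, executed: bool) -> float:
    """Satisfaction Degree in Experiment 2 (Formula (13)):
    Formula (10) if the service was executed, zero if it failed."""
    return sim * sum(ratings) / (len(ratings) * 100) if executed else 0.0
```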
6.3.2. Experimental results and analysis of Experiment 2

Fig. 5 shows the experimental results of Experiment 2. From this figure, it can be seen that when the average experience weight is low (the first two scenarios), which means that most service providers have low success rates, the PBTrust model performs much better than the CR model. As the average experience weight of the providers increases, the performances of the two models become more similar. This is because in the first two scenarios the CR model is more likely to select providers with low success rates, which causes high risks in service delivery; such problems are avoided in the PBTrust model.

7. Conclusion and future work

In this paper, the PBTrust model was proposed for selecting service providers in general service-oriented environments. In the PBTrust model, the trust value of a service provider is calculated by considering third party evaluations, the provider's previous overall performance, the suitability of the provider for the requested service under the attribute priorities requested by the consumer, and rating scores from third party references weighted by their timestamps. Since the reputation of a service provider is derived from third party referees in a rich context format, it can easily be used to handle different types of services. In the PBTrust model, the record of agent experience can also be updated dynamically without a centralized mechanism. This feature makes the model suitable for open and dynamic environments. Unlike the other distributed trust models introduced in Section 2 [24,17,25], the PBTrust model uses a matrix to describe trust information and preferences, which provides a more robust and reasonable way to evaluate services from different perspectives. Based on the experimental results, it can be seen that the PBTrust model performs better than the CR model in selecting services. Hence, we claim that the PBTrust model has the following advantages:
• The PBTrust model considers the attributes of a service and uses priorities to distinguish the importance of different attributes. This feature allows more objective evaluation of both required services and providers’ reputations.
• The PBTrust model uses a relatively simple method of describing the different attributes of a service.
• The PBTrust model introduces the concept of experience weight, which can counter subjective references and cheating references.

Future research in this area will focus on the following directions. First, in the PBTrust model, only single Service Requests were included; in future work, competition among consumers will be considered in service selection. Second, the current PBTrust model assumes that each service is provided by a single provider; in the real world, however, services may be combined in various ways to serve the consumer. In [26], we introduced some preliminary results on trust evaluation approaches for composed services, and the evaluation of groups of service providers will be another direction of this research. Third, in the proposed approach, we assume that references cannot be falsified by provider agents. This assumption may bring some security issues, so we will investigate possible security protocols or techniques to avoid such problems in the future. Last but not least, more complex and robust interaction protocols which allow indirect recommendations among different providers and consumers will be investigated.
Appendix A

Table A.1
Symbol notation index.

Symbol       Meaning                                                      Occurrence
SDes         the service description of a service request                 Definitions 1, 4, 5 & Eq. (1)
A_n          the nth attribute                                            Definition 1 & Eq. (1)
W_n          the priority value of the nth attribute                      Definition 1 & Eq. (1)
RS           the rating scores of a service provider on the attributes    Definitions 2 & 5
R_n          the rating value of the nth attribute                        Definition 2 & Eqs. (8), (10), (12)
Exp          the service experience                                       Definitions 3 & 6
SRate        the success rate of a service provider                       Definition 3 & Eqs. (3), (5)
SNum         the number of successes of a service provider                Definition 3 & Eqs. (2), (3), (6)
SReq         the service request                                          Definition 4
CID          the service consumer ID                                      Definition 4
RN           the required number of references                            Definition 4 & Eq. (4)
SThreshold   the required threshold of success rate                       Definition 4
Ref          the reference of a service provider                          Definition 5
RefID        the referee ID                                               Definition 5
SRep         the service reply from a service provider                    Definition 6
SPID         the service provider ID                                      Definition 6
RefSet       the set of references                                        Definition 6
EW           the experience weight of the provider                        Eqs. (4), (5)
Sim          the similarity between service request and reference         Eqs. (4), (7), (10), (11), (13)
SInd         the suitability indicator of the service provider            Eqs. (4), (8)
TStamp       the timestamp of the reference                               Eqs. (4), (9)
Fsn          the experience function of the service provider              Eqs. (5), (6)
CW           the weight of an attribute in the service request            Eqs. (7), (8)
RW           the weight of an attribute in the reference                  Eq. (7)
Appendix B

Table B.1
The setting of service providers' references.

Provider ID (i)   Group ID (l)   Priority distribution on (cost, speed, quality)   Ratings
P1                1              (0.9, 0.1, 0)                                      20, 100, 50
P2                1              (0.8, 0.2, 0)                                      100, 20, 40
P3                1              (0.7, 0.3, 0)                                      100, 50, 0
P4                1              (0.9, 0, 0.1)                                      100, 50, 20
P5                1              (0.8, 0.1, 0.1)                                    20, 100, 40
P6                1              (0.5, 0.5, 0)                                      50, 100, 0
P7                1              (0.7, 0.2, 0.1)                                    60, 10, 100
P8                1              (0.4, 0.6, 0)                                      60, 60, 60
P9                1              (0.8, 0, 0.2)                                      40, 20, 100
P10               1              (0.3, 0.3, 0.4)                                    0, 50, 100
P11               2              (0.7, 0.3, 0)                                      20, 100, 50
P12               2              (0.9, 0, 0.1)                                      100, 20, 40
P13               2              (0.6, 0, 0.4)                                      100, 50, 0
P14               2              (0.7, 0.2, 0.1)                                    100, 50, 20
P15               2              (0.4, 0.6, 0)                                      20, 100, 40
P16               2              (0.7, 0.1, 0.2)                                    50, 100, 0
P17               2              (0.5, 0.4, 0.1)                                    60, 60, 60
P18               2              (0.7, 0, 0.3)                                      60, 10, 100
P19               2              (0.5, 0.3, 0.2)                                    40, 20, 100
P20               2              (0, 0.5, 0.5)                                      0, 50, 100
P21               3              (0.6, 0.4, 0)                                      20, 100, 50
P22               3              (0.8, 0.1, 0.1)                                    100, 20, 40
P23               3              (0.8, 0, 0.2)                                      100, 50, 0
P24               3              (0.7, 0.1, 0.2)                                    100, 50, 20
P25               3              (0.3, 0.6, 0.1)                                    20, 100, 40
P26               3              (0.7, 0, 0.3)                                      60, 60, 60
P27               3              (0.6, 0.1, 0.3)                                    50, 100, 0
P28               3              (0.3, 0.5, 0.2)                                    60, 10, 100
P29               3              (0.6, 0, 0.4)                                      40, 20, 100
P30               3              (0.2, 0.3, 0.5)                                    0, 50, 100
P31               4              (0.7, 0.3, 0)                                      20, 100, 50
P32               4              (0.9, 0, 0.1)                                      100, 20, 40
P33               4              (0.4, 0.6, 0)                                      100, 50, 0
P34               4              (0.1, 0.7, 0.2)                                    60, 60, 60
P35               4              (0.3, 0.5, 0.2)                                    100, 50, 20
P36               4              (0.5, 0.2, 0.3)                                    20, 100, 40
P37               4              (0, 0.7, 0.3)                                      50, 100, 0
P38               4              (0.3, 0.3, 0.4)                                    60, 10, 100
P39               4              (0.4, 0, 0.6)                                      40, 20, 100
P40               4              (0.2, 0.3, 0.5)                                    0, 50, 100
P41               5              (0.3, 0.7, 0)                                      20, 100, 50
P42               5              (0.2, 0.8, 0)                                      100, 20, 40
P43               5              (0.7, 0, 0.3)                                      100, 50, 0
P44               5              (0.5, 0.2, 0.3)                                    100, 50, 20
P45               5              (0, 0.7, 0.3)                                      60, 60, 60
P46               5              (0.5, 0, 0.5)                                      20, 100, 40
P47               5              (0.4, 0.2, 0.4)                                    50, 100, 0
P48               5              (0.3, 0.3, 0.4)                                    60, 10, 100
P49               5              (0.4, 0, 0.6)                                      40, 20, 100
P50               5              (0, 0.1, 0.9)                                      0, 50, 100
P51               6              (0.5, 0.3, 0.2)                                    20, 100, 50
P52               6              (0.1, 0.7, 0.2)                                    100, 20, 40
P53               6              (0.3, 0.5, 0.2)                                    100, 50, 0
P54               6              (0.5, 0.2, 0.3)                                    100, 50, 20
P55               6              (0, 0.7, 0.3)                                      20, 100, 60
P56               6              (0.5, 0, 0.5)                                      60, 60, 60
P57               6              (0.1, 0.4, 0.5)                                    50, 100, 0
P58               6              (0.2, 0.3, 0.5)                                    60, 10, 100
P59               6              (0.2, 0, 0.8)                                      40, 20, 100
P60               6              (0, 0.2, 0.8)                                      0, 50, 100
P61               7              (0.5, 0.4, 0.1)                                    20, 100, 50
P62               7              (0.5, 0, 0.5)                                      100, 20, 40
P63               7              (0.4, 0.2, 0.4)                                    100, 50, 0
P64               7              (0.3, 0.3, 0.4)                                    100, 50, 20
P65               7              (0.4, 0, 0.6)                                      60, 60, 60
P66               7              (0.1, 0.4, 0.5)                                    20, 100, 40
P67               7              (0.2, 0.3, 0.5)                                    50, 100, 0
P68               7              (0.2, 0.1, 0.7)                                    60, 10, 100
P69               7              (0.1, 0, 0.9)                                      40, 20, 100
P70               7              (0, 0.2, 0.8)                                      0, 50, 100
References

[1] M. Bell, SOA Modeling Patterns for Service Oriented Discovery and Analysis, Wiley & Sons, 2010.
[2] I.J. Taylor, E. Deelman, D. Gannon, M. Shields (Eds.), Workflows for e-Science – Scientific Workflows for Grids, Springer, 2007.
[3] X. Fan, M. Li, J. Ma, Y. Ren, H. Zhao, Z. Su, Behavior-based reputation management in P2P file-sharing networks, J. Comput. System Sci. 78 (6) (2012) 1737–1750.
[4] S. Ramchurn, D. Huynh, N. Jennings, Trust in multi-agent systems, Knowl. Eng. Rev. 19 (2004) 1–25.
[5] K. Macarthur, Trust and reputation in multi-agent systems, in: Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS'08), Estoril, Portugal, 2008.
[6] W. Lin, C. Lo, K. Chao, M. Younas, Consumer-centric QoS-aware selection of web services, J. Comput. System Sci. 74 (2) (2008) 211–231.
[7] P. Varalakshmil, S.T. Selvil, A.J. Ashrafe, K. Karthick, B-tree based trust model for resource selection in grid.
[8] V. Lesser, Cooperative multiagent systems: A personal view of the state of the art, IEEE Trans. Knowl. Data Eng. 11 (1999) 133–142.
[9] A. Ali, S. Ludwig, O. Rana, A cognitive trust-based approach for web service discovery and selection, in: Proceedings of the 3rd IEEE European Conference on Web Services (ECOWS'05), 2005, pp. 1–5.
[10] K. Fullam, K. Barber, Dynamically learning sources of trust information: Experience vs. reputation, in: Proceedings of the 6th International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS'07), Honolulu, Hawaii, 2007, pp. 1055–1062.
[11] H. Billhardt, R. Hermoso, S. Ossowski, R. Centeno, Trust-based service provider selection in open environments, in: Proceedings of the 2007 ACM Symposium on Applied Computing (SAC'07), ACM, Seoul, Korea, 2007, pp. 1375–1380.
[12] M. Sharmin, S. Ahamed, S. Ahmed, H. Li, Ssrd+: A privacy-aware trust and security model for resource discovery in pervasive computing environment, in: Proceedings of the 30th Annual International Computer Software and Applications Conference (COMPSAC'06), IEEE Computer Society, Chicago, USA, 2006, pp. 67–70.
[13] G. Zacharia, P. Maes, Trust management through reputation mechanisms, Appl. Artif. Intell. 14 (9) (2000) 881–907.
[14] J. Sabater, C. Sierra, Regret: Reputation in gregarious societies, in: Proceedings of the 5th International Conference on Autonomous Agents (Agents'01), Montreal, Canada, 2001, pp. 194–195.
[15] J. Sabater, C. Sierra, Reputation and social network analysis in multi-agent systems, in: Proceedings of the 1st International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS'02), ACM, Bologna, Italy, 2002, pp. 475–482.
[16] S. Sen, N. Sajja, Robustness of reputation-based trust: Boolean case, in: Proceedings of the 1st International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS'02), Bologna, Italy, 2002, pp. 288–293.
[17] T. Huynh, N. Jennings, N. Shadbolt, Certified reputation: How an agent can trust a stranger, in: Proceedings of the 5th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS'06), Hakodate, Japan, 2006, pp. 1217–1224.
[18] X. Su, M. Zhang, Y. Mu, K.M. Sim, PBTrust: A priority-based trust model for service selection in general service-oriented environments, in: Proceedings of the IEEE/IFIP International Conference on Embedded and Ubiquitous Computing, IEEE Computer Society, Kowloon, Hong Kong, 2010, pp. 841–848.
[19] J. O'Donovan, B. Smyth, Trust in recommender systems, in: Proceedings of the 10th International Conference on Intelligent User Interfaces (IUI'05), ACM, New York, NY, USA, 2005, pp. 167–174, http://dx.doi.org/10.1145/1040830.1040870.
[20] P. Massa, P. Avesani, Trust-aware recommender systems, in: Proceedings of the 2007 ACM Conference on Recommender Systems (RecSys'07), ACM, New York, NY, USA, 2007, pp. 17–24, http://dx.doi.org/10.1145/1297231.1297235.
[21] Z. Huang, Analysis of the user similarity network for distributed recommendation, in: Proceedings of the 17th Annual Workshop on Information Technologies and Systems, 2007.
[22] L. Zhen, Z. Jiang, H. Song, Distributed recommender for peer-to-peer knowledge sharing, Inform. Sci. 180 (2010) 3546–3561.
[23] S. Schmidt, R. Steele, T. Dillon, E. Chang, Fuzzy trust evaluation and credibility development in multi-agent systems, Appl. Soft Comput. 7 (2) (2007) 492–505.
[24] M. Sensoy, P. Yolum, A context-aware approach for service selection using ontologies, in: Proceedings of the 5th International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS'06), Hakodate, Japan, 2006, pp. 931–938.
[25] W. Tsai, P. Zhong, X. Bai, J. Elston, Role-based trust model for community of interest, in: Proceedings of the IEEE International Conference on Service-Oriented Computing and Applications (SOCA'09), Taipei, Taiwan, 2009, pp. 1–8.
[26] X. Su, M. Zhang, Y. Mu, Q. Bai, GTrust: An innovated trust model for group services selection in web-based service-oriented environments, in: Proceedings of the 12th International Conference on Web Information Systems Engineering (WISE'11), Sydney, Australia, 2011, pp. 306–313.