Towards a Collaborative Reputation Based Service Provider Selection in Ubiquitous Computing Environments

Malamati Louta

Technological Education Institute of Western Macedonia, Department of Business Administration, Koila, 50100 Kozani, Greece
University of Western Macedonia, Department of Information and Communications Technologies Engineering, 50100 Kozani, Greece
{[email protected]}

Abstract. From a market based perspective, entities composing dynamic ubiquitous computing environments may be classified into two main categories that are, in principle, in conflict. These are the Service Resource Requestors (SRRs), wishing to use services and/or exploit resources offered by other system entities, and the Service Resource Providers (SRPs), which offer the services/resources requested. Under the assumption that a number of SRPs may handle the SRRs' requests, the SRRs may decide on the most appropriate SRP for the service / resource requested on the basis of a weighted combination of the evaluation of the quality of their offer (performance related factor) and of their reputation rating (reliability related factor). In this context, a reputation mechanism is proposed, which rates SRPs with respect to whether or not they honoured the agreements they have established with the SRRs. The reputation mechanism is collaborative, distributed and robust against false and spurious ratings.

Keywords: Intelligent Multi Agent Systems, Performance & Reliability Related Factors, Collaborative Reputation Mechanism, Ubiquitous Computing.

1 Introduction

Highly competitive and dynamic ubiquitous computing environments (including pervasive, peer-to-peer, grid computing, mobile ad-hoc & sensor networks and electronic communities) comprise system entities which may be classified into two main categories that are, in principle, in conflict. These two categories are: the entities that wish to use services and/or exploit resources offered by other system entities (Service/Resource Requestors - SRRs) and the entities that offer the services / resources requested (Service/Resource Providers - SRPs). In general, the SRPs' main role is to develop, promote and provide the desired services and resources trustworthily, at a high quality level, in a time and cost effective manner.

The aim of this paper is, in accordance with efficient service operation objectives, to enhance the sophistication of the functionality that can be offered by ubiquitous intelligent computing environments. Hence, the following business case, which falls into the realm of ubiquitous computing, may be pursued: SRRs should be provided with mechanisms that enable them to find and associate with the most appropriate SRPs, i.e., those offering the desirable quality of service at a certain time period in a cost efficient manner, while exhibiting reliable behavior. This study presents such mechanisms. Specifically, the SRRs' decision on the most appropriate SRP is based on a weighted combination of the evaluation of the quality of the SRPs' offer (performance related factor) and of their reputation rating (reliability related factor). The SRPs' performance evaluation factor reflects the fact that there may in general be different levels of satisfaction with respect to the various SRPs' offers, while the reliability factor is introduced in order to reflect whether or not the SRP ultimately provides the SRR with the service / resource that corresponds to the established contract terms.

A wide variety of negotiation mechanisms, including auctions, bilateral (1 to 1) and/or multilateral (M to N) negotiation models and strategies, as well as posted offer schemes (i.e., a nonnegotiable, take-it-or-leave-it offer), may be adopted in order to evaluate the quality of the SRP offers and establish the 'best' possible contract terms and conditions with respect to service/resource access and provision [1][2]. However, seeking to maximize their welfare while achieving their own goals and aims, entities may misbehave (intentionally or unintentionally), acting selfishly and thus leading to a significant deterioration of the system's performance. Therefore, trust mechanisms should be exploited in order to build the necessary trust relationships among the system entities [3], enabling them to automatically adapt their strategies to different levels of cooperation and trust. Traditional models aiming to avoid strategic misbehaviour (e.g., authentication and authorization schemes [4][5], Trusted Third Parties (TTPs) [6]) may be inadequate or even impossible to apply due to the complexity, the heterogeneity and the high variability of the environment. Reputation mechanisms are employed to provide a "softer" security layer, sustaining rational cooperation and serving as an incentive for good behaviour, as good players are rewarded by the society, whereas bad players are penalized [7].

In the context of this study, for the evaluation of the reliability of SRPs, a collaborative reputation mechanism is proposed, which takes into account the SRPs' past performance in consistently satisfying SRRs' expectations. To be more specific, the reputation mechanism rates the SRPs with respect to whether or not they honoured the agreements established with the SRRs, thus introducing the concept of trust among the involved parties. The reputation mechanism considers both first-hand information (acquired from the evaluator SRR's past experiences with the target SRP) and second-hand information (disseminated from other SRRs), is decentralized and exhibits robust behaviour against spurious and false reputation ratings.
This study is based upon the notion of interacting intelligent agents, which participate in activities on behalf of their owners, while exhibiting properties such as autonomy, reactiveness, and proactiveness, in order to achieve particular objectives and accomplish their goals [8].

The rest of the paper is structured as follows. The related research literature is revisited in Section 2. In Section 3, the fundamental concepts of the proposed collaborative reputation mechanism are presented, aiming to offer an efficient way of building the necessary level of trust in ubiquitous intelligent computing environments. In Section 4, the reputation rating system is mathematically formulated. In Section 5, the SRRs' decision on the most appropriate SRP is formally described. Section 6 provides some initial results, indicative of the efficiency of the SRP selection mechanism presented in this study. Finally, in Section 7, conclusions are drawn and directions for future plans are presented.

2 Related Research

The issue of trust has been gaining an increasing amount of attention in a number of research communities. In [9], the current research on trust management in distributed systems is surveyed and some open research areas are explored. Specifically, the authors discuss representative trust models in the context of P2P systems, mobile ad-hoc networks and electronic communities, including public-key cryptography, the resurrecting duckling model and distributed evidence and recommendation based trust models. Open research issues identified include a) trust / reputation value storage, while preserving its consistency, b) mitigation of the impact of false accusations / malicious behaviour and c) combination and exploitation of trust values across different applications. In [10], a set of aspects is proposed to classify computational trust and reputation models. The classification dimensions considered were the reference model (cognitive or game theoretical), the information sources taken into account for trust and reputation value calculation (direct experiences & witness information, sociological aspects of agents' behavior, prejudice), the global or subjective view of trust and reputation values, the model's context dependence (capability of dealing with several contexts at the same time, while maintaining different trust/reputation values associated to these contexts for each partner), the incorporation of mechanisms to deal with agents showing different degrees of cheating behavior, the type of information expected from witnesses (Boolean information or continuous measures) and the trust/reputation reliability measure. A representative selection of trust and reputation models is classified on the basis of the aforementioned criteria. Based on this study, the authors argue that introducing sociological aspects into these models is a good way to increase the efficiency of current trust and reputation models and to overcome the lack of confidence in e-markets.

In general, reputation mechanisms establish trust by exploiting learning from experience concepts in order to obtain a reliability value for system participants, in the form of a rating based on other entities' view/opinion. Reputation related information may be disseminated to a large number of system participants so that they can adjust their strategies and behaviour, multiplying thus the expected future gains of honest parties, which bear the loss incurred by cooperating and acting for the maximization of the social welfare. Current reputation system implementations consider feedback given by Buyers in the form of ratings in order to capture information on Sellers' past behavior, while the reputation value is computed as the sum (or the mean) of those ratings, either incorporating all ratings or considering only a period of time (e.g., six months) [11], [12]. In [13] the authors, after discussing desired properties of reputation mechanisms for online communities, describe the Sporas and Histos reputation mechanisms for loosely and highly connected online communities, respectively, which were implemented in the Kasbah electronic marketplace. The Sporas reputation mechanism provides a global reputation value for each member of the online community, associated with him/her as part of his/her identity. Histos builds a more personalized system, illustrating pairwise ratings as a directed graph with nodes representing users and weighted edges representing the most recent reputation rating given by one user to another. [14] introduces PeerTrust, an adaptive and dynamic reputation based trust model that helps participants/peers evaluate the trustworthiness of each other based on the community feedback about participants' past behavior. Five important factors are taken into account for the calculation of trust: the feedback a peer obtains from others, the feedback scope (such as the total number of transactions that a peer has with other peers), the credibility factor of the feedback source, the transaction context factor for discriminating mission critical transactions from less or non critical ones, and the community context factor for addressing community related characteristics and vulnerabilities. In [15], the authors base the decision concerning the trustworthiness of a party on a combination of local information acquired from direct interactions with the specific party (if available) and information acquired from witnesses (trusted third parties that have interacted with the specific party in the past). In order to obtain testimonies from witnesses, a trust net is built by seeking and following referrals from neighbours, which may be adaptively chosen. Their approach relies upon the assumption that the vast majority of agents provide honest ratings, in order to override the effect of spurious ratings generated by malicious agents. In [16], the authors study how to efficiently detect deceptive agents following several introduced models of deception, based on a variant of the weighted majority algorithm applied to belief functions.

In the current version of this study, a reputation mechanism is exploited in order to estimate the reliability of each SRP. This estimation constitutes the reliability related factor and is introduced in order to reflect whether or not the SRP ultimately provides the SRR with the service / resource that corresponds to the established contract terms. The SRP's reliability is reduced whenever the SRP does not honour the agreement contract terms, reached in general via a negotiation process. The proposed reputation mechanism is collaborative (the agents, in addition to the information acquired based on their own experiences, exploit information disseminated from other parties, enabling them to identify misbehaving parties in a time efficient manner), distributed and robust against false and/or spurious feedback.

3 General Elements of the Designed Reputation Mechanism

Most reputation based systems in the related research literature aim to enable entities to make decisions on which parties to negotiate/cooperate with or exclude, after they have been informed about the reputation ratings of the parties of interest. The authors in this study do not directly exclude / isolate the SRPs that are deemed misbehaving, but instead base the SRRs' decision on the most appropriate SRP on a weighted combination of the evaluation of the quality of the SRPs' offer (performance related factor) and of their reputation rating (reliability related factor). Each system entity is represented by an intelligent agent acting on behalf of its owner in order to accomplish its tasks. Thus, two agent categories are introduced. Service/Resource Requestor Agents (SRRAs) are assigned the role of capturing the SRRs' preferences, requirements and constraints regarding the requested services / resources, delivering them in a suitable form to the appropriate SRP entities, acquiring and evaluating the corresponding SRPs' offers, and ultimately selecting the most appropriate SRP on the basis of the quality of its offer and its reputation rating. Service/Resource Provider Agents (SRPAs) are the entities acting on behalf of the SRPs. Their role is to collect the SRR preferences, requirements and constraints and to make a corresponding offer, also taking into account certain environmental criteria. SRRAs and SRPAs are both considered to be rational and self-interested, aiming to maximise their owners' profit.

The proposed reputation mechanism is collaborative in the sense that it considers both first-hand information (acquired from the SRRA's past experiences with the SRPAs) and second-hand information (disseminated from other SRRAs). To be more specific, each SRRA keeps a record of the reputation ratings of the SRPAs it has negotiated with and been served by in the past. This rating, based on the direct experiences of the evaluator SRRA with the target SRPA, forms the first factor contributing to the overall SRPA reputation rating. Concerning the SRPAs' reputation ratings based on feedback given by other SRRAs on their experiences in the system (the second factor contributing to the overall SRPA reputation, based on witness information), the problem of finding proper witnesses is addressed by considering a Service/Resource Provider Reputation Broker component (SRPRB), through which the evaluator SRRA obtains references to the SRRAs that have previously been served by the SRPAs under evaluation.

At this point some clarifications with respect to the proposed model should be made. First, each SRRA is willing to share its experiences and to provide, whenever asked for, the reputation ratings of the SRPAs formed on the basis of its past direct interactions. Second, the SRPRB component maintains a list of the SRPAs providing a specific service / resource, as well as a list of the SRRAs that have previously interacted with and been served by a specific SRPA. Third, the reliability of SRPAs is treated as a behavioural aspect, independent of the services / resources provided. Thus, the witness list may be composed of SRRAs which have had direct interactions with the specific SRPA in the past, without considering the service / resource consumed. Fourth, SRPAs have a solid interest in informing the SRPRB of the services / resources they currently offer, while the SRRAs are authorized to access and obtain witness references only in case they send feedback concerning the preferred partner for their past interactions in the system. This policy based approach provides a solution to the inherent incentive based problem of reputation mechanisms, allowing the SRPRB to keep accurate and up to date information.
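The SRPRB bookkeeping and access policy described above may be illustrated with the following minimal Python sketch; the class, method and variable names (e.g., SRPReputationBroker, get_witnesses) are our own illustrative assumptions and not part of the original design.

```python
# Minimal sketch of the SRPRB bookkeeping and access policy described above.
# Class, method and variable names are illustrative assumptions, not the paper's.

from collections import defaultdict


class SRPReputationBroker:
    def __init__(self):
        self.providers_by_service = defaultdict(set)   # service id -> SRPAs offering it
        self.witnesses_by_provider = defaultdict(set)  # SRPA id -> SRRAs it has served
        self.feedback_compliant = set()                # SRRAs that reported on past partners

    def register_offer(self, srpa_id, service_id):
        """SRPAs advertise the services / resources they currently offer."""
        self.providers_by_service[service_id].add(srpa_id)

    def record_transaction(self, srra_id, srpa_id):
        """Called after service provision, so srra_id becomes a potential witness."""
        self.witnesses_by_provider[srpa_id].add(srra_id)

    def submit_feedback(self, srra_id):
        """SRRAs gain access to witness references only by reporting their experiences."""
        self.feedback_compliant.add(srra_id)

    def get_witnesses(self, requesting_srra, target_srpa, n):
        """Return up to n witness references; enforce the feedback-for-access policy."""
        if requesting_srra not in self.feedback_compliant:
            return []
        candidates = self.witnesses_by_provider[target_srpa] - {requesting_srra}
        return sorted(candidates)[:n]
```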
True feedback cannot, however, be automatically assumed. Second-hand information can be spurious (e.g., parties may choose to misreport their experience out of jealousy or in order to discredit trustworthy Providers). In general, a mechanism for eliciting true feedback in the absence of TTPs is required. According to the simplest possible approach that may be adopted in order to account for possible inaccuracies (both intentional and unintentional) in the information provided by the witness SRRAs, the evaluator SRRA can rely mostly on its own experiences rather than on the target SRPA's reputation ratings obtained by contacting the witness SRRAs. In this respect, the SRPA's reputation ratings provided by the witness SRRAs may be attributed a relatively low significance factor. In this paper, we consider that each SRRA is associated with a dynamically updated trust level, which reflects whether the SRRA provides feedback with respect to its experiences with the SRPAs truthfully and in an accurate manner. In essence, this trust level is a measure of the credibility of the witness information. To be more specific, in order to handle inaccurate information, an honesty probability is attributed to each SRRA, i.e., a measure of the likelihood that a SRRA gives feedback compliant with the real picture concerning service provisioning. Second-hand information obtained from trustworthy SRRAs (associated with a high honesty probability) is given a higher significance factor, whereas reports (positive or negative) coming from untrustworthy sources have a small impact on the formation of the SRPAs' reputation ratings.

The evaluator SRRA uses the reputation mechanism to decide on the most appropriate SRPA, especially in cases where the SRRA doubts the accuracy of the information provided by the SRPA. A learning period is required in order for the SRRAs to obtain fundamental information about the SRPAs. In case reputation specific information is not available to the SRRA (both through its own experiences and through the witnesses), the reliability related factor is not considered for the SRPA selection. It should be noted that the reputation mechanism comes at the cost of keeping reputation related information at each SRRA and updating it after service provision / resource consumption has taken place.

4 Reputation Rating System Formulation

Let us assume the presence of $M$ candidate SRPAs interacting with $N$ SRRAs concerning the provisioning of $Q$ services / resources $S = \{s_1, s_2, \ldots, s_Q\}$ requested in a ubiquitous intelligent computing environment. Let the set of agents that represent Service Resource Providers be denoted by $P = \{P_1, P_2, \ldots, P_M\}$ and the set of agents that represent Service Resource Requestors be denoted by $R = \{R_1, R_2, \ldots, R_N\}$. We hereafter consider the request of a SRRA $R_i$ (evaluator) regarding the provision of service $s_c$ ($s_c \in S$), which without loss of generality is provided by all candidate SRPAs $P = \{P_1, P_2, \ldots, P_M\}$. In order to estimate the reputation rating of a target SRPA $P_j$, the evaluator SRRA $R_i$ needs to retrieve from the SRPRB the list $R_w$ of $n$ witnesses ($R_w \subseteq R$). Thereafter, $R_i$ contacts the $n$ witnesses in order to get feedback reports on the behaviour of $P_j$. The target SRPA's $P_j$ overall reputation rating $ORR^{R_i}(P_j)$ may be estimated by the evaluator SRRA $R_i$ in accordance with the following formula:

$$ORR^{R_i}(P_j) = w_{R_i} \cdot DRR^{R_i}(P_j) + \sum_{k=1}^{n} w_{R_k} \cdot DRR^{R_k}(P_j) . \qquad (1)$$

where $DRR^{R_x}(P_j)$ denotes the reputation rating of the target SRPA $P_j$ as formed by SRRA $R_x$ on the basis of its direct experiences with $P_j$ in the past. As may be observed from equation (1), the reputation rating of the target $P_j$ is a weighted combination of two factors. The first factor contributing to the reputation rating value is based on the direct experiences of the evaluator agent $R_i$, while the second factor depends on information regarding $P_j$'s past behaviour gathered from the $n$ witnesses. The set of the $n$ witnesses is a subset of the set $R = \{R_1, R_2, \ldots, R_N\}$. Weight $w_{R_x}$ provides the relative significance of the reputation rating of the target SRPA $P_j$, as formed by the SRRA $R_x$, to the overall reputation rating estimation by the evaluator $R_i$. In general, $w_{R_x}$ is a measure of the credibility of witness $R_x$ and may be a function of the trust level attributed to each SRRA $R_x$ by the evaluator $R_i$. It has been assumed that the weights $w_{R_x}$ are normalized to add up to 1 (i.e., $w_{R_i} + \sum_{k=1}^{n} w_{R_k} = 1$). Thus, weight $w_{R_x}$ may be given by the following equation:

$$w_{R_x} = \frac{TL^{R_i}(R_x)}{\sum_{x \in \{i\} \cup \{1, \ldots, n\}} TL^{R_i}(R_x)} . \qquad (2)$$

where $TL^{R_i}(R_x)$ is the trust level attributed to SRRA $R_x$ by the evaluator $R_i$. It has been assumed that $TL^{R_i}(R_x) \in [0,1]$, with level 1 denoting a fully trusted witness $R_x$ in the eyes of the evaluator $R_i$. One may easily conclude that for the evaluator $R_i$ it stands that $TL^{R_i}(R_i) = 1$.
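Equations (1) and (2) may be illustrated with the following minimal Python sketch; the function and variable names are ours, and the numbers in the usage example are purely illustrative.

```python
# Sketch of equations (1) and (2): the evaluator's own rating and the witness
# ratings are combined with weights proportional to the trust levels attributed
# to their sources (the evaluator trusting itself fully, TL = 1), normalised to sum to 1.

def overall_reputation(direct_rating, witness_ratings, witness_trust_levels):
    """
    direct_rating        : DRR^{R_i}(P_j), the evaluator's own rating of P_j.
    witness_ratings      : DRR^{R_k}(P_j) reported by the n witnesses.
    witness_trust_levels : TL^{R_i}(R_k) for the same n witnesses.
    Returns ORR^{R_i}(P_j) as in equation (1), with weights from equation (2).
    """
    trust_levels = [1.0] + list(witness_trust_levels)   # TL^{R_i}(R_i) = 1
    ratings = [direct_rating] + list(witness_ratings)
    total_trust = sum(trust_levels)                     # denominator of equation (2)
    return sum((tl / total_trust) * r for tl, r in zip(trust_levels, ratings))


# Example: own rating 0.8; two witnesses report 0.6 and 0.9 with trust levels 0.9 and 0.5.
print(overall_reputation(0.8, [0.6, 0.9], [0.9, 0.5]))
```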

Concerning the formation of the reputation ratings $DRR^{R_x}(P_j)$, each SRRA may rate SRPA $P_j$ with respect to its reputation after a transaction has taken place, in accordance with the following equation:

$$DRR^{R_x}_{post}(P_j) = DRR^{R_x}_{pre}(P_j) + k_r \cdot l(DRR^{R_x}_{pre}(P_j)) \cdot \{ rr^{R_x}(P_j) - E[rr^{R_x}(P_j)] \} . \qquad (3)$$

where $DRR^{R_x}_{post}(P_j)$ and $DRR^{R_x}_{pre}(P_j)$ are the SRPA $P_j$ reliability based ratings after and before the updating procedure. It has been assumed that $DRR^{R_x}_{post}(P_j)$ and $DRR^{R_x}_{pre}(P_j)$ lie within the $[0,1]$ range, where a value close to 0 indicates a misbehaving SRP. $rr^{R_x}(P_j)$ is a (reward) function reflecting whether the service quality is compliant with the picture established during the negotiation phase with SRRA $R_x$, and $E[rr^{R_x}(P_j)]$ is the mean (expected) value of the $rr^{R_x}(P_j)$ variable. In general, the larger the $rr^{R_x}(P_j)$ value, the better the SRPA $P_j$ behaves with respect to the agreed terms and conditions of the established contract, and therefore the more positive the influence on the rating of $P_j$. It should be noted that an SRP's misbehaviour (or at least a deterioration of its previous behaviour) leads to a decreased post rating value, since the quantity $\{ rr^{R_x}(P_j) - E[rr^{R_x}(P_j)] \}$ is then negative. The $rr^{R_x}(P_j)$ function may be implemented in several ways. In the context of this study, it was assumed without loss of generality that the $rr^{R_x}(P_j)$ values vary from 0.1 to 1. Factor $k_r$ ($k_r \in (0,1]$) determines the relative significance of the new outcome with respect to the old one. In essence, this value determines the memory of the system. Small $k_r$ values mean that the memory of the system is large; however, good behaviour will gradually improve the SRPA $P_j$'s reputation ratings. Finally, $l(DRR^{R_x}_{pre}(P_j))$ is a function of the $P_j$ reputation rating $DRR^{R_x}_{pre}(P_j)$ and is introduced in order to keep the $P_j$ rating within the range $[0,1]$. A wide range of functions may be defined. In the current version of this study, we considered a simple polynomial model, $l(DRR^{R_x}_{pre}(P_j)) = 1 - DRR^{R_x}_{pre}(P_j)$, for which it stands that $l(DRR^{R_x}_{pre}(P_j)) \to 1$ as $DRR^{R_x}_{pre}(P_j) \to 0$ and $l(DRR^{R_x}_{pre}(P_j)) \to 0$ as $DRR^{R_x}_{pre}(P_j) \to 1$. Other functions could be defined as well.
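A minimal sketch of the rating update of equation (3), using the polynomial $l(x) = 1 - x$ adopted above, is given below; the default value of $k_r$, the example reward values and the final clamp are our own illustrative choices.

```python
# Sketch of the direct-rating update of equation (3) with l(x) = 1 - x.
# The reward rr and its expected value E[rr] are inputs; the paper only
# constrains rr to the range [0.1, 1]. k_r and the clamp are illustrative.

def update_direct_rating(pre_rating, reward, expected_reward, k_r=0.3):
    """Return DRR_post given DRR_pre, rr(P_j), E[rr(P_j)] and the memory factor k_r."""
    l_value = 1.0 - pre_rating                      # l(DRR_pre), damps growth near 1
    post = pre_rating + k_r * l_value * (reward - expected_reward)
    return min(1.0, max(0.0, post))                 # guard against leaving [0, 1]


# Example: a provider that over-delivers (reward above expectation) sees its rating rise.
print(update_direct_rating(pre_rating=0.5, reward=0.9, expected_reward=0.55))  # 0.5525
```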

The trustworthiness of witnesses $TL^{R_i}(R_x)$ initially assumes a high value; that is, all witnesses are considered to report their experiences to $R_i$ honestly. However, as already noted, the trust level is dynamically updated in order to account for potential dissemination of misinformation by the witnesses in the system. Specifically, a witness $R_x$ is considered to misreport his/her past experiences if the target $P_j$'s overall reputation rating $ORR^{R_i}(P_j)$, as estimated by the evaluator $R_i$ by means of equation (1), is beyond a given distance from the rating $DRR^{R_x}(P_j)$ obtained from the witness $R_x$ (formed in accordance with equation (3)), in which case the following expression holds:

$$\left| ORR^{R_i}(P_j) - DRR^{R_x}(P_j) \right| > e . \qquad (4)$$

where $e$ is the predetermined distance level. As may be observed, this approach may be quite efficient in case the population of witnesses honestly reporting their experiences is quite large with respect to the dishonest witnesses. Thus, to account for cases where this does not hold, the evaluator also takes into account the distance from the reputation rating of the target $P_j$ formed on the basis of its own direct experiences, $DRR^{R_i}(P_j)$. In case it stands that $\left| DRR^{R_i}(P_j) - DRR^{R_x}(P_j) \right| > e$ (assuming that the evaluator $R_i$ has obtained the information required based on its direct experiences and has formed an accurate picture of the target $P_j$'s reliability), the evaluator may conclude that the witness misreports its experiences. Otherwise, in case the evaluator $R_i$ does not have a real picture of the target $P_j$'s behaviour, it adjusts the trustworthiness of the witnesses considered for the formation of $P_j$'s reputation only in case $P_j$ is selected for the provisioning of the service / resource, and after service provisioning has taken place and the reputation rating has been accordingly updated. Witnesses' trustworthiness may be updated on the basis of the following expression, in a similar manner to equation (3):

$$TL^{R_i}_{post}(R_x) = TL^{R_i}_{pre}(R_x) + k_b \cdot l(TL^{R_i}_{pre}(R_x)) \cdot a . \qquad (5)$$

where $TL^{R_i}_{post}(R_x)$ and $TL^{R_i}_{pre}(R_x)$ are the witness $R_x$ trustworthiness after and before the updating procedure. It has been assumed that $TL^{R_i}_{post}(R_x)$ and $TL^{R_i}_{pre}(R_x)$ lie within the $[0,1]$ range, where a value close to 0 indicates a dishonest witness. For the reward / penalty parameter $a$ the following expression holds:

$$\begin{cases} a < 0, & \text{if } \left| ORR^{R_i}(P_j) \,/\, DRR^{R_i}(P_j) - DRR^{R_x}(P_j) \right| > e \\ a > 0, & \text{if } \left| ORR^{R_i}(P_j) \,/\, DRR^{R_i}(P_j) - DRR^{R_x}(P_j) \right| < e \end{cases} \qquad (6)$$

where $ORR^{R_i}(P_j) \,/\, DRR^{R_i}(P_j)$ denotes that either the overall or the direct reputation rating serves as the reference value, depending on whether the evaluator has formed an accurate first-hand picture of the target $P_j$, as discussed above. $l(TL^{R_i}_{pre}(R_x))$ is a function of the witness trustworthiness $TL^{R_i}_{pre}(R_x)$ and is introduced in order to keep the witness trustworthiness level within the range $[0,1]$. In the current version of this study, in accordance with equation (3), $l(TL^{R_i}_{pre}(R_x)) = 1 - TL^{R_i}_{pre}(R_x)$, for which it stands that $l(TL^{R_i}_{pre}(R_x)) \to 1$ as $TL^{R_i}_{pre}(R_x) \to 0$ and $l(TL^{R_i}_{pre}(R_x)) \to 0$ as $TL^{R_i}_{pre}(R_x) \to 1$. Factor $k_b$ ($k_b \in (0,1]$) determines the relative significance of the new outcome with respect to the old one, constituting, thus, the memory of the system. At this point, it should be mentioned that forming the reliability rating value of the SRPA $P_j$ requires in some cases (e.g., when consumption of network or computational resources is entailed in the service provision process) a mechanism for evaluating whether the delivered service quality was compliant with the picture promised during the negotiation phase.
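The witness-credibility update of equations (4)-(6) may be sketched as follows. Only the sign of $a$ is fixed by the model; the reward and penalty magnitudes, the threshold $e$ and the factor $k_b$ used here are illustrative assumptions of ours.

```python
# Sketch of the witness trust update of equations (4)-(6): the reported rating is
# compared against the evaluator's reference rating (its direct rating if available,
# otherwise the overall rating); deviations beyond e are penalised, others rewarded.
# Reward/penalty magnitudes, e and k_b are illustrative; the model fixes only the sign of a.

def update_witness_trust(trust_pre, reference_rating, reported_rating,
                         e=0.2, k_b=0.3, reward=0.1, penalty=-0.2):
    """Return TL_post for a witness, given the pre-update trust level TL_pre."""
    deviation = abs(reference_rating - reported_rating)
    a = penalty if deviation > e else reward        # equation (6): sign of a
    post = trust_pre + k_b * (1.0 - trust_pre) * a  # equation (5) with l(TL) = 1 - TL
    return min(1.0, max(0.0, post))


# Example: a witness reporting 0.2 for a provider the evaluator rates 0.8 loses trust.
print(update_witness_trust(trust_pre=0.9, reference_rating=0.8, reported_rating=0.2))
```

Note that, as in equation (3), the $1 - TL^{R_i}_{pre}(R_x)$ term keeps the update small for witnesses that are already highly trusted, so trust is adjusted gradually rather than abruptly.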

5 Decision on the Best Service / Resource Provider

In this study, SRPs that are deemed misbehaving are not directly ostracised; instead, the SRRs' decision on the most appropriate SRP is based on a weighted combination of the evaluation of the SRPs' offer quality (performance related factor) and of their reputation rating (reliability related factor). Considering the fact that there may in general be different levels of satisfaction with respect to the various SRPs' offers, there may be SRPs that, in principle, do not satisfy the SRR with their offer. The evaluator SRRA $R_i$ decides on the most appropriate candidate SRPA $P_j$ for the provision of service / resource $s_c \in S$ (i.e., the SRPA best serving its current service / resource request) on the basis of the following formula:

$$\text{Maximise } APR(P_j) = w_p \cdot U_B(C^{P_j}_{final}) + w_r \cdot ORR^{R_i}(P_j) . \qquad (7)$$

As may be observed, $APR(P_j)$ is an objective function that models the performance and the reliability of the SRPA $P_j$. Among the terms of this function there can be the overall anticipated SRRA satisfaction stemming from the final contract $C^{P_j}_{final}$ reached within the negotiation phase, which may be expressed by the utility function $U_B(C^{P_j}_{final})$ ($U_B(C^{P_j}_{final}) \in [0,1]$ [17], [18]) with respect to the contract/offer proposed to the evaluator $R_i$, and the reputation rating of the target $P_j$. Of course, one of the two factors (anticipated SRRA satisfaction or SRPA reputation rating) can be omitted in certain variants of the general problem version considered in this paper. Weights $w_p$ and $w_r$ provide the relative value of the anticipated user satisfaction and of the reputation related part. It is assumed that the weights $w_p$ and $w_r$ are normalized to add up to 1 (i.e., $w_p + w_r = 1$).
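A minimal sketch of the selection rule of equation (7) follows; the candidate identifiers, utilities and weight values are illustrative.

```python
# Sketch of equation (7): each candidate SRPA is scored by a weighted sum of the
# utility of its final offer and its overall reputation rating, and the candidate
# with the highest APR value is selected. All values shown are illustrative.

def select_provider(candidates, w_p=0.5, w_r=0.5):
    """
    candidates: dict mapping SRPA id -> (offer_utility, reputation_rating),
                with both values in [0, 1]; w_p + w_r is assumed to equal 1.
    Returns the SRPA id maximising APR(P_j) = w_p * U_B + w_r * ORR.
    """
    return max(candidates,
               key=lambda pid: w_p * candidates[pid][0] + w_r * candidates[pid][1])


# Example: P2 wins despite a slightly worse offer because of its higher reputation.
print(select_provider({"P1": (0.85, 0.40), "P2": (0.80, 0.90)}))
```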

6 Performance Evaluation

This section provides some indicative results on the behaviour of the service / resource provider selection mechanisms proposed in this paper. We hereafter assume the existence of an area that falls into the domain of $P = \{P_1, P_2, \ldots, P_M\}$ candidate service providers (that is, a specific request may be handled by any of the candidate SRPs belonging to the set $P$). Regarding the different service / resource requestors that access the area, it is assumed that $N$ classes exist. The SRR classes are interested in the same service / resource, differentiated however with respect to the quality / quantity level required. In order to make the test case more realistic (or general), not all SRPs are assumed to offer all possible quantity / quality levels. The SRPs that do not offer the quality / quantity level required by SRR class $R_i$ constitute the set $I(R_i)$, which comprises the SRPs that are inappropriate for the specific request and should therefore be excluded. Hereafter, it is assumed that $N = 10$ and $M = 10$. Table 1 presents the set of SRPs that are inappropriate for service / resource requests originating from each SRR class.

The proposed framework was empirically evaluated by simulating the interactions among SRRAs and SRPAs, considering the simplest possible case. Specifically, it was assumed that the SRPAs which can handle the request satisfying all requirements of the requestor class offer exactly the same contract to the evaluator SRRAs (the same service / resource characteristics with exactly the same terms and conditions). In such a case, the service / resource provider selection is reduced to choosing the one with the highest reputation value (second factor contributing to equation (7)), since the overall satisfaction stemming from the proposed contract (expressed by the first factor of equation (7)) contributes the same amount to the objective function value for all candidate SRPs.

Table 1. Set of inappropriate SRPs for each SRR class

SRR Class   Inappropriate SRPs
R1          -
R2          P7, P8, P9, P10
R3          -
R4          P7, P9
R5          P2, P5, P7, P8, P9, P10
R6          P3, P5, P7, P8, P9, P10
R7          P2, P3, P4, P5, P7, P8, P9, P10
R8          P3, P5, P7, P9, P10
R9          P3, P5, P7, P9, P10
R10         -
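The candidate filtering implied by Table 1, together with the reduction of equation (7) to a pure reputation comparison under the identical-offer assumption, can be expressed compactly as follows; the data structure simply mirrors Table 1, and the helper names are ours.

```python
# Sketch of the evaluation setup: the inappropriate providers I(R_i) of Table 1 are
# removed for each SRR class; since the remaining offers are identical, equation (7)
# reduces to picking the appropriate SRP with the highest overall reputation rating.

ALL_SRPS = {f"P{i}" for i in range(1, 11)}

INAPPROPRIATE = {          # I(R_i), as listed in Table 1
    "R1": set(), "R2": {"P7", "P8", "P9", "P10"}, "R3": set(), "R4": {"P7", "P9"},
    "R5": {"P2", "P5", "P7", "P8", "P9", "P10"}, "R6": {"P3", "P5", "P7", "P8", "P9", "P10"},
    "R7": {"P2", "P3", "P4", "P5", "P7", "P8", "P9", "P10"},
    "R8": {"P3", "P5", "P7", "P9", "P10"}, "R9": {"P3", "P5", "P7", "P9", "P10"},
    "R10": set(),
}

def candidates_for(srr_class):
    """Appropriate SRPs for a class: all providers minus the I(R_i) set."""
    return ALL_SRPS - INAPPROPRIATE[srr_class]

def best_provider(srr_class, reputation):
    """With identical offers, select the appropriate SRP with the highest reputation."""
    return max(candidates_for(srr_class), key=reputation.get)
```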

In Table 2, the ranking of the SRPs with respect to their overall reliability (estimated on the basis of equation (1)) after the learning period is presented for each SRR class; this ranking reflects whether the SRPs usually meet the quality expectations raised (or promised) by the contract proposed. In order to test this aspect, each SRP has been associated with a reliability probability, i.e., a measure of the likelihood that the SRP delivers the service compliant with the agreement established. This probability has been set to the values illustrated in Table 3. Specifically, with probability 0.9 SRP P5 complies with its promises, whereas P4 maintains its promises with probability 0.2. A mixture of extreme and moderate values has been chosen in order to test the scheme under diverse conditions.

Table 2. Service Resource Providers Reliability Ranking

SRR Class    1    2    3    4    5    6    7    8    9    10
R1           P5   P1   P8   P7   P10  P3   P6   P2   P9   P4
R2           P5   P1   P3   P6   P2   P4
R3           P5   P1   P8   P7   P3   P6   P10  P9   P2   P4
R4           P5   P1   P8   P3   P6   P10  P2   P4
R5           P1   P3   P6   P4
R6           P1   P6   P2   P4
R7           P1   P6
R8           P1   P8   P6   P2   P4
R9           P1   P8   P6   P2   P4
R10          P5   P1   P7   P8   P6   P3   P10  P9   P2   P4

Table 3. Honesty probability associated with each SRP

SRP    Honesty Probability
P1     0.8
P2     0.4
P3     0.6
P4     0.2
P5     0.9
P6     0.6
P7     0.7
P8     0.7
P9     0.4
P10    0.6

Figure 1 illustrates the reputation rating of each SRP, as estimated by SRR class R1, after the learning period. During the learning period, in order for the SRRAs to obtain fundamental information about the reliability related behavioural aspects of the SRPAs, each SRRA selects SRPAs for the service / resource provisioning on a round-robin basis (i.e., the service / resource requests are served by iterating over the candidate SRPs list). We considered that an accurate picture of the SRPs' behaviour is formed by an SRRA after a number of transactions (e.g., 100) have been conducted with each candidate SRP. Finally, all witnesses are assumed to behave honestly; thus, $TL^{R_i}(R_x) = 1$ for all witnesses.

[Figure: bar chart of the reputation ratings (y-axis: Reputation Ratings, range 0-1) of SRPs P1-P10 (x-axis: SRPs) for SRR class R1 after the learning period.]

Fig. 1. Reputation Ratings for all SRPs as formed by SRR class R1 after the learning period.

As may be observed from Table 2 (and Figure 1), considering SRR class R1, the most appropriate SRP is P5 (ranked first), followed by SRP P1 (ranked second), SRP P8 (ranked third), and then P7, P10, P3, P6, P2, P9, while SRP P4 occupies the 10th ranking position. Empty spaces in Table 2 are attributed to the fact that, for a specific SRR class, there may be SRPs that do not offer the required quality / quantity level for the service / resource as requested (i.e., inappropriate SRPs). It may easily be concluded that SRPs P5 and P1 each handle 50% of the requests originating from all SRR classes, because each of them is the most suitable provider for 5 out of the 10 SRR classes (that is, each offers the required quality / quantity level for the service / resource as requested by those SRR classes in a more reliable manner with respect to the rest of the SRPs). Specifically, SRP P5 is more reliable than P1, but is characterized as inappropriate for SRR classes R5, R6, R7, R8, R9, whereas P1 offers the requested service / resource characteristics for all SRR classes. Additionally, one may observe slight differences in the SRP ranking positions for different SRR classes. As an example, the difference between SRR classes R1 and R3 should be noted: for SRR class R1, the 5th, 6th and 7th positions are occupied by SRPs P10, P3 and P6 respectively, while for SRR class R3, the 5th, 6th and 7th positions are occupied by SRPs P3, P6 and P10. These changes may be attributed to the fact that SRPs P3, P6 and P10 are associated with the same honesty probability. At this point it should be noted that, in order to take into account new SRPs that enter the system and/or potential changes in the SRPs' reliability related behavioural aspects, the SRR decision on the most appropriate SRP reverts to a random scheme after a pre-specified number of transactions has been completed in the system, until possibly outdated information is updated. In general, the proposed scheme outperforms random SRP selection by, on average, 30%.
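The learning-period procedure described above can be sketched as follows, assuming Bernoulli transaction outcomes drawn from the reliability probabilities of Table 3 and the rating update of equation (3); the values of $k_r$, the rewards and the neutral initial rating are illustrative choices of ours.

```python
# Sketch of the learning-period simulation: an SRRA visits its appropriate SRPs in
# round-robin order, draws a Bernoulli outcome from each SRP's reliability probability
# (Table 3) and updates its direct rating with equation (3). The reward values
# (1.0 if the contract is honoured, 0.1 otherwise), E[rr] and k_r are illustrative.

import random

RELIABILITY = {"P1": 0.8, "P2": 0.4, "P3": 0.6, "P4": 0.2, "P5": 0.9,
               "P6": 0.6, "P7": 0.7, "P8": 0.7, "P9": 0.4, "P10": 0.6}

def simulate_learning(appropriate_srps, transactions_per_srp=100, k_r=0.1):
    ratings = {srp: 0.5 for srp in appropriate_srps}   # neutral starting rating
    expected_reward = 0.55                             # E[rr], midpoint of [0.1, 1]
    for _ in range(transactions_per_srp):
        for srp in appropriate_srps:                   # round-robin selection
            honoured = random.random() < RELIABILITY[srp]
            reward = 1.0 if honoured else 0.1
            ratings[srp] += k_r * (1 - ratings[srp]) * (reward - expected_reward)
            ratings[srp] = min(1.0, max(0.0, ratings[srp]))
    return ratings

# SRR class R1 may use every SRP; the resulting ranking should resemble the first row of Table 2.
learned = simulate_learning(list(RELIABILITY))
print(sorted(learned, key=learned.get, reverse=True))
```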

7 Conclusions

Two main entity related categories may be identified in dynamic ubiquitous computing environments: the Service Resource Requestors (SRRs), wishing to use services and/or exploit resources offered by the other system entities, and the Service Resource Providers (SRPs), offering the services/resources requested. In this study, under the assumption that a number of SRPs may handle and serve the SRRs' requests, the SRRs are enabled to decide on the most appropriate SRP for the service / resource requested on the basis of a weighted combination of the evaluation of the quality of their offer (performance related factor) and of their reputation rating (reliability related factor). Specifically, a reputation mechanism is proposed which helps estimate the SRPs' trustworthiness and predict their future behaviour, taking into account their past performance in consistently satisfying SRRs' expectations by honouring or not the agreements they have established with the SRRs. The reputation mechanism is distributed, considers both first-hand information (acquired from the SRR's direct past experiences with the SRPs) and second-hand information (disseminated from other SRRs' past experiences with the SRPs), and exhibits robust behaviour against spurious reputation ratings. The reputation framework designed has been empirically evaluated and has performed well. Initial results indicate that the proposed SRP selection scheme (based only on the reliability related factor) outperforms random SRP selection by 30% on average, in case honest feedback provision by the vast majority of the witness population is considered. Future plans involve our framework's extensive empirical evaluation considering witnesses' intentional provisioning of false feedback, as well as its comparison against existing reputation models and trust frameworks.

References

1. Jennings, N., Faratin, P., Lomuscio, A., Parsons, S., Sierra, C., Wooldridge, M.: Automated Negotiation: Prospects, Methods, and Challenges. International Journal of Group Decision and Negotiation, vol. 10, no. 2, pp. 199--215 (2001)
2. Louta, M., Roussaki, I., Pechlivanos, L.: An Intelligent Agent Negotiation Strategy in the Electronic Marketplace Environment. European Journal of Operational Research, vol. 187, no. 3, pp. 1327--1345 (2006)
3. Louta, M., Roussaki, I., Pechlivanos, L.: Reputation Based Intelligent Agent Negotiation Frameworks in the E-Marketplace. In: International Conference on E-Business, pp. 5--12, Setubal, Portugal (2006)
4. Callas, J., Donnerhacke, L., Finney, H., Shaw, D., Thayer, R.: OpenPGP Message Format. RFC 4880, IETF (2007), http://www.ietf.org/rfc/rfc4880.txt
5. Cooper, D., Santesson, S., Farrell, S., Boeyen, S., Housley, R., Polk, W.: Internet X.509 Public Key Infrastructure Certificate and Certificate Revocation List (CRL) Profile. Internet Draft, IETF (2007), http://www.ietf.org/internet-drafts/draft-ietf-pkix-rfc3280bis-09.txt
6. Atif, Y.: Building Trust in E-Commerce. IEEE Internet Computing Magazine, vol. 6, no. 1, pp. 18--24 (2002)
7. Zacharia, G., Maes, P.: Trust Management through Reputation Mechanisms. Applied Artificial Intelligence Journal, vol. 14, no. 9, pp. 881--908 (2000)
8. He, M., Jennings, N., Leung, H.: On Agent-Mediated Electronic Commerce. IEEE Transactions on Knowledge and Data Engineering, vol. 15, no. 4, pp. 985--1003 (2003)
9. Li, H., Singhal, M.: Trust Management in Distributed Systems. IEEE Computer, vol. 40, no. 2, pp. 45--53 (2007)
10. Sabater, J., Sierra, C.: Review on Computational Trust and Reputation Models. Artificial Intelligence Review, vol. 24, no. 1, pp. 33--60 (2005)
11. eBay, http://www.ebay.com
12. OnSale, http://www.onsale.com/exchange.htm
13. Zacharia, G., Moukas, A., Maes, P.: Collaborative Reputation Mechanisms in Electronic Marketplaces. In: 32nd Hawaii International Conference on System Sciences, pp. 1--7, Los Alamitos, CA, USA (1999)
14. Xiong, L., Liu, L.: Reputation and Trust. In: Advances in Security and Payment Methods for Mobile Commerce, pp. 19--35. Idea Group Inc. (2005)
15. Yu, B., Singh, M.: A Social Mechanism of Reputation Management in Electronic Communities. In: Klusch, M., Kerschberg, L. (eds.) Cooperative Information Agents IV - The Future of Information Agents in Cyberspace, LNCS, vol. 1860, pp. 154--165. Springer, Heidelberg (2000)
16. Yu, B., Singh, M.: Detecting Deception in Reputation Management. In: 2nd International Joint Conference on Autonomous Agents and Multi-Agent Systems, pp. 73--80, Melbourne, Australia (2003)
17. Raiffa, H.: The Art and Science of Negotiation. Harvard University Press, Cambridge, USA (1982)
18. Roussaki, I., Louta, M., Pechlivanos, L.: An Efficient Negotiation Model for the Next Generation Electronic Marketplace. In: IEEE Mediterranean Electrotechnical Conference, pp. 615--618, Dubrovnik, Croatia (2004)
