Computer Networks 54 (2010) 2899–2912


TRIMS, a privacy-aware trust and reputation model for identity management systems

Félix Gómez Mármol a,*, Joao Girao b, Gregorio Martínez Pérez a

a Departamento de Ingeniería de la Información y las Comunicaciones, Universidad de Murcia, 30.100 Murcia, Spain
b NEC Europe Ltd., Kurfürsten-Anlage 36, 69115 Heidelberg, Germany

Article info

Article history: Available online 6 September 2010

Keywords: Identity management systems; Trust and reputation management; Dynamic federation; Privacy homomorphisms; Web services

Abstract

Electronic transactions are becoming more important every day. Several tasks, like buying goods, booking flights or hotel rooms, or paying to stream a movie, can be carried out through the Internet. Nevertheless, there are still some drawbacks due to security threats while performing such operations. Trust and reputation management arises as a novel way of solving some of those problems. In this paper we present our work TRIMS (a privacy-aware trust and reputation model for identity management systems), which applies a trust and reputation model to guarantee an acceptable level of security when deciding whether a different domain may be considered a reliable receiver of certain sensitive user attributes. Specifically, we address the problems which surface when a domain needs to decide whether to exchange some information with another, possibly unknown, domain in order to effectively provide a service to one of its users. This decision is determined by the trust placed in the target domain. As far as we know, our proposal is one of the first approaches dealing with trust and reputation management in a multi-domain scenario. Finally, the performed experiments have demonstrated the robustness and accuracy of our model in a wide variety of scenarios.

© 2010 Elsevier B.V. All rights reserved.

1. Introduction

The Internet and the World Wide Web are continuously changing our lives. Many daily tasks, like going shopping, watching the news or calling friends, can be performed by means of electronic transactions. Even more tedious ones, like booking a flight or a hotel room, or enrolling at university, among many others, have been simplified by using the Internet. However, as many more applications appear and become popular, many security risks also threaten their safe operation. Due to their impersonal nature, electronic transactions suffer from several security deficiencies that have not been accurately solved yet and are, therefore, slowing down the extensive use of these very useful technologies by society. Service providers from the same and even from different domains have to deal with these problems every day. As will be shown in Section 2, we face the problem that some entities (called Web Service Consumers, or WSCs) have when, in order to provide the requested service to a certain user, they need to previously exchange some information with other entities (called Web Service Providers, or WSPs). Currently many domains carry out these transactions in a secure way by means of an SLA (Service Level Agreement), as well as through the use of AAA (Authentication, Authorization and Accounting) frameworks. The problem is that a rigid contract such as an SLA is not always available, or even feasible, between every pair of domains. In our case, domains are delimited by the borders where data about a user cannot be shared amongst the services of one provider.

* Corresponding author. Tel.: +34 868 88 78 66; fax: +34 868 88 41 51. E-mail addresses: [email protected] (F.G. Mármol), [email protected] (J. Girao), [email protected] (G.M. Pérez). URLs: http://ants.dif.um.es/~felixgm (F.G. Mármol), http://www.girao.org/joao (J. Girao), http://webs.um.es/gregorio (G.M. Pérez). doi:10.1016/j.comnet.2010.07.020


In the last few years, a number of research works have addressed these drawbacks. Especially, trust and reputation management has become a novel and effective way to tackle some of these security threats. In fact, many models have been published that effectively deal with concepts like trust and reputation over a wide range of environments [1–4], from P2P networks to Wireless Sensor Networks (WSNs), ad hoc networks, or even multi-agent systems. In this paper we present our novel trust and reputation model (one of the first applied to identity management systems), oriented to provide a Web Service Provider with an accurate mechanism to decide whether or not a Web Service Consumer is reliable enough to receive certain user attributes.

The remainder of the paper is organized as follows. Section 2 presents our scenario definition. The main steps to be followed by every trust and reputation model, and how our needs fit in those steps, are discussed in Section 3. Section 4 presents our trust and reputation model proposal, TRIMS, Section 5 describes some implementation issues, and Section 6 explains how we keep certain sensitive information private. A set of experiments and results testing the accuracy and goodness of our model is presented in Section 7. Finally, some conclusions and future work are exposed in Section 9.

2. Scenario definition

The problem we want to solve is the following (see Fig. 1). We have several (mobile) users connected to different domains at a certain moment. Some of the entities belonging to those domains will act as WSPs (providing identity attributes such as age, e-mail, location, etc.) or WSCs (actually delivering the requested web service, such as a film, a book, etc.) for the users who are currently connected to them. However, in order to actually provide those services to the users, a WSC needs to retrieve some information from a WSP. This information exchange between two domains is usually done under a well-known and accepted SLA (Service Level Agreement). By means of an SLA, every domain can be sure that the information provided by the other domain is trustworthy, and that the receiving domain will use such information appropriately. However, it is not always possible to find an SLA between every pair of domains. Therefore, we need a mechanism allowing a domain to somehow determine whether another (maybe unknown) domain can be considered a reliable receiver of certain sensitive and private user data. The mechanism we apply is a trust and reputation model.

Additionally, the Identity Providers (IdPs) manage identity information on behalf of users and provide assertions of users' authentication to other providers. As we will explain in Section 4.1.1, in our approach they act as recommendation aggregators; that is, they collect the opinions of the users belonging to each IdP and return a single aggregated value.

To clarify the scenario, let a user in Fig. 1 (domain A) ask for a certain service from WSC (domain B), for instance a pay-per-view horror movie for people over 18. WSC then needs to retrieve some information from WSP1 (domain A), such as the user's age, e-mail address or credit card number, in order to deliver the requested service to that user. So, first of all, WSP1 checks whether it has had past experiences (transactions) with WSC, and whether they were satisfactory or not. Once this revision has been done, it asks other users (regardless of the domain they are connected to) who have had past experiences with WSC about its behavior. Finally, WSP1 also asks other WSPs who have had past interactions with WSC about their satisfaction with those interactions. As we can see, every user and every WSP has to record its satisfaction with

Fig. 1. Scenario definition.


every transaction (actually the last n ones) carried out with every WSC belonging to another domain. The WSC might behave maliciously, for instance by sending unsolicited e-mails or spam to the provided e-mail address. It could even perform unclear or hidden charges using the user's credit card number. So, as soon as WSP1 has collected all the WSC behavioral information, it assesses how trustworthy or reputable WSC is, according to some self-defined trust levels. That is, every domain can determine its own trust levels depending on its needs and on how confident or cautious it is. Thus, according to the trust level in which WSP1 has placed WSC, the information exchange will be totally or partially done, or not carried out at all. In our example, if the WSC does not get the user's credit card number, it could provide the trailer of the movie instead, or a limited version. On the other hand, if it does not obtain the user's age, it could provide a version of the movie where the violent scenes have been censored.

Supposing that WSP1 trusts WSC enough to let the information exchange happen, the service is then provided to the user who requested it, who will also tell WSP1 his/her satisfaction with that specific received service. This feedback can help WSP1 increase or decrease its trust in WSC, according to its self-defined trust levels, and also punish or reward the recommendations given in the previous step. Following our example, a user might be dissatisfied with the received film because its visual quality is lower than expected (bad resolution, out-of-sync audio and video, etc.), because the WSC is spamming him/her, and so on.

One may also consider other situations where data about the user need to be exchanged, such as social networking or collaborative document editing, where either privacy policies, customer satisfaction or the relationship between the service provider and the customer are at stake. We focus on the exchange of data amongst providers and on the satisfaction towards the service.

3. Trust and reputation model steps

According to previous works, such as [5,6], regarding the structure that a trust and reputation model for distributed systems should have, this section shows the main generic steps that, in our opinion, each of those models should follow in an environment like the one described in Section 2.

The first step consists of collecting as much information as possible about the behavior of the entity being evaluated (in our case, a WSC). This information can come from direct past experiences, acquaintances' experiences, pre-trusted entities, etc. In our scenario, the sources of information of a domain are: its direct past experiences, and the past experiences of other users and other domains with the target domain. However, not every source of information should have the same weight or the same reliability. And, within a certain source of information, not everybody should have the same reliability either.

In the second step, a trust and reputation model should aggregate all the information obtained in the previous step


in order to get a rating or scoring value for that entity. In the next section we will explain in detail how this aggregation is done in our model. Once that value has been computed, the evaluated domain is placed in one of the trust levels of the evaluator domain. These levels, self-defined by every domain, can vary from no trust at all to absolute trust.

In the next step, according to the trust level in which the evaluated domain has been placed, the information exchange is totally or partially carried out, or not done at all. Once the evaluated domain has received the information from the evaluating domain (supposing the former was considered trustworthy enough), the service can actually be provided to the user who requested it.

After receiving the service, the user sends back his/her satisfaction with that service to the domain which exchanged the information necessary to deliver it (the WSP in our case). In the last step, this domain uses that feedback to modify (increase or decrease) the trust score given to the domain which provided the service. The weights of the recommendation sources (i.e., users and WSPs) are also adjusted.

Finally, in this work we consider trust and reputation to be different but closely related concepts; nevertheless, for our purposes we will treat them here as synonyms.

4. Trust and reputation model proposal

In this section we present our approach, called TRIMS (trust and reputation model for identity management systems), aimed at solving the security problem between interacting domains in electronic transactions presented in the previous sections.

4.1. Trust and reputation model design

4.1.1. Gathering information

As already mentioned, we have three sources of information: the WSP itself which is evaluating the WSC, the users who have had past experiences with that WSC, and other WSPs which have also had previous transactions with it. So when WSP1 is computing its trust in WSC, it checks whether it has had any interaction with WSC in the past. If so, the last computed global trust value for WSC is now taken as the direct trust, T_D. On the other hand, we have the trust that other users place in WSC, called T_U, and the trust of other WSPs in that certain WSC, called T_WSP. The problem here is how WSP1 can find out which users and which other WSPs have had any interaction with WSC. We have analyzed three different approaches, with their pros and cons, as shown in Fig. 2.

Fig. 2(a): In this first approach, WSP1 asks all the users (through their corresponding IdPs) whether they have had any transaction with WSC, every time it needs to compute its trust in that specific WSC. If they have, WSP1 then requests their recommendations about WSC.

Pros: WSP1 can weigh each user's recommendation individually.


Fig. 2. Gathering information (I).

Cons: It requires a lot of messages, and most of them may be unnecessary, since probably not all of the users will have had an interaction with the specified WSC.

Fig. 2(b): The second alternative consists of WSP1 asking only those users who have actually had a transaction with WSC.

Pros: WSP1 can weigh each user's recommendation individually, using fewer messages.

Cons: WSP1 needs to know all the users who have had an interaction with WSC (storing a list of them, for instance).

Fig. 2(c): The last proposed way of gathering recommendations from the users about WSC is to ask only the IdPs that WSP1 knows. Each IdP gives back an aggregated recommendation of all of its users who have had a transaction with WSC in the past.

Pros: WSP1 does not need to know who has had an interaction with WSC. Additionally, this is the alternative using the fewest messages.

Cons: Users' opinions cannot be weighed individually.

The same three options can be applied to the retrieval of other WSPs' opinions about WSC. There is also a fourth approach, based on the three previously presented ones, which is depicted in Fig. 3 and is the one used in our model TRIMS. In this alternative, every IdP stores the weight given by WSP1 to each of its users, encrypted with WSP1's public key [7], so the IdP cannot discover the actual weight. In the very first step, WSP1 sends its public key to the IdP (either within its digital certificate or by other means), together with the default initial weight for all its users. Then the IdP computes the weighted aggregation of all its users' recommendations and gives it back to WSP1, encrypted with WSP1's public key. WSP1 then decrypts that aggregation with its private key to obtain the weighted recommendation of all the users belonging to that IdP. In order to accomplish this, we need a privacy homomorphism E (as shown, for instance, in [8]) fulfilling:

∑_{i=1}^{n} E(ω_{u_i}) · Rec_{u_i} = E( ∑_{i=1}^{n} ω_{u_i} · Rec_{u_i} ).    (1)
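The aggregation property in Eq. (1) can be illustrated with a small, self-contained sketch. For brevity it uses the Paillier cryptosystem, a different additively homomorphic scheme, instead of the EC-ElGamal adaptation the paper describes in Section 6; the primes, weights and recommendation values below are invented, and the fixed-point scaling mimics the precision mechanism of Section 6.2.1.

```python
import random
from math import gcd

# Toy Paillier cryptosystem (additively homomorphic); demo-sized primes only.
p, q = 1117, 1129
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
g = n + 1
mu = pow(lam, -1, n)                            # modular inverse of lambda

def enc(m):
    """Encrypt integer m: c = g^m * r^n mod n^2."""
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    """Decrypt: m = L(c^lambda mod n^2) * mu mod n, with L(x) = (x-1)/n."""
    return ((pow(c, lam, n2) - 1) // n * mu) % n

def c_add(c1, c2):          # E(a) * E(b) = E(a + b)
    return (c1 * c2) % n2

def c_scalar(c, k):         # E(a)^k = E(k * a)
    return pow(c, k, n2)

# Eq. (1): the IdP holds encrypted weights E(w_i) and plaintext
# recommendations Rec_i, and aggregates without ever learning the weights.
# Fixed-point scaling by 10^2 on each factor (Section 6.2.1 idea).
weights = [50, 30, 20]          # 0.5, 0.3, 0.2 scaled by 100 (done by WSP1)
recs    = [80, 60, 100]         # 0.8, 0.6, 1.0 scaled by 100 (known to IdP)
enc_weights = [enc(w) for w in weights]

agg = enc(0)
for ew, rec in zip(enc_weights, recs):          # performed by the IdP
    agg = c_add(agg, c_scalar(ew, rec))

t_u = dec(agg) / 10**4                          # WSP1 decrypts and rescales
print(t_u)                                      # 0.78
```

The IdP only ever manipulates ciphertexts of the weights, yet WSP1 recovers the weighted sum Σ ω_i·Rec_i exactly, which is the whole point of Eq. (1).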

Pros: WSP1 can weigh users' recommendations individually, using fewer messages, and the actual recommendation value of each user is only known by its corresponding IdP.

Cons: A privacy homomorphism satisfying Eq. (1) is needed.

The importance of applying these homomorphic schemes resides in preserving the confidentiality of the users' recommendations. The WSP does not need to know each recommendation individually, but it does need to weigh them in this way. The homomorphic approach, which will be described in detail in Section 6, allows us to perform such an operation. Additionally, in our proposed distributed architecture, each IdP only has to collect the recommendations of a subset of users and aggregate their opinions. However, in case scalability becomes a serious problem, some well-known caching mechanisms [9] could easily be applied in our proposal.

4.1.2. Scoring and ranking

Once all the relevant information related to the targeted WSC has been collected, a global trust value has to be assigned to it by aggregating that information. Thus, the users' trust in WSC, T_U ∈ [0, 1], is computed as a weighted sum of the recommendations of each user u_i, Rec_{u_i} ∈ [0, 1], in the following way:

T_U = ∑_{i=1}^{n} ω_{u_i} · Rec_{u_i},    (2)

where ω_{u_i} ∈ [0, 1] is the weight given by WSP1 to user u_i. Similarly, in order to obtain the trust that other WSPs place in the WSC, the following equation is provided:

T_WSP = ∑_{i=1}^{m} ω_{WSP_i} · Rec_{WSP_i}.    (3)

Finally, the global trust of WSC can be computed as follows:


Fig. 3. Gathering information (II).

GT = (ω_D · T_D) + (ω_U · T_U) + (ω_WSP · T_WSP),    (4)

where ω_D, ω_U, ω_WSP ∈ [0, 1] are the weights given to each source of information (direct experiences, other users and other WSPs, respectively). If WSP1 has no past experience with WSC, then T_D takes an initial value of 0.5; otherwise it takes the last computed global trust, that is, T_D = GT. So, the first time a transaction is to be carried out between WSP1 and WSC, Eq. (4) is applied. However, for transaction t, that expression becomes:

GT^(t) = ω_D^t · T_D + (ω_U · T_U + ω_WSP · T_WSP) · ∑_{i=0}^{t−1} ω_D^i,

which is equal to

GT^(t) = ω_D^t · T_D + (ω_U · T_U + ω_WSP · T_WSP) · (1 − ω_D^t) / (1 − ω_D).    (5)

It is also important to notice that

∑_{i=1}^{n} ω_{u_i} = ∑_{i=1}^{m} ω_{WSP_i} = ω_D + ω_U + ω_WSP = 1.    (6)
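As a small illustration of the scoring step, the sketch below applies Eq. (4) repeatedly with T_D = GT after each transaction, which is exactly the recurrence that Eq. (5) unrolls. The default weights follow the initial values suggested in Section 5.1; the trust values fed in are invented examples.

```python
def aggregate_trust(t_d, t_u, t_wsp, w_d=0.5, w_u=0.25, w_wsp=0.25):
    """Eq. (4): weighted combination of the three information sources."""
    assert abs(w_d + w_u + w_wsp - 1.0) < 1e-9   # Eq. (6) constraint
    return w_d * t_d + w_u * t_u + w_wsp * t_wsp

def global_trust_after(transactions, w_d=0.5, w_u=0.25, w_wsp=0.25, t_d0=0.5):
    """Eq. (5) as a recurrence: after every transaction the direct trust
    T_D becomes the last computed global trust GT."""
    gt = t_d0
    for t_u, t_wsp in transactions:   # one (T_U, T_WSP) pair per transaction
        gt = aggregate_trust(gt, t_u, t_wsp, w_d, w_u, w_wsp)
    return gt

# Two transactions with invented user/WSP trust values.
gt = global_trust_after([(0.8, 0.6), (0.9, 0.7)])
```

With constant T_U and T_WSP this recurrence reproduces the closed form of Eq. (5), since each older direct experience is discounted by another factor of ω_D.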

Therefore if, for instance, ω_D = 1 then GT^(t) = T_D (i.e., only the direct trust is taken into consideration), and if ω_D = 0 then GT^(t) = ω_U · T_U + ω_WSP · T_WSP (which means that only the users' and WSPs' trust is accepted).

4.1.3. Transaction

After computing the global trust value of WSC, WSP1 has to decide whether to carry out the whole transaction with WSC, to carry it out partially, or not to interact with WSC at all. This decision depends on the trust level in which the WSC is placed. Every WSP can subjectively define its own trust levels. In our approach we propose the use of fuzzy sets to model those trust levels (see Fig. 4).

Each trust level has an associated amount and/or type of information that can be exchanged if the communicating party is placed in that level. For instance, in the example shown in Fig. 4 we have four trust levels. If a WSC is placed in the level labelled "Trust", then the whole transaction can be performed, including, for example, payment data. If the trust level is "+/− Trust", then only some non-critical information, like the e-mail address, can be exchanged. Only a small amount of non-critical information (sex and age, for instance) may be exchanged if a WSC is placed in level "+/− Not Trust". And if the level is "Not Trust", then no transaction is carried out. In order to find out the trust level of a WSC from its global trust value, we need to know the values returned by the membership functions of every fuzzy set containing GT as an element. In the example shown in Fig. 4, the membership function of fuzzy set "Not Trust" returns the value e1 for GT, while the membership function of fuzzy set "+/− Not Trust" returns e2 for the same crisp value. Once we have all those values, we can compute the probability of WSC being placed in one trust level or another through the following expressions:

P("Not Trust") = e_1 / (e_1 + e_2),
P("+/− Not Trust") = e_2 / (e_1 + e_2).

In a generic way, the probability of WSC being placed in trust level TL_j, where e_i, i = 1, …, n, are the values returned by each membership function (and n is the number of trust levels), can be obtained as follows:

P(TL_j) = e_j / ∑_{i=1}^{n} e_i.    (7)

If e_j(GT) = 0, ∀j, then the fuzzy set TL_k with e_k(v) ≠ 0 for the closest value v < GT is selected.
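A sketch of this fuzzy placement follows. The four level names match Fig. 4, but the triangular membership functions and their breakpoints are invented for illustration (the paper lets each WSP define its own sets); only Eq. (7) itself is taken from the text.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Illustrative trust levels over the global trust GT in [0, 1].
LEVELS = {
    "Not Trust":     lambda gt: tri(gt, -0.25, 0.0, 0.35),
    "+/- Not Trust": lambda gt: tri(gt, 0.10, 0.35, 0.60),
    "+/- Trust":     lambda gt: tri(gt, 0.40, 0.65, 0.90),
    "Trust":         lambda gt: tri(gt, 0.65, 1.00, 1.25),
}

def level_probabilities(gt):
    """Eq. (7): P(TL_j) = e_j / sum_i e_i over the membership values e_i."""
    e = {name: f(gt) for name, f in LEVELS.items()}
    total = sum(e.values())
    if total == 0:
        return None   # degenerate case handled by the closest-set rule above
    return {name: v / total for name, v in e.items()}

probs = level_probabilities(0.5)   # GT between two overlapping sets
```

A GT of 0.5 falls inside both middle sets, so the probability mass is split between "+/- Not Trust" and "+/- Trust" in proportion to their membership values.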


Fig. 4. Fuzzy trust levels.

4.1.4. Reward and punishment

According to Fig. 3, once the transaction has been carried out between the WSC and the WSP, a reward or a punishment is distributed to users and WSPs according to the accuracy and reliability of their recommendations. Let us focus on the punishment and reward of users (it is analogous for WSPs). According to Fig. 3, WSP1 sends to the IdP the satisfaction of the user who asked for the service, together with a certain threshold δ ∈ [0, 1] (both unencrypted), used to determine whether to punish or reward the users. A simple mechanism to measure the divergence between the final satisfaction of the user, Sat, and the recommendation previously given by every user, Rec_i, is to calculate the value |Sat − Rec_i|. Therefore, as can be observed in Fig. 5, if |Sat − Rec_i| < δ then a reward is given to user u_i. Otherwise, if |Sat − Rec_i| ≥ δ, then user u_i is punished. Both punishment and reward are proportional to the distance between the recommendation given by the user and the satisfaction perceived by the customer. That is, the closer those values are, the greater the reward, and the farther apart they are, the greater the punishment. Hence, the IdP computes a value α_i for each user u_i and sends it, together with the weights ω_i, to the WSP, encrypted with the public key of the latter. These values α_i are computed as follows: if |Sat − Rec_i| < δ, i.e., if a reward is to be given to user u_i, increasing its weight ω_i, then α_i = |Sat − Rec_i|. Otherwise, if |Sat − Rec_i| ≥ δ, i.e., if user u_i must be punished, decreasing its weight ω_i, then α_i = 1/|Sat − Rec_i|.

Fig. 5. Punish and reward.

Once the WSP receives both the weights ω⃗ and the punishment or reward values α⃗, it decrypts them and updates the former as follows:

ω_i = ω_i^{|Sat − Rec_i|}      if |Sat − Rec_i| < δ,
ω_i = ω_i^{1 / |Sat − Rec_i|}  if |Sat − Rec_i| ≥ δ.    (8)

Then it also needs to normalize the weights again as shown next, encrypt them and send them back to the IdP:

ω_i = ω_i / ∑_{j=1}^{n} ω_j.
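The punishment/reward update of Eq. (8), followed by the renormalization above, can be sketched as follows; the weights, recommendations and satisfaction value are invented examples.

```python
def update_user_weights(weights, recs, sat, delta=0.5):
    """Eq. (8): w_i <- w_i ** alpha_i, then renormalise to sum to 1.

    Since w_i is in [0, 1], an exponent alpha_i < 1 (reward) pushes w_i
    towards 1, while alpha_i > 1 (punishment) shrinks it; in both cases
    the effect is proportional to the divergence |Sat - Rec_i|.
    """
    updated = []
    for w, rec in zip(weights, recs):
        div = abs(sat - rec)
        alpha = div if div < delta else 1.0 / div
        updated.append(w ** alpha)
    total = sum(updated)
    return [w / total for w in updated]

# The user whose recommendation matched the final satisfaction gains weight
# relative to the one whose recommendation was far off.
new_w = update_user_weights([0.5, 0.5], recs=[0.9, 0.1], sat=0.9)
```

Note the boundary behavior: a perfect recommendation (divergence 0) yields exponent 0, i.e. the maximum possible reward before renormalization.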

It is important to notice that a value δ → 1 means a low reward and a low punishment, while δ → 0 means a high reward and a high punishment.

4.2. TRIMS steps

The steps followed by our model TRIMS, as depicted in Figs. 6 and 7, are:

1. A user requests a service from a certain WSC.
2. The WSC, through the user's IdP, selects a WSP which has the required information about the user.
3. The WSP requests from all its known IdPs their aggregated recommendations about the selected WSC, given by the users and WSPs registered with them.
4. Each IdP checks the recommendations of its users and WSPs about the queried WSC and computes the aggregated recommendation according to formula (1).
5. Every IdP returns the aggregated recommendation to the WSP.
6. The WSP assesses its global trust in WSC using expression (4) and fits it into one of its trust levels.
7. The WSP then provides more or fewer of the user's attributes, depending on which trust level the WSC was placed in during the previous step (see Section 4.1.3).


Fig. 6. TRIMS Steps.

Fig. 7. TRIMS Steps (Sequence diagram).

8. The WSC provides the requested service, or a worse or even a better one, depending on its current goodness.
9. The user gives feedback to the WSP containing his/her satisfaction with the actually received service.


10. The WSP sends the satisfaction of the user, together with the threshold δ, to every known IdP.
11. Each IdP calculates ρ_U and ρ_WSP according to Eq. (9), and α⃗ according to formula (8), and sends them back to the WSP, together with the set of weights ω⃗.
12. The WSP performs the corresponding punishment or reward, according to expression (8) for the weights ω⃗, and (10) for the weights ω_D, ω_U and ω_WSP.
13. The WSP normalizes the ω⃗ weights and sends them back to each IdP, respectively.

Moreover, in Fig. 7 it can be observed that our model fits well into the generic steps of any trust and reputation model as described in Section 3. Finally, it is worth mentioning that the way the user's satisfaction is provided (in step 9) will depend on the specific application where TRIMS is deployed. Once a transaction has finished, the user can be asked to provide his/her satisfaction either as a number within the interval [0, 1] or [0, 100], or he/she could even be asked to select among a set of labels like "Fully satisfied", "Satisfied", "More or less satisfied" and "Not satisfied", for instance. The translation between the value actually provided by the user and the one used by our approach (Rec_{u_i} ∈ [0, 1]) will also depend on the specific application where our mechanism is used.

5. Trust and reputation model implementation

In this section we describe some implementation issues related to our proposed model TRIMS.

5.1. Weights initialization and updating

Regarding the initial value of the weights given to each source of information, we consider that a good set of values could be the following:

ω_D = 0.5,    ω_U = ω_WSP = 0.25.

Moreover, the evolution of these weights over time will depend on the accuracy and reliability of each source. Thus, if users (equally, WSPs) are constantly being punished, we should decrease the influence of their opinions in the assessment of the global trust. On the other hand, if the entities of an information source are constantly being rewarded, a greater weight should be given to that source when computing the global trust value. So let ρ_U be the average of the rewards and punishments received by all the queried users, computed as follows:

ρ_U = (1/n) · ∑_{i=1}^{n} μ_i,  where  μ_i = 1 − |Sat − Rec_i|  if |Sat − Rec_i| < δ,
                                       μ_i = −|Sat − Rec_i|     if |Sat − Rec_i| ≥ δ.    (9)

ρ_WSP would be obtained in a similar way. It is important to note that both ρ_U, ρ_WSP ∈ [−1, 1]. A value ρ_U = −1 means that absolutely all the users have received the maximum punishment (that is, |Sat − Rec_i| = 1 ≥ δ, ∀i), so their weight should be decreased to 0. Alternatively, a value of ρ_U = 1 implies that all the users have received the maximum reward possible (that is, |Sat − Rec_i| = 0 < δ, ∀i), so their weight should take the maximum value. Finally, if ρ_U = 0, on average half the users have given a bad recommendation (and have therefore been punished) and the other half have received a good reward due to their accurate recommendations. In this case, the weight given to users in Eq. (4), ω_U, should remain invariable. In order to achieve these conditions, both weights ω_U and ω_WSP are redefined after the last step of punishing and rewarding has been completed, according to the following formulae, as can be observed in Fig. 8:

ω_U = ω_U^(2/(1+ρ_U) − 1),    ω_WSP = ω_WSP^(2/(1+ρ_WSP) − 1).    (10)

Fig. 8. Main information sources' weights updating.
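The source-weight update can be sketched as below. The exponent 2/(1+ρ) − 1 in Eq. (10) is our reconstruction from the stated boundary conditions (ρ = −1 ⇒ ω → 0, ρ = 0 ⇒ ω unchanged, ρ = 1 ⇒ ω maximal), and the example values are invented.

```python
def rho(recs, sat, delta=0.5):
    """Eq. (9): average reward/punishment over one source's recommendations."""
    mus = []
    for rec in recs:
        div = abs(sat - rec)
        mus.append(1.0 - div if div < delta else -div)
    return sum(mus) / len(mus)

def update_source_weight(w, r):
    """Eq. (10): w <- w ** (2/(1+rho) - 1), clamping the rho = -1 extreme.

    The exponent is 1 at rho = 0 (weight unchanged), 0 at rho = 1
    (weight raised to its maximum, 1), and grows without bound as
    rho -> -1 (weight driven to 0).
    """
    if r <= -1.0:
        return 0.0          # maximum punishment: the source is ignored
    return w ** (2.0 / (1.0 + r) - 1.0)

# Users whose recommendations closely matched the final satisfaction
# raise the influence of the whole "users" source.
w_u = update_source_weight(0.25, rho([0.9, 0.85], sat=0.9))
```

After this update, the three source weights would still be renormalized to sum to 1, as required by Eq. (6).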


Finally, in order to preserve the relation shown in Eq. (6), the weights need to be normalized, that is:

ω_D = ω_D / (ω_D + ω_U + ω_WSP),  ω_U = ω_U / (ω_D + ω_U + ω_WSP),  ω_WSP = ω_WSP / (ω_D + ω_U + ω_WSP).

5.2. The δ parameter


It is also important to note the relevance of the δ parameter in the model, since it controls the punishment and reward mechanism. As we have seen, a low value of δ implies that few users will be rewarded and many punished, but those who are rewarded will be highly rewarded. On the other hand, a high value of δ means that many users will be rewarded, and the few who are punished will be only slightly punished. Thus, we can draw the following implications:

δ → 0 ⇒ ρ → −1 ⇒ ω → 0,
δ → 1 ⇒ ρ → 1 ⇒ ω → 1.

That is, the lower δ is, the more strict and severe the punishment; and the greater δ is, the more generous the reward scheme. So, in our opinion, a good initial value would be δ = 0.5, since it balances appropriately between punishment and reward. However, it can also be updated dynamically over time in order to avoid oscillating WSC behaviors, and each WSP would be responsible for managing its own δ parameter value individually.

6. Homomorphic encryption

The use of privacy homomorphisms or, in particular, homomorphic encryption, enables arithmetic operations over ciphertexts. There are many common uses for these schemes, such as outsourcing database operations [10], e-Voting schemes [11] and even social networking [12]. Privacy homomorphisms can be symmetric or asymmetric. Domingo-Ferrer's symmetric cryptosystem [13] is an example of a scheme which is both additively and multiplicatively homomorphic. While this scheme is much more efficient than the one we have adopted for this work, it is weaker in its security and key distribution properties. For TRIMS we have chosen an asymmetric scheme, which we describe next.

6.1. EC ElGamal scheme

The ElGamal encryption scheme [14] is a well-known multiplicative privacy homomorphism. As part of this research work we have made an adaptation of this encryption scheme to an elliptic curve, mapping the homomorphism into an additive group. As in most elliptic-curve-based schemes, the security of the scheme depends on the choice of the elliptic curve E, the prime p and the choice of the generator G. Curve E should be chosen such that the Elliptic Curve Discrete Logarithm Problem (ECDLP) is hard.

ElGamal encryption scheme on elliptic curves (EC-EG) [15]:

Public key: E, p, G, Y = xG, where G, Y ∈ E(F_p)
Private key: x ∈ F_p
Encryption: plaintext M = map(m), k ∈_R F_p; ciphertext C = (R, S), where R = kG, S = M + kY
Decryption: M = S − xR = M + kxG − xkG, m = rmap(M)

EC-EG offers the properties we require with a low bandwidth cost, flexibility and efficient operations. If we choose p to be a 163-bit number, then the ciphertext takes up 2(163 + 1) bits. The scheme is widely used in a number of applications, some of which were already cited in this article, which attests to its versatility and flexibility. Finally, the costs of addition and of multiplication by a scalar are a point addition and two point multiplications by small numbers, respectively. A point addition is considered a cheap operation in EC, and the complexity of the point multiplication depends on the size of the operands. These reasons make this scheme a perfect candidate for our scenario.

EC-EG homomorphic properties:

Addition: (R1, S1) + (R2, S2) = (k1·G + k2·G, M1 + M2 + k1·Y + k2·Y) = ((k1 + k2)·G, (M1 + M2) + (k1 + k2)·Y)
Scalar multiplication: a·(R, S) = (a·k·G, a·(M + kY))

map(x) function properties:

Addition: map(m1) + map(m2) = m1·G + m2·G = (m1 + m2)·G = map(m1 + m2)
Scalar multiplication: a·map(m) = a·m·G = map(a·m)
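The scheme can be sketched end-to-end on a toy curve. Everything below (the curve y² = x³ + x + 1 over F₂₃₃, the programmatic base-point search, the brute-force rmap bound) is an illustrative assumption with no security whatsoever; it only demonstrates the additive homomorphism and the map/rmap mechanism.

```python
import random

# Toy additive EC-ElGamal: curve y^2 = x^3 + a*x + b over F_p (demo-sized).
p, a, b = 233, 1, 1
INF = None                      # point at infinity

def on_curve(P):
    if P is INF:
        return True
    x, y = P
    return (y * y - (x ** 3 + a * x + b)) % p == 0

def neg(P):
    return INF if P is INF else (P[0], (-P[1]) % p)

def add(P, Q):
    """Standard affine point addition/doubling."""
    if P is INF: return Q
    if Q is INF: return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return INF
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def smul(k, P):                 # double-and-add scalar multiplication
    R = INF
    while k:
        if k & 1:
            R = add(R, P)
        P = add(P, P)
        k >>= 1
    return R

def order(P):
    n, Q = 1, P
    while Q is not INF:
        Q = add(Q, P)
        n += 1
    return n

# Base point: pick the curve point of maximum order, so small integers
# map to distinct points and rmap() below is unambiguous.
G = max(((x, y) for x in range(p) for y in range(p) if on_curve((x, y))),
        key=order)

x_priv = random.randrange(2, 50)        # private key
Y = smul(x_priv, G)                     # public key

def emap(m):                            # map(m) = mG, as in [15]
    return smul(m, G)

def rmap(M, limit=50):                  # brute-force ECDLP on small values
    Q = INF
    for m in range(limit):
        if Q == M:
            return m
        Q = add(Q, G)
    raise ValueError("value out of brute-force range")

def enc(m):
    k = random.randrange(1, 50)
    return (smul(k, G), add(emap(m), smul(k, Y)))   # (R, S) = (kG, M + kY)

def dec(C):
    R, S = C
    return rmap(add(S, neg(smul(x_priv, R))))       # m = rmap(S - xR)

def c_add(C1, C2):                      # (R1+R2, S1+S2) encrypts m1+m2
    return (add(C1[0], C2[0]), add(C1[1], C2[1]))

def c_scalar(alpha, C):                 # (aR, aS) encrypts a*m
    return (smul(alpha, C[0]), smul(alpha, C[1]))
```

Component-wise point addition of two ciphertexts yields an encryption of the sum of the plaintexts, which is exactly the property Eq. (1) relies on.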

6.1.1. Addition and scalar multiplication
Since EC-EG operates on points of the EC, a function map(x) and its inverse rmap(x) are required. These functions should map an integer to a point on the curve, and vice versa, such that the privacy homomorphism properties are preserved throughout all operations on encrypted text. Although there are standard mechanisms to translate an integer into a curve point, due to the restriction explained above we opted for the mechanism introduced in [15], which defines map(x) = xG and rmap(x) as a brute-force solution of the ECDLP. Since we intend to work with relatively small numbers (less than 32 bits), this restriction is acceptable in our scenario.

F.G. Mármol et al. / Computer Networks 54 (2010) 2899–2912

6.2. Limitations and solutions
In TRIMS, this scheme presents two challenges: support for real numbers (i.e., x ∈ R) and the homomorphic exponentiation necessary for the reward and punishment phase. In this section we show how both issues can be addressed.

6.2.1. Real numbers
Since our chosen privacy homomorphism only supports integer numbers, we have defined a precision P which determines the number of decimals with which reputation values are calculated. Every addition operation is carried out at precision P by multiplying both operands by 10^P. For example, if P = 3, then every operand is scaled by 10^3 = 1000. Scalar multiplications follow a similar rule, except that P must be split between the two operands: if one operand carries a single decimal digit, the other should carry P − 1. Once the final aggregated result is received, it is simply divided by 10^P after decryption to recover the final number.

6.2.2. Reward and punishment
When we apply this privacy homomorphism, we are restricted to the operations it supports. Unfortunately, this scheme cannot support the operations required to perform the punishment and reward phase. As presented earlier, the WSC receives the vector of retrieved reputation values and weights and updates them accordingly. Since this operation is performed by the WSC, which can decrypt the values before operating on them, it can also normalize them and trim them to the required precision P before sending them back to the IdPs. Should a privacy homomorphism supporting exponentiation of encrypted values be found in the future, this update function could be performed without the interaction of the WSC.

7. Experiments and results
In this section we describe and analyze the experiments carried out over TRIMS in order to demonstrate its accuracy, robustness, scalability and correctness. The first three experiments have been performed on a scenario composed of 5000 users, 600 IdPs, 300 WSPs and 1 WSC offering 1 service.
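The fixed-point encoding of Section 6.2.1 can be sketched with plain integers, independently of the encryption itself. P = 3 and the sample values below are illustrative choices, not values from the paper.

```python
# Fixed-point encoding at precision P, as described in Section 6.2.1:
# reals become integers scaled by 10^P, and the scale factor is undone
# once after the final (decrypted) result is obtained.
P = 3                 # precision: number of decimal digits carried
SCALE = 10 ** P

def encode(value):    # real -> integer at precision P
    return round(value * SCALE)

def decode(number):   # integer at precision P -> real
    return number / SCALE

# Addition: both operands are offset by 10^P, so the sum stays at
# precision P and a single division recovers the real result.
a, b = encode(0.75), encode(0.128)
assert decode(a + b) == 0.878

# Scalar multiplication: P is split between the two operands, e.g. one
# decimal digit for the weight and P - 1 digits for the reputation value,
# so the product again carries exactly P digits.
w = round(0.6 * 10)              # weight with 1 decimal digit
r = round(0.82 * 10 ** (P - 1))  # reputation with P - 1 = 2 decimal digits
assert decode(w * r) == 0.492    # 0.6 * 0.82, recovered by one division
```

In the encrypted setting, `encode` is applied before `map`, and `decode` only after decryption, so all homomorphic operations stay in the integers.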
We will present the average of the global trust values given by every WSP in the system to the only WSC present in it, and compare that mean with the actual behavior or goodness of that WSC. Please note that, while the definition of IdP in this section does not conflict with that of the rest of the paper, it may cause some confusion, as it does not follow the traditional definition of an external entity which provides identity data. In these experiments, the role of IdP might be taken by a service provider which has registered users. As in the rest of the paper, IdPs aggregate the results of multiple users or WSPs.

7.1. Experiment 1
In our first experiment we evaluate the time (number of transactions) needed by the model to adjust the global trust to the real deserved trust, according to the goodness of the WSC, when the latter does not change its behavior over time. Thus, with a totally benevolent WSC, the model should give it a high value as soon as possible. Equally, a malicious WSC should quickly receive a low global trust value. Fig. 9 shows the outcomes produced by this experiment. It can be observed that TRIMS does not need more than a few transactions to identify a benevolent WSC, and even fewer for a malicious one. Even more interesting is the performance of TRIMS when dealing with a WSC which is neither good nor bad, i.e., whose goodness takes the value 0.5 in our model. In such a case, TRIMS slightly underrates the WSC, assigning it a lower trust value than its actual goodness. However, this is not a failure of the model, but quite the contrary: as we will see in experiment 3, TRIMS underrates every WSC whose goodness is less than 0.5, punishing them and thus pushing them to become benevolent or to improve the quality of the services they provide.

Fig. 9. Outcomes for experiment 1.

7.2. Experiment 2
The second experiment evaluates the speed of the model when adapting the global trust of a WSC with oscillating behavior. A real WSC will probably not maintain the same goodness forever, but will rather try to cheat in order to obtain a higher profit for itself. We have therefore tested TRIMS against two different scenarios. In the first one, the WSC is initially fully benevolent and maintains that behavior for 20 transactions. At the 21st transaction it swaps and becomes fully malicious, remains in that state for another 20 interactions, and repeats this oscillating behavior indefinitely. Fig. 10 depicts the average global trust value given by all the WSPs in the system to the WSC and shows how it fluctuates as the real behavior of the WSC changes. As can be observed, the model needs slightly fewer than 10 transactions to punish the WSC to the maximum level, and slightly more than 10 to fully recover from that point. It is important to notice that this experiment represents the greatest behavioral change a WSC could make in this context, going from one extreme to the opposite.

Fig. 10. Outcomes for experiment 2 (a).

The second scenario in which we tested TRIMS consists of a WSC changing its behavior randomly every 20 transactions, i.e., without following any pattern as in the previous scenario. The outcomes for this particular experiment can be observed in Fig. 11. Once more, we can check how our model adapts the global trust given to the WSC according to its actual and current goodness. However, we can also appreciate here the slight underrating by TRIMS when the goodness of the WSC is less than or equal to 0.5; moreover, if that goodness is greater than 0.5, TRIMS slightly overrates the WSC, thus rewarding its good behavior. This second experiment therefore demonstrates that our approach reacts quickly and appropriately to brusque fluctuations in the behavior of the WSCs.

Fig. 11. Outcomes for experiment 2 (b).

7.3. Experiment 3
In the third experiment carried out to test TRIMS, the WSC begins with a goodness of 0.0 and constantly increases it until reaching its maximum value; it then constantly decreases it until being fully malicious again, and continues this behavior indefinitely. Note that this is not the same test as the one performed in experiment 2 (a): there, the goodness changed from one extreme to the opposite immediately, while here the change is progressive and linear. Fig. 12 shows the outcomes obtained by TRIMS in this experiment. As can be observed, our model again follows and adapts itself to the real behavior of the WSC. Additionally, this experiment allowed us to determine the root mean square error as follows:

error = sqrt( Σ_{i=1}^{Num_transactions} (RB_i − GT_i)² / Num_transactions )

where RB_i is the real behavior of the WSC (i.e., its goodness) at the ith transaction and GT_i is the global trust given by TRIMS to that WSC at the ith transaction. According to this formula, TRIMS obtained an error of 0.0815, i.e., an 8.15% error. Additionally, it can be checked that a malicious WSC which is improving its behavior is underrated until its goodness reaches roughly 0.7, which acts as an incentive to become as benevolent as possible. On the other hand, a good WSC worsening its behavior is overrated only until its goodness reaches the threshold of approximately 0.5; from that value downwards, it is underrated.

Fig. 12. Outcomes for experiment 3.

7.4. Experiment 4
In this experiment we check the performance of TRIMS when the scenario is composed of only 2 users, 1 WSP, 1 IdP and 1 WSC offering 1 service; that is, the minimum number of elements needed to apply our trust and reputation model. As can be observed in Fig. 13, if we carry out the same three previous experiments over this small scenario, we keep obtaining good outcomes, although the accuracy of the model decreases slightly. This worsening is due to the lack of information in the system, since only two users have transactions with the WSC and, therefore, each one only gives recommendations to the other. Nevertheless, even in such an information-poor environment, the error committed in experiment 3 is only 0.0933, i.e., 9.33%. We can therefore conclude that the more users, WSPs and IdPs compose the system, the more accurate our model is, which demonstrates the scalability of TRIMS from the point of view of precision.

Fig. 13. Outcomes for experiment 4.

Moreover, we have measured the experiment 3 error in different scenarios, varying the number of users, IdPs and WSPs composing each of them; the results are shown in Table 1. As can be observed, with far fewer than 5000 users, 600 IdPs and 300 WSPs, we achieve similarly good outcomes.

Table 1
TRIMS accuracy.

       Number of users   Number of IdPs   Number of WSPs   Experiment 3 error (%)
  #0                 2                1                1                     9.33
  #1                50               10                5                     8.47
  #2               100               10                5                     8.36
  #3               500              100               50                     8.20
  #4              3000              100               50                     8.15
  #5              5000              600              300                     8.15
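The root mean square error used in experiments 3 and 4 reduces to a few lines of code. The traces below are hypothetical stand-ins for the simulation output, not data from the paper.

```python
import math

def rmse(real_behavior, global_trust):
    """Root mean square error between the WSC's real goodness (RB_i) and
    the global trust TRIMS assigns to it (GT_i), one value per transaction."""
    n = len(real_behavior)
    return math.sqrt(sum((rb - gt) ** 2
                         for rb, gt in zip(real_behavior, global_trust)) / n)

# Hypothetical toy traces: goodness rising linearly, trust lagging slightly.
rb = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]
gt = [0.0, 0.1, 0.3, 0.5, 0.8, 1.0]
print(round(rmse(rb, gt), 4))   # -> 0.0707
```

An error of 0.0815 as reported above simply means the trust curve deviates from the goodness curve by about 8.15 percentage points on average, in this quadratic sense.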

7.5. Experiment 5
In all the experiments performed so far we have assumed that users always provide correct recommendations. However, this might not be a realistic scenario. Therefore, in this last experiment we have introduced a certain percentage of users providing wrong recommendations in order to evaluate their influence on the overall performance of TRIMS. We have varied this percentage of malicious recommenders from 0% to 90%. Additionally, we have tested several scenarios (with different numbers of users, IdPs and WSPs) taken from Table 1, and plotted the outcomes in Fig. 14. As can be observed, there is not a big difference among the five selected scenarios, although, as expected, the error increases as the percentage of malicious recommenders grows. In the worst case, when the number of users is the lowest and the percentage of malicious recommenders the greatest, the error achieved is still less than 30%.


Fig. 14. Outcomes for experiment 5.

Once again we can conclude that the accuracy of the model increases with the number of users, WSPs and IdPs in the system, but it keeps providing a good performance even if such number of elements is not that high.
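The qualitative trend of experiment 5 can be reproduced with a deliberately simplified aggregation model. This sketch uses plain averaging rather than TRIMS's weighted scheme, and assumes liars report the complement of the WSC's true goodness; it only illustrates why error grows with the fraction of malicious recommenders.

```python
def estimate(goodness, liar_fraction):
    """Average recommendation when a fraction of recommenders lie by
    reporting the complement of the WSC's true goodness."""
    honest = (1 - liar_fraction) * goodness
    lying = liar_fraction * (1 - goodness)
    return honest + lying

def estimation_error(goodness, liar_fraction):
    return abs(estimate(goodness, liar_fraction) - goodness)

# Error is zero with no liars and grows monotonically with their fraction,
# mirroring the shape of the curves in Fig. 14.
errors = [estimation_error(0.9, f / 10) for f in range(10)]  # f = 0% .. 90%
assert errors[0] == 0.0
assert all(e1 <= e2 for e1, e2 in zip(errors, errors[1:]))
```

A robust model such as TRIMS flattens this curve by weighting recommenders according to their credibility instead of averaging them uniformly.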

8. Related work
Trust and reputation management [16] has recently emerged as a novel and accurate solution to tackle several security deficiencies not fully covered by traditional approaches. By being able to form her own opinion about the members composing a community, a user has more opportunities to succeed when choosing the appropriate partner for a transaction. Such an opinion is built upon a collection of gathered recommendations which, in turn, represent the reputation of the target entity within the system. Thus, several models have been proposed over the last years [2,17,18].

One of the first to appear, specifically in the field of multi-agent systems, was Regret [19]. Its authors proposed to manage reputation along three different dimensions: the individual one, given by direct interactions with an agent; the social one, given by previous experiences of group members with such an agent and its acquaintances; and the ontological one, given by the combination of multiple aspects in order to build a reputation about complex concepts.

Later on, PeerTrust [20] was developed. PeerTrust is a trust and reputation model that combines several important aspects related to the management of trust and reputation in distributed systems, such as: the feedback a peer receives from other peers, the total number of transactions of a peer, the credibility of the recommendations given by a peer, the transaction context factor and the community context factor.

Several trust and reputation management proposals have also been made for wireless sensor networks (WSN). One of them is ATRM [21], where trust and reputation management is carried out locally with minimal overhead in terms of extra messages and time delay. It is based on a clustered WSN with a backbone, and its core is a mobile agent system. It requires a node's trust and reputation information to be stored, respectively, in the form of a t-instrument and an r-certificate by the node itself. In addition, ATRM requires every node to locally host a mobile agent in charge of administrating the trust and reputation of its hosting node.

Finally, PowerTrust [22] (based on EigenTrust [23]) was proposed as a robust and scalable P2P reputation system which leverages the power-law feedback characteristics found in dynamically growing P2P networks, either structured or unstructured. The power-law distribution implies that nodes with a small amount of feedback are common, whereas nodes with a large amount of feedback are extremely rare. Therefore, only a few nodes have a much higher degree than the others, and precisely those nodes are dynamically selected as power nodes and considered the most reputable in the system.

9. Conclusions and future work
Technology is definitely changing our way of life. In particular, electronic transactions let us conduct business in ways that were unthinkable a few years ago. However, these transactions suffer some deficiencies derived from the many security risks associated with them. Recently, trust and reputation management has arisen as an accurate way of effectively dealing with some of those problems.

In this paper we have presented a trust and reputation management proposal, called TRIMS, aimed at advising a domain when it has to decide whether or not to exchange some necessary information with another domain, depending on the latter's trustworthiness and reputation. In our proposal, a domain takes into account its own previous experiences with the domain being evaluated, the opinions of other users who have had interactions with that domain in the past, and the recommendations of other domains which have previously had transactions with the evaluated domain. Each of these sources of information is weighted in order to give it more or less importance in the final result.

An extensive set of experiments has been performed in order to test the correctness of our model, showing how it accurately adjusts the global trust given to a domain to that domain's real behavior, and how quickly and effectively it reacts against sudden behavioral fluctuations. As immediate future work we are considering the implementation and deployment of TRIMS over a real scenario like the Liberty Alliance Project and its proposal for standardization, as well as its formal comparison with some of the most representative current trust and reputation models.

Acknowledgments
This work has been partially supported by a Séneca Foundation grant within the Human Resources Research Training Program 2007, as well as by the EU IST FP7 Project SWIFT (Secure Widespread Identities for Federated Telecommunications, http://www.ist-swift.org).


Thanks also to the Funding Program for Research Groups of Excellence, granted as well by the Séneca Foundation with code 04552/GERM/06.

References

[1] Y. Sun, Y. Yang, Trust establishment in distributed networks: analysis and modeling, in: Proceedings of the IEEE International Conference on Communications (IEEE ICC 2007), Communication and Information Systems Security Symposium, Glasgow, Scotland, 2007.
[2] A. Josang, R. Ismail, C. Boyd, A survey of trust and reputation systems for online service provision, Decision Support Systems 43 (2) (2007) 618–644.
[3] R. Roman, M.C. Fernandez-Gago, J. Lopez, Featuring trust and reputation management systems for constrained hardware devices, in: Autonomics '07: Proceedings of the First International Conference on Autonomic Computing and Communication Systems, Rome, Italy, 2007, pp. 1–6.
[4] Y. Sun, Z. Han, K. Liu, Defense of trust management vulnerabilities in distributed networks, IEEE Communications Magazine 46 (2) (2008) 112–119.
[5] S. Marti, H. Garcia-Molina, Taxonomy of trust: categorizing P2P reputation systems, Computer Networks 50 (4) (2006) 472–484.
[6] F. Gómez Mármol, G. Martínez Pérez, State of the art in trust and reputation models in P2P networks, in: Handbook of Peer-to-Peer Networking, Springer, 2009.
[7] G.L. Millán, M.G. Pérez, G.M. Pérez, A.F. Gómez-Skarmeta, PKI-based trust management in inter-domain scenarios, Computers & Security 29 (2) (2010) 278–290.
[8] E. Mykletun, J. Girao, D. Westhoff, Public key based cryptoschemes for data concealment in wireless sensor networks, in: IEEE International Conference on Communications (ICC 2006), Istanbul, Turkey, 2006.
[9] S. Paul, Z. Fei, Distributed caching with centralized control, Computer Communications 24 (2) (2001) 256–268.
[10] S. Evdokimov, M. Fischmann, O. Gunther, Provable security for outsourcing database operations, in: International Conference on Data Engineering, 2006, p. 117.
[11] K. Peng, R. Aditya, C. Boyd, E. Dawson, B. Lee, Multiplicative homomorphic e-voting, in: INDOCRYPT, 2004, pp. 61–72.
[12] J. Domingo-Ferrer, A. Viejo, F. Sebe, U. Gonzalez-Nicolas, Privacy homomorphisms for social networks with private relationships, Computer Networks (15), pp. 3007–3016.
[13] J. Domingo-Ferrer, A provably secure additive and multiplicative privacy homomorphism, in: Information Security Conference, 2002, pp. 471–483.
[14] T. ElGamal, A public key cryptosystem and a signature scheme based on discrete logarithms, IEEE Transactions on Information Theory IT-31 (4) (1985) 469–472.
[15] J. Adler, W. Dai, R.L. Green, C. Neff, Computational details of the VoteHere homomorphic election system, ASIACRYPT.
[16] F. Gómez Mármol, G. Martínez Pérez, Security threats scenarios in trust and reputation models for distributed systems, Computers & Security 28 (7) (2009) 545–556.
[17] A. Singh, L. Liu, TrustMe: anonymous management of trust relationships in decentralized P2P systems, in: IEEE International Conference on Peer-to-Peer Computing, 2003, pp. 142–149.
[18] F. Gómez Mármol, G. Martínez Pérez, Providing trust in wireless sensor networks using a bio-inspired technique, Telecommunication Systems Journal.
[19] J. Sabater, C. Sierra, REGRET: reputation in gregarious societies, in: J.P. Müller, E. Andre, S. Sen, C. Frasson (Eds.), Proceedings of the Fifth International Conference on Autonomous Agents, ACM Press, Montreal, Canada, 2001, p. 194.
[20] L. Xiong, L. Liu, PeerTrust: supporting reputation-based trust in peer-to-peer communities, IEEE Transactions on Knowledge and Data Engineering 16 (7) (2004) 843–857.
[21] A. Boukerche, L. Xu, K. El-Khatib, Trust-based security for wireless ad hoc and sensor networks, Computer Communications 30 (11–12) (2007) 2413–2427.
[22] R. Zhou, K. Hwang, PowerTrust: a robust and scalable reputation system for trusted peer-to-peer computing, IEEE Transactions on Parallel and Distributed Systems.
[23] S. Kamvar, M. Schlosser, H. Garcia-Molina, The EigenTrust algorithm for reputation management in P2P networks, in: Proceedings of the International World Wide Web Conference (WWW), Budapest, Hungary, 2003.

Félix Gómez Mármol is a researcher at the Department of Information and Communications Engineering of the University of Murcia. His research interests include authorization, authentication and trust management in distributed and heterogeneous systems, security management in mobile devices, and the design and implementation of security solutions for mobile and heterogeneous environments. He received an M.Sc. and a PhD in computer engineering from the University of Murcia. Contact him at [email protected] (http://ants.dif.um.es/felixgm).

Joao Girao is a senior researcher in the Ubiquitous Secure Computing group at NEC Laboratories Europe, Heidelberg, Germany, where he is responsible for technical coordination in the identity management area. His research interests include security for networks and services and identity management. He received a diploma in computer and telematics engineering from the University of Aveiro, Portugal. He is a member of the IEEE and the ACM. Contact him at [email protected] (http://www.girao.org/joao)

Gregorio Martínez Pérez is an associate professor in the Department of Information and Communications Engineering of the University of Murcia. His research interests include security and management of distributed communication networks. He received an M.Sc and PhD in computer engineering from the University of Murcia. Contact him at [email protected] (http://webs.um.es/gregorio)