On securing open networks through trust and reputation – architecture, challenges and solutions

Yufeng Wang 1,2, Yoshiaki Hori 2, Kouichi Sakurai 2

1 College of Telecommunications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing 210003, CHINA
2 Department of Computer Science and Communication Engineering, Kyushu University, Fukuoka 812-0053, JAPAN

Abstract. The traditional approach to providing network security has been to borrow tools from cryptography and authentication. As the Internet grows into highly decentralized and self-organizing networks, the open and anonymous nature of these networks makes them vulnerable to attacks from malicious and self-interested peers. We therefore argue that the conventional view of security based on cryptography alone is not sufficient for such open networks. One way to minimize threats in these networks is to build reputation-based trust mechanisms to assess the trustworthiness of peers. However, this research field still lacks a comprehensive reputation and trust architecture, and a great deal of conflicting challenges remain. In this paper, we identify five important attributes of an elemental trust relation, provide a comprehensive view of reputation-based trust architecture comprising three basic components, and introduce the corresponding requirements and research challenges in each component. We then present various reputation rating and trust inference mechanisms that can be used to quantitatively model and analyze real reputation and trust mechanisms. Finally, we describe recent research that treats trust and reputation systems as complex social networks. The goal of this paper is to raise abundant problems and trigger more discussion on the relatively new area of soft security based on trust and reputation systems.

1 Introduction

Security has never been the strong point of the Internet, and with the advent of completely decentralized networks such as P2P, sensor, and ad hoc networks, the threats have only worsened. These networks grant a higher degree of autonomy to their nodes and hence are difficult to police. The traditional approach to providing network security has been to borrow tools from cryptography and authentication. However, we argue that the conventional view of security based on cryptography alone is not sufficient for the unique characteristics and novel misbehaviors encountered in open networks. Fundamental to this is the observation that cryptography cannot prevent malicious or non-malicious insertion of data from internal adversaries or faulty nodes.

Thus, trust management has become increasingly crucial in open networks and electronic communities. Considerable research has been performed on reputation systems targeted at securing decentralized systems. For example, reputation techniques have been used to motivate the peers in a peer-to-peer network to cooperate, to motivate them to abstain from cheating, and to penalize the peers who cheat [1-5]. This body of work has shown that, under certain assumptions, the use of reputation systems in P2P networks considerably reduces the number of malicious transactions. Reputations have also been examined in the context of ad hoc networks in order to motivate network nodes to participate in routing and to refrain from deliberately routing incorrectly. In ad hoc networks, reputation techniques increase the goodput of the network (the percentage of packets sent that actually reach the target without being maliciously altered) [6-8]. Ref. [9] discusses the requirements and approaches for securing sensor networks, including trust and reputation mechanisms. Ref. [10] proposes a reputation-based framework for sensor networks where nodes maintain reputations for other nodes and use them to evaluate their trustworthiness. The authors employ a Bayesian formulation, specifically a beta reputation system, for reputation representation, update and integration, and illustrate that this framework provides a scalable, diverse and generalized approach for countering all types of misbehavior resulting from malicious and faulty nodes. Unfortunately, these activities are not very coherent, and a great deal of problems still need to be investigated. In this paper, we attempt to provide a comprehensive understanding of reputation and trust architecture, challenges and solutions, and discuss mathematical tools to model and analyze reputation and trust systems. The paper is organized as follows: in section 2, the concepts of trust and reputation are briefly introduced. Based on those concepts, the features of trust and the basic attributes of a trust relation are derived. Section 3 provides a comprehensive view of the reputation and trust architecture. Considering the peers' requirements and adversaries' power, the challenges in each component are also discussed. Section 4 offers several mathematical tools used to quantitatively model and analyze reputation rating and transitive trust calculation. Existing trust and reputation systems are briefly introduced in section 5, together with recent research that treats trust and reputation systems as complex networks. Finally, we briefly conclude the paper and summarize the three kinds of goals achieved by various trust and reputation systems.

2 Trust and reputation definitions and features

2.1 Concepts of trust and reputation

Trust facilitates interaction and acceptance of risk in situations of incomplete information. However, trust is a complex concept that is difficult to define stringently. A wide variety of definitions of trust have been put forward. For the purpose of this study the following working definition will be used [11]:

Trust is a particular level of the subjective probability with which an agent assesses that another agent or group of agents will perform a particular action, both before he can monitor such action (or independently of his capacity ever to be able to monitor it) and in a context in which it affects his own action.

This definition includes two distinguishing features of trust. The first is that trust is subjective (personalized): different entities can have different kinds of trust in the same target entity, reflecting that trust is ultimately a personal and subjective phenomenon based on various factors or pieces of evidence, some of which carry more weight than others. The second is that trust is contextualized: trust is related to the context and scope of the relationship, e.g., Bob can be trusted to be a good writer, but not a good car mechanic. Reputation can be considered as a collective measure of trustworthiness based on the referrals or ratings from members in a community. An individual's subjective trust can be derived from a combination of received referrals and personal experience. The concept of reputation is closely linked to that of trustworthiness, but there is a clear and important difference. The main differences between trust and reputation systems can be described as follows: first, trust systems produce a score that reflects the trusting entity's subjective view of the trusted entity's trustworthiness, whereas reputation systems produce an entity's (public) reputation score as seen by the whole community. Second, transitivity is an explicit component in trust systems, whereas reputation systems usually only take transitivity into account implicitly.

2.2 Soft security mechanisms

In a general sense, the purpose of security mechanisms is to provide protection against malicious parties. Traditional security mechanisms typically protect resources from malicious users by restricting access to authorized users only. However, in many situations we have to protect ourselves from those who offer resources, so the problem is in fact reversed. Information providers can, for example, act deceitfully by providing false or misleading information, and traditional security mechanisms are unable to protect against this type of threat. Trust and reputation systems, on the other hand, can provide protection against such threats. Ref. [12] uses the term hard security for traditional mechanisms like authentication and access control, and soft security for what they call social control mechanisms in general, of which trust and reputation systems are examples.

2.3 Five attributes in an elemental trust relation

In order to form meaningful trust networks, trust relationships should be expressed with five basic attributes. As illustrated in Fig. 1, the first attribute represents the trusting peer, the second represents the trusted peer, and the third represents the trust context, denoted by C. In addition to these three basic trust attributes, a measure can be associated with each trust relationship. The trust measure could for example be binary (trusted, not trusted), discrete (e.g. strong trust, weak trust, strong distrust, weak distrust, etc.),

or continuous in some form, e.g. probability, percentage or belief functions. Finally, the fifth important element of a trust relationship is its time component. Obviously, the trusting entity's level of trust in the trusted entity at one point in time might be quite different from the level of trust after several transactions between these two entities have taken place. This means that we can model time as a set of discrete events taking place between the involved parties. However, even if no transactions take place, a trust relationship will gradually change as time passes (through an aging factor or fading factor, for example).
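To make these five attributes concrete, the following sketch (our own illustration, not a structure from the paper; the names TrustRelation and aged_measure and the half-life value are assumptions) encodes a trust relation as a small Python record whose measure fades toward a neutral value as time passes:

```python
from dataclasses import dataclass
import math
import time

@dataclass
class TrustRelation:
    """One elemental trust relation with the five attributes discussed above."""
    trusting_peer: str   # who holds the opinion
    trusted_peer: str    # whom the opinion is about
    context: str         # scope, e.g. "writing" or "car-repair"
    measure: float       # trust measure on a continuous [0, 1] scale
    timestamp: float     # time of the last interaction (seconds since epoch)

    def aged_measure(self, half_life: float = 30 * 24 * 3600) -> float:
        """Fade the trust measure toward the neutral value 0.5 as time passes
        without new interactions (one possible 'aging factor')."""
        age = max(0.0, time.time() - self.timestamp)
        decay = math.exp(-math.log(2) * age / half_life)
        return 0.5 + (self.measure - 0.5) * decay

# Example: Bob is trusted as a writer, but that opinion fades with time.
relation = TrustRelation("alice", "bob", context="writing",
                         measure=0.9, timestamp=time.time() - 60 * 24 * 3600)
print(round(relation.aged_measure(), 3))
```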

Fig. 1 Basic attributes in an elemental trust relation (trusting peer, trusted peer, context C, trust measure, and time)

3 Trust and reputation architecture

3.1 Users' requirements and adversaries' power

A reputation system designer must build a system that is accessible to its intended users, provides the level of functionality they require, and does not hinder or burden them to the point of driving them away. Therefore, it is important to anticipate any allowable user behavior and meet users' needs, regardless of added system complexity. User behavior and requirements that affect distributed mechanism design include:

Node churn: the rate at which peers enter and leave the network, as well as how gracefully they disconnect, affects many areas from network routing to content availability. Higher levels of churn require increased data replication, redundant routing paths, and topology repair protocols.

Reliability: for most applications, users require certain guarantees on the reliability or availability of system services. The situation is more difficult in peer-to-peer networks, where adversaries are actively attempting to corrupt the content peers provide.

Anonymity and privacy: users may only be willing to participate if a certain amount of anonymity is guaranteed. This may vary from no anonymity requirements, to hiding real-world identity behind a pseudonym, to requiring that an agent's actions be completely disconnected from both his real-world identity and his other actions. Obviously, it is very difficult to design a feasible reputation system under the last requirement.

Variety of shared information: users would like to enjoy the abundant information and services that exist in distributed networks (e.g., the files authorized to be shared in Gnutella can include all media types, including executable and binary files).

This feature, combined with peer anonymity, makes P2P environments more vulnerable to certain security attacks.

The two primary types of adversaries in peer-to-peer networks are selfish peers and malicious peers. They are distinguished primarily by their goals in the system. Selfish peers wish to use system services while contributing minimal or no resources themselves. A well-known example of selfish peers is "freeriders" in file-sharing networks such as Kazaa and Gnutella [13-14]. To minimize their cost in bandwidth and CPU utilization, freeriders refuse to share files in the network. The goal of malicious peers, on the other hand, is to cause harm to either specific targeted members of the network or the system as a whole. To accomplish this goal, they are willing to spend any amount of resources (malicious peers with constrained resources can be considered a subclass of malicious peers). It is necessary to know what techniques adversaries can employ to attack the system and to build in mechanisms to combat those techniques. The following list briefly describes the more general techniques available to adversaries.

Unfair rating. A peer may give a bad recommendation even though it has received an excellent service (bad mouthing), or give a good recommendation to a bad peer only because the bad peer is its own friend (ballot stuffing).

Traitors. Some malicious peers may behave properly for a period of time in order to build up a strongly positive reputation, and then begin defecting.

Collusion. In many situations multiple malicious peers acting together can cause more damage than each acting independently.

Front peers. Also referred to as "moles" [15], these malicious colluding peers always cooperate with others in order to increase their reputation. They then provide misinformation to promote actively malicious peers.

Freeriding and whitewashing. Given the anonymity of P2P systems, whitewashers are peers that purposefully leave and rejoin the system under a new identity in an attempt to shed any bad reputation they have accumulated under their previous identity.

Sybil attack. In this attack, a single user creates several fake users - called sybils - who are able to link to (or perform false transactions with) each other and the original user [16] to improve the original user's reputation. Collusion strategies and sybil strategies differ in at least two critical ways. First, a sybil creator can gain reputation at the expense of his sybils, while colluders are unlikely to cooperate unless both can raise their reputations. Second, sybil strategies are likely to be less constrained in size - a user can often easily create a large sybil group, while it may be more difficult to find an equal number of users to form a colluding group [17].

3.2 Trust and reputation architecture, components and challenges

In order to shed light on designing useful trust and reputation systems, the basic components of trust and reputation systems are presented, and the corresponding challenges in each component are also discussed; all are illustrated in Fig. 2.

Fig. 2 Trust and reputation architecture: information gathering (system identity, information sharing, information sources, information integrity, dealing with strangers, information storage), reputation rating and trust models (inputs, rating methods, outputs), and taking action (peer selection, incentives for selfish peers, punishment for malicious peers, reputation update), with the challenges in each component (unfair rating, traitors, collusion, front peers, sybil attack, whitewashing, search overhead, freeriding).

In general, a reputation system assists agents in choosing a reliable peer (if possible) to transact with when one or more have offered the agent a service or resource. To provide this function, a reputation system collects information on the transactional behavior of each peer (information gathering), scores and ranks the peers based on expected reliability (scoring and ranking), and allows the system to take action against malicious peers while rewarding contributors (response).

3.2.1 Gathering Information

The first component of a reputation system is responsible for collecting information on the behavior of peers, which will be used to determine how "trustworthy" they are.

System identities. Associating a history of behavior with a particular agent requires a sufficiently persistent identifier. Therefore, the first concern is the type of identities employed by the peers in the system. There are several properties an identity scheme may have, not all of which can be met with a single design. For example, the properties of system identity include anonymity (see the subsection on users' requirements) and being spoof-resistant and unforgeable. In fact, these properties are in direct conflict with each other. The fundamental questions that need to be answered for identity management in P2P reputation systems are: Should a peer have one or multiple identities? Who allocates the identities to the peers? What information should a peer's identity contain? In fact, many security threats arise from anonymous identities in distributed networks, such as whitewashing and the Sybil attack. Several approaches have been proposed to manage identity in anonymous P2P systems [18][33-34].

Information sharing. Using established network identities, a reputation system protocol collects information on a given peer's behavior in previous transactions in order to determine its reputation. This information collection may be done individually by each peer in a reactive manner, or proactively by all peers collating their experiences together.

Sources of information. The most cautious peers may only want to rely on their own personal experience and use only local information when determining whether to transact with a given peer. To increase the information sources, a cautious agent can collect the opinions of users with whom it has a priori trust relationships established externally from the system (global information). These may include friends from their personal lives, coworkers or business relationships, or even members of social networks they trust [19]. Reputation is also a special kind of information, and it is not free. Currently, most trust and reputation systems do not provide incentives for a peer to rate others and suffer from insufficient feedback. Reputation mechanisms should be designed to be incentive compatible, that is, rational agents are willing to share the reputation information, and to share it truthfully.

Information integrity. One major problem with reputation systems is guaranteeing the validity of opinions. It is impossible to enforce honest, accurate reporting on transaction outcomes by all peers. Reputation systems that hope to combat colluding adversaries and front peers use reputation to weigh the collected information and opinions by the trustworthiness of the source when constructing a peer's reputation rating. The detailed treatment of information integrity involves reputation rating and transitive trust calculation under certain conditions, which will be introduced in detail in the next section.

Dealing with strangers. With new users joining the system periodically, agents will often encounter peers with no transaction history available at any source. When no reputation information can be located, an agent must decide whether to transact with a stranger based on its stranger policy.


Two simple strategies are to optimistically trust all strangers, or to pessimistically refuse to interact with them. Both have their drawbacks. Optimistic agents may frequently be defrauded, especially in systems with high levels of whitewashing. However, in pessimistic systems, new users will be unable to participate in transactions and will never build a reputation. Feldman et al. suggest a "stranger adaptive" strategy, in which all transaction information on first-time interactions with any stranger is aggregated, the probability of being cheated by the next stranger is estimated from it, and the peer decides whether to trust the next stranger using that probability [14-15] (a minimal sketch of this strategy is given at the end of this subsection). This probabilistic strategy adapts well to the current rate of whitewashing in the system.

Information storage. One of the important components of any reputation system is the storage of reputation data. It assumes such high importance because the security of the system depends on the integrity of the data, the format it is stored in, and the location of the storage. In a decentralized P2P network, there are only three choices for the location of reputation data: 1) the requester, 2) the provider, or 3) some third party in the network. The third party might be selected at random or by mutual agreement between the provider and the requester. The drawbacks of approach 3) are that (at least theoretically) a third party can be compromised, the third party might not be available when needed (due to the erratic availability of peers in a P2P network), or it may not even care to store the data securely. Several approaches use a DHT to store the reputation data in order to locate the data efficiently and use cryptographic techniques to protect it, such as PeerTrust [20] and Havelaar [21]. The data can also be stored with the requester and protected cryptographically, but the problem is that future requesters for a given provider will have to contact all the past requesters of that provider in order to fetch and validate the recommendations before interacting with it. A P2P network is already bandwidth intensive, and search in an unstructured P2P network is extremely expensive. Hence the appropriate choice of location for reputation data is with the provider itself. Future requesters will find the reputation data in one place and will not have to search the network for it. The provider will protect the data because it has a stake in the data. Hence the only remaining challenge is to protect the data from the provider itself. The NICE system [46] addresses this problem by storing, at the provider, trust cookies signed by the requester, where a trust cookie denotes the transaction quality between requester and provider.
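As a rough illustration of the stranger-adaptive strategy mentioned above, the following sketch (the class name and the smoothing constants are our own assumptions, not taken from [14-15]) aggregates outcomes of first-time interactions and trusts the next stranger with the estimated cooperation probability:

```python
import random

class StrangerPolicy:
    """Aggregate outcomes of first-time interactions with strangers and
    trust the next stranger with probability equal to the observed
    cooperation rate (a simple reading of the stranger-adaptive strategy)."""

    def __init__(self):
        self.cooperated = 1  # Laplace smoothing: start mildly optimistic
        self.total = 2

    def record(self, stranger_cooperated: bool) -> None:
        self.total += 1
        if stranger_cooperated:
            self.cooperated += 1

    def trust_next_stranger(self) -> bool:
        p_cooperate = self.cooperated / self.total
        return random.random() < p_cooperate

policy = StrangerPolicy()
for outcome in [True, True, False, True, False, False]:  # observed strangers
    policy.record(outcome)
print(policy.cooperated / policy.total)   # estimated cooperation rate
print(policy.trust_next_stranger())       # probabilistic decision
```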


3.2.2 Reputation rating and trust inference models

Once a peer's transaction history has been collected and properly weighted, a reputation score is computed for that peer, either by an interested agent, by a centralized entity, or by all peers collectively, as in EigenTrust [22]. The primary purpose of the reputation score is to help an agent decide which available service provider in the network it should transact with. We need to consider the inputs and outputs of the reputation rating function. Which statistics gathered from a peer's transactional history are most useful in computing its trustworthiness? How should reputation scores be represented?

Inputs


Regardless of how a peer's final reputation rating is calculated, it may be based on various statistics collected from its history. But which statistics should be used in computing the ranking score? Ideally, both the amount a peer cooperates and the amount it defects would be taken into account. However, the amount a peer defects may often be unknown. As suggested in [15], peers can calculate the rate at which an agent contributes to the network; the contribution rate is a reputation rating based solely on good work. While both good and bad behavior can be taken into account, the negative impact of bad behavior on reputation should outweigh the positive impact of good behavior. Should a peer's reputation be based on the quantity of the work it has done? Or should the quality also matter? Intuitively, a score that properly combines quality and quantity is much more effective and robust under a variety of adversarial techniques (for example, a peer that defects on one 100,000 yen transaction should have a lower reputation than one who defects on two or three 1,000 yen transactions). If a system wishes to defend against traitors, then the reputation rating system should also treat time as an input variable. More recent transaction behavior should have a greater impact on a peer's score than older transactions; for example, a weighted transaction history could be used. This would allow system agents to detect peers who suddenly "go bad" and defend against them.

Outputs

In the end, the computed reputation rating may be a binary value (trusted or untrusted), a scaled integer (e.g. 1 to 10), or a value on a continuous scale (e.g. [0,1]). The choice is application dependent. Both scenarios detailed above imply that a single scalar value is obtained for each candidate and is compared either against other candidates' ratings or against a trust threshold determined by the transaction. However, it is also useful to maintain a peer's reputation as multiple component scores. Applying different functions to the scores allows a peer to calculate a rating best suited for the given situation. Many proposed systems suggest maintaining multiple statistics about each peer. For example, keeping separate ratings on a peer's likelihood to defect on a transaction and its likelihood to recommend malicious peers helps mitigate the effects of front peers [23-25]. In section 4, we present the reputation rating and transitive trust inference mechanisms.
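To illustrate how quality, quantity, and recency can be combined so that recent, high-value defections weigh most heavily (helping detect traitors), here is a hypothetical sketch; the exponential half-life and the 3:1 penalty ratio are our own choices, not values from the paper:

```python
import math
import time

def weighted_reputation(history, half_life=7 * 24 * 3600, now=None):
    """history: list of (timestamp, value, cooperated) tuples.
    Recent, high-value defections drag the score down more than old ones;
    bad behavior is weighted more heavily than good behavior."""
    now = time.time() if now is None else now
    score, norm = 0.0, 0.0
    for ts, value, cooperated in history:
        recency = math.exp(-math.log(2) * (now - ts) / half_life)
        weight = recency * value
        # penalize defection more strongly than cooperation is rewarded
        score += weight * (1.0 if cooperated else -3.0)
        norm += weight
    return score / norm if norm else 0.0

now = time.time()
history = [
    (now - 30 * 24 * 3600, 1000, True),    # old, small, good
    (now - 1 * 24 * 3600, 100000, False),  # recent, large, defection
]
print(round(weighted_reputation(history, now=now), 3))  # strongly negative
```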


3.2.3 Taking Action

In addition to guiding decisions on selecting transaction partners, reputation systems can be used to motivate peers to contribute positively to the network and/or to punish adversaries who try to disrupt the system. Mechanisms used to encourage cooperation in the system are referred to as incentive schemes. They are most effective at combating selfishness, as they offset the cost of contribution with some benefit. However, incentive schemes can also mitigate some maliciousness if access to system services requires an adversary to provide good resources first; such a reciprocative procedure raises the cost of misbehavior. While incentives are very useful at discouraging selfishness, curtailing misbehavior requires the ability to punish malicious peers. As discussed earlier, the primary function of reputation systems is to inform agents as to which peers are likely to defect on a transaction. Not only does adversary avoidance benefit well-behaved peers, but it also punishes malicious peers, who will quickly find themselves unable to disseminate bad resources or cheat other peers.

4 Indirect trust inference mechanisms

In any sizeable social network, members are unlikely to have interacted with (rated) every other member. Humans rely on gossip or word-of-mouth to propagate their opinions about each other. In evaluating a stranger's trustworthiness, we weight our friends' opinions about this stranger by how much we trust our friends, and come to a conclusion on whether we are going to trust this stranger. Hence, propagation of opinions (of which rating is one form) in human society is a very natural phenomenon. We argue that ratings should be context- and individual-dependent quantities. Clearly, sharing approval ratings across multiple contexts needs to take into account the similarities and differences among the various contexts. We represent a trust system using a social network where nodes represent members and directed edges represent direct pair-wise ratings. Each social network is formed with respect to a specific context. By indirect inference, we refer to the ability to estimate the rating that a subject would have given to another member of the community as if that subject had directly interacted with that member. The general problem is described as follows: let ρij(c) be the rating that member i gives to member j with respect to context c. Assume that ρij(c) represents all the information that i has about j. Given ρij(c) and ρjk(c), where i ≠ j ≠ k, how i should evaluate k can be expressed as

ρik(c) = f(ρij(c), ρjk(c)),

where f(·) represents a rating propagation function for inference across two edges.

The following subsections examine four specific models that instantiate the above formalism: centrality-based rating, preference-based rating, Bayesian-inference-based rating, and belief and fuzzy models.

4.1 Centrality-based rating

Here we focus on one particularly well-constructed centrality-based rating system that takes into consideration not only the ratings themselves, but also weights those ratings by the reputation of the raters. Let us define the following quantities: x, the vector of reputation measures for members in a social network; xi, the reputation measure for member i; n, the number of members in the social network; and A, the adjacency matrix, which represents the ratings in the context of a social network. The rating by member i about member j is the matrix entry aij. The reputation measure xi is a function of the reputation values of the members who have rated member i. The rating process by members in a social network for the ith member can be expressed as:

xi′ = a1i x1 + a2i x2 + … + ani xn,  or in matrix form,  x′ = A^T x
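As a minimal numerical sketch of this recursive computation (it normalizes each rater's outgoing ratings so that the iteration matrix is column-stochastic, anticipating the normalization argument made below; the function name and example ratings are our own assumptions), the fixed point can be found by power iteration:

```python
import numpy as np

def centrality_reputation(A, iterations=100, tol=1e-9):
    """A[i][j] is the rating that member i gives to member j.
    Each rater's outgoing ratings are normalized so that the iteration
    matrix has an eigenvalue of 1; power iteration of x' = A^T x then
    converges to a relative reputation vector."""
    A = np.asarray(A, dtype=float)
    row_sums = A.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0        # isolated raters contribute nothing
    M = (A / row_sums).T                 # column-stochastic iteration matrix
    x = np.full(A.shape[0], 1.0 / A.shape[0])
    for _ in range(iterations):
        x_new = M @ x
        if np.abs(x_new - x).sum() < tol:
            break
        x = x_new
    return x

# Three members; member with index 2 receives the strongest ratings overall.
A = [[0.0, 0.2, 0.8],
     [0.1, 0.0, 0.9],
     [0.4, 0.6, 0.0]]
print(centrality_reputation(A).round(3))
```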

To perform the recursive calculation, this equation can be solved for its steady-state value (eigenvalue λ = 1): (I − λA^T) x = 0. This characteristic equation does not in general have a steady-state solution, since the matrix A does not in general have an eigenvalue of 1. By the Perron-Frobenius theorem,

if the adjacency matrix can be column-normalized to have unit column sums, the normalized matrix has an eigenvalue of 1. Such normalization does not affect the relative reputation of each member. The corresponding eigenvector x then gives the reputations of the members. This recursive calculation yields reputations that are higher if either the received ratings are higher or the members who give those good ratings have higher reputations themselves. The EigenTrust model [22] adopts a similar centrality-based model. In general, centrality-based rating systems are global rating systems. We argue that a personalized rating system with contextualized ratings would yield better measures of reputation or reliability.

4.2 Preference-based rating

A preference-based rating system is a personalized rating system that takes into account the preferences of each member when selecting the reputable members in the community that he or she is most likely to approve of. Let ρi(c) be defined as the probability that an individual i approves of an object that can be categorized within context c. The probability that i approves of another member j's opinion for an object in context c is represented by ρij(c). So,

ρij(c) = Prob(i approves of j's preference for context c) = Prob(both i and j approve of context c) + Prob(both i and j disapprove of context c) = ρi(c)ρj(c) + (1 − ρi(c))(1 − ρj(c)).

The closed form for ρik(c) is given as follows:

ρik = [ (2ρi − 1)(2ρiρjk − ρi − ρjk + ρij) + (1 − ρi)(2ρij − 1) ] / (2ρij − 1),   if ρij ≠ 0.5
ρik = 0.5,   if ρij = 0.5

where ρi(c) can be a simple proportional measure. For example, ρi(c) can be estimated by dividing the total number of i's approvals by the total number of ratings that i has given in that context.

4.3 Bayesian estimate rating

Probability appears to be a good choice for representing uncertainty, as it can do so more accurately than ad hoc scoring techniques. Bayesian systems take binary ratings as input (i.e. positive or negative), and compute reputation scores by statistical updating of beta probability density functions (PDF). The a posteriori (i.e. updated) reputation score is computed by combining the a priori (i.e. previous) reputation score with the new rating. The reputation score can be represented by the beta PDF parameter tuple (α, β), where α and β represent the amounts of positive and negative ratings respectively. The beta PDF, denoted beta(p | α, β), can be expressed using the gamma function:

beta(p | α, β) = [ Γ(α + β) / (Γ(α)Γ(β)) ] p^(α−1) (1 − p)^(β−1),  where 0 ≤ p ≤ 1 and α, β > 0

The probability expectation value of the beta distribution is given by: E(p)= α/(α+β)

So, after observing αj positive and βj negative outcomes (from the perspective of node i), the reputation of node j maintained at node i is given by ρij = beta(αj + 1, βj + 1), which corresponds to the probability that peer j will cooperate with peer i in the next event. Then, assume peer i again interacts with peer j for r + s more events, r cooperative and s non-cooperative; the updated reputation is ρij = beta(αj + r + 1, βj + s + 1). This clearly shows the flexibility of the beta reputation system: a reputation update is equivalent to just updating the values of the two parameters αj and βj:

αj^new = αj + r,  βj^new = βj + s.
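A minimal sketch of such a beta reputation record follows; the class and method names are our own, but the update and the expectation value E(p) = (α+1)/(α+β+2) for beta(α+1, β+1) follow directly from the formulas above:

```python
class BetaReputation:
    """Beta reputation that node i maintains about node j, following
    beta(p | alpha_j + 1, beta_j + 1) with expectation alpha / (alpha + beta)."""

    def __init__(self):
        self.alpha = 0.0  # accumulated positive outcomes
        self.beta = 0.0   # accumulated negative outcomes

    def update(self, r: int, s: int) -> None:
        """Fold in r cooperative and s non-cooperative events."""
        self.alpha += r
        self.beta += s

    def expected_cooperation(self) -> float:
        """Probability that j cooperates with i in the next event."""
        return (self.alpha + 1) / (self.alpha + self.beta + 2)

rep = BetaReputation()
rep.update(r=8, s=2)               # 8 cooperative, 2 non-cooperative events
print(rep.expected_cooperation())  # 9/12 = 0.75
```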

Indirect trust can be inferred from these parameters using the methods described in [10][26]. The advantage of Bayesian systems is that they provide a theoretically sound basis for computing reputation scores; the disadvantage is that they might be too complex for average users to understand. Furthermore, Bayesian models do not allow a peer to represent the state where it "does not know" about another peer. The "does not know" state can, on the other hand, be easily represented using belief functions.

4.4 Belief models and fuzzy models

Belief theory is a framework related to probability theory, but in which the sum of probabilities over all possible outcomes does not necessarily add up to 1; the remaining probability mass is interpreted as uncertainty. This model explicitly separates trust in the ability to recommend a good participant, which represents referral trust, from trust in actually being a good participant, which represents functional trust. For example, trust in context c (for example, being a good car mechanic) of one peer i towards another peer k is established through recommendations if and only if there is a third entity j such that i has referral trust in j, and j has functional trust in k. Based on this model, Jøsang [26] computes trust with the help of subjective probability. In this model trust is represented as an opinion: a triple of belief b, disbelief d and uncertainty u, each in [0, 1] with b + d + u = 1. The values b, d and u can be calculated from the positive and negative experiences concerning the target of the opinion. Subjective logic defines a number of operators. Some operators represent generalizations of binary logic and probability calculus operators, whereas others are unique to belief theory because they depend on belief ownership. Through those operators, it is easy to infer transitive trust and to combine several trust values obtained along multiple paths between source and destination into one value. Recently, some proposals have developed reputation systems based on a fuzzy-logic approach. Such systems benefit from the distinct advantages of fuzzy inference, which can handle uncertainty, fuzziness, and incomplete reputation information effectively [27-29].

In all of the rating methods above, the following two challenges exist. First, since trust and reputation are personalized and contextualized, semantic constraints must be satisfied in order for transitive trust derivation to be meaningful [30]. More specifically, trust and reputation can be expressed on the semantic web using ontologies that provide a method for describing entities and the trust relationships between them. These ontological foundations facilitate expressing different trust relationships with respect to different topics. Second, in a large online community, it is common that trust networks consist of

multiple paths between the trusting party and the trusted party. The indirect trust values calculated along different paths can be combined into a single trust value. Ref. [31] introduces various path-weighting algorithms to calculate the integrated trust value. Jøsang provides a practical method for analyzing and deriving measures of trust in such environments. The method, called TNA-SL (Trust Network Analysis with Subjective Logic), is based on analyzing trust networks as directed series-parallel graphs that can be represented as canonical expressions, combined with measuring and computing trust using subjective logic [26][32].
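As an illustration of the opinion representation used in belief models, the sketch below maps r positive and s negative experiences to a (b, d, u) triple using the commonly cited subjective-logic mapping b = r/(r+s+2), d = s/(r+s+2), u = 2/(r+s+2), and applies one standard form of the transitivity (discounting) operator; treat both as assumptions based on [26] rather than formulas reproduced from this paper:

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    b: float  # belief
    d: float  # disbelief
    u: float  # uncertainty (b + d + u = 1)

    @staticmethod
    def from_evidence(r: int, s: int) -> "Opinion":
        """Map r positive and s negative experiences to an opinion."""
        total = r + s + 2
        return Opinion(b=r / total, d=s / total, u=2 / total)

    def discount(self, other: "Opinion") -> "Opinion":
        """Transitive discounting: self is i's referral trust in j,
        other is j's functional trust in k; result is i's derived trust in k."""
        return Opinion(b=self.b * other.b,
                       d=self.b * other.d,
                       u=self.d + self.u + self.b * other.u)

ij = Opinion.from_evidence(r=10, s=1)    # i's referral trust in j
jk = Opinion.from_evidence(r=4, s=4)     # j's functional trust in k
ik = ij.discount(jk)
print(ik, round(ik.b + ik.d + ik.u, 6))  # components still sum to 1
```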

5 Existing solutions and performance metrics

The key to overcoming the individual rationality of defection is to encourage reciprocity among the peers. Generally, reciprocity can be facilitated through virtual-currency-based (token-based) or reputation-based approaches. Refs. [36-37] analyze the inherent incentive mechanism in BitTorrent (a very popular P2P application) and its effect on network performance. Specifically, the basic idea in BitTorrent is that each peer uploads to a certain number of peers (the default value is 4) from which it has the highest downloading rates. The incentive mechanism is the so-called 'tit for tat', which is a special kind of token-based incentive pattern, the barter trade pattern. But the large population and high turnover of P2P systems make it less likely that repeat interactions will occur with a familiar entity. Also, this 'tit for tat' mechanism does not deal with asymmetry of interest. Consequently, there is a great deal of research on trust and reputation frameworks in open networks. Several representative P2P reputation systems are given as follows; note that the list presented in this paper is by no means exhaustive.

At Stanford University, Hector Garcia-Molina and colleagues proposed the EigenTrust algorithm, in which the basic idea is that each peer i is assigned a global trust value that reflects the experiences of all the peers in the network with peer i [22]. The algorithm aggregates the scores as a weighted sum of all raw reputation scores. EigenTrust is fully distributed using a DHT overlay network. The system assumes pre-trusted peers and uses majority voting to check faulty reputation scores that are reported.

At Georgia Tech, Li Xiong and Ling Liu developed the PeerTrust model [20]. Their PeerTrust system computes the trustworthiness of a given peer as the weighted sum of five factors: peer feedback records, feedback scope, feedback credibility, transaction context, and community context. PeerTrust is fully distributed, uses an overlay for trust propagation, uses a public-key infrastructure for securing remote scores, and prevents some malicious abuses by peers.

At the University of Southern California (USC), Shanshan Song et al. designed the FuzzyTrust system [27], which uses fuzzy-logic inference rules to calculate local trust scores and to aggregate global reputation. This system leverages fuzzy logic's ability to handle uncertainty, fuzziness, and incomplete information adaptively.

Also at USC, Runfang Zhou and Kai Hwang propose the PowerTrust system [39], which leverages the power-law feedback

characteristics observed in online reputation systems such as eBay. The PowerTrust system dynamically selects a small number of power nodes that are most reputable, using a distributed ranking mechanism, to improve global reputation accuracy and aggregation speed.

Many people all over the world participate online in established social networks (complex networks). Most social networks show "community structure", i.e., groups of vertices that have a high density of edges within them, with a lower density of edges between groups. Trust and reputation networks are a special kind of social network, but few works attempt to investigate their structural properties and the effect of those properties on processes taking place on such networks. Ref. [40] uses the 'trust community' to deal with the problem of nodes' selfishness and possible maliciousness. Refs. [39][41-43][46] also make preliminary investigations of social trust networks. But a great deal of exciting research remains in this direction. Generally, the following three aspects of trust and reputation systems should be considered thoroughly: how to find statistical properties of trust and reputation networks, such as path lengths and degree distributions, that characterize the structure and behavior of networked systems, and how to measure these properties appropriately; how to create models that can help us to understand the meaning of these properties, how they came to be as they are, and how they interact with one another; and how to predict the behavior of networked trust and reputation systems on the basis of measured structural properties and the local rules governing individual peers. A great deal of excellent results have been achieved on the characterization and modeling of network structure [44-45]. Studies of the effects of structure on the behavior of trust and reputation systems, on the other hand, are still in their infancy.

When studying reputation systems it is necessary to determine which metrics best measure the success of a particular system. The performance of real reputation and trust systems should be measured from the following aspects. First, a reputation system can enhance the security of open networks, but its implementation exerts a certain amount of overhead on the original open system; it is therefore necessary to quantitatively measure this overhead in terms of data storage, computation, control messages, etc. Second, a reputation system should detect malicious behavior correctly and in a timely manner, so the mean time to detect and the false positive/negative detection rates are also needed. Third, the general goal of a reputation system is to improve the goodput of open systems by encouraging selfish users to cooperate and deterring malicious behavior, so it is necessary to quantitatively measure to what extent the reputation system increases the overall level of cooperation and improves the goodput of the whole system.

6 Conclusion

Reputation systems can be used for securing decentralized networks. The predictive power of reputation depends on the supposition that past behavior is

indicative of future behavior. The higher the reputation profile of a peer, the more trustworthy it is deemed to be. Hence, reputation systems secure decentralized networks by motivating honest participation among the peers in the network and by deterring and penalizing cheaters. There is also a rapidly growing literature around trust and reputation systems, but unfortunately those activities are not very coherent. This paper looked at a wide range of issues related to trust and security, including the five attributes of an elemental trust relation, the components and challenges of a trust and reputation architecture, and the mathematical tools used to model and analyze reputation rating and transitive trust calculation. Some calculations of trust values were presented; they are not absolutely exact, but they give a very good indication of the level of trust we can expect in different scenarios and under different network architectures and implementations. In conclusion, the following properties should be enforced in a trustworthy open network environment: rational participants (peers) not only have an incentive to provide feedback, but also provide it truthfully (the truthfulness property); dishonest peers do not provide service to other valid participants (the avoidance property); and dishonest nodes may not use the service and are gradually expelled from the virtual community (the exclusion property). Obviously, reputation and trust systems are not a panacea for all trust problems or security issues in decentralized and open networks. But as a research field standing at the crossroads of several disciplines, such as economics, sociology, computer science, telecommunications, and even psychology, they can be used to mitigate the threats faced by such networks, which cannot be dealt with by traditional security mechanisms.

References [1] Sergio Marti and Hector Garcia-Molina, Limited Reputation Sharing in P2P Systems, In ACM Conference on Electronic Commerce (EC'04), 2004. [2] Sonja Buchegger, Jean-Yves Le Boudec, A Robust Reputation System for P2P and Mobile Ad-hoc Networks, P2PEcon 2004. [3] Minaxi Gupta, Paul Judge, Mostafa Ammar, A Reputation System for Peer-to-Peer Networks, NOSSDAV’03, June 1–3, 2003, Monterey, California, USA. [4] Rohit Gupta, Arun K. Somani, Reputation Management Framework and Its Use as Currency in Large-Scale Peer-to-Peer Networks, Fourth International Conference on Peerto-Peer Computing (P2P'04). [5] Prashant Dewan, Partha Dasgupta, Securing P2P networks using peer reputations: Is there a silver bullet, IEEE Consumer Communications and Networking Conference(CCNC),2005. [6] P. Michiardi and R. Molva, CORE: A collaborative reputation mechanism to enforce node cooperation in mobile ad hoc networks, The Sixth IFIP conference on security communications, and multimedia (CMS 2002), Portoroz, Slovenia., 2002. [7] S. Buchegger and J.-Y. L. Boudec, Performance Analysis of the CONFIDANT Protocol: Cooperation Of Nodes — Fairness In Dynamic Ad-hoc NeTworks, in Proceedings of IEEE/ACM Symposium on Mobile Ad Hoc Networking and Computing (MobiHOC), Lausanne, CH, June 2002, pp. 226–236.

[8] Sergio Marti, T.J. Giuli, Kevin Lai, and Mary Baker, Mitigating Routing Misbehavior in Mobile Ad Hoc Networks, In Proceedings of the 6th annual ACM/IEEE international conference on Mobile computing and networking, 2000. [9] Yee Wei Law, Paul J.M. Havinga, How to secure wireless sensor network, International Conference on Intelligent Sensors, Sensor Networks and Information Processing, 2005. [10] S. Ganeriwal and M. B. Srivastava, Reputation-based framework for high integrity sensor networks, In Proceedings of the 2nd ACM workshop on Security of ad hoc and sensor networks (ASN '04), pages 66--77, New York, NY, USA, 2004. [11] Lamsal, Understanding trust and security, Available at, http://www.cs.Helsinki.FI/ u/lamsal/papers/. [12] L. Rasmusson and S. Janssen, Simulated Social Control for Secure Internet Commerce. In Proceedings of the 1996 New Security Paradigms Workshop. ACM, 1996. [13] Eytan Adar and Bernardo A. Huberman, Free riding on gnutella, First Monday, 5(10), October 2000. [14] M. Feldman, C. Papadimitriou, J. Chuang and I. Stoica, Free-riding and whitewashing in Peer-to-Peer systems, ACM SIGCOMM’04 Workshop on Practice and theory of incentives in networked systems (PINS), 2004 [15] M. Feldman, Kevin Lai, Ion Stoica, and John Chuang. Robust Incentive Techniques for Peer-to-Peer Networks. In ACM Conference on Electronic Commerce (EC'04), 2004. [16 ]J. Douceur, The sybil attack, In Proceedings of the IPTPS02 Workshop, 2002. [17 ]Alice Cheng, Eric Friedman, Manipulability of PageRank under Sybil Strategies, First Workshop on the Economics of Networked Systems (NetEcon06), 2006. [18] Sergio Marti and Hector Garcia-Molina, Identity Crisis: Anonymity vs. Reputation in P2P Systems, In IEEE 3rd International Conference on Peer-to-Peer Computing (P2P 2003). [19] Sergio Marti, Prasanna Ganesan, and Hector Garcia-Molina. SPROUT: P2P Routing with Social Networks. In International Workshop on Peer-to-Peer Computing & DataBases (P2P&DB 2004), 2004. [20] Li Xiong and Ling Liu, PeerTrust: Supporting Reputation-Based Trust for Peer-to-Peer Electronic Communities, IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, VOL. 16, NO. 7, JULY 2004. [21] Dominik Grolimund, Luzius Meisser, Stefan Schmid, and Roger Wattenhofer, Havelaar: A Robust and Efficient Reputation System for Active Peer-to-Peer Systems, First Workshop on the Economics of Networked Systems (NetEcon06), 2006. [22] Sepandar D. Kamvar, Mario T. Schlosser, and Hector Garcia-Molina, The EigenTrust Algorithm for Reputation Management in P2P Networks, In Proceedings of the Twelfth International World Wide Web Conference, 2003. [23] Tim Moreton and Andrew Twigg, Enforcing Collaboration in Peer-to-Peer Routing Services, In Proc. First International Conference on Trust Management, May 2003. [24] A. Jøsang, C. Keser, and T. Dimitrakos, Can We Manage Trust?, In the Proceedings of the Third International Conference on Trust Management (iTrust) 2005. [25] A. Jøsang, L. Gray and M. Kinateder, Simplification and Analysis of Transitive Trust Networks, 4(2) 2006, pp.139-161 . Web Intelligence and Agent Systems Journal. 2006. [26] A. Jøsang, R. Hayward, S. Pope, Trust Network Analysis with Subjective Logic. Australasian Computer Science Conference 2006. [27] Shanshan Song, Kai Hwang, Runfang Zhou and Yu-Kwong Kwok, Trusted P2P Transactions with Fuzzy Reputation Aggregation, IEEE INTERNET COMPUTING, 2005. [28] Viktor S. Grishchenko, A fuzzy model for context-dependent reputation. 
ISWC Workshop on Trust, Security, and Reputation on the Semantic Web, 2004. [29] Aringhieri R, Damiani E, De Capitani di Vimercati S, Paraboschi S, Samarati P, Fuzzy Techniques for Trust and Reputation Management in Anonymous Peer-to-Peer Systems, Journal of the American Society for Information Science and Technology, Vol. 57, No. 4, February 2006.

[30] A. Jøsang and S. Pope, Semantic Constraints for Trust Transitivity, In Proceedings of the Asia-Pacific Conference on Conceptual Modelling (APCCM), February 2005. [31] Lik Mui, Mojdeh Mohtashemi, Ari Halberstadt, A Computational Model of Trust and Reputation, In Proc. of the 35th Hawaii International Conference on System Sciences, 2002. [32] A. Jøsang, R. Ismail, and C. Boyd, A Survey of Trust and Reputation Systems for Online Service Provision, Decision Support Systems, 2006. [33] Alice Cheng, Eric Friedman, Sybilproof reputation mechanisms, In Proceedings of the 2005 ACM SIGCOMM workshop on Economics of peer-to-peer systems. [34] Roger Dingledine, Nick Mathewson, Paul Syverson, Reputation in P2P Anonymity Systems, Workshop on Economics of P2P Systems, June 2003. [35] Wang Y., Vassileva J., Trust and Reputation Model in Peer-to-Peer Networks, Proc. of IEEE Conference on P2P Computing, Linkoeping, Sweden, September 2003. [36] Dongyu Qiu and R. Srikant, Modeling and Performance Analysis of BitTorrent-like Peer-to-Peer Networks, ACM SIGCOMM'04. [37] Nikitas Liogkas, Robert Nelson, Eddie Kohler, Lixia Zhang, Exploiting BitTorrent For Fun (But Not Profit), the 5th International Workshop on Peer-to-Peer Systems (IPTPS 2006), Santa Barbara, CA, USA, 2006. [38] P. Obreiter, J. Nimis, A taxonomy of incentive patterns - the design space of incentives for cooperation, In Proc. of the Second International Workshop on Agents and Peer-to-Peer Computing (AP2PC'03), LNCS 2872, Melbourne, Australia, 2003. [39] R. Zhou, K. Hwang, PowerTrust: A Robust and Scalable Reputation System for Trusted P2P Computing, IEEE Transactions on Parallel and Distributed Systems, to appear, 2006. [40] Rohit Gupta and Arun K. Somani, Reputation Management Framework and Its Use as Currency in Large-Scale Peer-to-Peer Networks, Fourth International Conference on Peer-to-Peer Computing (P2P'04). [41] J. Sabater and C. Sierra, Reputation and social network analysis in multi-agent systems, In Proceedings of the First Int. Joint Conf. on Autonomous Agents and Multi-agent Systems, pp. 475-482, ACM Press, 2002. [42] Sergio Marti, Prasanna Ganesan and Hector Garcia-Molina, SPROUT: P2P Routing with Social Networks, In International Workshop on Peer-to-Peer Computing & DataBases (P2P&DB 2004), 2004. [43] Tad Hogg and Lada Adamic, Enhancing Reputation Mechanisms via Online Social Networks, In ACM Conference on Electronic Commerce (EC'04), 2004. [44] M. Newman, The structure and function of complex networks, SIAM Review 45: 167-256, 2003. [45] V. Latora and M. Marchiori, Economic small-world behavior in weighted networks, The European Physical Journal B, vol. 32, pp. 249-263, 2003. [46] S. Lee, R. Sherwood and B. Bhattacharjee, Cooperative peer groups in NICE, In Proc. of INFOCOM 2003.
