Trust & Reputation Management
A Formal-Semantics-Based Calculus of Trust

Building trust models based on a well-defined semantics of trust is important so that we can avoid misinterpretation, misuse, or inconsistent use of trust in Internet-based distributed computing. The authors present an approach to a formal-semantics-based calculus of trust, from conceptualization to logical formalization, from logic model to quantification of uncertainties, and from quantified trust to trust decision-making. They also explore how to apply a formal trust model to a PGP (Pretty Good Privacy) system to develop decentralized public-key certification and verification.
Jingwei Huang and David M. Nicol University of Illinois at Urbana-Champaign
Published by the IEEE Computer Society
Trust is a critical factor in Internet applications such as Web services,1 peer-to-peer (P2P),2 grid computing,3 and various online communities or social networks.4 The problem of modeling and quantifying trust in decentralized networks is ubiquitous. Researchers have proposed many computational trust models,4,5 but most are heuristic and neglect the formal semantics of trust. Trust is a complex social phenomenon; it's context-dependent, and people understand it in different ways. Without precisely defined semantics, measuring and calculating trust in a consistent and meaningful way is difficult, and misinterpreting or misusing trust is easy. Here, we address this issue by developing a formal-semantics-based calculus of trust. Specifically,
based on concepts of trust from social studies, we logically formalize key concepts of trust, model uncertainty, and apply quantified trust in decisionmaking and risk analysis. We apply these techniques to a PGP (Pretty Good Privacy) public-key system to develop decentralized public-key certification and verification.
Motivating Scenario
Alice needs a dentist; seeing an ad for Dan, she asks some friends she trusts about him. Ben is Dan's client and shares his opinion with Alice. Synthesizing such opinions, Alice decides whether or not to trust Dan. We can replace "finding a dentist" with finding a provider of other services or products on the Internet. With an eye toward modeling trust, let's explore some subtleties in this scenario.
Elements of Trust
In Ben's trust in Dan, he expects or hopes Dan will perform professionally. Ben believes what he expects to be true; he also knows that Dan might not behave as he expects, but Ben is willing to take the risk of believing it. All notions of trust in our story share three common elements: an expected thing, belief, and willingness to take risk. To trust, or not to trust, is a decision. Thus, trust modeling must develop not only methods or algorithms to calculate trust degrees but also methods of trust decision-making. The latter is important, especially for Internet applications with autonomous agents.

Different Types of Trust with Different Properties
Alice's trust in her friends is different from Ben's trust in Dan. The former is trust regarding her friends' belief about whether Dan is a good dentist; the latter is trust regarding Dan's performance as a dentist. Through her trust in her friends, Alice might indirectly trust Dan regarding his performance, but trust in performance alone doesn't mediate further trust. Consider that if Alice goes to Dan's office, but Dan tells her that he's going on a long vacation and recommends his friend Eric, could Alice trust Eric? Alice might wonder whether Dan's recommendation is based on friendship with Eric rather than Eric's performance as a dentist. Most importantly, Alice trusts Dan only regarding his own performance as a dentist and might not trust him regarding his opinion on Eric's performance. Some models calculate the degree of trust from a to c from the trust from a to b and the trust from b to c, without considering the condition of trust propagation. In modeling trust propagation, we must distinguish between these two types of trust.

Sources of Uncertainties
Uncertainty in trust might stem from observed uncertainty in the trustee's behavior, but there is another source. Suppose Alice receives a costly, sophisticated treatment from Dan, but she isn't sure whether it was necessary. This uncertain state is due to her limited knowledge or lack of information to judge. Later, another dentist whom Alice trusts tells her that Dan saved her tooth; Alice's trust state regarding Dan transforms from uncertainty to greater certainty. Alternatively, if her trusted dentist told her that the treatment was indeed unnecessary, the transformation would be from uncertainty to distrust. So, as we develop a trust model, we must be mindful of the nature and dependencies of uncertainty.

Measurement of Uncertainties
Intuitively, we might quantify an entity's trustworthiness based on interactions — the fractions of positive ones, negative ones, and those for which we're uncertain. Approached formally in probabilistic terms, we must identify the underlying sample space, but that choice can significantly impact the results. In a given context, what, precisely, constitutes a positive, negative, or uncertain interaction? A precisely defined semantics of trust is needed to approach these questions.

Trust is Context-Dependent
We must always place trust in a specific context. Alice might trust Ben highly regarding his opinions about computers, but the only dental work Ben has had Dan do is routine check-ups. Because Ben has only limited knowledge about Dan, Alice's confidence in Ben's opinion of Dan's ability to do a root canal might be low. From the perspective of using probability to quantify trust, a different context of trust will lead to a different sample space.

Semantics of Trust
Based on the concepts of trust developed in the social sciences,6 we use the following definition. Trust is a mental state comprising:

• expectancy — the trustor expects (hopes for) a specific behavior from the trustee (such as providing valid information or effectively performing cooperative actions);
• belief — the trustor believes that the expected behavior occurs, based on the evidence of the trustee's competence and goodwill; and
• willingness to take risk — the trustor is willing to take risk for that belief.
We can break trust into two basic types (based on the trustor's expectancy): trust in performance is trust about what the trustee performs, whereas trust in belief is trust about what the trustee believes. The trustee's performance could be the truth of what the trustee says or the successfulness of what he or she does. For simplicity, we represent both as a statement, denoted as a Boolean-type term, x, called a reified proposition.7 For the first case, x is what the trustee says, and for the second, x represents a successful performance, which is regarded as a statement the trustee makes describing his or her performance. A trust-in-performance relationship, trust_p(d, e, x, k), represents that trustor d trusts trustee e regarding e's performance x in context k. This relationship means that if e makes x in context k, then d believes x in that context. In first-order logic (FOL), trust_p(d, e, x, k) ≡ madeBy(x, e, k) ⊃ believe(d, k –> x),
(1)
where –> is an operator used for reified propositions to mimic the logical operator for implication, ⊃. For example, trust_p(Ben, Dan, perform(Dan, Good_dentistry_practice), General_dentistry) represents that Ben trusts Dan to perform well in general dentistry. A trust-in-belief relationship, trust_b(d, e, x, k), represents that trustor d trusts trustee e regarding e’s belief x in context k. This trust relationship means that if e believes x in context k, then d also believes x in that context: trust_b(d, e, x, k) ≡ believe(e, k –> x) ⊃ believe(d, k –> x).
(2)
For example, trust_b(Alice, Ben, perform(Dan, Good_dentistry_practice), General_dentistry) represents that Alice trusts Ben about Ben's belief that Dan performs well in general dentistry. Following a definition in which distrust is regarded as the negative form of trust, untrust is a status in which the degree of confidence is insufficient to trust, and mistrust is misplaced trust,8 we define distrust as the opposite of trust. Distrust in performance, distrust_p(d, e, x, k), means that, in a specific context, the trustor believes that the information the trustee created is false or that the trustee's performance isn't successful. Distrust in belief, distrust_b(d, e, x, k), means that, in a specific context, the trustor believes that what the trustee believes is false.
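The two definitions can be read operationally as forward-chaining rules. The following minimal sketch checks Equations 1 and 2 on the dentist example; the data structures and statement names are invented for illustration, and only the rule shapes come from the definitions:

```python
# Toy encoding of Equations 1 and 2 as forward-chaining rules.
believes = {"Ben": set(), "Alice": set()}   # (context, statement): believe(d, k -> x)
made_by = {("Good_dentistry", "Dan", "General_dentistry")}  # madeBy(x, e, k)

def apply_trust_p(d, e, x, k):
    """Eq. 1: madeBy(x, e, k) implies believe(d, k -> x)."""
    if (x, e, k) in made_by:
        believes[d].add((k, x))

def apply_trust_b(d, e, x, k):
    """Eq. 2: believe(e, k -> x) implies believe(d, k -> x)."""
    if (k, x) in believes[e]:
        believes[d].add((k, x))

# Ben trusts Dan's performance; Alice trusts Ben's belief.
apply_trust_p("Ben", "Dan", "Good_dentistry", "General_dentistry")
apply_trust_b("Alice", "Ben", "Good_dentistry", "General_dentistry")
print(("General_dentistry", "Good_dentistry") in believes["Alice"])  # True
```

Applying the rules in this order is exactly the propagation pattern formalized later in Equations 15 and 16: trust in belief is what lets Alice inherit Ben's conclusion.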
Uncertainties in Trust
Earlier, we identified two types of uncertainties: incompleteness is uncertainty due to a trustor lacking sufficient knowledge or information to make a judgment; randomness is the uncertainty coming from a trustee's unpredictable behavior. In considering a proposition, Alice might have three mental states: she believes the proposition, disbelieves it (that is, believes that the proposition is false), or finds it unknown or undecidable (that is, she's unable to judge the proposition as being true or false). Unknown is a mental state caused by incompleteness uncertainty. We can naturally represent an uncertain belief as a probability distribution over those three states. Looking at our earlier definitions, we can regard trust as a conditional belief. So, we can similarly represent uncertainty in trust as a probability distribution over three mental states — believing, disbelieving, and undecidable (corresponding to trust, distrust, and untrust). We represent this probability distribution as a triple of degrees: τ(d, e, x, k) = <α, β, γ>, α + β + γ = 1,
(3)
where d, e, x, and k are the same parameters as in a trust relationship. τ takes a superscript b for trust in belief, p for trust in performance, or no superscript for either, depending on context. This triple quantifies an uncertain trust relationship, and thus we call it the uncertain trust triple. Based on the logical definition of trust and the connections between probability and conditionals9 studied in philosophical logic, for the trust-in-performance relationship, we define the degree of trust α = td^p(d, e, x, k) as the conditional probability that d believes x, given that context k is true and e makes x. We define the degree of distrust β = dtd^p(d, e, x, k) as the conditional probability that d disbelieves x, given that context k is true and e makes x. We define γ = ud^p(d, e, x, k), the degree of untrust, as the conditional probability that d neither believes nor disbelieves x, given that context k is true and e makes x. We represent this formally as

α = td^p(d, e, x, k) = pr(believe(d, x) | madeBy(x, e, k) ∧ true(k)),

(4)

β = dtd^p(d, e, x, k) = pr(believe(d, ~x) | madeBy(x, e, k) ∧ true(k)),

(5)

γ = ud^p(d, e, x, k) = pr(¬believe(d, x) ∧ ¬believe(d, ~x) | madeBy(x, e, k) ∧ true(k)).

(6)

(Note that ~ is an operator for reified propositions to mimic logical negation; because k is a reified proposition, the predicate true(k) transfers a reified proposition to a proposition.7) We define degrees of trust, distrust, and untrust in belief similarly, but replace the predicate madeBy in the condition with the predicate believe. Because both trust and distrust reflect certainty in belief, the sum of the degrees of trust and distrust reflects the degree of certainty; the degree of untrust reflects the degree of uncertainty owing to incomplete knowledge. As new knowledge or information becomes available and this uncertainty is resolved, a fraction of γ goes to α and the remainder to β. So, we can equivalently represent uncertain trust in the form of a probability distribution over trust, distrust, and untrust as probability intervals:

p ∈ [α, α + γ],

(7)

1 – p ∈ [β, β + γ],

(8)

where p is the probability that the expected thing of trust occurs, and 1 – p is its complement. Here, the uncertainty owing to incompleteness appears in the form of an uncertain probability or a probability interval. We can formulate probability measures in terms of frequencies of outcomes over a sample space. We can use this interpretation to quantify trust or distrust as follows. We measure the degree of trust by the frequency of the trustor's positive experiences among all encounters with the trustee. That is,

α = td(d, e, x, k) = n/m,

(9)

where m is the total number of encounters regarding a particular expectancy x, and n is the number of the trustor's positive experiences in that set. Similarly, we have

β = dtd(d, e, x, k) = l/m,

(10)

where l is the number of the trustor's negative experiences. Note that n + l isn't necessarily equal to m, because we might be unable to determine some encounters to be positive or negative.

We can also measure uncertain trust without a specific expectancy x:

td(d, e, k) = n′/m′,

(11)

dtd(d, e, k) = l′/m′,

(12)

where m′ is the number of all encounters (in context k) between the trustor and trustee regarding all instanced expectancies (∀x); n′ is the number of positive experiences in those encounters; and l′ is the number of negative experiences in those encounters. When no information exists about a specific expectancy x, we can use td(d, e, k) as an estimate of td(d, e, x, k), implicitly assuming that the probability in the specific subsample space associated with x is the same as in the whole sample space. A trustor can also evaluate each single encounter as positive (satisfied, successful) to a certain degree, negative (unsatisfied, failed) to a certain degree, and undecidable (hard to say positive or negative) to a certain degree. Thus,

n = Σ_{i=1}^{m} e_p(i),

(13)

l = Σ_{i=1}^{m} e_n(i),

(14)

where e_p(i), e_n(i) ∈ [0, 1] represent the degrees to which encounter i is positive or negative, respectively, and e_p(i) + e_n(i) ≤ 1.
Trust Propagation and Properties
We know from social experience that sometimes if a trusts b and b trusts c, then a might indirectly trust c to a certain degree. We must identify conditions supporting such trust propagation and quantify it. From our earlier logical specification of the formal semantics of trust, we have trust_b(a, b, x, k) ∧ trust_p(b, c, x, k) ⊃ trust_p(a, c, x, k),
(15)
trust_b(a, b, x, k) ∧ trust_b(b, c, x, k) ⊃ trust_b(a, c, x, k).
(16)
That is to say, in an entity's knowledge, for a given context k, if entity a trusts b regarding b's belief in x, and b trusts c regarding c's performance (or belief) in x, then a indirectly trusts c regarding c's performance (or belief) in x. In other words, through trust in belief, trust can propagate along a chain of trust. This is the logical basis for uncertain trust aggregation along a trust path. From the formal semantics of uncertain trust relationships, we can derive formulas for sequence trust aggregation, under the condition that a's belief in x is conditionally independent of the provenance of x given b's belief in x:

τ(a, c, x, k) = τ^b(a, b, x, k) ∙ τ(b, c, x, k) = <td(a, c, x, k), dtd(a, c, x, k), 1 – td(a, c, x, k) – dtd(a, c, x, k)>,

(17)

td(a, c, x, k) = td^b(a, b, x, k) ∙ td(b, c, x, k) + dtd^b(a, b, x, k) ∙ dtd(b, c, x, k),

(18)

dtd(a, c, x, k) = dtd^b(a, b, x, k) ∙ td(b, c, x, k) + td^b(a, b, x, k) ∙ dtd(b, c, x, k).

(19)

This sequence aggregation operator has several interesting properties:

Property 1. As a trust path's length grows, the aggregated trust's certainty degree (the sum of the trust and distrust degrees) decreases exponentially. Intuition suggests that trust decreases in propagation along a trust path. Property 1 gives a formal and precise description: both trust and distrust decrease exponentially, and untrust quickly increases toward the maximum.

Property 2. Sequence aggregation is associative. Via this property, the sequence aggregation's outcome is independent of the order in which we aggregate pairwise trust relationships.

Property 3. A trust relationship with ud = 1 is an absorbing element in trust aggregation. This property says that if a's trust in b or b's trust in c is complete untrust (ud = 1), then the derived trust of a in c is also complete untrust. Consequently, neglecting trust relationships with ud = 1 in a trust network won't change the trust calculation's result. We can use this property to simplify trust calculations.

An algorithm for parallel aggregation and trust calculation is available elsewhere.10
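For illustration, a direct transcription of Equations 17 through 19 makes the three properties easy to check numerically; the triple values below are made up:

```python
def seq_aggregate(t_ab, t_bc):
    """Sequence aggregation (Eqs. 17-19): t_ab is a's trust-in-belief
    triple in b; t_bc is b's trust (in belief or performance) in c."""
    a1, b1, _ = t_ab
    a2, b2, _ = t_bc
    td = a1 * a2 + b1 * b2     # Eq. 18
    dtd = b1 * a2 + a1 * b2    # Eq. 19
    return (td, dtd, 1 - td - dtd)

# Property 1: certainty (td + dtd) decays exponentially along a chain.
t = (0.8, 0.1, 0.1)            # certainty 0.9 per link
chain = t
for _ in range(4):
    chain = seq_aggregate(chain, t)
print(chain[0] + chain[1])     # 0.9**5: untrust dominates as paths lengthen

# Property 2: aggregation is associative.
u, v, w = (0.7, 0.2, 0.1), (0.6, 0.1, 0.3), (0.9, 0.0, 0.1)
lhs = seq_aggregate(seq_aggregate(u, v), w)
rhs = seq_aggregate(u, seq_aggregate(v, w))
print(all(abs(x - y) < 1e-12 for x, y in zip(lhs, rhs)))  # True

# Property 3: a triple with ud = 1 absorbs the whole path.
print(seq_aggregate((0.0, 0.0, 1.0), t))  # (0.0, 0.0, 1.0)
```

The exponential decay is visible directly in the algebra: the certainty of the aggregated triple is the product of the per-link certainties.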
Trust Decision-Making and Risk Analysis
Trust means that a trustor believes that a trustee will behave as expected and that the trustor is willing to take some risk for that belief. We've discussed how to measure and calculate uncertain belief; let's next consider how the trustor might use quantified uncertainty to choose whether to accept the risk that trust implies. Let p be the probability that the trustee behaves as expected. Given a calculated uncertain trust triple <α, β, γ>, from Equations 7 and 8, we have p ∈ [α, α + γ] and 1 – p ∈ [β, β + γ]. We apply utility theory and the decision-tree method to trust decision-making. If the trustor chooses to take the risk, he or she could have a utility of gain (U_G) when the trustee behaves as expected and a utility of loss (U_L) when the trustee does not. If the trustor chooses not to take the risk (untrusting or ignoring), he or she might take other options or do nothing. Let U_N be the expected utility of untrusting. Then, the trustor ought to decide to trust when the expected utility of trusting is greater than the expected utility of untrusting. This requires p > (U_N – U_L)/(U_G – U_L).
(20)
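One conservative way to apply Equation 20 (our choice for this sketch, not prescribed by the model) is to require the interval's lower bound α to clear the utility threshold:

```python
def should_trust(triple, u_gain, u_loss, u_not):
    """Trust decision per Eq. 20, using the conservative lower bound
    p = alpha from the interval p in [alpha, alpha + gamma]."""
    alpha, _beta, _gamma = triple
    return alpha > (u_not - u_loss) / (u_gain - u_loss)

# With U_G = 200, U_L = -600, U_N = 40, the threshold is
# (40 + 600) / (200 + 600) = 0.8.
print(should_trust((0.83, 0.01, 0.16), 200, -600, 40))    # True
print(should_trust((0.57, 0.095, 0.335), 200, -600, 40))  # False
```

Using α rather than α + γ means untrust never counts in the trustee's favor; a less risk-averse trustor could instead test the interval's upper bound or some point in between.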
When multiple independent trust paths exist, and each has some probability of being untrustworthy, a natural problem is estimating the overall probability that all paths are untrustworthy. We considered this problem previously10 and showed that this probability lies within an interval whose endpoints we can calculate from those paths' uncertain trust triples. We showed that as the number of independent trust paths grows, the probability of all paths being untrustworthy decreases exponentially, and the uncertainty that incomplete information causes (that is, the probability interval's width) also decreases exponentially. This result suggests that a few parallel independent trust paths might be good enough for practical use and that discovering all possible trust paths between a trustor and trustee might not be necessary. We next demonstrate how to use trust decision-making and risk analysis to judge when more independent paths are needed and when existing ones suffice.
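As a sketch of that result (the interval endpoints follow our earlier formulation;10 the helper below is our own), the failure probability of independent paths is the product of the per-path failure intervals:

```python
def parallel_interval(triples):
    """Interval for p = Pr(at least one independent path is trustworthy).
    For each path triple <alpha, beta, gamma>, the chance the path fails
    lies in [beta, beta + gamma]; independence multiplies the endpoints."""
    fail_lo = fail_hi = 1.0
    for alpha, beta, gamma in triples:
        fail_lo *= beta           # every path certainly untrustworthy
        fail_hi *= beta + gamma   # = 1 - alpha
    return 1 - fail_hi, 1 - fail_lo

# Two paths with p in [0.57, 0.905] and p in [0.603, 0.943]
# (the values used in Figure 1):
lo, hi = parallel_interval([(0.57, 0.095, 0.335), (0.603, 0.057, 0.34)])
print(round(lo, 3), round(hi, 3))  # 0.829 0.995
```

Each additional independent path multiplies both endpoints of the failure interval by a factor below 1, which is the exponential decrease described above.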
Related Research in Trust Modeling

Trust and reputation are related but different concepts. An entity's reputation is the aggregated opinion of a community about how good that entity is. People might give opinions based on just a single encounter or might not trust that entity at all. Trust occurs between two entities: a trustor believes that the trustee's expected behavior will occur and is willing to take a risk for that belief. Researchers have developed many reputation-based trust models. The rationale for reputation-based trust computing is that an entity with a high reputation in a domain usually is trustworthy in that domain, so we frequently use a reputation metric as a substitute for a trust metric.

Two major streams of trust formalization exist: the logical approach and the quantitative approach. The logical stream mainly focuses on trust's semantic structure and its logical conditions and effects.1–3 The quantitative stream mainly focuses on the uncertainty of trust, trust quantification, trust dynamics, and models and algorithms for trust computing.4–6 Most quantitative models are heuristic, and the semantics of trust aren't formally defined. In the main text, we present an approach to building a quantitative trust model from a logical model and probability theory, thus establishing the trust model on the ground of a well-defined semantics.

In trust modeling, a basic issue is quantifying trust. Most quantitative trust models define the trust degree as a real value in the interval [0, 1]. Determining how to interpret the meaning of 0, which could be untrust or distrust, is problematic. Some researchers define 1 as full trust, 0 as full distrust, and 0.5 as full ignorance (untrust). Despite discerning distrust and untrust, this school of notation doesn't distinguish between uncertainty due to the randomness of a trustee's behavior and uncertainty due to the trustor's incomplete knowledge. Some researchers identified the existence of uncertainty in addition to a probability distribution between belief and disbelief, and represented uncertain belief as an opinion triangle;7 however, they didn't go further to study the source of that uncertainty, and their model doesn't have a formal semantics of trust. For this reason, their trust formulation fails to discern distrust and untrust in calculation. In the main text, we develop a trust calculus from the formal semantics of trust, represent both uncertainty from randomness and uncertainty from incompleteness, and reveal the connection between them. Most trust models focus on methods and algorithms for calculating trust metrics, and little research models the third element of trust — willingness to take risk. By contrast, we present a utility-based method for trust decision-making.

References
1. M. Burrows, M. Abadi, and R. Needham, "A Logic of Authentication," ACM Trans. Computer Systems, vol. 8, no. 1, 1990, pp. 18–36.
2. C.-J. Liau, "Belief, Information Acquisition, and Trust in Multi-Agent Systems — A Modal Logic Formulation," Artificial Intelligence, vol. 149, no. 1, 2003, pp. 31–60.
3. J. Huang and M.S. Fox, "An Ontology of Trust — Formal Semantics and Transitivity," Proc. 8th Int'l Conf. Electronic Commerce (ICEC 06), ACM Press, 2006; http://portal.acm.org/citation.cfm?id=1151454.1151499.
4. S.D. Kamvar, M.T. Schlosser, and H. Garcia-Molina, "The EigenTrust Algorithm for Reputation Management in P2P Networks," Proc. World Wide Web Conf. (WWW 03), ACM Press, 2003, pp. 640–651; http://portal.acm.org/citation.cfm?id=775152.775242.
5. C.-N. Ziegler and G. Lausen, "Propagation Models for Trust and Distrust in Social Networks," Information Systems Frontiers, vol. 7, nos. 4–5, 2005, pp. 337–358.
6. D. Artz and Y. Gil, "A Survey of Trust in Computer Science and the Semantic Web," J. Web Semantics, vol. 5, no. 2, 2007, pp. 58–71.
7. A. Josang, E. Gray, and M. Kinateder, "Simplification and Analysis of Transitive Trust Networks," Web Intelligence and Agent Systems J., vol. 4, no. 2, 2006, pp. 139–161.

Application Study

We applied our formal trust model to PGP to develop decentralized public-key certification and verification. As a real Internet application, PGP demonstrates interesting decentralized computing features by first introducing the web-of-trust model,11 in which users act as certification authorities to certify or sign each other's keys. A user Alice believes a certificate if she trusts one or more certificate signers (called introducers). However, some unsolved issues exist.

First, in PGP, Alice expresses trust relationships in her public keyring (keyring), a file containing her trusted introducers' valid public keys and the level of trust she places in each introducer. Alice believes that a key is valid only if its certificate is signed by an introducer in her keyring. This means that PGP's trust is limited to direct trust. To realize the web-of-trust idea, PGP encourages people to sign the key certificates they've validated. This proves to be impractical, however, because most people lack the motivation or incentive to do so, particularly if the signers are unknown to them. A mechanism to enable indirect trust is needed.

The second issue is that, absent a precise definition of trust, some people misunderstand — they assume that signing a certificate expresses trust and that trust is transitive, thus interpreting a signature chain as a trust path (see www.rubin.ch/pgp/weboftrust.en.html). In fact, Alice's trust in an introducer is a trust-in-performance relationship, and signing a certificate only creates an assertion binding a key to an identity. A precisely defined semantics will help both users and developers to use or model trust properly.

Next, PGP defines three levels of trust in an introducer: complete trust, marginal trust, and no trust (untrusted). Lacking a precise definition and measurement, each user assigns this qualitative value subjectively in a web of trust. One challenge is to quantify trust based on PGP users' performance in specified contexts, grounded in a common shared meaning.

Finally, as its default, "PGP requires one completely trusted signature or two marginally trusted signatures to establish a key as valid";11 users can configure their own requirements. However, configuration is based on experience and intuition. The challenge here is finding a general guide for identifying when more signatures are needed.

Our formal trust model addresses these issues. First, introducing trust-in-belief relationships in PGP will enable indirect trust calculations. Alice would need to distinguish between trust-in-performance and trust-in-belief relationships in her keyring. For instance, if Alice trusts Bob (an introducer) regarding Bob's performance in validating the public keys he signs, she adds Bob's key and a trust-in-performance relationship to Bob in her keyring. If she further trusts Bob regarding Bob's belief in other introducers' performance in validating key certifications, she adds a trust-in-belief relationship to Bob in her keyring. Alice can share her keyring with others in a protected manner via a Web service (through PGP servers), which answers specific requests from trusted friends and provides a service to find a trust path through her keyring and compute uncertain trust in a requested target. (Technical details about how this works are beyond this article's scope.) In this way, an entity who needs to validate a public key can derive indirect trust, which might relax the burden on users to have many other users sign their keys.

Precisely defined semantics of trust also eliminate the confusion between a signature chain and a trust path. A signature chain might reflect a trust path, but only if each signer has a trust-in-belief relationship with the key holder specified in the signer's keyring; the last signer can have a trust-in-performance relationship with the key holder.

Regarding the second and third issues, using our formal trust model, we can explicitly and precisely define the trust relationships and quantitatively measure them. We use uncertain trust triples, rather than three levels, to represent uncertain trust; we use key usage to define the context of trust. In this way, we measure trust based on a proper set of historic data about the trustee's behavior. For example,

τ^p(Alice, Bob, cert(key, id), K_V_email),
(21)
τ^b(Alice, Bob, cert(key, id), K_V_email),
(22)
where the term cert(key, id) represents a key certificate binding a key (key) to a user's identity (id); K_V_email represents the context of validating keys for secure emailing. When we calculate the first triple, the sample space is the set of all events in which Alice validates the keys signed by Bob for secure-email purposes. A positive interaction is a key Bob signed that is confirmed to be valid; a negative interaction derives from a key Bob signed that proves to be problematic; keys Bob signed for which neither case applies contribute to the uncertainty measure. All users define and measure their trust relationships independently but based on commonly shared trust-measurement semantics.

Regarding the final issue, quantified trust and the paradigm of trust decision-making and risk analysis can help a user judge, in an application context, when more independent trust paths are needed, as Figure 1 illustrates (steps 9 through 15). The figure provides an example of how to use our formal trust model for trust calculation in PGP, risk analysis, and decision-making. Our study shows that using a formal trust model in PGP provides

• a precisely defined semantics of trust and trust degrees;
• a relatively objective trust quantification schema based on common shared semantics;
• a method to guide when more independent trust paths are needed; and
• a trust mechanism enhanced with trust in belief, for realizing a web of trust.

We've used the formal approach in other application studies, such as quantifying trust in public-key infrastructures10 and trust reasoning in a network of Web services for e-business.12
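For instance, the triple in Equation 21 could be measured from Alice's history with Bob-signed keys exactly as the sample space above prescribes; the outcome counts here are invented for illustration:

```python
# Hypothetical record of Bob-signed keys that Alice validated for
# secure email: each later proved valid, proved problematic, or was
# never confirmed either way.
outcomes = ["valid"] * 7 + ["problem"] + ["unknown"] * 2

m = len(outcomes)
alpha = outcomes.count("valid") / m     # positive interactions
beta = outcomes.count("problem") / m    # negative interactions
gamma = outcomes.count("unknown") / m   # feeds the uncertainty measure
print(alpha, beta, gamma)  # 0.7 0.1 0.2
```

Because every user classifies outcomes against the same context definition (K_V_email), the resulting triples are comparable across the web of trust even though each user measures them independently.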
Figure 1. Trust calculation in PGP (Pretty Good Privacy). Alice uses the formal trust model to calculate trust in PGP signature chains and make risk analysis and trust decisions when trading on the Internet. [Figure: Alice must decide whether to believe Eric's key certificate, which is signed by Dan and Deb, whom she doesn't know, so she asks her trusted friends. Through the path Ben–Cathy–Dan she derives p ∈ [0.57, 0.905]; through the path Emma–Deb she derives p ∈ [0.603, 0.943]. Because the two paths are independent, (1 – 0.905)(1 – 0.943) ≤ 1 – p ≤ (1 – 0.57)(1 – 0.603); that is, p ∈ [0.829, 0.995]. With utility of gain U_G = 200, utility of loss U_L = –600, and another option's utility U_N = 40, trusting requires p > (U_N – U_L)/(U_G – U_L) = 0.8, so Alice can make a decision to trust.]
Building trust models based on a well-defined semantics of trust is important so that we can define the meaning of trust and trust metrics explicitly and accurately, and avoid misinterpretation, misuse, or inconsistent use of trust in Internet-based distributed computing. It also lets us model trust via a scientific approach rather than based on intuition or life experience. Future work can go further in several directions. First, we can extend our basic framework for a formal-semantics-based calculus of trust to consider features such as the different values of different interactions between trustor and trustee, and the different effects (on the trustor) of interactions along a time axis. Another interesting direction would be to use quantified trust as heuristic information to construct trust paths in very large networks — for example, in public-key certification path building, trustworthy Web-service discovery, trustworthy P2P networking, and Internet routing.
Acknowledgments
This work is partly supported by the US Department of Homeland Security through grant number 2006-CS-001000001 under the auspices of the Institute for Information Infrastructure Protection (I3P) research program.
References
1. Z. Malik and A. Bouguettaya, "Reputation Bootstrapping for Trust Establishment among Web Services," IEEE Internet Computing, vol. 13, no. 1, 2009, pp. 40–47.
2. S.D. Kamvar, M.T. Schlosser, and H. Garcia-Molina, "The EigenTrust Algorithm for Reputation Management in P2P Networks," Proc. World Wide Web Conf. (WWW 03), ACM Press, 2003, pp. 640–651; http://portal.acm.org/citation.cfm?id=775152.775242.
3. T. Eymann, S. Konig, and R. Matros, "A Framework for Trust and Reputation in Grid Environments," J. Grid Computing, vol. 6, no. 3, 2008, pp. 225–237.
4. C.-N. Ziegler and G. Lausen, "Propagation Models for Trust and Distrust in Social Networks," Information Systems Frontiers, vol. 7, nos. 4–5, 2005, pp. 337–358.
5. D. Artz and Y. Gil, "A Survey of Trust in Computer Science and the Semantic Web," J. Web Semantics, vol. 5, no. 2, 2007, pp. 58–71.
6. R. Mayer, J. Davis, and F. Schoorman, "An Integrative Model of Organizational Trust: Past, Present, and Future," Academy of Management Rev., vol. 20, no. 3, 1995, pp. 709–734.
7. Y. Shoham, Reasoning About Change, MIT Press, 1988.
8. S.P. Marsh and M.R. Dibben, "Trust, Untrust, Distrust and Mistrust — An Exploration of the Dark(er) Side," Proc. 3rd Int'l Conf. Trust Management (iTrust 05), LNCS 3477, Springer, 2005, pp. 17–33.
9. A. Hajek, "Probability, Logic, and Probability Logic," Philosophical Logic, L. Goble, ed., Blackwell Publishing, 2001, pp. 362–384.
10. J. Huang and D. Nicol, "A Calculus of Trust and Its Application to PKI and Identity Management," Proc. 8th Symp. Identity and Trust on the Internet (IDTrust 09), ACM Press, 2009; http://portal.acm.org/citation.cfm?id=1527017.1527021.
11. P. Zimmermann, The Official PGP User's Guide, MIT Press, 1995.
12. J. Huang and M.S. Fox, "An Ontology of Trust — Formal Semantics and Transitivity," Proc. 8th Int'l Conf. Electronic Commerce (ICEC 06), ACM Press, 2006; http://portal.acm.org/citation.cfm?id=1151454.1151499.

Jingwei Huang is a research scientist in the Information Trust Institute at the University of Illinois at Urbana-Champaign. His main research interests are in trust modeling, knowledge provenance, and distributed information/knowledge management. Huang has a PhD in industrial engineering from the University of Toronto. He's a member of IEEE and the ACM. Contact him at [email protected].

David M. Nicol is a professor of computer and electrical engineering at the University of Illinois at Urbana-Champaign. His research interests include high-performance computing, simulation modeling and analysis, and security. Nicol has a PhD in computer science from the University of Virginia. He's the coauthor of Discrete-Event Systems Simulation (5th ed., Prentice-Hall, 2009) and a fellow of IEEE and the ACM. Contact him at [email protected].

Selected CS articles and columns are also available for free at http://ComputingNow.computer.org.