High Performance Networks Studies
Study on computational trust and reputation models

Hugo Barbosa
University Lusophone of Porto, Games Interaction and Learning Technologies
Faculty of Engineering, University of Porto
[email protected]

Telmo da Silva Morais
Student of the Doctoral Program in Informatics Engineering
Faculty of Engineering, University of Porto, Porto, Portugal
[email protected]
Abstract: Computational mechanisms for trust and reputation arise ever more frequently in current technologies for electronic communication, enabling increased performance and reliability. Computing systems have evolved through the paradigms of computer networks and distributed computing. In this context agent systems emerged as an artificial analogy that mimics human behavior, and agents depend on reputation and trust mechanisms to evaluate the behavior of potential partners. This paper shows the growing interest in this technology, presenting a current view of the field as well as different approaches; in fact, reputation and trust mechanisms have already been considered a key element in the design of multi-agent systems.

Keywords: Computational trust and reputation models, Multi-agent systems, Cognitive trust and reputation.
1 Introduction
The concept of trust is prevalent in our society. From medical treatment to business trade, we must establish a certain level of trust before acting. In particular, with the development of the Internet, people increasingly interact with others in the virtual world, and the need for trust is thus widespread in various areas of the digital world [1]. Trust has been studied by sociologists of human societies, such as Luhmann (1979), who wrote: "Trust and trustworthiness are necessary in our everyday life. It is part of the glue that holds our society together" [2]. Luhmann's view of human societies can also be applied to e-commerce sites like eBay or Amazon, where buyer and seller are allowed to rate each other positively or negatively on any given transaction; with these awarded points it is then possible to grade the trustworthiness of sellers and buyers. In doing so, reputation arises as part of the trust system, and humans rely on this reputation to choose and decide with whom they should do business. In multi-agent systems (MAS) these concepts are also increasingly essential, as these systems are composed of autonomous agents that need to interact, bargain and inevitably choose sellers and buyers alike in order to fulfill their objectives. Therefore the concepts written by Luhmann about human
societies also apply to these MAS, especially open MAS, which are characterized by the existence of agents whose intentions are unknown. Autonomy means not only that none of the actors has control over other actors, but also that in general each actor is independently motivated and has independently designed information systems that reflect its own motivations [3]. This makes the existence of trust and reputation systems fundamental, so that good agents can avoid the bad ones. There are usually three approaches to these problems [4].

In the security approach, basic structural properties are guaranteed, such as authenticity and integrity of messages, privacy, agents' identities, etc. This usually means the use of cryptography, digital signatures, electronic certificates and so on. It does not tell you anything about the quality of the information or how reliable that information is, but it does assure the identity of the involved parties as well as message integrity [4].

In an institutional approach, a central authority observes and controls or even enforces agents' actions and may also punish them in case of non-desirable behavior. It undeniably ensures high control over the interactions, but it requires some kind of central hub; moreover, the control is bound to structural aspects of the interactions: allowed, forbidden and obligatory actions can be checked and controlled. However, the quality of the interactions is left aside, because a good or bad interaction has a subjective connotation that depends on the goals of each individual agent [4].

In a social approach, reputation and trust mechanisms are placed at this level. Here the agents themselves are capable of punishing non-desirable behaviors, for instance by excluding certain partners. To achieve such distributed control, agents model other agents' behaviors; following the example of human societies, trust and reputation mechanisms become a good solution. This requires, however, the development of computational models of trust and reputation, which must cover not only the generation of social evaluations in all their dimensions, but also knowledge of how agents use reputation information to select partners, how agents communicate and spread reputation, and how agents handle communicated reputation information [4].

Although these are all different approaches to the same problems, it is important to notice that they are not mutually exclusive and can all be implemented in the same open MAS if required; after all, if we look at human societies, trust and reputation based decisions also rest, to some extent, on those same principles. Scientific research in the field has already considered trust and reputation mechanisms a key element in the design of MAS [4]. In this study we intend to investigate some of these models of trust and reputation mechanisms, compare how they match the above approaches, and assess how well they fulfill their purpose.
2 Trust and reputation models
Typically, trust refers to the relationship between two entities, where one entity is willing to rely on the actions performed by another entity [1]. In the domain of computer science, human trust is understood to refer to the relationships among various virtual entities [1]. Computational trust applies the human notion of trust to the digital world. Specifically, computational trust is a term that describes representations of trust used for trust inference, and these representations can be
based on diverse trust definitions. In the past years computational trust has been thoroughly studied in various computer science fields, and a large volume of computational trust models has been proposed for different application scenarios, focusing on different aspects of trust modeling [1]. Trust therefore has many possible definitions. For us, trust is the ability to accept what is offered without holding back or being suspicious about it; it is a special kind of faith, and this faith may give us an edge if well placed, because it saves the time and trouble that may arise from an unknown origin. Another aspect that frequently affects the process of gaining trust is reputation. Reputation is somewhat different: reputation information is only as good as the level of trust we place in its source, otherwise this information is misinformation at best; but if we believe the reputation information, it saves the time and money we would otherwise spend to verify that information by ourselves. In the end, reputation in a trust system can be helpful if correctly implemented. In our human relationships we also sometimes generalize, not always correctly, projecting trust onto other humans based on their duty, role or beliefs. Even if not entirely fair, it may make sense in some scenarios to generate stereotypes about whom to trust, even without any direct or indirect information about the individual we are trusting; for instance, we usually trust the police to protect our goods, independently of the man or woman under the uniform. All of these aspects are only valid for a period of time, and must be re-evaluated with each interaction of the relationship, taking ageing into account.
3 Computational trust and reputation models: Status
Existing online applications provide a large-scale, open and heterogeneous interaction environment where diverse types of information can be collected from different sources. Frequently, the method to estimate trust is to rely on the past behavior of the specific agent in question. The basic idea of such a method is to let the agents involved in the interactions assess each other. Intuitively, direct experience is the most reliable and personalized information for trust assessment. However, in large-scale, open systems such as online social networks, direct experience is often insufficient or even nonexistent. In this case, another way to judge the target agent's past behavior is to rely on indirect experience, i.e. reputation, which is the set of opinions obtained from other agents who have interacted with the target agent. Although it may not be as accurate or reliable as direct experience, indirect experience is much more pervasive, thus greatly complementing direct experience based trust approaches [5]. However, in an open, complex system it is nontrivial to understand and aggregate such information due to its uncertainty (e.g., the information reporter may provide false information, or, although the provided information is correct, it may not be suitable for the information requester due to its personalized view of the system). To address this issue many approaches have been proposed, and a comparative study of social network based and probabilistic estimation based approaches was conducted by Despotovic and Aberer [6]. Some other indirect experience based trust and reputation models rely on game theoretical approaches to predict the behavior of the interaction partner, but in a large-scale social network system the reliability of game theoretical models decreases due to the high complexity of relations and interactions among agents.
Direct experience and indirect experience are commonly used to derive trust. However, when such information is not available, one may rely on other kinds of information by simulating human perceptions, such as stereotypes.
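This fallback hierarchy, from direct experience to reputation reports and finally to stereotype-based priors, can be illustrated with a minimal sketch. The function below is purely illustrative: the names, the success-rate estimator and the weighted averaging of reports are our assumptions, not a model defined in the surveyed literature.

```python
# Illustrative sketch (not from the surveyed models): estimate trust by falling
# back from direct experience to reputation reports, then to a stereotype prior.
def estimate_trust(direct_outcomes, reputation_reports, stereotype_prior=0.5):
    """Return a trust score in [0, 1] for a target agent.

    direct_outcomes    -- list of 0/1 outcomes of the trustor's own transactions
    reputation_reports -- list of (score, reporter_weight) pairs from third parties
    stereotype_prior   -- fallback value derived from group-level stereotypes
    """
    if direct_outcomes:                                   # most reliable: own experience
        return sum(direct_outcomes) / len(direct_outcomes)
    if reputation_reports:                                # indirect experience, weighted
        total_w = sum(w for _, w in reputation_reports)   # by trust in each reporter
        return sum(s * w for s, w in reputation_reports) / total_w
    return stereotype_prior                               # no evidence: stereotype bootstrap
```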
3.1 StereoTrust

While using stereotypes for user modeling and decision making had been suggested previously, StereoTrust is probably the first concrete, formal computational trust model that uses stereotypes. In this work, the trustor forms stereotypes by aggregating information from the context of the transaction or from the profiles of its interaction partners. Example stereotypes are "programmers working for Google are more skilled than the average" or "people living in good neighborhoods are richer than the others". In order to build stereotypes, the trustor has to group relevant agents ("police" or "people living in Europe"). Stereotypes for each group are calculated by aggregating the trustor's past experience with members of that group [7]. When facing a new agent, the trustor uses its stereotypes for the groups to which the new agent belongs, and the weight for each stereotype is the fraction of the trustor's transactions with members of the corresponding group. Example environments where StereoTrust may be applied include identifying unknown malicious sellers in online auction sites and selecting reliable peers to store replicated data in Peer-to-Peer storage systems, to name a few [8].
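As a rough illustration of the aggregation just described, the following sketch computes a StereoTrust-style score for an unknown agent from group stereotypes weighted by the trustor's transaction fractions. The simple success-rate stereotype and the data layout are assumptions on our part; the original model [7] defines these quantities more generally.

```python
from collections import defaultdict

def stereotrust_score(history, target_groups):
    """history: list of (groups, outcome) pairs, where groups is a set of group
    labels and outcome is 1 (satisfactory) or 0 (unsatisfactory);
    target_groups: set of groups the new, unknown agent belongs to."""
    per_group = defaultdict(lambda: [0, 0])          # group -> [successes, total]
    for groups, outcome in history:
        for g in groups:
            per_group[g][0] += outcome
            per_group[g][1] += 1

    relevant = [g for g in target_groups if per_group[g][1] > 0]
    if not relevant:
        return None                                   # no applicable stereotype
    total = sum(per_group[g][1] for g in relevant)
    # Each stereotype (success rate within a group) is weighted by the fraction
    # of the trustor's transactions involving members of that group.
    return sum((per_group[g][1] / total) * (per_group[g][0] / per_group[g][1])
               for g in relevant)

history = [({"auction_seller", "europe"}, 1), ({"auction_seller"}, 0), ({"europe"}, 1)]
print(stereotrust_score(history, {"auction_seller", "europe"}))
```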
3.2 MetaTrust

The MetaTrust model is capable of harnessing meta-information that is generally not considered in older trust models and that may be available locally. Given its use of a different kind of information, this model is meant to complement traditional models. MetaTrust relies on discriminant analysis (DA) to exploit the agent's local knowledge [9]. DA is a well known family of methods for dimensionality reduction and classification. DA methods take as input a set of events belonging to k (≥ 2) different classes and characterized by various features, and find a combination of the features (a classifier) that separates these k classes of events. In MetaTrust, a user's past transactions are described by a set of meta-information and classified according to their outcome: successful or unsuccessful (without loss of generality, linear DA over two classes is considered). Each transaction's information is stored locally by the user. The user then performs a linear DA on this data to obtain a linear classifier that allows him to estimate whether a potential transaction is likely to be successful or not.
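The following sketch illustrates the MetaTrust idea of learning a linear discriminant over locally stored transaction meta-information. The feature choices and the use of scikit-learn's LinearDiscriminantAnalysis are our assumptions for illustration; they are not prescribed by the model [9].

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Each row holds meta-information of one past transaction (hypothetical features:
# price, partner account age in days); labels are the observed outcomes.
X = np.array([[10.0, 400], [250.0, 30], [15.0, 800],
              [300.0, 10], [12.0, 365], [280.0, 20]])
y = np.array([1, 0, 1, 0, 1, 0])      # 1 = successful, 0 = unsuccessful

clf = LinearDiscriminantAnalysis().fit(X, y)   # linear DA over two classes

candidate = np.array([[20.0, 600]])   # meta-information of a potential transaction
print(clf.predict(candidate))         # predicted outcome class
print(clf.predict_proba(candidate))   # confidence in each outcome
```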
3.3 Burnett

This author looked at a different aspect of the problem, namely identifying useful features to construct stereotypes. Three feature sources are discussed. From the social network: relationships between agents can be viewed as features, for instance the relationship "agent A is a friend of agent B". From agents' competence over time: the target agent's accumulated experience in certain tasks can be viewed as features [10]; an example stereotype may be that if an agent performed task T over 100 times, he is considered experienced (trustworthy). From interactions, e.g., features of both interacting parties: for instance, the
trustor with certain features is positively or negatively biased towards a target agent with other features. This work provided a comprehensive summary of feature sources (for stereotype formation) from social relationships among agents, but the authors did not apply these features to any concrete application scenario for validation.
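A possible, purely hypothetical encoding of these three feature sources into a stereotype feature vector is sketched below; Burnett et al. [10] do not prescribe a concrete encoding, so the field names and thresholds are illustrative only.

```python
def stereotype_features(target, trustor):
    """Build an illustrative feature vector for stereotype learning."""
    return {
        # social-network features: relationships the target holds
        "friend_of_trusted_agent": int(any(f in trustor["trusted"] for f in target["friends"])),
        # competence-over-time features: experience accumulated on a task
        "experienced_in_task_T": int(target["completions_of_T"] > 100),
        # interaction features: properties of both parties in the pair
        "same_organisation": int(target["org"] == trustor["org"]),
    }

trustor = {"trusted": {"agentB"}, "org": "uni"}
target = {"friends": {"agentB", "agentC"}, "completions_of_T": 120, "org": "lab"}
print(stereotype_features(target, trustor))
# {'friend_of_trusted_agent': 1, 'experienced_in_task_T': 1, 'same_organisation': 0}
```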
4 New bootstrap-based classification
Often, models offer a nice way to represent and deal with trust and reputation, but there are no explanations of how they bootstrap. This is quite common in cognitive models, which focus on the internal components of trust and reputation, but not on how such components are built. Moreover, some non-cognitive models do not give explicit details on the calculus of their evaluations [11]. On the other hand, some models do take into account how they should bootstrap and recommend a method for it. Our research leads us to believe that the classification and comparison of trust and reputation models benefits from discriminating the bootstrap mechanism whenever the model under evaluation describes one, because it defines how the system adapts to new agents that enter it. Trust is a vital concept in open and dynamic multi-agent systems, where diverse agents continually join, interact and leave. In such environments, some agents will inevitably be more trustworthy than others, displaying varying degrees of competence and self-interest in different interactions. When faced with the problem of choosing a partner with whom to interact, agents must evaluate the candidates and determine which one is the most trustworthy with respect to a given interaction and context [12]. For that purpose we updated the classification dimensions, named Yet Another Classification (YAC), defined by Isaac Pinyol and Jordi Sabater-Mir in "Computational trust and reputation models for open multi-agent systems: a review", published in 2013 [11].
Model                  Trust  Cognitive  Procedural  Generality  Bootstrap
Abdul-Rahman et al.      -        -          ~           -           -
AFRAS                    -        -          √           √           -
Castelfranchi et al.     √        √          -           √           -
Esfandiari et al.        -        -          √           √           √
FIRE                     ~        -          √           √           -
ForTrust                 √        √          -           √           -
Marsh                    √        -          ~           √           -
Mui et al.               √        -          ~           √           -
LIAR                     √        -          √           -           -
Regret                   ~        -          √           √           √
Regan & Cohen            √        -          √           -           -
Repage                   -        ~          √           √           -
Ripperger                √        -          √           -           -
Schillo et al.           -        -          √           √           -
Sen & Sajja              √        -          √           -           -
Yu & Singh               √        -          √           -           -
Sierra & Debenham        √        -          √           √           -
BDI + Repage             √        √          √           √           -
StereoTrust              √        √          √           √           √
MetaTrust                √        √          √           √           √
Burnett                  √        √          √           √           √

Table 1 - New computational models classification
4.1 Trust dimension

We chose to keep the authors' definition of the trust dimension, to stay aligned with their choice, as they believe that the "distinction between both kinds of models does not rely on a clear consensus in the community. Trust can be seen as a process of practical reasoning that leads to the decision to interact with somebody. Regarding this aspect, some models provide evaluations, rates, scores etc. for each agent to help the decision maker with a final decision. Instead, others specify how the actual decision should be made. From our point of view, only the latter cases can be considered trust models" [11]. They also recall that in this case the decisions are pragmatic-strategic, in the sense described by Conte &
Paolucci [13]. Our Table 1 summarizes the models that, according to their definition, should be considered trust models. We mark them with '√'. Models marked with '−' are those not considered trust models; our purpose here is to update the authors' table with added information about bootstrap techniques. Finally, we use '∼' to indicate that the model does not give an explicit decision mechanism but is rather dependent on the current desires of the agent, in accordance with the constraints defined by Isaac Pinyol and Jordi Sabater-Mir.
4.2 Cognitive dimension

Isaac Pinyol and Jordi Sabater-Mir define [11] this dimension as distinguishing models that have clear representations of trust, reputation, image etc. in terms of cognitive elements such as beliefs, goals, desires, intentions, etc. From their perspective, models that achieve such a representation explicitly describe the epistemic and motivational attitudes that are necessary for the agents to have trust or to hold social evaluations; for these models, the final values of trust and reputation are as important as the structure that supports them. Such models are usually very clear at the conceptual level, but lacking in computational aspects. Table 1 shows the summary of the reviewed models against this dimension. We marked with '√' the ones with this property, and with '−' the lack of it. We kept the marking of the Repage model with '∼' because, as stated by Isaac Pinyol and Jordi Sabater-Mir, its internal structure is based on predicates that have associated cognitive notions, but it does not have an explicit representation of them. In fact, Repage relies on first-order-like predicates, also mixing epistemic and motivational attitudes. The BDI + Repage model [14] makes these missing cognitive components explicit [11].
4.3 Procedural dimension

According to Isaac Pinyol and Jordi Sabater-Mir [11], this dimension focuses on epistemic decisions, not on the creation and combination of motivational (goal-based) attitudes. The model introduced by Castelfranchi and Falcone [15] regarding social trust does not give details on how the beliefs are created. The ForTrust model [16] redefines the notion of social trust and introduces cognitive reputation, but the epistemic decisions still remain unclear. On the contrary, models like AFRAS and Regret [17] describe in full detail how evaluations are created and how they are aggregated. We point out that the models by Marsh and by Abdul-Rahman & Hailes [18] are marked with '∼' to indicate that in general they provide all the calculations but leave some initial values open. For instance, the former model does not indicate how direct interactions are evaluated; the author indicates that this is left open and dependent on the context (and we fully agree with this). The same happens with the latter model [11].
4.4 Generality dimension

This dimension defines the generality of the model: models that are general purpose, marked '√', versus the ones that specialize in specific scenarios, marked '−'. One of the examples given by Isaac Pinyol and Jordi Sabater-Mir [11] is the model by Abdul-Rahman & Hailes [18], a non-general model that specializes in trust in the information provided by witness agents. According to Isaac Pinyol and Jordi Sabater-Mir, models with such a specialization achieve good results and very acceptable computational speeds. On the other
hand, general-purpose models can be adapted to multiple scenarios and are therefore well suited to general agent architectures. Regret and the BDI + Repage model are good examples of such models [11]. Table 1 summarizes this property for the surveyed models.
4.5 Bootstrap technique

This is the last characteristic that we analyze. Here we intend to distinguish how the models indicate that the bootstrap mechanism should be defined or implemented. Despite the remark made by Isaac Pinyol and Jordi Sabater-Mir [11] that "often, models offer a nice way to represent and deal with trust and reputation, but there is no explanations on how they bootstrap. This is quite common in cognitive models, which focus on the internal components of trust and reputation, but not how such components are built. However, some non-cognitive models do not give explicit details on the calculus of their evaluations" [11], we believe it is worth mentioning how these models bootstrap themselves. We mark '−' when the model does not define a method, and '√' when the model uses stereotypes as a bootstrap technique.
5 Conclusion
We have presented an improvement to the classification and comparison of trust mechanisms by exposing when the methods allow trust evaluations to be bootstrapped by a priori assumptions based on stereotypes. After the analysis performed in this paper, several things can be concluded. The first is that research on trust and reputation models has not decreased: the number of different models in the literature keeps increasing. The proliferation of cognitive models in the last few years is remarkable, and we also noted that these newcomers care more about how to bootstrap than the models that came before them. We also believe that, at the current pace of research in the area, a more detailed survey of the newer models is worth doing in the near future. By forming stereotypes, agents are given a human-like capability to generalize their interactions into types that are more resilient to the degree of agent change within the community; with these generalizations they attempt to grasp the trustworthiness of types of agents, which in turn allows them to infer that trust for unknown agents of the same type.
References

1. L. Xin, A. Datta, E. Lim, Computational Trust Models and Machine Learning, 2014.
2. N. Luhmann, Trust and Power. Wiley, Chichester, 1979.
3. E. Paja, A. K. Chopra, P. Giorgini, Trust-based specification of sociotechnical systems. Data and Knowledge Engineering, 87, 339–353, doi:10.1016/j.datak.2012.12.05.
4. I. Pinyol, J. Sabater-Mir, Computational trust and reputation models for open multi-agent systems: A review. Artificial Intelligence Review, 40(1), 1–25, doi:10.1007/s10462-011-9277-z, 2013.
5. K. Aberer, Z. Despotovic, Managing trust in a peer-2-peer information system. In Proceedings of the Tenth International Conference on Information and Knowledge Management, CIKM '01, pages 310–317, Atlanta, Georgia, 2001.
6. S. Buchegger, J.-Y. Le Boudec, The effect of rumor spreading in reputation systems for mobile ad-hoc networks. In Proceedings of WiOpt 03: Modeling and Optimization in Mobile, Ad Hoc and Wireless Networks, France, 2003.
7. X. Liu, A. Datta, K. Rzadca, E. P. Lim, StereoTrust: A group based personalized trust model. In Proceedings of the 18th ACM Conference on Information and Knowledge Management, pages 7–16, Hong Kong, China, 2009.
8. X. Liu, A. Datta, Contextual trust aided enhancement of data availability in peer-to-peer backup storage systems. Journal of Network and Systems Management, 20:200–225, 2012.
9. G. J. McLachlan, Discriminant Analysis and Statistical Pattern Recognition. Wiley-Interscience, August 2004.
10. C. Burnett, T. J. Norman, K. Sycara, Sources of stereotypical trust in multiagent systems. In Proceedings of the Fourteenth International Workshop on Trust in Agent Societies, pages 25–39, Taipei, Taiwan, 2011.
11. I. Pinyol, J. Sabater-Mir, Computational trust and reputation models for open multi-agent systems: a review. Published online 7 July 2011, published 2013.
12. C. Burnett, T. J. Norman, K. Sycara, Bootstrapping Trust Evaluations through Stereotypes, 2010.
13. R. Conte, M. Paolucci, Reputation in Artificial Societies: Social Beliefs for Social Order. Kluwer, Dordrecht, 2002.
14. I. Pinyol, J. Sabater-Mir, P. Dellunde, M. Paolucci, Reputation-based decisions for logic-based cognitive agents. Autonomous Agents and Multi-Agent Systems, pp. 1–42, 2010.
15. C. Castelfranchi, R. Falcone, Social trust. In Proceedings of the First Workshop on Deception, Fraud and Trust in Agent Societies, Minneapolis, USA, pp. 35–49, 1998.
16. A. Herzig, E. Lorini, J. F. Hubner, J. Ben-Naim, C. Castelfranchi, R. Demolombe, D. Longin, L. Vercouter, Prolegomena for a logic of trust and reputation. In NORMAS'08, pp. 143–157, 2008.
17. J. Sabater, C. Sierra, Regret: A reputation model for gregarious societies. In Proceedings of the Fourth Workshop on Deception, Fraud and Trust in Agent Societies, Montreal, pp. 61–69, 2001.
18. A. Abdul-Rahman, S. Hailes, Supporting trust in virtual communities. In Proceedings of the Hawaii International Conference on System Sciences, Maui, Hawaii, 2000.