Multimedia Tools and Applications. DOI 10.1007/s11042-015-2539-z
A multi-attribute rating based trust model: improving the personalized trust modeling framework

Guangquan Xu, Gaoxu Zhang, Chao Xu, Bin Liu, Mingquan Li, Yan Ren, Xiaohong Li, Zhiyong Feng, Degan Zhang
Received: 24 November 2014 / Revised: 6 February 2015 / Accepted: 26 February 2015
© Springer Science+Business Media New York 2015
Abstract  Trust models have recently contributed much to the success of online multimedia recommendation services. However, most of them consider only binary ratings and ignore the attributes of ratings, which limits their general applicability. To address this problem, we propose a multi-attribute rating based trust model that improves Zhang's personalized trust modeling framework, an existing framework for trust modeling with binary ratings in multi-agent electronic marketplaces. Our approach does not restrict users to a single-attribute rating; it allows a rating to take any value between 0 and 1 rather than only 0 or 1; it improves assessment accuracy by calculating the similarity of common ratings between recommenders and users; and it considers the certainty of ratings to deal with sudden changes in partners' behaviours. Experimental results show that our approach can effectively model the trustworthiness of recommenders and providers, and that it can resist several kinds of malicious attacks.

Keywords  Multi-attribute rating · Trust model · Multimedia recommendation service · Malicious attack

G. Xu (corresponding author), G. Zhang, B. Liu, Y. Ren, X. Li, Z. Feng: Institute of Software and Information Security Engineering, School of Computer Science and Technology, Tianjin University, Tianjin, China. e-mail: [email protected]  URL: http://cs.tju.edu.cn/faculty/xugq
C. Xu: School of Computer Software, Tianjin University, Tianjin, China
M. Li: The SpaceStar Technology Co., Ltd, Xi'an, Shaanxi, China
D. Zhang: Key Lab of Computer Vision and System, Ministry of Education, Tianjin University of Technology, Tianjin, China
1 Introduction

With the rapid development of Internet technology, people increasingly enjoy multimedia services and products over the network. Many online multimedia service platforms, such as Xunlei, Amazon, and Youku, are playing more and more important roles in modern multimedia commerce. Unfortunately, some unavoidable problems arise from the virtual nature of online services. In multimedia commerce systems, most people or agents are likely to be self-interested [4] and spare no effort to make profits. For example, some service providers announce that they can supply high-quality products and services to users (demanders), but in fact they may deliver low-quality products in order to gain more profit from the transactions, which can cause huge losses to the users. Some providers take measures to improve their reputation in early transactions and then change their behaviour and provide users with inferior products. Moreover, some malicious users may collude with providers to recommend bad providers to other users. Therefore, it is important for users to accurately assess the trustworthiness of multimedia service providers so as to ensure good services and products.

To avoid unexpected losses, intelligent user agents have to collect information about potential multimedia service providers and select the most suitable provider agent to trade with. However, it is not easy to accurately evaluate the trustworthiness of multimedia service providers and select those that meet one's demands. To cope with this dilemma, many researchers in artificial intelligence have proposed approaches for modeling the trustworthiness of providers so that transactions are performed with providers who are likely to be trustworthy [14], such as the beta reputation system [5], the PET model [8], the TRAVOS model [13], and the Swift Trust model [20]. In addition, trust and reputation systems have been recognized as key factors for successful electronic commerce adoption [10].

In this paper, we propose a multi-attribute rating based trust model to address unfair ratings so that users can easily choose the most credible multimedia service providers. It uses multi-attribute ratings within Zhang's [24] personalized trust modeling framework, aiming to improve applicability and flexibility. Firstly, in order to distinguish well-meaning users from malicious ones, we consider the credibility of each recommender (a user who provides ratings about products for other users) and give different weights to recommenders' ratings. Secondly, the ratings in most existing approaches are binary, taking the value one (e.g., positive or satisfactory) or zero (e.g., negative or unsatisfactory), which cannot adequately portray the quality of a product or service. In our approach, we aggregate the multi-attribute ratings into a value in the interval [0, 1] to improve the flexibility of the trust model. Finally, we take advantage of Zhang's [23, 24] personalized trust framework to combine each recommender's credibility with his ratings, realizing the transformation from Dempster-Shafer beliefs to the beta function.

The remainder of this paper is organized as follows: a review of related work is given in Section 2. In Section 3, we introduce the multi-attribute rating based trust model. Section 4 gives examples to illustrate how our approach works when recommenders provide different experiences for buyers.
In Section 5, we present some simulation and experiment results. In Section 6, we conclude our work and present an overview of our contributions.
2 Related work

The search for an effective approach to handling unfair testimonies has been going on for a long time [9]. In this section, we briefly summarize some representative trust models.
2.1 BRS: Beta Reputation System

In 2002, Jøsang and Ismail proposed the beta reputation system (BRS) [5], which is based on probability theory. In this model, the ratings of a service provider given by different recommender agents are used to calculate the trustworthiness of the provider, and a metric called opinion describes beliefs about the truth of statements. However, when a user agent receives many malicious ratings, the beta reputation model may lead to a bad decision. In 2004, Whitby and Jøsang extended BRS [19] to filter out ratings that are not in the majority and keep the fair ones, using an iterated filtering approach to calculate the final trustworthiness of the provider agent. Nevertheless, this method is not effective when the majority of ratings are unfair. It also does not consider the buyer's personal experience separately from the advisors' ratings [25].

2.2 EigenTrust model

EigenTrust [7] is one of the flow trust models [6] based on reputation ranking aggregation. It computes a global trust value in P2P networks through repeated and iterative multiplication and aggregation of trust scores along transitive chains until the trust values of all members of the P2P community converge to stable values [6]. On this basis, PowerTrust [26] was proposed by Zhou and Hwang to improve the effectiveness of the trust model. PowerTrust dynamically selects a number of the most trusted nodes as power nodes and uses them to calculate the trustworthiness of the target node. PowerTrust improves the accuracy of global reputation and the speed of aggregation. However, it is not trivial to calculate global trust values for every peer in a large-scale P2P system [15].

2.3 TRAVOS model

The TRAVOS model [13] was proposed by Teacy in 2005 to cope with inaccurate advice in agent-based virtual organizations. Its central idea is that unreliable ratings of a seller agent provided by advisors differ from the buyer's personal experience, and such unfair ratings are discounted when the buyer calculates the trustworthiness of the seller. In the end, TRAVOS adjusts all reputation opinions about the seller and merges the reputation information to obtain the final reputation value. However, this method filters inaccurate information using the buyer's personal experience with the selling agent, which may not work well when the buying agent has not had enough interactions with the selling agent. It also assumes that selling agents act consistently, which might not be true in many cases [25].

2.4 Fuzzy-trust model

Fuzzy logic has also been used to create trust models [11, 12]. Fuzzy-based Trust Evaluation (FTE) [11] was proposed by Schmidt in 2007. Establishing the FTE model involves three parts: (1) fuzzification, which converts exact values into fuzzy variables; (2) fuzzy inference, in which the fuzzy variables are transformed into fuzzy trust values through fuzzy inference rules; and (3) de-fuzzification, which converts the value obtained in part (2) into a trust value. Fuzzy-trust models can handle situations where complete information is not available from the participants, and experiments demonstrated that FTE is more effective than EigenTrust [7]. However, selecting the membership functions is difficult when establishing a fuzzy-trust
system, since different membership functions have different degrees of influence on the results.

2.5 Formal trust model

In 2007, Wang and Singh presented a formal trust model [17] that focuses on modelling the certainty of trust from evidence provided by multiple recommenders. The authors argued that a seller's varying behaviour results in conflicting evidence about the seller, and therefore formulated certainty in terms of evidence, based on a statistical measure defined over a probability distribution of outcomes. This work offers a mathematical basis for evidence-based trust models. In [16, 18], Wang and Singh offered a mathematical understanding of trust, especially in social networks and service-oriented computing. They showed that certainty increases as the conflict in the evidence decreases and, for a fixed level of conflict, increases with the amount of evidence.

2.6 Personalized trust model

Zhang and Cohen proposed a novel framework for trust modeling in multi-agent systems in [21, 22, 24]. Zhang's model combines direct and indirect interaction ratings of the selling agent to model its trustworthiness. The first stage models the trustworthiness of an advisor (another buying agent), based on the rating pairs of the advisor and the buyer. The second stage models the trustworthiness of the seller by combining personal experiences with the reputation ratings provided by advisors, with weights determined according to the Chernoff bound theorem [23]. Zhang's approach uses only binary testimonies to model the trustworthiness of selling and buying agents, and thus cannot deal with multidimensional testimonies when a buying agent assesses multiple aspects of a service.

As described above, most existing approaches are designed for reputation systems using binary ratings and do not take the attributes of ratings into account. In contrast, we present a multi-attribute rating based trust model so that users can make effective selections according to the ratings of different attributes; it uses multi-attribute ratings as evidence to improve the applicability and flexibility of the trust model. Several approaches mentioned above assume that recommenders act consistently, which may not be true in many cases. Our approach does not rely entirely on this assumption: it considers the certainty of recommenders' ratings and the similarity of rating pairs to filter out different kinds of malicious recommenders. Moreover, we make use of Zhang's personalized trust framework to compute the trustee's final trust value from the recommenders' trustworthiness and their ratings.
3 Multi-attribute rating based trust model

In this section, we describe the modeling process of our approach in detail. First, we introduce some notation and definitions to give a general description of the system. Suppose that, in this multimedia service system, there are M multimedia service providers {P1,P2,P3,…,PM} and N users {U1,U2,U3,…,UN}. After each transaction between a user and a provider, the user can give ratings about the provider's service. In addition, users can access all ratings about a multimedia service provider, while the provider cannot tamper with these ratings. Note that a user is regarded as a recommender when other users make use of his ratings to assess a provider.
To improve the applicability and flexibility of the system, we adopt multi-attribute ratings to evaluate the trustworthiness of providers. In the real world, people may evaluate a service or product from different aspects, such as quality, delivery date, and so on. Therefore, we select quality, delivery date, and after-sales service as the representative evaluated attributes. For example, if user U rates provider P's service after one transaction, the rating is represented as a vector:

$$r_U^P = (\mathrm{quality},\ \mathrm{delivery\ date},\ \mathrm{after\text{-}sales\ service}) = (q, d, a), \quad q, d, a \in [0, 1] \tag{1}$$
Moreover, in order to reflect the timeliness of ratings, the ratings are divided into different elemental time periods according to the time at which they are provided. A time period is denoted Ti, and T1 is the current time period.

3.1 Calculation of private trust value

The private trust value is calculated from the user's direct interactions with the multimedia service provider. There is no doubt that this is the most relevant and reliable information for the user. Since multi-attribute ratings are hard to work with directly, we transform each rating, from the user's point of view, into a single value in [0, 1]. For a rating r = (q, d, a), the user gives a final score for this rating from his own view:

$$SC = \omega_q \cdot q + \omega_d \cdot d + \omega_a \cdot a, \quad \omega_q + \omega_d + \omega_a = 1 \tag{2}$$
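As a small illustration, Eq. (2) can be computed as in the following sketch; the function name and the default weight values (0.4, 0.4, 0.2) are our own illustrative assumptions, not prescribed by the model.

```python
def rating_score(rating, weights=(0.4, 0.4, 0.2)):
    """Aggregate a multi-attribute rating (q, d, a) into a single score SC (Eq. 2).

    `weights` = (w_q, w_d, w_a) are chosen by the user and must sum to 1;
    the default values here are only an illustrative assumption.
    """
    q, d, a = rating
    w_q, w_d, w_a = weights
    assert abs(w_q + w_d + w_a - 1.0) < 1e-9, "weights must sum to 1"
    return w_q * q + w_d * d + w_a * a

print(round(rating_score((0.8, 0.9, 0.8)), 2))  # -> 0.84
```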
ωq, ωd and ωa are weights that users can set to reflect their own views. If a user pays more attention to the quality of service, he/she can set a high weight for quality and low weights for delivery date and after-sales service. As we all know, previous interaction experience may not always be relevant for the current service rating, because a provider may change his behaviour over time. Therefore, we should discount old ratings to reflect their timeliness. This can be achieved by introducing a forgetting factor as described by Jøsang [5]. The private trust value of a provider P from the view of a user U can then be calculated as follows:

$$R_{pri}(P) = \frac{\sum_{i=1}^{n} SC_{T_i} \cdot \lambda^{i-1} + 1}{\sum_{i=1}^{n} N_{T_i} \cdot \lambda^{i-1} + 2} \tag{3}$$

Here $SC_{T_i}$ is the sum of all rating scores in time period Ti, and $N_{T_i}$ is the number of ratings within Ti. λ (0 ≤ λ ≤ 1) is the forgetting rate, which assigns less weight to old ratings in order to deal with possible changes of the seller's behaviour over time [22]. Note that the higher the value of λ, the larger the weight placed on previous ratings.

3.2 Calculation of public trust value

In a multimedia service platform with numerous participants, it is impossible for a user to have direct interactions with every provider. Therefore, it is important for a user to resort to other users' recommendations for the evaluation, and many recommendation-based trust systems have been proposed. However, some malicious users may recommend terrible providers to the user. So it is also a challenge to effectively filter unfair ratings and efficiently aggregate referrals from diverse recommenders.
3.2.1 Modeling the trustworthiness of recommenders

Since people prefer to trust their own views rather than others', they pay more attention to people who hold similar views. Therefore, we compare the user's ratings with other users' (i.e., recommenders') ratings for commonly rated providers to model the trustworthiness of recommenders. To ensure the consistency of ratings, the compared ratings should be in the same time period. Here, we define $(r_A^P, r_B^P) = \big((q_A^P, d_A^P, a_A^P),\ (q_B^P, d_B^P, a_B^P)\big)$ as a rating pair; the two ratings $r_A^P$ and $r_B^P$ correspond to each other only if they lie in the same elemental time period. We then measure the similarity between $r_A^P$ and $r_B^P$ using a Euclidean-norm based method:

$$D(r_A^P, r_B^P) = \frac{\sqrt{(q_B^P - q_A^P)^2 + (d_B^P - d_A^P)^2 + (a_B^P - a_A^P)^2}}{\sqrt{3}}, \quad D(r_A^P, r_B^P) \in [0, 1]$$
$$Sim(r_A^P, r_B^P) = 1 - D(r_A^P, r_B^P) \tag{4}$$

Note that such a similarity has been normalized. Suppose the number of rating pairs between user A and recommender B for a commonly rated provider is k. The positive outcome over the rating pairs is calculated as:

$$\alpha = \sum_{j=1}^{m} \sum_{i=1}^{k} Sim_i\!\left(r_A^{P_j}, r_B^{P_j}\right) \tag{5}$$
Here m is the number of commonly rated providers. Similarly, the negative outcome is:

$$\beta = \sum_{j=1}^{m} \sum_{i=1}^{k} D_i\!\left(r_A^{P_j}, r_B^{P_j}\right) \tag{6}$$
To estimate the probability that recommender B provides ratings similar to user A's, the best method is to use the expected value of this probability. The expected probability of a positive outcome is calculated as follows:

$$R(B) = \frac{\alpha + 1}{\alpha + \beta + 2} \tag{7}$$
The idea of adding 1 to each of α and β (and thus 2 to α+β) follows Laplace's famous rule of succession for applying probability to inductive reasoning [18]. It is also used by other well-known trust models, such as the generalized BRS [5] and TRAVOS [13]. Moreover, we consider the certainty of the ratings provided by recommenders when modeling their trustworthiness. The certainty is a measure of the confidence in the agent's performance. As shown in [16], computing the certainty can help an agent filter out recommenders with insufficient information, even if nominally the probability of a good outcome is high. The certainty is defined as follows:

$$C(B) = \frac{1}{2} \int_0^1 \left| \frac{x^r (1-x)^s}{\int_0^1 x^r (1-x)^s \, dx} - 1 \right| dx \tag{8}$$
In this equation, x represents the probability that recommender B provides a fair rating, r is the sum of fair ratings, and s is the sum of unfair ratings. In contrast to Wang's [18] approach, here r is mapped to the sum of similarities α, and s equals β.
Fig. 1 Network structure of the example. A solid line indicates that one node has transactions with another node. Dash lines 1 and 2 show that user A estimates the trustworthiness of other users by comparing ratings of commonly rated providers, while dash lines 3 and 4 show that user A estimates provider P4's performance through recommenders B1 and B2
We then combine the certainty with the expected probability of a positive outcome to obtain the final trustworthiness of recommender B:

$$Tr(B) = R(B) \cdot C(B) \tag{9}$$

It is obvious that, for a fixed similarity of the rating pairs, the trustworthiness increases as the number of rating pairs increases.
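A minimal sketch of how Eqs. (4)-(9) could be implemented is given below. The function names and data layout are our own assumptions, and the certainty integral of Eq. (8) is evaluated numerically with SciPy rather than in closed form.

```python
import math

from scipy import integrate  # used only for the certainty integral of Eq. (8)

def similarity(ra, rb):
    """Euclidean-norm based similarity of two multi-attribute ratings (Eq. 4)."""
    d = math.sqrt(sum((x - y) ** 2 for x, y in zip(ra, rb))) / math.sqrt(len(ra))
    return 1.0 - d

def certainty(r, s):
    """Certainty of Eq. (8): deviation of the evidence density from the uniform one."""
    norm, _ = integrate.quad(lambda x: x ** r * (1 - x) ** s, 0, 1)
    dev, _ = integrate.quad(lambda x: abs(x ** r * (1 - x) ** s / norm - 1), 0, 1)
    return 0.5 * dev

def recommender_trust(rating_pairs):
    """Trustworthiness of a recommender from commonly rated providers (Eqs. 5-9).

    `rating_pairs` is a list of (user_rating, recommender_rating) tuples drawn
    from the same elemental time periods.
    """
    sims = [similarity(ra, rb) for ra, rb in rating_pairs]
    alpha = sum(sims)                        # Eq. (5): positive outcome
    beta = sum(1.0 - s for s in sims)        # Eq. (6): negative outcome (sum of distances)
    r_b = (alpha + 1) / (alpha + beta + 2)   # Eq. (7): expected positive probability
    return r_b * certainty(alpha, beta)      # Eq. (9): Tr(B) = R(B) * C(B)
```

For identical rating vectors the distance is 0 and the similarity is 1, so many agreeing rating pairs push α up while keeping β near 0, driving both R(B) and C(B), and hence Tr(B), towards 1.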
3.2.2 Calculation of public trust value

When user A does not have sufficient ratings about provider P, he will consider the ratings provided by his recommenders (sometimes called neighbours). It is therefore important to combine the trustworthiness of a recommender with his ratings. Here, we take advantage of the mapping function proposed by Zhang [23] from beliefs defined by the Dempster-Shafer theory to the beta function:

$$D_{B_j}^{T_i}(pos) = \frac{2\,Tr(B_j)\,SC_{T_i}}{(1 - Tr(B_j))\,N_{T_i} + 2} \tag{10}$$

$$D_{B_j}^{T_i}(neg) = \frac{2\,Tr(B_j)\,(N_{T_i} - SC_{T_i})}{(1 - Tr(B_j))\,N_{T_i} + 2} \tag{11}$$
Here, $D_{B_j}^{T_i}(pos)$ is the positive outcome in time period Ti based on recommender Bj's ratings, and $D_{B_j}^{T_i}(neg)$ is the negative outcome. As before, $SC_{T_i}$ is the sum of all rating scores in time period Ti and $N_{T_i}$ is the number of ratings within Ti. Finally, we aggregate all outcomes from the different recommenders to obtain the public trust value of provider P:

$$R_{pub}(P) = \frac{\sum_{j=1}^{k}\sum_{i=1}^{n} D_{B_j}^{T_i}(pos)\,\lambda^{i-1} + 1}{\sum_{j=1}^{k}\sum_{i=1}^{n} \left( D_{B_j}^{T_i}(pos) + D_{B_j}^{T_i}(neg) \right) \lambda^{i-1} + 2} \tag{12}$$
Table 1 Ratings provided by recommenders (quality, delivery date, after-sales service)

User B1:
       T1                           T2                           T3                           T4
P1     0.7,0.8,0.8 / 0.8,0.8,0.8    0.8,0.9,0.8 / 0.7,0.9,0.8    0.9,0.8,0.8 / 0.9,0.8,0.9    0.8,0.8,0.8
P2     0.7,0.7,0.6                  0.7,0.6,0.7                  0.6,0.6,0.6                  0.7,0.6,0.6
P3     0.3,0.4,0.2                  0.4,0.3,0.3                  0.4,0.5,0.5                  0.8,0.8,0.9
P4     0.8,0.9,0.8                  0.8,0.8,0.8                  0.9,0.9,0.7                  -

User B2:
       T1                           T2                           T3                           T4
P1     0.2,0.2,0.3 / 0.2,0.2,0.3    0.3,0.3,0.1 / 0.1,0.2,0.3    0.2,0.1,0.3 / 0.2,0.1,0.2    0.3,0.2,0.2
P2     0.2,0.3,0.3                  0.1,0.2,0.2                  0.1,0.3,0.2                  0.2,0.2,0.1
P3     0.8,0.9,0.8                  0.9,0.9,0.8                  0.8,0.9,0.8                  0.2,0.2,0.3
P4     0.2,0.2,0.1                  0.2,0.2,0.2                  -                            -
Similar to the way the private trust value is estimated, the public trust value is also discounted by the forgetting factor λ. Here, k is the number of recommenders and n is the number of time periods Ti.

3.3 Trust value of provider

Generally speaking, the information sources considered by researchers are divided into two parts: direct interaction experiences and indirect information. In this paper, direct experience is used to calculate the private trust value, while indirect information is used to obtain the public trust value. Therefore, we combine the private and public trust values as follows:

$$Trust(P) = \varepsilon \cdot R_{pri}(P) + (1 - \varepsilon) \cdot R_{pub}(P) \tag{13}$$

Note that the weights of the public and private trust values are the key to the combination. As we all know, if a user has had sufficient transactions with a provider, he/she will be confident about his own experiences and give a large weight to the private trust value. Therefore, the threshold on the number of direct transactions can be set by the user, and the parameter ε is calculated as follows:

$$\varepsilon = \begin{cases} \dfrac{N}{N_{lim}} & N < N_{lim} \\ 1 & \text{otherwise} \end{cases} \tag{14}$$

In Eq. (14), Nlim is the threshold on the number of ratings above which the user is confident about his own experiences.
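To make the whole aggregation concrete, the following sketch strings together Eqs. (3) and (10)-(14) under the notation of this section. The data structures and function names are illustrative assumptions; rating scores are assumed to have been computed beforehand with Eq. (2), and recommender trust values with Eq. (9).

```python
def private_trust(scores_by_period, lam=0.9):
    """Private trust value from the user's own discounted rating scores (Eq. 3).

    `scores_by_period[i]` lists the rating scores SC given in period T_{i+1},
    index 0 being the current period T1; `lam` is the forgetting rate.
    """
    num = sum(sum(scores) * lam ** i for i, scores in enumerate(scores_by_period)) + 1
    den = sum(len(scores) * lam ** i for i, scores in enumerate(scores_by_period)) + 2
    return num / den

def public_trust(recommenders, lam=0.9):
    """Public trust value aggregated over recommenders (Eqs. 10-12).

    `recommenders` is a list of (trust, scores_by_period) tuples, where `trust`
    is the recommender's trustworthiness Tr(B_j) from Eq. (9) and
    `scores_by_period[i]` holds the scores of that recommender's ratings for the
    provider in period T_{i+1}.
    """
    num, den = 1.0, 2.0
    for trust, scores_by_period in recommenders:
        for i, scores in enumerate(scores_by_period):
            n_ti, sc_ti = len(scores), sum(scores)
            if n_ti == 0:
                continue
            scale = 2 * trust / ((1 - trust) * n_ti + 2)
            pos = scale * sc_ti               # Eq. (10)
            neg = scale * (n_ti - sc_ti)      # Eq. (11)
            num += pos * lam ** i
            den += (pos + neg) * lam ** i
    return num / den                          # Eq. (12)

def provider_trust(n_direct, n_lim, r_pri, r_pub):
    """Combine the private and public trust values (Eqs. 13-14)."""
    eps = n_direct / n_lim if n_direct < n_lim else 1.0   # Eq. (14)
    return eps * r_pri + (1 - eps) * r_pub                # Eq. (13)
```

When the user has no direct transactions with the provider, ε is 0 and the provider's trust value reduces to the public trust value, as in the example of Section 4.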
4 An example using our approach

In this section, we provide an example to illustrate in detail how our approach works. The example shows the trustworthiness of different recommenders (users) in order to demonstrate the effectiveness of our approach.

Table 2 User A's ratings

       T1            T2            T3            T4
P1     0.7,0.8,0.8   0.7,0.9,0.8   0.9,0.8,0.9   0.8,0.9,0.8
P2     0.7,0.6,0.6   0.7,0.6,0.7   0.7,0.6,0.6   0.7,0.6,0.7
P3     0.3,0.3,0.2   0.3,0.3,0.3   0.5,0.5,0.5   0.9,0.8,0.8
Table 3 Trustworthiness of recommenders

Rj        User B1   User B2
Tr(Bj)    0.65      0.23
Suppose that there are four providers (P1, P2, P3, P4) and two recommenders (B1 is an honest recommender, B2 is a liar). The network structure and the ratings (quality, delivery date, after-sales service) are shown in Fig. 1 and Tables 1 and 2, respectively. As Tables 1 and 2 show, user B1's ratings about the three providers are always similar to user A's ratings, while user B2 always gives the providers opposite ratings. Therefore, we can calculate the trustworthiness of each recommender using Eqs. (4)-(9); the results are shown in Table 3.

Note that in Eq. (4) we use a particular measure to compute the similarity of a rating pair. To select the most appropriate similarity measure, Table 4 illustrates the differences among the similarity values computed by various methods for four specific problems, following [2]. It can be observed that the Euclidean-norm based measure solves the four problems of PCC and COS and generates more realistic similarity values. Therefore, the Euclidean-norm based measure is an effective choice for calculating the similarity of two ratings.

After modelling the trustworthiness of the recommenders, user A can calculate the trust value of provider P4. Since user A has not had any transaction with P4, he has to model P4's trust value based on recommenders B1 and B2. Since the weights of the different attributes are set by users according to their personal interests, the score SC_Ti of a rating calculated by Eq. (2) differs from user to user. For example, buyer A pays more attention to quality and delivery date, i.e., ωq = 0.4, ωd = 0.4, ωa = 0.2.
Table 4 Examples of PCC, COS, and Euclidean-norm based similarity metrics

Problems      Rating a              Rating b              (1+PCC)/2   COS     Equation (4)
Flat-value    [1, 1, 1]             [1, 1, 1]             NaN         1.0     1.0
              [1, 1, 1]             [0.75, 0.75, 0.75]    NaN         1.0     0.75
              [1, 1, 1]             [0.25, 0.25, 0.25]    NaN         1.0     0.25
Opp.-value    [0.9, 0.1, 0.9]       [0.1, 0.9, 0.1]       0           0.232   0.2
              [0.25, 0.75, 0.75]    [0.75, 0.25, 0.25]    0           0.623   0.5
              [0.25, 0.75, 0.75, 0] [0.75, 0.25, 0.25, 1] 0           0.397   0.339
Single-value  [1]                   [1]                   NaN         1.0     1.0
              [1]                   [0.75]                NaN         1.0     0.75
              [1]                   [0.1]                 NaN         1.0     0.1
Cross-value   [0.1, 0.9]            [0.9, 0.1]            0           0.220   0.2
              [0.1, 0.75]           [0.9, 0.25]           0           0.394   0.333
              [1, 0.25]             [1, 0.75]             1           0.922   0.646
              [0.75, 0.5]           [0.5, 0.25]           1           0.993   0.5
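The comparison in Table 4 can be reproduced with a short script such as the sketch below (our own illustration using NumPy); the PCC column is reported as (1+PCC)/2, as in the table.

```python
import numpy as np

def pcc_sim(a, b):
    """(1 + Pearson correlation) / 2; NaN when either vector has zero variance."""
    with np.errstate(divide="ignore", invalid="ignore"):
        return (1 + np.corrcoef(a, b)[0, 1]) / 2

def cos_sim(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def euclid_sim(a, b):
    """Euclidean-norm based similarity of Eq. (4), normalized by sqrt(len(a))."""
    return 1 - float(np.linalg.norm(np.asarray(a) - np.asarray(b)) / np.sqrt(len(a)))

a, b = [0.9, 0.1, 0.9], [0.1, 0.9, 0.1]
print(round(pcc_sim(a, b), 3), round(cos_sim(a, b), 3), round(euclid_sim(a, b), 3))
# roughly 0.0, 0.232, 0.2 -- the first "Opp.-value" row of Table 4
```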
Table 5 The discount of ratings

                  T1      T2      T3      T4
D_{B1}^{Ti}(pos)  0.465   0.443   0.476   -
D_{B1}^{Ti}(neg)  0.089   0.111   0.077   -
D_{B2}^{Ti}(pos)  0.030   0.033   -       -
D_{B2}^{Ti}(neg)  0.136   0.133   -       -
According to Eqs. (10)-(11), the discounts of the recommenders' ratings about P4 are given in Table 5. Finally, we set the forgetting rate λ = 0.9 as in [23], and the trustworthiness of multimedia provider P4 can be calculated by Eqs. (12)-(14). Since the number of direct interactions is zero, the weight ε of the private trust value is 0:

$$Trust(P_4) = 0 + (1 - 0) \cdot \frac{\sum_{j=1}^{k}\sum_{i=1}^{n} D_{B_j}^{T_i}(pos)\,\lambda^{i-1} + 1}{\sum_{j=1}^{k}\sum_{i=1}^{n} \left( D_{B_j}^{T_i}(pos) + D_{B_j}^{T_i}(neg) \right) \lambda^{i-1} + 2} = 0.61$$
This result shows that user B1's ratings have a great influence on the trust value, as expected. As the number of transactions increases, the trustworthiness of P4 will converge to its actual value.
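For reference, the arithmetic behind this figure can be checked with a few lines of code; the sketch below simply plugs the Table 5 discounts into Eq. (12) with λ = 0.9.

```python
# Plug the Table 5 discounts into Eq. (12) with forgetting rate lambda = 0.9.
lam = 0.9
pos = {"B1": [0.465, 0.443, 0.476], "B2": [0.030, 0.033]}
neg = {"B1": [0.089, 0.111, 0.077], "B2": [0.136, 0.133]}

num = 1.0 + sum(p * lam ** i for r in pos for i, p in enumerate(pos[r]))
den = 2.0 + sum((p + n) * lam ** i
                for r in pos for i, (p, n) in enumerate(zip(pos[r], neg[r])))
print(round(num / den, 2))  # -> 0.61
```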
5 Experiments

In this section, we carry out experiments to show the effectiveness of our trust model. In the experiments, the ratings are provided by different kinds of recommenders, and we show how modeling the trustworthiness of the recommenders filters out unfair ratings.
Fig. 2 Trustworthiness of recommenders by using the ratings about P1
Fig. 3 Trustworthiness of recommenders by using the ratings about P2
Moreover, we compare the performance of our scheme with the BRS [5], Personalized [23, 24] and Shin [3] approaches. We consider four kinds of attacks, each corresponding to one kind of malicious user; a sketch of how such rating behaviours can be simulated follows the list.

- Ballot-stuffing users (Dellarocas 2000 [1]): users of this kind always give good ratings regardless of the actual behaviours of the providers.
- Badmouthing users (Dellarocas 2000 [1]): users of this kind always testify that the provider behaves badly, regardless of its actual behaviour.
- Strategic users: users of this kind provide fair ratings in early transactions, but then abuse their credibility to mislead other users.
- Simple malicious users: users of this kind always provide ratings opposite to other users'.
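To make these attacker types concrete, the sketch below shows one way their rating behaviours could be generated in a simulation; it is our own illustrative assumption rather than the exact code used in our experiments, and all parameter values (noise level, switching point) are hypothetical.

```python
import random

def recommender_rating(kind, true_quality, transaction_no, noise=0.05, switch_at=20):
    """Return one (q, d, a) rating produced by a recommender of the given kind.

    `true_quality` is the provider's actual per-attribute performance in [0, 1];
    `switch_at` is the transaction index at which a strategic user turns malicious.
    All parameter values are illustrative assumptions.
    """
    def honest():
        return tuple(min(1.0, max(0.0, v + random.uniform(-noise, noise)))
                     for v in true_quality)

    if kind == "ballot-stuffing":    # always praises the provider
        return tuple(random.uniform(0.8, 1.0) for _ in true_quality)
    if kind == "badmouthing":        # always claims the provider behaves badly
        return tuple(random.uniform(0.0, 0.2) for _ in true_quality)
    if kind == "simple-malicious":   # reports the opposite of the provider's behaviour
        return tuple(1.0 - v for v in true_quality)
    if kind == "strategic":          # fair early on, malicious afterwards
        return honest() if transaction_no < switch_at else tuple(1.0 - v for v in true_quality)
    return honest()                  # honest recommender
```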
Fig. 4 Trustworthiness of recommenders by using the ratings about P3
Fig. 5 The trustworthiness of strategic recommender
In the first experiment, we set up two kinds of providers: multimedia service provider P1 always provides good service to users, while P2 always supplies inferior service. We compare the trustworthiness values computed for the different kinds of recommenders. The experimental results are shown in Figs. 2 and 3. From the figures, we can see that a recommender's trustworthiness stays low when the recommender provides many unfair ratings. Although the ballot-stuffing and badmouthing recommenders may provide ratings similar to the user's in some situations, they still cannot obtain high trust values.
Fig. 6 Comparison of the BRS and our approaches
In some situations, provider P3's behaviour may change over time, and we also show how our approach works in this case. Here, multimedia service provider P3 supplies good service in early transactions and bad service in subsequent ones. As shown in Fig. 4, as the number of common transactions (each common transaction corresponding to a rating pair) increases, only the honest recommender obtains a high trustworthiness evaluation, while the other three kinds of recommenders receive relatively low trust values, since their ratings differ from the user's.

A strategic user is a kind of malicious user that is hard to deal with, since its behaviour may change over time. To illustrate the effectiveness of our approach, we compare the trust value of the strategic user under our method, the Personalized [23, 24] method, and the Shin [3] model. In this case, the strategic user provides fair ratings in early transactions, but then abuses his credibility to mislead other users. From the results shown in Fig. 5, we can see that the recommender gains a high trust value after giving only a few fair ratings under Zhang's [23, 24] approach and the Shin [3] model. With our method, the trustworthiness of the recommender remains relatively low, since the number of common transactions is still small. The recommender then provides unfair ratings to cheat users. After that, the Personalized approach and our method converge to a low value quickly, while the trustworthiness in the Shin model declines very slowly. Therefore, our method is more effective than Zhang's approach and the Shin model in dealing with a strategic user's ratings.

In another simulation, we directly compare the performance of our method with that of the well-known BRS [5] trust model in the situation where the regular user does not have any experience with the multimedia service provider. We assume that there are 1 provider, 100 recommenders, and 1 regular user. All recommenders have many common transactions with the user, which means the regular user has already obtained the trust values of the recommenders, but the regular user has no transactions with the provider. Therefore, the trustworthiness of the provider is estimated as the public trust value based on the multi-attribute ratings provided by the recommenders. Moreover, the percentage of malicious recommenders varies from 0 to 80 %. Figure 6 illustrates that the trustworthiness of the provider estimated with our method is more accurate than with BRS, even when the majority of recommenders are malicious.
6 Conclusions

In this paper, to improve flexibility and applicability on multimedia service platforms, we proposed a multi-attribute rating based trust model within a personalized trust modeling framework. It mainly deals with the multi-attribute ratings provided by recommenders and helps a general user choose a trustworthy provider to trade with. Our experiments confirmed that the approach can deal with different kinds of malicious attacks efficiently. In future work, we will carry out more experiments to compare our method with other popular trust models, and we will make the necessary adjustments to improve its accuracy and efficiency. Furthermore, how to cope with the new-participant problem (also called the cold-start problem) in multimedia service platforms remains a challenge. In conclusion, much more work is still needed to improve our trust model before it can be applied in the real world.
Acknowledgments This work is supported by the National Natural Science Foundation of China under grant No. 61340039.
References

1. Dellarocas C (2000) Immunizing online reputation reporting systems against unfair ratings and discriminatory behavior. In: Proceedings of the 2nd ACM Conference on Electronic Commerce, pp 150–157
2. Guo G, Zhang J, Yorke-Smith N (2013) A novel Bayesian similarity measure for recommender systems. In: Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence, AAAI Press, pp 2619–2625
3. Hang CW, Zhang Z, Singh MP (2012) Shin: generalized trust propagation with limited evidence. IEEE Comput 46:78–85
4. Huynh TD, Jennings NR, Shadbolt NR (2006) An integrated trust and reputation model for open multi-agent systems. Auton Agent Multi-Agent Syst 13(2):119–154
5. Jøsang A, Ismail R (2002) The beta reputation system. In: Proceedings of the 15th Bled Electronic Commerce Conference, pp 41–55
6. Jøsang A, Ismail R, Boyd C (2007) A survey of trust and reputation systems for online service provision. Decis Support Syst 43(2):618–644
7. Kamvar SD, Schlosser MT, Garcia-Molina H (2003) The EigenTrust algorithm for reputation management in P2P networks. In: Proceedings of the 12th International Conference on World Wide Web, pp 640–651
8. Liang Z, Shi W (2005) PET: a personalized trust model with reputation and risk evaluation for P2P resource sharing. In: Proceedings of the 38th Annual Hawaii International Conference on System Sciences, pp 201b
9. Liu S, Zhang J, Miao C et al (2012) An integrated clustering-based approach to filtering unfair multi-nominal testimonies. Comput Intell 30(2):316–341
10. Sabater J, Sierra C (2005) Review on computational trust and reputation models. Artif Intell Rev 24(1):33–60
11. Schmidt S, Steele R, Dillon TS et al (2007) Fuzzy trust evaluation and credibility development in multi-agent systems. Appl Soft Comput 7(2):492–505
12. Song S, Hwang K, Zhou R et al (2005) Trusted P2P transactions with fuzzy reputation aggregation. IEEE Internet Comput 9(6):24–34
13. Teacy WT, Patel J, Jennings NR et al (2005) Coping with inaccurate reputation sources: experimental analysis of a probabilistic trust model. In: Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS), pp 997–1004
14. Thirunarayan K, Anantharam P, Henson C et al (2014) Comparative trust management with applications: Bayesian approaches emphasis. Futur Gener Comput Syst 31:182–199
15. Tian C, Yang B (2011) Trust, a reputation and risk based trust management framework for large-scale, fully decentralized overlay networks. Futur Gener Comput Syst 27(8):1135–1141
16. Wang Y, Hang CW, Singh MP (2011) A probabilistic approach for maintaining trust based on evidence. J Artif Intell Res (JAIR) 40(1):221–267
17. Wang Y, Singh MP (2007) Formal trust model for multiagent systems. In: Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI), pp 1551–1556
18. Wang Y, Singh MP (2010) Evidence-based trust: a mathematical model geared for multiagent systems. ACM Trans Auton Adapt Syst (TAAS) 5(4):14
19. Whitby A, Jøsang A, Indulska J (2004) Filtering out unfair ratings in Bayesian reputation systems. In: Proceedings of the 7th International Workshop on Trust in Agent Societies, vol 6, pp 106–117
20. Xu G, Feng Z, Wu H et al (2007) Swift trust in a virtual temporary system: a model based on the Dempster-Shafer theory of belief functions. Int J Electron Commer 12(1):93–126
21. Zhang J, Cohen R (2006) A personalized approach to address unfair ratings in multiagent reputation systems. In: Proceedings of the AAMAS Workshop on Trust in Agent Societies
22. Zhang J, Cohen R (2007) Design of a mechanism for promoting honesty in e-marketplaces. In: Proceedings of the National Conference on Artificial Intelligence, AAAI Press, 22(2):1495
23. Zhang J, Cohen R (2008) Evaluating the trustworthiness of advice about seller agents in e-marketplaces: a personalized approach. Electron Commer Res Appl 7(3):330–340
24. Zhang J, Cohen R (2013) A framework for trust modeling in multiagent electronic marketplaces with buying advisors to consider varying seller behavior and the limiting of seller bids. ACM Trans Intell Syst Technol (TIST) 4(2):24
25. Zhang J, Sensoy M, Cohen R (2008) A detailed comparison of probabilistic approaches for coping with unfair ratings in trust and reputation systems. In: Sixth Annual Conference on Privacy, Security and Trust (PST'08), IEEE, pp 189–200
26. Zhou R, Hwang K (2007) PowerTrust: a robust and scalable reputation system for trusted peer-to-peer computing. IEEE Trans Parallel Distrib Syst 18(4):460–473
Guangquan Xu is a Ph.D. and associate professor at the School of Computer Science and Technology, Tianjin University, China. He received his Ph.D. degree from Tianjin University in March 2008. He is a member of the CCF and ACM. His research interests include trusted computing, trust and reputation.

Gaoxu Zhang is a master's student at the School of Computer Science and Technology, Tianjin University, Tianjin, China. His research interests include trusted computing and trust modeling.

Chao Xu is a Ph.D. and associate professor at the School of Computer Software, Tianjin University, China. He received his Ph.D. degree from Tianjin University in March 2010. He is a member of the CCF and ACM. His research interests include trusted computing and pattern recognition.
Bin Liu is a master's student at the School of Computer Science and Technology, Tianjin University, Tianjin, China. His research interests include trusted computing and web security.

Mingquan Li is a Ph.D. and senior engineer at the SpaceStar Technology Co., Ltd, Beijing, China. He received his Ph.D. degree from Tianjin University in September 2008. He is a member of the CCF. His research interests include cloud computing, big data, and information safety.

Yan Ren is a master's student at the School of Computer Science and Technology, Tianjin University, Tianjin, China. His research interests include trusted computing and Android security.
Xiaohong Li is a full tenured professor at the School of Computer Science and Technology, Tianjin University, Tianjin, China. Her current research interests include knowledge engineering, trusted computing, and security software engineering.

Zhiyong Feng is a full tenured professor and head of the School of Computer Science and Technology, Tianjin University, Tianjin, China. His current research interests include knowledge engineering, service computing, and security software engineering.

Degan Zhang was born in 1970. He holds a Ph.D. and has been a member of IEEE since 2001. He graduated from Northeastern University, China. He is now a professor at the Tianjin Key Lab of Intelligent Computing and Novel Software Technology, Key Lab of Computer Vision and System.