This is the authors' version of the paper. The final publication is available at Springer via DOI: 10.1007/978-3-642-16934-2_23

The Roles of Reliability and Reputation in Competitive Multi Agent Systems

Salvatore Garruzzo and Domenico Rosaci

DIMET, Università Mediterranea di Reggio Calabria, Via Graziella, Località Feo di Vito, 89122 Reggio Calabria, Italy
E-mail: {salvatore.garruzzo, domenico.rosaci}@unirc.it

Abstract. In competitive multi-agent systems, requesting recommendations to evaluate the reputation of an agent has a cost, whereas an agent computes the reliability of another agent without paying any price. An agent must therefore decide to what extent to rely on reputation with respect to reliability when selecting the most promising interlocutors. In the past, two main schools of thought emerged: one assuming the necessity of always exploiting both reliability and reputation, the other arguing that the cost of using reputation outweighs its advantages. In this paper, we evaluate the role of reputation with respect to reliability, depending on some characteristics of the involved scenario, such as the size of the agent population, the percentage of unreliable agents and the percentage of agents provided with a reputation model. To this purpose, we introduce a novel framework that contains both a reliability-reputation model and a mechanism to integrate reliability and reputation into a unique synthetic value. We have used this framework to build a competitive agent, which we have run on the well-known ART platform, and we have performed several experiments to study how the characteristics of the scenario are correlated to the exploited reliability and reputation model.

1 Introduction

In recent years, the introduction of trust-based approaches in multi-agent systems (MAS) has been recognized as a promising solution to improve the effectiveness of these systems [10, 9, 2]. Nowadays, MASs are increasingly exploited in several application domains to provide e-services, such as e-Commerce, e-Learning and so on. In such a context, software agents are distributed in large-scale networks and interact to share resources with each other. Trust is essential in such settings to make social interactions as fruitful as possible [8]. In the context of e-services, trust is defined as: "the quantified belief by a trustor with respect to the competence, honesty, security and dependability of a trustee within a specified context" [5]. When two agents interact with each other, one of them (the trustor) assumes the role of a service requester and the other (the trustee) acts as an e-service provider. It is important to point out that a trust relationship can involve multiple dimensions, depending on the particular perspective under which the interaction between the two agents is viewed. Some usual dimensions of trust are:

– competence: a competent agent is capable of correctly and efficiently performing the requested tasks.
– honesty: an honest agent shows a truthful behavior, and it is not fraudulent or misleading.
– security: a secure agent confidentially manages private data and does not allow unauthorized access to them.
– reliability: a reliable agent provides reliable services. In other words, reliability measures the degree of reliance that can be placed on the services provided by the agent, including their efficiency.

For each of the aspects considered above, trust augments the spectrum of interactions between two agents. Another important consideration is that trust relationships depend on the context in which the agents interact. For instance, an e-commerce agent might be reliable if the price of the transaction is low enough, while its reliability might significantly decrease in the case of a very high price. Without loss of generality, in this paper we choose to deal only with reliability, which is the main dimension usually considered in competitive multi-agent systems. Thus, from here on, we refer to trust using the term reliability. While reliability is a subjective measure, reputation is a measure of the trust that the whole community perceives with respect to a given trustee. Reputation assumes a very important role when an agent a does not have sufficient knowledge of another agent b: in this case, a can exploit b's reputation to decide whether b is a reliable interlocutor or not. Several models have been proposed in the past for representing both reliability and reputation. However, we remark that a crucial point in the practical use of these two measures is the possibility of suitably combining them to support the agent's decision. To make this issue clear, consider an agent a that has to choose, among a set of possible interlocutors, the most suitable agents to request a service from. Clearly, in order to make its choice, a has to consider its past interactions with the agents of the community, that is, its reliability model. However, a has not necessarily interacted with all the agents of the community; moreover, in some cases the past interactions with an agent could be insufficient to make the trust measure credible. In such a situation, a has to consider, besides the reliability measure, also a reputation measure, derived from a reputation model. Finally, to make its decision, a should combine both reliability and reputation measures to obtain a synthetic preference measure, i.e., a score to assign to each candidate agent in order to choose the most preferable candidates for interaction. The main question is: how much should reliability be weighted with respect to reputation in determining the preference measure?

We argue that the size of the agent population, as well as the capability of the agents to provide useful recommendations, are two key parameters for choosing how much it is necessary to use reputation with respect to reliability. We also observe that, in the presence of a large agent population with few reliable agents against many unreliable ones, the use of reputation becomes necessary.

1.1 Our contribution

The main contribution of this paper is to determine some criteria to suitably choose the weights that should be assigned to reliability and reputation. A first difficulty in studying the problem above is the unsuitability, with respect to this purpose, of the existing reliability and reputation approaches. Some of them provide measures of reliability and reputation, as well as a model for integrating them, but they are not designed for a competitive system and thus do not consider the presence of reliability and reputation costs. On the other hand, most of the existing approaches for competitive systems do not consider the necessity of integrating reliability and reputation into a unique measure. Finally, the aforementioned approaches generally do not distinguish between the reliability of an agent as a service provider and the reliability of the same agent as a provider of recommendations. We argue that this differentiation is crucial for a study that wants to deeply investigate the role of reliability (which identifies with the service reliability) and that of reputation (which strictly depends on the recommendations provided by agents). To overcome this problem, we introduce in this paper a new reliability and reputation framework for a competitive multi-agent system, called Reliability and Reputation Agent Framework (RRAF). This framework allows an agent a to compute a measure of reliability and a measure of reputation of another agent b, where the reputation is determined by using the recommendations coming from other agents of the community, obtained by paying a given price. In this framework, reliability and reputation are integrated into an overall preference measure, and this integration is based on a parameter called β, ranging in the real interval [0,1]. In words, the value β = 1 means assigning full importance to reliability and no importance to reputation. This framework is then used to study the role of reliability and reputation in a competitive system. More particularly, we have used the framework to test the behaviour of competitive agents in the well-known Agent Reputation and Trust (ART) Testbed. The RRAF agents built based on our framework have been run in different multi-agent scenarios, varying the composition of the agent population. As a result, the role of the parameter β in determining the agent performance has been correlated to some crucial parameters, such as the size of the agent population, the percentage of unreliable agents and the percentage of agents having a reputation model. The experimental results clarify that the advantage of using a reputation mechanism in addition to direct reliability is significant only for particular values of the above parameters.

The paper is organized as follows. In Section 2 we discuss some related work. In Section 3 we introduce our multi-agent scenario, while in Section 4 we describe the RRAF reliability-reputation model. In Section 5 we propose a study of the role of reputation with respect to reliability on the ART Testbed and, finally, in Section 6 we draw our conclusions.

2 Related Work

Several reliability and reputation models have been proposed in the past [9]. All of them provide an agent a with a metric for measuring both the reliability and the reputation of another agent b, based on the interactions between a and b and on the recommendations that the other agents of the community generate about b. Moreover, some approaches have been presented in the literature dealing with the integration of reliability and reputation [3, 6]. However, all these approaches leave to the user the task of setting the weights to be assigned to reliability and reputation for determining the overall preference value. Some of them, for instance that presented in [6], besides providing measures of reliability and reputation, also introduce a model for combining them into a synthetic measure. However, these approaches do not deal with competitive systems and do not take into account the costs for computing reliability and reputation measures. In order to provide a common platform on which different models can be compared against objective metrics, the Agent Reputation and Trust (ART) Testbed Competition was created in 2006 [4, 1]. The scenario defined for the ART platform takes place in the art appraisal domain: clients can ask the agents to appraise paintings, and agents may ask other agents for their assessments of paintings. The assessments are based on expertise values set by the ART testbed. Each agent can have a different reliability-reputation model for deciding which are the most suitable agents to ask for an opinion when appraising paintings. Different strategies have been proposed by the participants in the ART competitions. While most of the proposed agents exploit both reliability and reputation in their trust models, in [7] the authors propose to work directly with reliability, without considering reputation. Their decision is based on the consideration that few agents participate in a game, and therefore it is possible to directly evaluate the whole agent population without paying the fees associated with requests for recommendations. Moreover, these authors also observe that an agent asked for the reputation of another agent may not have sufficient knowledge of it.

3 The multi-agent scenario

In our scenario, we assume the existence of some clients that can request services from the agents of a multi-agent system. More particularly, we assume that all the services are related to the same domain D (e.g. e-Commerce) and each requested service falls in one of the categories associated with that domain (e.g. the e-Commerce categories present in eBay, such as Antiques, Art, Books etc.). All the categories of D are contained in a pre-determined set of categories SC. We suppose that when a client needs a service falling in a given category cat, he sends a request to the Agent System Manager ASM of the multi-agent system, which assigns the request to an agent. The assignment of the client's request to an agent is performed by the ASM at pre-determined temporal steps. When a new step begins, the ASM examines all the service requests performed by clients, and assigns each request to the agent considered the most suitable. The selection of the most suitable agent is performed by the ASM based on the effectiveness shown in the past by the agents in the category cat in which the request falls. When the assignment is made, the client must pay the selected agent a given price sp to actually obtain the service. In addition, during each temporal step, the agents of the community can interact with each other in order to exchange information. The interactions among agents follow this protocol:

– An agent a, in order to provide a client with a service falling in a given category cat, can decide to request the collaboration of another agent b. Before requesting this collaboration, in order to understand the expertise of b in the category cat, the agent a can ask a third agent c for a recommendation about b. This recommendation is an evaluation of the Quality of Service (QoS) generally associated with the services provided by b. The agent c can accept or refuse to give the requested recommendation and, if it accepts, a must pay it a given price rp. Moreover, a can also ask b itself to provide a self-declaration of its expertise, paying the same price rp. The obtained recommendations can be used by a to update its internal reputation model (see the next section). In words, a uses the gossip coming from the other agents in order to understand what the reputation of b is in the community. Note that the expertise declared by b about itself also contributes to form this reputation.
– If the agent a decides to obtain the collaboration of the agent b, it has to pay a price cp to b.
– At the end of the step, the agent a receives a feedback from the system manager ASM for each service provided by a during the step. The feedback contains an evaluation of the real quality of the service, obtained by the ASM by directly asking the client that received the service. In addition, the feedback also informs a about the quality of the contributions provided by the agents contacted by a to realize the service. This way, a can use the feedback to update its internal trust model (see the next section) about the agents of the system. Moreover, the feedback can be used by a to evaluate the recommendations received about the contacted agents.

Fig. 1. The scenario in which the agents operate.

We graphically represent this scenario in Figure 1, where the overall operation that leads to providing the client with a service is logically decomposed into 8 phases, namely: 1) the client requests a service from the ASM; 2) the ASM assigns the request to a selected agent; 3) the selected agent requests recommendations from other agents and 4) the requested agents provide the recommendations; 5) the selected agent requests collaboration from other agents and 6) the requested agents provide the collaboration; 7) the selected agent provides the service to the client; 8) the ASM provides a feedback to the selected agent.
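To give a concrete feeling of why recommendation requests are costly in this scenario, the following minimal sketch computes the net balance of one temporal step for the selected agent. The function name and the numeric values are our own illustrative assumptions, not part of the formal model.

```python
# Minimal sketch of the cost structure of one temporal step
# (illustrative names and prices; not part of the paper's formal model).

def step_balance(sp: float, rp: float, cp: float,
                 n_recommendations: int, n_collaborators: int) -> float:
    """Net earnings of the selected agent for one assigned request: it earns
    the service price sp from the client, pays rp for every recommendation
    it requests, and pays cp to every collaborator."""
    return sp - rp * n_recommendations - cp * n_collaborators

if __name__ == "__main__":
    # Buying many recommendations can erase the profit of a service, which is
    # why an agent must decide how much reputation information is worth buying.
    print(step_balance(sp=100.0, rp=1.0, cp=10.0,
                       n_recommendations=30, n_collaborators=3))  # 40.0
    print(step_balance(sp=100.0, rp=1.0, cp=10.0,
                       n_recommendations=80, n_collaborators=3))  # -10.0
```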

4 RRAF-Model: A representation of trust interactions

In this section, we introduce a framework that provides (i) a model to represent the scenario previously described and (ii) a methodology for computing the trust measures involved in this scenario. To this purpose, let AS be a list containing all the agents belonging to the multi-agent system, and let ai be the i-th element of AS. We associate with each agent ai the following matrices, whose values are real numbers belonging to the interval [0,1]:

– SRELi: the Service Reliability matrix, where each value SRELi(j, c) represents the reliability that the agent ai assigns to the services provided by the agent aj for the category c. Recall that reliability represents the subjective measure of the trust that an agent has in another agent. The value 0 (resp. 1) means no reliability (resp. complete reliability).
– REPi: the Reputation matrix, where each value REPi(j, c) represents the reputation that the agent ai assigns to the agent aj for the category c. Reputation is a measure of trust that an agent assigns to another agent based on recommendations coming from agents of the community. Although reputation is not based on a subjective evaluation by the agent, it is not an objective measure either, since each agent a computes the reputation of another agent b independently of how the other agents compute the reputation of b. Thus, it is correct to say that the value REPi(j, c) represents how the agent ai perceives the reputation of aj in the community. The value 0 (resp. 1) means minimum reputation (resp. maximum reputation).
– RRELi: the Recommendation Reliability matrix, where each value RRELi(j, c) represents the reliability that the agent ai assigns to the recommendations provided by the agent aj for the category c. In other words, RRELi(j, c) is a measure of how reliable the agent ai considers the suggestions coming from the agent aj about other agents and concerning the category c, i.e. it represents the reliability of the gossip coming from aj.
– PREFi: the Preference matrix, where each value PREFi(j, c) represents the overall preference that the agent ai assigns to the agent aj for the category c, based on both the reliability and the reputation perceived by ai.
– RECCi: the Recommendation matrix, where each value RECCi(j, k, c) represents the recommendation that the agent aj provided to the agent ai about the agent ak for the category c. Differently from the previous matrices, which have only two dimensions, RECC is a three-dimensional matrix.

The data structures described above are initialized by using cold start values, i.e. values that represent how much trust the agent accepts to initially place in another agent in the absence of information. A cold start value can be equal to 0, meaning a diffident approach of ai, equal to 1, meaning a confident approach, or equal to 0.5, meaning a neutral approach. Different cold start values can be used for the different matrices, since an agent can choose different cold start strategies for reliability, reputation and so on. Then, the matrices are updated at each step by the agent ai, as follows (a code sketch of the whole update cycle is given at the end of this section):

– Phase 1: Reception of the Recommendations. The agent ai receives, at the current step, some recommendations from the other agents, in response to previous recommendation requests. These recommendations are stored in the RECC matrix.

– Phase 2: Computation of SREL and RREL. The ASM sends to ai the feedbacks for each service s provided in the past step, where the contributions provided by other agents to ai in realizing s are evaluated. These feedbacks are contained in a matrix FEED, where each feedback FEED(s, j) is a real number belonging to [0, 1], representing the quality of the collaboration that the agent aj provided to the agent ai concerning the service s. A feedback equal to 0 (resp. 1) means minimum (resp. maximum) quality of the service. Based on these feedbacks, the agent ai updates both the matrices SREL and RREL. More particularly, in our approach we choose to compute the current reliability shown by an agent aj in its collaboration with ai by averaging all the feedbacks concerning aj. Therefore, denoting by Services(j) the set of the services of the category c provided by ai with the collaboration of aj at the previous step, the current service reliability shown by aj, which we denote by sr(j, c), is computed as

\[ sr(j, c) = \frac{\sum_{s \in Services(j)} FEED(s, j)}{|Services(j)|} \]

At each new step, this current reliability is taken into account for updating the element SRELi(j, c). We choose to compute this update by averaging the value of SRELi(j, c) at the previous step and the current reliability computed at the new step. Formally:

\[ SREL_i(j, c) = \alpha \cdot SREL_i(j, c) + (1 - \alpha) \cdot sr(j, c) \]

where α is a real value belonging to [0, 1], representing the importance that ai gives to the past evaluations of the reliability with respect to the current evaluation. In other words, α measures the importance given to the memory with respect to the current time. This coefficient assumes a significant importance in the design of the agent behaviour, and we discuss its role in detail in the next section. In an analogous way, the feedbacks are used to update the recommendation reliability RREL. The current recommendation reliability of an agent aj at a given step is computed by averaging all the errors made by aj in providing recommendations. In words, if aj recommended to ai the agent ak with a recommendation RECC(j, k, c), and the feedback for ak concerning a service s of the category c is FEED(s, k), the error made by aj with its recommendation is |RECC(j, k, c) − FEED(s, k)|. By averaging all the errors concerning services of the category c, we obtain an evaluation of the current precision of aj with respect to the recommendations relating to ak, that is

\[ \frac{\sum_{s \in Services(k)} |RECC(j, k, c) - FEED(s, k)|}{|Services(k)|} \]

Finally, by averaging this precision over the set Agents(j) of all the agents ak evaluated by aj in the previous step, we obtain the current recommendation reliability rr(j, c):

\[ rr(j, c) = \frac{1}{|Agents(j)|} \sum_{k \in Agents(j)} \frac{\sum_{s \in Services(k)} |RECC(j, k, c) - FEED(s, k)|}{|Services(k)|} \]

Now, in order to update the element RRELi(j, c), we use a weighted mean between the value of RRELi(j, c) at the previous step and the current recommendation reliability:

\[ RREL_i(j, c) = \alpha \cdot RREL_i(j, c) + (1 - \alpha) \cdot rr(j, c) \]

where α has the same meaning as in the case of the service reliability.
– Phase 3: Computation of REP. The recommendations contained in the matrix RECC are used by the agent ai to compute the reputations of the other agents of the community. In particular, ai computes the reputation of another agent aj as the average of all the recommendations received from the other agents of the community concerning aj. Formally:

\[ REP_i(j, c) = \frac{\sum_{k \in AS, k \neq i} RECC(k, j, c)}{|AS| - 1} \]

where we denote by AS the set of the agents of the community.
– Phase 4: Computation of PREF. The agent ai finally computes the overall preference measure PREFi(j, c) in the agent aj by taking into account both the service reliability SRELi(j, c) and the reputation REPi(j, c). In particular, a coefficient β is used to weight the importance of the service reliability with respect to the reputation. β is a real value belonging to the interval [0, 1], where β = 0 means that the agent, in order to evaluate the overall trust in another agent, does not assign any importance to the reliability and only considers the reputation. Vice versa, if β = 1, the agent only considers the reliability without using the contribution of the reputation. Formally:

\[ PREF_i(j, c) = \beta \cdot SREL_i(j, c) + (1 - \beta) \cdot REP_i(j, c) \]

At each step, the agent ai exploits the matrix PREF to select the most suitable candidates to request a collaboration.
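The following is a minimal, self-contained sketch of the data structures and of the four update phases, assuming NumPy and illustrative sizes, cold-start values and α, β settings. All identifiers (TrustModel, the phase method names, and so on) are our own naming for exposition, not part of a released RRAF implementation.

```python
import numpy as np

class TrustModel:
    """The matrices kept by a single agent a_i (all values in [0, 1])."""

    def __init__(self, n_agents, n_categories, i, cold_start=0.5,
                 alpha=0.7, beta=0.5):
        self.i = i                # index of the owning agent a_i
        self.alpha = alpha        # memory weight (past vs. current evaluation)
        self.beta = beta          # reliability vs. reputation weight
        shape = (n_agents, n_categories)
        self.srel = np.full(shape, cold_start)   # SREL_i
        self.rep = np.full(shape, cold_start)    # REP_i
        self.rrel = np.full(shape, cold_start)   # RREL_i
        self.pref = np.full(shape, cold_start)   # PREF_i
        # RECC_i(j, k, c): recommendation of a_j about a_k for category c.
        self.recc = np.full((n_agents, n_agents, n_categories), cold_start)

    def phase1_store_recommendation(self, j, k, c, value):
        """Phase 1: store the recommendation of a_j about a_k for category c."""
        self.recc[j, k, c] = value

    def phase2_update_srel(self, j, c, feedbacks):
        """Phase 2: sr(j, c) is the mean of FEED(s, j) over Services(j)."""
        sr = np.mean(feedbacks)
        self.srel[j, c] = self.alpha * self.srel[j, c] + (1 - self.alpha) * sr

    def phase2_update_rrel(self, j, c, feedbacks_by_agent):
        """Phase 2: rr(j, c) averages, over the agents k rated by a_j, the
        mean recommendation error |RECC(j, k, c) - FEED(s, k)| (following
        the paper's formula, rr is computed from the raw errors)."""
        errors = [np.mean([abs(self.recc[j, k, c] - f) for f in feeds])
                  for k, feeds in feedbacks_by_agent.items()]
        rr = np.mean(errors)
        self.rrel[j, c] = self.alpha * self.rrel[j, c] + (1 - self.alpha) * rr

    def phase3_update_rep(self, j, c):
        """Phase 3: REP_i(j, c) is the mean recommendation about a_j
        received from every agent other than a_i."""
        others = [k for k in range(self.recc.shape[0]) if k != self.i]
        self.rep[j, c] = np.mean(self.recc[others, j, c])

    def phase4_update_pref(self, j, c):
        """Phase 4: PREF_i(j, c) = beta * SREL_i + (1 - beta) * REP_i."""
        self.pref[j, c] = (self.beta * self.srel[j, c]
                           + (1 - self.beta) * self.rep[j, c])

# Example: agent 0 updates its view of agent 3 for category 1.
model = TrustModel(n_agents=10, n_categories=5, i=0, beta=0.3)
model.phase1_store_recommendation(j=2, k=3, c=1, value=0.9)
model.phase2_update_srel(j=3, c=1, feedbacks=[0.8, 0.6])
model.phase3_update_rep(j=3, c=1)
model.phase4_update_pref(j=3, c=1)
print(model.pref[3, 1])
```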

5 A Study of the role of the β coefficient on the ART Testbed

In this section, we perform some experiments using the ART platform. On ART, each agent takes the role of an art appraiser who gives appraisals of paintings presented by its clients. In order to fulfill its appraisals, each agent can ask other agents for opinions. The agents are also in competition with one another and thus may lie in order to fool opponents. The game is supervised by a simulator that runs in a synchronous, step-by-step manner, and it can be described as follows:

– The clients, simulated by the simulator, request opinions on paintings from the appraiser agents. Each painting belongs to an era. For each appraisal, an agent earns a given amount of money that is stored in its bank account BA.
– Each agent has a specific expertise level in each era, assigned by the simulator. The error made by an agent while appraising a painting depends on this expertise and on the price the appraiser decides to spend for that appraisal.
– An agent cannot appraise its paintings itself, but has to ask other agents to obtain opinions. Each opinion has a fixed cost for the agent.
– Each agent can obtain recommendations about another agent from the other players. Each recommendation has a given price. This way, the agent can build a reputation model of the other agents.
– Agents weight each received opinion in order to calculate the final evaluation of the paintings.
– At the end of each step, the accuracy of the agents' final evaluations is compared, in order to determine the client share of each agent during the next step. In other words, the most accurate agents receive more clients.
– At the end of each step, the simulator reveals the real value of each painting, thus allowing each agent to update its reliability and reputation model.
– At the end of the game, the winner of the competition is the agent having the highest bank account.

The purpose of our experiments is to study how the performance of an agent changes depending on the population size N, the percentage of unreliable agents P and the percentage of agents having a reputation model R. For our experiments, we have built an agent implementing the RRAF model, and we have run some games in the presence of different populations of agents. In particular, we have realized the following three experiments (a sketch of the experimental procedure is given at the end of this section).

1: Relationship between the coefficient β and N. For this experiment, we have considered 6 different values of the agent population size, namely N = 30, N = 50, N = 70, N = 100, N = 150, N = 200. For each of these values, we have run 11 ART games, where an RRAF agent participates in each game, using a different value of β. In particular, the 11 values of β we have considered are 0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1. For each game, besides the RRAF agent, we have run as competitors a population of N − 1 Simplet agents. Simplet is an agent that participated in the 2008 ART Competition, whose software can be downloaded at the ART site [1], and that uses a reliability-reputation model. We have programmed 50 percent of these agents with a low availability to pay for the opinions, thus generating unreliable answers to the opinion requests. This low availability is represented by the internal ART parameter cg = 1. The other 50 percent of the Simplet agents have been programmed with cg = 15, that is, a price for generating opinions with high reliability. This composition has been fixed for all the games, so the unique variable is the β coefficient. In Figure 2 we report the results of this experiment, in terms of variation of the bank amount BA of the RRAF agent against the different values of β.

Fig. 2. Variation of the bank amount BA against the β coefficient for different sizes N of the agent population (percentage of unreliable agents P = 50, percentage of agents without reputation model R = 0).

This variation is represented for each population size N. We note that, if the population size is very small (e.g., N = 30), the maximum bank amount is obtained using β = 1, i.e. by the agent that does not assign any importance to the reputation and only uses the reliability. On the contrary, if the population size is high (e.g. N = 200), the maximum bank amount is reached using β = 0.3, that is, by assigning more importance to the reputation than to the reliability. Overall, the figure shows that the role of the reputation becomes more significant when the population size increases.

2: Relationship between the coefficient β and P. In this experiment, we have considered 8 different agent populations, each characterized by a different percentage P of unreliable agents. Namely, the 8 values of P we have considered are 10, 20, 30, 40, 50, 60, 70, 80. For each of these values, we have run 110 ART games, where an RRAF agent participates in each game, using a different value of β. In particular, we have considered the set of values {β1, β2, .., βk}, where β1 = 0, βk = 1 and βi+1 = βi + 0.01. For each game, besides the RRAF agent, we have run as competitors a population of N − 1 Simplet agents, where a percentage P of these agents generates unreliable opinions, using an opinion cost cg = 1, while the other agents generate reliable opinions, using cg = 15. For each population of agents, we have determined the value βMax, representing the value of β that generated the maximum bank account for that population. In Figure 3 we represent the variation of βMax against the percentage P. We remark that, in correspondence with a high value of P (e.g. P = 80), we obtained a small value of βMax (e.g. βMax = 0.34), while for a small value of P (e.g. P = 10) we obtained a high value of βMax (e.g. βMax = 0.74).

Fig. 3. Variation of the value βMax against the percentage of unreliable agents P, with population size N = 150 and percentage of agents without reputation model R = 0.

In words, the experiment shows that the higher the percentage of unreliable agents, the greater the influence of the reputation with respect to the reliability.

3: Relationship between the coefficient β and R. In our last experiment, we have considered 8 different agent populations, each characterized by a different percentage R of agents provided with a reputation model. In particular, the 8 values of R we have considered are 0, 10, 20, 30, 40, 50, 60, 70. For each of these values, we have run 110 ART games, where an RRAF agent participates in each game, using a different value of β. In particular, as in the previous experiment, we have considered the set of values {β1, β2, .., βk}, where β1 = 0, βk = 1 and βi+1 = βi + 0.01. For each game, besides the RRAF agent, we have run as competitors a population of N − 1 Simplet agents, where a percentage R of these agents are Simplet agents with their reputation model, while the other agents are honest agents without a reputation model. For each population of agents, we have determined the value βMax, having the same meaning as in the previous experiment. Figure 4 shows the variation of βMax against the percentage R. It is easy to note that, in correspondence with a high value of R (e.g. R = 70), we obtained a high value of βMax (e.g. βMax = 0.71), while for a small value of R (e.g. R = 0) we obtained a small value of βMax (e.g. βMax = 0.41). This experiment shows that the higher the percentage of agents having a reputation model, the higher the influence of the reputation with respect to the reliability.

Fig. 4. Variation of the value βMax against the percentage R of agents having a reputation model, with population size N = 150 and percentage of unreliable agents P = 50.
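As a schematic of the procedure followed in the experiments above, the sketch below sweeps β over a grid and records the value that maximizes the final bank account. Here run_game is a hypothetical wrapper around a single testbed game; its name and signature are ours, not part of the ART API.

```python
import numpy as np

def best_beta(run_game, n_agents, step=0.01):
    """Sweep beta over {0, step, 2*step, ..., 1}, run one game per value,
    and return the beta_max that maximizes the RRAF agent's final bank
    account BA, together with the whole BA curve."""
    betas = np.round(np.arange(0.0, 1.0 + step, step), 2)
    ba = {beta: run_game(n_agents=n_agents, beta=beta) for beta in betas}
    beta_max = max(ba, key=ba.get)
    return beta_max, ba

# Illustrative stand-in for a real game; a true run would launch the testbed
# with N - 1 Simplet competitors and return the RRAF agent's final BA.
def run_game(n_agents, beta):
    return -(beta - 0.3) ** 2  # toy curve peaking at beta = 0.3

print(best_beta(run_game, n_agents=200)[0])  # 0.3
```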

6 Conclusions

The large number of trust-based approaches in competitive multi-agent systems that emerged in recent years implies the necessity of clearly understanding the advantages and the limitations of using trust measures to improve the effectiveness of these systems.


In particular, the two main measures considered in the past, i.e. reliability and reputation, seem to be strongly correlated to the characteristics of the agent population. However, in order to realize a sound and effective analysis, it is necessary to use a model for competitive environments that considers the reliability of agents both in providing services and in generating recommendations, as well as a mechanism to integrate reliability and reputation into a synthetic measure to be exploited by an agent for selecting the most promising interlocutors. In this work, we propose a framework, called RRAF, that complies with the above requirements. In particular, this framework makes it possible to build competitive agents, provided with an internal reliability-reputation model, where the importance of reliability with respect to reputation is measured by a coefficient denoted as β. We have run RRAF agents on the well-known ART testbed, in order to understand the correlation between the β coefficient and three main characteristics of the agent population, namely the population size, the percentage of unreliable agents and the percentage of agents provided with a reputation model. The experimental results confirm the supposition, proposed by some authors [7], that reputation does not have a significant role if the size of the agent population is too small. However, the experiments also show that in the presence of a large agent population, the use of a reputation model can lead to an advantage of about 40 percent with respect to the use of reliability alone. As another interesting result, our study shows that the role of the reputation becomes sufficiently significant only in the presence of a sufficient percentage of unreliable agents, and only in the presence of a sufficient number of agents provided with a reputation model. As for our ongoing research, we are planning to explore the dependence of β on other characteristics of the agent population that might have, in our opinion, a significant impact, such as the percentage of agents that are reliable from the viewpoint of the provided services but unreliable as providers of recommendations. Moreover, we plan to exploit the knowledge of these correlations to build an agent model able to dynamically adapt its reputation model (in terms of the β coefficient) to possible variations over time of the characteristics of the agent population.

References

1. ART URL. http://megatron.iiia.csic.es/art-testbed/index.html, 2010.
2. Babak Khosravifar, Maziar Gomrokchi, Jamal Bentahar, and Philippe Thiran. Maintenance-based trust for multi-agent systems. In AAMAS '09: Proceedings of the 8th International Conference on Autonomous Agents and Multiagent Systems, pages 1017–1024, Richland, SC, 2009. International Foundation for Autonomous Agents and Multiagent Systems.
3. Touhid Bhuiyan, Yue Xu, and Audun Jøsang. Integrating trust with public reputation in location-based social networks for recommendation making. In Proceedings of the IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, pages 107–110, 2008.
4. Karen Fullam, Tomas Klos, Guillaume Muller, Jordi Sabater-Mir, K. Suzanne Barber, and Laurent Vercouter. The Agent Reputation and Trust (ART) testbed. In Trust Management, 4th International Conference, iTrust 2006, Pisa, Italy, May 16-19, 2006, Proceedings, volume 3986 of Lecture Notes in Computer Science, pages 439–442. Springer, 2006.
5. T. Grandison and M. Sloman. Trust management tools for internet applications. In Proceedings of the First International Conference on Trust Management (iTrust 2003), pages 91–107. Springer, 2003.
6. Trung Dong Huynh, Nicholas R. Jennings, and Nigel R. Shadbolt. An integrated trust and reputation model for open multi-agent systems. Autonomous Agents and Multi-Agent Systems, 13(2):119–154, 2006.
7. Víctor Muñoz and Javier Murillo. Agent UNO: Winner in the 2nd Spanish ART competition. Inteligencia Artificial, 12(39):19–27, 2008.
8. Sung-Jun Na, Kee-Hyun Choi, and Dong-Ryeol Shin. Reputation-based service discovery in multi-agents systems. In Proceedings of the IEEE International Workshop on Semantic Computing and Applications, pages 126–128. IEEE Computer Society, 2008.
9. Jordi Sabater and Carlos Sierra. Review on computational trust and reputation models. Artificial Intelligence Review, 24(1):33–60, 2005.
10. Sarvapali D. Ramchurn, Dong Huynh, and Nicholas R. Jennings. Trust in multi-agent systems. The Knowledge Engineering Review, 19:1–25, 2004.