Generic Framework for Modelling Dialogue with Information from Sources of Varying Trustworthiness
Gideon Ogunniye1,2, Timothy J. Norman1, and Nir Oren1
1Computing Science Department, University of Aberdeen, Scotland, UK. {g.ogunniye,t.j.norman,n.oren}@abdn.ac.uk
2Computer Science Department, Adekunle Ajasin University, Akungba Akoko, Ondo State, Nigeria
Background
Scenario
• Argumentation provides a pragmatic approach to reasoning in the presence of conflicts and uncertainty, while the trust field gives us methodologies for computing trust in information and in the sources providing it
• Multi-agent systems researchers are exploiting the strengths of these technologies to minimise the uncertainties inherent in interactions among autonomous agents
• Some existing assumptions:
a) There is a bidirectional link between arguments and the trustworthiness of the sources providing them
b) Arguments grounded in information from more trustworthy sources can defeat arguments grounded in information from less trustworthy sources [3]
c) The observed quality of an argument should feed back into the assessment of its source
• We present a Trust-based Argumentation Framework for Deliberation Dialogue
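Assumption (b) can be sketched as a minimal Dung-style argumentation framework [2] in which an attack counts as a defeat only when the attacking argument's source is at least as trustworthy as the target's. The argument names and trust scores below are hypothetical, and the code is an illustration of that assumption, not the framework presented here:

```python
# Sketch only: grounded semantics over trust-filtered attacks.

def grounded_extension(args, defeats):
    """Iteratively accept arguments whose defeaters are all rejected,
    and reject arguments attacked by an accepted argument."""
    accepted, rejected = set(), set()
    changed = True
    while changed:
        changed = False
        for a in args:
            if a in accepted or a in rejected:
                continue
            attackers = {b for (b, c) in defeats if c == a}
            if attackers <= rejected:      # every defeater already rejected
                accepted.add(a); changed = True
            elif attackers & accepted:     # defeated by an accepted argument
                rejected.add(a); changed = True
    return accepted

# Hypothetical arguments with trust scores for their sources.
trust = {"A": 0.9, "B": 0.4, "C": 0.7}
attacks = [("A", "B"), ("B", "C")]
# Assumption (b): an attack succeeds only from an equally or more trusted source.
defeats = [(x, y) for (x, y) in attacks if trust[x] >= trust[y]]

print(grounded_extension(trust.keys(), defeats))  # {'A', 'C'}
```

Here B's attack on C is discarded because B's source (0.4) is less trusted than C's (0.7), so C survives even though it is attacked.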
Problem
• Inconsistent information or information sources in a dialogue can hinder the attainment of the dialogue's goals
• Disputes occur in a dialogue as a result of conflicts among the beliefs of participating agents
• Trust in information or information sources should not be treated as a monolithic, absolute, static and transitive concept
[Figure: Estimation bias (mean absolute error) as a function of the percentage of malicious sources. Series: All Sources, Diversity sampling, Trust sampling, Random sampling.]
• Socio-cognitive models of trust inspired by [1] handle some of these limitations
Question: How can we elicit trustworthy information from sources with varying degrees of trustworthiness in a multi-party dialogue?
Approach
Application & Relevance
Like other forms of communication in multi-agent systems, a dialogue framework must be robust in order to minimise uncertainties in the interactions among agents
• A trust model for selecting participants for a dialogue
• A framework for automated decision making and recommendation in a coalition
We integrate argumentation, trust and belief revision to elicit trustworthy information in multi-party dialogue
• A framework for collaborative intelligence analysis (e.g. military operations)
Future Research Aims:
• Identify inconsistent information sources in a dialogue
• Handle conflicts in the viewpoints of trustworthy sources
Generic Framework for Dialogue Modelling: a framework that can be used for different types of dialogue
Trust Evaluation: formalising a trust model for computing dynamic trust in information and information sources before, during and after a dialogue
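Computing dynamic trust means a source's score must change as its contributions are confirmed or contradicted over the course of a dialogue. As a minimal illustration, and not the trust model formalised here, a beta-reputation-style update counts positive and negative evidence about a hypothetical source:

```python
# Sketch only: a simple dynamic trust score in the spirit of
# beta-reputation models, not the poster's formal trust model.

class SourceTrust:
    """Trust as the expected value of Beta(pos + 1, neg + 1)."""

    def __init__(self):
        self.pos = 0  # contributions later confirmed
        self.neg = 0  # contributions later contradicted

    def update(self, confirmed: bool):
        if confirmed:
            self.pos += 1
        else:
            self.neg += 1

    def score(self) -> float:
        return (self.pos + 1) / (self.pos + self.neg + 2)

t = SourceTrust()
print(t.score())             # 0.5 prior, before any evidence
for outcome in [True, True, False, True]:
    t.update(outcome)
print(round(t.score(), 2))   # 0.67 after 3 confirmations, 1 contradiction
```

A score like this can be recomputed before, during and after a dialogue, so trust rises and falls with observed behaviour rather than being fixed in advance.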
References
[1] Castelfranchi, C. and Falcone, R. (2010). Trust Theory: A Socio-Cognitive and Computational Model, volume 18. John Wiley & Sons.
[2] Dung, P. M. (1995). On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial Intelligence, 77(2):321-357.
[3] Parsons, S., Tang, Y., Sklar, E., McBurney, P., and Cai, K. (2011). Argumentation-based reasoning in agents with varying degrees of trust. In Proceedings of the 10th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), Volume 2, pages 879-886. IFAAMAS.
Research funded by the Tertiary Education Trust Fund (TETFund)