The Agent Argumentation Architecture Revisited⋆

Maxime Morge∗ and Kostas Stathis∗∗

∗ Dipartimento di Informatica, Università di Pisa, Largo B. Pontecorvo 3, I-56127 Pisa, Italy
[email protected], http://maxime.morge.org
∗∗ Department of Computer Science, Royal Holloway, University of London, Egham, TW20 0EX, UK
[email protected], http://www.cs.rhul.ac.uk/~kostas

⋆ This work is partially supported by the Sixth Framework IST programme of the EC, under the 035200 ARGUGRID project.

Abstract. The Agent Argumentation Architecture (AAA) has been recently proposed [1] as an abstract model by means of which an autonomous agent argues with itself to manage its motivations and arbitrate its possibly conflicting internal goals. In an attempt to show how the AAA model can be instantiated, we revisit the original model with a concrete argumentation framework illustrating how the internal dialectic process can be specified as a dialogue-game between internal components representing the agent’s mental faculties. The resulting framework is exemplified with an intuitive case, illustrating the importance of argumentation to develop models of cognitive agents with motivations.

1 Introduction

Argumentation is abstractly defined as the interaction of different arguments for and against some conclusion. Over the last few years, argumentation has gained increasing importance in multi-agent systems, mainly as a mechanism for rational interaction, viz., interaction which involves the giving and receiving of reasons. As a result, argumentation has made solid contributions to the practice of multi-agent systems in such diverse domains as scientific inquiry, legal disputes, business negotiation, logistics, and GRID computing. The premise of this work is that rational agents argue with other agents externally because this is what they do internally to choose actions in the world in which they are situated. This rather strong argumentation perspective acknowledges the existence of an innate dialectical process that is internal to agents and externalized via interaction. In particular, the internal dialectic is not considered a monologue (of the kind one typically finds in proof procedures) but is proposed as a complex dialogue-game
between different processes inside the agent's mind (of the kind one typically finds in dialogue-games played amongst multiple agents in complex application domains).

The vision of a generic Agent Argumentation Architecture (AAA) has been outlined in [1] in order to formulate what it means for a rational agent to be autonomous. This architecture relies upon the hierarchy of needs described in the motivational theories of Abraham Maslow [2] and proposes a new hierarchy that can be used in practical applications of rational and autonomous agents. An important aspect of the AAA model is that it uses the intentional stance characteristic of the BDI architecture [3] but puts the emphasis on the motivations of the agent. We view these motivations as arguments describing the conditional decisions to achieve goals. In this context, internal processes are seen as mental faculties of the agent. Faculties encapsulate different goals, propose new actions or assume beliefs that support the motivations they are responsible for, and argue why these goals should be adopted by the agent as a whole. This overall process can, according to AAA, be viewed and modeled as a dialectical argumentation game, in which individual faculties must argue for the value of their contribution to the agent of which they are a part.

This paper seeks to complement the conceptual and abstract model of the AAA by investigating how to use the architecture to build specific faculties for a rational agent. Although guidelines for faculties have been proposed in [1], how to use argumentation technology to interpret reasoning within faculties and the overall argumentation game was abstracted away. In this context, we also seek to show how to exemplify the argumentation techniques required by applying them to a specific case. Again, in [1] suggestions of possible applications were discussed but the model was not exemplified. Thus, by providing a formal argumentation framework, a concrete dialogue-game, and an example, the contribution of our work is to demonstrate what it takes to specify, and subsequently program, the motivations of agents under the AAA model.

The paper is organised as follows. Section 2 introduces the original AAA model and discusses how we instantiate it within our reinterpretation. Section 3 presents the argumentation framework used to support and interpret this architecture. Section 4 presents the dialectical argumentation game. Section 5 summarises our work, discusses related work and, finally, presents our plans for the future.

2 The Agent Argumentation Architecture

We present here the AAA model. Firstly, Section 2.1 recalls the AAA model proposed in [1]. Secondly, Section 2.2 proposes an instance of the AAA model.

2.1 The original model

The primary concern of the AAA model has been to study ways in which a rational agent can be considered to be autonomous. The model has been influenced by Maslow's theory of motivations and in particular his proposal on the hierarchy of needs [2] (see Table 1, column 1). Maslow's work has been influential in the understanding of human drive and its relationship to the expression of personality. As a result, Maslow's hierarchy of needs was considered to be an important starting point for the notion of autonomy as it ought to manifest in agents seen as autonomous entities. Although Maslow's work was chosen because it attempts to relate a model of underlying internal processes to observable traits, an initial analysis suggested that it is not practical to build a software agent using precisely these traits. As a result, the work on AAA reinterpreted Maslow's framework with concepts that are more suitable for a software agent. The resulting framework has many similarities to that of Maslow but is expressed in a language that is more amenable to software development. The relationship between Maslow's hierarchy of needs and that of AAA is shown in Table 1: the first column shows Maslow's hierarchy, while the second column shows how AAA has reinterpreted it for software agents.

Fig. 1. The agent argumentation architecture [1]

Based on the above reinterpretation of the Maslow framework, the conceptual organisation of the AAA [1] is shown in Fig. 1. The AAA consists of the following components: an Argumentation State (AS), a Knowledge Base (KB), a number of faculties, an attender module (managing the flow of incoming information), and a planner/dispatcher module responsible for making plans from goals and performing actions as required. The interaction between components is organized as a complex game consisting of argument moves for achieving goals, preferences, and a set of phases that informally describes the overall argumentation procedure. This procedure uses AS as a shared memory structure for its current state, including the arguments put forward and not yet defeated or subsumed, accessible by the faculties and the input and output modules.

1. Physiological (Maslow): the immediate requirements maintaining the function of an individual.
   Operational (AAA): the immediate protection of the agent and its continuing operation.
2. Safety (Maslow): the search for stability and freedom from fear and anxiety in the individual's continuing context, rather than freedom from immediate danger.
   Self-benefit (AAA): aspects of the agent's behavior that are directly related to its ongoing protection and individual benefit.
3. Belongingness/Love (Maslow): the apparent human requirement to seek out and maintain immediate contact with other individuals in a caring and cared-for context (giving and receiving of affection).
   Peer (AAA): needs of individually identified entities, human or artificial, with which the agent has specific relationships.
4. Esteem (Maslow): needs that are further divided into two primary categories, the need for self-esteem, ... the desire for strength, achievement ... independence and freedom, and the need for the esteem of others, ... status, fame and glory, dominance, recognition, attention, importance, dignity, or appreciation.
   Community (AAA): requirements of an agent's place in an electronic society that might contain both other software agents and humans, with which the agent must interact; includes constraints that come from both informal and institutionalized groups.
5. Self-Actualization (Maslow): individual drives developing capabilities to the highest degree possible; what humans can be, they must be; this level includes invention and aesthetic (musical, artistic, and, presumably, scientific) achievement.
   Non-Utilitarian (AAA): longer-term activities, not directly related to tasks that offer an immediate or readily quantified benefit for the agent.

Table 1. The hierarchies of needs by Maslow and AAA juxtaposed (from [1])

The KB acts, conventionally, as a long-term repository of assertions within the system. It was assumed that the elements held in the KB take the form of conjectures, rather than expressions of fact. The KB is partitioned according to the Maslow motivation types (KB = Km1 ∪ Km2 ∪ Km3 ∪ Km4 ∪ Km5, as indicated in Fig. 1) and according to the faculties (KB = Kf1 ∪ ... ∪ Kfn). Faculties (f1, . . . , fn) are responsible for particular aspects of the whole agent's possible agenda, which taken together comprise every aspect that the agent can address. Each faculty may therefore argue that a goal should be established to actively achieve some aspect of that agenda (or to avoid some situation that would be detrimental to the agenda). Equally, it must monitor the goals and actions proposed by other faculties to determine whether the consequences of those goals or actions would interfere with or contradict some aspect of its own agenda. There is no winner or loser; each faculty is (or at least should be) working towards the overall advantage of the whole agent. Essentially, each faculty can have an opinion of what is best for the agent as a whole, but from its limited viewpoint, and must successfully argue its case against other possibly competing views for this opinion to prevail and become incorporated into the agent's overt behavior. Once goals are created, a planner/dispatcher module is responsible for creating prototype plans from agreed goals and effecting actions from agreed plans. There is also an attender module monitoring a continuous stream of incoming information, which comprises at least the following types of item: requests to perform activities on behalf of other agents; broad suggestions of things the agent might like to do or adopt (for instance, adverts, or suggestions by providers that the recipient would be advantaged by purchasing or doing something); news items, indicating the outcome of various previous actions and activities; and solicited or unsolicited assertions from other agents, which the current agent may or may not wish to adopt according to its current motivational strategy. These are delivered to the argumentation state and discarded soon after, each faculty having a brief time to inspect them and adopt them into the KB if required.

2.2 Revisiting the Agent Argumentation Architecture

In an attempt to make the AAA model concrete, we present here an instance of the AAA model seeking to illustrate how to build an autonomous agent in practice. Our proposal relies upon the simplest set of faculties that corresponds to the AAA hierarchy of needs discussed in Table 1, column 2. Our proposed model is shown in Fig. 2. An important point about our interpretation is that we consider the main knowledge bases of the AAA model, Km1 . . . Km5 (Fig. 1), as placeholders for the reasons for maintaining a particular behaviour towards a goal. We explicitly call these reasons motivations and we denote them by Moperational . . . Mnon-utilitarian (Fig. 2). Unlike the abstract version of AAA, where motivations could have more than one faculty and knowledge base, in our reinterpretation a motivation corresponds to a single faculty controlling a single knowledge base. For instance, the operational motivation of the agent, Moperational, is represented as a set of rules stored in the knowledge base Kfoper.

Fig. 2. The AAA instantiated

This knowledge base is controlled by the operational faculty foper, which selects arguments to propose according to the rules in Kfoper, the current AS, and the moves of the argumentation game (to be discussed later in Section 3). The other motivations are represented similarly. A final point about our reinterpretation is that its main focus is to illustrate how the dialogue-game between faculties is specified and implemented; it abstracts away from the sensing enabled by the attender and the planning required by the planner/dispatcher modules. For simplicity, we assume that the attender module contributes to the preparation of the initial state of the dialectic process and that the planner module uses its outcome. However, the precise relationship of these components with the dialectical process is beyond the scope of this work.

3 Argumentation Framework

We present here the computational argumentation framework used within the AAA architecture. Firstly, we introduce the objects, i.e. the statements. Secondly, we define the arguments. Thirdly, we outline their interaction in order to give the semantics of the framework.

3.1 Knowledge representation

We consider here a knowledge representation framework (KF) which allows a complex representation of statements (goals, decisions, knowledge) and priorities.

Definition 1 (Knowledge framework). A knowledge representation framework is a tuple KF = ⟨L, Asm, I, T, P⟩ where:

– L is a formal language consisting of a finite set of sentences, called the representation language;
– Asm is a set of sentences in L which are taken for granted, called assumptions;
– I is a binary relation over atomic formulas in L which is asymmetric, called the incompatibility relation;
– T is a finite set of rules built upon L, called the theory;
– P ⊆ T × T is a transitive, irreflexive and asymmetric relation over T, called the priority relation.

The formulae in L can be interpreted as:

– goals representing the state of the world faculties want to make true (e.g. Done(bob, donate)); the special literal motiv(ag) represents the top-level goal of the agent;
– decisions representing the actions which can be performed, e.g. Offer(bob, picture);
– beliefs about the state of the world, e.g. Friend(al, bob);
– names of rules in T, which are unique, e.g. self(fa4, r1(bob)).

Also, L admits strong negation (classical negation) and weak negation (negation as failure). A strong literal is an atomic first-order formula, possibly preceded by strong negation ¬. A weak literal is a literal of the form ∼L, where L is a strong literal. We consider the priority relation P on the rules in T, which is transitive, irreflexive and asymmetric. R1 P R2 can be read "R1 has priority over R2". There is no priority between R1 and R2 either because R1 and R2 are ex æquo, or because R1 and R2 are not comparable. For instance, nu(fa1, r1) P self(fa2, r2) means that the rules in KBnu have priority over the rules in KBself.

We adopt an assumption-based argumentation approach [4] to reason about beliefs, goals, decisions and priorities as in [5]. That is, agents can reason under uncertainty. Certain literals are assumable, meaning that they can be assumed to hold in the KB as long as there is no evidence to the contrary. Decisions (e.g. Offer(bob, picture) ∈ Asm) as well as some beliefs (e.g. ¬Give(al, ag, picture) ∈ Asm) are assumable literals.

The incompatibility relation captures conflicts. We have L I ¬L, ¬L I L and L I ∼L. It is not the case that ∼L I L. For instance, Done(ag, hung) I Done(ag, donate) and Done(ag, donate) I Done(ag, hung), whatever the agent ag is. We say that two sets of sentences Φ1 and Φ2 are incompatible (Φ1 I Φ2) iff there is at least one sentence φ1 in Φ1 and one sentence φ2 in Φ2 such that φ1 I φ2.

A theory is a collection of rules.

Definition 2 (Theory). A theory T is an extended logic program, i.e. a finite set of rules of the form R: L0 ← L1, . . . , Lj, ∼Lj+1, . . . , ∼Ln with n ≥ 0, each Li (with i ≥ 0) being a strong literal in L. The literal L0, called the head of the rule (denoted head(R)), is a statement. The finite set {L1, . . . , ∼Ln}, called the body of the rule, is denoted body(R). R, called the name of the rule, is an atom in L.

All variables occurring in a rule are implicitly universally quantified over the whole rule. A rule with variables is a scheme standing for all its ground instances. For simplicity, we assume that the names of rules appear neither in the bodies nor in the heads of rules, thus avoiding self-reference problems.

In our scenario, we consider the specific case of the agent bob. The KBases and the personality of bob are depicted in Table 2. The incompatibilities are the following:

Done(ag1, donate) I Done(ag2, hung)    Done(ag1, hung) I Done(ag2, donate)
Done(al, donate) I Done(bob, donate)   Done(bob, donate) I Done(al, donate)
Done(al, hung) I Done(bob, hung)       Done(bob, hung) I Done(al, hung)
Done(al, hung) I Done(bob, …)          Done(bob, …) I Done(al, hung)

Non-utilitarian faculty. bob may desire that the picture is donated, nu(fa1, r1(ag)).
Community faculty. bob desires that his superior carla hangs the picture, comm(fa2, r1(ag)).
Peer faculty. bob desires that his friend al hangs the picture, peer(fa3, r1(ag)).
Self-benefit faculty. bob desires to hang the picture by himself, self(fa4, r1(bob)).
Operational faculties. bob's observation faculty works on percepts, e.g. assumptions such as ¬Give(al, ag, picture), common sense beliefs and possible decisions in the environment in which the agent is operating, e.g. oper(fa7, r1(ag)), and safe operation, e.g. the need of the agent to protect itself, oper(fa5, r1(bob)).
Personality. Amongst the faculties assumed, bob prefers non-utilitarian statements rather than those that are operational, self-beneficial, or that benefit his peers/community. Contrary to the other components, the personality is not embodied by faculties representing some rules/assumptions; instead, it is encoded by preferences between these rules.

KBnu
  T: nu(fa1, r1(ag)): motiv(ag) ← Done(ag, donate)

KBcomm
  T: comm(fa2, r1(ag)): motiv(ag) ← Done(ag, hung), Authority(ag, bob)
  Asm: Authority(carla, bob)

KBpeer
  T: peer(fa3, r1(ag)): motiv(ag) ← Done(ag, hung), Friend(ag, bob)
  Asm: Friend(al, bob)

KBself
  T: self(fa4, r1(bob)): motiv(bob) ← Done(bob, hung)

KBoper
  T: oper(fa5, r1(bob)): motiv(ag) ← Done(bob, protect)
     oper(fa7, r1(ag)): Done(ag, donate) ← Offer(ag, picture), Have(ag, picture)
     oper(fa7, r2(ag)): Done(ag, hung) ← Hang(ag, picture), Have(ag, picture), Have(ag, hammer), Have(ag, nail)
     oper(fa7, r3(ag)): Done(ag, protect) ← Threat(ag, hammer), Have(ag, hammer)
     oper(fa7, r4(ag1, ag2, res)): Have(ag1, res) ← Control(ag1, res), ¬Give(ag1, ag2, res)
     oper(fa7, r5(ag1, ag2, res)): Have(ag1, res) ← Control(ag2, res), Give(ag2, ag1, res)
  Asm: Control(al, picture), Control(bob, hammer), Control(carla, nail),
       ¬Give(al, ag, picture), ¬Give(bob, ag, hammer), ¬Give(carla, ag, nail),
       Offer(ag, picture), Hang(ag, picture), Threat(ag, hammer),
       ¬Give(ag1, ag2, res), Give(ag2, ag1, res)

pers
  P: nu(fa1, r1) P comm(fa2, r2), nu(fa1, r1) P peer(fa2, r2),
     nu(fa1, r1) P self(fa2, r2), nu(fa1, r1) P oper(fa2, r2)

Table 2. The KBases and personality of bob
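To make Definition 1 and Table 2 more tangible, the following is a minimal sketch (ours, not part of the original paper) of how such a knowledge framework could be encoded. The class names Rule and KF and the string encoding of literals are assumptions introduced here for exposition only.

```python
# A minimal sketch (illustrative only): one possible Python encoding of a
# knowledge framework KF = (L, Asm, I, T, P).
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Rule:
    name: str            # unique rule name, e.g. "nu(fa1,r1(al))"
    head: str            # a strong literal
    body: tuple = ()     # strong/weak literals; "~L" could mark weak negation

@dataclass
class KF:
    rules: list                                       # the theory T
    assumptions: set                                  # Asm
    incompatible: set = field(default_factory=set)    # pairs (L1, L2) in I
    priority: set = field(default_factory=set)        # pairs (r1, r2): r1 P r2

# A fragment of bob's knowledge bases (cf. Table 2), instantiated for al
bob = KF(
    rules=[
        Rule("nu(fa1,r1(al))", "motiv(al)", ("Done(al,donate)",)),
        Rule("oper(fa7,r1(al))", "Done(al,donate)",
             ("Offer(al,picture)", "Have(al,picture)")),
        Rule("oper(fa7,r4(al,bob,picture))", "Have(al,picture)",
             ("Control(al,picture)", "¬Give(al,bob,picture)")),
    ],
    assumptions={"Offer(al,picture)", "Control(al,picture)",
                 "¬Give(al,bob,picture)"},
    incompatible={("Done(al,donate)", "Done(bob,donate)"),
                  ("Done(bob,donate)", "Done(al,donate)")},
    priority={("nu(fa1,r1(al))", "oper(fa7,r1(al))")},
)
```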

3.2 Arguments

Due to the abductive nature of goal-oriented reasoning, arguments are built by reasoning backward. Contrary to [6, 7, 4], our definition of arguments focuses on the candidate rules in order to distinguish between several distinct arguments giving rise to the same conclusion.

Definition 3 (Argument). Let KF = ⟨L, Asm, I, T, P⟩ be a knowledge representation framework. An argument a for a statement α ∈ L (denoted conc(a)) is a deduction of that conclusion whose premise is a set of rules (denoted rules(a)) and assumptions (denoted asm(a)) of KF. The top-level rule of a (denoted ⊤(a)) is a rule r ∈ rules(a), if it exists, such that head(r) = conc(a). The sentences of a (denoted sent(a)) are the set of literals of L in the bodies/heads of the rules, including the assumptions of a. The set of arguments built upon KF is denoted A(KF).

Some arguments concluding motiv(ag) in our example are the following:

• The motivation a supporting that al should donate the picture: sent(a) = {motiv(al), Done(al, donate), Offer(al, picture), Have(al, picture), Control(al, picture), ¬Give(al, ag, picture)}, asm(a) = {Offer(al, picture), Control(al, picture), ¬Give(al, ag, picture)}, rules(a) = {nu(fa1, r1(al)), oper(fa7, r1(al)), oper(fa7, r4(al, ag, picture))} and ⊤(a) = nu(fa1, r1(al)).
• The motivation b supporting that bob should donate the picture: sent(b) = {motiv(bob), Done(bob, donate), Offer(bob, picture), Have(bob, picture), Control(al, picture), Give(al, bob, picture)}, asm(b) = {Offer(bob, picture), Control(al, picture), Give(al, bob, picture)}, rules(b) = {nu(fa1, r1(bob)), oper(fa7, r1(bob)), oper(fa7, r5(al, bob, picture))} and ⊤(b) = nu(fa1, r1(bob)).
• The motivation c supporting that carla should hang the picture.
• The motivation d supporting that al should hang the picture.
• The motivation e supporting that bob should hang the picture.
• The motivation f supporting that bob should threaten the other agents with the hammer: sent(f) = {motiv(bob), Done(bob, protect), Threat(ag, hammer), Have(ag, hammer), Control(bob, hammer), ¬Give(bob, ag, hammer)}, asm(f) = {Threat(ag, hammer), Control(bob, hammer), ¬Give(bob, ag, hammer)}, rules(f) = {oper(fa5, r1(bob)), oper(fa7, r3(bob)), oper(fa7, r4(bob, ag, hammer))} and ⊤(f) = oper(fa5, r1(bob)).
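The backward construction of arguments can be sketched as follows. This is our own illustration rather than the paper's proof procedure; it reuses the Rule/KF encoding sketched after Table 2 and, for brevity, ignores weak negation and cyclic rules.

```python
# A minimal sketch (illustrative only): building an argument for a conclusion
# by backward chaining over rules and assumptions, in the spirit of Definition 3.
def build_argument(kf, conclusion):
    """Return (rules, assumptions, sentences) supporting `conclusion`, or None."""
    rules, asms, sents = [], set(), {conclusion}

    def support(literal):
        if literal in kf.assumptions:      # assumptions support themselves
            asms.add(literal)
            return True
        for r in kf.rules:                 # otherwise pick a rule whose head
            if r.head == literal:          # matches and support its body
                rules.append(r)
                sents.update(r.body)
                return all(support(b) for b in r.body)
        return False

    return (rules, asms, sents) if support(conclusion) else None

# e.g. an argument in the spirit of motivation a (al should donate the picture)
arg_a = build_argument(bob, "motiv(al)")
```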

3.3 Defeat relation

In order to adopt the declarative semantics of argumentation [8], we define the defeat relation between our arguments. Since the different motivations are conflicting, the arguments interact with one another. For this purpose, we define the following attack relation.

Definition 4 (Attack relation). Let a and b be two arguments. a attacks b iff sent(a) I sent(b).

This relation encompasses both the rebuttal attack, due to the incompatibility of conclusions, and the undermining attack, i.e. an attack directed at a "subconclusion". The strength of concurrent arguments depends on the priority of their sentences. In order to give a criterion that allows one argument to be preferred over another, we consider here the last-link principle to promote high-level goals. Arguments are concurrent if their conclusions are identical or incompatible. We assume that concurrent arguments are comparable, i.e. we assume a (partial or total) preorder over concurrent rules. Formally, ∀a, b ∈ A(T) s.t. conc(a) = conc(b) or conc(a) I conc(b): (a P b) ∨ (b P a) ∨ (a ≃ b).

Definition 5 (Strength). Let a and b be two concurrent arguments. a is stronger than b (denoted a P b) if it is the case that ⊤(a) P ⊤(b).

The attack relation and the strength relation can be combined to adopt Dung's seminal calculus of opposition [8], as in [9], to distinguish between one argument attacking another and that attack succeeding due to the strength of the arguments.

Definition 6 (Defeats). Let a and b be two arguments. a defeats b iff: i) a attacks b; and ii) it is not the case that b P a.

We have defined the defeat relation so as to adopt Dung's seminal calculus of opposition. In [8], Dung introduces various extension-based semantics in order to analyse when a set of arguments can be considered as collectively justified.

Definition 7 (Semantics). Let KF = ⟨L, Asm, I, T, P⟩ be a knowledge representation framework and let AF = ⟨A(KF), defeats⟩ be the argumentation framework built upon KF. For S ⊆ A a set of arguments, we say that:

– S is conflict-free iff ∀a, b ∈ S it is not the case that a defeats b;
– S is admissible iff S is conflict-free and S defeats every argument a such that a defeats some argument in S.

For simplicity, we restrict ourselves to the admissible semantics. Fig. 3 abstractly represents our AF, whereby the fact that "a defeats b" is depicted by a directed arrow from a to b. {a, f} and {b, f} are admissible.

Fig. 3. Our argumentation framework
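Definition 7 is easy to check mechanically. The following sketch (ours, not the authors' implementation) takes an abstract framework given as a set of argument names together with a set of (attacker, target) pairs standing for the defeat relation; the edges of Fig. 3 would be supplied from the example.

```python
# A minimal sketch (illustrative only): conflict-freeness and admissibility
# of a set S of arguments, following Definition 7.
def conflict_free(S, defeats):
    """No argument in S defeats another argument in S."""
    return not any((x, y) in defeats for x in S for y in S)

def admissible(S, A, defeats):
    """S is conflict-free and defeats every argument that defeats some member of S."""
    if not conflict_free(S, defeats):
        return False
    attackers = {x for x in A for y in S if (x, y) in defeats}
    return all(any((z, x) in defeats for z in S) for x in attackers)
```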

4 Dialectical framework

In order to propose a dialectical argumentation game which instantiates the AAA architecture, we present here our dialectical framework. Firstly, Section 4.1 introduces the communication language, i.e. the moves. Secondly, Section 4.2 defines the procedural rules of the dialogue, the protocol.

Definition 8 (Dialectical framework). Let KF = ⟨L, Asm, I, T, P⟩ be a knowledge representation framework, topic a statement in L, and PCL a player communication language. The dialectical framework is a tuple DFPCL(topic, KF) = ⟨P, AS, ΩM, H, T, proto, Z⟩ where:

– P = {p1, . . . , pn} is a set of n players associated with a set of theories T1, . . . , Tn and a set of assumptions Asm1, . . . , Asmn such that:
  • T1 ∪ . . . ∪ Tn = T,
  • Ti ∩ Tj = ∅ for all 0 ≤ i < j ≤ n,
  • Asm1 ∪ . . . ∪ Asmn = Asm,
  • Asmi ∩ Asmj = ∅ for all 0 ≤ i < j ≤ n;

– AS = ⟨Pro, Opp⟩ is composed of two boards, the proponent board Pro and the opponent board Opp, which contain the literals held by the proponents and the opponents respectively;
– ΩM ⊆ PCL is a set of well-formed moves;
– H is a set of histories, i.e. sequences of well-formed moves s.t. the speaker of a move is determined at each stage by the turn-taking function T and the moves agree with the protocol proto;
– T: H → P is the turn-taking function;
– proto: H × AS → ΩM is the function determining the moves which are allowed to expand a history;
– Z is the set of dialogues, i.e. the terminal histories.

In the AAA architecture, the DF allows multi-party dialogues amongst faculties (the players) about motiv(ag) (the topic) within KF. Players claim literals during dialogues. Amongst the players, the proponents argue for an initial claim while the opponents argue against it.

4.1 Communication Language

We define here the syntax and the semantics of moves. The syntax conforms to a common player communication language, PCL. A move at time t: has an identifier, mvt; is uttered by a speaker (spt ∈ P); rpt is, optionally, the identifier of the message to which mvt responds; and the speech act is composed of a locution loct and a content contentt. The possible locutions are claim, concede, oppose, deny, and unknown. The content is a set of atoms in L. The semantics of speech acts is public, since all players confer the same meaning to the moves. The semantics is defined in terms of pre/post-conditions.

Definition 9 (Semantics of PCL). Let t be the time of a history h in H (0 ≤ t < |h|) and AS0 = ⟨{topic}, ∅⟩. The semantics of the utterance by the player p at time t is defined s.t.:

1. p may utter unknown(∅), and so ASt+1 = ASt;
2. considering L ∈ Prot,
   (a) p may utter claim(P), if ∃r ∈ Tp with head(r) = L, body(r) = P, P ∩ Prot = ∅, and it is not the case that P I Prot. Therefore, ASt+1 = ⟨Prot ∪ P − {L}, Oppt⟩;
   (b) p may utter concede({L}), if L ∈ Asmp. Therefore, ASt+1 = ASt;
   (c) p may utter oppose({L′}), if L′ I L and either L′ ∈ Asmp or ∃r ∈ Tp with head(r) = L′. Therefore, ASt+1 = ⟨Prot − {L}, Oppt ∪ {L′}⟩;
3. considering L ∈ Oppt,
   (a) p may utter claim(P), if ∃r ∈ Tp with head(r) = L, body(r) = P and P ∩ Oppt = ∅. Therefore, ASt+1 = ⟨Prot, Oppt ∪ P − {L}⟩;
   (b) p may utter deny({L′}), if L′ I L and either L′ ∈ Asmp or ∃r ∈ Tp with head(r) = L′. Therefore, ASt+1 = ⟨Prot ∪ {L′}, Oppt − {L}⟩.

The rules to update AS incorporate a filtering (in cases 2a and 3a) to be more efficient. Concretely, the sets of literals in Pro and Opp are filtered so that they are not repeated more than once and, finally, the literals in Pro are not incompatible with each other. The speech act unknown(∅) has no preconditions. Pleas of ignorance and concessions have no effect on AS.
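To illustrate the post-conditions of Definition 9, the following is a minimal sketch (one possible reading, not the authors' code) of how cases 2(a) and 2(c) update the argumentation state AS = ⟨Pro, Opp⟩; the function names and the use of Python sets are assumptions made here.

```python
# A minimal sketch (illustrative only) of the AS updates of Definition 9.
def apply_claim(pro, opp, L, P):
    """Case 2(a): the claimed literal L in Pro is replaced by the body P."""
    assert L in pro
    return (pro | set(P)) - {L}, opp

def apply_oppose(pro, opp, L, L_prime):
    """Case 2(c): L is withdrawn from Pro and the attacking literal recorded in Opp."""
    assert L in pro
    return pro - {L}, opp | {L_prime}

# starting from AS0 = ({topic}, {}) with topic = motiv(al)
pro, opp = {"motiv(al)"}, set()
pro, opp = apply_claim(pro, opp, "motiv(al)", ["Done(al,donate)"])
```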

4.2 Protocol

In order to be uttered, a move must be well-formed. The initial moves are initial claims and pleas of ignorance:

mv0 ∈ ΩM iff loc(mv0) = claim or loc(mv0) = unknown.

The replying moves are well-formed iff they refer to an earlier move:

mvj ∈ ΩM iff rpj = mvi with 0 ≤ i < j.

Notice that backtracking is allowed. Each dialogue h ∈ Z is a sequence such that:

h = (mv0, . . . , mv|h|−1) with proto(h) = ∅.

In this way, the set Z of dialogues is a set of maximally long histories, i.e. histories which cannot be expanded even if backtracking is allowed. We call a line the sub-sequence of moves where backtracking is ignored. The turn-taking function T determines the speaker of each move such that:

if h ∈ H, sp0 = pi and j − i = |h| (mod n), then T(h) = pj.

The protocol (proto) consists of a set of sequence rules (e.g. sr1, . . . , sr4, represented in Table 3) specifying the legal replying moves. For example, sr1 specifies the legal moves replying to a previous claim (claim(P)). A speech act either resists or surrenders, i.e. closes the line. In order to influence the outcome, a player resists as much as possible. The locutions concede and unknown are utilised to manage the sequence of moves since they surrender, and so close the line but not necessarily the dialogue (backtracking is allowed). By contrast, a claim (claim(P′)) and an opposition (oppose({L′})) resist the previous claim. The moves replying to a deny (deny({L′})) are the same as the replying moves of a claim (claim({L′})).

Table 3. Speech acts and the potential replies.
– sr1, claim(P): the resisting replies are claim(P′), with r s.t. L = head(r) ∈ P and body(r) = P′, and oppose({L′}), with L′ I L and L ∈ P; the surrendering replies are concede({L}), with L ∈ P, and unknown(P).
– sr2, oppose({L}): the resisting replies are claim(P′), with r s.t. L = head(r) and body(r) = P′, and deny({L′}), with L′ I L; the surrendering reply is unknown(P).
– sr3, concede(P): no replies (∅).
– sr4, unknown(P): no replies (∅).
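The reply structure of Table 3 can also be read as a simple lookup from a locution to the locutions of its legal replies. The sketch below is illustrative only; the side conditions on rules and literals given in the table are omitted.

```python
# A minimal sketch (illustrative only): the reply structure of Table 3.
REPLIES = {
    "claim":   {"resisting": ["claim", "oppose"], "surrendering": ["concede", "unknown"]},
    "oppose":  {"resisting": ["claim", "deny"],   "surrendering": ["unknown"]},
    "concede": {"resisting": [],                  "surrendering": []},
    "unknown": {"resisting": [],                  "surrendering": []},
}
```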

In order to take into account the priorities amongst the faculties, encoded by the priorities over rules, a procedural rule forbids a player to oppose a player with a higher priority: for all p1, p2 ∈ P, for all r1 ∈ T1 and r2 ∈ T2 such that r1 P r2, and for all h ∈ H and mvi ∈ h with spi = p1, p2 is authorized to utter mvj with rpj = mvi only if it is not the case that locj = oppose.

Table 4 represents one of the possible dialogues of our example; the pleas of ignorance are omitted. Notice that no faculty is allowed to oppose the move mv0, since no faculty has priority over fa1. At the end of the game, Pro contains the assumptions of an admissible argument deducing motiv(ag). Concretely, AS contains one possible action to perform, Offer(al, picture).
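The procedural restriction just stated can be sketched as a simple check, prior to the example dialogue of Table 4. This is our illustration and not part of the protocol specification; `priority` holds pairs (r1, r2) meaning r1 P r2.

```python
# A minimal sketch (illustrative only): a player may not oppose a move made by
# a player whose rules have priority over its own.
def may_oppose(priority, speaker_rules, opponent_rules):
    return not any((r1, r2) in priority
                   for r1 in speaker_rules
                   for r2 in opponent_rules)

# e.g. no faculty may oppose fa1's claim mv0, because in bob's personality the
# nu rules have priority over every other faculty's rules.
```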

Table 4. Some moves of the dialogue

mvk    spk   lock      contentk                                        rpk
mv0    fa1   claim     Done(al, donate)                                –
mv6    fa7   claim     Offer(al, picture), Have(al, picture)           mv0
mv13   fa7   concede   Offer(al, picture)                              mv6
mv20   fa7   claim     Control(al, picture), ¬Give(al, ag, picture)    mv13
mv26   fa6   concede   Control(al, picture)                            mv20
mv33   fa6   concede   ¬Give(al, ag, picture)                          mv20

5 Discussion

We have presented a concrete version of the Agent Argumentation Architecture (AAA) model, illustrating how an autonomous agent argues with itself to manage its motivations and arbitrate its possibly conflicting internal goals. We have focused on revisiting the original AAA model with a specific argumentation framework showing how the internal dialectic process can be specified as a dialogue-game between internal components representing the agent's mental faculties. The resulting framework is exemplified with a simple case, illustrating the importance of argumentation for resolving conflicts within an agent in the same way it does at the level of a multi-agent system. The contribution of the work is that it illustrates how to use argumentation to develop a modular model of cognitive agents with an emphasis on motivations. An implication of the internal dialectic as we present it is the ability to implement the model using multiple threads of control, one for each faculty, thus increasing the flexibility of the approach for complex applications. This modular model allows the faculties and the personality of an agent to be specified declaratively, and manages potential conflicts at runtime. Additionally, we can envisage replacing faculties at runtime, thus avoiding restarting the agent's reasoning process whenever a component (i.e. a faculty) joins or leaves the game. In this perspective, the agent personality corresponds to high-level guidelines for solving the conflicts that appear when modifying the component assembly.

The initial ideas of AAA were inspired by the work of Kakas and Moraitis [10] on using argumentation for reasoning about preferences, and by the notion of KGP cycle theories used to build an artificial personality for the agent [11], despite the fact that at present we have abstracted away from planning. As with these papers, our effort shares the vision of providing a modular and practical agent model whose reasoning is based on the proof procedures required to interpret modular knowledge bases. One important difference with [11] comes from our decomposition, which explicitly distinguishes the different, possibly conflicting, aspects that the agent must arbitrate, rather than the agent's capabilities (e.g. reactivity, planning, goal decision). These aspects are embodied by faculties that are more amenable to being plug-and-play components at run-time using a multi-threaded implementation.

Our work on reinterpreting the AAA is also related to the specialised argumentation agents that we are currently developing in the domain of service composition [12] for the semantic GRID.

This architecture uses three interacting modules internally, which can loosely be viewed as faculties in AAA. These modules share a commitment store that is like the argumentation state in AAA; however, the latter has a more fine-grained representation of faculties, with the motivations based explicitly on a hierarchy of needs.

Future work includes investigating the properties of different dialogue-games under different semantics. We also plan to extend the current prototype using CaSAPI1 to allow an internal dialectic that is multi-threaded and relies on faculties that are interpreted by different proof systems implementing different kinds of reasoning, such as epistemic reasoning, practical reasoning and normative reasoning.

References

1. Witkowski, M., Stathis, K.: A dialectic architecture for computational autonomy. In: Agents and Computational Autonomy. Springer, Berlin (2004) 261–273
2. Maslow, A.H.: Motivation and Personality. Harper and Row, New York (1970); first published 1954
3. Rao, A.S., Georgeff, M.P.: Modeling rational agents within a BDI-architecture. In Allen, J., Fikes, R., Sandewall, E., eds.: Proc. of the 2nd International Conference on Principles of Knowledge Representation and Reasoning (KR), Morgan Kaufmann, San Mateo, CA, USA (1991) 473–484
4. Dung, P.M., Mancarella, P., Toni, F.: Computing ideal sceptical argumentation. Artificial Intelligence, Special Issue on Argumentation 171(10-15) (2007) 642–674
5. Morge, M., Mancarella, P.: The hedgehog and the fox. An argumentation-based decision support system. In: Proc. of the Fourth International Workshop on Argumentation in Multi-Agent Systems (ArgMAS) (2007) 55–68
6. Bondarenko, A., Toni, F., Kowalski, R.: An assumption-based framework for non-monotonic reasoning. In Nerode, A., Pereira, L., eds.: Proc. of the 2nd International Workshop on Logic Programming and Non-Monotonic Reasoning (LPNMR), MIT Press (1993)
7. Dung, P.M., Kowalski, R.A., Toni, F.: Dialectic proof procedures for assumption-based, admissible argumentation. Artificial Intelligence 170(2) (2006) 114–159
8. Dung, P.M.: On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artificial Intelligence 77(2) (1995) 321–357
9. Bench-Capon, T.: Value based argumentation frameworks. In: Proc. of the 2nd International Workshop on Non-Monotonic Reasoning (NMR) (2002) 444–453
10. Kakas, A., Moraitis, P.: Argumentative-based decision-making for autonomous agents. In: Proc. of the 2nd International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS), ACM Press (2003) 883–890
11. Kakas, A.C., Mancarella, P., Sadri, F., Stathis, K., Toni, F.: The KGP model of agency. In: Proc. of ECAI (2004) 33–37
12. Morge, M., McGinnis, J., Bromuri, S., Toni, F., Mancarella, P., Stathis, K.: Toward a modular architecture of argumentative agents to compose services. In: Proc. of the Fifth European Workshop on Multi-Agent Systems (EUMAS), Hammamet, Tunisia (December 2007) 1–15

1 http://www.doc.ic.ac.uk/~dg00/casapi.html