Annals of Mathematics and Artificial Intelligence 0: 1–30, 2004. © 2004 Kluwer Academic Publishers. Printed in the Netherlands.
Proving BDI properties of agent-oriented programming languages ∗
The asymmetry thesis principles in AgentSpeak(L)
Rafael H. Bordini a and Álvaro F. Moreira b
a Department of Computer Science, University of Liverpool, Liverpool L69 7ZF, U.K.
E-mail: [email protected]
b Departamento de Informática Teórica, Instituto de Informática, Universidade Federal do Rio Grande do Sul, Porto Alegre RS, 91501-970, Brazil
E-mail: [email protected]
In this paper, we consider each of the nine BDI principles defined by Rao and Georgeff based on Bratman’s asymmetry thesis, and we verify which ones are satisfied by Rao’s AgentSpeak(L), a logic programming language inspired by the BDI architecture for cognitive agents. In order to set the grounds for the proofs, we first introduce a rigorous way in which to define the informational, motivational, and deliberative modalities of BDI logics for AgentSpeak(L) agents, according to its structural operational semantics that we introduced in a recent paper. This computationally grounded semantics for the BDI modalities forms the basis of a framework that can be used to further investigate BDI properties of AgentSpeak(L) agents, and contributes towards establishing firm theoretical grounds for a BDI approach to agent-oriented programming.
Keywords: distributed artificial intelligence, BDI logics, agent-oriented programming, structural operational semantics, asymmetry thesis principles
AMS subject classification: 03B45, 68N17, 68T27, 68Q55
1. Introduction
The BDI architecture [22] for cognitive agents, and the more formal work on BDI logics [23], now permeate a significant part of research in multi-agent systems. This is an important aspect to consider when working on agent-oriented programming languages. We have chosen to work with AgentSpeak(L) [19] and to extend it in various ways (the extended language is called AgentSpeak(XL) [1]). The choice of AgentSpeak(L) is based on its very neat and elegant notation, and on the fact that we find it to be more faithful, so to speak, to the BDI architecture than other BDI-inspired programming languages. While in [1] we were concerned with the implementation of sophisticated agent applications using AgentSpeak(L), in this paper we concentrate on its theoretical aspects.

∗ This work was partially carried out while Rafael H. Bordini was at Universidade Federal do Rio Grande do Sul and Álvaro F. Moreira was at Universidade de Caxias do Sul.

Although AgentSpeak(L) is faithful to the practical aspects of the BDI architecture, it remains to be investigated how AgentSpeak(L) relates to the principles of BDI theory. This paper contributes towards such an investigation. In [19], Rao introduced the operation of an abstract interpreter for AgentSpeak(L). He also claimed that known properties satisfied by certain BDI logics could also be proved for his programming language. This was, in fact, his proposal for bridging the gap between BDI theory and practice (this “gap” has been mentioned in various papers; for an interesting discussion see [7]). However, to the best of our knowledge, that claim was never proved.

Examples of properties that have been studied for certain BDI logics are the ones known as the asymmetry thesis principles [23]. These principles are an important part of BDI theory, as they ensure rational behaviour on the part of agents based on the BDI model. This paper contributes towards a more formal grounding for BDI programming in that it shows which of the asymmetry thesis principles are satisfied by any agent programmed in AgentSpeak(L). It is based on [2], but only here are the proofs of the asymmetry thesis principles satisfied by AgentSpeak(L) agents presented in full.

In order to prove BDI properties of AgentSpeak(L) agents, we first need to formally define what the three mental attitudes expressible in BDI logics mean for such agents. These definitions, first given in [2], are based on the structural operational semantics that we gave to AgentSpeak(L) in [17]. This in fact constitutes a framework in which to investigate other BDI properties of AgentSpeak(L). More than that, it also represents a step towards the “computational grounding” [31] of BDI logics.

The recent popularity of the multi-agent approach to the construction of complex computational systems is remarkable. However, agent-oriented programming languages still need much theoretical work on which to base the practice of the multi-agent systems community: this is the main motivation of this paper, the remainder of which is structured as follows. The next section gives the necessary background on AgentSpeak(L) and on the asymmetry thesis principles. The abstract syntax and formal semantics of AgentSpeak(L) are given in section 3. Section 4 presents the framework with which we prove, in section 5, the asymmetry thesis principles for AgentSpeak(L). We discuss related work in section 6, before we mention future work and conclude the paper.
2. Background
2.1. AgentSpeak(L)
AgentSpeak(L) is a rather natural extension of logic programming to support the BDI agent architecture. After Rao introduced it in [19], d’Inverno and Luck [5] gave a further formalisation of the abstract interpreter and of missing details, using the Z formal specification language. As we mentioned before, we gave a structural operational semantics for the language in [17].
However, until recently there was no available implementation of an AgentSpeak(L) interpreter. In [16], we showed the means for running AgentSpeak(L) programs within Sloman’s SIM_AGENT framework [26]. That was the first prototype implementation of an AgentSpeak(L) interpreter, which we have called SIM_Speak. A mechanism is provided in SIM_Speak for the conversion of AgentSpeak(L) programs into running code within SIM_AGENT. In [1], we proposed some extensions to AgentSpeak(L). For this extended language, called AgentSpeak(XL), we have implemented an efficient interpreter1 from scratch in C++. It has various extensions which turn AgentSpeak(L) into a more practical programming language.

This section presents the intuitions behind AgentSpeak(L), as given originally by Rao [19]. The abstract syntax and operational semantics of AgentSpeak(L), as first given by us in [17], are presented later in section 3.

An AgentSpeak(L) agent is created by the specification of a set of beliefs and a set of plans. Beliefs are ground atoms in the usual form (e.g., busy(line)). The set of beliefs (which changes dynamically) represents the information an agent presently has about the world (i.e., its environment). A plan can be considered a sequence of steps (a course of action) the agent needs to execute in order to handle some perceived event. Thus, plans are formed of sequences of actions and goals, as detailed later.

AgentSpeak(L) distinguishes two types of goals: achievement goals and test goals. Both are predicates (as in beliefs, except that they need not be ground); the former are prefixed with the ‘!’ operator, while the latter are prefixed with the ‘?’ operator. Achievement goals are used when the agent needs to achieve a certain state of the world by performing actions and possibly achieving other (sub)goals (e.g., !book(tickets)).
Test goals are used when the agent needs to test whether the associated predicate is a true belief (i.e., whether it follows from the agent’s belief base). Programmers use test goals to unify the predicate with one of the agent’s beliefs, thus further binding free variables in the body of plans. For example, if the predicate busy(line) is in the agent’s belief base, then given a test goal such as ?busy(X), occurrences of X in the remainder of the plan would be bound to line. (As in Prolog, variables are denoted with an uppercase initial letter.)

Next, the notion of triggering event is introduced. It is a very important concept in AgentSpeak(L), as it is a plan’s triggering event that specifies what type of event can start off the execution of that plan. There are two types of triggering events: those related to the addition (‘+’) and those related to the deletion (‘−’) of mental attitudes, specifically beliefs and goals (e.g., −busy(line), +!book(X)).

An AgentSpeak(L) agent uses plans to define possible courses of action. Therefore, plans have to refer to the basic actions that an agent is able to perform on its environment. In detail, an AgentSpeak(L) plan has a head which is composed of a triggering event (the purpose of that plan) and a conjunction of literals forming a context that needs to be satisfied if the plan is to be executed (i.e., the context must be a logical consequence of that agent’s belief base at the moment the plan becomes intended). A plan also has a
1 This interpreter is available as free software at URL http://protem.inf.ufrgs.br/cucla/ (select “Downloads”).
body, which is a sequence of basic actions or goals that the agent has to achieve (or test). An example of a plan is:
+concert(X, Y) : like(X) ∧ ¬busy(line) ← call(Y); … ; !book(tickets).
The AgentSpeak(L) interpreter also manages a set of events and a set of intentions, and its functioning requires three selection functions. The event selection function (SE) selects a single event from the set of events; another selection function (SAp) selects an “option” (i.e., an applicable plan) from a set of applicable plans; and a third selection function (SI) selects one particular intention from the set of intentions. The selection functions are supposed to be agent-specific, in the sense that they should make selections based on an agent’s characteristics (though previous work on AgentSpeak(L) did not elaborate on how designers specify such functions2). We therefore leave the selection functions undefined here; hence the choices made by them are supposed to be non-deterministic.

Intentions are particular courses of action to which an agent has committed in order to handle certain events. Each intention is a stack of partially instantiated plans.

Events, which may start off the execution of plans that have relevant triggering events, can be external, when originating from perception of the agent’s environment (i.e., additions and deletions of beliefs based on perception are external events); or internal, when generated from the agent’s own execution of a plan (i.e., a subgoal in a plan generates an event of type “addition of achievement goal”). In the latter case, the event is accompanied by the intention which generated it (as the plan chosen for that event will be pushed on top of that intention). External events create new intentions, representing separate focuses of attention for the agent’s acting on the environment.

We next give some more details on the functioning of an AgentSpeak(L) interpreter, which is depicted in figure 1 (reproduced from [16]).
At every interpretation cycle of an agent program, AgentSpeak(L) updates a list of events, which may be generated from perception of the environment or from the execution of intentions (when subgoals are specified in the body of plans). It is assumed that beliefs are updated from perception, and whenever there are changes in the agent’s beliefs, an event is inserted in the set of events. This belief revision function is not part of the AgentSpeak(L) interpreter, but rather a necessary component of the agent architecture.

After SE has selected an event, AgentSpeak(L) has to unify that event with triggering events in the heads of plans. This generates the set of all relevant plans. By checking whether the context part of each plan in that set follows from the agent’s beliefs, AgentSpeak(L) determines the set of applicable plans (plans that can actually be used at that moment for handling the chosen event). Then SAp chooses a single applicable plan from that set, which becomes the intended means for handling that event, and
2 Our extension to AgentSpeak(L) in [1] deals precisely with the automatic generation of efficient intention selection functions. The extended language allows one to express relations between plans, as well as quantitative criteria for their execution. We then use decision-theoretic task scheduling to guide the choices made by the intention selection function.
Figure 1. An interpretation cycle of an AgentSpeak(L) program [16].
either pushes that plan on the top of an existing intention (if the event was an internal one), or creates a new intention in the set of intentions (if the event was external, i.e., generated from perception of the environment).

All that remains to be done at this stage is to select a single intention to be executed in that cycle. The SI function selects one of the agent’s intentions (i.e., one of the independent stacks of partially instantiated plans within the set of intentions). On the top of that intention there is a plan, and the formula at the beginning of its body is taken for execution. This implies that either a basic action is performed by the agent on its environment, an internal event is generated (in case the selected formula is an achievement goal), or a test goal is performed (which means that the set of beliefs has to be checked).

If the intention is to perform a basic action or a test goal, the set of intentions needs to be updated. In the case of a test goal, the belief base will be searched for a belief atom that unifies with the predicate in the test goal. If that search succeeds, further variable instantiation will occur in the partially instantiated plan which contained that test goal (and the test goal itself is removed from the intention from which it was taken). In the case where a basic action is selected, the necessary updating of the set of intentions is simply to remove that action from the intention (the interpreter informs the architecture component responsible for the agent’s effectors which action is required). When all formulæ in the body of a plan have been removed (i.e., have been executed), the whole plan is removed from the intention, and so is the achievement goal that generated it (if that was the case). This ends a cycle of execution, and AgentSpeak(L) starts all over
again: checking the state of the environment after the agent has acted upon it, generating the relevant events, and so forth.
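The step of executing the formula at the beginning of the body of the plan on top of the selected intention can be sketched as follows. This is a deliberately simplified illustration in Python, not the actual interpreter: it omits test goals and unification, and the data representation and names are our own assumptions.

```python
# A highly simplified sketch of executing one formula from a selected
# intention, as described above. Intentions are stacks (lists) of plans;
# a plan is a dict with a 'body' list. Names here are illustrative only.

def execute_step(intention, events, actions):
    """Execute the first formula in the body of the plan on top of the
    selected intention (simplified: no test goals, no unification)."""
    plan = intention[-1]
    kind, content = plan["body"].pop(0)
    if kind == "action":
        actions.append(content)       # hand the action over to the effectors
    elif kind == "achieve":
        # subgoal: post an internal event carrying this (suspended) intention
        events.append((("+!", content), intention))
    # plans whose bodies have been fully executed are removed from the stack
    while intention and not intention[-1]["body"]:
        intention.pop()

events, actions = [], []
i = [{"body": [("action", "call(travel_agent)"), ("achieve", "book(tickets)")]}]
execute_step(i, events, actions)
print(actions)   # ['call(travel_agent)']
```

A second call on the same intention would generate the internal event for !book(tickets) and, its body being empty, pop the plan off the stack.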
2.2. Asymmetry thesis principles
The asymmetry thesis is important in Bratman’s [3] work on practical reasoning, as it states principles of rationality for agents based on notions of intentionality. According to Rao and Georgeff [23], Bratman’s asymmetry thesis basically says two things:
– it is irrational for an agent to intend to do an action and also believe that it will not do it (intention–belief inconsistency);
– it is rational for an agent to intend to do an action but not believe that it will do it (intention–belief incompleteness).
To these, Rao and Georgeff add the idea that:

– it is rational for an agent to believe that it can do an action without necessarily intending it (belief–intention incompleteness).

These were formulated by Rao and Georgeff in [23] as principles AT1–AT3, which are shown in table 1. They also formulate principles for the other two combinations of pairs of mental attitudes (intention–desire and desire–belief); table 1 shows all nine asymmetry thesis principles.

In [23], the INTEND modality of BDI logics is defined over state formulæ, which means that the notion here is that of intention-that (a certain state of the environment is achieved) rather than intention-to (do an action which will lead the environment to reach a certain desired state). Therefore, it may be unclear to the reader whether the BDI formulæ in table 1 actually capture Bratman’s ideas on the asymmetry thesis (in particular with regard to intentions and beliefs about doing an action). The reader may also consider that some of the asymmetry thesis principles, as defined by Rao and Georgeff, only
Table 1
Asymmetry thesis principles [23].

Label   Principle
AT1     |= INTEND(ϕ) ⇒ ¬BEL(¬ϕ)
AT2     ⊭ INTEND(ϕ) ⇒ BEL(ϕ)
AT3     ⊭ BEL(ϕ) ⇒ INTEND(ϕ)
AT4     |= INTEND(ϕ) ⇒ ¬DES(¬ϕ)
AT5     ⊭ INTEND(ϕ) ⇒ DES(ϕ)
AT6     ⊭ DES(ϕ) ⇒ INTEND(ϕ)
AT7     |= DES(ϕ) ⇒ ¬BEL(¬ϕ)
AT8     ⊭ DES(ϕ) ⇒ BEL(ϕ)
AT9     ⊭ BEL(ϕ) ⇒ DES(ϕ)
make sense for temporal formulæ, which cannot be expressed in AgentSpeak(L).3 However, note that in [23] these principles are defined (and proved) for arbitrary formulæ, including the propositional fragment of the logical language used there.

It is beyond the scope of this paper to discuss either of those issues further. We are interested here in showing how to prove BDI properties of a practical programming language, and those nine principles certainly express interesting properties of (rational) BDI agents, whether or not they capture exactly Bratman’s asymmetry thesis. In doing so, we clarify various practical aspects of the language. Besides, each BDI modal system in [23] satisfies certain combinations of these properties, so this work may also be relevant in future work on defining the logic of which AgentSpeak(L) is an implementation (as discussed towards the end of this paper).
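As an illustration only, the pattern shared by the consistency principles (AT1, AT4, AT7) can be checked mechanically on finite sets of ground formulas. The paper itself proves the principles against the operational semantics, not by enumeration; the representation below, with negation as a ('not', ϕ) wrapper, is our own assumption.

```python
# Illustrative sketch of the consistency pattern in AT1, AT4, and AT7:
# no formula in the antecedent set may have its negation in the
# consequent set (e.g. AT1: nothing intended is believed to be false).

def neg(phi):
    """Negate a formula, collapsing double negation."""
    return phi[1] if isinstance(phi, tuple) and phi[0] == "not" else ("not", phi)

def consistency(antecedent, consequent_set):
    """AT1/AT4/AT7 pattern: holds iff no member of `antecedent`
    has its negation in `consequent_set`."""
    return all(neg(phi) not in consequent_set for phi in antecedent)

# Hypothetical mental-state sets of one agent (ground formulas only):
intend = {"book(tickets)"}
des = {"book(tickets)"}
bel = {"like(opera)"}

print(consistency(intend, bel))   # AT1 pattern: True
print(consistency(intend, des))   # AT4 pattern: True
print(consistency(des, bel))      # AT7 pattern: True
```

The incompleteness principles (AT2, AT3, AT5, AT6, AT8, AT9) are of a different nature: they assert that a certain implication is *not* valid, so a single agent state witnessing the antecedent without the consequent suffices to establish them.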
3. Syntax and semantics of AgentSpeak(L)
This section presents the syntax and semantics of AgentSpeak(L), first given in [17]. It uses Plotkin’s structural operational semantics [18], a standard notation for semantics of programming languages. Previous formalisations of AgentSpeak(L) [5] used the Z language [27].
3.1. Abstract syntax
3.1.1. AgentSpeak(L) agent
In AgentSpeak(L), an agent is simply specified by a set bs of beliefs (the agent’s initial belief base) and a set ps of plans (the agent’s plan library). An agent specification is therefore given by the following grammar:

ag ::= bs ps.
3.1.2. Beliefs
The atomic formulæ of the language are predicates given by the following grammar:

at ::= P(t1, …, tn)    (n ≥ 0)

where P is a predicate symbol and t1, …, tn are standard terms of first order logic. We use at as a metavariable for atomic formulæ. We call a belief an atomic formula at with no variables, and we use b as a metavariable for beliefs. An AgentSpeak(L) program consists of a set of beliefs and a set of plans. Syntactically, the set of beliefs of an AgentSpeak(L) program is a sequence of beliefs:

bs ::= b1 … bn    (n ≥ 0).
3 Note that the complexity of theorem proving for temporal logics means it would not be viable to use them in a practical language, as AgentSpeak(L) is meant to be.
3.1.3. Plans
Besides beliefs, a programmer also specifies a set of plans. Plans determine a course of action to be pursued in case certain events occur. A plan in AgentSpeak(L) is given by the following grammar:

p ::= te : ct ← h

where te is the triggering event, ct is the plan’s context, and h is a sequence of actions, goals, or belief updates; te : ct is referred to as the head of the plan, and h is its body. The set of plans of an agent is then given by the following grammar as a list of plans:

ps ::= p1 … pn    (n ≥ 1).

In summary, when writing a plan, the programmer has to define: the event te that can trigger the plan, called the plan’s triggering event; the conditions ct that must hold for this plan to be considered applicable, called the context of the plan; and finally a sequence h of formulæ to be executed. These formulæ can be goals to pursue, the updating of beliefs (as explained later), or actions to be performed, in order for the agent to act properly on the triggering event. We now define each of these parts in turn.

Context. Each plan has in its head a formula ct that specifies the conditions under which the plan can be executed. The formula ct must be a logical consequence of the agent’s beliefs if the plan is to be considered applicable.

ct ::= at | ¬at | ct ∧ ct | T.

Actions, goals, belief updating. The following grammar relates to the constructs that may appear in the body of a plan. A sequence h of actions, goals, and belief updates in the body of a plan is defined as follows:

h ::= a | g | u | h; h
a ranges over basic actions
g ::= !at | ?at
u ::= +b | −b.

AgentSpeak(L) is concerned specifically with an agent’s reasoning. Because of that, we abstract away from the details of which actions are available and how they are actually performed. We assume the agent has at its disposal a set of actions, and we use a as a metavariable ranging over them. A goal of the form !at is called an achievement goal and ?at is a test goal. Unlike the original definition of AgentSpeak(L) by Rao [19], we allow the body of plans to include the updating of the agent’s own beliefs. Although Rao pointed out these constructs, he did not include them in the presentation of his abstract interpreter.
Triggering events. The first component in the head of a plan is a triggering event. Triggering events are given by the following grammar:

te ::= +at | −at | +g | −g.

A triggering event can then be the addition or the deletion of a belief from an agent’s belief base (+at and −at, respectively), or the addition or the deletion of a goal (+g and −g, respectively).
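The abstract syntax above can be transcribed into, for example, Python dataclasses. This is purely illustrative: the paper works with the grammar directly, and the class and field names below are our own assumptions.

```python
# Illustrative sketch of the abstract syntax: an agent is a belief base bs
# plus a plan library ps; a plan is  te : ct <- h.
from dataclasses import dataclass, field

@dataclass
class TriggeringEvent:
    op: str        # one of '+', '-', '+!', '-!', '+?', '-?'
    atom: str      # e.g. 'concert(X,Y)'

@dataclass
class Plan:
    trigger: TriggeringEvent
    context: list          # conjunction of literals; [] stands for T (true)
    body: list             # sequence of actions, goals, and belief updates

@dataclass
class Agent:
    beliefs: set = field(default_factory=set)   # ground atoms only
    plans: list = field(default_factory=list)

# The concert example plan from section 2.1, minus the elided steps:
ag = Agent(
    beliefs={"like(opera)"},
    plans=[Plan(TriggeringEvent("+", "concert(X,Y)"),
                context=["like(X)"],
                body=[("action", "call(Y)"), ("achieve", "book(tickets)")])],
)
print(len(ag.plans))   # 1
```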
3.2. Semantics
3.2.1. Preliminaries
An agent and its circumstance form a configuration of the transition system giving operational semantics to AgentSpeak(L). The transition relation

⟨ag, C⟩ −→ ⟨ag′, C′⟩

is defined by the semantic rules given in the next section. An agent’s circumstance C is a tuple ⟨I, E, A, R, Ap, ι, ρ, ε⟩ where:
• I is a set of intentions {i, i′, …}. Each intention i is a stack of partially instantiated plans.
• E is a set of events {⟨te, i⟩, ⟨te′, i′⟩, …}. Each event is a pair ⟨te, i⟩, where te is a triggering event and the plan on top of intention i is the one that generated te. When the belief revision function, which is not part of the AgentSpeak(L) interpreter but rather of the general architecture of the agent, updates the belief base, the associated events are included in this set. That is, changes in beliefs from perception of the environment are represented as external events which are included in this set, as well as internal ones. Note that this implies an interaction of the AgentSpeak(L) interpreter with other parts of the (assumed) overall agent architecture: the belief revision function inserts events here which are then dealt with by the AgentSpeak(L) interpreter.
• A is a set of actions to be performed in the environment. As with events, this provides an interaction of the AgentSpeak(L) interpreter with other parts of the overall agent architecture (in this case, with the agent’s effectors). An action expression included in this set tells other architecture components to actually perform the respective action on the environment, thus changing it.
• R is a set of relevant plans. In definition 1 below we state precisely how the set of relevant plans is obtained.
• Ap is a set of applicable plans. The way this set is obtained is given in definition 2 below.
• Each circumstance C also has three components called ι, ε, and ρ. They keep track of a particular intention, event, and applicable plan (respectively) being considered during the execution of an agent.
Auxiliary functions. Below, we define the auxiliary functions RelPlans and AppPlans, which will be needed in the semantic rules. In those definitions, we use the following notation: if p is a plan of the form te : ct ← h, we define TrEv(p) = te and Ctxt(p) = ct, which retrieve the triggering event and the context of the plan, respectively.

A plan is considered relevant in relation to a triggering event if it has been written to deal with that event. In practice, this is verified by trying to unify the triggering event part of the plan with the triggering event that has been selected from E for treatment. In the definition below, we write mgu for the procedure that computes the most general unifying substitution of two triggering events.
Definition 1. Given the plans ps of an agent and a triggering event te, the set RelPlans(ps, te) of relevant plans is given as follows:

RelPlans(ps, te) = {pθ | p ∈ ps ∧ θ = mgu(te, TrEv(p))}.

A plan is applicable if it is relevant and its context is a logical consequence of the agent’s current beliefs.

Definition 2. Given a set of relevant plans R and the beliefs bs of an agent, the set of applicable plans AppPlans(bs, R) is defined as follows:

AppPlans(bs, R) = {pθ | p ∈ R ∧ θ is s.t. bs |= Ctxt(p)θ}.

An agent can also perform a test goal. The evaluation of a test goal ?at consists in testing whether the formula at is a logical consequence of the agent’s beliefs. One of the effects of this test is the production of a set of substitutions:

Definition 3. Given the beliefs bs of an agent and a formula at, the set of substitutions Test(bs, at) produced by testing at against bs is defined as follows:

Test(bs, at) = {θ | bs |= atθ}.
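For a flat fragment of the language (no nested terms), definitions 1 and 2 can be sketched in Python as follows. This is an illustrative approximation with our own data representation: triggers and context literals are tuples, and negated literals in contexts are ignored for brevity.

```python
# Illustrative sketch of RelPlans (Definition 1) and AppPlans (Definition 2)
# for flat atoms (functor, args...); variables start with an uppercase letter.
# Contexts are lists of atoms that must all follow from the belief base.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def mgu(a, b):
    """Most general unifier of two flat atoms (either may hold variables)."""
    if a[0] != b[0] or len(a) != len(b):
        return None
    theta = {}
    for x, y in zip(a[1:], b[1:]):
        x, y = theta.get(x, x), theta.get(y, y)
        if x == y:
            continue
        if is_var(x):
            theta[x] = y
        elif is_var(y):
            theta[y] = x
        else:
            return None
    return theta

def subst(atom, theta):
    return (atom[0],) + tuple(theta.get(t, t) for t in atom[1:])

def rel_plans(ps, te):
    """Definition 1: plans whose trigger unifies with the selected event."""
    out = []
    for p in ps:
        theta = mgu(te, p["trigger"])
        if theta is not None:
            out.append((p, theta))
    return out

def app_plans(bs, rel):
    """Definition 2: relevant plans whose context follows from the beliefs."""
    return [(p, theta) for (p, theta) in rel
            if all(subst(c, theta) in bs for c in p["context"])]

bs = {("like", "opera")}
ps = [{"trigger": ("concert", "X", "Y"), "context": [("like", "X")], "body": []}]
rel = rel_plans(ps, ("concert", "opera", "phone"))
print(len(app_plans(bs, rel)))   # 1
```

Note how the substitution computed by the unification of the triggering events ({X ↦ opera, Y ↦ phone}) is applied to the context before it is checked against the belief base, exactly as in the definitions above.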
Notation. In order to keep the semantic rules neat, we adopt the following notations:

• If C is an AgentSpeak(L) agent circumstance, we write CE to make reference to the component E of C. Similarly for all the other components of C.
• We write Cι = _ (the underscore symbol) to indicate that there is no intention being considered in the agent’s execution. Similarly for Cρ and Cε.
• We use i, i′, … to denote intentions, and we write i[p] to denote an intention that has plan p on its top, i being the remaining plans in that intention.

We use the following notation for the AgentSpeak(L) selection functions: SE for the event selection function, SAp for the applicable plan selection function, and SI for the intention selection function.
3.2.2. Semantic rules We have organised the presentation of the semantics in groups of related rules. We next discuss each of these groups. In a normal reasoning cycle, the rules would apply in the order they are presented, one from each of the following sections. If there are no pending events, or no relevant or applicable plans for a selected event, the cycle continues from the SelInt rule.
7 8 9 10
SelEv
13 14 15 16 17
Rel1
20 21 22 23
Rel2
24 25 26 27 28 29 30
Appl1
33 34 35 36 37 38 39 40 41 42 43
RelPlans(ps, te) = {}
3 4 5 6
Appl2
8 9 10 11 12 13 14 15 16 17 18
Cε = te, i CAp = CR = {}
ag, C −→ ag, C where: CR = RelPlans(ps, te),
19
RelPlans(ps, te) = {}
22
ag, C −→ ag, C where: Cε = _ .
Cε = te, i CAp = CR = {}
20 21
23 24 25
3.2.2.3. Applicable plans. The rule Appl1 initialises the Ap component with the set of applicable plans. If no plan is applicable, the event is removed from ε by Appl2, but inserted back in CE , so that later on that event can be selected again in a new attempt to find an applicable plan for it. In either case the relevant plans are also discarded.
31 32
SE (CE ) = te, i Cε = _ , CAp = CR = {} ag, C −→ ag, C where: CE = CE − te, i, Cε = te, i.
3.2.2.2. Relevant plans. The rule Rel1 initialises the R component with the set of relevant plans. If no plan is relevant, the event is discarded from ε by Rel2 .
18 19
2
7
3.2.2.1. Selection of an event. The rule below assumes the existence of a selection function SE that selects events from a set of events E. The selected event is removed from E and it is assigned to the component of the circumstance.
11 12
1
AppPlans(bs, CR ) = {}
26 27 28 29 30
Cε = _ , CAp = {}, CR = {} ag, C −→ ag, C where: CR = {}, = AppPlans(bs, CR ), CAp
31
AppPlans(bs, CR ) = {}
36
Cε = te, i, CAp = {}, CR = {}
ag, C −→ ag, C where: CR = {}, Cε = _ , CE = CE ∪ te, i .
3.2.2.4. Selection of an applicable plan. This rule assumes the existence of a selection function SAp that selects a plan from a set of applicable plans Ap. The plan selected is
VTEX(JK) PIPS No:5278825 artty:res (Kluwer DORDRECHT v.2004/04/14) amaicli8.tex; 21/05/2004; 13:25; p. 11
R.H. Bordini, Á.F. Moreira / BDI programming languages and the asymmetry thesis principles
then assigned to the ρ component of the circumstance and the set of applicable plans is discarded.

SelAppl
        SAp(CAp) = p,   Cε = ⟨te, i⟩,   CAp ≠ {}
        ────────────────────────────────────────
        ⟨ag, C⟩ −→ ⟨ag, C′⟩
        where: C′ρ = p,  C′Ap = {}.

3.2.2.5. Adding an intended means to the set of intentions. Events can be classified as external or internal (depending on whether they were generated from the agent's perception, or by the previous execution of other plans, respectively). Rule ExtEv says that if the event in ε is external (which is indicated by T in the intention associated with the event) a new intention is created and its single plan is the plan p annotated in the ρ component. If the event is internal, rule IntEv says that the plan in ρ should be put on top of the intention associated with the event. Either way, both the event and the plan can be discarded from the ε and ρ components, respectively.

ExtEv
        Cε = ⟨te, T⟩,   Cρ = p
        ──────────────────────
        ⟨ag, C⟩ −→ ⟨ag, C′⟩
        where: C′I = CI ∪ {[p]},  C′ε = _,  C′ρ = _.

IntEv
        Cε = ⟨te, i⟩,   Cρ = p
        ──────────────────────
        ⟨ag, C⟩ −→ ⟨ag, C′⟩
        where: C′I = CI ∪ {i[p]},  C′ε = C′ρ = _.

Note that, in rule IntEv, the whole intention i that generated the internal event needs to be inserted back in CI, with p on its top. This is related to suspended intentions and will be explained further when we present rule Achieve.

3.2.2.6. Intention selection. This rule uses a function SI that selects an intention (that is, a stack of plans) for processing.

IntSel
        SI(CI) = i,   Cι = _
        ────────────────────
        ⟨ag, C⟩ −→ ⟨ag, C′⟩
        where: C′ι = i.

3.2.2.7. Executing the body of plans. This group of rules expresses the effects of executing the body of plans. The plan being executed is always the one on the top of the intention that has been previously selected. Observe that all the rules in this group discard the intention in ι; after that, another intention can eventually be selected. We discuss each of the rules in this group in turn, according to the formula that is selected (the one at the beginning of the plan on the top of the intention).
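The effect of rules ExtEv and IntEv on the set of intentions can be sketched as follows. Intentions are represented here as Python lists of plans, with the empty list playing the role of T; the function names and the encoding are ours:

```python
def ext_ev(I, p):
    # ExtEv: an external event gives rise to a new intention with the
    # single plan p
    return I + [[p]]

def int_ev(I, i, p):
    # IntEv: the intended means p is pushed on top of the suspended
    # intention i, which is then re-inserted into the set of intentions
    return I + [i + [p]]

I1 = ext_ev([], "p1")             # a fresh intention containing only p1
I2 = int_ev([], ["p1"], "p2")     # p2 stacked on the intention that waited
```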
Basic actions. The action a in the body of the plan is added to the set of actions A. The action is removed from the body of the plan and the intention is updated to reflect this removal.

Action
        Cι = i[head ← a; h]
        ───────────────────
        ⟨ag, C⟩ −→ ⟨ag, C′⟩
        where: C′A = CA ∪ {a},  C′I = (CI − {Cι}) ∪ {i[head ← h]}.

Achievement goals. This rule registers a new internal event in the set of events E. This event can then eventually be selected (see rule SelEv).

Achieve
        Cι = i[head ← !at; h]
        ─────────────────────
        ⟨ag, C⟩ −→ ⟨ag, C′⟩
        where: C′E = CE ∪ {⟨+!at, Cι⟩},  C′I = CI − {Cι}.

Note how the intention that generated the internal event is removed from the set of intentions CI. This denotes the idea of suspended intentions. If the formula being executed in a plan is an achievement goal, a plan (i.e., an intended means) for that goal needs to be pushed on top of that intention. Only when that plan is finished can execution be resumed from the point after the achievement goal in the previous plan. This is the reason why rule IntEv inserts back in CI the whole intention with the new plan on its top.

Test goals. These rules are used when a test goal ?at should be executed. Both rules try to produce a set of substitutions that can make at a logical consequence of the agent's beliefs. Rule Test1 says that one of the substitutions is applied to the plan, whereas in rule Test2, the whole intention is removed if no substitution is found. Note that we have not dealt with the events for plan failure (i.e., triggering events of type −!at or −?at) as yet. At the moment, we remove the whole intention as the execution of the plan on its top cannot be carried out any further (because the rest of the plan will require further instantiation of variables4).

Test1
        Test(bs, at) ≠ {},   Cι = i[head ← ?at; h]
        ──────────────────────────────────────────
        ⟨ag, C⟩ −→ ⟨ag, C′⟩
        where: C′I = (CI − {Cι}) ∪ {i[(head ← h)θ]},  θ ∈ Test(bs, at).

Test2
        Test(bs, at) = {},   Cι = i[head ← ?at; h]
        ──────────────────────────────────────────
        ⟨ag, C⟩ −→ ⟨ag, C′⟩
        where: C′I = CI − {Cι}.

4 When writing a plan, the programmer uses test-goal variables in subsequent formulæ in the body of a plan on the assumption that they are bound. Recall that, for example, basic actions require all variables to be bound.
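The suspension performed by rule Achieve, discussed above, can be sketched as follows (intentions are lists of plans and events are pairs; the encoding and the function name are ours):

```python
def achieve(E, I, i, at):
    # Achieve: executing !at posts an internal event that carries the
    # current intention, which is removed from I (i.e., suspended)
    E2 = E + [("+!" + at, i)]
    I2 = [j for j in I if j is not i]
    return E2, I2

i = ["p1"]                      # the selected intention; its top plan runs !g
E, I = achieve([], [i], i, "g")
# i is no longer a candidate for execution: it travels with the event
# until rule IntEv puts it back with an intended means on its top
```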
Updating beliefs. In rule AddBel, the formula +b is removed from the body of the plan and the set of intentions is updated properly. Rule DelBel works similarly. In both rules, the set of beliefs of the agent should be modified in such a way that either the (ground) predicate b follows from the new set of beliefs (rule AddBel) or it does not (rule DelBel). Whenever this causes the current belief base to actually change, the appropriate events have to be included in the set of events.

AddBel
        Cι = i[head ← +b; h]
        ────────────────────
        ⟨ag, C⟩ −→ ⟨ag′, C′⟩
        where: bs′ |= b,
               C′E = CE ∪ {⟨+b, Cι⟩} if bs ̸|= b, and C′E = CE otherwise,
               C′I = (CI − {Cι}) ∪ {i[head ← h]}.

DelBel
        Cι = i[head ← −b; h]
        ────────────────────
        ⟨ag, C⟩ −→ ⟨ag′, C′⟩
        where: bs′ ̸|= b,
               C′E = CE ∪ {⟨−b, Cι⟩} if bs |= b, and C′E = CE otherwise,
               C′I = (CI − {Cι}) ∪ {i[head ← h]}.

Any formula in the body of a plan fits one of the cases above. There is only one final rule to conclude the semantics.

3.2.2.8. Removing empty intentions. The rule ClrInt below removes empty intentions which may have been left by the previous execution of an intention.

ClrInt
        Cι ≠ _
        ──────
        ⟨ag, C⟩ −→ ⟨ag, C′⟩
        where: C′ι = _, and
               C′I = CI − {Cι}                      if Cι = [head ←],
               C′I = (CI − {Cι}) ∪ {i[head ← h]}    if Cι = i[head ← !at; h][head ←],
               C′I = CI                             otherwise.

The rule considers both the case where there is nothing left in the whole intention and the case where there is nothing left in a plan that is on top of other plans in the intention. In the former case, the intention is simply removed. In the latter case, the rule removes from the intention what is left of the plan that had been put on top of the intention on behalf of the achievement goal !at (which is also removed, as it has been accomplished). In all other cases, the rule does not change CI; only Cι needs to be cleared.
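Rule ClrInt can be sketched as follows. A plan is reduced here to the list of formulæ left in its body, so a finished plan is the empty list; this encoding is ours:

```python
def clr_int(i):
    # ClrInt: drop a finished plan from the top of the intention and, if a
    # plan remains below it, drop the accomplished achievement goal at the
    # head of that plan's body
    if i and i[-1] == []:
        i = i[:-1]
        if i:
            i[-1] = i[-1][1:]
    return i

i1 = clr_int([[]])                   # the whole intention is finished
i2 = clr_int([["!g", "a2"], []])     # resume the plan below, after !g
i3 = clr_int([["a1"]])               # otherwise: nothing to clear
```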
4. A framework for proving BDI properties of AgentSpeak(L)
Based on the notions of an agent and agent circumstance used in the operational semantics of AgentSpeak(L) (as given in the previous section), we shall be able to introduce definitions for the interpretation of the three mental attitudes expressible in BDI logics (BEL(ϕ), DES(ϕ), and INTEND(ϕ)) in regards to AgentSpeak(L) agents. This is needed in order to set the grounds for investigating BDI properties of AgentSpeak(L) programs. Here, formulæ ϕ, over which the three modalities are defined, are simply atoms, i.e., "at" as defined in the AgentSpeak(L) syntax (section 3.1). Ground atoms are the only formulæ that can appear in an agent's belief base and external events. Only in the case of internal events may they not be ground.5 These are the AgentSpeak(L) structures based on which the mental attitudes are ascribed (as the definitions in this section will show), hence the definition of ϕ as above. We now go on to formalise what the three BDI modalities mean for an AgentSpeak(L) agent ag at a certain circumstance C (i.e., a configuration of the transition system in section 3.2). This means that the modalities in our definitions will be subscripted by these two structures. We start with the belief modality, which is quite straightforward.
4.1. Beliefs

The following definition is given for interpreting the belief modality of BDI logics when it comes to AgentSpeak(L) agents.

Definition 4 (Belief in AgentSpeak(L) agents). We say that an AgentSpeak(L) agent ag, regardless of its circumstance C, believes a formula ϕ iff it is included in the agent's belief base; that is, for an agent ag = ⟨bs, ps⟩:

        BELag,C(ϕ) ≡ bs |= ϕ.

This needs no further explanation, as an agent's beliefs are explicitly represented within an AgentSpeak(L) interpreter. Intentions are also rather explicit in the architecture, but the same does not apply to desires. That is why we introduce the intention modality before desire (and also because we shall need this definition for defining desire).

4.2. Intentions

Before giving the formal definition of the intention modality, we first define an auxiliary function agoals : I → P(Λ), where I is the domain of all individual intentions

5 Suppose an uninstantiated variable is used in an achievement goal !at1 in the body of a plan p1. Suppose further that p2 is the plan that will be used as intended means to handle the event generated for !at1. An uninstantiated variable in !at1 is used when the programmer wants the execution of plan p2 to return values to be subsequently used in p1. That is the only situation in which atoms ϕ may not be ground.
and Λ is the domain of all atomic formulæ (as mentioned above). Recall that an intention is a stack of partially instantiated plans (see the syntactic definition of plans p in section 3.1), so the definition of I is as follows. The empty intention (or true intention) is denoted by T, and T ∈ I. If p is a plan and i ∈ I, then also i[p] ∈ I. The agoals function takes an intention and returns all achievement goals in the triggering-event part of the plans in it:

        agoals(T) = {},
        agoals(i[p]) = {at} ∪ agoals(i)    if p = +!at : ct ← h,
        agoals(i[p]) = agoals(i)           otherwise.

Note that we are only interested in atomic formulæ at in additions of achievement goals, and ignore all other types of triggering events. These are the formulæ that represent (symbolically) properties of the states of the world that the agent is trying to achieve (i.e., the intended states). However, taking such formulæ from the agent's set of intentions does not suffice for defining intentions, as there can be suspended ones. In the AgentSpeak(L) interpreter, intentions may be suspended when they are waiting for an appropriate subplan to be chosen (in the form of an internal event, which is an event associated with an existing intention); suspended intentions are clearly formalised in [5]. Suspended intentions are, therefore, precisely those that appear in the set of events CE at a circumstance C. That is, suppose that an achievement goal appears at the beginning of a plan body, and that plan is on top of an intention that was chosen to execute (by the intention selection function). Before that intention is selected again for execution, the related event needs to be chosen from CE and a relevant and applicable plan (an intended means) must be found for it. If that happens, the intended means will be pushed on top of that stack, and only then does the (previously suspended) intention go back to CI, becoming a candidate for execution again (see rules Achieve and IntEv in section 3.2.2). The format of an event is ⟨te, i⟩, where te is a triggering event and i ∈ I is the (suspended) intention that generated it (i is T for external events). Accordingly, an AgentSpeak(L) agent's intentions, at circumstance C, are either the current intentions in CI, or the suspended ones that appear in the events in CE. Note that in the definition below, we are defining the AgentSpeak(L) equivalent of the BDI logic INTEND modality, which corresponds to the notion of intend-that, whereas AgentSpeak(L) intentions (stacks of plans) correspond to the intend-to notion.
Definition 5 (Intention in AgentSpeak(L) agents). An AgentSpeak(L) agent ag intends ϕ at circumstance C iff it has ϕ as an achievement goal that currently appears in its set of intentions I, or ϕ is an achievement goal that appears in the (suspended) intentions associated with events in E. For an agent ag and its circumstance C, we have:

        INTENDag,C(ϕ) ≡ ϕ ∈ ⋃i∈CI agoals(i) ∨ ϕ ∈ ⋃⟨te,i⟩∈CE agoals(i).
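Definition 5 can be pictured with a short Python sketch. Intentions are lists of (trigger, body) plan pairs and events are (trigger, intention) pairs; the agoals helper mirrors the function defined above, and the whole encoding is ours:

```python
def agoals(i):
    # collect the atoms of achievement-goal additions (+!at) in an intention
    return {t[2:] for (t, body) in i if t.startswith("+!")}

def intend(CI, CE, phi):
    # Definition 5: phi is intended if it is an achievement goal in a
    # current intention, or in a suspended intention carried by an event
    return any(phi in agoals(i) for i in CI) or \
           any(phi in agoals(i) for (te, i) in CE)

CI = [[("+!g", ["a"])]]               # a current intention for +!g
CE = [("+!h", [("+!h", ["!k"])])]     # a suspended intention for +!h
```

Note that "k" occurs only in a plan body, not as a triggering event, so it is not intended in the sense of the definition.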
4.3. Desires
Although this is not mentioned in the original literature on AgentSpeak(L), it is our impression that the desire modality in an AgentSpeak(L) agent is best represented by its set of events. Internal events are achievement goals,6 which are clearly desires: traditionally, goals are a subset of desires which are assumed to be mutually consistent [25]; in the simplified version of the architecture used for AgentSpeak(L), consistency of goals is taken for granted – see [28] for a discussion on goal conflicts. It is therefore our interpretation that events in CE that have the form of additions of achievement goals are desires (i.e., they have not yet been selected to become intended, but represent states that the agent wishes to achieve); achievement goals that are already intended are desires too, but this is explained below.

External events in CE, on the other hand, represent the agent's reactions to (believed) changes in the environment, and are therefore not desires but rather a source of motivation for desires (goals). Recall that external events are expressed as changes in beliefs; they are the consequence of belief revision based on perception of the environment. External events, in turn, cause a series of internal events: these we consider the agent's desires (not their external cause, which represents a focus of attention). However, when an internal event is selected (and removed from the set E of events) in an attempt to find relevant and applicable plans for it, and one of them becomes an intended means to handle that event, that does not imply that the agent no longer desires to handle that event. A possible interpretation is that it only ceases to be a desire once the plan associated with it in the set of intentions finishes execution. In fact, it is a common interpretation that intentions are a subset of the agent's desires (when one is considering intention-that rather than intention-to, of course).
Thus, besides the achievement goals in E we also consider as desires of the agent all of its intentions. It is important to emphasise that with this choice of interpretation for what desire means in an AgentSpeak(L) agent, we are determining which of the asymmetry thesis principles AT4, AT5, and AT6 (which relate desires and intentions) will be satisfied by AgentSpeak(L) agents. However, we think this is the most reasonable interpretation, as intentions are normally seen as those desires the agent has committed to act so as to achieve them (hence intentions are indeed a subset of desires).
Definition 6 (Desire in AgentSpeak(L) agents). An agent at circumstance C desires a formula ϕ iff ϕ is an achievement goal in C’s set of events E (associated with any intention i), or ϕ is a current intention of the agent; more formally:
        DESag,C(ϕ) ≡ ⟨+!ϕ, i⟩ ∈ CE ∨ INTENDag,C(ϕ).
6 Again, in our semantics, because of the extension to allow belief changes in plans, internal events may be additions and deletions of beliefs too, but not in the original semantics.
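Definition 6 admits a direct sketch in the same style as the one for definition 5 (the encoding, as before, is ours):

```python
def agoals(i):
    # achievement goals (+!at) in the triggering events of an intention
    return {t[2:] for (t, body) in i if t.startswith("+!")}

def intend(CI, CE, phi):
    # Definition 5
    return any(phi in agoals(i) for i in CI) or \
           any(phi in agoals(i) for (te, i) in CE)

def des(CI, CE, phi):
    # Definition 6: phi is desired if the goal addition +!phi is among the
    # events, or if phi is already intended
    return any(te == "+!" + phi for (te, i) in CE) or intend(CI, CE, phi)

CE = [("+!g", [])]                # an achievement-goal event, not yet intended
desired = des([], CE, "g")        # g is desired ...
intended = intend([], CE, "g")    # ... but not intended in this circumstance
```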
4.4. Restating the asymmetry thesis principles for AgentSpeak(L) under closed-world assumption
We here reconsider the asymmetry thesis principles given the closed-world assumption, well known in Prolog, but also present in AgentSpeak(L). In [5], d'Inverno and Luck allow AgentSpeak(L) agents to have belief literals (rather than belief atoms) in the belief base; although they left it unspecified how to check whether a plan context is a logical consequence of the belief base, this clearly allows for open-world interpretations. Various ad hoc implementations of BDI systems also allow an open-world interpretation of beliefs by having, e.g., two tables in a relational database – one for positive beliefs, the other for negative beliefs, and everything else being unknown. However, in its original definition, Rao [19] stated that an AgentSpeak(L) agent's belief base had solely ground belief atoms (i.e., only positive beliefs can be stored). This means that a closed world was intrinsically assumed. Rao was quite concerned with providing for the first time a computable language for the BDI architecture; it was an abstract programming language and thus should be kept simple. Even though the closed-world assumption is not ideal for agent programming, AgentSpeak(L)'s BDI paradigm alone can prove quite useful in multi-agent applications. Further, the only full implementation of AgentSpeak(L) we are aware of [1, section 2.1] does not allow negated literals in the belief base. Thus, in this work on proving the asymmetry thesis principles for a BDI programming language, we have maintained the original definition both for simplicity and for the practical usefulness in regards to the current implementation of AgentSpeak(L). The closed-world assumption here means that, for an agent ag = ⟨bs, ps⟩, if bs ̸|= ϕ, then BEL(¬ϕ) should hold; conversely, if BEL(¬ϕ) does not hold, then bs |= ϕ. In consequence, whenever in the asymmetry thesis principles we have, e.g., ¬BEL(¬ϕ), then we know it is the case that, for agent ag at a circumstance C, BELag,C(ϕ).
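The effect of the closed-world assumption on the belief modality can be illustrated with a short sketch, in which bs is a set of ground atoms, as in Rao's original definition (the encoding is ours):

```python
def bel(bs, phi, negated=False):
    # Under the closed-world assumption, BEL(not phi) holds exactly when
    # phi is absent from the (positive, ground) belief base
    return (phi not in bs) if negated else (phi in bs)

bs = {"p"}
# "not BEL(not phi)" collapses to "BEL(phi)":
same = (not bel(bs, "p", negated=True)) == bel(bs, "p")
unknown = bel(bs, "q", negated=True)   # q is unknown, hence believed false
```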
Accordingly, in the light of closed-world assumption, we can restate AT1 as |= INTENDag,C (ϕ) ⇒ BELag,C (ϕ), AT4 as |= INTENDag,C (ϕ) ⇒ DESag,C (ϕ), and AT7 as |= DESag,C (ϕ) ⇒ BELag,C (ϕ). Thus, principles AT1, AT4, and AT7 cannot be proved alongside AT2, AT5, and AT8 for AgentSpeak(L) (or any other BDI logic programming language under closed-world assumption, for that matter). This makes some of the proofs in the next section simpler (by following directly from others). Nevertheless, similar lines of reasoning can be used to prove the asymmetry thesis principles for a version of AgentSpeak(L) where the closed-world assumption is dropped. Table 2 restates the asymmetry thesis principles in table 1 as used in the proofs in the next section. Note that, due to the practical nature of AgentSpeak(L), its syntax does not allow negated literals in achievement goals. Therefore, the changes in the asymmetry thesis principles related to belief on the basis of closed-world assumption, also apply for desires and intentions. Having defined how to interpret the belief, desire, and intention modalities of BDI logics for AgentSpeak(L), and restated the asymmetry thesis principles in the light of
closed-world assumption, we are ready to start proving which of the asymmetry thesis principles are satisfied by AgentSpeak(L) agents.
5. Proof of the asymmetry thesis principles for AgentSpeak(L)
In this section, we investigate some characteristics of AgentSpeak(L) as a language for writing rational BDI agents. Before presenting the proofs, we discuss the implications of proving these principles for AgentSpeak(L). For brevity, we discuss only the principles involving the pair intention–belief. Similar discussions also apply for the other two pairs. In [23], Rao and Georgeff stated, using their modal logic, that it is irrational for a system to satisfy INTEND(ϕ) ∧ BEL(¬ϕ). So we want the negation of this formula to be a valid formula for a BDI agent at every moment of its execution, i.e., we require ¬(INTEND(ϕ) ∧ BEL(¬ϕ)) to be a valid formula, which is the same as requiring that |= INTEND(ϕ) ⇒ ¬BEL(¬ϕ). This, in the light of closed-world assumption, gives us, for an AgentSpeak(L) agent ag at circumstance C: |= INTENDag,C (ϕ) ⇒ BELag,C (ϕ),
(AT1)
as in table 2. It is rational, in the sense that it is "allowed", for a BDI system to exhibit behaviour satisfying INTEND(ϕ) ∧ ¬BEL(ϕ). That is, an agent will still be considered rational even if it exhibits this intention–belief incompleteness. In other words, we cannot require intention–belief completeness to be a property valid for all agents, as this requirement would be too restrictive and would exclude behaviour considered rational. Formally, we cannot require ¬(INTEND(ϕ) ∧ ¬BEL(ϕ)) to be valid, thus: ⊭ INTENDag,C(ϕ) ⇒ BELag,C(ϕ).
(AT2)
Table 2 Asymmetry thesis principles for an AgentSpeak(L) Agent ag at circumstance C, under closed-world assumption.
Label   Principle
AT1     |= INTENDag,C(ϕ) ⇒ BELag,C(ϕ)
AT2     ⊭ INTENDag,C(ϕ) ⇒ BELag,C(ϕ)
AT3     ⊭ BELag,C(ϕ) ⇒ INTENDag,C(ϕ)
AT4     |= INTENDag,C(ϕ) ⇒ DESag,C(ϕ)
AT5     ⊭ INTENDag,C(ϕ) ⇒ DESag,C(ϕ)
AT6     ⊭ DESag,C(ϕ) ⇒ INTENDag,C(ϕ)
AT7     |= DESag,C(ϕ) ⇒ BELag,C(ϕ)
AT8     ⊭ DESag,C(ϕ) ⇒ BELag,C(ϕ)
AT9     ⊭ BELag,C(ϕ) ⇒ DESag,C(ϕ)
Similarly, we cannot require BEL(ϕ) ⇒ INTEND(ϕ) (belief–intention completeness) to be a valid formula, which gives us the following asymmetry thesis principle: ⊭ BELag,C(ϕ) ⇒ INTENDag,C(ϕ).
(AT3)
The same observations hold for intention–desire and desire–belief. A programming language that is ideal for developing rational BDI agents should be one that:

– guarantees that agents written in it never exhibit irrational behaviour along their execution or, equivalently, that agents exhibit rational behaviour at every step of their execution (a safety property); and

– does not preclude agents from exhibiting behaviours considered rational (an expressiveness property).

In summary, it is important to know, for a programming language such as AgentSpeak(L), which of the asymmetry thesis principles are automatically enforced by its semantics. Programmers should be aware that it is for them to ensure those particular aspects of BDI rationality that are not enforced automatically (e.g., by following adequate programming styles). Each of the next three sections below is concerned with the asymmetry thesis principles that refer to the relations between the BDI attitudes taken pairwise.
5.1. Intention–belief principles
The argumentation on the three principles AT1–AT3 (relating intentions and beliefs) requires some discussion on the practice and style of programming with AgentSpeak(L). If AgentSpeak(L) programmers call a certain achievement goal !ϕ, then it would be reasonable (from an agent-oriented software engineering point of view) to assume that they expect ϕ to be believed by the agent (through perception of the environment) if all goes well with the execution of any plan chosen to handle that event. However, even if the whole system (including any simulated environment) was engineered taking that into account, there is always the issue that environments may be non-deterministic. Therefore, there can be no way to ensure that ϕ will be in an agent's belief base after the plan for +!ϕ finishes execution. In summary, the argumentation above is intended to show that there are no assured connections between believed and intended formulæ. A formula ϕ will not normally be true when an event having +!ϕ is generated, but rather after a plan for it finishes execution, and even then ϕ will not necessarily be believed (this will be considered in the proof of the first lemma below). Whether or not the agent has ϕ as a belief when !ϕ becomes intended can only be assured if the programmer explicitly includes such a restriction in the context part of the plans to handle that event. Also, recall that an agent may have its belief base completely changed by belief revision from perception. The belief revision function is not part of the semantics of AgentSpeak(L), but rather a component of the general agent architecture in which an
AgentSpeak(L) interpreter is inserted. Again, as the environment is non-deterministic, nothing can be assured as to what will be believed by the agent after pursuing a desired or intended state of the world. With these issues in mind, we can proceed with the proofs of the intention–belief principles for AgentSpeak(L). An initial circumstance is a tuple with all its components empty, except the set of events:

        C0 = ⟨{}, E, {}, {}, {}, _, _, _⟩.
The only events in the set of events E of an initial circumstance are external events (i.e., they are created by belief revision). We now show that every AgentSpeak(L) agent, in an initial circumstance, is trivially intention–belief complete.
Lemma 7. If ag is an AgentSpeak(L) agent and C0 is an initial circumstance, then INTENDag,C0 (ϕ) ⇒ BELag,C0 (ϕ).
Proof. Because of the way C0 and INTENDag,C0 (ϕ) are defined, we have that INTENDag,C0 (ϕ) is false. Hence, INTENDag,C0 (ϕ) ⇒ BELag,C0 (ϕ) is a true implication.
We also show that, even though any AgentSpeak(L) agent is intention–belief complete in an initial circumstance, it is possible to use AgentSpeak(L) to write agents that come to be intention–belief incomplete (at least under some belief revisions). Note that this is a good characteristic of AgentSpeak(L): it is rational to be intention–belief incomplete and AgentSpeak(L) agents can possibly exhibit such incompleteness. The next lemma states that AT2 holds for AgentSpeak(L).
Lemma 8 (AgentSpeak(L) allows intention–belief incomplete agents). There exists an AgentSpeak(L) agent ag such that, for some ag′ and C′, ⟨ag, C0⟩ −→∗ ⟨ag′, C′⟩ and ¬(INTENDag′,C′(ϕ) ⇒ BELag′,C′(ϕ)).
Proof. The proof consists in showing an instance of a program with !ϕ in the triggering event of a plan but not including ϕ in the set of beliefs. An instance of that plan may become an intended means (i.e., be included in CI) so that INTENDag,C(ϕ) holds, yet there is no guarantee that ϕ will be believed by the agent at that point. So in this case we have a situation of intention–belief incompleteness. The details of the proof are as follows. Consider an agent ag = ⟨bs, ps⟩, where bs = {} and ps = {+p(t1) : T ← !q(t2),  +!q(t2) : T ← a(t2)}, at C0 (an initial circumstance). Suppose that, from perception of the environment, a belief p(t1) is added to the agent's belief base (i.e., bs = {p(t1)}). This belief revision generates an event, so that the agent's circumstance is now C = ⟨{}, {⟨+p(t1), T⟩}, {}, {}, {}, _, _, _⟩ (i.e., CE = {⟨+p(t1), T⟩}). From the configuration ⟨ag, C⟩, the following sequence of semantic rules would apply: (i) SelEv, (ii) Rel1, (iii) Appl1, (iv) SelAppl, (v) ExtEv,
(vi) IntSel, and (vii) Achieve. Working out the details of the rule applications, and assuming for simplicity (but without loss of generality) that no further belief revision took place, this leads to a configuration ⟨ag′, C′⟩, ag′ = ⟨bs′, ps⟩, where bs′ = {p(t1)} and C′ = ⟨{}, {⟨+!q(t2), T[+p(t1) : T ← !q(t2)]⟩}, {}, {}, {}, _, _, _⟩. In other words, the agent now has in C′E an internal event with triggering event +!q(t2), generated by the (intended) plan for handling +p(t1). From that point the following semantic rules can be applied: (viii) SelEv, (ix) Rel1, (x) Appl1, (xi) SelAppl, (xii) IntEv. This sequence of rule applications leads to a configuration ⟨ag′, C′′⟩ where C′′I = {T[+p(t1) : T ← !q(t2)][+!q(t2) : T ← a(t2)]}. In such a configuration, by definition 5, we see that INTENDag′,C′′(q(t2)) is true. However, it is possible that no further belief revision took place, which means that ¬BELag′,C′′(q(t2)) holds. Thus, we have shown by counterexample that INTENDag,C(ϕ) ⇒ BELag,C(ϕ) is not valid; hence AT2 is satisfied by AgentSpeak(L) agents.
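The final configuration in this proof can be checked mechanically with a small sketch. Plans are (trigger, body) pairs, the agoals helper follows the function used in definition 5, and the string encoding is ours:

```python
def agoals(i):
    # achievement goals (+!at) in the triggering events of an intention
    return {t[2:] for (t, body) in i if t.startswith("+!")}

bs = {"p(t1)"}                        # beliefs after the single perception
CI = [[("+p(t1)", ["!q(t2)"]),        # the plan triggered by the new belief
       ("+!q(t2)", ["a(t2)"])]]       # the intended means stacked on top
intended = any("q(t2)" in agoals(i) for i in CI)
believed = "q(t2)" in bs
# intention-belief incompleteness: q(t2) is intended but not believed
```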
From the lemma above it follows immediately that AT1 does not hold for AgentSpeak(L). The following lemma corresponds to stating that AT3 holds for AgentSpeak(L).
Lemma 9 (AgentSpeak(L) allows belief–intention incomplete agents). There exists an AgentSpeak(L) agent ag such that, for some ag′ and C′, ⟨ag, C0⟩ −→∗ ⟨ag′, C′⟩ and ¬(BELag′,C′(ϕ) ⇒ INTENDag′,C′(ϕ)).

Proof. The proof consists of showing an instance of a program with belief formulæ that are different from the formulæ in the triggering events of plans. If a formula that is presently believed by the agent does not appear as a triggering event in one of its plans in the form of a goal addition, that formula can never become intended; hence it is possible that INTENDag,C(ϕ) is false while BELag,C(ϕ) is true. A program showing that is trivial, e.g., ag = ⟨bs, ps⟩ where bs = {p1(t1)} and ps = {+p2(t2) : T ← a(t2)}.

Note that the formula BELag,C(ϕ) ⇒ INTENDag,C(ϕ) could only be valid if programmers were required to include plans in an agent's plan library for all possible additions of belief +ϕ, and those plans had in their bodies an achievement goal !ϕ. Of course that is not the case, as it would be unreasonable from the practical programming point of view. Also, for the situation in the proof above, note that it is possible that perception of the environment will make the agent no longer believe p1(t1) when the plan for +p2(t2) becomes intended, but that is dependent on environments and therefore cannot be assured for all AgentSpeak(L) agents.
Corollary 10 (Intention–belief principles in AgentSpeak(L)). Agents programmed in AgentSpeak(L) do not satisfy AT1, but satisfy AT2 and AT3.
Proof. Immediate from lemmas 7–9 above.
5.2. Intention–desire principles
AgentSpeak(L) agents are intention–desire consistent. This is equivalent to saying that principle AT4 holds for AgentSpeak(L) agents.
Lemma 11 (AgentSpeak(L) agents are intention–desire consistent). If ag is an AgentSpeak(L) agent and C is any circumstance, then INTENDag,C(ϕ) ⇒ DESag,C(ϕ).

Proof. Given that INTENDag,C(ϕ) is one of the disjuncts in the definition of DESag,C(ϕ) (definition 6), it is easy to see that |= INTENDag,C(ϕ) ⇒ DESag,C(ϕ).

Observe that, by lemma 11, we automatically have that AT5 does not hold. That is, AgentSpeak(L) agents are intention–desire complete (although this is not required of rational agents). Note that this property of AgentSpeak(L) agents appears to mean that there are behaviours considered rational in the BDI theory (intention–desire incompleteness) that cannot be programmed using AgentSpeak(L). In fact, this comes from the definition of desire that we have chosen (see definition 6), which is appropriate for practical BDI architectures.

We now show that every AgentSpeak(L) agent, in an initial circumstance, is trivially desire–intention complete.

Lemma 12. If ag is an AgentSpeak(L) agent and C0 is an initial circumstance, then DESag,C0(ϕ) ⇒ INTENDag,C0(ϕ).

Proof. By the way C0, DESag,C0(ϕ), and INTENDag,C0(ϕ) are defined, both DESag,C0(ϕ) and INTENDag,C0(ϕ) are false. Hence, DESag,C0(ϕ) ⇒ INTENDag,C0(ϕ) is trivially true.

We also show that, even though any AgentSpeak(L) agent is desire–intention complete in an initial circumstance, it is possible to use AgentSpeak(L) to write agents that evolve to be desire–intention incomplete. Note that this is also a good characteristic of AgentSpeak(L): it is rational to be desire–intention incomplete, and AgentSpeak(L) agents can evolve to exhibit this incompleteness. The next lemma states that AT6 holds for AgentSpeak(L).
Lemma 13 (AgentSpeak(L) allows desire–intention incomplete agents). There exists an AgentSpeak(L) agent ag such that, for some ag′ and C′, ⟨ag, C0⟩ −→∗ ⟨ag′, C′⟩ and ¬(DESag′,C′(ϕ) ⇒ INTENDag′,C′(ϕ)).
Proof. Given an agent's circumstance C, there can be an event ⟨+!ϕ, i⟩ ∈ CE such that DESag,C(ϕ) holds, but rule Appl2 (see section 3.2.2), which is used when there are no
applicable plans, may always apply whenever ⟨+!ϕ, i⟩ ∈ CE is selected by SE (see rule SelEv); thus INTENDag,C(ϕ) may never hold, while DESag,C(ϕ) continues to hold. A formula INTENDag,C(ϕ) can only be true after Appl1 applies, so that ϕ ∈ agoals(i) for some i ∈ CI or, possibly later, for some ⟨te, i⟩ ∈ CE (i being a suspended intention in the latter case): recall that these are the conditions under which INTENDag,C(ϕ) holds, according to definition 5. Therefore, as it cannot be guaranteed that Appl1 will eventually apply, INTENDag,C(ϕ) may never hold even though DESag,C(ϕ) does.
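As an illustration (not the proof itself), one way such a situation can arise is with a plan that is relevant but never applicable; the names b, g, c, and a below are hypothetical, chosen only for this sketch:

```agentspeak
// Initial belief, whose addition posts the achievement goal !g(t1)
b(t1).
+b(t1) : true <- !g(t1).

// The only plan relevant for +!g(t1); if c(t1) is never believed, the plan
// is never applicable, so rule Appl2 keeps re-posting the event:
// g(t1) remains desired but never becomes intended
+!g(t1) : c(t1) <- a(t1).
```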
Corollary 14 (Intention–desire principles in AgentSpeak(L)). Agents programmed in AgentSpeak(L) satisfy AT4, do not satisfy AT5, and satisfy AT6.
Proof. Immediate from lemmas 11–13 above.
Informally, satisfying AT6 reflects the fact that an AgentSpeak(L) agent's desire may never have an applicable plan that would allow the agent to satisfy it (finding such a plan makes the desire become intended). Notice that, in the semantics, we have formalised the idea that if a chosen event has no applicable plan when it is selected, it is included again in the set of events, in the hope that later on, when selected again, it may be possible to handle it (see how CE changes in Appl2). Another possibility is to simply discard the event,7 although neither alternative is actually ideal. The latter possibility does not change the satisfiability of AT6 either: INTENDag,C(ϕ) does not hold whilst DESag,C(ϕ) does, even if DESag,C(ϕ) holds only for a short time before the event is selected and discarded. The question of what to do with an event for which no applicable plan is available, when associated with internal events, can be related to the idea of intention reconsideration [32]. An open problem for AgentSpeak(L) is how to incorporate intention reconsideration into the language (in a computationally efficient way).
5.3. Desire–belief principles
AgentSpeak(L) agents are desire–belief complete in an initial circumstance:
Lemma 15. If ag is an AgentSpeak(L) agent and C0 is an initial circumstance, then DESag,C0(ϕ) ⇒ BELag,C0(ϕ).
7 In the AgentSpeak(L) interpreter mentioned in footnote 1, a command line option allows the user to choose whether events for which no applicable plans are available will be discarded or inserted back at the end of the event queue. When the option for discarding events is used, and the particular event is internal, the mechanism for plan failure is triggered. This mechanism is based on the use of goal deletion events (i.e., those prefixed with '−'); it is mentioned in [1], but no formal semantics has been given to it as yet.
Proof. Similar to the proof of lemma 7.
We also show that even though any AgentSpeak(L) agent is desire–belief complete in an initial circumstance, it is possible to use AgentSpeak(L) to write agents that go on to be desire–belief incomplete. Again, this is a good characteristic of AgentSpeak(L): it is rational to be desire–belief incomplete and AgentSpeak(L) agents can come to exhibit this incompleteness. The next lemma states that AT8 holds for AgentSpeak(L).
Lemma 16 (AgentSpeak(L) allows desire–belief incomplete agents). There exists an AgentSpeak(L) agent ag such that, for some ag′ and C′, ⟨ag, C0⟩ −→∗ ⟨ag′, C′⟩ and ¬(DESag′,C′(ϕ) ⇒ BELag′,C′(ϕ)).

Proof. Similar to the proof of lemma 8.

From the lemma above it follows immediately that AT7 does not hold for AgentSpeak(L). The following lemma corresponds to stating that AT9 holds for AgentSpeak(L).

Lemma 17 (AgentSpeak(L) allows belief–desire incomplete agents). There exists an AgentSpeak(L) agent ag such that, for some ag′ and C′, ⟨ag, C0⟩ −→∗ ⟨ag′, C′⟩ and ¬(BELag′,C′(ϕ) ⇒ DESag′,C′(ϕ)).

Proof. Similar to the proof of lemma 9.

Corollary 18 (Desire–belief principles in AgentSpeak(L)). Agents programmed in AgentSpeak(L) do not satisfy AT7, but satisfy AT8 and AT9.

Proof. Immediate from lemmas 15–17.

5.4. AgentSpeak(L) and the asymmetry thesis principles

We are now in a position to summarise all our findings with regard to the asymmetry thesis principles enforced by the AgentSpeak(L) semantics.

Theorem 19 (Asymmetry thesis principles in AgentSpeak(L)). All AgentSpeak(L) agents satisfy the asymmetry thesis principles AT2, AT3, AT4, AT6, AT8, and AT9, but do not satisfy AT1, AT5, and AT7.

Proof. Follows immediately from corollaries 10, 14 and 18.

Corollary 20 (Equivalence to BDI logics). In regard to the asymmetry thesis principles that are satisfied, AgentSpeak(L) is not equivalent to any of BDI-B1, BDI-B2, BDI-S3, BDI-R3, or BDI-W3, as defined in [23].

Proof. Rao and Georgeff proved that those logics satisfy other combinations of the asymmetry thesis principles; refer to [23] for details.
Note that Rao and Georgeff only analyse the logics BDI-B1, BDI-B2, BDI-S3, BDI-R3, and BDI-W3 with respect to the asymmetry thesis principles. It remains to investigate the other logics in the family of BDI logics they present in that paper to check whether any of them satisfy the same asymmetry thesis principles as AgentSpeak(L). This would be interesting in the process of defining a logic which is semantically equivalent to AgentSpeak(L).
6. Related work
Since Shoham's seminal paper on agent-oriented programming [24], several agent-oriented programming languages have been proposed, following various approaches. ConGolog [4] is a concurrent programming language based on the situation calculus. Fisher's Concurrent METATEM [6] is based on executable temporal logics (this in fact predates the work by Shoham). The MINERVA agent architecture [15] is based on dynamic logic programming. AgentSpeak(L) [19] is based on the BDI (Beliefs–Desires–Intentions) architecture and the more practical experience with PRS [8] and dMARS [12]. A recent approach to BDI agents based on process algebra aims at a visual language [13,14]. Other BDI programming languages, such as 3APL [9], were derived from AgentSpeak(L), improving it in some ways (e.g., in handling plan failure). Recently, a language very much inspired by AgentSpeak(L), called AgentTalk, has appeared on the Internet.8 There are also Java-based platforms for developing BDI agents, such as JAM9 [11] and JACK10 [10]. Unfortunately, none of these approaches provides both a practical platform for developing intelligent agents and a solid theoretical basis; either they concentrate on an adequate theoretical background, or they provide a practical infrastructure for agent applications. This is particularly true of the BDI-oriented ones; there still remains a distance between the theory and practice of BDI systems [7]. The work on BDI logics [20,23,25] and the BDI architecture [21,22] was arguably a turning point in agent research. However, much work is still needed so that the research results in this area can be used in making the work of multi-agent systems practitioners more formally grounded. We hope to contribute in that direction by showing some of the BDI rationality principles that are enforced by AgentSpeak(L), which has been turned into a practical agent-oriented programming language [1].
Only recently has another publication appeared that also takes an interest in properties of BDI agent-oriented programming languages. Winikoff et al. [28] give a formal semantics to a language that integrates declarative and procedural views of goals. Their work can be used in reasoning about goals so as to detect and resolve conflicts among them. Finally, motivations similar to those in this paper are found in recent work by Wobcke [29,30]. In [30], he proposed a formal
8 URL: http://www.cs.rmit.edu.au/∼winikoff/agenttalk/.
9 URL: http://www.marcush.net.
10 URL: http://www.agent-software.com.
modelling of the mental states of PRS-like agents [8] using Agent Dynamic Logic, a logic that combines Computation Tree Logic, Propositional Dynamic Logic, and BDI Logic (as, indeed, the one presented by Singh, Rao, and Georgeff in [25]). This interpretation of mental states was then applied in [29] to investigate properties of intentions and rationality defined by Rao and Georgeff such as Bratman’s asymmetry thesis (see section 2.2), the side-effect problem of intentions, and the non-transference principles.
7. Conclusion
37 38 39 40 41 42 43
4 5 6 7
10
This paper contributes towards giving theoretical support for the implementation and verification of multi-agent systems. It introduced a framework for proving BDI properties of AgentSpeak(L) agents, based on an operational semantics for such agents. This work also contributes towards the computational grounding of the notions of belief, desire, and intention from BDI logics (in that these modalities are interpreted in terms of computational processes). Our framework was then used to prove which of the asymmetry thesis principles are enforced by the AgentSpeak(L) semantics. It was found that the particular combination of asymmetry thesis principles satisfied by AgentSpeak(L) agents does not coincide with the combinations satisfied by some of the BDI modal systems presented in [23]. The exercise of relating AgentSpeak(L) to each of the asymmetry thesis principles has also helped to clarify practical details of the language. Future work includes both investigating other BDI properties that may be satisfied by AgentSpeak(L) and trying to match AgentSpeak(L) with a particular BDI logic in terms of the BDI properties they both satisfy. This may be a first step towards describing AgentSpeak(L) as a logic, which would be useful for work on proving properties of (implemented) intelligent agents. We also plan to extend the operational semantics to account for more practical aspects of agent-oriented programming, such as speech-act based communication, and to consider systems of multiple AgentSpeak(L) agents.
Acknowledgements
This research has been supported by a Marie Curie Fellowship of the European Community programme Improving Human Potential, under contract number HPMF-CT-2001-00065, for Rafael H. Bordini. In its early stages, this work was also partially supported by CNPq and FAPERGS. The first author would like to thank Robin Hirsch (from UCL-CS), who pointed out that it would be interesting to define a logic that is equivalent to AgentSpeak(L) so as to investigate its properties through the usual approach to logics. This work does not exactly intend to do so, but is directly in line with (and partially inspired by) that suggestion. Many thanks also to an anonymous reviewer of
the CLIMA paper, whose detailed comments led to the discussion of many important issues included here, as did the detailed comments of the anonymous reviewers of this paper.
References
[1] R.H. Bordini, A.L.C. Bazzan, R.O. Jannone, D.M. Basso, R.M. Vicari and V.R. Lesser, AgentSpeak(XL): Efficient intention selection in BDI agents via decision-theoretic task scheduling, in: Proceedings of the First International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS-2002), eds. C. Castelfranchi and W.L. Johnson, Bologna, Italy, 15–19 July (ACM, New York, 2002) pp. 1294–1302. [2] R.H. Bordini and Á.F. Moreira, Proving the asymmetry thesis principles for a BDI agent-oriented programming language, in: Proceedings of the Third International Workshop on Computational Logic in Multi-Agent Systems (CLIMA-02), eds. J. Dix, J.A. Leite and K. Satoh, Copenhagen, Denmark, 1st August, http://www.elsevier.nl/locate/entcs/volume70.html. CLIMA-02 was held as part of FLoC-02. This paper was originally published in Datalogiske Skrifter number 93, Roskilde University, Denmark (2002) pp. 94–108. [3] M.E. Bratman, Intention, Plans, and Practical Reason (Harvard University Press, Cambridge, MA, 1987). [4] G. de Giacomo, Y. Lespérance and H.J. Levesque, ConGolog: A concurrent programming language based on the situation calculus, Artificial Intelligence 121 (2000) 109–169. [5] M. d'Inverno and M. Luck, Engineering AgentSpeak(L): A formal computational model, Journal of Logic and Computation 8(3) (1998) 1–27. [6] M. Fisher, A survey of Concurrent METATEM – the language and its applications, in: Temporal Logics – Proceedings of the First International Conference, eds. D.M. Gabbay and H.J. Ohlbach, Lecture Notes in Artificial Intelligence, Vol. 827 (Springer, Berlin, 1994) pp. 480–505. [7] M. Georgeff, B. Pell, M. Pollack, M. Tambe and M. Wooldridge, The Belief–Desire–Intention model of agency, in: Intelligent Agents V – Proceedings of the Fifth International Workshop on Agent Theories, Architectures, and Languages (ATAL-98), held as part of the Agents' World, eds. J.P. Müller, M.P. Singh and A.S. Rao, Paris, 4–7 July, 1998 (Springer, Heidelberg, 1999) pp.
1–10. [8] M.P. Georgeff and A.L. Lansky, Reactive reasoning and planning, in: Proceedings of the Sixth National Conference on Artificial Intelligence (AAAI'87), Seattle, WA, 13–17 July, 1987 (AAAI Press, Menlo Park, CA, 1987) pp. 677–682. [9] K.V. Hindriks, F.S. de Boer, W. van der Hoek and J.-J.C. Meyer, Control structures of rule-based agent languages, in: Intelligent Agents V – Proceedings of the Fifth International Workshop on Agent Theories, Architectures, and Languages (ATAL-98), held as part of the Agents' World, eds. J.P. Müller, M.P. Singh and A.S. Rao, Paris, 4–7 July, 1998 (Springer, Heidelberg, 1999) pp. 381–396. [10] N. Howden, R. Rönnquist, A. Hodgson and A. Lucas, JACK intelligent agents™ – summary of an agent infrastructure, in: Proceedings of Second International Workshop on Infrastructure for Agents, MAS, and Scalable MAS, held with the Fifth International Conference on Autonomous Agents (Agents 2001), Montreal, Canada, 28 May–1 June (2001). [11] M.J. Huber, JAM: A BDI-theoretic mobile agent architecture, in: Proceedings of the Third International Conference on Autonomous Agents (Agents-99), Seattle, WA, 1–5 May (ACM Press, 1999) pp. 236–243. [12] D. Kinny, The distributed multi-agent reasoning system architecture and language specification, Technical report, Australian Artificial Intelligence Institute, Melbourne, Australia (1993). [13] D. Kinny, The Ψ calculus: An algebraic agent language, in: Intelligent Agents VIII – Proceedings of the Eighth International Workshop on Agent Theories, Architectures, and Languages (ATAL-2001), eds. J.-J. Meyer and M. Tambe, Seattle, WA, August 1–3, 2001 (Springer, Berlin, 2002) pp. 32–50.
[14] D. Kinny, ViP: A visual programming language for plan execution systems, in: Proceedings of the First International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS-2002, featuring 6th AGENTS, 5th ICMAS and 9th ATAL), eds. C. Castelfranchi and W.L. Johnson, Bologna, Italy, 15–19 July (ACM, New York, 2002) pp. 721–728. [15] J.A. Leite, J.J. Alferes and L.M. Pereira, MINERVA – A dynamic logic programming agent architecture, in: Intelligent Agents VIII – Proceedings of the Eighth International Workshop on Agent Theories, Architectures, and Languages (ATAL-2001), eds. J.-J. Meyer and M. Tambe, Seattle, WA, August 1–3, 2001 (Springer, Berlin, 2002) pp. 141–157. [16] R. Machado and R.H. Bordini, Running AgentSpeak(L) agents on SIM_AGENT, in: Intelligent Agents VIII – Proceedings of the Eighth International Workshop on Agent Theories, Architectures, and Languages (ATAL-2001), eds. J.-J. Meyer and M. Tambe, Seattle, WA, August 1–3, 2001 (Springer, Berlin, 2002) pp. 158–174. [17] Á.F. Moreira and R.H. Bordini, An operational semantics for a BDI agent-oriented programming language, in: Proceedings of the Workshop on Logics for Agent-Based Systems (LABS-02), held in conjunction with the Eighth International Conference on Principles of Knowledge Representation and Reasoning (KR2002), Toulouse, France, April 22–25 (2002) pp. 45–59. [18] G.D. Plotkin, A structural approach to operational semantics, Technical report, Computer Science Department, Aarhus University, Aarhus (1981). [19] A.S. Rao, AgentSpeak(L): BDI agents speak out in a logical computable language, in: Proceedings of the Seventh Workshop on Modelling Autonomous Agents in a Multi-Agent World (MAAMAW'96), eds. W. Van de Velde and J. Perram, Eindhoven, The Netherlands, 22–25 January (Springer, Berlin, 1996) pp. 42–55. [20] A.S.
Rao, Decision procedures for propositional linear-time belief-desire-intention logics, in: Intelligent Agents II – Proceedings of the Second International Workshop on Agent Theories, Architectures, and Languages (ATAL'95), held as part of IJCAI'95, eds. M. Wooldridge, J.P. Müller and M. Tambe, Montréal, Canada, August 1995 (Springer, Berlin, 1996) pp. 33–48. [21] A.S. Rao and M.P. Georgeff, An abstract architecture for rational agents, in: Proceedings of the Third International Conference on Principles of Knowledge Representation and Reasoning (KR'92), eds. C. Rich, W.R. Swartout and B. Nebel, Cambridge, MA, 25–29 October (Morgan Kaufman, San Mateo, CA, 1992) pp. 439–449. [22] A.S. Rao and M.P. Georgeff, BDI agents: From theory to practice, in: Proceedings of the First International Conference on Multi-Agent Systems (ICMAS'95), eds. V. Lesser and L. Gasser, San Francisco, CA, 12–14 June (AAAI Press, Menlo Park, CA, 1995) pp. 312–319. [23] A.S. Rao and M.P. Georgeff, Decision procedures for BDI logics, Journal of Logic and Computation 8(3) (1998) 293–343. [24] Y. Shoham, Agent-oriented programming, Artificial Intelligence 60 (1993) 51–92. [25] M.P. Singh, A.S. Rao and M.P. Georgeff, Formal methods in DAI: Logic-based representation and reasoning, in: Multiagent Systems – A Modern Approach to Distributed Artificial Intelligence, ed. G. Weiß (MIT Press, Cambridge, MA, 1999) chapter 8, pp. 331–376. [26] A. Sloman and B. Logan, Building cognitively rich agents using the SIM_AGENT toolkit, Communications of the Association for Computing Machinery 43(2) (1999) 71–77. [27] J.M. Spivey, The Z Notation: A Reference Manual, 2nd edition (Prentice Hall, Englewood Cliffs, NJ, 1992). [28] M. Winikoff, L. Padgham, J. Harland and J. Thangarajah, Declarative and procedural goals in intelligent agent systems, in: Proceedings of the Eighth International Conference on Principles of Knowledge Representation and Reasoning (KR2002), Toulouse, France, 22–25 April (2002) pp. 470–481.
[29] W.R. Wobcke, Intention and rationality for PRS-like agents, in: Proceedings of the 15th Australian Joint Conference on Artificial Intelligence, AI 2002: Advances in Artificial Intelligence, eds. B. McKay and J.K. Slaney, Canberra, Australia, 2–6 December (Springer, Berlin, 2002) pp. 167–178. [30] W.R. Wobcke, Modelling PRS-like agents’ mental states, in: Proceedings of the 7th Pacific Rim International Conference on Artificial Intelligence, PRICAI 2002: Trends in Artificial Intelligence, eds. M. Ishizuka and A. Sattar, Tokyo, Japan, 18–22 August (Springer, Berlin, 2002) pp. 138–147.
[31] M. Wooldridge, Computationally grounded theories of agency, in: Proceedings of the Fourth International Conference on Multi-Agent Systems (ICMAS-2000), ed. E. Durfee, Boston, 10–12 July (IEEE, Los Alamitos, CA, 2000) pp. 13–20. Paper for an Invited Talk. [32] M. Wooldridge and S. Parsons, Intention reconsideration reconsidered, in: Intelligent Agents V – Proceedings of the Fifth International Workshop on Agent Theories, Architectures, and Languages (ATAL-98), held as part of the Agents’ World, eds. J.P. Müller, M.P. Singh and A.S. Rao, Paris, 4–7 July, 1998 (Springer, Heidelberg, 1999) pp. 63–79.