Rule Schemata for Game Artificial Intelligence∗

Flávio S. Corrêa da Silva
Department of Computer Science
Universidade de São Paulo
005508-090 – São Paulo, SP
Brazil
[email protected]

Wamberto W. Vasconcelos
Department of Computing Science
University of Aberdeen
AB24 3UE – Aberdeen
United Kingdom
[email protected]

Abstract

Artificial intelligence and computer games enjoy a healthy level of cooperation and integration. Rule-based systems are a promising means to specify interface standards for artificial intelligence tools and modules for games, as advocated by the International Game Developers Association. Rules, however, can be too flexible, allowing undisciplined and “dirty” programming styles and solutions. We advocate in this paper that although rules are a good starting point towards standardising artificial intelligence techniques in games, they must be complemented with automatically verifiable rule schemata to ensure the appropriate implementation of such techniques and theories. We illustrate our point with a specific rule-based implementation of a theory of norms for synthetic characters which enables the specification of sophisticated behaviours. We also describe a rule-based framework for the design and fast prototyping of games.

1 Introduction

Mutual cooperation and integration between Artificial Intelligence and Computer Games have been acknowledged as a desirable feature and a goal by their corresponding research and industrial communities. From the standpoint of Artificial Intelligence research, computer games are an ideal testing ground for experimentation and for the presentation of research results. Games are also among the most successful areas of industrial application of artificial intelligence techniques [16]. From the standpoint of Computer Games, artificial intelligence has been pointed to as the next breakthrough for the gaming experience, after the “saturation” of computer graphics and visual realism [5, 15, 22]. Hence, the effort put forward by the International Game Developers Association (IGDA¹) to build and propose to the game industry interface standards for artificial intelligence tools and modules for games, game engines and game editors is very welcome. As presented in [13], the proposed standards shall be adapted from the Java Specification Request JSR 94 – Java Rule Engine API [14].

∗ Technical Report AUCS-TR0601, Department of Computing Science, University of Aberdeen, United Kingdom. Work partially sponsored by FAPESP and CNPq – Brazil.
¹ http://www.igda.org/

Among the many working groups engaged in the specification of the IGDA interface standards for artificial intelligence is the working group on rule-based systems. Rules are expressive, declarative, flexible, and can be implemented to execute very efficiently. These features rank rule-based inference engines high as a potential foundation on which to build artificial intelligence techniques and methods to control synthetic characters in computer games [5]. The problem with rules is that they can be too flexible. Although designed for declarative programming, rules – and more specifically rule-based systems based on JSR 94 – give room to all sorts of undisciplined and non-declarative programming.

We advocate the following: although rules are a good starting point towards standardising the incorporation of artificial intelligence techniques into games, they must be complemented with automatically verifiable rule schemata that ensure the appropriate implementation of such techniques. This is the central message of this article. In order to justify this message, we present a specific proposal – namely, a rule-based implementation of a theory of norms of behaviour for synthetic characters. Although we expect this theory and its proposed implementation to be of interest in itself to game developers and artificial intelligence researchers, we present it as a concrete case in which a rule-based schema – instead of just rules – enables the construction of sophisticated behaviour for synthetic characters. We have in this article two goals:

1. An “object level” goal: proposing a theory of norms as a useful tool to design the interactions among synthetic characters in computer games; and
2. A “meta level” goal: justifying, by means of an example, the statement above.

We assume we are providing synthetic characters with information restricted to what they would have if immersed in a physical environment (e.g., usually our characters cannot see through walls). Such design constraints can bring additional difficulties to a game developer, but the gain in believability and trustworthiness of the behaviour of synthetic characters compensates for any difficulties. We want to explicitly represent the effect of individual actions on the collective behaviour in the game and vice-versa, that is, the collective behaviour influencing and constraining individual actions. We thus aim at representing the social aspects of a game. In the theory of norms we present here, we abstract away internal or circumstantial details of each synthetic character (i.e., how a character makes decisions about which actions to take with the available information, how properties change as a result of actions, and so on).

We focus on those aspects which influence the game as a whole. Nevertheless, whenever such particularities are important, they can easily be factored in.

This article is organised as follows: in Section 1.1 we develop an illustrative scenario of application for our theory of norms. In Section 2 we introduce the foundations upon which we build our theory, based on deontic logics. In Section 3 we present the theory itself and show how it can be implemented as a rule schema, thus justifying our statement that rule schemata are useful to discipline the implementation of artificial intelligence theories. In Section 4 we describe a rule-based framework for game design. In Section 5 we review related work. Finally, in Section 6 we present some conclusions and proposed future work.

1.1 A Motivating Example

Let us consider the situation in which a synthetic character has a set of actions that can be performed at a particular point of the game – for instance, a warrior may have the choice of killing, maiming or freeing a prisoner². The decision procedure to choose (or rank) which actions to take should be independent of the design of how the characters interact. The set of permitted actions itself, however, is assembled taking into account the characters’ interactions up to that point in the game. In our warrior example, if the prisoner in question had killed the warrior’s father in the past, then the freeing option would be ruled out by the warrior’s internal convictions, that is, the warrior would be prohibited by its own code of ethics from taking this action. In the next section we explain precisely what we mean by prohibitions, as well as the complementary notions of permissions and obligations, and how these define computational behaviours.

Just as in our social environment, prohibitions, permissions and obligations can have different consequences. We can assume that no character is allowed in any circumstance to violate prohibitions or obligations, thus ruling out any attempt to perform a prohibited action or to avoid an obligation. Alternatively, in more sophisticated social settings we can determine sanctions to be applied in case a violation occurs. Sanctions can themselves introduce new obligations, in which case the system exhibits complex, entangled behaviours, which can significantly burden the computational complexity of reasoning about norms and behaviour. For instance, if a character violates an obligation, it may receive a sanction, which is itself another obligation; a system designer must specify what sanction to apply in case the character does not abide by the first sanction, and so on recursively.

² Pacifist readers should take into account that our theory works just as well in less aggressive scenarios. For example, we could consider a soccer player facing the choice of kicking the ball, charging against an opponent player or halting.


2 A Theory of Norms for Synthetic Characters

Social norms can be encoded in a multitude of ways. We are particularly interested in theories of norms:

• that can be turned into computer programs to control the behaviour of synthetic characters, and
• from which we can guarantee mathematical properties allowing the verification of desirable features of norms, e.g., whether the norms are consistent³.

As stated in Section 1, we want to take into account the dynamics of social norms, considering that the set of permitted, obliged and prohibited actions is influenced by the characters’ history of interactions. A natural choice for expressing these norms is deontic logic. As presented in [24], deontic logics date as far back as medieval philosophy. The modern treatment of this family of logics, however, is commonly reputed to have started with Mally [19] and von Wright [28]. Deontic logics add to classical (propositional) logics modal qualifiers corresponding to the notions of permissions, prohibitions and obligations.

We present here a very brief account of a simple class of monadic deontic logics. Monadic deontic logics⁴ have monadic modal operators, i.e., operators that apply to single propositions; e.g., we can have a proposition p and a deontic sentence Op, to be read as “p is obligatory”. Our presentation is, more specifically, of the so-called Smiley-Hanson systems of monadic deontic logics, as presented in [2]. In our presentation, we enrich the language with predicates over finite models, which make the language more adequate to express complex patterns of interactions. Strictly speaking, we do not add expressive power to a propositional language by adding predicates over finite models, but we make the language more “ergonomic”: predicates are a shorthand notation for lengthier purely propositional sentences.

³ Inconsistent social norms could, for example, be defined as those that make an action simultaneously obligatory and prohibited to a character.
⁴ Monadic deontic logics contrast with dyadic deontic logics, in which deontic modalities are made relative to a second proposition, and therefore relate two different propositions; e.g., we can have the propositions p and q and a deontic sentence O_q p, to be read as “p becomes obligatory in the presence of q”.

2.1 Syntax

In this section we define the syntax of our language to express deontic notions. We set out by defining the building blocks of our language, its alphabet:

Def. 1 The alphabet of our language consists of:
• A finite set of constants C = {c1, ..., cn}.


• A finite set P = {p1^{m1}, ..., ps^{ms}} of predicates pj^{mj} of fixed arities mj; this set includes the special 0-ary predicates ⊤ (verum or true) and ⊥ (falsum or false).

• A denumerable set of variables X = {X1, X2, ...}.
• The set of logical connectives {¬, ∧, ∨, →, ↔}.
• The set of deontic modalities {O, P}, obligation and permission, respectively.

We do not explicitly add parentheses to our alphabet, but we shall use them whenever necessary to make the reading of sentences clearer. Using the alphabet above, we can define terms:

Def. 2 Our set of terms T is defined as any element t ∈ C ∪ X.

We now define the sentences of our language:

Def. 3 The set of sentences S is the smallest set such that:
• For every k-ary predicate p^k ∈ P and k terms t1, ..., tk ∈ T, p^k(t1, ..., tk) ∈ S.
• ⊤, ⊥ ∈ S.
• If ϕ, ψ ∈ S, then (¬ϕ), (ϕ ∧ ψ), (ϕ ∨ ψ), (ϕ → ψ), (ϕ ↔ ψ) ∈ S.
• If ϕ ∈ S, then Oϕ, Pϕ ∈ S.

We add the abbreviation Fϕ ≡ ¬Pϕ to capture the intuitive notion of prohibition: ϕ is prohibited iff it is not permitted. We highlight two special classes of sentences:

Def. 4 The sentences of the form p^k(t1, ..., tk) are called atomic sentences.

Def. 5 The sentences of the form Oϕ, Pϕ and Fϕ, in which ϕ is an atomic sentence, are called simple deontic sentences.

We assume implicitly that every variable occurring in a sentence is universally quantified. Reading kills(t1, t2) as “t1 kills t2” and father(t1, t2) as “t1 is the father of t2”, the sentences below have the following intuitive readings:

(1) father(wizard, warrior): “wizard” is the father of “warrior”.
(2) kills(prisoner, wizard): “prisoner” kills “wizard”.
(3) kills(X1, X2) ∧ father(X2, X3) → O kills(X3, X1): for every X1, X2, X3, if X1 kills X2 and X2 is the father of X3, then X3 is obliged to kill X1.
(4) O kills(warrior, prisoner): “warrior” is obliged to kill “prisoner”.

Now one natural question to ask is: can we entail sentence (4) from sentences (1), (2) and (3)? In order to do so, we need some additional definitions.
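To make the definitions above concrete, the following minimal Python sketch (our illustration only; the class and helper names are not part of the formalism) represents terms, atomic sentences and simple deontic sentences, and encodes example sentences (1), (2) and (4).

```python
from dataclasses import dataclass
from typing import Tuple

# Terms: constants and variables are plain strings; by convention (ours, not
# the paper's) a term starting with an upper-case letter is a variable.
Term = str

def is_variable(t: Term) -> bool:
    return t[:1].isupper()

@dataclass(frozen=True)
class Atom:
    """An atomic sentence p(t1, ..., tk) (Def. 4)."""
    predicate: str
    args: Tuple[Term, ...]

@dataclass(frozen=True)
class Deontic:
    """A simple deontic sentence O(atom), P(atom) or F(atom) (Def. 5)."""
    modality: str   # "O", "P" or "F"
    body: Atom

# Example sentences from the text:
fact1 = Atom("father", ("wizard", "warrior"))                      # (1)
fact2 = Atom("kills", ("prisoner", "wizard"))                      # (2)
sentence4 = Deontic("O", Atom("kills", ("warrior", "prisoner")))   # (4)
```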

2.2 Proof Theory

In this section we lay out the proof-theoretical foundation of the language above. We set out by defining substitutions, required in our unification process:

Def. 6 A substitution σ is a finite, possibly empty set of pairs X/t, where X ∈ X is a variable and t ∈ T a term.

We now define how substitutions can be applied to sentences, giving rise to new sentences:

Def. 7 The application of σ to a construct C ∈ S ∪ T (a term or a sentence of our language), denoted by C · σ, is defined as follows:
• c · σ = c for every constant c ∈ C.
• X · σ = t if X/t ∈ σ, and X · σ = X otherwise.
• p(t1, ..., tn) · σ = p(t1 · σ, ..., tn · σ).
• ⊤ · σ = ⊤, ⊥ · σ = ⊥.
• (¬ϕ) · σ = ¬(ϕ · σ), (ϕ ∧ ψ) · σ = (ϕ · σ ∧ ψ · σ), (ϕ ∨ ψ) · σ = (ϕ · σ ∨ ψ · σ), (ϕ → ψ) · σ = (ϕ · σ → ψ · σ), (ϕ ↔ ψ) · σ = (ϕ · σ ↔ ψ · σ).
• Oϕ · σ = O(ϕ · σ), Pϕ · σ = P(ϕ · σ).

We put forth the notion of unification:

Def. 8 A substitution σ is a unification of two sentences ϕ and ψ iff ϕ · σ = ψ · σ; we also say that σ unifies ϕ and ψ.

We define the two inference rules we shall make use of:

Def. 9 Our rules of inference are:
• Unification-based Modus Ponens: from ϕ and ξ → ψ, infer ψ · σ, for every substitution σ that unifies ϕ and ξ, that is, ϕ · σ = ξ · σ.
• O-necessitation: from ϕ, infer Oϕ.
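Substitutions and unification over atomic sentences are straightforward to sketch in code. The fragment below reuses the Atom class and is_variable helper from the previous sketch, handles atomic sentences only, and is merely illustrative of Defs. 6–8, not an implementation taken from the paper.

```python
from typing import Dict, Optional

Substitution = Dict[str, str]   # maps variable names to terms

def apply_subst(atom: Atom, sigma: Substitution) -> Atom:
    """Def. 7 restricted to atomic sentences: replace bound variables."""
    return Atom(atom.predicate,
                tuple(sigma.get(t, t) if is_variable(t) else t for t in atom.args))

def unify(a: Atom, b: Atom) -> Optional[Substitution]:
    """Return a substitution sigma with a.sigma == b.sigma (Def. 8), or None."""
    if a.predicate != b.predicate or len(a.args) != len(b.args):
        return None
    sigma: Substitution = {}
    for ta, tb in zip(a.args, b.args):
        ta, tb = sigma.get(ta, ta), sigma.get(tb, tb)   # dereference earlier bindings
        if ta == tb:
            continue
        if is_variable(ta):
            sigma[ta] = tb
        elif is_variable(tb):
            sigma[tb] = ta
        else:
            return None
    return sigma

# Matching the first condition of sentence (3) against fact (2):
sigma = unify(Atom("kills", ("X1", "X2")), Atom("kills", ("prisoner", "wizard")))
print(sigma)                                              # {'X1': 'prisoner', 'X2': 'wizard'}
print(apply_subst(Atom("kills", ("X1", "X2")), sigma))    # kills(prisoner, wizard)
```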

A deontic logic is fully specified once we supply its deontic relations, namely how permissions, obligations and prohibitions relate to each other. The specification of deontic relations is a subtle and treacherous task that frequently leads to the formulation of conflicting theories (e.g. a theory in which everything that is permitted is also prohibited). The Smiley-Hanson systems of monadic deontic logics are interesting from a formal point of view, since they admit semantic models similar to those found in regular modal logics. The price we may pay to adopt these systems is to lose the capability to express some subtleties of deontic relations that can be important in specific domains. The Smiley-Hanson systems of monadic deontic logics are based on the following axiom schemata:

1. All classical tautologies, that is, (ϕ → (ϕ ∨ ψ)), (ϕ → ¬¬ϕ), and so on.
2. Pϕ → ¬O¬ϕ.
3. O(ϕ → ψ) → (Oϕ → Oψ).
4. Oϕ → Pϕ.
5. Oϕ → OOϕ.
6. POϕ → Oϕ.
7. O(Oϕ → ϕ).
8. O(POϕ → ϕ).

Each axiom schema admits an informal reading, some of which are more intuitive than others. Axiom schema (4), for example, states that every obligation is permitted. These axiom schemata are the building blocks for specific logics. As presented in [2], we can build ten different logics with these axiom schemata whose semantics are similar to those found in regular modal logics:

OK: proper axioms plus schemata (1), (2), (3).
OM: proper axioms plus schemata (1), (2), (3), (7).
OS4: proper axioms plus schemata (1), (2), (3), (5), (7).
OB: proper axioms plus schemata (1), (2), (3), (7), (8).
OS5: proper axioms plus schemata (1), (2), (3), (5), (6), (7), (8).
OK+: OK plus schema (4).
OM+: OM plus schema (4).
OS4+: OS4 plus schema (4).
OB+: OB plus schema (4).
OS5+: OS5 plus schema (4).

We now define provability:

Def. 10 A sentence ϕ is provable if, and only if,
– it is an axiom, or
– it can be derived from provable sentences using the inference rules of Def. 9 above.

We also need the following two definitions:

Def. 11 A set of sentences Ψ is inconsistent if there are ψ1, ..., ψn ∈ Ψ, n ≥ 1, such that (ψ1 ∧ ... ∧ ψn) → ⊥ is provable.

Def. 12 A sentence ϕ is provable from a set of sentences Ψ (denoted as Ψ ⊢ ϕ) if the set Ψ ∪ {¬ϕ} is inconsistent.

2.3 Semantics

In this section we define the meaning of our constructs. We give the following definition of a model:

Def. 13 A model is a triple U = ⟨W, R, V⟩ where
• W is a non-empty set of situations.
• R ⊆ W × W is a co-permissibility relation, depicting which situations can be reached from each other.
• V is a truth assignment, associating to each sentence ϕ a set V(ϕ) ⊆ W, which is the set of situations in which ϕ holds.

Intuitively, a situation can be considered as a scene of a game. For example, we could have the following situations:

1. w1 ∈ W: the warrior is surrounded by enemies.
2. w2 ∈ W: the warrior evades the enemies.
3. w3 ∈ W: the warrior is killed by its enemies.

From situation w1 we could move to situation w2, to situation w3, or continue in w1. This would amount to the co-permissibility relation (w1, w1), (w1, w2), (w1, w3) ∈ R. We now need to define ground sentences:

Def. 14 A sentence is ground if it contains no variables. Given any sentence ϕ, we define gr(ϕ) as the set of all ground sentences ϕ′ such that ϕ · σ = ϕ′ for some substitution σ.

We build on the definitions above to define truth conditions:

Def. 15 Given a model U = ⟨W, R, V⟩ and w ∈ W, we define truth conditions as follows, where ϕ and ψ are ground sentences in all clauses but the first:
• ⊨w ϕ iff ⊨w ϕ′ for all ϕ′ ∈ gr(ϕ), for an arbitrary (possibly non-ground) sentence ϕ.
• ⊨w p(t1, ..., tn) iff w ∈ V(p(t1, ..., tn)), p(t1, ..., tn) ground.
• ⊨w ⊤.
• ⊭w ⊥.
• ⊨w ¬ϕ iff ⊭w ϕ.
• ⊨w (ϕ ∧ ψ) iff ⊨w ϕ and ⊨w ψ.
• ⊨w (ϕ ∨ ψ) iff ⊨w ϕ or ⊨w ψ.
• ⊨w (ϕ → ψ) iff ⊨w ¬ϕ or ⊨w ϕ ∧ ψ.
• ⊨w (ϕ ↔ ψ) iff ⊨w (ϕ → ψ) and ⊨w (ψ → ϕ).

• ⊨w Oϕ iff ⊨w′ ϕ for every w′ such that (w, w′) ∈ R.
• ⊨w Pϕ iff ⊨w′ ϕ for some w′ such that (w, w′) ∈ R.
• ⊨w Fϕ iff ⊨w ¬Pϕ.

Finally, we put forth the notion of semantic entailment:

Def. 16 Given a sentence ϕ and a set of sentences Ψ:
• Ψ is not satisfiable if there is no model U = ⟨W, R, V⟩ and w ∈ W such that simultaneously ⊨w ψ for all ψ ∈ Ψ.
• ϕ is semantically entailed by Ψ (denoted as Ψ ⊨ ϕ) if Ψ ∪ {¬ϕ} is not satisfiable.
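The truth conditions for the deontic modalities can be evaluated directly over a finite model. The sketch below is again illustrative only: it reuses the Atom and Deontic classes from the syntax sketch, covers just ground atoms and simple deontic sentences, encodes the warrior scene above, and evaluates a permission and an obligation at situation w1.

```python
from typing import Dict, Set, Tuple, Union

Situation = str
# A model <W, R, V>: situations, co-permissibility relation, truth assignment.
Model = Tuple[Set[Situation], Set[Tuple[Situation, Situation]], Dict[Atom, Set[Situation]]]

def holds(model: Model, w: Situation, sentence: Union[Atom, Deontic]) -> bool:
    """Def. 15 for ground atoms and simple deontic sentences only."""
    W, R, V = model
    if isinstance(sentence, Atom):
        return w in V.get(sentence, set())
    successors = {w2 for (w1, w2) in R if w1 == w}
    if sentence.modality == "O":      # obligatory: holds in every co-permissible situation
        return all(holds(model, w2, sentence.body) for w2 in successors)
    if sentence.modality == "P":      # permitted: holds in some co-permissible situation
        return any(holds(model, w2, sentence.body) for w2 in successors)
    if sentence.modality == "F":      # prohibited: F(phi) is not-P(phi)
        return not any(holds(model, w2, sentence.body) for w2 in successors)
    raise ValueError("unsupported sentence")

# The scene from the text: from w1 the warrior may stay put, evade or be killed.
evades = Atom("evades", ("warrior",))
scene: Model = ({"w1", "w2", "w3"},
                {("w1", "w1"), ("w1", "w2"), ("w1", "w3")},
                {evades: {"w2"}})
print(holds(scene, "w1", Deontic("P", evades)))   # True: evading is permitted
print(holds(scene, "w1", Deontic("O", evades)))   # False: evading is not obligatory
```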

2.4 Special Classes of Models

A model should be a characterisation of how permissions, prohibitions and obligations relate to each other in the world we are analysing (or designing, in the case of computer games). Observing that a co-permissibility relation determines a directed graph connecting situations, this characterisation can be built in terms of topological features that we may enforce in the model. An important and remarkable feature of the Smiley-Hanson systems of monadic deontic logics is that the proof theory presented above is sound and complete with respect to particular classes of models, characterised by topological features of their co-permissibility relations. A proof theory is sound and complete with respect to a class of models when Ψ ⊢ ϕ if, and only if, Ψ ⊨ ϕ.

In order to characterise special classes of models, we must define the topological relations we can capture with the presented proof theory and axiom schemata. We define five features that can be observed/enforced in co-permissibility relations (a simple sketch for checking them on finite relations follows the list):

1. R is serial if all situations are connected, i.e., for every w there is at least one w′ such that either (w, w′) ∈ R or (w′, w) ∈ R.
2. R is transitive if for all situations w, w′, w″ such that (w, w′), (w′, w″) ∈ R, we have (w, w″) ∈ R.
3. R is euclidean if for all situations w, w′, w″ such that (w, w′), (w, w″) ∈ R, we have (w′, w″) ∈ R.
4. R is almost reflexive if for all situations ŵ, w, if ŵRw then wRw.
5. R is almost symmetric if for all situations ŵ, w, w′, if (ŵ, w) ∈ R then (w, w′) ∈ R implies (w′, w) ∈ R.
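The five conditions are easy to check mechanically on a finite co-permissibility relation; the sketch below is our own illustration over a set of pairs of situation names, not part of the paper's machinery.

```python
from typing import Set, Tuple

Rel = Set[Tuple[str, str]]

def situations(R: Rel) -> Set[str]:
    return {w for pair in R for w in pair}

def is_serial(R: Rel) -> bool:
    # Seriality as defined in the text: every situation is connected to some situation.
    return all(any((w, v) in R or (v, w) in R for v in situations(R))
               for w in situations(R))

def is_transitive(R: Rel) -> bool:
    return all((w, u) in R for (w, v) in R for (v2, u) in R if v == v2)

def is_euclidean(R: Rel) -> bool:
    return all((v1, v2) in R for (w1, v1) in R for (w2, v2) in R if w1 == w2)

def is_almost_reflexive(R: Rel) -> bool:
    return all((w, w) in R for (_, w) in R)

def is_almost_symmetric(R: Rel) -> bool:
    return all((u, w) in R for (_, w) in R for (w2, u) in R if w2 == w)

# The co-permissibility relation of the warrior scene in Section 2.3:
R_scene: Rel = {("w1", "w1"), ("w1", "w2"), ("w1", "w3")}
print(is_serial(R_scene), is_transitive(R_scene), is_euclidean(R_scene))  # True True False
```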


Each of the ten logical systems presented above can be proved to be sound and complete with respect to the following classes of models:

OK: all models (i.e. no special condition is imposed on R).
OM: models with almost reflexive R.
OS4: models with transitive and almost reflexive R.
OB: models with almost symmetric and almost reflexive R.
OS5: models with transitive and euclidean R.
OK+: models with serial R.
OM+: models with almost reflexive and serial R.
OS4+: models with transitive, almost reflexive and serial R.
OB+: models with almost symmetric, almost reflexive and serial R.
OS5+: models with transitive, euclidean and serial R.

3 A Rule Schema for the Theory of Norms

We use the logics outlined in the previous section as the foundation for a computational theory of norms, used to rule the behaviour of synthetic characters in computer games. The rule schema we present here is a variation of the work on norm-aware agent societies for e-commerce developed in [10, 12, 11]. As advanced in the previous sections, this theory is implemented as a rule schema, fully compatible with JSR 94. We restrict our language to the following atomic sentences:

• do(action_i(c, t1, ..., tk)): represents an action action_i to be performed by character c; the action can be parameterised by terms t1, ..., tk.
• att(action_i(c, t1, ..., tk)): represents an attempt to perform an action.
• p_i(t1, ..., tr): generic predicates, used to describe aspects of states of affairs; these predicates are updated as the result of the execution of actions.

We differentiate between attempts to perform actions (att) and actual actions being performed (do): we allow our characters to try whatever they want (via att sentences). However, illegal actions may be discarded and/or may cause sanctions, depending on the deontic notions we want or need to implement. The do sentences are thus confirmations of the att sentences. To precisely define the computational behaviours specified by our rules, we employ the notion of a global state of affairs:


Def. 17 A global state of affairs ∆ = {ϕ0, ..., ϕn} is a finite and possibly empty set of implicitly universally quantified atomic sentences ϕi, 0 ≤ i ≤ n, possibly with a deontic modality.

States of affairs may thus contain both atomic sentences and simple deontic sentences. Intuitively, a state of affairs can be used to store all relevant information on the current situation of the game. A global state triggers a collection of deontic rules, which in turn update the global state ∆, generating a new global state ∆′. Whenever an external factor (e.g., an input from a human player) changes the global state, rules can be triggered, updating it. Global states are used to characterise the social commitments of agents, stated in terms of obligations, prohibitions and permissions. We define below the syntax of our deontic rules:

Def. 18 A deontic rule Rule is defined as

Rule   ::= LHS ⇒ RHS
LHS    ::= α | ¬α | LHS ∧ LHS
RHS    ::= Update | RHS ∧ RHS
Update ::= ⊕α | ⊖α

where α is any atomic sentence, possibly with a deontic modality. Constructs Update are update operations: ⊕α inserts α into a global state, and ⊖α retracts α from a global state.
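A ground deontic rule and its update operations can be sketched as follows. The fragment is illustrative only: it reuses the Atom and Deontic classes from the syntax sketch, represents ⊕ and ⊖ as a boolean flag, and handles only ground (variable-free) rules; matching rules with variables would additionally need the unification sketched in Section 2.2.

```python
from dataclasses import dataclass
from typing import Set, Tuple, Union

Literal = Union[Atom, Deontic]        # alpha: an atomic sentence, possibly with a modality

@dataclass(frozen=True)
class Condition:
    literal: Literal
    negated: bool = False             # not-alpha holds when alpha is absent from the state

@dataclass(frozen=True)
class Update:
    literal: Literal
    insert: bool = True               # True for ⊕alpha (insert), False for ⊖alpha (retract)

@dataclass(frozen=True)
class DeonticRule:
    lhs: Tuple[Condition, ...]
    rhs: Tuple[Update, ...]

def fire(rule: DeonticRule, state: Set[Literal]) -> Set[Literal]:
    """Apply one ground rule to a global state of affairs, returning the new state."""
    for cond in rule.lhs:
        if (cond.literal in state) == cond.negated:
            return state              # some condition fails: the rule is not triggered
    new_state = set(state)
    for upd in rule.rhs:
        (new_state.add if upd.insert else new_state.discard)(upd.literal)
    return new_state

# Ground instance of the general deontic rule O(X) => ⊕P(X) (cf. axiom schema 4):
kill_act = Atom("kill", ("warrior", "prisoner"))
rule_obl_to_perm = DeonticRule(lhs=(Condition(Deontic("O", kill_act)),),
                               rhs=(Update(Deontic("P", kill_act)),))
print(fire(rule_obl_to_perm, {Deontic("O", kill_act)}))   # obligation plus new permission
```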

3.1 Sample Deontic Rules

Sets of deontic rules are not committed to any specific axiom schemata. Different sets of rules can be used to simulate different axiom schemata. For example, axiom (4) in Section 2 is simulated using the following rule (cf. [10, 12, 11]):

O(X) ⇒ ⊕P(X)

Obligations can also generate attempts to perform certain actions. Hence, a sensible general rule to be added to specific theories can be:

O(do(X)) ⇒ ⊕att(X)

Attempts become effective action executions depending on preconditions. Rules that turn attempts into action executions have the form:

att(action(c, t1, ..., tm)) ∧ α1 ∧ ... ∧ αu ∧ ¬β1 ∧ ... ∧ ¬βn ⇒ ⊕do(action(c, t1, ..., tm)) ∧ ⊖att(action(c, t1, ..., tm))

In such rules, the αi and βj are simple deontic sentences. Notice that these sentences can include prohibitions, obligations and permissions. For example, we could have a rule of the form:

att(action(c, t1, ..., tm)) ∧ P(do(action(c, t1, ..., tm))) ∧ ¬F(do(action(c, t1, ..., tm))) ⇒ ⊕do(action(c, t1, ..., tm)) ∧ ⊖att(action(c, t1, ..., tm))

Finally, action execution rules are rules specialised in updating global states. An action execution rule has the form:

do(action(c, t1, ..., tm)) ⇒ ⊕α1 ∧ ... ∧ ⊕αu ∧ ⊖β1 ∧ ... ∧ ⊖βn

Our rule schema is defined as sets of rules written precisely as above. It should be noticed that this rule schema is extremely flexible and can simulate a wide range of deontic systems, including (but not restricted to) those systems outlined in Section 2.
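Using the DeonticRule sketch above, a ground instance of the attempt-to-execution pattern of this section might be written as below; the flattened action name is purely illustrative, since that sketch does not nest parameterised actions inside do and att.

```python
# A ground instance of: att(a) ∧ P(do(a)) ∧ ¬F(do(a)) => ⊕do(a) ∧ ⊖att(a)
attempt = Atom("att", ("kill_warrior_prisoner",))
execution = Atom("do", ("kill_warrior_prisoner",))
attempt_to_execution = DeonticRule(
    lhs=(Condition(attempt),
         Condition(Deontic("P", execution)),
         Condition(Deontic("F", execution), negated=True)),
    rhs=(Update(execution), Update(attempt, insert=False)))

state = {attempt, Deontic("P", execution)}
print(fire(attempt_to_execution, state))   # att retracted, do inserted
```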

3.2 Operational Semantics

In this section we provide an operational semantics for our language of deontic rules, which matches neatly the conventional operational semantics of forward-chaining rules as specified in JSR 94. We show in Figure 1 our operational semantics as a set of numbered functions.

[Figure 1: the operational semantics, given as numbered functions (s_r, s*_l, s_l, s′_r, ...) that match a rule's left-hand side against a global state ∆, collect the satisfying substitutions, and apply the right-hand side updates to produce the new global state; the figure is garbled beyond recovery in this copy.]

3.3 Relating the Operational and Model-Theoretic Semantics

[Figure 3: Operational (Bottom) and Model-Theoretic (Top) Semantics – a diagram relating the sequence of global states ∆0, ∆′0, ∆1, ∆′1, ... produced by rule firing to situations w0, w′0, w1, ... and the co-permissibility relation R of a model; the diagram is garbled beyond recovery in this copy.]

Starting from the sequence of states of affairs, we try to build a model as a set of situations. This mapping builds a model U = ⟨W, R, V⟩ by gradually assembling a set of situations W = {w0, w′0, w1, w′1, ...} such that for some α ∈ ∆i, wi ∈ V(α), for a situation wi. The mapping selects the maximal subset of each state of affairs that depicts a set of situations, as well as the associated relation R for which any relevant model-theoretic properties hold.

3.4 Motivating Example Revisited

In order to make our discussion more concrete, we present some rules that could be used to implement the behaviour described in Section 1.1. The relevant social norms for the described scene can be captured by the following seven rules⁵:

1. Unless stated otherwise, killing is prohibited:
   ⊤ ⇒ ⊕F(do(kill(X, Y)))
2. A prisoner has limited rights. In particular, a prisoner can be killed:
   prisoner(X, Y) ⇒ ⊖F(do(kill(X, Y)))
3. If X kills Y, then X is identified as the murderer of Y and Y is identified as dead (definition of kill):
   do(kill(X, Y)) ⇒ ⊖do(kill(X, Y)) ∧ ⊕murderer(X, Y) ∧ ⊕dead(Y)
4. If someone kills a character's father, that character has the moral obligation to kill the murderer:
   dead(X) ∧ murderer(Y, X) ∧ father(X, Z) ⇒ ⊕O(do(kill(Z, Y)))

⁵ These rules are rather politically incorrect. The authors suggest that they should be taken into account for computer games only.

5. (general deontic rule) Any obligation is also a permission:
   O(X) ⇒ ⊕P(X)
6. (general deontic rule) Any obligation to execute an action generates an attempt to execute that action:
   O(do(X)) ⇒ ⊕att(X)
7. (general deontic rule) Any attempt to execute an action that is permitted and not prohibited leads to the effective execution of that action:
   att(X) ∧ P(do(X)) ∧ ¬F(do(X)) ⇒ ⊖att(X) ∧ ⊕do(X)

We populate this world with the following possible values for the variables:

• cw – the warrior (a good guy).
• cf – the warrior's father.
• cm – a murderer (a bad guy).

Using the above rules with ∆0 = {father(cf, cw)}, rule (1) generates

∆1 = { father(cf, cw),
       F(do(kill(cw, cw))), F(do(kill(cw, cf))), F(do(kill(cw, cm))),
       F(do(kill(cf, cw))), F(do(kill(cf, cf))), F(do(kill(cf, cm))),
       F(do(kill(cm, cw))), F(do(kill(cm, cf))), F(do(kill(cm, cm))) }

If an external action adds to the global state the action do(kill(cm, cf)), this generates ∆2 = ∆1 ∪ {do(kill(cm, cf))}. Triggering the rules with this new global state leads to

∆3 = { father(cf, cw),
       F(do(kill(cw, cw))), F(do(kill(cw, cf))), F(do(kill(cw, cm))),
       F(do(kill(cf, cw))), F(do(kill(cf, cf))), F(do(kill(cf, cm))),
       F(do(kill(cm, cw))), F(do(kill(cm, cf))), F(do(kill(cm, cm))),
       murderer(cm, cf), dead(cf),
       O(do(kill(cw, cm))), P(do(kill(cw, cm))), att(kill(cw, cm)) }

Notice that we have in the same state P(do(kill(cw, cm))) and F(do(kill(cw, cm))). Rule (7) makes prohibition override permission in this case, and the killing does not occur. If we add prisoner(cm, cw) to ∆3, thus generating ∆4 = ∆3 ∪ {prisoner(cm, cw)}, then rule (2) is triggered, and we finally have

∆5 = { father(cf, cw),
       F(do(kill(cw, cw))), F(do(kill(cw, cf))),
       F(do(kill(cf, cw))), F(do(kill(cf, cf))), F(do(kill(cf, cm))),
       F(do(kill(cm, cw))), F(do(kill(cm, cf))), F(do(kill(cm, cm))),
       murderer(cm, cf), dead(cf),
       O(do(kill(cw, cm))), P(do(kill(cw, cm))), att(kill(cw, cm)),
       murderer(cw, cm), dead(cm) }

Clearly, a game would require many more rules to become interesting. For example, we could lift an obligation once the expected results of the corresponding action are observed:

8. O(do(kill(X, Y))) ∧ dead(Y) ⇒ ⊖O(do(kill(X, Y)))

Different game settings can also suggest more convoluted social norms. For example, a soccer game could be specified in which prohibited actions can indeed be executed, but are followed by corresponding sanctions (foul, yellow card, red card, and so on).
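As an informal sanity check of this walk-through, the sketch below re-creates rules (1)–(7) as plain Python functions over tuples of ground facts. The shortened fact names, the sequential firing of rules within one round, and the reading of prisoner(x, y) as "x is a prisoner of y" are our own simplifications, not the paper's rule engine.

```python
from itertools import product

CHARS = ["cw", "cf", "cm"]   # warrior, father of the warrior, murderer

def step(state: frozenset) -> frozenset:
    """One round of (simplified, sequential) rule firing over a set of ground facts.

    Facts are tuples such as ("father", "cf", "cw") or ("F_kill", x, y); this is
    a hand-rolled stand-in for rules (1)-(7), not the deontic rule machinery."""
    s = set(state)
    # Rule 1: unless stated otherwise, killing is prohibited.
    s |= {("F_kill", x, y) for x, y in product(CHARS, CHARS)}
    # Rule 2: reading prisoner(x, y) as "x is a prisoner of y", the captor's
    # prohibition against killing the prisoner is lifted (our interpretation).
    for (_, x, y) in [f for f in s if f[0] == "prisoner"]:
        s.discard(("F_kill", y, x))
    # Rule 3: effect of a killing.
    for (_, x, y) in [f for f in s if f[0] == "do_kill"]:
        s.discard(("do_kill", x, y))
        s |= {("murderer", x, y), ("dead", y)}
    # Rule 4: obligation to avenge one's father.
    for (_, killer, victim) in [f for f in s if f[0] == "murderer"]:
        for (_, parent, child) in [f for f in s if f[0] == "father"]:
            if parent == victim and ("dead", victim) in s:
                s.add(("O_kill", child, killer))
    # Rules 5-7: obligation -> permission and attempt -> execution if not prohibited.
    for (_, x, y) in [f for f in s if f[0] == "O_kill"]:
        s |= {("P_kill", x, y), ("att_kill", x, y)}
    for (_, x, y) in [f for f in s if f[0] == "att_kill"]:
        if ("P_kill", x, y) in s and ("F_kill", x, y) not in s:
            s.discard(("att_kill", x, y))
            s.add(("do_kill", x, y))
    return frozenset(s)

delta = step(frozenset({("father", "cf", "cw")}))          # ~ Delta_1: nine prohibitions
delta = step(delta | {("do_kill", "cm", "cf")})            # ~ Delta_3
print(("O_kill", "cw", "cm") in delta)                     # True: the warrior must avenge
delta = step(step(delta | {("prisoner", "cm", "cw")}))     # ~ Delta_5 after two rounds
print(("dead", "cm") in delta)                             # True: the revenge is carried out
# Without rule (8) the standing obligation would keep regenerating attempts.
```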

4 A Rule-Based Framework for Game Design

We envisage a framework for game design centred around rule schemata. Figure 4 illustrates our framework: game designers (shown on the left of the diagram) create their rules using a suite of tools for editing and checking them. Since more than one designer can be involved in the rule preparation, the editing and management of rules must allow for collaborative design. Ours is a distributed framework using a shared blackboard implemented via a tuple space. Once rules are created and made available, they can be “tried out” immediately: synthetic characters (shown on the right of the diagram) put the rules to use, as the operational semantics of rules defines precise behaviours. The global states of affairs representing the operational semantics can also be stored on the blackboard, thus providing explicit and comprehensive descriptions of how the game evolved.

[Figure 4: Rule-Based Framework for Game Design – designers (left) and synthetic characters (right) interact through a shared blackboard (a tuple space) holding the rules Rule1, ..., Rulen and the successive states of affairs.]

Our framework encourages design experimentation as it short-circuits the design-followed-by-implementation process. Rules directly give rise to computational behaviours, leading to rapid prototyping of games. Once rules have been designed and checked for properties, and designers are satisfied with the behaviours of sample executions, the rules can be combined with their operational semantics and used to define simple program skeletons [25] in a programming language of choice. Our proposal should encourage high-level design principles using a metaphor (viz., deontic notions) humans relate to, rather than low-level implementational details scattered over code of a programming


language. We highlight that rules can be used by various people, from artists to programmers, and also by software artifacts.
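As a very rough illustration of the shared blackboard, the sketch below uses an in-process, lock-protected list of tuples; a real deployment would rely on an actual distributed tuple space, and all names here are our own.

```python
import threading
from typing import Any, List, Tuple

class Blackboard:
    """In-process stand-in for the shared tuple space of Figure 4.

    Designers write rule tuples, synthetic characters read them and write back
    the successive states of affairs; this single-process sketch only mimics
    the read/write interface of a tuple space."""

    def __init__(self) -> None:
        self._lock = threading.Lock()
        self._tuples: List[Tuple[Any, ...]] = []

    def write(self, tup: Tuple[Any, ...]) -> None:
        with self._lock:
            self._tuples.append(tup)

    def read(self, kind: str) -> List[Tuple[Any, ...]]:
        with self._lock:
            return [t for t in self._tuples if t and t[0] == kind]

board = Blackboard()
# A designer publishes a rule; a character later reads all rules and records
# the state of affairs it derived from them.
board.write(("rule", "O(do(X)) => +att(X)"))
board.write(("state", 1, frozenset({("father", "cf", "cw")})))
print(board.read("rule"))
```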

5 Related Work

Apart from classical studies on law, research on norms and agents has been addressed by two different disciplines: sociology and philosophy. On the one hand, socially oriented contributions highlight the importance of norms in agent behaviour (e.g., [7, 6, 23]) or analyse the emergence of norms in multi-agent systems (e.g., [30, 21]). On the other hand, logic-oriented contributions focus on the deontic logics required to model normative modalities along with their paradoxes (e.g., [29, 1, 17]). The last few years, however, have seen significant work on norms in multi-agent systems, and norm formalisation has emerged as an important research topic in the literature [8, 4, 26, 9].

Vázquez-Salceda et al. [26, 27] propose the use of a deontic logic with deadline operators. In their approach, they distinguish norm conditions from violation conditions. This is not necessary in our approach, since both types of conditions can be represented in the LHS of our rules. Their model of norms also separates sanctions and repairs (i.e., actions to be done to restore the system to a valid state); these can be expressed in the RHS of our rules without having to differentiate them from other normative aspects of our states. Our approach has an immediate advantage over [26, 27], in that we provide an implementation for our rules.

Fornara et al. [9] propose the use of norms partially written in the Object Constraint Language (OCL). Their commitments are used to represent all normative modalities; of special interest is how they deal with permissions: they stand for the absence of commitments. This feature may jeopardise the safety of the system, since it is less risky to permit only a set of safe actions, thus forbidding other actions by default. Although this feature can reduce the number of permitted actions, it allows unexpected actions to be carried out. Their “within”, “on” and “if” clauses can be encoded as the LHS of our rules, as they can all be seen as conditions when dealing with norms. Similarly, their “foreach in” and “do” clauses can be encoded as the RHS of our rules, since they are the actions to be applied to a set of agents.

López y López et al. [18] present a model of normative multi-agent systems specified in the Z language. Their proposal is quite general, since the normative goals of a norm do not have a limiting syntax, as is the case with the rules of Fornara et al. [9]. However, their model assumes that all participating agents have a homogeneous, predetermined architecture.

Artikis et al. [3] use event calculus for the specification of protocols. Obligations, permissions, empowerments, capabilities and sanctions are formalised by means of fluents (i.e., predicates that may change with time). Prohibitions are not formalised in [3] as fluents, since they assume that every action not permitted is forbidden by default. Although event calculus models time, their deontic fluents are not enough to inform an agent about all types of duties. For


instance, to inform an agent that it is obliged to perform an action before a deadline, it is necessary to show the agent the obligation fluent and the part of the theory that models the violation of the deadline. Michael et al. [20] propose a formal scripting language to model the essential semantics, namely, rights and obligations, of market mechanisms. They also formalise a theory to create, destroy and modify objects that either belong to someone or can be shared by others. Their proposal is suitable to model and implement market mechanisms. However, it is not as expressive as other proposals: for instance, it cannot model obligations with a deadline.

6 Conclusions, Discussion and Future Work

In this paper we have argued that the efforts of the International Game Developers Association (IGDA) to propose to the game industry interface standards for artificial intelligence techniques using rules must be complemented with automatically verifiable rule schemata that ensure the appropriate implementation of such techniques. We have presented a specific proposal for such schemata, namely, a theory of norms to guide the behaviour of synthetic characters. We have developed a simple albeit flexible syntax and proposed an operational semantics for a rule language in which normative aspects (viz., obligations, permissions and prohibitions) are used to specify the behaviour of characters. We have also described a complete rule-based approach to game design.

Our proposal should encourage high-level design principles based on metaphors and paradigms humans relate to, in this case, normative concepts such as obligations, permissions and prohibitions. Rather than having game designers pursue their ideas in the abstract, possibly using simple storyboards and graphic props, and then transmit their ideas to implementors, rules can “short circuit” this two-step process. If designers make use of rules (via suitable support tools such as the ones described in Section 4), then, since rules capture behaviours, the rules are an initial prototype which can be used to guide the final, more efficient implementation. Even if rules were to be “discarded” after exploration, they can still serve as a means to communicate the design decisions to implementors.

We want to investigate (semi-)automatic means to check properties of deontic rules. In particular, we want to explore the formal connections between the model-theoretic and operational semantics sketched in Section 3.3, and propose algorithms to check whether a set of deontic rules is “feasible”, that is, whether it admits at least one model. We also want to study other properties, such as a guarantee that all obligations are fulfilled and that prohibitions are never infringed, and how these can be checked. Such checks could be done “a posteriori”, after the set of deontic rules has been finalised, or during their preparation, as editing and checking operations can be interleaved.


References

[1] C. Alchourron and E. Bulygin. The Expressive Conception of Norms. In R. Hilpinen, editor, New Studies in Deontic Logics, pages 95–124, London, 1981. D. Reidel.
[2] L. Åqvist. Deontic Logic. In Handbook of Philosophical Logic, volume II: Extensions of Classical Logic. Kluwer, Dordrecht, 1984.
[3] A. Artikis, L. Kamara, J. Pitt, and M. Sergot. A Protocol for Resource Sharing in Norm-Governed Ad Hoc Networks. Volume 3476 of LNCS. Springer-Verlag, 2005.
[4] G. Boella and L. van der Torre. Permission and Obligations in Hierarchical Normative Systems. In Procs. ICAIL 2003. ACM Press, 2003.
[5] N. Combs and J.-L. Ardoint. Declarative vs. Imperative Paradigms in Games AI, 2005. Available at http://www.roaringshrimp.com/WS04-04NCombs.pdf, Last revised in June 2005.
[6] R. Conte and C. Castelfranchi. Norms as Mental Objects: From Normative Beliefs to Normative Goals. In Procs. of MAAMAW'93, Neuchatel, Switzerland, 1993.
[7] R. Conte and C. Castelfranchi. Understanding the Functions of Norms in Social Groups through Simulation. In Artificial Societies: The Computer Simulation of Social Life. UCL Press, 1995.
[8] F. Dignum. Autonomous Agents with Norms. A.I. & Law, 7(1):69–79, 1999.
[9] N. Fornara, F. Viganò, and M. Colombetti. A Communicative Act Library in the Context of Artificial Institutions. In Procs. EUMAS, 2004.
[10] A. García-Camino, J.-A. Rodríguez-Aguilar, C. Sierra, and W. Vasconcelos. A Distributed Architecture for Norm-Aware Agent Societies. In Procs. Int'l Workshop on Declarative Agent Languages & Technologies (DALT 2005), New York, USA, volume 3904 of LNAI. Springer-Verlag, Berlin, 2006.
[11] A. García-Camino, J.-A. Rodríguez-Aguilar, C. Sierra, and W. Vasconcelos. A Rule-based Approach to Norm-Oriented Programming of Electronic Institutions. ACM SIGecom Exchanges, 5(5):33–40, Jan. 2006.
[12] A. García-Camino, J.-A. Rodríguez-Aguilar, C. Sierra, and W. Vasconcelos. Norm Oriented Programming of Electronic Institutions. In Procs. 5th Int'l Joint Conf. on Autonomous Agents & Multiagent Systems (AAMAS'06), Hakodate, Japan, May 2006. ACM Press.
[13] International Game Developers Association. Report of the IGDA's Artificial Intelligence Interface Standards Committee, 2005. Available at http://www.igda.org/ai/report-2005/report-2005.html, Last accessed 05 March 2006, 16:03GMT.
[14] Java. Community Development of Java Technology Specifications: Java Specification Requests, 2005. Available at http://www.jcp.org/en/jsr/all, Last accessed 05 March 2006, 15:28GMT.
[15] D. Johnson and J. Wiles. Computer Games with Intelligence. In Procs. 10th IEEE Int'l Conf. on Fuzzy Systems, Melbourne, Australia, 2001. IEEE.
[16] J. E. Laird and M. van Lent. Human-level AI's Killer Application: Interactive Computer Games. In Procs. 17th Nat'l Conf. on A.I. (AAAI'2000), Austin, Texas, U.S.A., 2000. AAAI Press.
[17] A. Lomuscio and D. Nute, editors. Procs. of DEON 2004, volume 3065 of LNAI. Springer-Verlag, 2004.
[18] F. López y López and M. Luck. A Model of Normative Multi-Agent Systems and Dynamic Relationships. Volume 2934 of LNAI. Springer-Verlag, 2004.
[19] E. Mally. Grundgesetze des Sollens: Elemente der Logik des Willens. Leuschner & Lubensky, Graz, Austria, 1926.
[20] L. Michael, D. C. Parkes, and A. Pfeffer. Specifying and Monitoring Market Mechanisms Using Rights and Obligations. In Procs. AMEC VI, 2004.


[21] Y. Shoham and M. Tennenholtz. On Social Laws for Artificial Agent Societies: Off-line Design. Artificial Intelligence, 73(1-2):231–252, 1995.
[22] P. Sweetser. Current AI in Games: a Review, 2002. Unpublished manuscript, University of Queensland, available at http://www.itee.uq.edu.au/~penny/_papers/Game%20AI%20Review.pdf, Last accessed 05 March 2006, 16:15GMT.
[23] R. Tuomela and M. Bonnevier-Tuomela. Norms and Agreement. European Journal of Law, Philosophy and Computer Science, 5:41–46, 1995.
[24] A. Valente. Legal Knowledge Engineering: A Modelling Approach. IOS Press, Amsterdam, The Netherlands, 1995.
[25] W. W. Vasconcelos, D. Robertson, C. Sierra, M. Esteva, J. Sabater, and M. Wooldridge. Rapid Prototyping of Large Multi-Agent Systems through Logic Programming. Annals of Maths and A.I., 41(2–4):135–169, Aug. 2004.
[26] J. Vázquez-Salceda, H. Aldewereld, and F. Dignum. Implementing Norms in Multiagent Systems. Volume 3187 of LNAI. Springer-Verlag, 2004.
[27] J. Vázquez-Salceda, H. Aldewereld, and F. Dignum. Norms in Multiagent Systems: Some Implementation Guidelines. In Procs. EUMAS, 2004.
[28] G. von Wright. Deontic Logic. Mind, 60:1–15, 1951.
[29] G. von Wright. Norm and Action: A Logical Inquiry. Routledge and Kegan Paul, London, 1963.
[30] A. Walker and M. Wooldridge. Understanding the Emergence of Conventions in Multi-Agent Systems. In Procs. ICMAS 1995, San Francisco, USA, 1995.

