Prospective Storytelling Agents
G. Lopes and L. M. Pereira

Interactive Storytelling

● Digital storytelling today remains largely pre-written.
● Interactive drama seeks to endow its audience with the freedom to influence the story.
● Requirements for dramatic interaction:
  – Rich, dynamic worlds.
  – Automatic generation of non-linear dramatic content.
  – Autonomous virtual characters embodied in the narrative environment.

Prospective Logic Programming

● Anticipation is essential for proactive agents working with partial information in dynamically changing environments.
● A declarative framework supporting the specification of autonomous agents capable of anticipating and reasoning about hypothetical future scenarios.
● Builds upon grounded theories:
  – Abduction
  – Non-monotonic Reasoning
  – Preferences
● Supports modelling cognitive architectures involving planning, utility-based reasoning, moral constraints, ...

An Interactive Story

● Once upon a time, there was an autonomous robot who had to save a princess trapped in a castle.
● The robot was endowed with a set of declarative rules for decision making and moral reasoning.
● As he approached the castle, an ordeal presented itself...

Hypotheses Generation

● The goal:

  save(princess, after(gap)).

● The abductive logic program:

  save(princess, after(X)) ← cross(X).
  cross(X) ← cross_using(X, Y).

  wood_bridge(gap).
  stone_bridge(gap).

  cross_using(X, wood_bridge) ← wood_bridge(X), neg_barred(wood_bridge(X)).
  cross_using(X, stone_bridge) ← stone_bridge(X), neg_barred(stone_bridge(X)).

  neg_barred(L) ← not enemy(L).
  neg_barred(L) ← enemy(X, L), consider(kill(X)).
  enemy(L) ← enemy(_, L).

  enemy(ninja, stone_bridge(gap)).
  enemy(spider, wood_bridge(gap)).

● By reasoning backwards from the goal, the agent generates three possible hypothetical scenarios for action:

  { [...], [kill(ninja), ...], [kill(spider), ...] }

● By reasoning forwards from each scenario, its expected consequences can be derived:

  reasonable ← utility(survival, U), prolog(U > 0.6).
  reasonable_rescue(P, X) ← in_distress(P, X), reasonable.
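By way of illustration, a minimal abductive meta-interpreter in plain Prolog can reproduce this backward-reasoning step. This is a sketch, not the ACORDA machinery: the rule/2 encoding of the program, abducible/1, and the naf/1 wrapper for not are assumptions made for this example.

  % Object program from above, encoded as rule(Head, BodyList) facts.
  rule(save(princess, after(X)), [cross(X)]).
  rule(cross(X), [cross_using(X, _Y)]).
  rule(wood_bridge(gap), []).
  rule(stone_bridge(gap), []).
  rule(cross_using(X, wood_bridge), [wood_bridge(X), neg_barred(wood_bridge(X))]).
  rule(cross_using(X, stone_bridge), [stone_bridge(X), neg_barred(stone_bridge(X))]).
  rule(neg_barred(L), [naf(enemy(L))]).
  rule(neg_barred(L), [enemy(X, L), consider(kill(X))]).
  rule(enemy(L), [enemy(_, L)]).
  rule(enemy(ninja, stone_bridge(gap)), []).
  rule(enemy(spider, wood_bridge(gap)), []).

  abducible(consider(_)).   % consider/1 literals may be assumed

  % prove(+Goals, +DeltaIn, -DeltaOut): prove all goals, accumulating
  % abduced hypotheses. member/2 comes from the standard lists library.
  prove([], D, D).
  prove([G|Gs], D0, D) :- prove_one(G, D0, D1), prove(Gs, D1, D).

  prove_one(naf(G), D, D) :- \+ prove_one(G, D, _).   % negation as failure
  prove_one(G, D, D)      :- abducible(G), member(G, D).
  prove_one(G, D, [G|D])  :- abducible(G), \+ member(G, D).
  prove_one(G, D0, D)     :- \+ abducible(G), rule(G, Body), prove(Body, D0, D).

  % ?- prove([save(princess, after(gap))], [], Delta).
  % Delta = [consider(kill(spider))] ;
  % Delta = [consider(kill(ninja))].
  % (The slide's third, kill-free scenario involves abducibles elided here.)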

Preferences

● Preferences can be applied either a priori or a posteriori w.r.t. the derivation of consequences for each scenario.
● A priori preferences establish preconditions on the availability of hypotheses:

  expect(kill(X)) ← enemy(X, _).
  expect_not(kill(X)) ← consider(follow(ghandi_moral)), human(X).

● A posteriori preferences establish conditions on the consequences of each scenario under which a given hypothesis should be preferred to another:

  % Keep only the models (scenarios) not dominated by another model.
  select(Ms, SMs) :- select(Ms, Ms, SMs).

  select([], _, []).
  % Drop M1 if some model satisfies more moral outcomes.
  select([M1|Ms], AMs, SMs) :-
      count_morals(M1, NM1),
      member(M2, AMs), count_morals(M2, NM2), NM2 > NM1,
      select(Ms, AMs, SMs).
  % Drop M1 if it fails to solve the conflict while some other model does.
  select([M1|Ms], AMs, SMs) :-
      not member(solving_conflict, M1),
      member(M2, AMs), member(solving_conflict, M2),
      select(Ms, AMs, SMs).
  % Otherwise, keep M.
  select([M|Ms], AMs, [M|SMs]) :-
      select(Ms, AMs, SMs).
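For illustration only, here is one way count_morals/2 might be realised, together with a sample query; moral/1 and the example models are assumptions for this sketch, not part of the original program:

  % Hypothetical helper: count how many literals of a model are
  % flagged as moral outcomes by moral/1.
  moral(save(princess, after(gap))).
  moral(follow(ghandi_moral)).

  count_morals(Model, N) :-
      findall(L, (member(L, Model), moral(L)), Ls),
      length(Ls, N).

  % Sample query with two made-up models: the first satisfies two moral
  % literals, the second only one, so only the first survives selection.
  % ?- select([[save(princess, after(gap)), follow(ghandi_moral)],
  %            [save(princess, after(gap))]], SMs).
  % SMs = [[save(princess, after(gap)), follow(ghandi_moral)]].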

Utility Theory and Moral Constraints

● The reasoning process can be supported by utility and moral rules.
● Utility rules condition the value of scenario consequences on their probability of occurring:

  life_value(1).
  prob_kill(ninja, 0.7).
  prob_kill(spider, 0.3).

  utility(survival, U) ←
      available(kill(E)), prob_kill(E, P), life_value(V),
      U is P * V.

● Moral constraints are qualitative guidelines which rule out certain scenarios on the basis of some of their expected consequences:

  human(ninja).
  expect_not(kill(X)) ← consider(follow(ghandi_moral)), human(X).
  falsum ← knightly_rescue(princess, X), not save(princess, X).
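To make the numbers concrete, a throwaway harness (an assumption for this example, parameterised directly on the enemy rather than on available/1) compares the two kill scenarios against the 0.6 threshold used by reasonable:

  % Utility of survival per candidate enemy, as P * V.
  scenario_utility(E, U) :-
      prob_kill(E, P),
      life_value(V),
      U is P * V.

  % ?- scenario_utility(ninja, U).   % U = 0.7 -> reasonable (0.7 > 0.6)
  % ?- scenario_utility(spider, U).  % U = 0.3 -> not reasonable

Note the dramatic tension this sets up: utility favours killing the ninja, yet because human(ninja) holds, the ghandi_moral rule makes that very hypothesis unexpected a priori.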

Software Architecture

● A blend of imperative and declarative techniques, integrated using the C# programming language and the .NET framework.
● Graphics rendering and the sensory-motor systems supporting locomotion and perception were implemented procedurally using state-of-the-art game engine components (e.g. the Ogre3D rendering engine).
● XSB-Prolog and the ACORDA system support the decision-making components.
● A C# interface for XSB-Prolog was developed, allowing full bidirectional coupling of the procedural and declarative engines:
  – The game engine can query Prolog in order to update agent decisions.
  – Declarative reasoning can query the procedural perception modules.
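As a sketch only of the Prolog-side contract (every predicate name here is an assumption for illustration, not the actual interface), the bridge might query a decision predicate that itself consults perception:

  % Illustrative stand-ins: sense/2 would be backed by the C# perception
  % modules, best_scenario/2 by the prospective reasoning machinery.
  sense(robot, [at(castle), sees(gap)]).
  best_scenario(Percepts, cross(stone_bridge)) :- member(sees(gap), Percepts).

  % Entry point a game-loop bridge could query each decision cycle:
  decide(Agent, Action) :-
      sense(Agent, Percepts),
      best_scenario(Percepts, Action).

  % ?- decide(robot, A).   % A = cross(stone_bridge)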

Demo

Conclusions and Future Work

● A step forward in the application of state-of-the-art declarative reasoning techniques to the automatic generation of dramatic narratives in dynamic virtual environments.
● PLP is a logic formalism particularly well suited to interactive storytelling:
  – Non-monotonic reasoning supports robustness to novel, possibly contradictory, data entering the system.
  – Abduction and preferences allow reasoning with incomplete information.
● What's next:
  – Formalize a framework for full-blown agent specification.
  – Experiment with interaction between multiple prospective logic agents.
  – Couple prospective reasoning with other interactive storytelling techniques.
