Accepted to Fuzz-IEEE 2001, in the Fuzz-IEEE'01 proceedings: The 10th IEEE International Conference on Fuzzy Systems, 1:P038, Melbourne, Australia, December 2-5, 2001.

Put Fuzzy Cognitive Maps to Work in Virtual Worlds

Marc Parenthoën*, Patrick Reignier**, Jacques Tisseau*

* Laboratoire d'Informatique Industrielle (Li2, EA 2215 UBO/ENIB), ENIB - Parvis Blaise Pascal - 29280 Plouzané - BP 30815 - F-29608 Brest Cedex, FRANCE
Phone: +33 (0)298 05 66 31, Fax: +33 (0)298 05 66 29, Web: http://www.enib.fr/chercher/LI2
{parenthoen, tisseau}@enib.fr

** Informatique Graphique, Vision, Robotique (GRAVIR, UMR 5527 IMAG/INRIA), ZIRST Montbonnot, 655 Avenue de l'Europe, 38330 Montbonnot Saint Martin, FRANCE
Phone: +33 (0)4 76 61 54 11, Fax: +33 (0)4 76 61 52 10, Web: http://www-gravir.imag.fr
[email protected]

Keywords: Fuzzy Cognitive Maps, Believable Agents, "Charactors", Perception, Virtual Story Telling

Abstract

This article lies within the scope of interactive virtual storytelling and proposes fuzzy cognitive maps as a tool to model the emotional behavior of virtual actors improvising in free interaction within the framework of a "nouvelle vague" scenario, as Godard might have directed it. We show how fuzzy cognitive maps can be delocalized at the level of each agent to model autonomous agents within a virtual world. We describe the implementation carried out, starting from work in cognitive psychology, and illustrate it with an improvisation between a shepherd, a dog and virtual sheep.

1 Introduction

To date, most work on Virtual Reality (VR) relates to the multi-sensory immersion of a human user within virtual universes. These universes, however realistic, will lack credibility as long as they are not populated with autonomous entities. This autonomy rests on a sensorimotor autonomy: each entity is equipped with sensors and effectors enabling it to be informed about and to act on its environment; an autonomy of execution: the execution controller of each entity is independent of the controllers of the other entities; and an autonomy of decision: each entity decides according to its own personality (its history, its intentions, its state and its perceptions). The autonomy of virtual humans is one of the current stakes of VR, as D. Thalmann underlines in a recent survey [Thalmann 00]. The first work on modeling virtual actors focused on the physical behavior of avatars (MIRALab Project [Magnenat 91], Jack Project [Badler 93]). Another class of work tackles the problem of interaction between a virtual actor and a human operator: companion agents (ALIVE Project [Maes 95]), training agents (STEVE Project [Rickel 99]), presenter agents [Noma 00]. Lastly, others are interested in the interaction between virtual actors. In particular, the OZ Project [Bates 92] is centered around a drama manager which controls the evolution of the scenario [Mateas 97]. Conversely, the Virtual Theater Project rests on a set of improvisations [Hayes-Roth 96].

This article proposes the use of Fuzzy Cognitive Maps (FCMs) as a tool to model the emotional behavior of virtual actors with character ("charactors") improvising in free interaction within the framework of a "nouvelle vague" scenario, as Godard might have directed it. FCMs result from the work of psychologists. In 1948, Tolman introduced the key concept of the "cognitive map" to describe complex topological memorizing behaviours in rats [Tolman 48]. In the seventies, Axelrod described cognitive maps as directed, inter-connected, bilevel-valued graphs and used them in decision theory applied to the politico-economic field [Axelrod 76]. In 1986, Kosko extended Axelrod's graphs to the fuzzy mode, and they thus became FCMs [Kosko 86]. In 1988, Styblinski and Meyer used FCMs to analyze electric circuits [Styblinski 88]. In 1994, Dickerson and Kosko proposed the use of FCMs to obtain an overall model of a virtual world [Dickerson 94]. In this paper we describe how we delocalized FCMs at the level of each agent to model autonomous agents within a virtual world. In the following section, we define FCMs as we implemented them in the multi-agent environment oRis [Harrouet 00]. Section 3 shows the use of FCMs to define "charactors" and, as an example, section 4 puts on stage a shepherd, his dog and his herd. We thus show that FCMs are particularly well adapted to the specification and implementation of believable agents' roles.

2 Fuzzy Cognitive Maps (FCMs)

An FCM is an influence graph. Nodes are named by concepts forming the set of concepts $C = \{C_1, \dots, C_n\}$. Arcs $(C_i, C_j)$ are oriented and represent causal links between concepts, i.e. how concept $C_i$ influences concept $C_j$. Arcs are elements of the set $A = \{(C_i, C_j)\} \subset C \times C$. Weights of arcs are gathered in a link value matrix $L = (L_{ij}) \in M_n(K)$, where $K$ is $\mathbb{Z}$ or $\mathbb{R}$: if $(C_i, C_j) \notin A$ then $L_{ij} = 0$; otherwise an excitation (resp. inhibition) link from concept $C_i$ to concept $C_j$ gives $L_{ij} > 0$ (resp. $L_{ij} < 0$).

FCM concepts' activations take their values in an activation degree set $V = \{0, 1\}$ or $\{-1, 0, 1\}$ in crisp mode, or $V = [-\delta, 1]$ with $\delta = 0$ or $1$ in fuzzy mode. At each moment $t \in \mathbb{N}$, each concept $C_i$ is associated with two types of activation: an inner activation degree $a_i(t) \in V$ and an external forced activation value $fa_i(t) \in \mathbb{R}$. This defines the inner activation $a \in (V^n)^{\mathbb{N}}$ and external forced activation $fa \in (\mathbb{R}^n)^{\mathbb{N}}$ vector sequences. An FCM is a dynamical system. Initialisation is $a(0) = 0$ and the dynamics obey a recurrent relation $\forall t \geq 0,\ a(t+1) = G(fa(t), L^T \cdot a(t))$, involving the product of the link matrix with the inner activation vector, fuzzy logical operators combining this result with the external forced activation vector, and a normalisation:

$$\forall i \in [[1, n]],\quad a_i(0) = 0, \qquad \forall t \geq 0,\quad a_i(t+1) = \sigma\!\left(g_i\!\left(fa_i(t),\ \sum_{j \in [[1, n]]} L_{ji}\, a_j(t)\right)\right)$$

where $g_i : \mathbb{R}^2 \to \mathbb{R}$ are fuzzy operators combining the external forced activations with the influences propagated through the graph, and $\sigma : \mathbb{R} \to V$ normalises activations via a sigmoidal function. In fuzzy mode, $V = [-\delta, 1]$ and $\sigma$ is the sigmoidal function $\sigma_{(\delta, a_0, k)} : a \mapsto \frac{1+\delta}{1+e^{-k(a-a_0)}} - \delta$, centered at $(a_0, \frac{1-\delta}{2})$, with slope $k\,\frac{1+\delta}{4}$ at $a_0$ and limits at $\pm\infty$ of $1$ and $-\delta$ respectively. In bimodal (crisp) mode,

$$\sigma : a \mapsto \begin{cases} 0 & \text{if } \sigma_{(0,\,0.5,\,k)}(a) \leq 0.5 \\ 1 & \text{if } \sigma_{(0,\,0.5,\,k)}(a) > 0.5 \end{cases}$$

and in trimodal (crisp) mode,

$$\sigma : a \mapsto \begin{cases} -1 & \text{if } \sigma_{(1,\,0,\,k)}(a) \leq -0.5 \\ 0 & \text{if } -0.5 < \sigma_{(1,\,0,\,k)}(a) \leq 0.5 \\ 1 & \text{if } \sigma_{(1,\,0,\,k)}(a) > 0.5 \end{cases}$$
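To make the update rule concrete, here is a minimal Python sketch of one FCM cycle. The paper's own implementation is in the oRis environment; the choice of the combination operators $g_i$ as a plain sum and the default parameter values are assumptions made for illustration only.

```python
import numpy as np

def sigma(a, delta=0.0, a0=0.5, k=5.0):
    """Sigmoidal normalisation sigma_(delta, a0, k) : R -> [-delta, 1]."""
    return (1.0 + delta) / (1.0 + np.exp(-k * (a - a0))) - delta

def fcm_step(a, L, fa, delta=0.0, a0=0.5, k=5.0):
    """One cycle: a_i(t+1) = sigma(g_i(fa_i(t), sum_j L_ji a_j(t))).
    Here g_i is assumed to simply add the forced and propagated activations."""
    return sigma(fa + L.T @ a, delta, a0, k)
```

In crisp mode, `sigma` would be replaced by the bimodal or trimodal threshold defined above.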


The asymptotic behaviour ($t \to +\infty$) of an FCM with a constant external forced activation vector sequence can be a fixed point, a limit cycle or, if the map is complex enough, even a strange attractor. An FCM has a forward chaining ability only: it can answer "What happens if ...?", but, because of its non-linearity, it cannot answer "Why ...?". An FCM helps predict the evolution of the system (simulation of behavior) and can be equipped with Hebbian learning capacities, as Dickerson and Kosko propose [Dickerson 94]. The fundamental difference between an FCM and a Neural Network (NN) is that all the nodes of the FCM graph have a strong semantics, defined by the modeling of the concepts, whereas the nodes of an NN that are neither inputs nor outputs have only a weak semantics, defined by mathematical relations alone. Concerning the capacity for learning, an FCM requires the activation vectors of all the concepts, whereas an NN requires only the activation vectors of the peripheral neurons.
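Because crisp-mode activations live in a finite set, the attractor reached from a given initial state can be found by iterating until a state repeats. The sketch below assumes a single-argument step function (for instance the `fcm_step` helper above with its link matrix and forced activations already bound).

```python
def find_attractor(step, a0, max_iter=1000):
    """Iterate a crisp FCM until a state repeats and return the periodic orbit.
    'step' maps an activation tuple to the next one; a fixed point has period 1."""
    seen = {}                      # state -> cycle index at first visit
    a, t = tuple(a0), 0
    while a not in seen and t < max_iter:
        seen[a] = t
        a, t = tuple(step(a)), t + 1
    if a not in seen:
        return None                # no repetition found within max_iter
    start = seen[a]                # first index of the recurring state
    return [s for s, i in sorted(seen.items(), key=lambda kv: kv[1]) if i >= start]
```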

3 Autonomous Agent's Behaviour Modelled by FCMs

Autonomy is one of the keys to believable agent creation. It rests on a sensorimotor autonomy: each entity is equipped with sensors and effectors enabling it to be informed about and to act on its environment; an autonomy of execution: the execution controller of each entity is independent of the controllers of the other entities; and an autonomy of decision: each entity decides according to its own personality (its history, its intentions, its state and its perceptions).

Both an agent and an FCM are autonomous entities. An agent has sensors and effectors and decides on its behaviour; an FCM has perceptive, motor and emotional concepts. The decision process of an agent associated with an FCM is replaced by the FCM dynamics: the external forced activations of the perceptive concepts result from fuzzyfication of the sensors, and defuzzyfication of the inner activations of the motor concepts drives the effectors.

As an example, suppose one wants to model an agent perceiving its distance to an enemy. According to this distance and to its fear, it decides whether to flee or not. The closer the enemy, the more frightened the agent, and conversely; the more frightened it is, the faster it flees. We model this escape by the FCM of Figure 1. This FCM has 4 concepts and 3 links: "enemy close", "enemy far", "fear" and "wish to escape", with exciting links (+1) from "enemy close" to "fear" and from "fear" to "wish to escape", and an inhibiting link (−1) from "enemy far" to "fear". One chooses the fuzzy mode ($V = [0, 1]$, $\delta = 0$, $k = 5$), not forced ($fa = 0$). The activation of the sensitive concepts "enemy close" and "enemy far" is carried out by fuzzyfication of the sensor of the distance to the enemy, while defuzzyfication of "wish to escape" gives the agent a speed of escape.

Perception models can also be implemented with FCMs. Sensation versus perception: sensation results from the sensors alone, while perception is sensation influenced by the internal state. An FCM can model perception thanks to links between emotional and perceptive concepts. For example, let us add 3 links to the preceding escape FCM: a self-exciting link ($\gamma \geq 0$) on "fear" models stress, while an exciting link ($\lambda \geq 0$) from "fear" to "enemy close" together with an inhibiting link ($-\lambda \leq 0$) from "fear" to "enemy far" models paranoia. Our agent becomes perceptive according to its paranoia degree $\lambda$ and stress degree $\gamma$ (Figure 2).

Figure 1: Perception model implemented via an FCM, with link matrix
$$L = \begin{pmatrix} 0 & 0 & +1 & 0 \\ 0 & 0 & -1 & 0 \\ +\lambda & -\lambda & +\gamma & +1 \\ 0 & 0 & 0 & 0 \end{pmatrix}$$
over the concepts ("enemy close", "enemy far", "fear", "wish to escape"). Perceptive (resp. motor) concepts are drawn in dashed (resp. dotted-dashed) lines. "Enemy close" excites "fear" whereas "enemy far" inhibits it, and "fear" excites "wish to escape". $\lambda = \gamma = 0$ defines a purely sensitive FCM; if $\lambda \neq 0$ or $\gamma \neq 0$, the agent becomes perceptive according to its paranoia degree $\lambda$ and stress degree $\gamma$.

An agent can also use an FCM in an imaginary space to simulate a behavior. With its own FCMs it thus reaches self-perception, observing itself in an imaginary field. If it knows the FCMs of another agent, it gains a perception of the other and is able to mime a behavior or to cooperate.

Indeed, if an agent has an FCM, it can use it in simulation by forcing the values of the sensitive concepts and letting the motor concepts act in an imaginary space, a projection of the world. Such an agent is able to predict its own behavior, or that of any agent deciding according to this FCM, by running several cycles of the FCM in its imaginary space or by determining the FCM attractor (fixed point or limit cycle in crisp mode).
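As an illustration of the difference between a sensitive and a perceptive agent, the following sketch runs the escape FCM of Figure 1 for a few cycles and reads the defuzzyfied escape speed, once with $\lambda = \gamma = 0$ and once with $\lambda = \gamma = 0.6$ as in Figure 2. The fuzzyfication of the distance sensor and the way the forced activations are applied to the two sensitive concepts are our own assumptions; the paper does not give their exact form.

```python
import numpy as np

def escape_speed(distance, lam=0.0, gamma=0.0, cycles=3, k=5.0):
    """Escape speed decided by the FCM of Figure 1 after 'cycles' iterations.
    Concept order: 0 'enemy close', 1 'enemy far', 2 'fear', 3 'wish to escape'."""
    L = np.array([[0.0,  0.0,  1.0,  0.0],
                  [0.0,  0.0, -1.0,  0.0],
                  [lam, -lam, gamma, 1.0],
                  [0.0,  0.0,  0.0,  0.0]])
    close = max(0.0, min(1.0, 1.0 - distance))      # assumed fuzzyfication, distance in [0, 1]
    fa = np.array([close, 1.0 - close, 0.0, 0.0])   # forced activation of the sensitive concepts
    sigma = lambda x: 1.0 / (1.0 + np.exp(-k * (x - 0.5)))   # fuzzy mode, delta = 0, a0 = 0.5
    a = np.zeros(4)
    for _ in range(cycles):
        a = sigma(fa + L.T @ a)
    return a[3]                                     # defuzzyfied as the escape speed

print(escape_speed(0.2))                        # purely sensitive agent, as in Figure 2(a)
print(escape_speed(0.2, lam=0.6, gamma=0.6))    # perceptive agent, as in Figure 2(b)
```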



Figure 2: Escape speed decided by the FCM of Figure 1. The perception of the distance to an enemy can be influenced by fear: according to the proximity of an enemy and to the fear, the dynamics of the FCM decides a speed, obtained here at the 3rd cycle of the FCM. In (a), λ = γ = 0: the agent is purely sensitive and its perception of the distance to the enemy does not depend on its fear. In (b), λ = γ = 0.6: the agent is perceptive and its perception of the distance to an enemy is modified by its fear.



This is one of the keys to modelling self-perception. Moreover, if an agent knows the FCMs of another agent, it can anticipate the behavior of this agent by simulating a scenario, unrolling the cycles of these FCMs in an imaginary space. This technique opens the way to true co-operation between agents, each one being able to represent the behavior of the other.

4 Charactors with FCMs

We will see how to carry out an interactive film telling the story of a sheepdog with the assistance of FCMs. The cinematographic ideas used take their inspiration from the "nouvelle vague" school, which proposes that the director give the actors not a script but a temporal frame of obligatory events between which the actors improvise according to their roles. Let us illustrate this with a story of mountain pastures. Once upon a time, there were a shepherd, sheep and a dog.

The shepherd moves in his pasture, can speak and can give know-how to the dog. He wants to gather his sheep in a zone which he determines according to his needs. By default he remains seated; it is a human actor who makes all the decisions of the shepherd.

Each sheep distinguishes an enemy (a dog or a man) from a sheep and from edible grass. It senses the distance and the relative direction (left or right) towards an agent which it distinguishes, and can identify the nearest enemy. It can turn left or right and run up to a maximum speed. It has a reserve of energy which it regenerates while eating and spends while running. By default, it goes straight ahead and ends up exhausted. One wants the sheep to eat grass distributed at random, to be afraid of dogs and men when they come too near and, according to their gregarious instinct, to socialize. One chooses a principal FCM whose set of concepts is the union of the sensitive concepts {"enemy close", "enemy far", "energy high", "energy low"}, the motor concepts {"eat", "socialize", "escape", "run"} and the central concepts {"satisfaction", "fear"}. This FCM decides the speed by defuzzyfication of the concept "run", and the direction by defuzzyfication of the 3 concepts "eat", "socialize" and "escape", each activation corresponding to a weight on the relative direction to follow (to the left or to the right), respectively to go towards grass, to join a sheep and to flee an enemy.

The dog is able to identify the men, the sheep, the zone of the pasture and the guard point. It distinguishes visually and auditorily its shepherd from another man and can locate, among a group of sheep, the sheep furthest from the zone. It can turn left or right and run up to a maximum speed. Initially young and crazy, its behavior consists in running after the sheep, which disperses them quickly (Figure 3a). First, the shepherd wants the dog to obey the command not to move, which will have as a consequence a socialization of the sheep. This is carried out by giving the dog an FCM sensitive to the message of the shepherd and inhibiting the desire for running (Figure 3b). The behavior of the dog is decided by the FCM, and the dog immobilizes itself when its shepherd shouts "stop" (diffusion of the message "stop"). Then, the shepherd gives the dog an FCM structured according to the concepts associated with the zone of the herd, with the fact that a sheep is out of or in this zone, as well as the concepts making it possible to bring back a sheep (Figure 3c,d,e) and to maintain the herd in the zone while standing at the guard point, i.e. on the edge of the zone, opposite the shepherd.

It is remarkable to observe that the "S"-shaped trajectory of the virtual sheepdog emerging from this simulation in virtuo, which was not explicitly programmed, is a strategy observable in vivo in a real sheepdog bringing back a group of sheep.
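Coming back to the sheep's principal FCM, its direction defuzzyfication can be sketched as a blend of candidate headings weighted by the activations of "eat", "socialize" and "escape". The blending scheme and angle conventions below are assumptions made for illustration; the paper only states that each activation weighs the relative direction to follow.

```python
import math

def sheep_direction(a_eat, a_socialize, a_escape,
                    angle_grass, angle_sheep, angle_enemy):
    """Relative steering angle of a sheep in radians (positive means turn left).
    Activations weigh the headings towards grass, towards the nearest sheep,
    and away from the nearest enemy, respectively."""
    candidates = [(a_eat,       angle_grass),
                  (a_socialize, angle_sheep),
                  (a_escape,    angle_enemy + math.pi)]   # fleeing: opposite direction
    # Blend unit vectors rather than raw angles to avoid wrap-around problems.
    x = sum(w * math.cos(ang) for w, ang in candidates)
    y = sum(w * math.sin(ang) for w, ang in candidates)
    return math.atan2(y, x)
```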

[Figure 3, panels (c)-(e): snapshots of the simulation and the two sheepdog FCMs. Panel (d), "Sheepdog's main FCM", involves concepts such as "sheep close/far", "sheep close to / far from zone", "sheep inside/outside herd", "close/far guard point", "guard", "bring back", "run", "stop", "no move" and "follow slowly". Panel (e), "Bring back angle's FCM", involves concepts such as "sheep left/right", "zone left/right", "large/small zone" and "turn left/right".]

Figure 3: Sheepdog and sheep playing the roles given by their FCMs. In (a), the shepherd is motionless; a circular zone represents the place where the sheepdog must gather the herd, and to this zone is attached a guard point, diametrically opposed to the position of the shepherd, where the sheepdog must stand once all the sheep are inside the zone. Initially, the behavior of a dog without an FCM is to run after the sheep, which quickly disperse out of the regrouping zone. In (b), an elementary FCM implements the obedience of the dog to the "stop" message of its shepherd by inhibiting its desire to run. For this simulation in virtuo, the socialization of the sheep is inhibited. In (c), the film of the sheepdog bringing back 3 sheep was played with the sheep driven by their own FCMs and the dog driven by the FCMs shown in (d) and (e). In (d), the principal FCM of the dog translates the role consisting in bringing a sheep back into the zone and in holding a position relative to the shepherd when all the sheep are in the desired zone. In (e), this FCM decides the angle of approach towards the sheep to bring it back into the zone: go towards the sheep, but approach it from the direction opposite to the zone.

5 Conclusion

In this paper, we associated FCMs with each agent to define its behavior. These autonomous agents in a virtual universe play their roles according to the structure of their FCMs and improvise between the constraints of a scenario inspired by the manner of "la nouvelle vague". Implemented in the multi-agent environment oRis, the delocalization of the FCMs at the level of each agent enables us to define the decisional part of their life cycles. The artificial agents implemented with FCMs are not only sensitive but perceptive: their behaviors depend on their internal state retroacting on the sensors. The FCMs can force the values of their sensitive concepts and make their motor concepts act in an imaginary world. This makes it possible to simulate a behaviour without carrying it out; this behavior is the one decided by these FCMs. They can be those of the agent which simulates, or those of another agent; this can be regarded as the perception of oneself (self-perception) or the perception of others, thus making co-operation between agents easier. The principal difficulty of FCMs lies in their construction: it is necessary to extract relevant concepts and to determine the links between these concepts. Learning procedures, of the Hebbian type, can be planned to facilitate this construction.

References

[Axelrod 76] Axelrod R., Structure of Decision, Princeton University Press, Princeton, New Jersey, 1976.

[Badler 93] Badler N., Phillips C., Webber B., Simulating Humans, Oxford University Press, New York, 1993.

[Bates 92] Bates J., Virtual Reality, Art, and Entertainment, Presence, 1(1):133-138, MIT Press, 1992.

[Dickerson 94] Dickerson J. A., Kosko B., Virtual Worlds as Fuzzy Cognitive Maps, Presence, 3(2):173-189, MIT Press, 1994.

[Harrouet 00] Harrouet F., oRis : s'immerger par le langage pour le prototypage d'univers virtuels à base d'entités autonomes, Thèse de Doctorat, Université de Bretagne Occidentale, Brest, France, December 8, 2000.

[Hayes-Roth 96] Hayes-Roth B., Van Gent R., Story-making with Improvisational Puppets and Actors, Technical Report KSL-96-05, Knowledge Systems Laboratory, Stanford University, 1996.

[Kosko 86] Kosko B., Fuzzy Cognitive Maps, International Journal of Man-Machine Studies, 24:65-75, 1986.

[Maes 95] Maes P., Artificial Life meets Entertainment: Interacting with Lifelike Autonomous Agents, Communications of the ACM, 38(11):108-114, November 1995.

[Magnenat 91] Magnenat-Thalmann N., Thalmann D., Complex Models for Animating Synthetic Actors, IEEE Computer Graphics and Applications, 11(5):32-44, 1991.

[Mateas 97] Mateas M., An Oz-Centric Review of Interactive Drama and Believable Agents, Technical Report CMU-CS-97-156, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, June 1997.

[Noma 00] Noma T., Zhao L., Badler N., Design of a Virtual Human Presenter, IEEE Computer Graphics and Applications, 20(4):79-85, July/August 2000.

[Perlin 95] Perlin K., Goldberg A., Improv: a System for Scripting Interactive Actors in Virtual Worlds, Computer Graphics, 29(3):1-11, 1995.

[Rickel 99] Rickel J., Johnson W. L., Animated Agents for Procedural Training in Virtual Reality: Perception, Cognition, and Motor Control, Applied Artificial Intelligence, 13:343-382, 1999.

[Styblinski 88] Styblinski M. A., Meyer B. D., Fuzzy Cognitive Maps, Signal Flow Graphs and Qualitative Circuit Analysis, Proceedings of the 2nd IEEE International Conference on Neural Networks (ICNN-87), 2:549-556, July 1988.

[Thalmann 00] Thalmann D., Challenges for the Research in Virtual Humans, Workshop on Achieving Human-like Behavior in Interactive Animated Agents, Barcelona, Spain, June 3, 2000.

[Tolman 48] Tolman E. C., Cognitive Maps in Rats and Men, Psychological Review, 55(4):189-208, 1948.
