
Artificial Life Simulation on Distributed Virtual Reality Environments

Marcio Lobo Netto, Cláudio Ranieri
Laboratório de Sistemas Integráveis – Universidade de São Paulo (USP)
São Paulo – SP – Brazil
{lobonett,ranieri}@lsi.usp.br

Abstract. This paper presents a distributed virtual reality application implemented at the USP Digital CAVE: a virtual aquarium inhabited by artificial fishes that evolve in this environment through their ability to learn. Each fish has its own cognition, which controls its actions – mainly swimming and eating. Through contact and communication with other fishes, they learn how to behave in the aquarium while trying to stay alive. The simulation has been implemented in JAVA 3D. The distributed VR version is composed of a main server and five clients. The server simulates the aquarium and the fishes it contains, while the clients render the different views of this aquarium (from inside), projected onto each of the five CAVE walls.

1. Introduction

Artificial life is a very promising research area [Netto 2001]. Although its first concepts were developed more than fifty years ago by eminent scientists such as Turing and von Neumann, while they laid the foundations of computation, only recently have computers reached the power needed for interesting experiments. As virtual reality became feasible and computers achieved very high performance, scientists began to use them as virtual laboratories, where they can coexist with their experiments and analyze them in new and unprecedented ways. As researchers interested in artificial life and cognitive sciences, and at the same time long involved with high-performance and distributed computer graphics, we have proposed the development of an Artificial Life Framework. It has been conceived to be open and flexible, accepting the addition of new features over time. The object-oriented paradigm guided the development of this framework, so different artificial creatures can be designed by combining the features it provides, including modules that emulate some of the most important aspects of living beings, such as perception, cognition, reasoning, communication, and actuation. We have not yet focused on physical body properties, but on the mental models that control the behavior of our artificial creatures.

The paper is organized as follows. Section 2 presents previous work in artificial life and related fields, including our own activities. Section 3 presents our artificial fish model, providing details about the cognition and learning skills of our fishes. Section 4 briefly describes some implementation aspects and then focuses on the distributed virtual reality side of this project, describing how we implemented the distributed version of the aquarium. Finally, some results are presented in Section 5, showing the aquarium as a desktop tool and as a distributed application running on a PC cluster displaying images in a 5-side CAVE. Section 6 concludes the paper and presents future work in this project.

2. Previous Work

Many authors are currently working with artificial life, a rich research field. In this paper we focus our attention on authors who apply cognitive skills to control the behavior of virtual characters, providing these actors with some kind of personality.

2.1. Work from Other Groups

Some researchers entered this field looking for models that would describe how real life began and evolved; in fact, they were looking for a universal concept of life, independent of the matter on which it exists [Adami 1998]. Other scientists were looking for physical models to give a natural appearance to their characters, and very interesting results have been achieved here [Terzopoulos 1998] [Phillips 1988] [Sims 1994]. Although very different in nature, these works relate to the concepts of evolution and natural selection. In some of them, evolutionary computing schemes such as genetic algorithms have been an important tool for transforming the genotype of virtual creatures, allowing them to change from one generation to another. Mutation and combination (by reproduction) provide efficient ways to modify the characteristics of a creature, as in real life. Selection plays its role by choosing those beings that, by some criterion, are best suited, and are therefore allowed to survive and reproduce. Terzopoulos presented papers showing how to develop artificial creatures with strategies that simulate natural behavior and cognition, demonstrating the possibility of training these characters to perform a certain class of actions, or even letting them learn how to perform sophisticated actions. Sims proposed an evolutionary model in which both morphology and behavior adaptation are considered; the results are very impressive, as evolution starting from simple initial models leads to characters well adapted to their environment.

2.2. Our Previous Work

A general and flexible framework was proposed in our first project, establishing an open and extensible model to support different types of implementation. This first model – WOXBOT [Miranda 2001] – consists of two main classes of modules (Figure 1). Each character has a set of sensors, responsible for gathering environmental information and translating it into an equivalent internal linguistic representation. This information passes through different processing stages intended to classify it. For instance, the visual module transforms a pictorial bitmap representation of an image, gathered through a visual process simulating photography, into a set of logical symbols, which are taken as input by the cognitive module. The cognitive module conducts decision processes based on this input and on its own internal state. In this project we considered neither knowledge acquisition nor knowledge handling: no learning skill is present, so the actors perform their actions instinctively, always following the same rules – those encoded in their particular state machine. The decisions taken here are passed to actuators (not represented in the figure), which in turn execute the command determined by the cognition.

Figure 1. WOXBOT Character Framework (perception: a rendered image is classified by a neural network into symbols A+, A, A-, B+, B, B-, forming the language passed to the cognition, a state machine; R: render, NN: neural net, SM: state machine)

In the WOXBOT project the input and output modules were designed to perform their tasks once, and no adjustment was carried out after a creature came alive. The vision system was implemented by a virtual camera followed by a neural network, trained in advance to correctly classify the scene elements, so as to provide correct information about them to the cognitive module of each virtual robot. The cognitive module was conceived as a state machine, represented as a bit string that could, as such, evolve through generations; the state machine could therefore be adaptively adjusted to perform more convenient actions. State changes in the machine were controlled by the inputs passed by the sensors, and each state could be associated with an action. The initial state machines, corresponding to the first generations of characters, were randomly produced, so the behavior of these characters was not expected to favor long survival. But the statistical dispersion of behaviors made some robots better suited to the environment than others. Periodically, some of these characters were selected to reproduce based on their measured fitness. This process of reproduction, combining crossover and mutation, improved collective behavior after some generations (Figure 2). But none of these virtual beings presented any learning ability, which has since been added to the fishes.
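The evolutionary loop described above – fitness-proportional selection, crossover, and mutation over bit-string genomes encoding state machines – can be sketched as follows. This is an illustrative sketch, not the authors' code; the class and method names, the single-point crossover, and the roulette-wheel selection are assumptions for this example.

```java
import java.util.Random;

// Sketch of the genetic operators used in the WOXBOT model: genomes are bit
// strings (here boolean arrays) that encode state machines.
public class BitStringEvolution {
    private final Random rng;

    public BitStringEvolution(long seed) { this.rng = new Random(seed); }

    // Single-point crossover between two parent genomes of equal length.
    public boolean[] crossover(boolean[] p1, boolean[] p2) {
        int cut = 1 + rng.nextInt(p1.length - 1);   // cut point in [1, n-1]
        boolean[] child = new boolean[p1.length];
        System.arraycopy(p1, 0, child, 0, cut);
        System.arraycopy(p2, cut, child, cut, p1.length - cut);
        return child;
    }

    // Flip each bit independently with the given mutation probability.
    public void mutate(boolean[] genome, double rate) {
        for (int i = 0; i < genome.length; i++)
            if (rng.nextDouble() < rate) genome[i] = !genome[i];
    }

    // Fitness-proportional (roulette-wheel) selection of one parent index.
    public int select(double[] fitness) {
        double total = 0;
        for (double f : fitness) total += f;
        double r = rng.nextDouble() * total;
        for (int i = 0; i < fitness.length; i++) {
            r -= fitness[i];
            if (r < 0) return i;
        }
        return fitness.length - 1;
    }
}
```

A generation step would then select two parents with `select`, produce a child with `crossover`, and apply `mutate` before inserting it into the next population.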

Figure 2. WOXBOT Evolutionary Model (each state machine – SM – is encoded by a corresponding gene)

An overview of the WOXBOT simulation process, showing the environment and some results, is presented below (Figure 3). It shows the robot in the arena, together with the robot's own view of the scene, which it uses to make decisions. Different genotypes were produced after some generations, showing a diversity of strategies well adjusted to the goal of survival.

Figure 3. WOXBOT Simulation

3. Fish Aquarium Framework

The fishes differ from the robot in three main ways: different perception and cognition modules, and a new communication module. Before presenting this model in detail, we describe its structure, considering its components and how they are combined into more elaborate structures.

Perception: the visual perception identifies objects in the aquarium located inside a view frustum, a pyramidal section of 3D space. Regions are distinguished in this volume based on the distance to the observer (far or close) and the relative angle (left, right, and center). The perception module thus provides symbolic information to the cognitive module; this information describes a particular situation, which the cognition considers when taking an action.

Cognition: in this framework the cognitive model is represented by a simple language structure, instead of the state machine used in the previous one. Each character has a simple language interpreter, which periodically selects one sentence from a book to be executed. The selection depends on the information given by the visual system, which provides symbols describing the recognition and relative position of scene elements. The mentioned book is a knowledge table consisting of a set of actions to be performed in each circumstance, acquired through contact with other, more experienced fishes. Multiple actions can be assigned to the same situation, and the character selects one of them to execute. This process considers all possible actions for the current situation, assigning higher selection probability to those with a higher success score in the history of the character.

Learning: a learning ability has been added to the cognition, and it also depends on the communication module. The learning process is composed of two phases. First, inexperienced fishes receive tips from more experienced colleagues, which they add as new statements to their own knowledge book. In a second step they classify these statements by importance, considering the accumulated experience in each situation and the success rate of each statement, incrementally increasing their certainty in selecting the most appropriate statement for each situation.

Communication: a communication module is responsible for the knowledge exchange between fishes, and is therefore a core component of the new learning ability associated to the cognition.

Figure 4. FISH Character Framework (visual perception – sensor and classification over the view frustum – produces symbols A+, A, A-, B+, B, B- for the cognition – decision and action by language construction – whose analyzer selects actions and exchanges sentences through the communicator)

The cognition module is composed of an analyzer and has an interface to a communicator. A view of this environment from a fish's perspective is shown below (Figure 5).

Figure 5. Fish Perspective View from the Aquarium
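The symbolic visual perception described in this section – objects inside the view frustum mapped to region symbols combining distance (close/far) and bearing (left/center/right) – can be sketched as follows. This is an illustrative sketch: the class name, thresholds, and symbol strings are assumptions for this example, not values from the paper.

```java
// Sketch: classify an object in the fish's local frame into one of six
// symbolic frustum regions, as consumed by the cognition module.
public class FrustumClassifier {
    private final double closeRadius;     // boundary between "close" and "far"
    private final double centerHalfAngle; // half-angle (radians) of the "center" band

    public FrustumClassifier(double closeRadius, double centerHalfAngle) {
        this.closeRadius = closeRadius;
        this.centerHalfAngle = centerHalfAngle;
    }

    // x: lateral offset, z: forward distance from the observer (fish-local frame).
    public String classify(double x, double z) {
        double dist = Math.sqrt(x * x + z * z);
        double angle = Math.atan2(x, z);  // 0 = straight ahead, negative = left
        String range = dist <= closeRadius ? "CLOSE" : "FAR";
        String bearing = Math.abs(angle) <= centerHalfAngle ? "CENTER"
                       : (angle < 0 ? "LEFT" : "RIGHT");
        return range + "_" + bearing;     // e.g. "CLOSE_CENTER"
    }
}
```

The resulting symbol ("FAR_LEFT", "CLOSE_CENTER", ...) is exactly the kind of situation descriptor the knowledge book indexes its statements by.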

3.1. Analyzer

The analyzer coordinates the selection of statements (words representing single actions, or sentences expressing more elaborate strategies). This selection is based on the character's own experience, expressed by the level of certainty assigned to each statement, as well as on the expected level of success. The first term (experience) is the number of times the character has been in that situation and decided for one of the possible statements. The second term (certainty) is associated with each of these statements, and is a measure of the success of that particular choice. A first implementation considers statements just as words.

We implemented a concept – vanishing memory – inspired by Markov chains, in order to be able to evaluate the effectiveness of present actions later, since they normally cannot be evaluated immediately. For instance, after deciding to follow a piece of food falling through the water, the fish must first approach the food and only then can it catch it. A memory mechanism therefore allows a success value to be associated with each decision, even when it does not lead immediately to a reward. In fact, currently the only reward comes from eating, since it updates the current energy level of each creature.

We intend to extend this concept by allowing the construction of sentences from basic words. These sentences would then be associated with strategies. In this approach a sentence is a set of actions evaluated as a unit, to which we can assign a level of success at the end of its execution. This would allow us to analyze closed strategies – those that remain fixed throughout their own execution. Here the analyzer must combine words into small sentences. The level of success or failure associated with the execution of each sentence is kept internally, and represents the character's own knowledge. This information is used to assist the selection of new sentences. The first approach – vanishing memory – is more flexible, since it allows real-time correction of any emerging strategy, as a new decision is taken at every simulation step. Furthermore, it produced nice results, showing the emergence of a common sense among all fishes; this could be observed by comparing the books of all fishes and the certainty values assigned to each word in their books.

3.2. Communicator

A communicator is present in this framework to create the possibility of knowledge exchange between characters, allowing some characters to teach others. The language model, described below, gives more details of this structure and its functionality.

3.3. Statements Building

Every character is born with a small vocabulary, corresponding to everything it knows at the beginning of its life. New vocabulary and statements may be acquired when talking to other characters. New statements may be acquired in two forms: a) by listening to speeches from colleagues and considering their suggestions (reflecting how appropriate they are), or b) by an internal inspection conducted by the constructor, which in some cases proposes new statements by combining already existing ones. This second concept emulates a type of reasoning or self-reflection by the character. Currently only the first form is implemented.
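The vanishing-memory idea – a delayed reward (eating) credited to recent decisions with diminishing weight, so that the "approach the food" decision also gains certainty – can be sketched as follows. The decay factor, class names, and the additive certainty update are assumptions for this illustration; the paper does not specify the exact rule.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of "vanishing memory" credit assignment: when a reward arrives,
// it is propagated backwards over the episode's decisions with an
// exponentially decaying weight.
public class VanishingMemory {
    static class Decision {
        final String statement;
        double certainty;              // running success score for this statement
        Decision(String s) { statement = s; certainty = 0.0; }
    }

    private final List<Decision> recent = new ArrayList<>(); // most recent last
    private final double decay;                              // e.g. 0.5

    public VanishingMemory(double decay) { this.decay = decay; }

    public void record(Decision d) { recent.add(d); }

    // Distribute a reward over past decisions: the latest gets full weight,
    // earlier ones get reward * decay, reward * decay^2, and so on.
    public void reward(double r) {
        double w = 1.0;
        for (int i = recent.size() - 1; i >= 0; i--) {
            recent.get(i).certainty += r * w;
            w *= decay;
        }
        recent.clear();                // reward consumed; start a new episode
    }
}
```

With this scheme, statements that merely lead toward an eventual reward still accumulate certainty, which the analyzer can then use to bias future selections.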

3.4. Statements Execution

The statements executor periodically takes one statement from the knowledge book and executes it, leading to a character action. Currently, the implementation assigns up to four actions to each situation. The executor measures internal character variables (states) to observe how they change and, based on that, updates the history of success or failure associated with the sentence, together with the circumstance in which it happened. A typical internal variable is the stored energy of the character. A statement belongs to one of three classes, listed below:

Action Statement: these statements lead to an action such as gathering, touching, bringing, etc. When executing bad actions (in context) the character is punished, while good actions reward it.

Speech Statement: speech statements allow small talks (normally just a sentence accompanied by its context). For instance, a valid sentence here could be – Context: if ball is close; Sentence: then catch the ball.

Movement Statement: movement statements control the character's motor system with actions such as: step ahead, step back, turn right, turn left, or just stay.
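The three statement classes and the executor's score update based on internal-variable changes could be represented as sketched below. The class and field names are assumptions; the paper only names the categories and mentions stored energy as a typical internal variable.

```java
// Sketch of a knowledge-book entry: a statement belongs to one of three
// classes and carries a success score updated by the executor.
public class Statement {
    enum Kind { ACTION, SPEECH, MOVEMENT }

    final Kind kind;
    final String context;   // situation symbol, e.g. "ball is close"
    final String body;      // what to do or say, e.g. "catch the ball"
    double successScore;    // history of success/failure for this statement

    Statement(Kind kind, String context, String body) {
        this.kind = kind;
        this.context = context;
        this.body = body;
    }

    // The executor observes an internal variable (e.g. stored energy) before
    // and after execution, and credits or penalizes the statement accordingly.
    void updateScore(double energyBefore, double energyAfter) {
        successScore += energyAfter - energyBefore;
    }
}
```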

4. Aquarium and Fishes Implementation

The simulator is composed of an environment – the aquarium, the space inhabited by the fishes – and a population of fishes, all implemented in JAVA 3D as independently running threads. Multiple view cameras can be used, allowing the user to select one of them in the single-monitor version, or to watch 5 cameras simultaneously in the CAVE version. These cameras may be fixed outside or inside the aquarium, or even attached to one of the fishes, giving users the possibility to follow the movement of that particular fish. Furthermore, interesting information is provided about the vocabulary of the fishes (cognition), their accumulated energy histories (statistics), and their instantaneous vision and corresponding action.

4.1. Distributed Virtual Reality Implementation

The CAVE implementation of this simulator runs on a PC cluster, and therefore also corresponds to a distributed processing version. The model is composed of a main server, responsible for the core simulation of all fishes' behavior in a multi-threaded approach, and 5 clients, responsible for rendering the images captured by each of the 5 virtual cameras, projected respectively on each of the 5 CAVE sides (4 walls and the floor). In this implementation the entire scene description is replicated on all cluster PCs; the main simulation runs on one of them, while the other five are solely responsible for the real-time rendering of each of the 5 views. All clients request updated information from the server about the scene elements, mainly the position and orientation of each fish and of other objects such as food particles. Synchronization is carried out automatically during the update request, keeping all clients with consistent scene information. Java Remote Method Invocation (RMI) is used to establish the communication and corresponding synchronization between the server and all 5 clients. Furthermore, the JAVA 3D stereo mode is enabled, allowing a really immersive experience in the CAVE using appropriate stereo glasses.

This simple structure was enough to ensure a real-time frame rate (around 50 frames per second) without any perceptible drift between the animations presented on each of the 5 CAVE sides. The user has an immersive experience, as if he or she were inside the aquarium. In the near future we intend to distribute the server itself among different PCs in the cluster, implementing a truly distributed VR application. The use of multi-threading, with each fish implemented as a separate character, will help in the distribution of this simulator. The current number of fishes in the simulation, 20, did not call for such distribution, but it will certainly be necessary for a larger aquarium inhabited by hundreds or thousands of fishes.
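The server/client contract described above – each rendering client polling the simulation server via Java RMI for an updated scene snapshot – could look roughly as follows. The paper confirms the use of RMI, but the interface, method, and field names here are assumptions for illustration.

```java
import java.io.Serializable;
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.util.List;

// Hypothetical remote interface: each CAVE rendering client calls
// getSceneState() once per frame to synchronize with the simulation server.
interface AquariumServer extends Remote {
    SceneSnapshot getSceneState() throws RemoteException;
}

// Minimal serializable snapshot of what a client needs to render a frame:
// position and orientation of every fish and food particle.
class SceneSnapshot implements Serializable {
    static class Entity implements Serializable {
        final double x, y, z;   // position in aquarium coordinates
        final double heading;   // orientation (radians) around the up axis
        Entity(double x, double y, double z, double heading) {
            this.x = x; this.y = y; this.z = z; this.heading = heading;
        }
    }

    final List<Entity> fishes;
    final List<Entity> food;

    SceneSnapshot(List<Entity> fishes, List<Entity> food) {
        this.fishes = fishes;
        this.food = food;
    }
}
```

Because the full scene graph is replicated on every PC, only this small per-frame state (not geometry) has to cross the network, which is consistent with the reported ~50 fps.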

5. Results

The main results presented in this paper are: a 3D virtual environment for the simulation of artificial beings – fishes in the current implementation; the corresponding virtual reality implementation of this environment, running on a five-sided CAVE; and the extension of a general-purpose artificial life character framework through the incorporation of learning abilities.

5.1. Desktop Implementation

This project also has a common desktop version, particularly useful to study the evolution of the cognitive skills of our fishes. The simulator interface provides different tabs:

Aquarium: a 3D view of the environment and fishes. Fishes are shown in different colors depending on their knowledge, which evolves over time: they are born yellow, then turn orange, and finally red as their knowledge grows (Figures 6, 7 and 8, respectively).

Vision: symbolic information about the vision of each fish (what is seen by each fish).

Cognition: the knowledge book of each fish, with the current set of words in this table.

Statistics: measurements of the current energy (food) level, its accumulated value (history of everything eaten), and the current percentage of acquired knowledge.
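The yellow-to-orange-to-red color coding of knowledge could be implemented as a simple interpolation like the one below. The linear fade and the [0, 1] knowledge scale are assumptions; the paper only names the three colors.

```java
import java.awt.Color;

// Sketch: map a fish's knowledge level to its display color.
// 0 -> yellow (newborn), 0.5 -> orange, 1 -> red (fully experienced).
public class KnowledgeColor {
    public static Color forKnowledge(double knowledge) {
        double k = Math.max(0.0, Math.min(1.0, knowledge)); // clamp to [0, 1]
        int green = (int) Math.round(255 * (1.0 - k));      // fade green out
        return new Color(255, green, 0);
    }
}
```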

Figure 6. AQUARIUM Initial Life Cycle View (a) and Statistics (b)

Figure 7. AQUARIUM Intermediary Life Cycle View (a) and Statistics (b)

Figure 8. AQUARIUM Final Life Cycle View (a) and Statistics (b)

This simulator is available as an applet in the Prototype section of the ALGA project website [ALGA WebSite], running in web browsers on machines where JAVA 3D is installed.

5.2. CAVE Implementation

We present here images taken in the CAVE environment, showing the results of the distributed version running on the PC cluster (Figure 9). The first image is an internal CAVE view of the running virtual aquarium simulation, and the second shows a user trying this immersive experience.

6. Conclusion

This paper presented a new approach that has been incorporated into our artificial life framework. This approach adds an inter-character communication feature, expressed through the use of a simple language, which in turn is also used to express the reasoning of these artificial life creatures.

Figure 9. AQUARIUM Distributed VR Implementation

Furthermore, this paper presented some implementation results. One of them, the desktop tool, has proven very useful in the analysis of the artificial life beings, mainly of their learning and decision-making skills. Different information is provided at run time, allowing us to understand how the cognition of these creatures evolves, based on their learning capabilities. The other implementation, in a virtual reality CAVE environment, shows the possibility of an immersive experience in this virtual aquarium. Another important aspect of this implementation is that it is a distributed version, running on a PC cluster. Future work includes, but is not limited to, adding an adaptively adjusted classifier (based on fuzzy logic) to the visual perception module, as well as, perhaps, other types of perceptual modules, such as audition. This new visual perception should improve the classification of distinct situations, based on the experience of the artificial characters. Finally, the mentioned approaches for more refined statement construction and analysis should be extended.

Acknowledgments Our thanks to CNPq for the partial support granted to this project, through a PBIC scholarship.

References

Adami, C. "Introduction to Artificial Life". Springer Verlag, 1998.

Terzopoulos, D. (org.) "Artificial Life for Graphics in Animation, Multimedia and Virtual Reality". ACM/SIGGRAPH 98 Course Notes 22, 1998.

Phillips, C. and Badler, N. I. "Jack: A Toolkit for Manipulating Articulated Figures". In ACM/SIGGRAPH Symposium on User Interface Software, Banff, Canada, 1988.

Sims, K. "Evolving 3D Morphology and Behavior by Competition". Artificial Life, 1994.

Miranda, F. R. et al. "An Artificial Life Approach for the Animation of Cognitive Characters". Computers & Graphics, Vol. 25, Issue 6, Amsterdam, Elsevier Science, 2001.

Netto, M. L. and Kogler Jr., J. E. (eds.) "Artificial Life: Towards New Generation of Computer Animation". Computers & Graphics, Vol. 25, Issue 6, Amsterdam, Elsevier Science, 2001.

ALGA WebSite: http://www.lsi.usp.br/~alga
