Causality and Virtual Reality Art

Marc Cavazza (1), Jean-Luc Lugrin (1), Sean Crooks (1), Alok Nandi (2), Mark Palmer (3) and Marc Le Renard (4)

(1) School of Computing, University of Teesside, TS1 3BA Middlesbrough, United Kingdom. [email protected]
(2) Commediastra, 182, av. W. Churchill, 1180 Brussels, Belgium.
(3) University of the West of England, Bristol, United Kingdom.
(4) CLARTE, 6 rue Léonard de Vinci, BP 0102, 53001 Laval CEDEX, France.

ABSTRACT

In this paper, we discuss how a cognitive concept, causality, can be used as the conceptual underpinning of Virtual Reality Art installations. Causality plays an important role in our construction of reality and, as such, it makes sense to use it as a principle for defining VR experiences. We have developed a VR platform that uses cognitive data on causal perception to create artificial event co-occurrences in virtual worlds, which can be perceived as possible outcomes of user actions. After a preliminary validation of this technology through user experiments, it has been used to implement prototypes of artistic installations by two different artists. We describe the technical approach behind the elicitation of causal perception in virtual reality, and illustrate its use through the two artistic installations being developed with this new VR platform.

Author Keywords

Digital Arts, Intelligent Virtual Environments, Causality

ACM Classification Keywords

H5.1 [Multimedia Information Systems]: Artificial, Augmented and Virtual Reality - Virtual Reality for Art and Entertainment

INTRODUCTION AND RATIONALE

Virtual Reality Art offers many possibilities for creating experiences that produce an illusion of realism or, from a different perspective, an immersion into fantasy worlds and alternative realities [4]. If the principles behind some of the cognitive aspects involved in the construction of the subject’s reality could be modelled, they would become a target for the design of artificial experiences. This could open a new practice in VR Art, where the artistic intention is translated into cognitive phenomena, which in turn become
a target for the VR experience. In this paper, we illustrate how a fundamental concept from cognitive psychology, causal perception [15], can be put to use in the creation of user experience in Virtual Reality Art installations. Causality can be a direct part of the artistic reflection [12], as one of the essential concepts of our experience. It can also be the mode of description of dynamic behaviours that are meant to elicit a certain kind of spectator experience. In both cases, we want to demonstrate that causality can be directly manipulated within VR systems and, as such, could constitute a knowledge-level formalism for expressing artistic intentions while at the same time providing a direct route to their implementation.

PRINCIPLES OF CAUSAL PERCEPTION

Human subjects show a strong propensity to perceive causality between co-occurring events. The perception of causality from collision events has been qualified by Scholl and Nakayama [16] as “phenomenologically instantaneous, automatic and largely irresistible”. Probably as another consequence of its irresistible nature, causal perception has recently been shown to create illusions or to distort our perception of events. This finding is of great potential interest in the context of VR Art, as a principle for creating illusions in interactive installations. Several studies have evidenced “side effects” of causal perception that generate illusions or erroneous judgments. For instance, Schlottmann and Shanks [13] have reported that causal perception can contradict causal judgments, consistent with the “irresistibility” of causal perception described above. Scholl and Nakayama [16] have described a phenomenon known as “causal capture”, where contextual information can affect causal perception. Even more striking are reports according to which perceived causality can affect subjects’ perception of space or time. Scholl and Nakayama [17] have shown that spatial relations between objects can be misperceived in the context of causal perception, while Haggard et al. [5] have evidenced shifts in the perception of time in the context of causal expectations.

Figure 1: System Architecture

THE ARTIFICIAL CREATION OF OCCURRENCES

This research is based on a virtual environment supporting new forms of experimentation in VR Art [2]. This environment specifically supports the elicitation of causal perception through the creation of event co-occurrences, in real time, in the virtual world. These co-occurrences can be generated from high-level principles, such as analogies between objects’ physical properties. The original idea behind this research was that such high-level principles could be used to implement the artistic intentions described in artistic briefs. The technical approach for this “artificial causality” can be described as follows: the behaviour of objects in a virtual environment is under the control of event systems [7], which are derived from the graphical primitives controlling object collisions. These event systems are also used to discretise the physics of the virtual world and make it computationally tractable in real time. While the dynamics of virtual world objects is simulated using traditional physical laws, most physical interactions between objects are discretised and placed under the control of an event system. For instance, the impact of a stone on a glass window does not usually trigger a physical computation of the effects of that impact. Rather, an impact event is generated, accepting as input some parameters of the dynamics and having as an outcome a pre-defined simulation, such as the glass shattering. As most phenomena in VR systems are controlled by these mechanisms, we can use these event systems to associate arbitrary outcomes with a given action. This in turn generates event co-occurrences that would be perceived as causally related by human subjects. In that sense, artificial causality is potentially a powerful tool for creating VR experiences, including specific illusions.
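
To make this mechanism concrete, the following is a minimal Python sketch of the idea; the actual system is implemented on top of the UT2003™ event system, not in Python, and every name below (Event, EventSystem, the outcome functions) is hypothetical. A discretised impact event normally dispatches to a pre-defined outcome, and overriding that association is what produces an arbitrary co-occurrence.

```python
# Hedged sketch of event-level "artificial causality"; names are illustrative,
# not the real engine API.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Event:
    name: str        # e.g. "Impact"
    source: str      # object producing the event, e.g. "stone"
    target: str      # object receiving it, e.g. "window"
    momentum: float  # parameter carried over from the dynamics

def shatter(e: Event) -> str:
    return f"{e.target} shatters"      # default, pre-defined simulation

def crack_frame(e: Event) -> str:
    return f"{e.target} frame cracks"  # arbitrary substituted outcome

class EventSystem:
    """Discretises physical interactions: each event type maps to a canned outcome."""
    def __init__(self) -> None:
        self.outcomes: Dict[str, Callable[[Event], str]] = {"Impact": shatter}

    def override(self, event_name: str, outcome: Callable[[Event], str]) -> None:
        # Associating an arbitrary outcome with an event creates a co-occurrence
        # that subjects tend to perceive as causally related to their action.
        self.outcomes[event_name] = outcome

    def dispatch(self, e: Event) -> str:
        return self.outcomes[e.name](e)

system = EventSystem()
impact = Event("Impact", source="stone", target="window", momentum=4.2)
print(system.dispatch(impact))        # default: "window shatters"
system.override("Impact", crack_frame)
print(system.dispatch(impact))        # co-occurrence: "window frame cracks"
```
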
THE VR PLATFORM

The VR platform should support immersive visualisation as required by VR Art installations. CAVE™-like systems offer several advantages in terms of visualisation quality, user interaction, and user and audience participation. We required visualisation software which could be ported to a CAVE™-like display and would support the redefinition of sophisticated behavioural mechanisms, in order to implement causal systems. For all these reasons, we selected a CAVE™-like PC-based system, the SAS Cube™, which is a 4-wall, 3 x 3 metre immersive display powered by a PC cluster and supporting stereoscopic images through the use of shutter glasses. We use a game engine, Unreal Tournament 2003™, as a visualisation engine for its graphical performance as well as for its sophisticated event management system [9]. UT2003™ has been ported to the SAS Cube™ using the CaveUT™ system [6], which has been extended to support stereoscopic displays. The software architecture integrates an additional software layer, the causal engine, on top of the visualisation system. The causal engine overrides part of the native physics engine to support the definition of new world behaviours, namely the principled generation of event co-occurrences. Figure 1 shows the system architecture together with a view from one of our artistic installation prototypes.

THE CAUSAL ENGINE

The causal engine is the main software module generating arbitrary co-occurrences between events in the virtual world. It is built on top of the event system, which is part of the UT2003™ engine. It extends the event system to operate at a semantic level in which events are interpreted in terms of the actions they can be part of. For instance, the impact of a hard object on a fragile surface can be recognised as a kind of “breaking” action if the proper parameters (impact momentum, hardness) are taken into account. Recognising high-level actions, such as breaking, tilting and emptying, makes it possible to associate an action with its consequences, which is the basis for representing causality explicitly. This requires appropriate representations for those high-level actions: the causal engine relies on an action formalism inspired by formalisms used in planning and robotics [3, 19], which associates the triggering events of an action with its consequences.

Figure 2: Generation of an artificial co-occurrence by the causal engine

For instance, Figure 2 shows the formalisation of the break-on-impact action, which describes the event by which a fragile object shatters upon colliding with a harder object. This representation associates an event with its outcome. The event (represented as the action’s condition) consists of a collision Hit(?obj, ?surface) between an object (here, a glass) and a surface (a table). The event is also characterised by the objects’ respective properties, here the fact that the surface is harder than the object and the object is fragile. The outcome (represented in the effect part) consists of the shattering of the object. The causal engine operates continuously through a sampling cycle during which it receives low-level events triggered by object interactions and parses them into candidate action representations. These low-level events are derived from the graphical primitives provided by the visualisation engine. For instance, contact between two objects generates a Bump(?obj1, ?obj2) system event. These events can be intercepted, which makes it possible to recognise more specific events taking into account object velocity and some of its physical properties. For instance, a bump event between objects is recognised as Hit(?obj1, ?obj2) if ?obj1 has a high velocity. These Basic Events (BE) are used to describe the physical primitives of the action representations (see above).
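
A hedged Python sketch of this representation, and of the promotion of a low-level event into a Basic Event, is given below; the class names, thresholds and property labels are ours, for illustration only, not the engine’s actual formalism.

```python
# Illustrative condition/effect structure for break-on-impact, plus the
# promotion of a low-level Bump event to a Hit Basic Event (hypothetical names).
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class BasicEvent:
    name: str        # e.g. "Hit"
    args: List[str]  # e.g. ["?obj", "?surface"]

@dataclass
class Action:
    name: str
    condition: BasicEvent        # triggering Basic Event
    constraints: Dict[str, str]  # required object properties
    effect: str                  # default outcome, applied on re-activation

# break-on-impact: a fragile object shatters when hitting a harder surface
break_on_impact = Action(
    name="break-on-impact",
    condition=BasicEvent("Hit", ["?obj", "?surface"]),
    constraints={"?obj": "fragile", "?surface": "harder than ?obj"},
    effect="shatter(?obj)",
)

def recognise_hit(bump_velocity: float, threshold: float = 2.0) -> bool:
    # A Bump(?obj1, ?obj2) system event is promoted to Hit(?obj1, ?obj2)
    # when the moving object's velocity is high enough.
    return bump_velocity > threshold

if recognise_hit(bump_velocity=3.5):
    print(f"Candidate action instantiated: {break_on_impact.name}")
```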

During each sampling cycle, primitive physical events trigger the instantiation of candidate actions: for instance, depending on the physical properties of the objects involved, a Hit(?obj1, ?obj2) event can instantiate actions such as breaking, bouncing, shattering, etc. The essential point is that these actions are “frozen” during the sampling cycle, or more specifically their effects are temporarily inhibited. In the case of a break-on-impact action, the shattering of the fragile object is not simulated in the virtual world. This makes it possible for the causal engine to alter action representations (i.e., their effects) while they are frozen: in this way an action’s outcome can be modified prior to its re-activation. This constitutes the basic mechanism for the creation of co-occurrences. A detailed example is presented in Figure 2. The falling glass hits the table, which generates a Hit(?glass, ?table) event. This event can be parsed to activate several possible action representations, among which is the break-on-impact(?glass, ?table) action, which would represent the default outcome. This action representation can be instantiated, but its effects (i.e. the shattering of the glass) would be suspended during the sampling cycle. While this action representation is “frozen”, the causal engine can alter the effect representation to substitute new outcomes for the default effects. This is done through the application of various Macro-operators (MOp). Macro-operators are knowledge structures which, when applied to the action representations, perform specific transformations. These modifications consist, for instance, in substituting the action’s objects with new ones, based on property similarity (including spatial proximity).

The process of MOp selection is based on heuristic search and uses a score corresponding to a “level of disruption”. This score is a measure of the semantic difference between the default effects and the alternative effects, based for instance on object semantic properties or action categories. In the example of Figure 2, the glass affected by the action will not be the falling one (glass #1) but the one which was already standing on the table (glass #2). But the causal engine can also alter the nature of the effects themselves: rather than shattering, the glass will tumble over, spilling its contents (Figure 2). The resulting sequence of events is as follows: following the fall of the first glass and its impact on the table, the second glass tilts and spills its contents. In experiments we have carried out, not only was this perceived causally by the users, but in several instances they even gave mechanistic explanations. In the next sections, we will show how this technical approach can be used in the implementation of artistic briefs centred on causality.
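
Before moving on to the installations, the sketch below illustrates, again in hypothetical Python and with a deliberately crude disruption measure, how macro-operators could rewrite a frozen break-on-impact action and how a target level of disruption could drive their selection.

```python
# Hedged sketch of macro-operator application to a "frozen" action; the
# disruption score here is a toy stand-in for the engine's semantic measure.
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class FrozenAction:
    name: str
    objects: Dict[str, str]  # role -> object, e.g. {"?obj": "glass#1"}
    effect: str              # default effect, temporarily inhibited

def substitute_object(a: FrozenAction) -> FrozenAction:
    # swap the affected object for a similar, spatially close one
    return FrozenAction(a.name, {**a.objects, "?obj": "glass#2"}, a.effect)

def change_effect(a: FrozenAction) -> FrozenAction:
    # replace the default shattering with tilting and spilling
    return FrozenAction(a.name, a.objects, "tilt-and-spill(?obj)")

def disruption(default_effect: str, candidate: FrozenAction) -> float:
    # toy semantic-difference score: 0 if the effect is unchanged, 1 otherwise
    return 0.0 if candidate.effect == default_effect else 1.0

def apply_best(a: FrozenAction,
               mops: List[Callable[[FrozenAction], FrozenAction]],
               target_disruption: float) -> FrozenAction:
    # pick the macro-operator whose disruption is closest to the target level
    candidates: List[Tuple[float, FrozenAction]] = [
        (abs(disruption(a.effect, mop(a)) - target_disruption), mop(a)) for mop in mops
    ]
    return min(candidates, key=lambda c: c[0])[1]

frozen = FrozenAction("break-on-impact",
                      {"?obj": "glass#1", "?surface": "table"},
                      effect="shatter(?obj)")
# with a high target disruption, the default shattering is replaced by tilt-and-spill
print(apply_best(frozen, [substitute_object, change_effect], target_disruption=1.0))
```
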
PSYCHOLOGICAL VALIDATION

The causal engine has been developed taking into account state-of-the-art knowledge of causal perception. However, it has been necessary to validate its behaviour on human subjects, to demonstrate that the co-occurrences generated could indeed induce causal perception. The first evaluation of this approach was to verify its consistency with data from the psychological literature, in terms of response times, prior to the design of causal perception experiments involving users. Our tests have shown the overall response time to be on average between 90 and 100 ms. This has to be compared with data from the psychological literature reporting the maximum delay between consecutive events for them to be perceived as causally linked. In the original experiments by Michotte [10], events delayed by more than 150 ms progressively ceased to be perceived as causally linked. In their experiments on event association, Buehner and May [1] contrasted “immediate” and “delayed” action-outcome sequences and considered events occurring within 250 ms to be perceived as instantaneous. The average response time on immediate pairings was “less than 0.25 s” [1, p.884], and participants assessed action-outcome contingencies accurately under such a schedule. Finally, when interpreting Michotte-style launching events, Kruschke and Fragassi [8] considered that motion ampliation (held to account for causal impressions in Michotte’s theory) took place within a critical 200 ms interval. All these data suggest that the system’s response time is compatible with results from the psychological literature: as a consequence, the co-occurrences generated should be perceived by the vast majority of subjects as sufficiently close to induce causal perception. To evaluate the perception of causal relations, the following experiments were staged. Subjects were introduced to a desktop 3D virtual environment supporting navigation and interaction with the virtual world’s objects. The environment comprises five tables, each supporting two glasses and a cardboard menu (Figure 2).

Subjects were facing an 18-inch screen from a distance of 30-45 cm. The corresponding field of vision in the virtual environment was approximately 80 degrees. In addition, they operated in a quiet, darkened room. After the basic interaction mechanisms for grasping, lifting and dropping objects had been explained to them in a similar but different environment, subjects were given instructions for the task they had to carry out. For each table in the virtual world, the task consisted in selecting one of the glasses, lifting it above the table, then dropping it and letting it fall onto the table. They would then witness the virtual world’s reaction to their action. The default effect is for the falling pint to shatter on impact. However, the system can generate alternative co-occurrences which can be perceived as effects. For instance, following the glass’s impact on the table: i) the other pint on the table shatters, ii) the table surface cracks, iii) the other pint tilts over, spilling its contents on the table, or iv) the cardboard menu falls. The subjects had to repeat the action of dropping the glass five times. After each interaction, the subjects were asked to give a short textual explanation of the observed events. The rationale is that explanations, rather than simple descriptions, would force the expression of causal concepts relating their actions to the observed system response. In addition, the subjects were asked to identify which topic best characterised the phenomena they had observed, among the following three: i) Physics, ii) object interaction and iii) causes and effects. This was the first time in the whole experiment that the word “cause” was introduced. Thirty-three subjects participated in the experiment as volunteers. Three of them did not comply with the experiment’s instructions, hence their results were discarded without further analysis. The remaining sample of 30 subjects comprised 21 males and 9 females, with an average age of 29 years. Subjects were from different backgrounds and, in terms of interface familiarity, only 6 of them were frequent computer game players.

Figure 3: Causal Perception Experiments

When asked to identify the topic of the experiment out of three possible topics, 80% of subjects selected “causes and effects”, which suggests a strong component of causal perception in the experiments (Figure 3). It should be noted that, according to early results from Michotte, only 65 to 85% of subjects tend to report causal perception in these types of experiments without repeated exposure to the phenomena [15]. The analysis of textual explanations tends to confirm that subjects did perceive causal relations between co-occurring events. We thus collected 150 short answers for the five experiments performed by the thirty subjects. The goal was to analyse these answers for the use of causal explanations and also to determine whether, to some extent, some subjects perceived different events from those actually taking place on screen as a consequence of the perceived causality. One problem in the interpretation of these textual explanations is of course the use of language. Although a simple juxtaposition of descriptions can sometimes constitute an implicit causal statement [11], we could only interpret descriptions making explicit use of causal vocabulary. We considered a description to be explicitly causal when words such as “cause”, “consequence” or “effects” were used (e.g. “seems to have caused the other pint to fall down”, “as a consequence, the menu fell off the table”), or when the sentence used an ergative construction, as in “the glass made the menu jump off table on floor” or “it made the other pint fall”. Because of this difficulty in interpreting linguistic descriptions, this part of the results interpretation is essentially qualitative. Several subjects perceived a causal link between co-occurring events but, in addition, provided mechanistic explanations, such as the fact that vibrations accounted for the perceived causality. This was in particular the case for two action-outcome pairings: the fall of the cardboard menu from the table (“the menu fell on the ground because of the vibrations of the table”) and the tilting of the second glass following the impact of the falling glass on the table (“the vibrations of the table induced the falling over of the second glass present on the table”). This is consistent with reports linking causal perception to mechanistic explanations [14]. These results confirm the existence of causal perception in these experiments.
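
As a purely illustrative aside (this is not the coding procedure actually applied, and the regular expression is a deliberately crude proxy for the ergative construction), a keyword-based screen over the collected answers might look as follows.

```python
# Hypothetical sketch of the "explicitly causal" screening criterion described above.
import re

CAUSAL_WORDS = ("cause", "caused", "consequence", "effect")

def explicitly_causal(explanation: str) -> bool:
    text = explanation.lower()
    if any(word in text for word in CAUSAL_WORDS):
        return True
    # crude proxy for the ergative construction, e.g. "the glass made the menu jump"
    return re.search(r"\bmade\b.+\b(fall|jump|tilt|move)\w*", text) is not None

answers = [
    "seems to have caused the other pint to fall down",
    "the glass made the menu jump off table on floor",
    "the glass fell and the menu was on the floor",   # description only
]
print([explicitly_causal(a) for a in answers])        # [True, True, False]
```
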
USING CAUSALITY IN VR ART INSTALLATIONS

In an artistic context, causal impressions can be an important aspect of user experience. The difficulty lies in being able to “programme” causality on the basis of the artistic intentions: this requires mechanisms for the explicit handling of causality, such as those provided by our causal engine. In this section we describe how our system is being used in the development of two artistic installations:

“Ego.Geo Graphies” (A. Nandi) and “Gyre and Gimble” (M. Palmer).

“Ego.Geo Graphies”

The Ego.Geo Graphies brief explores interaction and navigation in a non-anthropomorphic world, blurring the boundaries between organic and inorganic. The installation involves an immersive VR world displayed in the SAS Cube™, with which the user can interact. The virtual world comprises a landscape in which the user can navigate, populated by autonomous entities (floating spheres), which are actually all part of the same organism (see Figure 1). In this world, two sorts of interaction take place: those involving elements of the world (spheres and landscape) and those involving the user. The first type of interaction is essentially mediated by collisions and will be perceived in terms of causality. The second is based on navigation and position and will be sensed by the world in terms of “empathy”, as a high-level, emotional translation of the user’s exploration. Through the staging of the Ego.Geo Graphies installation, we are interested in exploring aspects related to predictability, non-predictability and hence a certain kind of narrative accessibility, from the perspective of user interaction. This also implies that we explore how the user can be affected by causality. The spontaneous movements of the spheres focus the user’s attention, within the constraints of his visual and physical exploration of the landscape. The user will perceive the consequences of spheres colliding with each other, which are equivalent to an emotional state of the world (as these multiple spheres still constitute one single organism) responding to perceived user empathy. We expect a dialogue to emerge from this situation: user exploration will affect world behaviour through levels of perceived empathy, and in return the kind of observed causality will influence user exploration and navigation. In this first version, considering the potential complexity of the system, we propose to focus on some simple rules for the sake of developing the core ideas, which will be presented below. We can now illustrate this through several examples involving collisions between spheres in the Ego.Geo Graphies brief. It can be noted that (although the brief was in no way influenced by this fact) collision between moving objects is the best studied phenomenon in causal perception. In the world of Ego.Geo Graphies, sphere-shaped object-actors may collide with one another or with elements of the landscape. The effects of a collision between spheres are normally expected to be felt on the spheres themselves, and the nature of the effect will depend on visual cues as to their physical properties (i.e. soft/hard, deformable, etc.), which can be conveyed to some extent by their textures and animations. Because the spheres are all part of the same organism, when they collide the basic effect should be that they coalesce into a bigger sphere.

Figure 4: Alternative Causality in the “Ego.Geo Graphies” Brief

This is represented as the baseline action for sphere-sphere collision (Figure 4-A). The causal engine can apply various transformations to this baseline action. It can, for instance, replace the merging effect with the explosion of one or both spheres (by applying a “change effect” macro-operator). As an alternative, both spheres can also bounce back from each other (Figure 4-B). Another way of inducing causal perception is to propagate effects to elements of the landscape itself (a specific class of operators exists in the system for propagating effects). In that instance, the collision between two spheres will result in the explosion of landscape elements (Figure 4-C). These alternative effects correspond to various levels of disruption (see above), which in turn are related to the perceived levels of empathy. Figure 5 details the operation of the causal engine on the collision event between two spheres. First, the causal engine recognises the collision event and instantiates the default action representation (Figure 5) for merging spheres, while at the same time freezing its execution. This representation can thus be modified to create alternative outcomes for that collision: the nature of this modification derives from parameters of the user’s interaction history, thus implementing the “dialogue” between empathy and causality discussed above (see Figure 6).
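
As a purely illustrative sketch of this empathy/causality dialogue (the mapping direction and thresholds below are our own assumptions for the example, not the installation’s actual rules), a perceived empathy level could select among the alternative outcomes of Figure 4 by increasing level of disruption.

```python
# Hypothetical mapping from perceived empathy to the outcome of a sphere-sphere
# collision, assuming for illustration that lower empathy yields higher disruption.
def collision_outcome(empathy: float) -> str:
    if empathy > 0.66:
        return "spheres coalesce into a bigger sphere"  # baseline (Figure 4-A)
    if empathy > 0.33:
        return "spheres bounce back from each other"    # altered effect (Figure 4-B)
    return "nearby landscape elements explode"          # propagated effect (Figure 4-C)

for level in (0.9, 0.5, 0.1):
    print(level, "->", collision_outcome(level))
```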

“Gyre and Gimble”

Mark Palmer’s artistic work has been exploring user interaction with complex systems, in which the determinism of local interaction does not entail the predictability of the system’s response. In other words, the fact that user experience can derive from interaction does not imply any kind of control over the system the user is interacting with. In addition to simple and direct causality, the system should be able to generate unpredictable events. This approach draws on Spinoza’s philosophy in rejecting transcendental explanations, as well as the notion of a final cause. In that sense, the very term “user” is misleading in its utilitarianism and in its suggestion of simple causality as a means to an end. Interaction should never resonate with the notion of a final cause; rather, experience should derive from adjustments of efficient causes only. In the context of this research, his “Gyre and Gimble” brief revisits Alice in Wonderland through an interactive VR installation. In the original novel, as in this installation, Alice confronts an environment which exhibits behaviour of its own. Objects have a life of their own, generating all sorts of (inter)actions. In addition, the world itself is hardly predictable, the outcome of such interactions depending on changing conditions.

Figure 5: Generation of an Artificial Co-occurrence by the Causal Engine (“Ego.Geo Graphies” Brief)

The brief’s environment is a 3D world reflecting the aesthetics of the original Tenniel illustrations (using 3D objects with non-photorealistic rendering, Figure 7). The user, evolving in the environment as Alice in first-person mode, witnesses the behaviour of various objects, which she can also affect by her presence. Let us consider the situation where Alice faces a cupboard containing several animated objects on its shelves. The objects will try to escape from the approaching Alice, but in doing so can only move on the shelves supporting them. This is bound to generate all kinds of collisions between objects, yet the consequences of these collisions can vary to reflect the global mood of the situation or the identity of the objects. In the example below, the clock is evading Alice’s gaze and running towards the other end of the shelf. In doing so, it
can collide with the teacup. A default outcome (if there is such a thing under the circumstances) would be for the teacup to break under the impact, or the collision could cause the cup to start moving (launching effect). The default break-on-impact(?teacup, ?clock) action is instantiated by the causal engine (Figure 7). However, the collision actually affects the clock, whose hands start spinning; simultaneously the cup empties itself, while the nearby candle is consumed. In this instance, the collision has triggered certain “time-dependent” processes (Figure 7). This can be generated by the causal engine using its semantic representation principles: when selecting possible alternative effects to be applied to the “frozen” action representation (effects are associated with object categories, e.g. recipients, solids, etc.), preference is given to those effects which match pre-existing semantic properties (time in the case of the clock).

Figure 6: Example of an “Ego.Geo Graphies” co-occurrence in the SAS Cube™

Figure 7: Generation of an artificial co-occurrence by the causal engine (“Gyre and Gimble” Brief)

This becomes a heuristic for the selection of the operators transforming the “frozen” action [2]. In this case, the frozen action sees both its object and its effects being modified. This illustrates once again that the outcome of an action can be altered depending on semantic properties of the environment, or on some measure of its global state.
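
The sketch below illustrates this preference heuristic; the property sets and effect labels are invented for the example and simply rank candidate effects by the semantic properties they share with the colliding object.

```python
# Hedged sketch of the semantic-preference heuristic (hypothetical data).
from typing import Dict, List, Set

object_properties: Dict[str, Set[str]] = {
    "clock": {"time", "solid"},
    "teacup": {"recipient", "fragile"},
}

candidate_effects: Dict[str, Set[str]] = {
    "shatter(?teacup)": {"fragile"},     # default break-on-impact outcome
    "spin-hands(?clock)": {"time"},
    "empty(?teacup)": {"recipient", "time"},
    "consume(candle)": {"time"},
}

def ranked_effects(obj: str) -> List[str]:
    props = object_properties[obj]
    # rank effects by how many semantic properties they share with the object
    scored = sorted(candidate_effects.items(),
                    key=lambda kv: len(kv[1] & props), reverse=True)
    return [name for name, _ in scored]

print(ranked_effects("clock"))  # time-related effects rank first
```
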
CONCLUSIONS

We have developed a new kind of tool for VR Art, which supports the definition of behaviours at a conceptual level, facilitating the development of VR Artworks [2]. We have illustrated this approach using causality as a test case. As a psychological concept, causality can relate elements of the artistic brief to the user experience (the details of which remain open to inter-personal variability, so the process is not strictly deterministic). In that sense, there is a faithful transposition of the artistic intention to the user experience. At the same time, we have developed technical tools which can work directly at the level of causal phenomena. This in turn facilitates the technical implementation of VR installations. The overall context of our research is an Art+Science approach [18]. This work is an example of the use of cognitive concepts to support the creation of VR Artworks. Fundamental knowledge of cognitive mechanisms is a determinant of the elicitation of experience, and can be made to serve artistic intentions by bridging the gap between the user experience and the VR implementation that should produce it.

ACKNOWLEDGEMENTS

This research has been funded in part by the European Commission through the ALTERNE (IST-38575) project. Marc Buehner is thanked for his advice on the psychological aspects of causal perception. Jeffrey Jacobson is thanked for his help in porting CaveUT™ to the SAS Cube™. Mikael Le Bras developed part of the environments used in Figures 1, 2, 4, 5 and 6.

REFERENCES

1. Buehner, M.J. and May, J. Rethinking Temporal Contiguity and the Judgment of Causality: Effects of Prior Knowledge, Experience, and Reinforcement Procedure. Quarterly Journal of Experimental Psychology, 56A(5) (2003), 865-890.
2. Cavazza, M., Lugrin, J.-L., Hartley, S., Libardi, P., Barnes, M.J., Le Bras, M., Le Renard, M., Bec, L. and Nandi, A. New Ways of Worldmaking: the Alterne Platform for VR Art. ACM Multimedia 2004, New York, USA (in press).
3. Fikes, R.E. and Nilsson, N.J. STRIPS: a new approach to the application of theorem proving to problem solving. Artificial Intelligence, 2(3-4) (1971), 189-208.
4. Grau, O. Virtual Art: from Illusion to Immersion. MIT Press (2003).
5. Haggard, P., Clark, S. and Kalogeras, J. Voluntary action and conscious awareness. Nature Neuroscience, 5 (2002), 382-385.
6. Jacobson, J. and Hwang, Z. Unreal Tournament for Immersive Interactive Theater. Communications of the ACM, 45(1) (2002), 39-42.
7. Jiang, H., Kessler, G.D. and Nonnemaker, J. DEMIS: a Dynamic Event Model for Interactive Systems. ACM Virtual Reality Software and Technology (2002).
8. Kruschke, J.K. and Fragassi, M.M. The perception of causality: Feature binding in interacting objects. Proceedings of the Eighteenth Annual Conference of the Cognitive Science Society (1996), 441-446. Hillsdale, NJ: Erlbaum.
9. Lewis, M. and Jacobson, J. Game Engines in Scientific Research. Communications of the ACM, 45(1) (2002), 27-31.
10. Michotte, A. The perception of causality. Translated from the French by T.R. Miles and E. Miles. New York: Basic Books (1963).
11. Oestermeier, U. and Hesse, F.W. Singular and general causal arguments. In J.D. Moore and K. Stenning (Eds.), Proceedings of the 23rd Annual Conference of the Cognitive Science Society (2001), 720-725. Mahwah, NJ: Erlbaum.
12. Sato, M. and Makiura, N. Amplitude of Chance: the Horizon of Occurrences. Kinyosya Printing Co., Kawasaki, Japan (2001).
13. Schlottmann, A. and Shanks, D.R. Evidence for a distinction between judged and perceived causality. Quarterly Journal of Experimental Psychology: Human Experimental Psychology, 44 (1992), 321-342.
14. Schlottmann, A. Seeing it happen and knowing how it works: How children understand the relation between perceptual causality and knowledge of underlying mechanism. Developmental Psychology, 35 (1999), 503-517.
15. Scholl, B.J. and Tremoulet, P. Perceptual causality and animacy. Trends in Cognitive Sciences, 4(8) (2000), 299-309.
16. Scholl, B.J. and Nakayama, K. Causal Capture: Contextual Effects on the Perception of Collision Events. Psychological Science, 13(6) (2002), 493-498.
17. Scholl, B.J. and Nakayama, K. Illusory Causal Crescents: Misperceived spatial relations due to perceived causality. Perception, 33(4) (2004), 455-469.
18. Sommerer, C. and Mignonneau, L. (Eds.). Art @ Science. New York: Springer Verlag (1998).
19. Wilkins, D.E. Causal reasoning in planning. Computational Intelligence, 4(4) (1988), 373-380.