From Representational Intelligence to Contextual Intelligence in the Simulation of Complex Social Systems

Bernard Pavard and Julie Dugdale
GRIC – IRIT (Cognitive Engineering Research Group – Research Institute of Computer Science of Toulouse), UPS-CNRS (UMR 5505), 118 route de Narbonne, 31062 Toulouse, France
Email: [email protected] [email protected]

Paper Category: Computational theorizing about complex social systems

Abstract

In this paper we explore the uses, advantages and limitations of two different paradigms in the computational simulation of complex social systems. We focus particularly on the simulation of cognitive mechanisms in social situations and illustrate our discussion with two of our current projects. We compare the now classical agent-based approach (epitomized by models such as Sugarscape [Epstein & Axtell, 1996]) with an approach in which the computational agents are driven by users. We argue that cognitive mechanisms are still not understood well enough for us to be able to derive convincing rules for guiding interaction, and that any intelligence exhibited by such systems is constrained by the rule representation. What we essentially lose by substituting a rule-governed agent for the human is the contextual intelligence that the human can provide in that situation. Any human response to a situation or artefact is based upon a multitude of interacting factors. A human will see, and then interpret, the situation knowing the history of the interaction, using their own cultural background, drawing upon their own experience, and with a specific perception of other people and artifacts in that environment. By employing this contextual intelligence, we argue that the behavior of the agents, and the overall behavior of the system, is closer to what would be expected in reality.

Contact: Dr Julie Dugdale, GRIC – IRIT, UPS-CNRS (UMR 5505), 118 route de Narbonne, 31062 Toulouse, France
Tel: +33 5 61 55 77 07
Fax: +33 5 61 55 77 08
Email: [email protected]

Key words: Cognitive simulation, non-verbal communication, agent-based simulation, complex social systems.

Acknowledgement: This work is conducted as part of the Complexity in Social Science (COSI) project. COSI is a European TMR network funded by the European Commission under DG XII.

Introduction

In this paper we explore the uses, advantages and limitations of two different paradigms in the computational simulation of complex social systems. We illustrate our discussion with two of our current projects. Whilst both projects are concerned with simulating aspects of cognition and are in the same application domain, each project follows one of the two paradigms.

There is currently a huge research effort concentrating on modeling and simulating complex social systems. The most favored approach currently focuses on endowing a society of heterogeneous agents with a set of simple rules governing the interaction between the agents and between the agents and their environment. There are many examples of simulations in this genre, covering a wide range of social or collective phenomena [Epstein & Axtell, 1996], [Doran, Palmer & Mellars, 1994], [Conte, Hegselmann & Terna, 1997]. This approach, which is intuitively easy to understand, brings numerous advantages and has undoubtedly furthered our understanding of complex social systems. However, what we shall explore in this paper are the wider implications which stem from such a heavy reliance on those simple rules (the representation). In particular, we will focus our attention on its implications for the simulation of cognitive mechanisms.

The central issue lies in the derivation of, and reliance on, the rules in agent-based systems. For such an approach to yield useful results we must have at least a reasonable understanding of the complex cognitive mechanisms which operate in real-life social systems. This is necessary in order to formulate our agents' simple rules. Combining the rules with the agent and environment representation, we are then armed with a self-contained abstract model of the real situation. The assumption behind this model, then, is that all of the 'intelligence' of the system is embedded in its representation. It could be argued that only simple (unintelligent) rules are being used and that the intelligence is an emergent feature of the system. Whilst this may be true, the point remains that the intelligence is embedded in the representation and is therefore also constrained by that representation.

What we essentially lose by substituting a rule-governed agent for the human is the contextual intelligence that the human can provide in that situation. Any human response to a situation or artefact is based upon a multitude of interacting factors. A human will see and then interpret the situation knowing the history of the interaction, using their own cultural background, drawing upon their own experience, and with a specific perception of other people and artifacts in that environment. The cognitive processes which handle all of these factors to produce the observed behavior are, in many cases, poorly understood. This makes it extremely difficult to derive basic rules for an agent-based system whose aim is to simulate aspects of cognition. Our goal of simulating how a human might behave in a given situation is thus far from being realized.
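To make the closed, rule-governed paradigm concrete, the sketch below shows a deliberately minimal agent-based model loosely in the spirit of Sugarscape-style simulations: agents move over a resource grid and harvest according to two fixed rules. The grid size, rules and parameters are illustrative assumptions and do not reproduce any of the cited models.

```python
import random

GRID = 20  # side length of a small toroidal resource grid
resources = [[random.randint(0, 4) for _ in range(GRID)] for _ in range(GRID)]

class Agent:
    """An agent governed by two fixed, 'unintelligent' rules."""
    def __init__(self):
        self.x, self.y = random.randrange(GRID), random.randrange(GRID)
        self.wealth = 5

    def step(self):
        # Rule 1: move to the neighbouring cell holding the most resources.
        neighbours = [((self.x + dx) % GRID, (self.y + dy) % GRID)
                      for dx, dy in ((-1, 0), (1, 0), (0, -1), (0, 1))]
        self.x, self.y = max(neighbours, key=lambda c: resources[c[0]][c[1]])
        # Rule 2: harvest whatever resources are found on the new cell.
        self.wealth += resources[self.x][self.y]
        resources[self.x][self.y] = 0

agents = [Agent() for _ in range(50)]
for _ in range(100):          # run for 100 time steps; collective patterns emerge
    for agent in agents:
        agent.step()
```

All of the 'intelligence' such a model can exhibit is fixed in these two rules and in the grid representation; nothing outside them can influence the agents' behavior.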

Discussion

The diagram on the right of figure 1 shows how contextual intelligence can be incorporated into a simulation of a complex system. As with the 'closed society' approach, shown on the left, there is a group of heterogeneous agents interacting in an environment. However, in the contextual intelligence approach these agents [1] are partially controlled by the users: n users control n agents in the artificial environment. Some of the actions of the artificial agent are guided by rules (the same as with agents in the closed society approach), but other actions are guided by the user.

Figure 1: A representational intelligence view is shown on the left; a contextual intelligence view is shown on the right. The left-hand diagram depicts agents with simple rules interacting in an environment; the right-hand diagram depicts agents partially controlled by users interacting in an environment.

[1] In the case of our work, which we shall explain later, these agents are avatars operating in a virtual environment.

With reference to figure 1, let us consider the completeness of the cognitive rules used by the artificial agents and the human agents [2]. When developing a computer simulation, the rules for the agents are typically derived from a task and an activity analysis of the real situation [3] (for a description of this methodology and how it is used to develop a simulation see [Pavard, 1998], [Dugdale, Pavard & Soubie, 1999]). The set of cognitive rules is derived from both the analysis of the real social system and from our current knowledge of how cognitive mechanisms operate. Our knowledge of the situation and of the functioning of cognitive mechanisms is thus entirely embodied in these agent rules. The problem lies in our understanding of human cognitive mechanisms. Whilst we are aware that factors such as personality, culture and experience affect these mechanisms, we do not know precisely what that effect is. It is therefore unrealistic to expect our derived rules to be a true interpretation of the mechanisms, since we know that we are omitting relevant factors. We can then say that our derived rule set is incomplete.

In the case of contextual intelligence, shown on the right of figure 1, the user is employing all of her cognitive 'rules', background experience, and so on, to guide the interaction with the other agents (which, incidentally, may also be guided by human users). Furthermore, utilizing the cognitive abilities of the human obviates the need to formalize their cognitive processes in any particular representation format. With the representational model, it is possible to focus only on a few aspects of cognition, thus confining the simulation application to a narrow problem. The contextual intelligence approach, however, offers the possibility of increasing the scope of simulation applications by embodying the cognitive abilities of the user.

The fundamental consequence of enhancing a system with contextual intelligence is that the behavior of the agents is closer to what would be expected in reality. This is because the actions of the human user, and implicitly the rationale behind these actions, are transmitted to the artificial agent, which thus acts in a far more complex, and true-to-life, manner than if it were acting only according to a set of simple rules. For example, as the user surveys the artificial environment she selects what constitutes relevant information in order to determine her response. This selection process, and the resulting actions, are far more refined and complex than in the closed agent model. This is due not only to the contextual awareness she brings to the decision, but also to the artificial environment, which is itself more complex, being composed of similarly user-driven agents.
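As a rough illustration of this hybrid arrangement (and not of the architecture of the simulators described later), the following sketch shows an agent whose routine actions follow simple rules but which defers other decisions to a human user. The interface and names are assumptions made purely for illustration.

```python
class PartiallyControlledAgent:
    """An agent whose routine actions follow simple rules, but which defers
    other decisions to a human user (the source of contextual intelligence)."""

    def __init__(self, user_input=None):
        # user_input: a callable returning the user's chosen action for a given
        # situation, or None when the user leaves the decision to the rules.
        self.user_input = user_input

    def rule_based_action(self, situation):
        # Placeholder for the simple, hard-coded rules of the closed approach.
        return "default_action"

    def step(self, situation):
        if self.user_input is not None:
            user_action = self.user_input(situation)    # ask the human user
            if user_action is not None:
                return user_action                      # user-guided behaviour
        return self.rule_based_action(situation)        # rule-guided behaviour

# Example: the user intervenes only in situations she judges to be urgent.
agent = PartiallyControlledAgent(
    user_input=lambda situation: "call_for_backup" if situation == "urgent" else None)
print(agent.step("routine"), agent.step("urgent"))
```

The point of the arrangement is that the user's judgement, with all its unformalized context, enters the simulation at exactly the decision points we do not know how to encode as rules.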

Paradigm examples

To further our discussion in comparing the two paradigms, we draw upon two simulations recently developed at our laboratory, both of which are in the emergency services domain. The first project concerns a computer-based simulation of an emergency call center and was developed according to the classical agent paradigm.

The Emergency Call Center Simulation

We will start by giving a brief description of the center and the aim of the simulation. The center, which is based on the outskirts of Paris, is responsible for dispatching fire-trucks, sending ambulances, notifying the on-call doctor and giving medical advice over the telephone. From an initial call, the health worker must assess the nature of the incident and decide on the most appropriate course of action. This process often involves cooperation and communication with other team members, who may communicate either directly (face to face), via artefacts (e.g. the telephone), or non-verbally. To reduce the time taken to deal with an emergency incident, all of the health workers must try to be aware of ongoing events.

The aim of the simulator was to help in the design of a new center by allowing the user to test the effects of new physical organisations. We were particularly interested in modelling and analysing how environmental factors, such as the level of noise, affect mutual awareness of the situation, the ability to overhear, the interruption mechanisms used by health workers, and cooperation and communication. A full description of the simulator together with results may be found in [Dugdale, Pavard & Soubie, 2000]. However, for this discussion we will focus on one cognitive process modelled in the simulator: the ability of an agent to overhear. From our field analysis we found that overhearing is an important cognitive mechanism since it affects the mutual awareness of the collective, which in turn affects the overall efficiency of the center.

[2] Naturally, we are not assuming that the human user actually operates according to a set of rules; rather, the rule analogy is carried forward to clarify the discussion.
[3] Task analysis aims to uncover the prescribed work of the agents (that is, how the work is officially supposed to be conducted), whereas activity analysis documents how the work is actually performed. Whilst we are specifically focusing on cognitive simulations, the ultimate goals of task and activity analysis, which are conducted as part of a field study, are common to other types of simulations.


Figure 2: The actual emergency call center is shown in the photograph on the left; the screenshot on the right shows the developed simulator.

In our simulator we considered that the ability to overhear is a function of four main inter-related factors. The first and most obvious factor is distance: if people are spatially close, they are more likely to hear what is being said. However, even if two people are very close, they will not be able to hear each other if the general level of noise in the room is high. Thus, the level of noise in the room is the second factor affecting the ability to overhear. The third factor is the intensity with which a person is speaking: even if two people are far away from each other, they might still be able to overhear each other if they shout. The final factor is a person's level of involvement with a task. If a person is dealing with an urgent problem requiring full concentration, he will tend to 'shut off' from the outside world and not hear things that he would otherwise hear.
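To make this concrete, the following is a minimal sketch of how these four factors could be combined into a single overhearing probability. The functional form and weightings are illustrative assumptions; they are not the formulas used in the actual simulator.

```python
def overhearing_probability(distance_m, room_noise, speech_intensity, task_involvement):
    """Combine the four factors into a single probability of overhearing.

    distance_m       : distance between speaker and potential overhearer (metres)
    room_noise       : ambient noise level, from 0 (silent) to 1 (very loud)
    speech_intensity : loudness of the speaker, from 0 (whisper) to 1 (shout)
    task_involvement : listener's engagement in the current task, from 0 to 1
    """
    # Louder speech carries further; audibility falls off with distance.
    audibility = speech_intensity / (1.0 + distance_m)
    # Ambient noise masks the signal.
    audibility *= (1.0 - room_noise)
    # A highly involved listener 'shuts off' and is less likely to overhear.
    attention = 1.0 - task_involvement
    return max(0.0, min(1.0, audibility * attention))

# Example: two nearby workers, moderate noise, normal voice, listener half-busy.
p = overhearing_probability(distance_m=2.0, room_noise=0.3,
                            speech_intensity=0.6, task_involvement=0.5)
print(f"probability of overhearing: {p:.2f}")   # about 0.07 with these values
```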

Limitations

As we have said, in the emergency call center simulator the ability of an agent to overhear is governed by four factors. How realistic is this in terms of what we know about overhearing in real situations? From linguistics, ethnomethodology and psychology we know that gesture, posture and vision play important roles not only in the interpretation of speech, but also as a precursor to the overhearing mechanism [McNeill, 1992], [Thompson & Massaro, 1986]. For example, a simple gaze or raise of the arm can attract a person's attention and radically change the interpretation of the spoken message. Even if a spoken message is only partially heard, the listener can infer the correct meaning of the message using the gestures and visual cues of the speaker. Thus, what is absent from this simulation model are the non-verbal communication mechanisms which would enrich the interactions between agents, making the overall simulation closer to the real-world situation.

The Simulator for Firemen Training

The second simulator is concerned with using virtual reality as a training tool for collaborative work. Specifically, the aim of the work is to create a virtual reality environment that will replace the pen-and-paper methods currently used in the training of firemen at the Ecole Nationale des Sous Officiers sapeurs Pompiers in the south of Paris. The first stage of the project was to extract, from a video database of emergency incidents, the procedures and the communicative acts performed by the firemen. The result of this analysis was essential to the development of the virtual reality tool and enabled us to:
• Identify the set of features to be reproduced in the 3D simulation.
• Infer a consistent set of behavioral rules that will drive the interaction of avatars in the virtual scene (the sketch below gives a flavour of such a rule).
• Obtain a frame of comparison to verify the level of realism obtained by the simulation.
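The following is a deliberately small, hypothetical example of the kind of behavioral rule such an analysis might yield: an avatar orienting towards a nearby teammate who gestures. The rule, thresholds and data structures are assumptions made for illustration and were not extracted from the actual video database.

```python
import math
from dataclasses import dataclass

@dataclass
class Avatar:
    position: tuple            # (x, y) coordinates in the virtual scene
    heading: float = 0.0       # orientation in degrees
    gesturing: bool = False    # True if the avatar is currently raising an arm

def orient_towards_gesture(observer: Avatar, others: list, max_range: float = 10.0):
    """If a teammate within range gestures, turn the observer to face them."""
    for other in others:
        dx = other.position[0] - observer.position[0]
        dy = other.position[1] - observer.position[1]
        if other.gesturing and math.hypot(dx, dy) <= max_range:
            observer.heading = math.degrees(math.atan2(dy, dx))
            return other        # the teammate the observer now attends to
    return None                 # no gesturing teammate in range; keep current heading

# Example: one teammate gestures 5 metres away, so the observer turns towards them.
observer = Avatar(position=(0.0, 0.0))
teammate = Avatar(position=(5.0, 0.0), gesturing=True)
orient_towards_gesture(observer, [teammate])
print(observer.heading)   # 0.0 degrees: facing along the positive x axis
```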


Figure 3: On the left is shown an extract from the video database of emergency incidents. The picture on the right is from earlier work conducted to extract behavioural gestures; the avatar is rightmost in the picture.

The second stage, which is in progress, is the implementation of the simulation. In this stage we will have the opportunity to verify whether the interaction rules that we identified are able to reproduce behaviour comparable in its richness and complexity to that observed during the interaction of the real firemen.

Figure 4: A computer simulated emergency incident is shown on the left. The photograph on the right shows a user interacting with the simulator via an avatar.

Limitations and considerations

The primary advantage of the closed agent model is its simplicity. However, as we have discussed, when we are concerned with simulating aspects of cognition, such an approach is limited. Whilst the contextual intelligence approach addresses these problems, it brings with it other challenges and difficulties. The most fundamental challenge is ensuring the fusion between the virtual world and the real world of the user. For the simulation to be of use, the human user must believe, to a certain extent, that the avatar agent is actually an extension of his or her own self. The onus is then on the avatar agent to behave (i.e. to have convincing gestures, etc.) as would a human user. The second major difficulty lies in the division of tasks between the user and the avatar agent. There is a wealth of research currently trying to address these two problems (see, for example, [Cassell, Ananny & Yan, 2000], [Decortis, Marti & Rutgers, 2002]).

For certain aspects of the simulator, the division may be quite clear; for example, the user can control the spatial displacement of the avatar agent, can perform physical actions on the environment via the avatar, and can initiate communicative acts of an avatar. Similarly, from our video analysis, certain basic actions of the avatars are autonomous and may be separated from the user. We can also, without too much difficulty, determine the reaction of certain aspects of the environment (e.g. the behaviour of smoke and fire adhering to physical laws). Thus, we can reach a situation where the user, after assessing the relevant factors in the virtual environment, can guide the avatar to perform a certain action. A more problematic situation arises in how to control some of the gestural features embodied in the avatar agent. From the video analysis we can derive a repertoire of common gestural features, but when a user initiates a communicative act (i.e. wants to instruct her avatar to communicate with another avatar), how can she transmit her emotive state to the gestural behaviour of the avatar?
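The division of control just described could be organised, very roughly, as a routing policy that sends each category of action either to the user or to the avatar's own rules. The sketch below is an assumption made for illustration (the action categories mirror the examples in the text); it is not the architecture of our simulator.

```python
from enum import Enum, auto

class Controller(Enum):
    USER = auto()        # decisions deferred to the human user
    AUTONOMOUS = auto()  # behaviour driven by rules derived from the video analysis

# Hypothetical routing table: which controller handles which kind of action.
CONTROL_POLICY = {
    "move": Controller.USER,                    # spatial displacement of the avatar
    "act_on_environment": Controller.USER,      # physical actions performed via the avatar
    "initiate_communication": Controller.USER,  # communicative acts started by the user
    "idle_gesture": Controller.AUTONOMOUS,      # basic gestural repertoire of the avatar
    "react_to_smoke": Controller.AUTONOMOUS,    # environment-driven reactions
}

def dispatch(action_type, user_command, rule_engine):
    """Route an action either to the user's command or to the avatar's own rules."""
    controller = CONTROL_POLICY.get(action_type, Controller.AUTONOMOUS)
    if controller is Controller.USER and user_command is not None:
        return user_command                # the user drives the avatar
    return rule_engine(action_type)        # fall back to rule-based behaviour

# Example: movement follows the user; an idle gesture is produced by the rules.
print(dispatch("move", "walk_north", lambda a: f"rule:{a}"))
print(dispatch("idle_gesture", None, lambda a: f"rule:{a}"))
```

A policy of this kind leaves open exactly the difficulty raised above: there is no obvious entry in such a table through which the user's emotive state could shape the avatar's gestural behaviour.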

Conclusion

In this paper we have discussed, from a theoretical point of view, two paradigms in the computational simulation of complex social systems. The aim was not to address specific problems associated with virtual reality, but rather to take a step back and compare what each approach has to offer. From our focus on the simulation of aspects of cognition, we argue that the classical closed agent society approach is limited. By embodying the contextual intelligence of the user we can develop a far richer simulation which is closer to reality.

References

[Cassell, Ananny & Yan, 2000] Cassell, J., Ananny, M., Basu, A., Bickmore, T., Chong, P., Mellis, D., Ryokai, K., Vilhjalmsson, H., Smith, J., Yan, H. (2000) "Shared Reality: Physical Collaboration with a Virtual Peer." Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (CHI), pp. 259-260. April 4-9, Amsterdam, NL.

[Conte, Hegselmann & Terna, 1997] Conte, R., Hegselmann, R., Terna, P. (eds.) (1997) Simulating Social Phenomena. Berlin: Springer.

[Decortis, Marti & Rutgers, 2002] Decortis, F., Marti, P., Moderini, C., Rizzo, A., Rutgers, J. (in press) "Disappearing computer, emerging creativity: an educational environment for cooperative story building." International Journal of Human Computer Studies.

[Doran, Palmer & Mellars, 1994] Doran, J. E., Palmer, M., Gilbert, N., Mellars, P. (1994) "The EOS project: modelling Upper Paleolithic social change." In N. Gilbert and J. E. Doran (eds), Simulating Societies: The Computer Simulation of Social Phenomena, pp. 195-222. UCL Press, London.

[Dugdale, Pavard & Soubie, 1999] Dugdale, J., Pavard, B., Soubie, J.L. (1999) "Design Issues in the Simulation of an Emergency Call Centre." In Proceedings of ESM 99 Modelling and Simulation, 13th European Simulation Multiconference, June 1-4, 1999, Warsaw, Poland.

[Dugdale, Pavard & Soubie, 2000] Dugdale, J., Pavard, B., Soubie, J.L. (2000) "A Pragmatic Development of a Computer Simulation of an Emergency Call Centre." In Designing Cooperative Systems. Frontiers in Artificial Intelligence and Applications, (eds) Rose Dieng et al. IOS Press.

[Epstein & Axtell, 1996] Epstein, J. M. and Axtell, R. (1996) Growing Artificial Societies: Social Science from the Bottom Up. MIT Press, Cambridge, MA.

[McNeill, 1992] McNeill, D. (1992) Hand and Mind: What Gestures Reveal about Thought. Chicago: University of Chicago Press.

[Pavard, 1998] Pavard, B. (1998) "A Cognitive Engineering Methodology." Methodological Approaches, COTCOS Summer School, Château de Bonas, France. Available at: http://wwwsv.cict.fr/cotcos/pjs/cotcos.htm

[Thompson & Massaro, 1986] Thompson, L. A. and Massaro, D. W. (1986) "Evaluation and integration of speech and pointing gestures during referential understanding." Journal of Experimental Child Psychology, 42: 144-168.
