Monitoring and Organizational-Level Adaptation of Multi-Agent Systems

Zahia Guessoum, OASIS Team, LIP6, Université Pierre et Marie Curie, Paris, France ([email protected])

Mikal Ziane, OASIS Team, LIP6, Université Pierre et Marie Curie and Université René Descartes, Paris, France ([email protected])

Nora Faci, MODECO Team, LERI, Université de Reims, Reims, France ([email protected])

Abstract

Static organizational structures are not always suited for large-scale open multi-agent systems. On the other hand, emergent organizational structures may lead to undesirable behavior despite agent-level adaptation. We thus propose a new adaptive multi-agent architecture with both agent-level and organization-level adaptation. The global-level adaptation is based on the monitoring of the system's behavior and the dynamic reification of an organizational structure. This structure is used to prevent or detect undesirable behavior and take the required corrective actions. This architecture is applied to fault-tolerant multi-agent systems.

1. Introduction

Most multi-agent architectures and methodologies (see for example Aalaadin [6] and Gaia [26]) are well adapted to domains with static organizational structures and a limited number of agents. To deal with complex dynamic environments, several solutions have been proposed in which the global behavior of the system emerges from adaptation at the agent level. However, adaptive agent behaviors do not always yield an adaptive global behavior. This problem has been highlighted by Carley [2] and Odell [21]. Our goal in this paper is to provide a generic architecture that augments an already-built multi-agent system with a basic global-adaptation mechanism. This mechanism aims at preventing or correcting undesirable behaviors.

Our first hypothesis is that the multi-agent system is at risk of displaying undesirable behavior whose detection requires information on the whole system. For instance, detecting that an agent is critical cannot always be done statically and may depend on its activity and its interactions with other agents. We thus propose to monitor the system to detect, and when possible to anticipate, undesirable behavior. This monitoring is based on the reification, as a graph, of the aspects of the system's behavior which are necessary to detect or anticipate undesirable conditions. Each node of the graph represents an agent. The arcs are labeled with any information that may enable the detection or anticipation of undesirable behavior. Our second hypothesis is thus that this information can be acquired by monitoring the system's behavior (the exchanged messages). In a fault-tolerant multi-agent system this graph would be an interdependence graph, which allows the detection of the most critical agents.

In our architecture, each node of the graph is managed by an agent monitor. These agent monitors are responsible for acquiring and storing the relevant information, for detecting or anticipating undesirable behavior, and for taking corrective actions. We assume that detecting or anticipating undesirable behavior can be done with a limited amount of communication among agent monitors. More precisely, we assume that it can be done by using two kinds of information: the information attached to the arcs originating from the node, and global information. The global information that an agent monitor receives and contributes to establish is aggregated and dispatched back to agent monitors by host monitors. Our assumption is that the overhead of the whole monitoring mechanism is acceptable, and in particular that the global information does not have to be updated too frequently.

This paper is organized as follows. Section 2 reviews related work on monitoring multi-agent systems. Section 3 describes an adaptive multi-agent model with three main components: domain agents, monitoring agents and an interdependence graph. Section 4 presents the proposed monitoring mechanism and architecture. This architecture has been applied to fault-tolerant multi-agent systems (see Section 5). Section 6 reports on experiments to validate this architecture.

2. Related Work

Monitoring is often used to make a multi-agent system more robust in the presence of undesirable behaviors such as faults. Several approaches address the problem of monitoring in multi-agent systems. They rely on events and aim at observing, analyzing and controlling the behavior of the system. These approaches usually observe the execution of the multi-agent system in order to build a model of its current behavior and correct undesirable behaviors.

Castelfranchi [4], and Sichman and Conte [22], introduce interdependence graphs. These graphs are used to highlight properties related to emergent phenomena such as the formation of groups or cooperation. The authors show how to predict some undesirable situations (e.g., inequity or incompatibility). Their analysis relies on a priori defined knowledge such as the number of agents, their plans, their goals, and their interdependence relations. However, the proposed approach is not suited to dynamic and open multi-agent systems.

Kaminka et al. [14] propose a monitoring approach to detect and recover from faults. They use models of relations between the mental states of agents and adopt a procedural, plan-recognition-based approach to identify inconsistencies. However, the adaptation is only structural: the models of relations may change but the contents of plans are static. Their main hypothesis is that any failure comes from incompleteness of beliefs. This monitoring approach relies on agent knowledge, which makes the design of such multi-agent systems very complex. Moreover, the behavior of the agents cannot be adaptive and the system cannot be open.

Horling et al. [12] present a distributed diagnosis system. Faults can be observed, directly or indirectly, in the form of symptoms by using a fault model. The diagnosis process modifies the relations between tasks in order to avoid inefficiencies. The adaptation is only structural because the internal structure of tasks is not considered. The different diagnosis subsystems perform local updates on the task model; performance is therefore optimized locally but not globally.

The work of Malone et al. [15] on coordination relies on a characterization of the dependencies between activities in terms of goals and resources. These dependencies represent situations of conflict, and the different coordination mechanisms represent the solutions to manage them. The main contribution of this approach is the proposed taxonomy of these dependencies. The authors offer a framework for the study of coordination which provides a basic building block for a monitoring approach. However, such a monitoring approach has not yet been developed. This work has been reused by Klein et al. [18] to detect exceptions in multi-agent systems.

These approaches present useful solutions to the problem of monitoring in multi-agent systems. However, the monitoring component is often centralized and its design relies on the agents' knowledge [20], which makes the design of multi-agent systems more complex. Our approach aims at easing the design of this monitoring component by providing a generic architecture.

3. Adaptive Multi-Agent Architecture

In a multi-agent system, each agent is defined as an autonomous entity. However, the agents do not always have all the required competences or resources and thus depend on other agents to provide them. Interdependence graphs [3] [23] [24] were introduced to describe the interdependences of these agents. These graphs are defined by the designer before the execution of the multi-agent system. However, complex multi-agent systems are characterized by emergent structures [22] which therefore cannot be statically defined by the designer. As was underlined in the introduction, dynamic interdependence graphs can be used, under some conditions, to detect or prevent undesirable global behavior.

Figure 1. Adaptive multi-agent model components (micro level: the domain agents and their environment; observation level: the monitoring agents and the interdependence graph)

In our architecture, a multi-agent system is therefore composed of (see Figure 1):

• A micro component: an agent organization defined by the set of domain (i.e., application-specific) agents related in a classical acquaintance graph. These agents represent the knowledge of the application domain.

• A macro component: a graph which reflects an emergent organizational structure. This structure can be interpreted to avoid, or to detect and correct, undesirable behaviors.

• A micro-macro component: it continuously observes the domain agents, builds a state of the system and delivers this information to the adaptive part. It can also control the domain agents. For instance, to make the multi-agent system reliable, this coupling component determines the criticality of the domain agents and applies replication mechanisms where and when needed; it thus adds new replicas, removes replicas, etc. (see Section 5).

The next subsections describe the components of an adaptive multi-agent system; the next section presents the proposed monitoring mechanism.

Figure 2. Example of interdependence graph (nodes N1 to N9 connected by directed arcs labeled with weights such as 0.2, 0.5 and 0.7)

3.1. Domain agents

The domain agents represent the problem-solving activity. They have the capacity to perceive and interact with their environment, and they act continuously according to their goals and the evolution of their environment. They communicate by message passing and are related according to the characteristics of the application domain. The end of their activity marks the end of the whole system's activity. Note that our approach is generic and not tied to a specific interaction language or application domain. Moreover, agents can be either reactive or cognitive [19]. We merely suppose that they communicate using an agent communication language such as KQML [7] or ACL [8].

3.2. Interdependence Graph

We associate a node with each domain agent. The resulting structure (see Figure 2), named the interdependence graph, is a labeled directed graph $(N, L, W)$, where $N$ is the set of nodes, $L$ the set of arcs and $W$ the set of labels:

$$N = \{N_i\}_{i=1,\dots,n} \qquad (1)$$
$$L = \{L_{i,j}\}_{i=1,\dots,n,\; j=1,\dots,n} \qquad (2)$$
$$W = \{W_{i,j}\}_{i=1,\dots,n,\; j=1,\dots,n} \qquad (3)$$

$L_{i,j}$ is the arc between the nodes $N_i$ and $N_j$, and $W_{i,j}$ is a real number which labels $L_{i,j}$. $W_{i,j}$ reflects the importance of the interdependence between the associated agents ($Agent_i$ and $Agent_j$). These weights can be used, for example, to detect which links become too heavy or whether the system relies too much on a few agents.

A node is thus related to a set of other nodes that may include all the nodes of the system. This set is not static: it can be modified when a new domain agent is added or an existing one disappears. The proposed adaptation mechanism of the interdependence graph is described in the next section.
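As an illustration, the sketch below shows one straightforward in-memory representation of the graph (N, L, W) defined above. The class and method names are our own and are not taken from the paper or from the DIMA/DarX platforms.

```java
// Minimal sketch of the interdependence graph (N, L, W): one node per domain agent,
// directed arcs labeled with real-valued weights W_{i,j}.
import java.util.HashMap;
import java.util.Map;

public class InterdependenceGraph {
    // weights.get(i).get(j) holds W_{i,j}, the label of the arc L_{i,j} from Agent_i to Agent_j.
    private final Map<String, Map<String, Double>> weights = new HashMap<>();

    // Add a node when a new domain agent (and its agent-monitor) is created.
    public void addAgent(String agentId) {
        weights.putIfAbsent(agentId, new HashMap<>());
    }

    // Remove the node and all its arcs when a domain agent disappears.
    public void removeAgent(String agentId) {
        weights.remove(agentId);
        for (Map<String, Double> outgoing : weights.values()) {
            outgoing.remove(agentId);
        }
    }

    // Set or update the weight W_{i,j} of the arc from one agent to another.
    public void setWeight(String from, String to, double weight) {
        addAgent(from);
        addAgent(to);
        weights.get(from).put(to, weight);
    }

    // Read W_{i,j}; 0.0 means no interdependence has been recorded yet.
    public double getWeight(String from, String to) {
        return weights.getOrDefault(from, new HashMap<>()).getOrDefault(to, 0.0);
    }

    // Outgoing arcs of a node, used later to compute the agent's criticality (Section 5.2).
    public Map<String, Double> outgoingArcs(String from) {
        return weights.getOrDefault(from, new HashMap<>());
    }
}
```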

4. Monitoring

Monitoring consists in acquiring information to update the interdependence graph and in analyzing the graph to control the domain agents. This information may be based on standard measurements (communication load, processing time, etc.) or on multi-agent characteristics such as the roles of agents or the interaction protocols. The next subsections present the monitoring architecture and define the adaptation mechanism.

4.1. Monitoring Architecture

In most existing multi-agent architectures, the observation mechanism is centralized (see for example the work of Mazouzi et al. [17]). The acquired information is typically used off-line to explain and to improve the system's behavior. Moreover, the considered application domains typically involve only a small number of agents and a priori well-known organizational structures. These centralized observation architectures are not suited for large-scale, complex systems where the observed information needs to be analyzed in real time to adapt the multi-agent structure to the evolution of its environment. We thus propose to distribute the observation mechanism to improve its efficiency and robustness. This distributed mechanism relies on a reactive-agent organization. These reactive agents have two roles:

• they observe and control the domain agents,
• they build global information and minimize communication.

These two roles are assigned to two kinds of agents: domain agent monitors (named agent-monitors) and host monitors (named host-monitors). An agent-monitor is associated with each domain agent and a host-monitor is associated with each host (see Figure 3).
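As a rough illustration of these two kinds of monitoring agents, the interfaces below sketch one possible decomposition. The names and signatures are our own assumption and are not part of DarX or DIMA.

```java
// Hypothetical interfaces for the two monitoring roles; signatures are illustrative only.
import java.util.Map;

// Associated with one domain agent: observes it, manages its node in the graph,
// and takes corrective actions when undesirable behavior is detected or anticipated.
interface AgentMonitor {
    void onEvent(Object event);                          // a message sent or received by the domain agent
    void onGlobalInformation(Map<String, Double> info);  // aggregates pushed back by the host-monitor
    void control();                                      // corrective action on the associated domain agent
}

// Associated with one host: collects local observations from its agent-monitors
// and builds and dispatches global information, minimizing communication.
interface HostMonitor {
    void register(String agentId, AgentMonitor monitor);  // each agent-monitor registers with one host-monitor
    void report(String agentId, double communicationLoad, int messageCount); // local loads for the last interval
    Map<String, Double> globalInformation();               // e.g. global communication load and message count
}
```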

Figure 3. Multi-agent architecture (observation level: one agent-monitor per domain agent and one host-monitor per host; agent level: the domain agents exchanging messages; events flow up from the domain agents to their agent-monitors and control actions flow back down)

4.2. Agent Communication

The monitoring agents (agent-monitors and host-monitors) are hierarchically organized. Each agent-monitor communicates only with one host-monitor. Host-monitors exchange their local information to build global information (global number of messages, global quantity of exchanged information, etc.). Distribution and observation are based on a middleware; our implementation relies on DarX [1]. In this middleware, each machine runs a DarX server and each server provides an observation module. This module collects events (sent and received messages) and data. Each agent-monitor registers with its host-monitor to receive the information and events related to the associated domain agent. After each interval of time $\Delta t$, the host-monitor sends the collected events and data to the corresponding agent-monitor, which then activates the adaptation algorithm. When the arcs of a node are significantly modified, the concerned agent-monitor notifies its host-monitor. The latter informs the other host-monitors so that they update the global information. In turn, agent-monitors are informed by their host-monitors when the global information changes significantly.

4.3. Adaptation Mechanism

The monitors make it possible to adapt the interdependence graph to the various changes of the domain agents. For example, the arrival of a new agent involves the automatic creation of the corresponding node and agent-monitor.

Let us consider an interval of time $\Delta t$. The global communication load $QI(\Delta t)$ and the global number of sent messages $NbM(\Delta t)$ are computed as follows by the host-monitors:

$$QI(\Delta t) = op_1(QI_{1,1}(\Delta t), \dots, QI_{n,n}(\Delta t)) \qquad (4)$$

and

$$NbM(\Delta t) = op_2(NbM_{1,1}(\Delta t), \dots, NbM_{n,n}(\Delta t)) \qquad (5)$$

where

• $QI_{i,j}(\Delta t)$ and $NbM_{i,j}(\Delta t)$ are respectively the communication load and the number of messages sent by $Agent_i$ to $Agent_j$ during the interval of time $\Delta t$,
• $op_1$ and $op_2$ are aggregation operators; for our experiments we used the average.

Algorithm 1 gives an outline of the adaptation mechanism of the interdependence graph. This adaptation relies on local information (communication load, etc.) and on global information (the aggregation of the various communication loads, etc.). The algorithm is used by each agent-monitor to manage the associated node.

Algorithm 1 Adaptation algorithm of an agent $Agent_i$
Require: $QI(\Delta t)$ and $NbM(\Delta t)$ provided by the observer agent, and an aggregation operator $AgOp$.
1: for each $j$ different from $i$ do
2:   Calculate:
     $$MyQI = (QI_{i,j}(\Delta t) - QI(\Delta t)) / QI(\Delta t) \qquad (6)$$
     and
     $$MyNbM = (NbM_{i,j}(\Delta t) - NbM(\Delta t)) / NbM(\Delta t) \qquad (7)$$
3:   Update the weights by using the following rule:
     $$W_{i,j}(t+\Delta t) = W_{i,j}(t) + AgOp(MyQI, MyNbM) \qquad (8)$$
4: end for
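To make the update rule concrete, here is a minimal sketch of one adaptation step, assuming the graph class sketched at the end of Section 3.2 and using the average as the aggregation operator AgOp (as in the paper's experiments). The class and method names are ours, not the paper's or DarX's.

```java
// Illustrative sketch of Algorithm 1: the agent-monitor of Agent_i updates the weights
// W_{i,j} from its local loads and the global aggregates provided by its host-monitor.
import java.util.Map;

public class AgentMonitorAdaptation {
    private final InterdependenceGraph graph; // see the sketch at the end of Section 3.2
    private final String self;                // identifier of Agent_i

    public AgentMonitorAdaptation(InterdependenceGraph graph, String self) {
        this.graph = graph;
        this.self = self;
    }

    /**
     * One adaptation step over the interval Delta t.
     * @param qiToPeers  QI_{i,j}: communication load sent by Agent_i to each Agent_j
     * @param nbmToPeers NbM_{i,j}: number of messages sent by Agent_i to each Agent_j
     * @param globalQI   QI(Delta t), provided by the host-monitor (equation 4)
     * @param globalNbM  NbM(Delta t), provided by the host-monitor (equation 5)
     */
    public void adapt(Map<String, Double> qiToPeers, Map<String, Integer> nbmToPeers,
                      double globalQI, double globalNbM) {
        if (globalQI == 0.0 || globalNbM == 0.0) {
            return; // nothing was exchanged globally during this interval
        }
        for (Map.Entry<String, Double> entry : qiToPeers.entrySet()) {
            String j = entry.getKey();
            if (j.equals(self)) {
                continue;                                                  // for each j different from i
            }
            double myQI = (entry.getValue() - globalQI) / globalQI;                      // equation (6)
            double myNbM = (nbmToPeers.getOrDefault(j, 0) - globalNbM) / globalNbM;      // equation (7)
            double agOp = (myQI + myNbM) / 2.0;                            // AgOp: here, the average
            graph.setWeight(self, j, graph.getWeight(self, j) + agOp);     // equation (8)
        }
    }
}
```

With the average as AgOp, an arc whose load and message count both exceed the global aggregates sees its weight grow, which is the intent of equation (8).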

4.4. Discussion

In the proposed adaptation algorithm (Algorithm 1), we considered only two types of information: the number of messages and the quantity of exchanged information. It is sometimes necessary to consider other types of information, such as:

• the type of messages (performatives, etc.),
• the priority of messages,
• the sequences of messages (interaction protocols, etc.).

The proposed algorithm can easily be reused to integrate these various types of information. When needed, agent-monitors take corrective actions to prevent or correct undesirable behavior. The choice of the appropriate action depends on the definition of the undesirable behaviors. In the next section we present a fault-tolerant multi-agent system in which these corrective actions consist in adding or removing agent replicas depending on the criticality of agents.

5. Example: Fault-Tolerant Multi-Agent Systems

The main strength of the agent paradigm lies in the collective problem solving performed by individually limited but interacting agents. As distributed systems, however, multi-agent systems are exposed to high rates of failure of their hardware and/or software components [11]. The failure of one component can often evolve into the failure of the whole system. Fault tolerance has therefore been underlined as one of the most important criteria for the design and implementation of multi-agent systems (see [5]). However, no work, as far as we know, addresses the problem of fault tolerance in complex multi-agent systems. The aim of this section is to apply the proposed adaptive multi-agent architecture to build fault-tolerant multi-agent systems.

5.1. Replication

Replication of data and/or computation is an effective way to achieve fault tolerance in distributed systems. A replicated software component is defined as a software component that possesses a representation on two or more hosts [9]. Many toolkits (e.g., [9] and [25]) include replication facilities to build reliable applications. However, most of them are not suitable for implementing large-scale, adaptive replication mechanisms. We therefore use a specific framework for replication, named DarX [16], which allows dynamic replication and dynamic adaptation of the replication policy (e.g., switching from passive to active replication, or changing the number of replicas). Moreover, DarX has been designed to easily integrate various agent architectures, and the mechanisms that ensure dependability are kept as transparent as possible to the application.

5.2. Adaptive Replication

DarX provides the adaptive mechanisms needed to replicate agents and to modify the replication strategy. However, we cannot always replicate all the agents of the system because the available resources are usually limited. We distinguish two cases. In the first case, multi-agent systems have static organizational structures and a small number of agents; critical agents can therefore be identified by the designer and replicated by the programmer before run time. In the second case, multi-agent systems may have dynamic organizational structures, adaptive agent behaviors, and a large number of agents, so the criticality of agents may evolve dynamically during the course of computation. Moreover, the available resources are often limited, and simultaneously replicating all the agents of a large-scale system is not feasible. Our idea is thus to automatically and dynamically apply replication mechanisms where (to which agents) and when they are most needed. The multi-agent system must therefore be able to observe its own behavior to dynamically determine agent criticality. We introduce our approach to this objective below; Figure 4 shows the general architecture for replication control.

Figure 4. General architecture for replication control (observation level: the agent-monitor of a domain agent collects interaction and system events, maintains the interdependence information, computes the agent criticality and drives replication control through the DarX server of its host; agent level: the domain agent)

The analysis of an agent's criticality makes it possible to define its importance and the influence of its failure on the behavior and reliability of the multi-agent system. We propose to use the interdependences of an agent to define its criticality. In our fault-tolerant multi-agent systems, the interdependences of each domain agent are thus processed by the associated agent-monitor to compute its criticality. The criticality of $Agent_i$ is computed as follows:

$$w_i = AgOp(\{W_{i,j}\}_{j=1,\dots,m}) \qquad (9)$$

where $AgOp$ is an aggregation operator. The criticality $w_i$ is used to compute the number of replicas. Thus, $Agent_i$ may be replicated according to:

• $w_i$: its criticality,
• $W$: the sum of the domain agents' criticalities,
• $rm$: the minimum number of replicas, which is introduced by the designer,
• $Rm$: the available resources, which define the maximum number of possible simultaneous replicas. It is defined by the designer.

The number of replicas $nb_i$ of $Agent_i$ may be determined as follows:

$$nb_i = rounded(rm + w_i \cdot Rm / W) \qquad (10)$$

$nb_i$ is then used by DarX to update the number of replicas of $Agent_i$. Note that $W$ is defined and updated when needed by the host-monitors; this value is then used by the associated agent-monitor to update the number of replicas of the associated domain agent.
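To make equations (9) and (10) concrete, here is a small sketch that computes the criticality and the replica count, assuming the average as the aggregation operator. The class and method names are ours; the DarX API is neither shown nor assumed.

```java
// Sketch of the replication-control computation: criticality w_i as an aggregation of the
// outgoing weights (equation 9) and nb_i = rounded(rm + w_i * Rm / W) replicas (equation 10).
import java.util.Map;

public class ReplicationControl {
    private final int rm;           // rm: minimum number of replicas, set by the designer
    private final int maxReplicas;  // Rm: the available resources (maximum simultaneous replicas)

    public ReplicationControl(int rm, int maxReplicas) {
        this.rm = rm;
        this.maxReplicas = maxReplicas;
    }

    // Criticality of an agent: here the average of its outgoing weights W_{i,j} (equation 9).
    public static double criticality(Map<String, Double> outgoingArcs) {
        return outgoingArcs.values().stream()
                .mapToDouble(Double::doubleValue)
                .average()
                .orElse(0.0);
    }

    // Number of replicas for an agent of criticality wi, given the sum W of all criticalities (equation 10).
    public int replicaCount(double wi, double sumOfCriticalities) {
        if (sumOfCriticalities <= 0.0) {
            return rm; // no interdependence information yet: keep the designer's minimum
        }
        return (int) Math.round(rm + wi * maxReplicas / sumOfCriticalities);
    }
}
```

For instance, with rm = 1, Rm = 10, wi = 0.6 and W = 3, the agent would be given rounded(1 + 0.6 * 10 / 3) = 3 replicas.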

6. Experiments

We made some preliminary experiments using the example of a distributed multi-agent system that helps to schedule meetings. Each user has a personal assistant agent which manages his/her calendar. This agent interacts with:

• the user, to receive his/her meeting requests and the associated information (a title, a description, possible dates, participants, priority, etc.),
• the other agents of the system, to schedule meetings.

If the assistant agent of one important participant (initiator or prime participant) in a meeting fails (e.g., his/her machine crashes), this may disorganize the whole process. Since the application is very dynamic (new meeting negotiations start and complete dynamically and simultaneously), the decision to replicate should be made automatically and dynamically.

Note: the experiments presented in this section were carried out on twenty machines with an Intel(R) Pentium(R) 4 CPU at 2 GHz and 526 Mb of RAM.

6.1. Performance

Monitoring is a useful mechanism; however, its cost may be significant. Our first experiment therefore measures the monitoring cost in the proposed architecture. We consider a multi-agent system with n distributed agents that execute the same scenario; each agent has a fixed scenario. The number of agents (n) is an important factor because our framework was specially designed for large-scale multi-agent systems. For each n (100, 150, ..., 400), we performed two kinds of measures (with and without monitoring) and we:

• use n/20 machines for each experiment,
• repeat each experiment 10 times.

Figure 5. Monitoring cost

Figure 5 gives the average execution time for each n. We considered two cases: 1) a multi-agent system without monitoring, and 2) a multi-agent system with monitoring. The figure shows that the monitoring cost is almost constant: it does not increase with the number of agents. This can be explained by the optimizations in the multi-agent architecture, such as the communication scheme between the agent-monitors and host-monitors. For instance, to build global information (global communication load, etc.), the host-monitors communicate only if the local information changes.

Figure 6. Monitoring cost for various time intervals

Figure 6 gives an overview of the variation of the execution time (including the monitoring cost) for different time intervals. The cost obviously increases when the time interval decreases. For instance, the average increase between 4000 ms and 500 ms is 25%. It is therefore very important to choose the most suitable time interval, or to propose an adaptive algorithm to choose it.

6.2. Robustness

In order to simulate the presence of faults, we implemented a failure simulator that randomly stops the thread of a randomly chosen agent. We considered 100 agents distributed on 10 machines. We ran each experiment for 10 minutes and introduced 100 faults. We repeated the experiments several times with a variable Rm. Rm is the number of extra resources (see Section 5.2); it defines the number of replicas that can be used by the whole multi-agent system.

Figure 7. Rate of succeeded simulations for each number of replicas

Figure 7 shows the success rate SR as a function of the number of extra replicas:

$$SR = \frac{NSS}{TNS} \qquad (11)$$

where NSS is the number of simulations which did not fail and TNS is the total number of simulations. From these experiments, we found that the number of extra resources should be at least equal to the number of critical agents. Note that the monitoring agents are generic: they are not related to any particular problem (e.g., fault tolerance) or application domain. Fault-tolerant multi-agent systems can therefore be built easily. Moreover, although preliminary, we believe these results are encouraging.

6.3. Discussion

In the proposed adaptive replication mechanism, we compute the agent criticality, which is then used to determine the number of replicas of each agent. This approach does not deal with the failure of machines; all the resources are thus considered as similar. To deal with the management of heterogeneous resources and the problem of agent replication, we are working on ideas from economics [13]. Our monitoring architecture is based on agents: we associate monitors with domain agents, and it does not rely on the middleware. Moreover, our approach aims at avoiding the failure of critical agents by replicating them. The failure of a host is automatically detected and covered by the middleware: if the failed domain agents are critical, they are automatically replaced by DarX, and the associated agent-monitor is also created if needed.

7. Conclusion

In this paper we proposed a generic architecture to augment an already-built multi-agent system with a basic global-adaptation mechanism that prevents or corrects undesirable behaviors. We proposed to monitor the global behavior of the system and to reify the necessary information as an interdependence graph. This graph reflects the dynamic evolution of the organization and enables the detection of undesirable behaviors. The architecture relies on an organization of reactive agents which supports monitoring, including corrective actions. This organization is hierarchical in order to minimize the cost of monitoring, and especially of the communication among monitoring agents. The architecture has been implemented with the DIMA platform [10] and the DarX middleware [1] [11]. It has been used to build a fault-tolerant multi-agent architecture which was validated on two examples: a distributed agenda and a basic crisis management system.

The proposed architecture gave promising results for easily building fault-tolerant multi-agent systems. We will now try to apply it to more general problems involving new kinds of acquired information and corrective actions.

Acknowledgment

The authors would like to thank the members of the Fault-Tolerant Multi-Agent Systems project at LIP6 for their many useful suggestions regarding fault-tolerant multi-agent systems.

References

[1] M. Bertier, O. Marin, and P. Sens. Implementation and performance evaluation of an adaptable failure detector. In International Conference on Dependable Systems and Networks, Washington, USA, 2002.
[2] K. M. Carley. Adaptive organizations and emergent forms. Organization Science, 769(2-3), 1998.
[3] C. Castelfranchi. Decentralized AI, chapter Dependence relations in multi-agent systems. Elsevier, 1992.
[4] C. Castelfranchi. Modelling social action for AI agents. Artificial Intelligence, 103:157-182, 1998.
[5] K. S. Decker, E. H. Durfee, and V. R. Lesser. Distributed Artificial Intelligence, chapter Evaluating research in cooperative distributed problem solving, pages 485-517. Morgan Kaufmann, 1989.
[6] J. Ferber and O. Gutknecht. Aalaadin: a meta-model for the analysis and design of organizations in multi-agent systems. In Y. Demazeau, editor, ICMAS'98, pages 128-135, Paris, 1998.
[7] T. Finin, R. Fritzson, D. McKay, and R. McEntire. KQML as an agent communication language. In Third International Conference on Information and Knowledge Management. ACM Press, November 1994.
[8] FIPA. Specification, part 2: Agent Communication Language. Foundation for Intelligent Physical Agents, Geneva, Switzerland. http://www.cselt.stet.it/ufv/leonardo/fipa/index.htm, 1997.
[9] R. Guerraoui, B. Garbinato, and K. Mazouni. Lessons from designing and implementing GARF. In Proceedings of Object-Oriented Parallel and Distributed Computation, volume LNCS 791, pages 238-256, Nottingham, 1989.
[10] Z. Guessoum and J.-P. Briot. From active objects to autonomous agents. IEEE Concurrency, 7(3):68-76, 1999.
[11] Z. Guessoum, J.-P. Briot, O. Marin, A. Hamel, and P. Sens. Software Engineering for Large-Scale Multi-Agent Systems, chapter Dynamic and Adaptive Replication for Large-Scale Reliable Multi-Agent Systems, pages 182-198. Number 2603 in LNCS. Springer Verlag, April 2003.
[12] B. Horling, B. Benyo, and V. Lesser. Using self-diagnosis to adapt organizational structures. In 5th International Conference on Autonomous Agents, pages 529-536, Montreal, 2001. ACM Press.
[13] N. Jamali, P. Thati, and G. Agha. An actor-based architecture for customizing and controlling agent ensembles. IEEE Intelligent Systems, Special Issue on Agents, 1999.
[14] G. A. Kaminka, D. V. Pynadath, and M. Tambe. Monitoring teams by overhearing: A multi-agent plan-recognition approach. Journal of Artificial Intelligence Research, 17:83-135, 2002.
[15] T. W. Malone and K. Crowston. The interdisciplinary study of coordination. ACM Computing Surveys, 26(1):87-119, March 1994.
[16] O. Marin, M. Bertier, and P. Sens. DARX - a framework for the fault-tolerant support of agent software. In 14th International Symposium on Software Reliability Engineering (ISSRE 2003), Denver, Colorado, USA, 2003. IEEE.
[17] H. Mazouzi, A. El Fallah-Seghrouchni, and S. Haddad. Open protocol design for complex interactions in multi-agent systems. In AAMAS, Bologna, Italy, July 2002. ACM.
[18] M. Klein, J. Rodriguez-Aguilar, and C. Dellarocas. Using domain-independent exception handling services to enable robust open multi-agent systems: The case of agent death. Journal of Autonomous Agents and Multi-Agent Systems, 7(1-2):179-189, 2003.
[19] J. Muller. The right agent (architecture) to do the right thing. In ATAL, volume 1555 of Lecture Notes in Computer Science, pages 211-225. Springer, 1998.
[20] N. Roos, A. ten Teije, and C. Witteveen. A protocol for multi-agent diagnosis with spatially distributed knowledge. In First Workshop on Programming Multiagent Systems: Languages, Frameworks, Techniques, and Tools (ProMAS'03), AAMAS'03, pages 655-661. ACM, July 2003.
[21] J. Odell. Agents and complex systems. Journal of Object Technology, 1(2):35-45, 2002.
[22] J. S. Sichman and R. Conte. Multi-agent dependence by dependence graphs. In AAMAS 2002, pages 483-490, Bologna, Italy, 2002. ACM.
[23] J. S. Sichman, R. Conte, and Y. Demazeau. Reasoning about others using dependence networks. In Incontro del gruppo AI*IA di interesse speciale sull'intelligenza artificiale distribuita, Roma, Italy, 1993.
[24] J. S. Sichman, R. Conte, and Y. Demazeau. A social reasoning mechanism based on dependence networks. In Proceedings of ECAI'94 - European Conference on Artificial Intelligence, Amsterdam, The Netherlands, August 1994.
[25] R. van Renesse, K. Birman, and S. Maffeis. Horus: A flexible group communication system. Communications of the ACM, 39(4):76-83, 1996.
[26] M. Wooldridge, N. Jennings, and D. Kinny. The Gaia methodology for agent-oriented analysis and design. AI, 10(2):1-27, 1999.
