Using MICE to Study Intelligent Dynamic Coordination

Edmund H. Durfee and Thomas A. Montgomery
Department of Electrical Engineering and Computer Science
University of Michigan
Ann Arbor, Michigan 48109
(313) 936-1563
[email protected]

Abstract

We describe a flexible experimental testbed, called MICE, for distributed artificial intelligence research. We argue that the adoption of MICE (or some other standard testbed) by the distributed artificial intelligence community can draw together the community and permit a much greater level of exchange of ideas, formalisms, and techniques. MICE allows an experimenter to specify the constraints and characteristics of an environment in which agents are simulated to act and interact, and does not assume any particular implementation of an agent's reasoning architecture. MICE therefore provides a platform for investigating and evaluating alternative reasoning architectures and coordination mechanisms in many different simulated environments. We outline the design of MICE and illustrate its flexibility by describing simulated environments that model the coordination issues in domains such as predators chasing prey, predators attacking each other, agents fighting a fire, and diverse robots that are working together. In addition, we note that MICE's ability to simulate multi-agent environments makes it an ideal platform for studying reasoning in dynamic worlds; we can associate functionality with arbitrary objects in order to trigger changes in the environment. We conclude by discussing the status of MICE and how we are using MICE in our current research.

This research was sponsored, in part, by the University of Michigan under a Rackham Faculty Research Grant, and by a Bell Northern Research Postgraduate Award.

1 Introduction

A fundamental hypothesis in distributed artificial intelligence (DAI) is that there is some core of general knowledge and reasoning mechanisms for coordination. That is, when an intelligent agent needs to interact with others in a new situation, it draws on this core of knowledge in order to combine and adapt techniques for coordination to make them suitable for the new situation. When a group of people are thrown together to accomplish some task, for example, they decide how to organize themselves to work as a coherent team. They usually compare the situation with cooperative situations they have seen in the past, and adopt the most relevant coordination technique, such as choosing a leader to control the whole group, or forming subcommittees to focus on subtasks, or allowing individuals to propose alternative approaches to the task and then either voting on the choices or negotiating among choices to reach the best compromise. A goal of DAI research is to give AI systems diverse knowledge about coordination, along with reasoning mechanisms to flexibly use this knowledge, so that AI systems can adapt how they coordinate based on their capabilities and the demands of the environment.

If we are to develop a flexible core of coordination knowledge and design agents that can use this knowledge, we need to be able to develop these agents incrementally, exposing them to different coordinative situations and extending them to handle these situations. In order to begin mapping out the space of coordination techniques and situations, most experimental investigations have studied specific techniques in the context of particular application domains (Table 1). Holding the coordination technique and the environment fixed, some researchers have developed mechanisms for intelligent agents that elicit the desired coordination behavior in the given environment [15, 5, 13, 3, 19, 16, 7].
Less research has concentrated on investigations where a fixed technique is generalized to variable application environments (such as the Contract-Net protocol [18]), or where agents can employ a variety of techniques in a fixed environment (such as partial global planning [6]).

                                  Coordination Technique
                           fixed                        variable
Environment   fixed    Organizational Structuring,   Partial Global Planning
Type                   Multistage Negotiation,
                       Multiagent Planning, etc.
              variable Contract-Net                  Research Using MICE

Table 1: Examples of Systems Classified by Environment Type and Coordination Technique.

To derive general principles of coordination, we want to investigate variable coordination techniques in varying environments. To do this, we have built the Michigan Intelligent Coordination Experiment (MICE) testbed for empirically studying and evaluating coordination knowledge and techniques. MICE provides a single experimental platform in which the

characteristics of many different environments (characteristics that influence the efficacy of a coordination technique) can be simulated. Thus, when an experimenter develops agents that coordinate well in environments with certain characteristics, these agents can be evaluated in quite different environments without any need to reimplement them. By examining how the agents apply their abilities to coordinate in different situations, and observing failures to coordinate well, we can expand our understanding of the knowledge and reasoning techniques underlying coordination in general. In addition, by making MICE available to the DAI research community, approaches to coordination taken by different researchers can be compared under identical constraints.

In this paper, we describe MICE's relationship to other efforts (Section 2), and how we have designed and implemented it (Section 3). We then outline, in very abbreviated form, how MICE is used (Section 4). In Section 5, we illustrate the flexibility of MICE by showing how it can simulate issues that are central to coordination in application domains such as predators capturing prey, forest-fire fighting, cooperative robotic reconnaissance, real-time robotics, and several others. We discuss the current status of MICE in Section 6, along with our ideas for extending MICE and our research directions in building intelligent coordinating agents. Our hope in describing MICE and its potential uses is to attract DAI researchers to join in our effort, and to spark interest in common experimental platforms for DAI (and AI in general) to facilitate the exchange and understanding of alternative software approaches.

2 Related Work

MICE differs from previous generic DAI testbeds because of the minimal assumptions it makes about how agents are implemented. For example, the MACE [11] testbed provides a language for defining both agents and their environments, and provides many facilities for monitoring, error handling, and interacting with the user. This language simplifies the task of defining new types of agents, but also limits the agents that can be built and evaluated to those that are specifiable in the MACE language. In contrast, the MICE testbed provides facilities for describing only the environment in which agents act and interact, leaving the user with the flexibility to implement agents in any way desired (so long as they can interface with Lisp). The increased latitude in how MICE agents can be implemented places a greater burden on the agents' developers, but allows experimentation with widely different architectures, including blackboard systems [9] and Soar [14]. Because MICE specifies only how agents interact indirectly through the environment, the agents' developers are free to specify how agents interact directly through communication (although, as we shall see in Section 5, the environment can be simulated to impact communication as well).

MICE extends the ICE testbed developed at the University of Southern California [12] in which artificially intelligent agents interact on a two-dimensional grid [1, 10]. MICE retains this two-dimensional grid model of the world and adds a number of extensions that allow greater flexibility in the coordination issues that can be presented to the agents. MICE provides an environment where agents "live," and imposes constraints on the capabilities and actions of agents and on the interactions between agents. These constraints affect the mobility of agents; the range, accuracy, and time needs of their sensors; their ability to move, create or remove other agents; and how collisions or other spatial relationships affect agents.


3 Design and Implementation

In designing MICE, we were primarily interested in building a testbed where we could reproducibly simulate agents that are acting concurrently. Of major importance was the ability to examine the individual decisions that agents make and the context in which those decisions are made. Thus, rather than implementing agent concurrency at the operating system level (and thus relinquishing control over agent scheduling), we designed MICE as a discrete event simulation, where events and actions in the environment take some amount of simulated time and each agent has a simulated clock. Because MICE is implemented on a serial processor, agents in MICE take turns deciding on the actions they wish to take; however, the execution of the actions is simulated to take place concurrently.

Because MICE is responsible for modeling the environment in which agents act, it must ensure that the environment is legal (all environmental constraints are satisfied) at each simulated time. Whenever agents take actions that lead to an illegal situation (such as when 2 agents that cannot share a location move into the same location), MICE must resolve the situation using information about the agents and user-supplied predicates. For example, if 2 identical agents that cannot share a location attempt to move into the same location, MICE can resolve the conflict by returning them both to their previous locations. Because MICE assumes that the previous situation was legal (and that the initial situation specified by the user is always legal), it can always resolve a new situation into a legal situation, in the worst case returning every agent to its previous state.

MICE not only represents time in discrete units, but in the current implementation it also represents agent actions and environment locations and events as discrete entities.
For example, when an agent moves to its adjacent northerly location, it is in its original location at one discrete time, and is in the adjacent location in the next discrete time. There is no concept of being "partway" between discrete locations. Thus, in our design we had to decide how a movement that takes more than one time unit should be simulated: When exactly does an agent make the transition? To simplify resolution, MICE simulates these transitions by having the agent move to the new location immediately, and then "resting" there for the remaining duration of the move. As a consequence, a slowly moving agent can "claim" a location ahead of a quickly moving agent that decides to move into that location later.[1]

Besides reproducibility, our discrete event simulation with simulated time has computational advantages, such as being able to skip over simulated times when no agent acts. Clearly, however, this capability is a drawback when outside events not generated by the agents might take place over these intervals. Thus, the simulation assumes that every change in the environment is due to the action of some agent. In complex scenarios where changes beyond the control of any agent might occur, these can be simulated by including an agent whose only function is to generate arbitrary events attributed to "acts of nature."

To facilitate experimentation, we have designed MICE to meet the goals of:

1. Flexibility. MICE can easily model the coordination issues that arise in different application domains.

[1] On the other hand, simulating an agent to arrive at its new location at the last second would be similarly problematic: it would "hold on" to its old location well after we would have thought it had left it.


2. Limiting knowledge engineering requirements in simulating new scenarios. A library of domain-specific predicates allows researchers to build environments with unique combinations of coordination issues.

3. Providing a clean interface to the intelligent agents. The interface between MICE and the agents is well-structured to give researchers the latitude to implement agents in any reasoning architecture desired.

4. Helping researchers collect the results of running coordination experiments. MICE provides a set of tools that can be used to view the interactions between agents in the environment, to review previous states and events, and to collect statistics over the course of experimental runs.

Because MICE also allows agents to specify some amount of simulated time spent reasoning, MICE provides a platform for studying issues in real-time decision making. For example, the function for deciding what an agent should do can record the times it begins and ends computation, and then map the elapsed real time spent into some number of simulated time units (where different ratios of real to simulated time will change the severity of how quickly agents must reason).

Dynamic environments are simulated by implementing even inanimate objects as agents. For example, in a blocks world we can implement each block as an agent, and blocks might act so that they sometimes move unexpectedly or slip from some robot-agent's grasp (see Section 5).
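The turn-taking scheme described in this section can be sketched in a few lines. The following Python fragment is a conceptual sketch only (MICE itself is written in Common Lisp, and all names here are invented for illustration): it repeatedly invokes the agent with the least-advanced simulated clock, charges the move's duration whether or not the move succeeds, and resolves a location conflict by leaving the later mover in place.

```python
class Agent:
    """Illustrative stand-in for a MICE agent: a simulated clock plus a
    policy that returns (move_delta, move_duration)."""
    def __init__(self, name, location, policy):
        self.name = name
        self.location = location
        self.clock = 0
        self.policy = policy

def run(agents, until):
    """Invoke whichever agent has the least-advanced clock.  Moves are
    applied immediately, so an early mover 'claims' its destination for
    the whole move duration; a conflicting later move leaves that agent
    in place, though its time cost is still charged."""
    occupancy = {a.location: a for a in agents}
    while min(a.clock for a in agents) < until:
        agent = min(agents, key=lambda a: a.clock)
        (dx, dy), duration = agent.policy(agent)
        agent.clock += duration                  # time passes even on failure
        dest = (agent.location[0] + dx, agent.location[1] + dy)
        if dest not in occupancy:                # legal move: take the location
            del occupancy[agent.location]
            occupancy[dest] = agent
            agent.location = dest

# Two agents head for the same cell; the faster one gets there first.
a = Agent("a", (0, 0), lambda ag: ((0, 1), 1))   # 1 time unit per move
b = Agent("b", (0, 2), lambda ag: ((0, -1), 2))  # 2 time units per move
run([a, b], until=2)
```

With these settings, agent a reaches and claims (0, 1), so b stays at (0, 2) even though both agents spend their move time, mirroring the "claiming" behavior described above.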

4 Using MICE

4.1 Building an Environment

Building an environment in the MICE system requires the specification of the world itself (the grid features), the characteristics of agents within the world, and the interactions between agents.

Grid Description. The first step in building an environment is deciding on the features of the world that the agents will occupy. This includes the size of the world (its dimensions) and the features of locations within the world. For example, in simulating a fire fighting scenario, it may be decided to specify locations according to their content such as trees, water, and roads. In another simulation, it may be decided that the content does not matter, but that elevation is relevant. In any case, the important issue in deciding on grid features is not to try to mimic the real world exactly, but rather to find the features of the real world that have an impact on the coordination issues faced by the intelligent agents.
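As a concrete (and entirely hypothetical) illustration of a grid description, the sketch below represents each location by just the two features mentioned above, content and elevation, and derives a terrain-dependent movement cost from them; the feature names and cost values are invented for illustration, not taken from MICE.

```python
# A hypothetical grid description: each location carries only the features
# that matter for coordination (here: content and elevation).
GRID_WIDTH, GRID_HEIGHT = 4, 3

def make_grid(default_content="trees"):
    return {(x, y): {"content": default_content, "elevation": 0}
            for x in range(GRID_WIDTH) for y in range(GRID_HEIGHT)}

grid = make_grid()
grid[(1, 0)]["content"] = "water"    # a lake agents cannot cross
grid[(2, 0)]["content"] = "road"     # faster travel for fire fighters

def traversal_cost(location, base_cost=2):
    """Terrain-dependent move cost: roads are fast, water is impassable."""
    content = grid[location]["content"]
    if content == "water":
        return None                  # not traversable
    return 1 if content == "road" else base_cost
```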

Agent Types. Next, the different types of agents must be determined. This includes deciding on a classification of the agents and determining the characteristics shared and not shared by agents of the same type. Such characteristics determine the abilities of the agent in the environment and may relate back to the grid description (for example, the speed of an agent may depend on the terrain being covered).

Agents are defined through calls to create-agent, which accepts a number of optional keyword arguments. Those that are used in almost all applications include an agent's :NAME, :LOCATION, :ORIENTATION, :TYPE, :SENSORS, :MOVE-DATA, :INVOCATION-FUNCTION, and :DRAW-FUNCTION. Other agent characteristics that are used in a large number of domains, though all of them might not be used in any one application, allow the user to specify solidity constraints (what types of agents block each other) and activation information (under what conditions agents should be created, removed, activated, or inactivated, and exactly how this should be done).
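The shape of the create-agent interface can be suggested with a Python analogue; the field semantics and defaults below are illustrative assumptions rather than the actual MICE definitions, with the field names following the paper's :NAME, :LOCATION, :ORIENTATION, :TYPE, :SENSORS, :MOVE-DATA, :INVOCATION-FUNCTION, and :DRAW-FUNCTION keywords.

```python
def create_agent(name, location, orientation="north", agent_type="predator",
                 sensors=None, move_data=None,
                 invocation_function=None, draw_function=None,
                 solid_against=(), activation=None):
    """Python analogue of MICE's create-agent keyword interface (the real
    testbed is Common Lisp; all defaults here are invented)."""
    return {
        "name": name,
        "location": location,
        "orientation": orientation,
        "type": agent_type,
        "sensors": sensors or [],             # e.g. range/period/cost specs
        "move_data": move_data or {},         # per-direction move durations
        "invocation_function": invocation_function,
        "draw_function": draw_function,
        "solid_against": set(solid_against),  # agent types that block this one
        "activation": activation,             # when to create/remove/(in)activate
        "clock": 0,
    }

predator = create_agent("p1", (0, 0), agent_type="predator",
                        move_data={"north": 1, "east": 1},
                        solid_against={"predator", "prey"})
```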

Agent Type Interactions. In addition to the relationship between the agents and the environment, the relationship between agent types must be specified. This includes determining what happens when two or more agents attempt to move to the same location, how the agents involved are affected when they collide, how other spatial relationships affect agents or the environment, and how the presence of an agent might obstruct the scanning of another agent.

4.2 Agent Invocation

MICE has been designed to separate agent implementation from the simulation of the environment in which agents live. Because of the desire to allow agents with very different reasoning architectures to reside simultaneously in MICE, we have enforced a very clearly delineated interface between agents and MICE. As MICE simulates the concurrent activities of agents, it must interleave the activities of the agents based on each agent's simulated clock time. When an agent is chosen for execution, MICE executes the agent by calling the agent's :INVOCATION-FUNCTION. The :INVOCATION-FUNCTION for an agent is specified by the user when the agent is defined. An agent's invocation function takes as an argument a copy of MICE's internal data structure for the current agent. This allows different agents to use the same invocation function, while the invocation function still receives information about which agent is actually being executed. The invocation function must return a set of commands to MICE indicating the decisions that the agent has made about its actions at this simulated time.
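For illustration, a trivial invocation function might look as follows in Python; the real interface is Common Lisp, and the command tuples and agent-view argument are invented stand-ins for MICE's internal structures.

```python
def chase_east(agent_view):
    """Called by the simulator with a copy of the agent's internal structure;
    must return the agent's commands for this simulated time.  Command names
    mirror the paper's :REASONING and :MOVE keywords."""
    seconds_spent = 2                      # pretend deliberation took 2 s
    return [("REASONING", seconds_spent),  # charge 1 sim unit per real second
            ("MOVE", "east")]
```

Because the function receives the agent's own structure, many agents can share it while still acting on their individual state.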

Agent Commands to MICE. Agents can issue commands to MICE that represent decisions that the agents have made regarding motion, perception, interactions, and time. These commands can include requests to :MOVE to an adjacent location, to :ROTATE in order to change orientation, to :SCAN some region, to :LINK to another agent, to :UNLINK from another agent, to take no action at the current time (the :QUIESCENT command), to take no action for a specified length of time (the :NULL-ACTION command), to spend a certain amount of time :REASONING, or to :STOP.

MICE allows two sources of simulated-time costs. One source is the cost associated with a particular action, such as how long it takes to move, scan, or link. The information used to compute these costs is specified by the user when creating an agent. The other source is the cost associated with deciding on that action. In some simulations, this source is considered negligible, and so these costs are always 0. However, in other simulated environments, the


time needed to decide on an action adds to the overall time for taking the action, and might delay the initiation of the action. MICE allows the latter type of time cost via the :REASONING command, whereby an agent can specify that it has spent a certain amount of time reasoning. For example, let us say an agent spent 2 real seconds deciding to move :NORTH, and the user has designed the agent to map each second of processing time into one simulated time unit. The invocation function would return 2 commands, one indicating :REASONING for 2 time units, and then a second requesting a :MOVE :NORTH. These are buffered by MICE and executed in order (with other agents' activities possibly interleaved with them). Of course, if the agent is behaving in a very dynamic environment, it might want to double-check between the :REASONING and :MOVE commands to make sure that the :MOVE is still valid. This capability must be supported by the agent itself: Once a sequence of commands is sent to MICE, it is executed entirely before the agent's invocation function is once again called. However, the user can implement an agent such that it internally buffers a sequence of commands (possibly with associated validity conditions) and issues these one at a time to MICE. This way, the agent is given a chance to double-check the situation before issuing the next command.

A command to MICE is essentially a request for some action to take place. These actions are tentative because concurrent actions by several agents might conflict. For example, two agents that cannot share locations can issue commands that would result in their being in the same location. In this case, the agents are tentatively moved and their simulated clocks updated, but MICE later recognizes the constraint violation and resolves it in the user-specified way (by default, it moves them back to their original positions). The time costs of the moves, however, are not undone; the agents have wasted some amount of time by taking conflicting actions.
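The internal-buffering pattern just described can be sketched as follows; this is a hypothetical pattern, not MICE code: the agent queues commands with validity predicates and releases one per invocation, replanning if the world has changed in the meantime.

```python
class BufferedAgent:
    """Queues (command, still_valid) pairs internally and hands the simulator
    one command per invocation, re-checking validity first."""
    def __init__(self):
        self.queue = []

    def plan(self, commands_with_conditions):
        self.queue = list(commands_with_conditions)

    def invoke(self, world):
        while self.queue:
            command, still_valid = self.queue.pop(0)
            if still_valid(world):
                return [command]           # issue just this one command
            # the dynamic world invalidated the plan: drop it, keep checking
        return [("QUIESCENT",)]            # nothing valid left to do

agent = BufferedAgent()
agent.plan([(("MOVE", "north"), lambda w: w["north_free"])])
```

If the northern cell has been occupied by the time the agent is next invoked, the stale move is discarded and the agent reports :QUIESCENT instead.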

Initialization, Running, and Termination. The information about the grid and the specific agents in an environment is stored in a file containing information about the simulated environment and the agents. Once an environment file has been prepared, MICE can be invoked with the name of the file as a parameter. As it runs, MICE invokes the agent with the least advanced simulated clock. After it executes an agent, MICE checks to see what simulated time the agent with the least-advanced clock is at, and if this is greater than it was before the agent executed (MICE maintains a "global" clock to store this value), then MICE must resolve any conflicting actions that might have occurred between the previous value of the global clock and the current global clock time. It steps through the intervening times, and for each time:

1. It checks any actions initiated by the agents at that time and the resulting state of the agents as a consequence of those actions.

2. It resolves any conflicts between the actions.

3. It checks the resolved situation against the set of possible agent interactions. For example, it checks the predicates for removing agents, such as when a :PREY agent is surrounded by :PREDATOR agents.


4. It takes any actions triggered by the situation.

5. If any actions were taken in step 4, it goes back to step 3. Otherwise, it is done.

The criteria for termination of a run are user-specified. Sometimes we want MICE to stop when no agent has moved for some fixed amount of time. Other times we want MICE to stop when no goals are left to achieve (such as when all of the :PREDATOR agents have captured all of the :PREY agents). Currently, the user defines a predicate called mice-continue-p that specifies the conditions under which MICE should continue the simulation. By default, the function continues until either a time limit has been reached or until no agent has moved for a fixed amount of simulated time.
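The default continuation behavior just described can be written down directly; this Python rendering of a mice-continue-p-style predicate is an assumption about its logic (including the limit values), not a transcription of the actual Lisp.

```python
def mice_continue_p(clock, last_move_time, time_limit=500, idle_limit=20):
    """Continue the run unless the time limit has been reached or no agent
    has moved for idle_limit simulated time units."""
    if clock >= time_limit:
        return False                 # global time limit reached
    if clock - last_move_time >= idle_limit:
        return False                 # no agent has moved recently
    return True
```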

4.3 User Interface

MICE uses graphics to display the environment's state at each time step. Agents and significant grid features are represented by geometric icons (squares, circles, triangles, etc.). These icons can be changed in response to the occurrence of events. For example, an agent represented by a filled square may change its representation to a hollow square when it deactivates.

MICE has an event-driven set of predicates that can be used to aid interpretation of an experimental run. Such predicates can be used to maintain statistics, change the graphic display of an agent or a grid location, or cause other changes in the environment. In a fire-fighting scenario, for example, after an area has been burned, its characteristics can be changed so that fire cannot move through it again. At the same time, statistics can be updated on the amount of area consumed by the fire, and the graphic representation of the burned location changed to reflect its condition.

The information about a run (currently sufficient for reproducing the graphics output of the run) can be saved using the function save-mice, which takes a file to save the run to as its only argument. Similarly, the function restore-mice will load in the saved run to recreate the agent structures at a sufficient level of detail to redisplay the entire run graphically.

5 Example Simulations

MICE provides us with a flexible framework in which we can simulate a wide variety of environments that involve multi-agent coordination. Its adoption by the DAI research community would have many benefits, including a greater exchange of ideas, formalisms, and techniques. Researchers would be able to implement their ideas in their favorite system (such as GBB [4] or Soar [14]) and execute them in a simulation environment that is readily available to others. To illustrate how we use MICE to simulate particular multi-agent environments and populate these with agents, we describe several simulations that we have implemented in MICE.

Predator-Prey. The inspiration for the MICE environment sprang from previous work that simulates the interactions between predators and prey in a two-dimensional grid environment [1, 10, 12]. Although different implementations have all concentrated on the problem of how agents of one type (predators) can surround and capture agents of a second type (prey), the constraints and capabilities of the agents have varied slightly from one implementation to the next. MICE allows us to simulate a wide range of constraints and capabilities for this problem, including: (1) an agent's sensing range (how far it can sense), period (how often it can sense), sensitivity (what objects and agents it can sense), and time costs (how long it takes to sense); (2) an agent's mobility, in terms of which directions it can move and how quickly it can move in each direction; and (3) agents' spatial constraints, such as whether two agents can occupy the same location, along with a predicate to resolve conflicts. An example of a simple predator-prey environment is depicted in Figure 1.

Using MICE, it is easy to extend the simulated predator-prey domain to a domain with 2 kinds of predators, each of which preys on the other kind. By modifying a few characteristics of the prey in the predator-prey simulation, we can give the prey the capability to capture predators. As in the predator-prey environments, one of the coordination tasks is for predators to dynamically team up to capture some prey. However, each predator must also have a goal of avoiding being captured itself. The tradeoff between the benefits of capturing agents (which encourage cooperation) and the costs of being captured (which fall essentially on individuals) leads to a tension between the simultaneous goals of being cooperative but also retaining some autonomy.
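A capture predicate for this scenario might be as simple as the following sketch; the four-neighbor surround rule is one common choice in the predator-prey literature, not necessarily the exact predicate used in any particular MICE run.

```python
def captured(prey_location, predator_locations):
    """A prey is captured when predators occupy all four orthogonally
    adjacent cells (a hypothetical surround rule)."""
    x, y = prey_location
    neighbors = {(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)}
    return neighbors <= set(predator_locations)
```

Within MICE, such a predicate would be checked when the testbed evaluates the set of possible agent interactions at each resolved time, removing the :PREY agent when it holds.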

Forest-Fire Fighting. MICE can be used to simulate the coordination issues in a forest-fire fighting scenario [2]. We can define fire agents who move in certain patterns, and we can specify predicates that simulate fire moving downwind and burning differently in areas with different groundcover. If unchecked for a certain amount of time, a fire agent creates a copy of itself at an adjacent location. Thus, the fire can spread and enlarge over time. Moreover, once a fire agent has occupied a location, the features of that location are modified so that fire cannot spread there in the future.

We can also define fire fighter agents by specifying an initial fire fighting capability for each. When a fire fighter encounters a fire agent, it applies itself to destroy the fire agent, but as a result it has less capability (it is weakened). The fire fighter agents must work together to contain and extinguish the entire fire before exhausting their capabilities. Strategic considerations include surrounding the fire to contain it, fighting it before it can spread, and concentrating on the fire's front. However, because each fire fighter might have a limited local view of the fire (limited sensor range), the agents might have different perceptions as to how to pursue these strategic goals. The agents must therefore communicate and coordinate their actions to work as an effective team. Moreover, in MICE we can define additional fire fighting agents, such as slowly-moving bulldozers, non-moving firebreaks created by bulldozers, and aircraft that are quickly-moving but less effective at extinguishing fire agents. These extensions to the forest-fire fighting environment bring up important additional issues in effectively coordinating heterogeneous agents and resources.
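The fire dynamics described above reduce to a small update rule. This sketch is illustrative only (wind, groundcover, and spread direction are abstracted into a seeded random choice): each fire copies itself into one unburned neighbor, burned cells can never reignite, and destroying a fire weakens the fire fighter.

```python
import random

def fire_step(fires, burned, rng=random.Random(0)):
    """One simulated step: each unchecked fire copies itself into one
    unburned adjacent cell; burned cells are permanently marked so fire
    can never spread there again."""
    new_fires = set(fires)
    for (x, y) in fires:
        burned.add((x, y))
        candidates = [c for c in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1))
                      if c not in burned and c not in new_fires]
        if candidates:
            new_fires.add(rng.choice(candidates))
    return new_fires

def fight(firefighter_capability, fire_strength=1):
    """Destroying a fire agent weakens the fire fighter."""
    return max(firefighter_capability - fire_strength, 0)
```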

In this symmetric environment, predators are represented by round clock faces and prey by square ones. The prey attempt to move across the grid to safe goal locations, while the predators attempt to surround and capture them. Without the coordination that they exhibit here, the predators would divide into ineffectual teams of three agents each and never complete a capture.

Figure 1: Predator-Prey

Cooperative Robotic Reconnaissance. In many application domains, intelligent agents need a wide variety of skills and abilities to interact effectively with the environment, not all of which may be available in a single agent. It is not likely that robots will be built with, for example, both very sensitive measuring equipment and heavy lifting equipment. Instead, the robots will be given different abilities and will have to work together to accomplish their goals. An example of this occurs in our cooperative robotics scenario in which a team of robots must map out the damage to a nuclear reactor after an accident. In this scenario (Figure 2), we have created an agent with a strong transmitter, whose role is to gather information from the other agents and send it to the human observers outside of the contaminated area. Other agents have been given good sensory abilities to explore various regions of the reactor and send reports to the transmitter. They are accompanied by agents with manipulators that are strong enough to move debris out of the way as it is encountered.

The robots start in the lower left room and move to the upper right room. There the robots with sensitive vision systems (the hollow circles) pair up with bulldozer robots and move off to explore the remaining rooms. The bulldozers move debris out of the way, allowing their partners to enter and examine the remaining areas for signs of damage.

Figure 2: Cooperative Robotic Reconnaissance

Cooperative Dynamic Blocks World. Classic AI environments can also be simulated in MICE and given new twists. An example of this is the blocks world that has been enhanced by the addition of multiple robot arms and a dynamic world. The robots can link to blocks to move them. The blocks are simulated as agents that always try to move down (as if pulled by gravity), but are blocked by the table and other blocks. To make the domain more dynamic, the blocks are also given a certain probability of moving sideways. Such a move would cause a stack of blocks to fall down and force the robots to react to the unexpected event.
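The behavior of such a block agent can be sketched in a few lines. This is an illustrative reduction (real MICE blocks would be full agents with invocation functions, and the slip probability here is an arbitrary choice): the block always tries to fall, and occasionally slips sideways, forcing robot agents to replan.

```python
import random

def block_step(block, occupied, table_y=0, slip_prob=0.1,
               rng=random.Random(1)):
    """One decision of a block-as-agent: fall if unsupported, otherwise
    slip sideways with small probability, otherwise rest in place."""
    x, y = block
    below = (x, y - 1)
    if y > table_y and below not in occupied:
        return below                       # gravity: fall one cell
    if rng.random() < slip_prob:
        side = (x + rng.choice((-1, 1)), y)
        if side not in occupied:
            return side                    # unexpected sideways slip
    return block                           # supported and stable
```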

Other Environments. Other environments we have simulated in MICE include "video game" environments where a person controls one of the agents. By having MICE associate simulated time with the actual time the person spends making a decision, we can impose severe time constraints on the person's calculations. We can also map problems such as communication network management into MICE by having agents for each site generate and send message agents that move to other sites. If the network is congested, message agents can collide or can overwhelm a site, simulating conditions of contention and bottlenecking. Finally, domains like air-traffic control are easily represented and studied in MICE.

6 Results, Status, and Conclusions

In essence, MICE is a shell for empirically evaluating coordination techniques in a variety of environments. MICE provides mechanisms for keeping track of agent actions and interactions in a simulated environment, and tools for evaluating the behavior of agents in the environment. It is up to the user to distill out from some task domain the essential environmental constraints and characteristics that influence coordination, and then to encode this information into MICE. Thus, MICE provides the software infrastructure for executing a simulation of some environment, but the user must provide the domain-dependent information. As a consequence, MICE is only the first step toward our more general goal of studying multi-agent domains and extracting general principles and techniques for coordination. Insights about the transferability of coordination mechanisms across domains are much more readily developed when it is easier to move between domains and collect observations. MICE provides the touchstone with which to evaluate alternative coordination mechanisms. We have been using it to evaluate a hierarchical protocol for coordination [8] and to study dynamic coordination in robotics tasks [17].

The current version of MICE runs in Common LISP on Texas Instruments Explorers and in Allegro Common LISP on Apollo workstations. We are migrating toward using X-Windows for our graphic display and CLOS internally. By making MICE as portable as possible, we hope to promote its adoption as a standard testbed by the distributed AI community. This would greatly facilitate the exchange of ideas and the comparison of experimental results between researchers.

References

[1] M. Benda, V. Jagannathan, and R. Dodhiawalla. On optimal cooperation of knowledge sources. Technical Report BCS-G2010-28, Boeing AI Center, Boeing Computer Services, Bellevue, WA, August 1985.

[2] Paul R. Cohen, Michael L. Greenberg, David M. Hart, and Adele E. Howe. Trial by fire: Understanding the design requirements for agents in complex environments. AI Magazine, 10(3):32-48, Fall 1989.


[3] Susan E. Conry, Robert A. Meyer, and Victor R. Lesser. Multistage negotiation in distributed planning. In Alan H. Bond and Les Gasser, editors, Readings in Distributed Artificial Intelligence, pages 367-384. Morgan Kaufmann, 1988.

[4] Daniel D. Corkill, Kevin Q. Gallagher, and Kelly E. Murray. GBB: A generic blackboard development system. In Proceedings of the Fifth National Conference on Artificial Intelligence, pages 1008-1014, Philadelphia, Pennsylvania, August 1986. (Also published in Blackboard Systems, Robert S. Engelmore and Anthony Morgan, editors, pages 503-518, Addison-Wesley, 1988.)

[5] Daniel D. Corkill and Victor R. Lesser. The use of meta-level control for coordination in a distributed problem solving network. In Proceedings of the Eighth International Joint Conference on Artificial Intelligence, pages 748-756, Karlsruhe, Federal Republic of Germany, August 1983. (Also appeared in Computer Architectures for Artificial Intelligence Applications, Benjamin W. Wah and G.-J. Li, editors, IEEE Computer Society Press, pages 507-515, 1986.)

[6] Edmund H. Durfee. Coordination of Distributed Problem Solvers. Kluwer Academic Publishers, 1988.

[7] Edmund H. Durfee, Victor R. Lesser, and Daniel D. Corkill. Cooperation through communication in a distributed problem solving network. In Michael N. Huhns, editor, Distributed Artificial Intelligence, Research Notes in Artificial Intelligence, chapter 2, pages 29-58. Pitman, 1987. (Also in S. Robertson, W. Zachary, and J. Black, editors, Cognition, Computing, and Cooperation, Ablex, 1990.)

[8] Edmund H. Durfee and Thomas A. Montgomery. A hierarchical protocol for coordinating multiagent behaviors. In Proceedings of the Eighth National Conference on Artificial Intelligence, pages 86-93, July 1990.

[9] Lee D. Erman, Frederick Hayes-Roth, Victor R. Lesser, and D. Raj Reddy. The Hearsay-II speech-understanding system: Integrating knowledge to resolve uncertainty. Computing Surveys, 12(2):213-253, June 1980.

[10] Robert F. Franklin and Laurel A. Harmon. Elements of cooperative behavior. Technical report, Environmental Research Institute of Michigan, Ann Arbor, MI 48107, August 1987.

[11] Les Gasser, Carl Braganza, and Nava Herman. MACE: A flexible testbed for distributed AI research. In Michael N. Huhns, editor, Distributed Artificial Intelligence, Research Notes in Artificial Intelligence, chapter 5, pages 119-152. Pitman, 1987.

[12] Les Gasser and Nicolas Rouquette. Representing and using organizational knowledge in distributed AI systems. In Proceedings of the 1988 Distributed AI Workshop, May 1988.

[13] Michael Georgeff. A theory of action for multiagent planning. In Proceedings of the Fourth National Conference on Artificial Intelligence, pages 121-125, Austin, Texas, August 1984. (Also published in Readings in Distributed Artificial Intelligence, Alan H. Bond and Les Gasser, editors, pages 205-209, Morgan Kaufmann, 1988.)

[14] John E. Laird, Allen Newell, and Paul S. Rosenbloom. SOAR: An architecture for general intelligence. Artificial Intelligence, 33(1):1-64, 1987.


[15] Victor R. Lesser and Daniel D. Corkill. Functionally accurate, cooperative distributed systems. IEEE Transactions on Systems, Man, and Cybernetics, SMC-11(1):81-96, January 1981.

[16] Cindy L. Mason and Rowland R. Johnson. DATMS: A framework for distributed assumption based reasoning. In Les Gasser and Michael N. Huhns, editors, Distributed Artificial Intelligence, volume 2 of Research Notes in Artificial Intelligence, pages 293-317. Pitman, 1989.

[17] David J. Musliner, Edmund H. Durfee, and Kang G. Shin. Execution monitoring and recovery planning with time. In Proceedings of the Seventh IEEE International Conference on AI Applications, pages 385-388, February 1991.

[18] Reid G. Smith. The contract net protocol: High-level communication and control in a distributed problem solver. IEEE Transactions on Computers, C-29(12):1104-1113, December 1980.

[19] Katia P. Sycara. Multi-agent compromise via negotiation. In Proceedings of the 1988 Distributed AI Workshop, May 1988.
