Distributed Plan Maintenance for Scheduling and Execution

C. Beckstein

G. Kraetzschmar, J. Schneeberger

University of Erlangen, IMMD-8 (Computer Science, AI Department) Am Weichselgarten 9 D-91058 Erlangen, Germany Fax: +49-9131-699-198

FORWISS (Bavarian Research Center for Knowledge Based Systems) Am Weichselgarten 7 D-91058 Erlangen, Germany Fax: +49-9131-691-185

[email protected]

fgkk,[email protected]

Abstract. Within the pede project we investigate the generation and execution of plans in distributed environments. Multiple, communicating agents perform tasks like generating process plans and machine schedules and controlling their execution. A general problem in such environments is how to handle unexpected events like high-priority jobs and machine breakdowns. A cooperative approach, where agents communicate in order to resolve occurring problems, usually leads to superior overall performance, but is bought at the price of increased network traffic and more complex agent architectures. We argue that the application of distributed reason maintenance techniques yields simpler agents and structured communication. We present darms, a distributed reason maintenance system, and show how it can be applied to integrate planning, scheduling, and control.

Keywords: multi-agent planning, scheduling, execution monitoring, contingency planning, pede problems, distributed plan maintenance, distributed reason maintenance, distributed belief revision problem, atms-based planning, darms

1. The PEDE domain

The pede1 project studies techniques to improve the efficient generation and fail-safe execution of plans in distributed, highly dynamic environments. In a pede domain, we typically have several planning and scheduling agents (planners; e.g. various process planners, a master production scheduler, schedulers for job shops) and a potentially large number of executing agents (executers; for instance humans, machines, robots, and autonomous transport devices).

Our example pede problem is a job shop for manufacturing and pre-assembling transmission parts2. The job shop must perform planning, scheduling, and control of execution. It is fed a stream of changes to its pool of jobs. The job shop has a library of flexible process plans, which contain primary operations in an enriched, nonlinear plan representation language. For each job, the job shop must retrieve the associated process plan, choose between alternative primary operations, linearize the plan, assign resources, plan and schedule secondary operations like workpiece and tool transport and setup, and control the correct execution of all operations.

1 pede = Planning and Execution in Distributed Environments
2 This example is a modified version of the case study described in [16].

2. Integrating Planning, Scheduling, and Control

Dependency and Change pede problems are often characterized by two difficulties: (a) there is no straightforward decomposition of the overall problem into fairly independent subtasks like planning, scheduling, and control that guarantees optimal results, and (b) the environment is very dynamic and a lot of things change.

Decomposition is necessary in order to break down a highly complex task into tractable components. For the combined planning/scheduling/execution problem, no integrated solution is known that would solve the overall problem directly. Therefore, we must decompose the overall task into smaller pieces that are easier to solve. However, in the transmission job shop example above, separating planning and scheduling of jobs and solving them independently often leads to suboptimal overall performance, even though an optimal solution for each subproblem was found. Generating slightly less efficient subplans may result in much improved schedules. One reason for this behavior is that planners usually assume unlimited availability of resources, while the scheduler's task is exactly to manage bounded resources. Thus, the subtasks are highly dependent on each other, and planners and schedulers should cooperate in order to provide better overall solutions.

Another problem in the pede domain are dependencies that stem from planning, as e.g. present in partial-order process plans. Since manufacturing operations are usually performed by different machines, work cells, and robots, scheduling results in distributing the operations of a process plan to different machines. The dependencies now involve several different executers and are more difficult to maintain properly.

Maintaining all those dependencies is especially hard when the environment changes very often. Changes happen all the time in realistic scenarios. Examples of such changes are broken workpieces and tools, machine failures, and added and removed jobs.
All of these may have far-reaching consequences, such as missed deadlines, inability to complete a job, or lost opportunities, and they may necessitate further work by planners and schedulers in order to account for the change.

Effects on Agent Architecture and Communications We assume a system consisting of multiple agents3 which together solve a complex problem involving planning, scheduling, and control of execution. The existence of many data dependencies is typical for such systems. Dependencies exist both among a single agent's data as well as among data maintained by different agents. Whenever changes in the environment are reflected in an agent's representation of the world state, the agent must update its data to maintain the dependencies. Since the agent's beliefs might have been communicated to other agents, changes may involve these other agents as well and necessitate update communications. Providing means to handle such changes often leads to very complex agent control structures, which are difficult to develop and to maintain.

3 See [3] for an introduction and overview of multi-agent systems.

The general problem is to find ways that allow us to use more structured agent architectures and more structured communication.

3. Using Reason Maintenance Techniques

The General Idea Our approach is to exploit reason maintenance techniques in order to improve agent and communication structure. The first basic idea is to impose a two-level architecture4 on each agent, which consists of a problem solver (ps) and a reason maintenance system5 (rms, see e.g. [12, 5, 4]). The reason maintenance module is used to represent the relevant problem solver data, e.g. plans and schedules. It also takes care of maintaining the dependencies between these data. As a consequence, the problem solver has a much simpler structure and is easier to develop and maintain. The second basic idea is to introduce multi-level communication: in our approach, we have low-level communication between the rms modules and high-level communication between the problem solvers. The next few paragraphs extend these ideas a little.

Using Reason Maintenance to Maintain Plan Dependencies Reason maintenance techniques are utilized for plan generation by [2, 10] in their Plan Net Maintenance System (pnms) using an atms. The pnms serves as a subsystem of a partial-order planner which records partially elaborated plans together with the planning decisions and assumptions that justify the plan steps taken. The advantage is that incomplete models of action, domain constraints, and derived and context-dependent effects can be easily represented in this framework. The pnms stores persistence assumptions for the effects of actions. More precisely, for each action an assumption is added that the action is executable with the specified effects if its preconditions hold. These assumptions are used to justify the results of temporal reasoning processes. The justification structure generated during a planning process allows the pnms to retract assumptions detected to be invalid and to automatically revise its temporal model. The feasibility of plan steps and plan inconsistencies are deduced from the current set of planning decisions and their corresponding assumptions.
Consequently, deviating planning decisions and unexpected events cause different deduced plan steps and temporal projections. The pnms provides means for storage, retrieval, and computation of plan information. A plan modification (i.e., a "planning operation") results in a transaction which adds events and states to the pnms. The propositions that hold in a world state are not stored explicitly but computed by a deductive query mechanism. In addition, justifications are stored to represent the dependencies among database entries in order to allow for revisions of the temporal model if violated assumptions are detected. Another limited temporal projection mechanism based on an atms has been proposed by [13] and implemented in the KEEworlds6 system. However, this temporal projection technique causes serious problems when used in hierarchical, partial-order planning systems [6].

4 The idea of a two-level architecture as described here is used frequently in the RMS community (cf. [15]).
5 Also referred to as tmss, which stands for Truth Maintenance Systems.
6 KEEworlds is a trademark of INTELLICORP.
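The atms machinery that systems like the pnms build on can be sketched in a few lines of Python. The sketch below is a toy illustration under our own naming, not the pnms implementation: each node carries a label, i.e. a set of environments (assumption sets) under which it holds, and declaring a NOGOOD prunes every environment that contains it.

```python
from itertools import product

class MiniATMS:
    """Toy ATMS sketch: labels are sets of frozenset environments."""
    def __init__(self):
        self.labels = {}      # node -> set of environments (frozensets)
        self.justs = []       # (antecedent nodes, consequent node)
        self.nogoods = set()  # environments known to be inconsistent

    def assume(self, a):
        self.labels[a] = {frozenset([a])}

    def justify(self, consequent, antecedents):
        self.justs.append((tuple(antecedents), consequent))
        self._propagate()

    def add_nogood(self, env):
        self.nogoods.add(frozenset(env))
        for n in self.labels:  # prune environments subsumed by the nogood
            self.labels[n] = {e for e in self.labels[n] if self._ok(e)}
        self._propagate()

    def _ok(self, env):
        return not any(ng <= env for ng in self.nogoods)

    def _propagate(self):
        changed = True
        while changed:
            changed = False
            for node in {c for _, c in self.justs}:
                label = set()
                for ants, cons in self.justs:
                    if cons != node:
                        continue
                    combos = product(*[self.labels.get(a, set()) for a in ants])
                    label |= {frozenset().union(*c) for c in combos}
                label = {e for e in label if self._ok(e)}          # consistency
                label = {e for e in label                           # minimality
                         if not any(o < e for o in label)}
                if label != self.labels.get(node, set()):
                    self.labels[node] = label
                    changed = True
```

Retracting a persistence assumption then amounts to declaring it NOGOOD: label propagation empties the labels of everything that depended on it, which is exactly the automatic revision described above.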

Reason Maintenance in Multi-Agent Systems A system performing assumption-based reasoning, for instance a planner using an atms to represent plan structures, should exhibit a two-level architecture consisting of a planning module and an rms rather than a monolithic system architecture (as in Figure 1a). Using an rms has obvious advantages for planners: there is a clean functional interface between planner and rms, and the rms takes a significant work load from the planner and allows for simpler control structures on the planner side by keeping it from becoming cluttered with lots of low-level details. Furthermore, there exist efficient implementations of both jtms-type and atms-type rmss.

Figure 1: Towards two-level agent architectures and multi-level communications

In our applications, however, we have multiple assumption-based reasoners. Information that possibly depends on assumptions is communicated to other agents, which may use this communicated information in their own decision making. Thus, through communication, assumptions and beliefs based on assumptions (referred to as revisable information) can be imported from and exported to other agents, and thereby dependencies on other agents' belief states may be implicitly introduced. From a global system view, communication of revisable information causes several agents to have local copies of the same piece of information. The two fundamental problems then are to distinguish between local and communicated versions of the same piece of information and to keep all the local copies that agents created after receiving some information consistent with the original. We call this problem the distributed belief revision problem.

A simple approach would be to use rms-based agents (Figure 1b). However, as the usual rmss assume a single-agent scenario, these rmss are not able to communicate with each other and do not provide any functionality to record what information has been sent to or received from which other agents. The ps must maintain additional data structures to hold this kind of information. Furthermore, its control structures must be changed, since for every single change in the rms the ps must compute the effects on the other agents and communicate the changes to them. This necessitates a much more complex ps design, and we lose most of the advantages we gained by giving an rms to each agent. Thus, we must take specific measures to deal with the distributed belief revision problem.

Distributed Reason Maintenance Systems A more promising approach is to extend rms technology to the distributed case (distributed rms) in order to provide support for multi-agent scenarios. For example, one could add functionality that enables the problem solver to ask its own rms about what other agents believe. The rms then requests the appropriate information from the other agent's rms and records the result in its dependency net in a way that allows it to locally handle later requests for the same information. In addition, the rms should establish a communication channel between itself and the other agent's rms, which is used to exchange updates on the shared belief (see Figure 1c). Following this scheme, we now have two levels of communication between agents: low-level communication between the rms modules of the agents and high-level communication between the problem solver parts of the agents. Most of the advantages in the single-agent case carry over to multi-agent scenarios: the problem solvers are not burdened with tedious update tasks and can be of much simpler design.
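The query-once, update-forever scheme just described might look roughly as follows. This is an illustrative Python sketch with invented names, and the registry dict merely stands in for the network layer between agents:

```python
class SimpleRMS:
    """Sketch of an RMS that caches other agents' beliefs locally."""
    def __init__(self, name, registry):
        self.name = name
        self.registry = registry   # name -> SimpleRMS (stand-in for the network)
        self.beliefs = {}          # proposition -> belief state
        self.remote = {}           # (agent, prop) -> cached remote belief
        self.subscribers = {}      # prop -> agents holding a copy of it
        registry[name] = self

    def tell(self, prop, value):
        self.beliefs[prop] = value
        # update contract: push the change to every RMS holding a copy
        for other in self.subscribers.get(prop, set()):
            self.registry[other].remote[(self.name, prop)] = value

    def ask_remote(self, agent, prop):
        key = (agent, prop)
        if key not in self.remote:            # first query: go to the remote RMS
            other = self.registry[agent]
            other.subscribers.setdefault(prop, set()).add(self.name)
            self.remote[key] = other.beliefs.get(prop)
        return self.remote[key]               # later queries: local lookup
```

The first ask_remote call contacts the remote module and subscribes for updates; every later query, including queries issued after the remote belief has changed, is answered from the local copy.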

4. DARMS

We propose an extension of reason maintenance techniques in order to solve the belief revision problem for the multi-agent case. As shown elsewhere [1], existing distributed reason maintenance systems (such as dtms [9], brtms [8], and datms [11]) exhibit major shortcomings when applied to the pede domain. Therefore, we have developed a distributed, assumption-based reason maintenance system called darms, a general tool to support revisable reasoning in distributed environments. darms has been designed to avoid the shortcomings of the systems already known, has the basic machinery necessary for the pede domain, and is flexible enough to allow for easy implementation of higher-level inter-agent protocols.

4.1. Key Ideas

Before describing the architecture and functionality of darms, we briefly introduce its key ideas. In order to avoid the problems of translating knowledge between different representations, we assume that formulas are in propositional normal form when used in the communication between darms modules.

two-level architecture support: The darms approach assumes that each agent in a multi-agent system is structured into a problem solving (ps) component and a darms module (see Figure 1c). Once a belief has been communicated (initiated by ps-ps interaction), all further communication necessary to update belief states can be handled by darms-darms communication.

remote belief query capability: In order to maximize the amount of inter-agent communication that can be handled on the darms module level, the ps-darms interface is extended by a remote belief query capability, i.e. an agent's ps asks its associated darms module whenever it wants to find out what some other agent believes about some proposition.

context management: As a darms module is supposed to answer queries (like "Does your ps currently believe p?") from other darms modules that involve the ps's current context (focus), the functionality of darms modules includes a context management facility. The ps-darms interface has been extended to include functions for managing the focus.

communicated nodes: darms modules use a new node type, communicated nodes, to represent other agents' beliefs, i.e. those beliefs are kept separate and are not automatically merged into the agent's own belief structures.

update contract: An update contract is established between a darms module providing information (the belief state of a proposition) and the darms module receiving the information. It ensures that the communicated node created by the receiving darms module will be updated whenever a change to the original node occurs.

directed communication: Communication is asymmetric, i.e. a mutual exchange of belief states is not performed automatically. Receiving a proposition from another darms module does not require the recipient to send information on that proposition the other way round.

sharing inconsistency: As all agents use the same representation language (propositional normal form), there also is a common notion of inconsistency. Whenever agents detect inconsistent situations, the darms modules exchange the corresponding NOGOODs, if they believe the NOGOODs to be relevant to each other. NOGOODs are relevant to an agent if they depend on one or more assumptions occurring in labels of nodes communicated to that agent. Also, NOGOODs which make an environment of a communicated node inconsistent must be communicated to the sender of that node.

4.2. Architecture of DARMS Modules

Each darms module is structured into three sub-modules: (a) dependency net management, (b) context management, and (c) communication and control. For the following discussion, we introduce two agents X and Y. We assume that X and Y have each declared a proposition p to be relevant, that X believes p if the assumptions A and B hold, and that Y has asked its darms module whether X believes p.

Dependency Net Management: The dependency net module is very similar to that of an atms. A new node type, communicated nodes, is used to locally represent communicated information. We denote a node for a datum p communicated by an agent Z as Z:p7. Communicated nodes may be used in the antecedents of justifications, but not in the consequent. There is no mechanism to automatically link a communicated node Z:p with the node p representing the agent's own belief about the proposition at hand. Higher-level protocols can easily provide such functionality in order to implement shared belief or various mechanisms for trust, for example. The data structure for darms nodes has been extended to include receivers, a representation of the set of agents the node has been communicated to. This set serves to find out which agents are to be informed in case of label updates to the node.

7 We consider the Z:p notation to be a monadic modal operator representing another agent Z's "belief" p.
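One possible shape for these node structures, with invented field names (the actual darms data structures are described in [7]), together with the NOGOOD relevance test from the key ideas above:

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    datum: str
    label: set = field(default_factory=set)      # set of frozenset environments
    receivers: set = field(default_factory=set)  # agents this node was sent to

@dataclass
class CommunicatedNode(Node):
    sender: str = ""   # a node for datum p sent by agent Z, written Z:p

def update_targets(node):
    """On a label change, every receiver must get an update message."""
    return set(node.receivers)

def nogood_relevant_to(nogood, node):
    """A NOGOOD is relevant to a node's receivers iff it mentions an
    assumption occurring in one of the node's label environments."""
    mentioned = set().union(*node.label) if node.label else set()
    return bool(set(nogood) & mentioned)
```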

Figure 2: The architecture of DARMS modules

Context Management: The context management facility manages the agent's own current context (focus) and two tables of assumption sets. The first is a reduced context table, which is indexed by agents. For each agent Z that X has received information from, the darms module maintains an entry in the table that represents the other agent's reduced focus. Simply speaking, the reduced focus contains only those assumptions in Z's focus which occur at least once in a label of any communicated node X received from Z. By using the reduced focus of Z, X can determine for any communicated node received from Z by local label lookup whether Z currently believes the data represented by the node or not. This is possible because whenever Z modifies its focus, it determines whether any other agents, e.g. X, are affected and sends a message to update the reduced focus. In order to provide this capability, darms modules must keep track of the set of assumptions relevant to some other agent, which are exactly those assumptions that occur in nodes that have been communicated to this agent. In the example above, agent X needs to know that assumptions A and B are relevant to agent Y. This information is stored in a communicated assumptions table, which is also indexed by agents.

Communication and Control: Finally, the communication and control unit coordinates the dependency net management module and the context management facility, provides the necessary functional interface (adding data to the net, changing the focus, etc.) to the problem solver, and handles all communication with other darms modules. It also provides translation of propositions to normal form, which is used whenever propositions must be passed back to the problem solver or to other darms modules.

4.3. Functional Interface and DARMS Behavior

darms extends traditional atms functionality mainly in two ways: it provides functions

to manage the focus and functions with an additional agent parameter. The former can be viewed as traditional atms queries with the focus as default assumption set. The latter provides a remote belief query capability, because the problem solver can ask its own darms about other agents' beliefs and the darms module handles all necessary communication.

The functionality of the problem solver interface of darms is described in detail in [7] and is only briefly sketched here. The ps uses update functions to (a) report relevant data, new assumptions, new justifications, and freshly detected NOGOODs to its own darms, (b) have its darms tell other agents about locally derived data and NOGOODs known to it, and (c) shrink and grow its own focus. On the other hand, the ps can (a) query its darms about locally derived data (labels and NOGOODs) and the consequences of this data, (b) ask for explanations of data reported to it by its darms, and (c) instruct its darms to obtain information of the same sort from certain other agents. There are also functions the ps can use to find out about its focus as well as the contexts of other agents, as far as they have become known to it via communication.

Almost all of the functions comprising the interface between the ps and its darms result in complex low-level communication between the darms subsystems involved. The communication behavior of a single darms follows a strict protocol that controls how the computations of the participating darms systems are connected. The protocol guarantees that the problem solver of every single agent (1) possesses a locally consistent darms subsystem and (2) shares a consistent view of the part of the world the agents have exchanged information about. It also ensures that the agents agree on a common notion of inconsistency, i.e. if an agent actually derives a set of assumptions to be inconsistent, basically all others have to accept the corresponding NOGOOD. For a formal specification of the communication protocol the reader is referred to [1].
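The reduced-focus rule from the context management discussion can be stated compactly. The sketch below is illustrative only and assumes labels are represented as sets of frozen assumption sets:

```python
def reduced_focus(focus, communicated_labels):
    """Keep only those focus assumptions mentioned in some label of a
    node received from the other agent; this suffices for local lookup."""
    mentioned = set()
    for label in communicated_labels:   # labels of nodes received from Z
        for env in label:
            mentioned |= env
    return set(focus) & mentioned

def believes(label, reduced):
    """The sender believes the node iff some environment of its label
    lies entirely inside the (reduced) focus."""
    return any(env <= reduced for env in label)
```

Since every label environment consists only of assumptions counted in `mentioned`, testing an environment against the reduced focus gives the same answer as testing it against the sender's full focus.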

5. Applying DARMS

This section outlines the use of darms for improving the integration of planning, scheduling, and control. Due to space limitations, the examples must be very short and may seem trivial at first glance. However, they illustrate the basic mechanisms used to support replanning and contingency planning in the transmission job shop scenario.

As an example, consider some workpiece B to be manufactured. Its process plan contains some operation p to be executed on some machine (which is not of interest here) using one of two possible tools, A1 or A2. The use of these tools requires the execution of different numeric control programs on the manufacturing machine in order to achieve p. The two programs have different durations. Normally, using A1 is more efficient, and this option is preferred to using A2. However, A1 breaks every now and then, it takes considerable time to replace it, and A2 is used in such cases.

We consider two scenarios. Both assume that the job for workpiece B has been scheduled under the assumption that tool A1 would be available, but the executer detects the contrary at some time between scheduling and execution. The first scenario shows how to dynamically react in such a situation; the second one shows how the scheduler may plan ahead and provide contingencies, especially if A1 breaks really often. Both scheduler and executer use a darms module to represent their planning and scheduling data. Assumptions are used to represent that the workpiece is okay and that a tool can be configured. An executable operation is represented as a fact that is justified by the belief in other facts, e.g. the underlying assumptions.
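The focus handling used in both scenarios (calls of the form modify-focus!({A1,B},{}) in Figure 3) can be mimicked with a few lines of Python. This is an illustrative sketch of the assumed semantics, not the darms interface itself:

```python
class FocusManager:
    """The focus is the set of assumptions the ps currently commits to."""
    def __init__(self):
        self.focus = set()

    def modify_focus(self, grow, shrink):
        # drop retracted assumptions, then add the newly adopted ones
        self.focus = (self.focus - set(shrink)) | set(grow)

    def holds(self, label):
        """A node holds iff some label environment lies inside the focus."""
        return any(env <= self.focus for env in label)
```

With focus {A1, B}, an operation labeled {{A1, B}} holds; after modify_focus({A2}, {A1}) it no longer does, while a node labeled {{A2, B}} now holds, mirroring the tool switch in the example.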

Figure 3: Handling failures using DARMS

Handling Failure Situations In the first scenario, the scheduler X1 creates a schedule for performing operation p using tool A1. In its darms module, it will have the assumptions A1 and B, the fact p, and a justification A1, B => p. Thus, p holds (p is executable) if both A1 and B can be assumed at the same time. Since there is no knowledge to the contrary, the scheduler does actually assume both A1 and B and includes these assumptions in its focus. It then tells executer X2 about the newly created schedule, which will add assumptions A1 and B as well as the communicated node X1:p with label {{A1,B}}. Some time later, the executer detects that A1 cannot be used for some reason and tells its darms the corresponding NOGOOD {A1}. The NOGOOD results in label propagation in X2's darms (giving p the empty label) and in a low-level communication from X2's darms module to X1's darms module about this NOGOOD. This triggers label propagation in X1's darms as well, which gives p the empty label and renders X1's focus inconsistent. X1 must take A1 out of its focus and notices that there is a scheduled operation with an empty label, which means the machine cannot execute it as scheduled. The scheduler can now fall back on its second alternative and decide on the use of tool A2. It creates an assumption for A2 and a new justification for p. The label update performed at p will, again as low-level communication between darms modules, be communicated to the executer X2, which is now able to execute p.

Note that all necessary communication for handling the failure situation is performed by the darms modules. All communication the problem solvers have to do is with their own darms modules. The scheduler simply schedules jobs when they arrive and informs the respective executer about them. Using darms, the executers get an explicit representation of the underlying assumptions the scheduler made, which are typically tied to states observable by the executer. Thus, they can check and verify or falsify relevant state information in a very focused, goal-directed manner. The scheduler checks its darms module for operations with empty labels on a regular basis and takes measures to repair them. If repair can be done, it tells its darms module about it, which automatically communicates and propagates changes to all other affected darms.

Planning for Contingencies In the second scenario, we assume that tool A1 breaks very often, i.e. there is a significant likelihood that it will break before executing operation p. In cases like this, contingency planning must be considered. In other words, the scheduler anticipates the situation that A1 breaks and provides alternative procedures to follow in this situation. In our example, the scheduler would consider both alternatives right away. X1's darms would have all the assumptions, nodes, and justifications as before (with p now having the label {{A1,B},{A2,B}}), plus the additional NOGOOD {A1,A2} representing that both tools cannot be used simultaneously. X1 would indicate its preference for using A1 by including it in its focus. The executer X2 would try to adopt X1's focus, as long as there is no conflicting information. If A1 is declared NOGOOD before starting operation p, however, this alternative cannot be carried out any more. But there still is another environment in X1:p. As long as its assumptions can be included in the focus (i.e. the tool A2 can be configured and workpiece B is okay), the executer can still proceed, at least with this particular operation. Again, low-level communication between the darms modules will ensure that the scheduler X1 receives the information about this change and can compute the appropriate consequences (e.g. a delay of subsequent operations). Note again that no high-level communication is necessary to handle the failure situation here. Also, low-level communication is reduced, as the executer can directly react to the failure without waiting for the scheduler to come up with an alternative decision.
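The label evolution in the contingency scenario can be replayed with plain set algebra (environments as frozensets, names as in the text):

```python
def prune(label, nogoods):
    """Drop every environment that contains a known NOGOOD."""
    return {env for env in label
            if not any(ng <= env for ng in nogoods)}

A1, A2, B = "A1", "A2", "B"

# Contingency schedule: p is justified both via A1 and via A2.
p_label = {frozenset({A1, B}), frozenset({A2, B})}

# The executer reports NOGOOD {A1}: tool A1 broke.
p_label = prune(p_label, {frozenset({A1})})
# The {A2,B} environment survives, so p remains executable.
```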

However, the scheduler had to do the extra work up front. In practical scenarios, the extra work may be limited to providing one alternative operation only, without computing the effects of actually using the contingency option (i.e. the alternative plan/schedule is developed only to depth one). However, since machine operations in practice often take a significant amount of time (from several minutes up to several hours), this kind of minimum reactivity in schedules may give the scheduler just enough time to recompute the schedule if such a failure situation arises. This is certainly better than having the machine sit around doing nothing while computing another schedule.

Figure 4: Planning contingencies using DARMS

Advanced Use In this last paragraph we want to give a rough sketch of further types of useful applications of darms in planning, scheduling, and control. The first application leads towards a solution of the following problem: If we cannot perform the (optimal) process plan given current constraints, when should we replan (use a less efficient process plan) and when should we reschedule (pre-empt other jobs, miss due dates)? We simply assume that a planner creates cost-optimal process plans assuming unlimited resources. The scheduler tries to use them and schedules its operations as well as it can. It then computes a cost penalty for this process plan with respect to its limited resources, by taking into account increased Work-In-Progress, job delays, etc., and informs the planner. The planner may now consider creating an alternative process plan that may be cheaper to schedule, but is by itself more expensive.

A different scenario could be the following: Process plans are not created in advance, but only on demand, i.e. when a job must be scheduled. Using abstractions, the scheduler informs the planner about its current resource situation. The planner then tries to come up with a process plan that is optimal given certain resource constraints. Possibly, a very close cooperation between planner and scheduler may yield the best results: the planner has all the planning knowledge (what operations are available, etc.), but it asks the scheduler every time to evaluate alternative operations. The scheduler's evaluation of alternatives may depend on its current resource conditions and workload.

6. Conclusions

Summary Effectively and efficiently maintaining dependencies is a major problem to be solved for achieving a smooth integration of planning, scheduling, and control. This problem is especially hard for multi-agent systems modeling a dynamic environment. The general problem to solve here is the distributed belief revision problem. Reason maintenance techniques are a standard means of maintaining dependencies. However, common rmss for single agents provide no acceptable solution for the multi-agent case: their lack of built-in support for communicating revisions forces the problem solver to take care of it. This compromises the two-level architecture and imposes severe control problems on the problem solver components. Existing distributed rmss did not provide the functionality and flexibility required for the pede domain.

Contributions We proposed darms as a solution to the distributed belief revision problem for multiple-context reasoners acting as independent planners and execution monitors. Key issues in the design were support for a two-level architecture in distributed environments (comprising a problem solver level and a darms module level), the capability to maintain multiple contexts, atms expressiveness, local control, and belief autonomy. Key ideas underlying the darms architecture are the communicated node concept, which separates communicated from local beliefs; context management, which allows implementing the remote belief query capability; and the notion of shared inconsistency. Finally, the application of darms to distributed plan maintenance for scheduling and control was illustrated with a few examples.

Implementation and Application An implementation of darms is under construction. The algorithm for distributed label propagation has been worked out and shown to satisfy the formal constraints specified in [1]. It is a straightforward extension of deKleer's algorithm for propagating label differences in single dependency nets. Details can be found in [7].
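The communicated node concept and the two-level architecture can be illustrated with a small sketch: each agent pairs a problem solver with a darms module, and revisions of a shared node are relayed as messages so that remote beliefs stay separate from local ones. Class and method names below are our assumptions for illustration, not the actual darms interface.

```python
class DarmsModule:
    """Illustrative darms module: keeps local and communicated beliefs apart."""

    def __init__(self, name):
        self.name = name
        self.local = {}          # node -> label, derived by the local problem solver
        self.communicated = {}   # (sender, node) -> label, received from remote modules
        self.subscribers = {}    # node -> remote modules sharing that node

    def share(self, node, remote):
        """Declare `node` a communicated node, shared with `remote`."""
        self.subscribers.setdefault(node, []).append(remote)

    def set_belief(self, node, label):
        self.local[node] = label
        # Relay the revision only to modules that share this node,
        # in the spirit of propagating label differences.
        for remote in self.subscribers.get(node, []):
            remote.receive(self.name, node, label)

    def receive(self, sender, node, label):
        # Communicated beliefs never overwrite local ones.
        self.communicated[(sender, node)] = label

x1, x2 = DarmsModule("X1"), DarmsModule("X2")
x1.share("p", x2)
x1.set_belief("p", [{"A2", "B"}])
# X2 now holds X1's belief about p as a communicated node, stored
# under ("X1", "p") and kept distinct from X2's own local beliefs.
```

Local control and belief autonomy are preserved because each module decides independently what to do with a received label; the relay only keeps the shared dependencies consistent.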
Applications of darms in the pede and other projects are planned and depend on the final implementation becoming available. A serious evaluation of both the effectiveness and the efficiency of darms is planned using pede-lab [14], an environment for performing tests and experiments with distributed applications in a well-defined simulative setting that supports inter-agent communication and allows for the simulation of various environmental conditions.

Acknowledgements We thank Joachim Hertzberg and an anonymous EWSP referee for detailed and helpful comments and suggestions, Herbert Stoyan for providing a stimulating research environment, and Robert Fuhge and Tim Geisler for many fruitful discussions.

References

[1] Clemens Beckstein, Robert Fuhge, and Gerhard K. Kraetzschmar. Supporting Assumption-Based Reasoning in a Distributed Environment. In Katia P. Sycara, editor, Proceedings of the 12th Workshop on Distributed Artificial Intelligence, Hidden Valley Resort, Pennsylvania, USA, May 1993.
[2] Michael Beetz, Matthias Lindner, and Josef Schneeberger. Temporal Projection for Hierarchical, Partial-Order Planning. In Tenth National Conference on Artificial Intelligence, Workshop on Implementing Temporal Reasoning, July 1992.
[3] Alan Bond and Les Gasser. Readings in Distributed Artificial Intelligence. Morgan Kaufmann, 1988.
[4] Johan deKleer. 1. An Assumption-based TMS, 2. Extending the ATMS, 3. Problem-Solving with the ATMS. AI Journal, 28:127–224, 1986.
[5] J. Doyle. A Truth Maintenance System. AI Journal, 12:231–272, 1979.
[6] R. Fikes, P. Morris, and B. Nado. Use of truth maintenance in automatic planning. In DARPA Knowledge-Based Planning Workshop, Austin, TX, 1987.
[7] Robert Fuhge. Verteilte Begründungsverwaltung. Studienarbeit, IMMD, Universität Erlangen-Nürnberg, 1993.
[8] Thilo C. Horstmann. Distributed Truth Maintenance. Technical Report D-91-11, Deutsches Forschungszentrum für Künstliche Intelligenz GmbH, Kaiserslautern, 1991.
[9] Michael N. Huhns and David M. Bridgeland. Distributed Truth Maintenance. In S. M. Dean, editor, Cooperating Knowledge Based Systems, pages 133–147. Springer Verlag, 1990.
[10] Matthias Lindner. ATMS-basierte Plangenerierung. Diplomarbeit, Intellektik/Informatik, Technische Hochschule Darmstadt, 1992.
[11] Cindy L. Mason and Rowland R. Johnson. DATMS: A Framework for Distributed Assumption-Based Reasoning. In Michael N. Huhns and Les Gasser, editors, Distributed AI, Volume II, pages 293–317. Pitman Publishers, London, 1989.
[12] Drew McDermott. A general framework for reason maintenance. AI Journal, 50(3):289–329, 1991.
[13] Paul B. Morris and Robert A. Nado. Representing Actions with an Assumption-Based Truth Maintenance System. In Fifth National Conference on Artificial Intelligence, August 1986.
[14] Rolf Reinema. PEDE-lab: Aufbau und Entwicklung einer Experimentierumgebung für Multi-Agenten-Systeme. Diplomarbeit, IMMD, Universität Erlangen, 1993.
[15] R. Reiter and Johan deKleer. Foundations of Assumption-Based Truth Maintenance Systems. In AAAI, pages 183–188, 1987.
[16] Olaf Schrödel. Flexible Werkstattsteuerung mit objektorientierten Softwarestrukturen. Dissertation, Universität Erlangen, 1992.