Exploiting Synergy While Maintaining Agent Autonomy

Jeffrey S. Cox [email protected]

Edmund H. Durfee [email protected]

Department of Electrical Engineering and Computer Science University of Michigan Ann Arbor, MI 48109

ABSTRACT

Agents in a multiagent environment may want to capitalize on cooperative opportunities to avoid execution inefficiencies. Agents can do so by merging their plans, to reduce overlapping actions and redundancies. However, such merging can lead to greater dependencies between agents, thus lessening their autonomy. In this paper, we present a new approach to multiagent plan merging that enables agents to remove some of these dependencies without compromising the efficiency gains the agents achieve by merging their plans.

1. INTRODUCTION

Cooperative planning agents operating in a common multiagent environment may be able to exploit positive interactions, or synergies, between their plans to increase the efficiency of their execution. In particular, synergies exist when agents assert overlapping or subsuming effects when trying to achieve their goals. In this case, the agents with the subsuming and subsumed plan steps can “merge” their plans (effectively allowing one or more agents to remove some steps from their plans) to reduce their combined cost of execution in state-oriented domains [7]. Though many researchers [4, 11, 14] have explored how planning agents can exploit such synergies, one drawback of these past approaches has been the complexity of the search process required to discover them. With this limitation in mind, we have recently developed an algorithm [2] capable of identifying and exploiting synergies between the plans of independent, autonomous agents; it exploits a hierarchical plan representation that helps to ameliorate this complexity problem. The algorithm works by identifying individual plan steps in different agent hierarchies that are (symmetrically or asymmetrically) redundant, and returning modified hierarchies in which the redundant step has been removed. Our results show that our approach not only can reduce the computational costs of finding synergies compared to non-hierarchical strategies, but can also find synergies that might otherwise be missed.

However, the ways in which synergies have been exploited in the past (both in previous work and in our own) can lead to problems in the multiagent case. Specifically, previous methods (in either the single- or multiagent case) removed redundant plan steps from the relevant plan. Though in the single-agent case there is little need to preserve the redundant operators, when this method is used to exploit synergy between different agents, there can be a negative impact on an agent’s level of autonomy. For example, if a synergy is exploited so that an agent A is no longer planning to execute a redundant step, and the agent B who performs the redundant step somehow fails to execute it, then agent A fails to achieve its goals as well. This indicates a tradeoff between the exploitation of synergies between agents and their levels of autonomy: the more plan steps are shared, the more dependent agents become on other agents to achieve their goals. As a solution to this problem, we introduce a modified merging method, which we term conditional merging, that allows agents to exploit synergies without sacrificing autonomy. Instead of having an agent A remove a plan step when it is determined to be redundant in the presence of a step of agent B, we convert the redundant step into a conditional step. We add synchronization primitives so that agent B can inform agent A when it has executed the shared step successfully. When agent A reaches the point in its execution where the new conditional plan step is to be executed, if it has been informed by B that B has executed its step successfully, then A can skip the conditional step. Otherwise, it has the option of waiting for agent B to execute the step, or of simply executing the conditional step as it would normally, without exploiting the synergy.
This strategy effectively changes the level of commitment made between agents. Jennings [5] has previously argued that “commitments are the foundation” of multi-agent planning, as agents cannot deviate from their plans without potentially negatively impacting the plans of other agents. Though this was true in our original implementation of our plan merging system, it is not inherently the case with regard to synergy exploitation, because when searching for synergy, agents are already known to possess the skills to complete their goals individually. Thus, synergy commitments between agents can be considered weak commitments, provided the method of coordination the agents employ leaves open the ability of both agents to achieve the redundant plan step (i.e., the agents do not necessarily remove their redundant steps from their plans).

2. RELATED WORK

The elimination of redundancy in single-agent plans has long been a goal in traditional planning systems. Many researchers have explored methods for eliminating redundancies in the plans of a single agent, either during the planning process itself or when integrating separate sub-plans, with the goal of making the resulting plans more efficient. A commonly used technique in classical planning, called step reuse, first tries to achieve goal conditions by introducing causal links between the effects of existing plan steps and unachieved goal conditions, thus preventing the introduction of unnecessary plan steps during plan formation. Plan-space planners like POP use this technique to reduce the cost of the resulting plans [12]. More sophisticated approaches to redundancy elimination during the single-agent planning process have been developed as well. The critics in Sacerdoti’s NOAH system [8], Tate’s NONLIN [9], and Wilkins’s SIPE [13] are all capable of recognizing and eliminating various kinds of plan operator redundancies during the planning process. NONLIN and SIPE are capable of goal phantomization, where existing operators are grounded or constrained so as to achieve new goals (preventing redundant operator instantiation). Kambhampati and Hendler also describe a method of goal phantomization in their plan-reuse framework [6]. More recent work [14] has explored problems in which an agent has constructed several independent plans for separate subgoals, and must now form a single plan by merging the plans together. Yang has derived a formal definition of what it means for a set of plan steps to merge with a single plan step.
Yang’s definition states that a set of plan steps Σ (subject to some grouping restrictions) is mergeable with a plan step µ (meaning µ can replace Σ in the plan) if the union of the preconditions of Σ subsumes those of µ, the postconditions of µ subsume the useful effects of Σ, and the cost of µ is less than the cost of Σ. In other words, µ achieves everything important that Σ does, requires no more preconditions when executed, and is at least as cheap to carry out. Yang’s definition is flexible, in that it allows any single plan step in a partial-order plan to merge with any possible subset. Work by Horty and Pollack [4] on evaluating new goal options given an existing plan context is similar to Yang’s. The premise of their research is that an agent should evaluate the cost of taking on a new task not by weighing the cost of the task’s plan in isolation, but by taking into account how the plan meshes with its existing plans. Since the cost of a plan in context is often less than it would be otherwise (thanks to plan step merges that “kill two birds with one stone”), an agent may be able to adopt a new task without incurring an unacceptable additional cost. Horty and Pollack also describe a mechanism that allows agents to estimate the cost of the new task, as a way of allowing agents with limited computational resources to make rational choices about whether or not to add the new task.

Though this body of work on redundancy elimination in the single-agent case is useful, it does not extend well to the multiagent case, at least with respect to agent autonomy. In these single-agent plan merging systems, there is no difference in the level of autonomy between the plans before redundancies are removed and after, since the plan or plans belong to a single agent. This is not true in the multiagent case, where redundancy elimination can have significant repercussions for agents who choose to merge their plans. An analysis of recent work on the multiagent plan merging problem shows this to be the case. One multiagent plan merging system was created by Ephrati [3], who extended Yang’s work on single-agent plan merging to the multi-agent context by farming out subgoals of a single goal to different agents to solve, and then integrating their subplans into a joint, multi-agent plan using an A* search algorithm. Ephrati’s system is able to handle both positive and negative interactions between these subplans, and because it utilizes an A* algorithm, it is claimed to be globally optimal. However, since completion of the resulting plan depends on the successful execution of each integrated subplan by each agent, if a single agent were to fail to execute a particular step, the entire plan would fail; the individual agents are thus not autonomous in their ability to achieve their individual subgoals. Had they not attempted to share common tasks and integrate their plans, they would not have sacrificed this autonomy. Tonino et al. [10] have recently developed algorithms for exploiting positive interactions between different agents that have many similarities to our own approach.
Their method takes advantage of a rich plan and resource representation, but significantly, if a plan step in the resulting merged plan belongs to one agent and its results are shared among different agents, its failure causes all agents dependent on the step to fail in the achievement of their goals, not simply the agent executing the step. Thus, like Ephrati’s plan merging approach, agents who integrate their plans become dependent on other agents, and may fail to achieve their goals if others fail to execute shared steps.

3. THE SYNERGY ALGORITHM

In this section and the following section we address the problem of agent autonomy in the context of multiagent plan merging. We begin with a description of our Synergy Algorithm, which is capable of identifying and exploiting plan step merges between agents with hierarchical plans. We characterize our plan representation and the summary information calculation process we rely on, then describe the synergy criteria we use and the search algorithm capable of discovering and exploiting these synergies. We then indicate the drawbacks of our approach with regard to agent autonomy, before presenting, in the next section, the extension to our Synergy Algorithm designed to address these drawbacks.

3.1 Hierarchical Plan Description

As we have stated earlier, our Synergy Algorithm supports agents who possess hierarchical plans used to achieve their goals. In essence, a plan hierarchy represents a set of possible plans that all achieve the same overall goal. Each possible complete refinement of a hierarchical plan represents one possible sequence of primitive plan steps capable of achieving the overall goal of the hierarchical plan. Our algorithm relies on a hierarchical plan representation of the following form: an agent’s plan hierarchy is a plan Pi of the form {Si, Li, Oi}. Si is a set of individual plan steps {Si1, Si2, ...}, which can be primitive or non-primitive. Steps that are non-primitive have various refinement methods, which are themselves plans of the form described above. Steps that are primitive have the form {name, pre, in, post, cost}, where name is the (unique) name of the operator; pre, in, and post are sets of sentences (supporting propositional logic) representing the preconditions, inconditions (or during conditions), and effects of the plan step; and cost is the cost of executing the plan step (such as the resources used). Finally, for a plan Pi, Li is a set of causal links between steps in Si describing the causal relations between the steps, and Oi is a set of ordering constraints over the plan steps in Si.

Figure 1: Roofing Agent Hierarchical Plan

Though abstract plan steps in agent plan hierarchies can be annotated with their relevant preconditions and effects, this condition information is often incomplete. In order to determine the complete set of effects of plan steps at abstract levels of the plan hierarchy, we rely on a plan summarization process developed by Clement [1]. This process propagates condition information up the plan hierarchy, so that abstract steps of an agent’s plan are annotated with both the necessary (must) and possible (may) preconditions, inconditions, and effects of the plan steps. By performing this operation, our Synergy Algorithm is able to reason about the necessary and possible preconditions and effects of abstract operators as well as primitive operators in the agent hierarchies, which is useful when it tries to detect plan step merges (below).
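To make the representation and summarization concrete, the following is a minimal Python sketch (the class and function names are our own, not from the paper's implementation). For brevity it models only effects as sets of literals, propagating them up the hierarchy in the must/may style described above, and ignores preconditions, inconditions, and clobbering among sibling steps:

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    """A plan step: primitive if `refinements` is empty; otherwise each
    entry of `refinements` is one alternative list of child steps."""
    name: str
    post: frozenset = frozenset()   # effects of a primitive step
    cost: float = 0.0
    refinements: list = field(default_factory=list)

def summarize_effects(step):
    """Return the (must, may) effect sets of `step`: an effect is a
    'must' effect if every refinement method achieves it, and a 'may'
    effect if at least one refinement method can achieve it."""
    if not step.refinements:
        return set(step.post), set(step.post)
    per_method = []
    for children in step.refinements:
        must, may = set(), set()
        for child in children:
            c_must, c_may = summarize_effects(child)
            must |= c_must   # all children of one method execute
            may |= c_may
        per_method.append((must, may))
    all_must = set.intersection(*(m for m, _ in per_method))
    all_may = set.union(*(y for _, y in per_method))
    return all_must, all_may
```

Under this sketch, an abstract Bring In Generator step with Place Generator and Fuel Generator children would summarize to the union of its children's effects, while a step with alternative refinements keeps as 'must' only what every alternative guarantees.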

Figure 2: Plumbing Agent Hierarchical Plan

Figures 1 and 2 are examples of plan hierarchies, each representing the overall hierarchical plan of a different construction agent. More abstract steps refine down into primitive steps, indicated by lines connecting steps above with steps below. A bar across these lines indicates that the step refines into the plan steps below, whereas a step without a bar across the lines to its children indicates that the step has more than one refinement method, and the children of the step represent possible refinements of the step. A solid line connecting two plan steps of the same parent indicates a causal link between the steps, whereas a dotted line indicates an ordering constraint between the steps. For example, in the Plumbing Agent’s plan, the agent must rent a generator before going to the construction site, so the Rent Generator step is ordered before the Arrive At Site step.

3.2 Discovering Plan Step Merges

Our algorithm searches through the agent plan hierarchies, looking for ways of removing redundant plan steps by merging them. To determine whether a step P can be merged with a set of steps Q belonging to other agents, the algorithm examines the causal links departing step P and determines whether the combined effects of the plan steps in Q subsume the effects associated with these causal links. (We refer to the effects of a plan step that are represented in its outgoing causal links as the step’s necessary effects, because they are the effects of the plan step needed by later plan steps.) If this is the case, then P is considered mergeable, meaning it can be removed from its plan hierarchy, pending the replacement of its outgoing causal links with links from steps in the set Q. Note that a step Qi ∈ Q, contributing a condition c to replace a causal link from P to a step R, must not be ordered after R; otherwise, the resulting plan would contain a cycle. Once a mergeable plan step is detected, the algorithm performs a merging operation between the steps, described in detail in the next subsection.

3.3 Agent Synergy Discovery as a Top-Down Search

Since we could potentially find plan steps to merge between agents at all levels of the agents’ hierarchies, we need a systematic way of exploring the agent hierarchies to discover merges. Our solution is to implement a top-down search strategy, in which the algorithm begins by looking for plan steps to merge at more abstract levels of the plan hierarchies, where there are fewer steps, before refining the hierarchies to look for merges at lower levels.

The search for plans to merge is through the space of partial expansions of the agent hierarchies. We term a partial expansion a frontier; the search can thus also be characterized as a search through the space of possible agent plan frontiers. The search process begins with a state containing the top-level plan steps of the agents being coordinated, and generates new states either by performing merges between agent frontiers or by refining plan steps to further expand the plan hierarchies. Figure 3 shows an example search state that the synergy algorithm would generate when coordinating the two construction agents with the plans described earlier. Here, it has refined the top-level plan steps of both agents’ hierarchies and replaced them with the sub-steps of these steps. When our search algorithm discovers plan steps that can merge according to the method outlined previously, it makes some adjustments to the frontiers of the agents to implement the merge. First, for each causal link exiting step P, the link is removed and a new link is added from the appropriate plan step in Q to the dependent plan step. Then, the plan step P that was found to be redundant given the presence of plan steps Q is removed from its frontier. Causal links between the plans of different agents are implemented by adding synchronization plan steps between the agents’ plans.
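The mergeability test of Section 3.2 can be sketched as follows (a simplification with hypothetical data shapes: causal links are (condition, consumer) pairs, and the ordering relation is passed in as a predicate):

```python
def mergeable(p_out_links, q_effects, ordered_after):
    """Decide whether step P can merge with the steps in Q.

    p_out_links: list of (condition, consumer) pairs, one per causal
        link departing P.
    q_effects: dict mapping each step in Q to its set of effects.
    ordered_after(q, r): True if step q is constrained to come after r.

    P is mergeable if every departing link can be replaced by a link
    from some step in Q that achieves the same condition and is not
    ordered after the link's consumer (which would create a cycle).
    Returns the replacement links to install, or None if P cannot merge.
    """
    replacement = {}
    for cond, consumer in p_out_links:
        donors = [q for q, effects in q_effects.items()
                  if cond in effects and not ordered_after(q, consumer)]
        if not donors:
            return None          # some necessary effect is uncovered
        replacement[(cond, consumer)] = donors[0]
    return replacement
```

The returned mapping is exactly the set of new cross-agent causal links the merge operation installs in place of P's outgoing links.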

Figure 3: Example of Search State in Search Space
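The frontier search just described can be sketched as a breadth-first loop over partial expansions (all names here are hypothetical; a real implementation would order states by cost and apply the pruning mechanism described later):

```python
from collections import deque

def synergy_search(initial, merge_successors, refine_successors, cost):
    """Breadth-first sketch of the top-down synergy search.

    A state is a tuple of agent frontiers (here, tuples of step names).
    `merge_successors(state)` and `refine_successors(state)` each yield
    successor states; the cheapest state encountered is returned.
    """
    best = initial
    queue, seen = deque([initial]), {initial}
    while queue:
        state = queue.popleft()
        if cost(state) < cost(best):
            best = state
        for succ in list(merge_successors(state)) + list(refine_successors(state)):
            if succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return best
```

On the construction example, merging the two Bring In Generator steps yields a state with one fewer step, which the search returns as the cheaper coordination solution.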

Figure 4: Implementation of Merge of Bring In Generator Plans

Figure 4 shows the new search state created by merging the Roofer’s Bring In Generator plan step with the Plumber’s Bring In Generator step. The result is a pair of modified plans in which the Roofer now depends on the Plumber to install the generator for both to use. Since this state is more efficient than the original plans of the agents (they do not wastefully bring in two generators), the algorithm returns these modified hierarchies as a potential solution to the agents being coordinated.

Figure 5: Refinement of the Roofer’s Bring In Generator Plan Step

The algorithm also generates new search states by refining the plans on the different agent frontiers. When refining a plan step, it removes the step from its frontier and places each child step on the frontier in its place. When refining a plan step with multiple possible refinements, it generates a new search state for each possible refinement of the plan step, in each case replacing the step with a plan step representing that refinement. Figure 5 shows an example expansion of the Roofing Agent’s Bring In Generator plan step, where the step has been refined into the steps Place Generator and Fuel Generator. The refinement process propagates any associated ordering constraints and causal links down the hierarchy. Causal links are propagated by determining which step in the refinement either achieves or requires the relevant condition. In the case of the two agents above, since Bring In Generator was achieving a condition for Roof House (namely, the generator being set up and ready), this causal link is changed to originate from the Fuel Generator step in the refined frontier, since by completing the Fuel Generator step, the generator is ready. To reduce the complexity of the search, our algorithm implements a pruning mechanism. This mechanism detects plan steps on agent frontiers that have no likelihood of merging with other plan steps, based on their summarized condition information (both necessary and possible conditions). These detected steps are marked so that the search algorithm will not generate new search states by refining them, because if a step’s summarized conditions do not overlap with the conditions of other steps, then the summarization mechanism guarantees that no lower-level steps will have any conditions in common either.
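The link propagation and pruning test above can be sketched as two small helpers (both are simplifications with hypothetical data shapes; in particular, attaching a propagated link to the last achieving child is one reasonable reading of the propagation rule, not the paper's exact procedure):

```python
def propagate_link(condition, consumer, children):
    """When a step with an outgoing causal link is refined, re-attach
    the link to the last child whose effects achieve the condition.
    `children` is an ordered list of (name, effects) pairs."""
    for name, effects in reversed(children):
        if condition in effects:
            return (name, condition, consumer)
    raise ValueError("no child achieves %r" % condition)

def prunable(step_may_conditions, others_may_conditions):
    """Pruning test: if a step's summarized 'may' conditions share
    nothing with the other agents' steps, summarization guarantees
    no descendant can merge, so the step need never be refined."""
    return step_may_conditions.isdisjoint(others_may_conditions)
```

In the running example, refining Bring In Generator moves its generator-ready link onto Fuel Generator, the child that actually achieves that condition.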

3.4 Plan Conflict Resolution

Regardless of the amount of synergy to be discovered between agent plan hierarchies, when agents also possess plan steps that may achieve contradictory effects, they must resolve such conflicts to guarantee correct execution. Our algorithm would not be of much help to planning agents unless it were also able to resolve such potential problems. Agents may have plan steps that achieve conditions that clobber the preconditions, inconditions, or postconditions of another agent’s plan steps. To enable agents to exploit overlapping effects when executing together, the coordinator should identify and resolve potential conflicts between agents in addition to pointing out opportunities for synergy. Borrowing from previous work on this problem [1], we employ a conflict resolution critic that examines the relationships between the plan steps on different agent frontiers, and resolves potential conflicts by suggesting synchronization constraints to be placed between plan steps with contradictory conditions. This allows agents to exploit synergies without having to worry about the negative effects they could have on each other.

3.5 Removing Irrelevant Plan Steps

Once a plan step is removed from a plan because of a plan merge, it may render other steps in the partial order irrelevant. That is, other steps may have been instantiated in the plan simply to enable the removed step, in which case these additional steps can be removed as well. The presence of causal link information between plan steps on the agents’ frontiers allows us to deduce what conditions a plan step is contributing, and to which other step it is contributing them. Thus, when a plan step is removed because of a merge, any plan steps whose outgoing causal links were connected only to the removed step may be removed from the plan as well. Our Synergy Algorithm implements this plan step irrelevancy checking. After the merging algorithm implements a plan step merge, it scans the agent plan frontiers, looking for plan steps that have no outgoing causal links. Any such steps are removed from the plan frontier. This process is repeated until no more plan steps are determined to be irrelevant.
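The irrelevancy sweep can be sketched as a fixed-point loop (a simplification with hypothetical data shapes; steps with no outgoing links at all are assumed to serve the agent's top-level goal and are kept):

```python
def remove_irrelevant(frontier, causal_links):
    """Iteratively drop steps whose every outgoing causal link points
    at a step no longer on the frontier.

    frontier: set of step names currently on the agent's frontier.
    causal_links: set of (producer, condition, consumer) triples.
    Returns the surviving frontier.
    """
    frontier = set(frontier)
    changed = True
    while changed:
        changed = False
        for step in sorted(frontier):
            outgoing = [l for l in causal_links if l[0] == step]
            if outgoing and all(consumer not in frontier
                                for _, _, consumer in outgoing):
                frontier.discard(step)   # all its consumers are gone
                changed = True
    return frontier
```

For example, once the Plumber's Bring In Generator step is merged away, the Rent Generator step that existed only to enable it is dropped on the next sweep.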

4. MODIFYING THE SYNERGY ALGORITHM TO MAINTAIN AGENT AUTONOMY

Though the above approach to hierarchical plan merging has been shown [2] to result in dramatic efficiency improvements for agents with high degrees of plan similarity, the particular way our approach implements plan merges (which is similar to past approaches) results in some drawbacks for the agents being coordinated. Specifically, because merged steps are removed from the plan hierarchies, if the situation changes such that the agent or agents achieving the goals associated with the removed step are suddenly no longer able to perform their actions, then the agent with the merged step is unable to achieve its goals, because of the dependencies created by the merging process. In fact, one of the greatest benefits of our approach, the ability to merge abstract plan steps as well as primitive ones, results in the creation of greater dependencies between agents: the more abstract the plan merge, the more the agent who removes its merged plan step depends on others to help it achieve its overall goal. Thus, there is an apparent tradeoff between an agent’s ability to exploit efficiency gains by merging plans and an agent’s level of autonomy, its ability to work independently to achieve its own goals.

The drawbacks of our approach in terms of the loss of agent autonomy can be overcome by modifying the process of implementing plan merges once they are discovered by the algorithm. The intuition is this: when a plan step P is found to be redundant by the algorithm, instead of removing the plan step from the hierarchy, we add a new refinement method to the plan hierarchy that refines the plan step into a new null step, which the agent does nothing to execute. The preconditions of this new null step are that all the steps that P has merged with are completed. We also prevent this conditional plan step from being further refined by the search process.

Significantly, we also require that any conflicts involving these conditionally merged steps be resolved before the modified hierarchies are returned to the agents as solutions. This marks a departure from our earlier merging approach, where removed steps could safely be ignored during the conflict resolution phase, as they were guaranteed not to be executed by the agent. In this case, since there is no guarantee that the conditionally merged step will be refined into the harmless null step, we must ensure that if the agent does choose to refine it to P at execution time, such a refinement does not prevent the plans of other agents from completing successfully. To finish the implementation of this conditional merging approach, we merely need to add additional communication actions to the plan frontiers of the agents whose steps (in Q) are substituting for the merged step P. These actions inform the agent with step P that the preconditions of the null sub-step of the conditionally merged step have been satisfied, allowing the agent to choose the null step refinement at execution time and then proceed to execute its remaining plan steps. This prevents the agent from refining a conditionally merged step to the null step at execution time unless the other agents have enabled it to do so. So, when an agent reaches a point where there is a conditional step, it checks to see whether that step is enabled (i.e., the other agent(s) have signaled that the step is done). If so, it can execute the null step and go on. Otherwise, it waits for some time τ to see if the null step becomes enabled. After τ, it executes the step itself.

Figure 6: Implementation of a Conditional Merge of the Bring In Generator Plan Steps

Figure 6 shows the result of the conditional merge between the Bring In Generator steps. The Plumbing Agent’s Bring In Generator plan step has been replaced by a conditional step that itself refines into the Bring In Generator step and a null step. Note the dotted line connecting the end of the Roofing Agent’s Bring In Generator plan step and the Plumbing Agent’s conditional plan step, indicating the conditional merge between the two steps, and the communication action that will take place upon successful completion of the Bring In Generator step by the Roofer, alerting the Plumber that it can select the null step expansion of its new conditionally merged step. Note that if the Roofer does not inform the Plumber by the time the Plumber reaches its conditional step, the Plumber will wait for a time τ to be notified, and then redundantly install its own generator so that it can complete its own plan. This can lead to a loss of efficiency, but guarantees that the Plumber can complete its plan.
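The execution-time decision just described can be sketched as follows (a simplification; `enabled()` stands in for the synchronization message from the other agent, and the function name is our own):

```python
import time

def execute_conditional_step(step, enabled, execute, tau):
    """Handle a conditionally merged step at execution time.

    If the shared step is reported done within `tau` seconds, choose
    the null refinement; otherwise fall back to executing the step
    locally, forgoing the synergy but preserving the agent's ability
    to achieve its goal on its own.
    """
    deadline = time.monotonic() + tau
    while True:
        if enabled():
            return "null"            # skip: the other agent did the work
        if time.monotonic() >= deadline:
            break
        time.sleep(0.01)             # poll until the deadline passes
    execute(step)
    return "executed"
```

With tau set to zero (as in the paper's current implementation), this degrades to a single check of the synchronization signal before executing locally.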

4.1 Integrating Conditional Merging With Irrelevant Plan Step Determination

One obvious question the reader may have is how this conditional merging approach integrates with our method of removing irrelevant plan steps after the removal of a merged step. Clearly, if an agent is informed at execution time by another agent or agents that a plan step is irrelevant, it will want to know this before executing any other plan step that is only present in the plan to support the merged step. For example, if the Roofer brings in its own generator, the Plumber would like to know this so it can use that generator instead of bringing in its own. To solve this problem, instead of simply leaving the irrelevant steps in the plan hierarchy, the irrelevant steps are converted into new conditional plan steps, just as the original merged step was converted into a conditional step. The null sub-step of each new conditional step is given the same preconditions as the null sub-step of the merged step, so that the agent will not select the null step of any new conditional step unless it is enabled by communication from the other agent(s).

Figure 7: Integrating Conditional Merging With Irrelevancy Checking

Figure 7 demonstrates this method on the plans of the construction agents. Here, the algorithm has recognized the redundancy between the two Bring In Generator steps and merged them, converting the Plumbing Agent’s Bring In Generator step into a conditional step with two different possible refinements, and doing the same for the Rent Generator step. Now, if the Plumbing Agent is informed by the Roofing Agent that the generator is installed, the Plumbing Agent is free to select the null refinement of both the Rent Generator step and the Bring In Generator step (assuming it has not yet done either).

4.2 Symmetric Conditional Merging

In some cases, the set of plan steps Q and the plan step P that merges with the steps in Q may have identical necessary effects (meaning that they have causal links with identical effects associated with the links). In this case, rather than simply performing the conditional merge as we would normally, we can implement a symmetric conditional merge between P and Q. To do this, we simply convert both the step P and the plan steps in Q into new conditionally merged steps. The criteria for the agent with plan step P selecting the null step when refining its new conditionally merged step are that all agents with plan steps in Q have informed the agent that their steps are completed. Conversely, the criteria for agents with plan steps in Q selecting the null sub-step when executing their conditionally merged steps are that the agent with plan step P has informed them that P has been executed successfully.

Figure 8: Implementation of a Symmetric Merge

Figure 8 shows the intuition behind the idea of a symmetric merge in our running example of the two construction agents. In this case, since both agents are capable of installing the generator, it would be better if the first agent to reach the plan step when executing its plan simply brought in the generator and then informed the other when the step was completed. Note that this symmetric merging is not possible using our original plan merging approach, as that approach required the agents to commit to the merge offline, before execution time, meaning that one agent had to commit to performing the step. The ability to support symmetric plan merges, where neither agent commits to the shared task, is one of the many advantages of the conditional merging approach over our earlier approach to plan merging.
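The first-to-arrive behavior of a symmetric merge can be sketched with a shared flag (a simplification: the lock stands in for the synchronization primitives, and the sketch ignores failure of the claiming agent, which the τ fallback above would handle):

```python
import threading

class SymmetricMerge:
    """Whichever agent reaches its conditionally merged step first
    performs the shared work and signals; the other then selects its
    null refinement."""
    def __init__(self):
        self._lock = threading.Lock()
        self._done = False

    def arrive(self, execute):
        with self._lock:
            if self._done:
                return "null"       # other agent already did the step
            self._done = True       # claim the shared step
        execute()
        return "executed"
```

For example, if the Roofer reaches its Bring In Generator step first, it installs the generator and the Plumber later takes the null refinement.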

5. EVALUATION

Evaluating the advantages of the conditional merging approach relative to our previous unconditional approach is somewhat complicated. Clearly, one important metric for comparing the two approaches is the level of agent autonomy each provides. Also, since the original motivation of our merging approach was to improve agent plan efficiency, the degree to which each method impacts agent efficiency is also a worthwhile metric. The impact of our conditional merging approach on these two metrics relative to our original method is highly dependent on the specifics of the problem domain. For example, if agents are unlikely to fail in executing their plans, but may experience temporary delays for various reasons, then our conditional approach will be less useful, because the agents may be better served by waiting for others to complete shared tasks than by charging ahead and doing those tasks themselves. Conversely, if agents are speedy but have a high rate of failure, then having agents wait indefinitely for others to complete shared tasks (as our original merging approach mandates) may cause agents not to complete their plans. In this case, the conditional merging approach, and the increased autonomy it provides, may better benefit the agents. In our current implementation, we have assumed that the value τ is zero, but more generally, this value will be determined by the relative speed and reliability of the agents in the multiagent system. In our future work, we will look more closely at the problem of deriving effective values of τ for the various situations agents can find themselves in. We are currently in the process of empirically evaluating the impact of our change in merge strategy to the conditional approach, with regard to both agent autonomy and agent execution efficiency, in a variety of possible scenarios. We are considering varying both the probability of an agent successfully completing its plan steps and the length of time it takes to execute those steps, to determine in which situations the conditional merging approach is advantageous.

Finally, a drawback of our conditional merging approach is the change in the complexity of the conflict resolution process. One advantage of our original plan merging approach (and of previous work on the plan merging problem) is that when a plan step is merged away, it is removed from the plan entirely, so it cannot cause conflicts with other steps. This simplifies the conflict resolution phase of our algorithm, because merged steps can be safely ignored during this phase. In contrast, conditionally merged steps may still be refined so that the original step is executed, requiring them to be considered during the conflict resolution phase. The result is a more complicated conflict resolution process: more plan steps must be checked for conflicts, and any conflicts found involving these steps must be resolved. We are currently analyzing the impact of these potential conflicts on the complexity of our search process and on agent communication.

6. CONCLUSIONS AND FUTURE WORK

In this paper, we have described an algorithm capable of discovering agent synergies using a merging approach that helps agents both to exploit these synergies by merging their plans and to preserve some level of agent autonomy. One objection that could be raised about our solution to the problem of maintaining agent autonomy while exploiting synergy is that it would be better for agents to coordinate implicitly, via the environment itself. In the case of the two construction agents, instead of coordinating, the agent arriving later could simply have observed that the generator was already installed and proceeded to its next objective. This is still coordination, but not of the explicit kind that our Synergy Algorithm performs. However, if at a higher level of its plan hierarchy the Plumbing Agent had chosen a different day to do the plumbing, then there would have been no synergy opportunity with the Roofing Agent. Our algorithm, by refining the agents' plan hierarchies to discover synergy, enforces execution constraints on the agents that guarantee they execute steps that create synergy opportunities.

However, other issues remain open. A central one is the question of what agents should do when they reach the point where they can execute a conditionally merged step but have not yet been informed by the other agent(s) that the shared task has been completed. We assume that when this happens, the agent refines the conditional step to the step it was originally planning to do before coordinating. However, it may be worthwhile for the agent to wait for some amount of time, to give the other agents a chance to complete their tasks and report their completion. Thus, our future work on this problem will most likely focus on giving agents more information so that they can better estimate exactly how long they should wait for another agent to complete a shared task. This decision should be based both on the likelihood that the other agent will successfully complete the shared task and on how much time that agent needs to complete it. Additionally, much work remains on evaluating the efficacy of this approach in terms of agent execution efficiency, as well as other factors.

7. REFERENCES

[1] B. Clement and E. Durfee. Top-down search for coordinating the hierarchical plans of multiple agents. In Proc. of the Third Int. Conf. on Autonomous Agents, pages 252–259, 1999.
[2] J. S. Cox and E. H. Durfee. Discovering and exploiting synergy between hierarchical planning agents. In Proc. of the Second Int. Conf. on Autonomous Agents and Multiagent Systems (forthcoming), 2003.
[3] E. Ephrati and J. S. Rosenschein. Divide and conquer in multi-agent planning. In National Conference on Artificial Intelligence, pages 375–380, 1994.
[4] J. F. Horty and M. E. Pollack. Evaluating new options in the context of existing plans. Artificial Intelligence, 127(2):199–220, 2001.
[5] N. R. Jennings. Coordination techniques for distributed artificial intelligence. In G. M. P. O'Hare and N. R. Jennings, editors, Foundations of Distributed Artificial Intelligence, pages 187–210. John Wiley & Sons, 1996.
[6] S. Kambhampati and J. A. Hendler. A validation structure based theory of plan modification and reuse. Technical Report CS-TR-90-1312, Stanford University, Department of Computer Science, June 1990.
[7] J. S. Rosenschein and G. Zlotkin. Rules of Encounter: Designing Conventions for Automated Negotiation among Computers. MIT Press, 1994.
[8] E. D. Sacerdoti. A Structure for Plans and Behavior. Elsevier/North-Holland, Amsterdam, 1977.
[9] A. Tate. Generating project networks. In Proceedings of the Tenth Int. Joint Conference on Artificial Intelligence (IJCAI-87), 1987.
[10] H. Tonino, A. Bos, M. de Weerdt, and C. Witteveen. Plan coordination by revision in collective agent based systems. In H. Blockeel and M. Denecker, editors, Fourteenth Belgium-Netherlands Conference on Artificial Intelligence, pages 487–488. K.U.Leuven, 2002.
[11] F. von Martial. Interactions among autonomous planning agents. In Y. Demazeau and J.-P. Muller, editors, Decentralized AI, pages 105–119. North Holland, 1990.
[12] D. S. Weld. An introduction to least commitment planning. AI Magazine, 15(4):27–61, 1994.
[13] D. E. Wilkins. Practical Planning: Extending the AI Planning Paradigm. Morgan Kaufmann, San Mateo, California, 1988.
[14] Q. Yang. Intelligent Planning: A Decomposition and Abstraction Based Approach to Classical Planning. Springer-Verlag, Berlin, 1997.
