An Algorithm for Plan Verification in Multiple Agent Systems

Chengqi Zhang and Yuefeng Li
School of Mathematical and Computer Sciences
University of New England, Armidale, N.S.W. 2351, Australia
Email: {chengqi, [email protected]

Abstract. In this paper, we propose an algorithm which improves Katz and Rosenschein's plan verification algorithm. First, we represent the plan-like relations with adjacency lists and inverse adjacency lists instead of adjacency matrices. Then, we present a method that avoids generating useless sub-graphs while generating the compressed set. Finally, we compare the two plan verification algorithms. We not only prove that our algorithm is correct, but also prove that it is better than Katz and Rosenschein's algorithm in both time and space complexity.

Keywords: Planning, Verification, Multiple agent systems.

1 Introduction

Distributed problem solving plays an important role in distributed artificial intelligence [2,10,17,19,20,21]. It is now common to use planning as an approach to distributed problem solving in multiple agent environments [4,7]. Planning research in multiple agent systems has historically focused on two distinct classes of problems. One paradigm has been planning for multiple agents, which considers issues inherent in centrally directed multi-agent execution [7,13,15,17]. The second paradigm has been distributed planning, where multiple agents participate more autonomously in coordinating and deciding upon their own actions [5,10,16,21,22]. Planning the actions of multiple agents in either paradigm needs parallel structures for representing plans. Taking the STRIPS representation of actions, directed acyclic graphs (DAGs) are particularly well suited to the representation of plans for parallel execution [7,18]. The question is how a DAG plan can be verified [8], i.e. how we can be sure that such a plan will be correct, given our uncertainty about exactly when unconstrained parallel actions will be performed.

In 1993, Katz and Rosenschein presented a method to verify whether a plan can be executed in parallel [8]. Instead of having to examine every levels relation (all possible parallel execution structures, denoted LEVELS_D; see Definition 4 in Section 2), they introduced two new concepts: induced sets and compressed sets. Given a plan-like relation D and a state S, they proved that the compressed set of LEVELS_D is a minimal set of levels relations that needs to be verified. Using these results, they proposed an algorithm for verifying a plan. The algorithm uses an adjacency matrix to represent the plan-like relation D and a recursive procedure to generate all the nodes of the compressed set. Although the algorithm generates the forest for the compressed set, it generates many useless sub-graphs before generating each node.

In this paper, we propose an algorithm which improves the above algorithm. We use an adjacency list and an inverse adjacency list to represent the plan-like relation D, and use a supplementary array to control the generation of sub-graphs (that is, the algorithm does not actually generate these sub-graphs, but from the supplementary array we can recognize the current sub-graphs); in this way we obtain a method that avoids generating the useless sub-graphs (Section 3). We not only prove that the algorithm is correct, but also prove that it is better than Katz and Rosenschein's algorithm in both time and space complexity (Section 4).

2 Basic concepts

In this section we first briefly introduce the definitions of actions and plans in STRIPS, and then give the definitions of σ-possible, levels relations and compressed sets [8].

Every STRIPS operator α can be represented by a triple <P_α, D_α, A_α>, where P_α is the precondition list of α, D_α is the delete list of α and A_α is the add list of α. In this paper, we assume that every agent can execute every operation of the same plan, that all agents always obey a supervisor's commands, and that they do not fail unless another agent who is executing another operation interferes. Another especially significant assumption in this work is the "uncertainty assumption", which states that it is not possible to know in advance the duration of any operation's execution.

Definition 1 Let a plan Π be represented as a tuple <α_1, α_2, ..., α_n>, and let T_I and T_G be the initial and goal states respectively. Given a state S, the operation sequence α_1, α_2, ..., α_n, n ≥ 1, is possible from the state S if P_{α_i} ⊆ S_{i-1} (i = 1, 2, ..., n), where S_0 = S and S_{i-1} is the result of executing α_{i-1} from S_{i-2} for all 2 ≤ i ≤ n + 1. Furthermore, if T_I ⊆ S, T_G ⊆ S_n, and α_1, α_2, ..., α_n is possible from the state S, then we call Π a sequential plan, and we denote S_n = r(α_1, α_2, ..., α_n; S).

Definition 2 Let Π = <α_1, α_2, ..., α_n> be a plan, and S be a state. Let O be the set of symbols whose elements are α_1, α_2, ..., α_n. We say that it is possible to execute α_1, α_2, ..., α_n in parallel from S (O is possible from S) if, whenever the supervisor gives instructions in the state S to n agents to execute these operations, every agent proceeds without interference and does the work indicated by the operation, and the states attained in this way are identical. The result of executing α_1, α_2, ..., α_n in parallel from S is denoted by r(O; S).
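As an illustration of these two definitions, the following is a minimal Python sketch (our own encoding, not from the paper): states, preconditions, delete lists and add lists are modelled as sets of ground literals, and r computes the result of a sequential execution.

# Hedged sketch of Definitions 1-2; names such as Op, possible and r are ours.
class Op:
    def __init__(self, name, pre, delete, add):
        self.name = name
        self.pre, self.delete, self.add = set(pre), set(delete), set(add)

def possible(seq, state):
    # the sequence a_1, ..., a_n is possible from 'state' if P_{a_i} is
    # contained in S_{i-1} for every i (Definition 1)
    s = set(state)
    for op in seq:
        if not op.pre <= s:
            return False
        s = (s - op.delete) | op.add
    return True

def r(seq, state):
    # r(a_1, ..., a_n; S): the state reached by executing the sequence from S
    s = set(state)
    for op in seq:
        assert op.pre <= s, "sequence is not possible from the given state"
        s = (s - op.delete) | op.add
    return s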

A plan-like relation is a binary relation over {α_1, α_2, ..., α_n} such that its transitive closure is a partial order (i.e. it forms a directed acyclic graph (DAG)). Of course, not every plan-like relation is a plan for some pair of states. Below, we give the definitions of plans, levels relations, σ-possible, and compressed sets.
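Since a plan-like relation is just a DAG over the operator symbols, the property can be checked with ordinary in-degree peeling. The sketch below is our own helper (not from the paper) and uses, for illustration, the edge set of the plan-like relation D of Figure 1(b) as read off from the adjacency lists in Example 4.

def is_plan_like(vertices, relation):
    # Kahn-style peeling: the relation is plan-like (its transitive closure is
    # a partial order) exactly when the directed graph it defines is acyclic.
    vertices = list(vertices)
    succ = {v: [] for v in vertices}
    indeg = {v: 0 for v in vertices}
    for u, v in relation:
        succ[u].append(v)
        indeg[v] += 1
    queue = [v for v in vertices if indeg[v] == 0]
    removed = 0
    while queue:
        u = queue.pop()
        removed += 1
        for w in succ[u]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    return removed == len(vertices)

print(is_plan_like(range(1, 7), [(1, 4), (2, 4), (2, 5), (3, 6), (4, 6)]))  # True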

Definition 3 Let D be a plan-like relation and let S be a state. We say that D is possible from S if every topological sequence of D is possible from S, and for each two topological sequences β = (β_1, β_2, ..., β_n) and γ = (γ_1, γ_2, ..., γ_n) of D, r(β; S) = r(γ; S). We denote r(D; S) = r(β; S). We say that D is a plan for (T_I, T_G), where T_I and T_G are partial states, if for every state S such that T_I ⊆ S, D is possible from S and T_G ⊆ r(D; S).

Definition 4 Let O = {α_1, α_2, ..., α_n} be a set of symbols. We call a relation L a levels relation of O if it divides O into h, h ≤ n, sets (or levels) O_1, O_2, ..., O_h such that
(1) for each h', 1 ≤ h' ≤ h − 1, if α_i ∈ O_{h'} and α_j ∈ O_{h'+1} then (α_i, α_j) ∈ L;
(2) for arbitrary (α_i, α_j) ∈ L, there is h', 1 ≤ h' ≤ h − 1, such that α_i ∈ O_{h'} and α_j ∈ O_{h'+1}.

Definition 5 Let Π = <α_1, α_2, ..., α_n> be a plan, S be a state and O = {α_1, α_2, ..., α_n}. Suppose L is a levels relation of O. We call L σ-possible from S if O_1 is possible from S, O_2 is possible from r(O_1; S), O_3 is possible from r(O_2; r(O_1; S)), and so on.

Example 1. Assume that A is some object, and the agents are capable of executing the operations spray_blue(A) and spray_white(A) (possibly simultaneously). The operation spray_blue(A) is represented by the sets P_blue = {A is hanging properly}, D_blue = {white(A)}, and A_blue = {blue(A)}, and spray_white(A) is represented similarly. Moreover, assume that A is hanging properly in the current state S. Then the levels relation ({spray_blue(A), spray_white(A)}, {spray_blue(A)}) is possible from S, but not σ-possible from S. The final state (namely blue(A)) is guaranteed regardless of A's colour after the first level, so the levels relation is possible from S. However, since the state that results from the first level is uncertain, the levels relation is not σ-possible from S.

If L is σ-possible from S, the result of executing L can be denoted by r(O_h; ...; r(O_1; S)...) or r(L; S). The relationships between operations can be described by a directed acyclic graph (or a plan-like relation) [7]. Given a directed acyclic graph D, the set of levels relations LEVELS_D is defined as follows:

LEVELS_D = {L | L is a levels relation on D and, for each α_i and α_j such that (α_i, α_j) ∈ D, α_i is in a level lower than α_j}.

Definition 6 Let D be a plan-like relation and S be a state. We say that D is strictly possible from S if for each L ∈ LEVELS_D, L is σ-possible from S.

From the above definitions, we can see that in order to prove that D is strictly possible from S we need to prove that each levels relation L is σ-possible from S. In order to give a quick method to verify whether D is strictly possible, Katz and Rosenschein introduced the concept of the compressed set of LEVELS_D, and they proved that each levels relation is σ-possible from S if and only if each levels relation in the compressed set is σ-possible from S.
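The distinction drawn in Example 1 can be checked mechanically. The sketch below is our encoding (the paper gives no code): it treats a level as possible from a state when every ordering of its operators is applicable and all orderings agree on the resulting state, which is a brute-force stand-in for the semantic Definition 2, and then applies Definition 5 level by level.

from itertools import permutations

# each operator maps to (precondition list, delete list, add list) as sets
ops = {"spray_blue(A)":  ({"hanging(A)"}, {"white(A)"}, {"blue(A)"}),
       "spray_white(A)": ({"hanging(A)"}, {"blue(A)"},  {"white(A)"})}

def apply_seq(seq, state):
    s = set(state)
    for a in seq:
        pre, dele, add = ops[a]
        if not pre <= s:
            return None                    # some precondition fails
        s = (s - dele) | add
    return s

def level_result(level, state):
    # a level is taken to be possible from 'state' when every ordering of its
    # operators is applicable and all orderings reach the same state
    outcomes = set()
    for p in permutations(sorted(level)):
        out = apply_seq(p, state)
        if out is None:
            return None
        outcomes.add(frozenset(out))
    return set(outcomes.pop()) if len(outcomes) == 1 else None

def sigma_possible(levels, state):
    # Definition 5: O_1 possible from S, O_2 possible from r(O_1; S), ...
    s = set(state)
    for level in levels:
        s = level_result(level, s)
        if s is None:
            return None
    return s

S = {"hanging(A)", "white(A)"}
L = ({"spray_blue(A)", "spray_white(A)"}, {"spray_blue(A)"})
print(sigma_possible(L, S))   # None: the first level's outcome depends on the
                              # order in which the two sprays finish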

Definition 7 Let LEVELS ⊆ LEVELS_D. The union ∪_{L∈LEVELS} LEVELS_L is the set of levels relations induced by LEVELS, which is denoted by I(LEVELS). LEVELS is then complete if LEVELS = I(LEVELS).

Theorem 1 (Katz and Rosenschein, 1993) Let D be a plan-like relation and S be a state. Let LEVELS be a set of levels relations over D's base such that I(LEVELS) ⊇ LEVELS_D. If for each L ∈ LEVELS, L is σ-possible from S, then D is strictly possible from S and r(D; S) = r(L'; S), where L' is any element of LEVELS_D.

For each plan-like relation D, the compressed set of LEVELS_D is defined as follows:

C(LEVELS_D) = {L | L ∈ LEVELS_D, L = (O_1, ..., O_h), and there is no h', 1 ≤ h' ≤ h − 1, such that for each α_i ∈ O_{h'} and α_j ∈ O_{h'+1}, α_i and α_j are incomparable in D}.

Theorem 2 (Katz and Rosenschein, 1993) Let D be a plan-like relation. Then
a. I(C(LEVELS_D)) = LEVELS_D.
b. C(LEVELS_D) is the smallest set among the collection of sets of levels relations on D whose induced set is equal to LEVELS_D.

Example 2. For the two plan-like relations shown in Figure 1, we have

LEVELS_G = {({1}, {2}, {3}), ({1}, {3}, {2}), ({2}, {1}, {3}), ({1, 2}, {3}), ({1}, {2, 3})},
C(LEVELS_G) = {({1, 2}, {3}), ({1}, {2, 3})}.

For the plan-like relation D, |C(LEVELS_D)| = 7 while |LEVELS_D| = 123.

Fig. 1. The plan-like relations: (a) plan-like relation G (vertices 1, 2, 3; edge (1, 3)); (b) plan-like relation D (vertices 1, ..., 6; edges (1, 4), (2, 4), (2, 5), (3, 6), (4, 6)).
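For small plan-like relations, Example 2 can be reproduced by brute force. The following sketch is our code (D's edge set is read off from the adjacency lists in Example 4): it enumerates LEVELS_D directly from Definition 4 and filters it with the incomparability condition that defines C(LEVELS_D).

from itertools import combinations

def transitive_closure(edges):
    reach = set(edges)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(reach):
            for (c, d) in list(reach):
                if b == c and (a, d) not in reach:
                    reach.add((a, d))
                    changed = True
    return reach

def levels_relations(vertices, edges):
    # all levels relations of D: ordered partitions of the vertex set in which
    # every edge goes from a strictly lower level to a strictly higher one
    out = []
    def extend(remaining, partial):
        if not remaining:
            level = {v: k for k, block in enumerate(partial) for v in block}
            if all(level[u] < level[v] for u, v in edges):
                out.append(tuple(partial))
            return
        rem = sorted(remaining)
        for r in range(1, len(rem) + 1):
            for block in combinations(rem, r):
                extend(remaining - set(block), partial + [frozenset(block)])
    extend(frozenset(vertices), [])
    return out

def compressed(edges, levels):
    # C(LEVELS_D): discard L = (O_1, ..., O_h) if some pair of consecutive
    # levels is entirely incomparable in D
    reach = transitive_closure(edges)
    def keep(L):
        return all(any((i, j) in reach or (j, i) in reach for i in a for j in b)
                   for a, b in zip(L, L[1:]))
    return [L for L in levels if keep(L)]

G_edges = [(1, 3)]
D_edges = [(1, 4), (2, 4), (2, 5), (3, 6), (4, 6)]
LG = levels_relations({1, 2, 3}, G_edges)
LD = levels_relations(set(range(1, 7)), D_edges)
print(len(LG), len(compressed(G_edges, LG)))   # 5 and 2, as in Example 2
print(len(LD), len(compressed(D_edges, LD)))   # 123 and 7, as in Example 2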

3 The algorithms for plan verification

Compressed sets play a central role in plan verification. In this section, we first introduce Katz and Rosenschein's algorithm for generating C(LEVELS_D), and then give our improved algorithm.

3.1 Katz and Rosenschein's algorithm

In this subsection we give the algorithm designed by Katz and Rosenschein for generating C(LEVELS_D). This algorithm uses an adjacency matrix A_D to represent the directed graph D. The main component of the algorithm is the recursive procedure gen_tree, which is called from outside for each possible first level, i.e. for subsets of ENABLED(A_D). Here, we use two equivalent procedures instead of the three procedures given by Katz and Rosenschein.

Algorithm A1(D, n)
1. C(LEVELS_D) ← ∅;
2. IF empty(D) THEN ( C(LEVELS_D) ← {({1, 2, ..., n})}; RETURN );
3. compute A_D; compute ENABLED(A_D);
4. FOR each S, S ⊆ ENABLED(A_D), S ≠ ∅ DO
   ( compute A_D^S; compute ENABLED(A_D^S);
     NEW2 ← ENABLED(A_D^S) − ENABLED(A_D);
     IF NEW2 ≠ ∅ THEN gen_tree(S, {1, ..., n} − S, A_D^S, ENABLED(A_D^S), NEW2, nil) ) □

Algorithm gen_tree(LEVEL, REMAINING, A, ENABLED, NEW1, address_of_father)
1. generate a node whose address is P, whose value is LEVEL and whose pointer is set to address_of_father;
2. IF ENABLED = REMAINING THEN ( generate a node whose value is ENABLED and whose pointer is set to P; RETURN );
3. FOR each S, S ⊆ ENABLED, S ∩ NEW1 ≠ ∅ DO
   ( compute A^S; compute ENABLED(A^S);
     NEW2 ← ENABLED(A^S) − ENABLED;
     IF NEW2 ≠ ∅ THEN gen_tree(S, REMAINING − S, A^S, ENABLED(A^S), NEW2, P) ) □

In the above algorithms, Katz and Rosenschein use ENABLED(A_D) to denote the set of vertices with in-degree 0 in A_D, A_D^S to denote the matrix obtained from A_D by erasing the rows and columns indexed by S, and A^S to denote the matrix obtained from the current adjacency matrix by erasing the rows and columns indexed by S. Although the above algorithms generate the compressed set, each recursion has to compute some useless A^S and ENABLED(A^S), and needs space to store these A^S matrices as well (see the examples in Section 4.2).
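For concreteness, here is a small Python rendering (ours, not the authors') of the two matrix primitives used by Algorithm A1, applied to the adjacency matrix A_D of D that appears in Example 3 below.

def enabled(A, labels):
    # ENABLED(A): vertices with in-degree 0, i.e. labels of all-zero columns
    n = len(labels)
    return {labels[j] for j in range(n) if all(A[i][j] == 0 for i in range(n))}

def erase(A, labels, S):
    # A^S: erase the rows and columns indexed by the vertices in S
    keep = [i for i, v in enumerate(labels) if v not in S]
    return [[A[i][j] for j in keep] for i in keep], [labels[i] for i in keep]

A_D = [[0, 0, 0, 1, 0, 0],       # adjacency matrix of D (vertices 1..6)
       [0, 0, 0, 1, 1, 0],
       [0, 0, 0, 0, 0, 1],
       [0, 0, 0, 0, 0, 1],
       [0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0]]
labels = [1, 2, 3, 4, 5, 6]
print(enabled(A_D, labels))          # {1, 2, 3}
A, lab = erase(A_D, labels, {1, 2})
print(enabled(A, lab))               # {3, 4, 5}, so NEW2 = {4, 5}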

3.2 Our algorithm

In this subsection we give a method that avoids this useless computation. Unlike the above method, we use an adjacency list and its inverse adjacency list to represent the directed graph D. In the following algorithm, we first select a subset of ENABLED, the set of current vertices with in-degree 0, whose elements send edges to some vertices of the set of vertices with in-degree 0 on the next level. For this purpose we introduce two one-dimensional arrays to decide which subsets are necessary for the compressed set. Then we recursively call this process.

Algorithm V(D, HEAD1, HEAD2, n)
1. compute the in-degrees INDEG[i] (i = 1, ..., n);
   FOR i = 1 TO n DO MARK[i] ← 0;
2. compute the set ENABLED of vertices with in-degree 0;
   IF ENABLED = {1, 2, ..., n} THEN ( C(LEVELS_D) ← {({1, 2, ..., n})}; RETURN );
3. compute the set REM of vertices with in-degree 0 on the next level of ENABLED;
   select a subset H whose elements send edges to some nodes of REM
   /* the elements of H are subsets of ENABLED */;
4. FOR each S, S ⊆ ENABLED, such that S includes at least one element of H DO
   ( REM ← {b | H_b ∈ H and H_b ⊆ S};
     FOR each a ∈ S DO MARK[a] ← 1;
     gen(S, {1, 2, ..., n} − S, REM ∪ (ENABLED − S), REM, nil);
     FOR each a ∈ S DO MARK[a] ← 0 ) □

Algorithm gen(LEVEL, REMAINING, ENABL, NEW, address_of_father)
1. generate a node whose address is P, whose value is LEVEL and whose pointer is set to address_of_father;
2. IF ENABL = REMAINING THEN ( generate a node whose value is ENABL and whose pointer is set to P; RETURN );
   compute the set NEW1 of vertices with in-degree 0 on the next level of ENABL;
   select a subset H1 whose elements send edges to some nodes of NEW1
   /* the elements of H1 are subsets of ENABL */;
3. FOR each S, S ⊆ ENABL, S ∩ NEW ≠ ∅, and S includes at least one element of H1 DO
   ( NEW1 ← {b | H1_b ∈ H1 and H1_b ⊆ S};
     FOR each a ∈ S DO MARK[a] ← 1;
     gen(S, REMAINING − S, NEW1 ∪ (ENABL − S), NEW1, P);
     FOR each a ∈ S DO MARK[a] ← 0 ) □

The details of the high-level statements in the above procedures (computing the in-degrees, computing the vertices with in-degree 0, and selecting the subsets H and H1) can be found in the Appendix.
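For readers who prefer runnable code, the following Python sketch follows the same adjacency-list idea. It is our own rendering rather than the authors' pseudocode: helper names such as next_level, select_H and compressed_set are ours, and the bookkeeping that removes a selected level from INDEG and restores it on backtracking is made explicit. It returns the root-to-leaf paths of the generated forest, i.e. the elements of C(LEVELS_D).

from collections import defaultdict
from itertools import combinations

def compressed_set(n, edges):
    # vertices are 1..n, 'edges' lists the directed edges of the plan-like
    # relation D; returns the root-to-leaf paths of the generated forest
    succ, pred = defaultdict(list), defaultdict(list)
    for u, v in edges:
        succ[u].append(v)
        pred[v].append(u)
    indeg = {v: len(pred[v]) for v in range(1, n + 1)}   # INDEG
    mark = {v: False for v in range(1, n + 1)}           # MARK
    paths = []

    def next_level(current):
        # vertices whose in-degree would drop to 0 if 'current' were deleted
        # (INDEG is decremented and then restored, as in the Appendix)
        new = []
        for a in current:
            for b in succ[a]:
                indeg[b] -= 1
                if indeg[b] == 0:
                    new.append(b)
        for a in current:
            for b in succ[a]:
                indeg[b] += 1
        return new

    def select_H(new):
        # H_b: the unmarked predecessors of each newly enabled vertex b
        return {b: frozenset(x for x in pred[b] if not mark[x]) for b in new}

    def delete(level):
        for a in level:
            mark[a] = True
            for b in succ[a]:
                indeg[b] -= 1

    def restore(level):
        for a in level:
            mark[a] = False
            for b in succ[a]:
                indeg[b] += 1

    def subsets(s):
        s = sorted(s)
        for r in range(1, len(s) + 1):
            for c in combinations(s, r):
                yield set(c)

    def gen(level, remaining, enabl, new, path):
        path = path + [frozenset(level)]
        if set(enabl) == set(remaining):
            paths.append(tuple(path + [frozenset(enabl)]))
            return
        new1 = next_level(enabl)
        H1 = select_H(new1)
        for S in subsets(enabl):
            if not (S & set(new)):                       # S must meet NEW
                continue
            if not any(h <= S for h in H1.values()):     # S must contain some H1_b
                continue
            new_S = [b for b in new1 if H1[b] <= S]
            delete(S)
            gen(S, set(remaining) - S, new_S + [x for x in enabl if x not in S],
                new_S, path)
            restore(S)

    enabled = [v for v in range(1, n + 1) if indeg[v] == 0]
    if set(enabled) == set(range(1, n + 1)):
        return [(frozenset(enabled),)]
    rem = next_level(enabled)
    H = select_H(rem)
    for S in subsets(enabled):
        if not any(h <= S for h in H.values()):
            continue
        rem_S = [b for b in rem if H[b] <= S]
        delete(S)
        gen(S, set(range(1, n + 1)) - S, rem_S + [x for x in enabled if x not in S],
            rem_S, [])
        restore(S)
    return paths

For the plan-like relation D of Figure 1(b), compressed_set(6, [(1,4),(2,4),(2,5),(3,6),(4,6)]) should return the seven root-to-leaf paths listed with Figure 2.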

4 Analysis of the plan verification algorithms

In this section, we first prove the correctness of Algorithm V. Then we give some examples to explain how Algorithm V avoids generating useless sub-graphs. Finally, we prove that our algorithm is better than Katz and Rosenschein's algorithm in both time and space complexity.

4.1 The proof of correctness

Theorem 3 Suppose Algorithm V selects a set S_1 at step 4 and from level S_1 it recursively calls Algorithm gen; suppose Algorithm gen (at step 3) selects a set S_2, ..., and from level S_{r-1} it recursively calls Algorithm gen, which selects the level S_r. Then (S_1, ..., S_r) ∈ C(LEVELS_D). Conversely, if L ∈ C(LEVELS_D) and L = (O_1, ..., O_h), then L is a path of the forest generated by Algorithm V, where O_1 is the root node of one tree and O_h is a leaf node of that tree.

Proof. From step 4 of Algorithm V we have

S_1 ⊆ ENABLED, and S_1 includes at least one element of H.

When gen(S_1, {1, 2, ..., n} − S_1, REM ∪ (ENABLED − S_1), REM, nil) is called, if Algorithm gen selects S_2 at step 3, then we have

S_2 ⊆ ENABL, S_2 ∩ NEW ≠ ∅, and S_2 includes at least one element of H1.

From ENABL = REM ∪ (ENABLED − S_1) and NEW = REM we have S_2 ∩ REM ≠ ∅; suppose b ∈ S_2 ∩ REM. From the definition of H (see the Appendix) we know that deleting the vertices of S_1 generates new vertices with in-degree 0, so NEW = REM ≠ ∅. Because the vertices in NEW = REM are generated by deleting S_1, there exist a ∈ S_1 and b ∈ S_2 such that <a, b> is a directed edge of D. Similarly, S_{r'-1} and S_{r'} satisfy the above property for 3 ≤ r' ≤ r. It is easy to see from Algorithm V that (S_1, ..., S_r) ∈ LEVELS_D, hence (S_1, ..., S_r) ∈ C(LEVELS_D).

If (O_1, ..., O_h) ∈ C(LEVELS_D), then from the definition of the compressed set we know that the in-degree of each vertex in O_1 is zero, and there are a ∈ O_1 and b ∈ O_2 such that <a, b> is a directed edge of D. From (O_1, ..., O_h) ∈ C(LEVELS_D) we have (O_1, ..., O_h) ∈ LEVELS_D, so the in-degree of each vertex in O_2 becomes zero when O_1 is deleted, and Algorithm V can select O_1 at step 4. Similarly, after selecting O_1, Algorithm V can select O_2, and so on. After O_h is selected, Algorithm gen obtains ENABL = REMAINING, hence O_h is a leaf node. □
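Theorem 3 can also be spot-checked on small inputs by comparing the two sketches given earlier (this is our test, and it assumes that compressed_set and the brute-force helpers levels_relations and compressed from the previous sketches are in scope).

# sanity check of Theorem 3 on the running example D (small n only)
D_edges = [(1, 4), (2, 4), (2, 5), (3, 6), (4, 6)]
forest_paths = {tuple(p) for p in compressed_set(6, D_edges)}
brute = {tuple(L) for L in compressed(D_edges, levels_relations(set(range(1, 7)), D_edges))}
print(forest_paths == brute)   # expected: True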

4.2 Avoidance of generating useless subgraphs

In Algorithm V, a method for avoiding unnecessary computation is presented. The principle of this method is that, before selecting levels, the algorithm can decide which levels (namely those whose elements send edges to some vertices of the set of vertices with in-degree 0 on the next level) are necessary for generating the compressed set.

Example 3. For the plan-like relation D shown in Figure 1(b), the adjacency matrix A_D is as follows:

A_D =
  0 0 0 1 0 0
  0 0 0 1 1 0
  0 0 0 0 0 1
  0 0 0 0 0 1
  0 0 0 0 0 0
  0 0 0 0 0 0

In Algorithm A1, computing ENABLED(A_D) takes time O(n²) (here n = 6), storing the matrix A_D needs space O(n²), and the result of running Algorithm A1 is the forest shown in Figure 2.

Fig. 2. The forest generated by Algorithm A1 for D. Its four trees are rooted at {1,2,3}, {1,2}, {2,3} and {2}, and its seven root-to-leaf paths, ({1,2,3},{4,5},{6}), ({1,2,3},{4},{5,6}), ({1,2},{3,4,5},{6}), ({1,2},{3,4},{5,6}), ({2,3},{1,5},{4},{6}), ({2},{1,3,5},{4},{6}) and ({2},{1,5},{3,4},{6}), are the elements of C(LEVELS_D).

In Algorithm A1, ENABLED(A_D) is {1, 2, 3}, the FOR loop in step 4 is iterated 7 times, and 7 matrices are generated. For each of these matrices, the set of vertices with in-degree 0 is computed, but only 4 of the matrices are useful (the corresponding S being {1, 2, 3}, {1, 2}, {2, 3} and {2}). When Algorithm A1 selects S = {1, 2} in step 4, the matrix A_D^S = A_D^{{1,2}} is

A_D^{{1,2}} =
  0 0 0 1
  0 0 0 1
  0 0 0 0
  0 0 0 0

(rows and columns indexed by the vertices 3, 4, 5, 6), ENABLED(A_D^S) = {3, 4, 5}, and NEW2 = {4, 5} ≠ ∅. So Algorithm A1 calls the procedure

gen_tree({1, 2}, {3, 4, 5, 6}, A_D^{{1,2}}, {3, 4, 5}, {4, 5}, nil).

For these inputs, Algorithm gen_tree iterates the FOR loop in step 3 six times (corresponding to S = {3, 4, 5}, {4, 5}, {3, 4}, {3, 5}, {4} and {5}) and generates 6 matrices, but only {3, 4, 5} and {3, 4} are useful.

Example 4. For the plan-like relation D shown in Figure 1(b), its adjacency list HEAD1 and inverse adjacency list HEAD2 are shown in Figure 3.

Fig. 3. The adjacency list HEAD1 and the inverse adjacency list HEAD2 of D (HEAD1: 1: 4; 2: 4, 5; 3: 6; 4: 6; 5, 6: empty. HEAD2: 4: 1, 2; 5: 2; 6: 3, 4; 1, 2, 3: empty).

For Algorithm V, after it finishes step 3 we have

INDEG[1, 2, 3, 4, 5, 6] = [0, 0, 0, 2, 1, 2], ENABLED = {1, 2, 3}, REM = {4, 5}, and H = {{1, 2}, {2}}.

So step 4 only needs to perform 4 iterations of the FOR loop, with the corresponding S = {1, 2, 3}, {1, 2}, {2, 3} and {2}. When Algorithm V selects S = {1, 2} at step 4, we have

INDEG[1, 2, 3, 4, 5, 6] = [0, 0, 0, 0, 0, 1] and REM = {4, 5}.

So Algorithm V calls the procedure

gen({1, 2}, {3, 4, 5, 6}, {3, 4, 5}, {4, 5}, nil).

Before step 3 of this procedure, we have

ENABL = {3, 4, 5}, NEW = {4, 5}, MARK[1, 2, 3, 4, 5, 6] = [1, 1, 0, 0, 0, 0], NEW1 = {6}, and H1 = {{3, 4}}.

So step 3 only needs to perform 2 iterations of the loop, with the corresponding S = {3, 4, 5} and {3, 4}.

The above examples show that Algorithm V, unlike Algorithm A1, avoids computing and storing useless sub-graphs.
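The quantities quoted in Example 4 can be reproduced directly from the edge list; the following small check is ours.

from collections import defaultdict

edges = [(1, 4), (2, 4), (2, 5), (3, 6), (4, 6)]        # D of Figure 1(b)
succ, pred = defaultdict(list), defaultdict(list)
for u, v in edges:
    succ[u].append(v)
    pred[v].append(u)

indeg = {v: len(pred[v]) for v in range(1, 7)}
print([indeg[v] for v in range(1, 7)])                   # [0, 0, 0, 2, 1, 2]
enabled = {v for v in range(1, 7) if indeg[v] == 0}
print(enabled)                                            # {1, 2, 3}
# REM: vertices whose in-degree drops to 0 on the next level of ENABLED
rem = {b for b in range(1, 7) if 0 < indeg[b] == sum(a in enabled for a in pred[b])}
print(rem)                                                # {4, 5}
H = {b: set(pred[b]) for b in rem}                        # no vertex is marked yet
print(H)                                                  # {4: {1, 2}, 5: {2}}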

4.3 Performance Comparison

In this subsection we give some further quantitative, worst-case analyses.

Theorem 4 Let D be a plan-like relation, n be the number of vertices in D and e be the number of directed edges in D. If the first levels relation of the compressed set output by both Algorithm V and Algorithm A1 is (S_1, ..., S_r), then Algorithm V takes time O(n + e) to output (S_1, ..., S_r), while Algorithm A1 takes time ≥ O(n² + Σ_{i=1}^{r} (n − Σ_{j=1}^{i} |S_j|)²) to output (S_1, ..., S_r).

Proof. The first three steps of Algorithm V take time O(n + e) (see the Appendix). Selecting S in step 4 is based on H, so computing the set {b | H_b ∈ H and H_b ⊆ S} takes time O(|S|). During the process from selecting S_1 to outputting S_r, Algorithm V performs O(|S_1| + ... + |S_r|) operations on MARK; the step "compute the set NEW1 of vertices with in-degree 0 on the next level of ENABL" needs O(e) operations in total, and the step "select a subset H1 whose elements send edges to some nodes of NEW1" also needs O(e) operations in total. So, to output (S_1, ..., S_r), Algorithm V takes time O(n + e) + O(|S_1| + ... + |S_r|). Because (S_1, ..., S_r) is just a partition of the vertex set of D, Algorithm V takes time O(n + e) to output (S_1, ..., S_r).

The first three steps of Algorithm A1 take time O(n²). When Algorithm A1 selects S_1, step 4 computes the vertices with in-degree 0 in the matrix A_D^{S_1}; this needs at least time O((n − |S_1|)²) (notice that there may also be some useless computation here). Similarly, Algorithm A1 needs time O((n − (|S_1| + |S_2|))²) when it selects S_2, and so on. So Algorithm A1 takes time ≥ O(n² + Σ_{i=1}^{r} (n − Σ_{j=1}^{i} |S_j|)²) to output (S_1, ..., S_r). □

From the Appendix we can see that, over the whole run of Algorithm V, the process of "computing the vertices with in-degree 0 on the next level" (procedures (3) and (5)) and the process of "selecting a subset whose elements send edges to some nodes of a given set" (procedures (4) and (6)) mirror each other, so these two processes take the same time. For this reason, in the following we suppose that the basic operations do not include the latter process; this assumption does not affect the time complexity of the algorithm.

Theorem 5 Suppose Algorithm V and Algorithm A1 form the same forest. Let S_0 be a node of the forest, and (S_0, S_1, ..., S_t) (t ≥ 0) be the path from S_0 to its root. Suppose both algorithms have generated the node S_0. Then Algorithm V takes time O((n − Σ_{i=0}^{t} |S_i|) + e') to generate the first child of S_0, and Algorithm A1 needs time ≥ O((n − Σ_{i=0}^{t} |S_i|)²) to generate the first child of S_0, where e' is the number of edges in the graph A_D^{S_0 ∪ ... ∪ S_t}.

Proof. From the assumption we know that the path (S_0, S_1, ..., S_t), read from the root S_t down to S_0, is the front part of a levels relation of D. This means that no sub-tree of node S_0 contains any vertex of S_0 ∪ S_1 ∪ ... ∪ S_t. So, from the moment S_0 is generated to the moment its first child is formed, the step "compute the vertices with in-degree 0 on the next level" takes time O(e'), where e' is the number of edges of the remaining graph A_D^{S_0 ∪ ... ∪ S_t}. In addition, in this period, the number of assignments to the array MARK is the size of the first child of S_0. So Algorithm V takes time O((n − Σ_{i=0}^{t} |S_i|) + e') in this period.

For Algorithm A1, the time used for computing the vertices with in-degree 0 in this period is (n − Σ_{i=0}^{t} |S_i|)², because the matrix used is A_D^{S_0 ∪ ... ∪ S_t}. Furthermore, computing NEW2 needs some time, and Algorithm A1 also takes some time to deal with the useless matrices before computing A_D^{S_0 ∪ ... ∪ S_t}. So Algorithm A1 needs at least time O((n − Σ_{i=0}^{t} |S_i|)²) in this period. □

If e is the number of directed edges of D and n is the number of vertices, then O(e) ≤ O(n²). From the above two theorems it is easy to see that our algorithm is better than Katz and Rosenschein's in time complexity. The following theorem shows that Algorithm V is also better than Katz and Rosenschein's in space complexity.

Theorem 6 Let D be a plan-like relation, n be the number of vertices of D, and e be the number of directed edges of D. Then the space complexity of Algorithm V is O(n + e + Σ_{i=1}^{m} |S_i|), and the space complexity of Algorithm A1 is ≥ O(n² + Σ_{i=1}^{m} |S_i| + Σ_{i=1}^{m} |A^{S_i}|²), where S_1, ..., S_m are all the nodes of the forest formed by the compressed set of D, and the matrices A^{S_i} are as defined in Algorithm gen_tree.

Proof. For Algorithm V, the adjacency list and inverse adjacency list of D together need space O(n + e). The arrays MARK and INDEG and the variables each require space O(n). Because each element H_b of the set H relates to at most one S ∈ {S_1, ..., S_m} at a level, and |H_b| ≤ |S| (and similarly for H1), Algorithm V takes at most space O(Σ_{i=1}^{m} |S_i|) to represent all the sets H and H1. The resulting forest needs only space Σ_{i=1}^{m} |S_i|. So the space complexity of Algorithm V is O(n + e + Σ_{i=1}^{m} |S_i|).

For Algorithm A1, storing the adjacency matrix A_D takes space O(n²), and the variables need space O(n). Because Algorithm A1 has to store the matrix A^S for every element S of {S_1, ..., S_m}, as well as some useless sub-graphs, it requires at least space Σ_{i=1}^{m} |A^{S_i}|² to do so. In addition, Algorithm A1 takes space Σ_{i=1}^{m} |S_i| to store the resulting forest. So the space complexity of Algorithm A1 is ≥ O(n² + Σ_{i=1}^{m} |S_i| + Σ_{i=1}^{m} |A^{S_i}|²). □
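As a toy numerical illustration of the Theorem 4 bounds (our arithmetic, using the running example D and the root-to-leaf path ({1,2,3},{4,5},{6}) of Figure 2 as the first levels relation):

n, e = 6, 5                          # vertices and edges of D
sizes = [3, 2, 1]                    # |S_1|, |S_2|, |S_3| for ({1,2,3},{4,5},{6})
bound_V = n + e                      # the O(n + e) term of Theorem 4
bound_A1 = n**2 + sum((n - sum(sizes[:i + 1]))**2 for i in range(len(sizes)))
print(bound_V, bound_A1)             # 11 versus 36 + 9 + 1 + 0 = 46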

5 Summary

In this paper, we have proposed an algorithm that improves Katz and Rosenschein's plan verification algorithm. We use adjacency lists and inverse adjacency lists to represent the plan-like relation D, and use a supplementary array to control the generation of sub-graphs, and hence present a method that avoids generating useless sub-graphs. The new algorithm is better than Katz and Rosenschein's algorithm in both time and space complexity.

Plan verification in multiple agent systems is a basic problem. The concept of compressed sets can describe all the necessary levels relations when a plan is to be executed by several agents. When a plan D is passed to the supervisor, the supervisor sends the agents instructions according to the following rule [8]:

Sending rule: The supervisor may send the instruction α_j, 1 ≤ j ≤ n, to an agent x if
a. for each α_i, 1 ≤ i ≤ n, such that (α_i, α_j) ∈ D, the supervisor has already sent the instruction α_i and received the appropriate "finished" message;
b. x is free, i.e. the supervisor has not sent x an instruction since it last received a "finished" message from x.

In order to decrease the amount of communication between the supervisor and the agents, the supervisor can send several operations to one agent at the same time rather than one by one [11]. The problem is then what the suitable structures among operations are for this purpose. We think that compressed sets may be used to describe these structures.
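A schematic rendering of the sending rule as a supervisor loop might look as follows; send and receive_finished are hypothetical callbacks, since the paper gives no code for this part.

def supervise(n, D_edges, agents, send, receive_finished):
    # hypothetical helpers: send(x, j) hands instruction j to agent x, and
    # receive_finished() blocks until some agent reports "finished" and
    # returns that agent's identity
    preds = {j: {i for (i, k) in D_edges if k == j} for j in range(1, n + 1)}
    finished, pending, free = set(), {}, set(agents)
    while len(finished) < n:
        # rule (a): an instruction is enabled once all its predecessors finished
        enabled = [j for j in range(1, n + 1)
                   if j not in finished and j not in pending.values()
                   and preds[j] <= finished]
        # rule (b): only free agents may receive a new instruction
        while enabled and free:
            x, j = free.pop(), enabled.pop()
            send(x, j)
            pending[x] = j
        x = receive_finished()
        finished.add(pending.pop(x))
        free.add(x)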

6 Acknowledgement

This research is partially supported by a University of New England research grant and partially supported by a large grant from the Australian Research Council (A49530850).

References
1. R. I. Brafman and M. Tennenholtz, Modeling agents as qualitative decision makers, Artificial Intelligence, 1997, 94: 217-268.
2. E. H. Durfee, V. R. Lesser and D. D. Corkill, Trends in cooperative distributed problem solving, IEEE Trans. Knowl. Data Eng., 1989, 1: 63-83.
3. E. H. Durfee, V. R. Lesser and D. D. Corkill, Cooperation through communication in a distributed problem solving network. In M. N. Huhns (ed.) Distributed Artificial Intelligence (Los Altos, California, USA: Morgan Kaufmann, Inc.), 1987, Ch. 2, 29-58.
4. E. Ephrati, M. E. Pollack and J. S. Rosenschein, A tractable heuristic that maximizes global utility through local plan combination, in Proceedings of ICMAS, 1995, 94-101.
5. E. Ephrati and J. S. Rosenschein, The Clarke Tax as a consensus mechanism among automated agents, in Proceedings of AAAI, 1991, 173-178.
6. E. Ephrati and J. S. Rosenschein, Multi-agent planning as search for a consensus that maximizes social welfare, in Proceedings of the 4th European Workshop on Modelling Autonomous Agents in a Multi-Agent World, 1992, Chapter 3.
7. M. J. Katz and J. S. Rosenschein, Plans for multiple agents. In L. Gasser and M. N. Huhns (eds.) Distributed Artificial Intelligence, Volume II (Pitman/Morgan Kaufmann, Inc., London), 1989, 197-228.
8. M. J. Katz and J. S. Rosenschein, Verifying plans for multiple agents, J. Experimental & Theoretical Artificial Intelligence, 1993, 5: 39-56.
9. N. A. Khan and R. Jain, Uncertainty management in a distributed knowledge based system, in Proceedings of IJCAI, Los Angeles, CA, 1985, 318-320.
10. V. R. Lesser and D. D. Corkill, The distributed vehicle monitoring testbed, AI Magazine, 1983, 4: 63-109.
11. Y. Li and D. Liu, Task decomposition model in distributed expert systems, in Proceedings of the 9th International Conference on Computer-Aided Production Engineering, 1993, 259-264.
12. Y. Li and D. Liu, The method of verifying plans for multiple agents, Chinese J. of Computers, 1996, 19(3): 202-207.
13. E. P. D. Pednault, Formulating multiagent, dynamic-world problems in the classical planning framework. In M. P. Georgeff and A. L. Lansky (eds.) Reasoning About Actions and Plans (Los Altos, California, USA: Morgan Kaufmann, Inc.), 1987, 47-82.
14. M. Pollack, The uses of plans, Artificial Intelligence, 1992, 57(1): 43-68.
15. J. S. Rosenschein, Synchronization of multi-agent plans, in Proceedings of AAAI, 1982, 115-119.
16. J. S. Rosenschein and M. R. Genesereth, Deals among rational agents, in Proceedings of IJCAI, 1985, 91-95.
17. R. G. Smith, The contract net protocol: High-level communication and control in a distributed problem solver, IEEE Transactions on Computers, 1980, C-29(12): 1104-1113.
18. D. S. Weld, An introduction to least commitment planning, AI Magazine, 1994, 15(4): 27-61.
19. C. Zhang, Cooperation under uncertainty in distributed expert systems, Artificial Intelligence, 1992, 56(1): 21-69.
20. C. Zhang and D. A. Bell, HECODES: a framework for heterogeneous cooperative distributed expert systems, Int. J. Data Knowl. Eng., 1991, 6(3): 251-273.
21. G. Zlotkin and J. S. Rosenschein, Cooperation and conflict resolution via negotiation among autonomous agents in noncooperative domains, IEEE Transactions on Systems, Man and Cybernetics, 1991, 21(6): 1317-1324.
22. G. Zlotkin and J. S. Rosenschein, Incomplete information and deception in multi-agent negotiation, in Proceedings of IJCAI, 1991, 225-231.

Appendix

Suppose the set of vertices of the directed graph D is {1, 2, ..., n}. The head arrays of the adjacency list and of its inverse adjacency list are HEAD1 and HEAD2, respectively, and each node in the two lists has the structure (vertex, link), where the link field is the pointer field and the value of the vertex field is the number of the head of the directed edge represented by the node.

(1) compute the in-degrees INDEG[i] (i = 1, ..., n);
  FOR i = 1 TO n DO INDEG[i] ← 0;
  FOR i = 1 TO n DO
    ( P ← HEAD2[i];
      WHILE P ≠ nil DO ( INDEG[i] ← INDEG[i] + 1; P ← P↑.link ) );

(2) compute the set ENABLED of vertices with in-degree 0;
  ENABLED ← ∅;
  FOR i = 1 TO n DO IF INDEG[i] = 0 THEN ENABLED ← ENABLED ∪ {i};

(3) compute the set REM of vertices with in-degree 0 on the next level of ENABLED;
  REM ← ∅;
  FOR each a ∈ ENABLED DO /* selecting the vertices with in-degree 0 on the next level */
    ( P ← HEAD1[a];
      WHILE P ≠ nil DO
        ( i ← P↑.vertex; INDEG[i] ← INDEG[i] − 1;
          IF INDEG[i] = 0 THEN REM ← REM ∪ {P↑.vertex};
          P ← P↑.link ) );
  FOR each a ∈ ENABLED DO /* restoring the array INDEG */
    ( P ← HEAD1[a];
      WHILE P ≠ nil DO ( i ← P↑.vertex; INDEG[i] ← INDEG[i] + 1; P ← P↑.link ) );

(4) select a subset H whose elements send edges to some nodes of REM
  /* the elements of H are subsets of ENABLED */;
  H ← ∅;
  FOR each b ∈ REM DO /* finding the subset which relates to each vertex of REM */
    ( P ← HEAD2[b]; H_b ← ∅;
      WHILE P ≠ nil DO
        ( IF MARK[P↑.vertex] = 0 THEN H_b ← H_b ∪ {P↑.vertex};
          P ← P↑.link ) );
  FOR each b ∈ REM DO H ← H ∪ {H_b};

(5) compute the set NEW1 of vertices with in-degree 0 on the next level of ENABL;
  /* this procedure is similar to procedure (3) */
  NEW1 ← ∅;
  FOR each a ∈ ENABL DO /* selecting the vertices with in-degree 0 on the next level */
    ( P ← HEAD1[a];
      WHILE P ≠ nil DO
        ( i ← P↑.vertex; INDEG[i] ← INDEG[i] − 1;
          IF INDEG[i] = 0 THEN NEW1 ← NEW1 ∪ {P↑.vertex};
          P ← P↑.link ) );
  FOR each a ∈ ENABL DO /* restoring the array INDEG */
    ( P ← HEAD1[a];
      WHILE P ≠ nil DO ( i ← P↑.vertex; INDEG[i] ← INDEG[i] + 1; P ← P↑.link ) );

(6) select a subset H1 whose elements send edges to some nodes of NEW1
  /* the elements of H1 are subsets of ENABL */;
  /* this procedure is similar to procedure (4) */
  H1 ← ∅;
  FOR each b ∈ NEW1 DO /* finding the subset which relates to each vertex of NEW1 */
    ( P ← HEAD2[b]; H1_b ← ∅;
      WHILE P ≠ nil DO
        ( IF MARK[P↑.vertex] = 0 THEN H1_b ← H1_b ∪ {P↑.vertex};
          P ← P↑.link ) );
  FOR each b ∈ NEW1 DO H1 ← H1 ∪ {H1_b};
