Collective Memory Search

Thomas Haynes
Department of Mathematical & Computer Sciences
The University of Tulsa
600 South College Ave.
Tulsa, OK 74104-3189
e-mail: [email protected]

Abstract

Collective action has been examined to expedite search in optimization problems [Dorigo et al., 1996]. Collective memory has been applied to learning in multiagent systems [Garland and Alterman, 1996]. We integrate the simplicity of collective action with the pattern detection of collective memory to significantly improve both the gathering and processing of knowledge. We investigate the augmentation of distributed search in genetic programming based systems with collective memory. Four models of collective memory search are defined based on the interaction of the search agents and the process agents which manipulate the collective memory. We present implementations of two of the collective memory search models and further show how collective memory search facilitates "scaling up" a problem domain. An Active-Passive model, which gathers results from the independent searchers, is examined and found to provide a springboard from which search agents can extend their exploration. A Passive-Active model, in which the gathered results are collated, is employed by the process agents to piece together the solution from the parts collected by the search agents.

1 Introduction

A computational agent society can exhibit collective behavior in two dimensions: action and memory. Collective action can be defined as the complex interaction that arises out of the sum of simpler actions by the agents [Dorigo, 1992, Dorigo et al., 1996]. These simpler actions reflect a computational bound on either the reasoning power or memory storage of the individual agent. Such bounds are caused by the combinatorial explosion found in either search or optimization of the class of NP complete problems [Garey and Johnson, 1979]. Collective memory can be defined as the combined knowledge gained by the interaction of the agents with both themselves and their environment [Garland and Alterman, 1995, Garland and Alterman, 1996].

In this paper, we combine the raw power of collective action with the expressiveness of collective memory to enhance a distributed search process. While our collective memory search integrates the action and memory aspects of earlier research, it is more than just the combination of the two parts. Our research differs from the collective action research of Dorigo, Maniezzo, and Colorni in that agents need not communicate and a central agent can process the gathered knowledge. Our work differs from the collective memory research of Garland and Alterman in that agents need not necessarily learn via the collective memory, agents need neither interact nor communicate, and the memory is centralized. The integration of action and memory leads to a distributed society of search agents which can interact via collective memory. The collective memory allows for either communication among the agents or for a centralized search of the gathered knowledge.

In this paper, we consider simple computational search agents, which are chromosomes in a genetic programming (GP) [Koza, 1992] population. Genetic algorithms (GA) [Holland, 1975] are a class of distributed search algorithms inspired by biological evolutionary adaptation. GAs are used for parameter optimization, process control, learning classifier systems, etc. [Davis, 1991, Goldberg, 1989]. Genetic programming is an offshoot of GAs, and is typically used in the automatic induction of programs. Both GA and GP represent search strategies in a population of chromosomes. Each chromosome in the population can be searching different parts of the search space or fitness landscape. Each chromosome can be considered to be a behavioral strategy to control an agent [Haynes et al., 1995]. The agents, i.e. chromosomes, are considered to be autonomous in the sense that they do not typically interact to find a solution. They can be considered to be implicitly cooperative since the more fit chromosomes of generation G_i are more likely to contribute genetic material to the chromosomes in generation G_{i+1}.

Each chromosome is evaluated by a fitness function, which maps the chromosome representation into a given problem domain. The evaluation of one chromosome typically is independent of all others. A notable exception arises in genetic-based machine learning (GBML) systems; both rules and rulesets must be maintained. In the "Michigan approach" each chromosome is a rule and the population as a whole is the ruleset. In the "Pitt approach" each chromosome is a ruleset, being comprised of multiple rules [DeJong, 1990].

In this paper, we investigate the addition of collective memory to GP-based learning systems. We allow the explicit reuse of knowledge from one generation to the next¹. We first illustrate how agents are able to build off of the collective prior problem solving experience. Next we show how the knowledge retrieved by the agents can be integrated to form a whole greater than the parts.

The rest of the paper is organized as follows: Section 2 defines the four models of collective memory. Section 3 is an overview of the basics of genetic-based computational search systems. Section 4 illustrates how collective memory is utilized to facilitate search in a theorem proving domain. Section 5 reveals how collective memory can be utilized to improve search in a clique detection domain. Section 6 presents some experiments in implementing collective memory to scale up the clique detector. Section 7 discusses some of the ramifications and drawbacks of collective memory within genetic programming. Section 8 concludes our discourse on the applicability of collective memory in distributed search. Section 9 examines further avenues of research in utilizing collective memory in distributed search.

2 Collective Memory

Garland and Alterman present a distributed collective memory in their research: agents manipulate their own slice of the collective memory [Garland and Alterman, 1995, Garland and Alterman, 1996]. We present a centralized collective memory, which is a knowledge repository, not local to the agents. As agents gather knowledge, they deposit it into the collective memory. Agents can have read, write, and delete privileges. The write action cannot overwrite. We define:

Search agents are those agents which retrieve knowledge from the search space. They have write privileges, do not have delete privileges, and may or may not have read privileges.

Collective memory is an area where the raw information retrieved by the search agents can be stored.

Process agents are those agents which collate and process the collective memory. They can have combinations of the different privileges. Collation is a delete operation and could also be considered a reordering operation, which is a combination of read 1, read 2, delete 1, delete 2, integrate 1 and 2 into A, and write A. Processing can also encompass all three actions. Process agents cannot directly manipulate the search space: they must direct the search agents in order to sense and manipulate the search space. The search agents can neither manipulate the collective memory nor direct other search agents. Furthermore, a search agent cannot direct itself; once it has been assigned a task, it continues executing it until redirected by a process agent.

The interactions of both the process and search agents with the collective memory form two orthogonal dimensions of access. Both dimensions can take on one of two discrete values: passive and active. Passive agents do not retrieve knowledge from the collective memory, while active agents can retrieve knowledge. We adopt the notation of referencing a tuple in these dimensions by Interactivity-Processing, where Interactivity denotes the state of the search agents and Processing denotes the state of the process agents. The four models of collective memory are:

Active-Passive This collective memory is interactively accessed by the independent search agents. The agents gather knowledge and deposit it into the collective memory. Before a new search is started, or even during the search process, a search agent can retrieve and utilize knowledge from the collective memory to guide and shape the search. The theorem prover presented in Section 4 is an example of this model.

Active-Active This collective memory is interactively accessed by the independent search agents. The process agents can manipulate the memory, and thus the search agents. Blackboard technology falls into this model [Fennell and Lesser, 1977].

Passive-Passive This form of collective memory is actually no collective memory at all. The family of genetic algorithms falls into this model.

Passive-Active This collective memory does not interact with the search agents. The agents still gather and deposit knowledge into the collective memory, but they cannot retrieve knowledge from it. The collective memory is a repository from which process agents can manipulate the knowledge. The clique detector presented in Section 5 is an example of this model.

¹ As argued above, selection allows for the implicit reuse of knowledge.
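A minimal sketch of this access discipline may help fix the definitions; the class and method names below are ours, not the paper's. Search agents hold a write privilege that cannot overwrite; process agents read, delete, and integrate entries, which is exactly the collation operation described above.

```python
class CollectiveMemory:
    """Centralized knowledge repository, not local to any agent."""
    def __init__(self):
        self._entries = {}                 # key -> knowledge item

    def write(self, key, item):
        # The write action cannot overwrite: duplicate keys are rejected.
        if key not in self._entries:
            self._entries[key] = item

    def read(self, key):
        return self._entries.get(key)

    def delete(self, key):
        self._entries.pop(key, None)


class SearchAgent:
    """Write privilege only; read is optional and delete is forbidden."""
    def deposit(self, memory, key, item):
        memory.write(key, item)


class ProcessAgent:
    """Collation: read 1, read 2, delete 1, delete 2, integrate, write A."""
    def collate(self, memory, key1, key2, new_key, integrate):
        a, b = memory.read(key1), memory.read(key2)
        if a is None or b is None:
            return
        memory.delete(key1)
        memory.delete(key2)
        memory.write(new_key, integrate(a, b))


mem = CollectiveMemory()
SearchAgent().deposit(mem, "s1", {1, 2})
SearchAgent().deposit(mem, "s2", {2, 3})
SearchAgent().deposit(mem, "s1", {9})      # rejected: write cannot overwrite
ProcessAgent().collate(mem, "s1", "s2", "m", lambda a, b: a | b)
```

In the Passive-Active model, only `deposit` and `collate` are ever invoked; in the Active-Passive model, the search agents also call `read`.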


In this paper we explore the addition of both Active-Passive and Passive-Active collective memory to a society of search agents represented by GP chromosomes. We examine the coordination of knowledge of loosely-coupled, heterogeneous, and initially simple agents. The agents can adapt during the search process, eventually becoming quite complex.
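For readers unfamiliar with the generational machinery these agents run on, the loop shared by GA and GP can be sketched as follows. This is our own toy illustration on fixed-length bitstrings (the "one-max" problem), not the paper's STGP system; `evolve` and its parameters are hypothetical names.

```python
import random

def evolve(fitness, length=8, pop_size=20, generations=30, seed=1):
    """Evaluate, select parents in proportion to fitness, recombine, repeat."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness(c) for c in pop]
        total = float(sum(scores)) or 1.0

        def select():
            # Roulette-wheel (fitness-proportionate) selection.
            r, acc = rng.uniform(0, total), 0.0
            for c, s in zip(pop, scores):
                acc += s
                if acc >= r:
                    return c
            return pop[-1]

        nxt = []
        while len(nxt) < pop_size:
            a, b = select(), select()
            cut = rng.randrange(1, length)      # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.05:             # occasional point mutation
                i = rng.randrange(length)
                child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve(sum)   # "one-max": fitness is simply the number of 1 bits
```

A GP system replaces the bitstrings with parse trees and the one-point crossover with subtree exchange, but the generational skeleton is the same.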

3 Genetic Programming

Genetic programming is a machine learning technique used in the automatic induction of computer programs [Koza, 1992]. A GP system is primarily comprised of three main parts:

 A population of chromosomes.
 A chromosome evaluator.
 A selection and recombination mechanism.

In implementing the system for a new problem domain, the designer must encode function and terminal sets, which will comprise the elements or genes of the chromosome, and implement a function which can evaluate the fitness, or applicability, of a chromosome in the domain. Chromosomes are typically represented as parse trees. The interior nodes are functions and the leaf nodes are terminals.

The first population of chromosomes is randomly generated. Each chromosome is then evaluated against a domain specific fitness function. The next generation is comprised of the offspring of the current generation: parents are randomly selected in proportion to their fitness evaluation. Thus, more fit chromosomes are likely to contribute genetic material to successive generations. This generational process is then repeated until either a preset number of generations has passed or the population converges.

Two considerations for designing the function and terminal sets are closure and sufficiency. Closure states that all functions must be able to handle all inputs, i.e., division can handle a 0 denominator. Sufficiency requires that the domain be solvable with the given function and terminal sets. One common ramification of closure is that all functions, function arguments, and terminals have just a single type. Hence, closure means any element can be a child node in a parse tree for any other element without having conflicting data types. Montana claims that closure is a serious limitation to genetic programming. He introduces a variant of GP in strongly typed genetic programming (STGP), in which variables, constants, arguments, and returned values can be of any type [Montana, 1995]. The only restriction is that the data type for each element be specified beforehand. This causes the initialization process and the various genetic operations to only construct syntactically correct trees. It has been shown that STGP can significantly reduce the search space [Montana, 1995, Haynes et al., 1995]. The STGP variant mainly restricts the construction and reproduction of chromosomes; the basic algorithm is GP.

4 Theorem Proving

We have used STGP to apply inference rules in the derivation of a target sentence Φ from a knowledge base KB [Haynes et al., 1996a]. We encode sentences using UNITY² [Chandy and Misra, 1988] operators to form both the KB and Φ, and then use propositional inference rules from the UNITY proof logic to show that the knowledge base entails that sentence, KB ⊨_i Φ. The terminal set is comprised of the predicate variables {P, Q, R, S, T}. Both a subset of the theorems presented in Chandy and Misra [Chandy and Misra, 1988] and some additional simple logic rules were chosen as the propositional inference rules used to prove entailment. The subset and the simple rules combine to form the function set as reported in [Haynes et al., 1996a]. Both the terminal and function sets provide sufficient functionality and representation to prove the entailment of sentences from the provided KB. The members of the function set map trees from GP space into trees in propositional logic space.

A simple heuristic in the logical inference process is to only apply inferences when there is a match between the "arguments" of the hypothesis and sentences in the KB [Chandy and Misra, 1988, Russell and Norvig, 1995]. Strong typing enables the GP system to guide the pattern matching rule. The restriction of only considering valid child nodes has a side effect of reducing the size of the possible search space [Montana, 1995, Haynes et al., 1995].

Each chromosome represents a propositional logical inference from the KB and is evaluated to determine its fitness. The fitness function evaluates the soundness of an inference from the KB. To aid this process, two additional knowledge bases are maintained: a newly derived sentence knowledge base, KB_n, and a derived sentence knowledge base, KB_d. We also define KB_o as the given sentences and KB = KB_o + KB_d. The KB_n is comprised of the newly derived sentences for the current generation. The KB_d begins as an empty list. After each generation, the sentences from the KB_n are moved to the KB_d. The KB_d allows us to track sentences derived by the inference engine.

The fitness function maps the chromosome from a parse tree to a series of operations of inference rules in the propositional logic space. The resultant series of rules can be represented as a tree structure, which we will call a proof tree. The fitness evaluation has different categories to allow for the evolution of more fit solutions. The fitness evaluation is as follows:

 N points for true sentences in the proof tree that are from the KB_o.
 M points for true sentences in the proof tree which are in the KB_d.
 4 x (M + N) points if the proof of KB ⊨_i Φ is at the root of the proof tree.
 2 x (M + N) points if the proof tree contains the proof of KB ⊨_i Φ.
 M + N points for the final proof tree being true.

The KB_o is separated from the KB_d to bias the inference engine to build a proof from the sentences in the KB_o. The KB_n is maintained separately from the KB_d to allow a reward for the discovery of new sentences. We are concerned with both fairness of derivation of a sentence and in detecting random generation of a sentence versus derivation from the KB_o. We enforce fairness in that a chromosome which derives a sentence without proving its validity will not be rewarded either before or after another chromosome in the current generation has derived the sentence while proving its validity. Chromosomes in later generations may treat any appearance of a sentence derived in an earlier generation as valid.

The collection of valid sentences into the KB_d exemplifies an Active-Passive collective memory of previous successful entailments. Each generation of agents adds valid sentences to the collective memory, KB_d, decreasing the computational effort agents in successive generations have to perform. Consider a hypothetical problem with two leads-to's³ in the knowledge base,

    KB_o = {p ↦ q, q ↦ r},

and some target sentence Φ, which needs an intermediate step of deriving the transitive property

    p ↦ q ; q ↦ r
    -------------
        p ↦ r

Once this sentence is derived, subsequent generations can just refer to p ↦ r, without presenting the accompanying proof. The agents build off of past problem solving exercises, making computational shortcuts. They do not need to re-derive everything from scratch.

² UNITY is a popular language for expressing the formal specification of concurrent programs and for using program derivation to develop concurrent implementations that are provably correct.
³ leads-to is a UNITY progress property which is denoted by p ↦ q.

5 Clique Detection

We have used clique detection as a benchmark for improving learning in GP systems [Haynes, 1996, Haynes et al., 1996b]. A collection of cliques in a graph can be represented as a list of lists of nodes which, in turn, can be represented by a tree structure. Given a graph G = (V, E), a clique of G is a complete subgraph of G. We denote a clique by the set of vertices in the complete subgraph. Our goal is to find all cliques of G. Since the subgraph of G induced by any subset of the vertices of a complete subgraph of G is also complete, it is sufficient to find all maximal complete subgraphs of G. A maximal complete subgraph of G is a maximal clique.

Each chromosome in a STGP pool will represent sets of candidate maximal cliques. The function and terminal sets are F = {ExtCon, IntCon} and T = {1, ..., #nodes}. ExtCon "separates" two candidate maximal cliques, while IntCon "joins" two candidate cliques to create a larger candidate. The fitness evaluation rewards for clique size and rewards for the number of cliques in the tree. To gather the maximal complete subgraphs, the reward for size is greater than that for numbers. We also ensure that we do not reward for a clique either being in the tree twice or being subsumed by another clique. The first falsely inflates the fitness of the individual, while the second invalidates the goals of the problem. The algorithm for the fitness evaluation is:

 Parse the chromosome into a sequence of candidate maximal cliques, each represented by an ordered list of vertex labels.
 Throw away any duplicate candidate maximal cliques and any candidate maximal cliques that are subsumed by other candidate maximal cliques.
 Throw away any candidate maximal cliques that are not complete subgraphs.

The fitness formula is

    F = c·α + Σ_{i=1}^{c} (n_i)^β

where c = # of valid candidate maximal cliques and n_i = # of nodes in clique C_i. Both α and β are configurable by the user. β has to be large enough so that a large clique contributes more to the fitness of one chromosome than a collection of proper subcliques contributes to the fitness of a different chromosome.

Figure 1 is a ten node graph we have used in our previous research to test the clique detection system. There are exactly 10 cliques:

    C = {{0,3,4}, {0,1,4}, {1,4,5}, {1,2,5}, {2,5,6}, {3,4,7}, {4,7,8}, {4,5,8}, {5,8,9}, {5,6,9}}.
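The clique fitness evaluation of Section 5 can be sketched as follows. This is our hedged reading of the procedure: the parsing of a real S-expression chromosome is simplified to a plain list of vertex sets, and the defaults α = 10 and β = 9 follow the settings the paper reports using.

```python
from itertools import combinations

def clique_fitness(candidates, edges, alpha=10, beta=9):
    """candidates: list of vertex collections; edges: set of frozenset pairs.

    Follows the evaluation steps: drop duplicates and subsumed candidates,
    drop incomplete subgraphs, then score F = c*alpha + sum(n_i**beta).
    """
    cands = [frozenset(c) for c in candidates]     # drops repeated nodes too
    kept = []
    for c in cands:
        # Skip duplicates and candidates strictly subsumed by another.
        if c in kept or any(c < other for other in cands):
            continue
        kept.append(c)
    # Keep only complete subgraphs: every vertex pair must be an edge.
    valid = [c for c in kept
             if all(frozenset((u, v)) in edges for u, v in combinations(c, 2))]
    return len(valid) * alpha + sum(len(c) ** beta for c in valid)
```

For a triangle graph on {0, 1, 2}, the candidate list [{0,1,2}, {0,1}, {0,1,2}, {1,3}] reduces to the single valid clique {0,1,2}: the pair {0,1} is subsumed, the second {0,1,2} is a duplicate, and {1,3} is not a complete subgraph.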


Figure 1: Example 10 node graph.

An example chromosome for the 10 node graph is presented in Figure 2. It has five candidate cliques, and the only valid cliques are #2 and #5: C = {{4,8,7}, {5,6}}. The others are eliminated because they violate at least one of the rules: #4 contains duplicate nodes, i.e. node 7 is repeated; #3 is subsumed by #2; and #1 is not completely connected.

This example graph exhibits nice regularities which allow for the efficient comparison of results across different test runs. In our prior research [Haynes, 1996], we utilized these regularities to identify and enumerate the building blocks, i.e., the connected components. We repaired chromosomes by stripping out all invalid candidate cliques. We investigated various rates of return of repaired chromosomes into the population. We found that by duplicating the coding segments⁴ we could significantly improve the search process.

If a chromosome contained no valid candidate cliques, we tried a repair strategy of injecting the set of all valid cliques found to date⁵. We found that such a repair strategy led to premature convergence in a non-optimal section of the search space. It appears that the Active-Passive collective memory technique has failed to aid in the search process. We find that if we instead adopt a Passive-Active collective memory technique in this domain, the search process is greatly facilitated. With the Passive-Active collective memory we do not repair chromosomes which have no valid candidate cliques. Instead we gather candidate cliques in the collective memory, removing duplicates and candidates subsumed by larger candidates.

In Figure 3 we present a comparison of three search techniques for clique detection. The noteworthy parameters for the STGP system were a maximum of 600 generations⁶ and a population size of 2000. Each curve shown in Figure 3 is an average of 10 different runs. Each of the methods extends the previous methods. The first method (R0) is a STGP system modified with the type inheritance presented in [Haynes et al., 1996b]. In this method, chromosomes are repaired during the fitness evaluation, but they are not returned into the population. The second search method (R10Q7) replaces the original chromosome with the repaired one with a probability of 0.1. The coding segment is duplicated seven times during the replacement process. The third method (PACM) adds Passive-Active collective memory to piece together the set of all cliques.

The average generation during which each technique discovered the optimal solution is shown in Table 1. On the average, PACM is 7 times more efficient than R10Q7 and 44¼ times more efficient than R0. Finally, if we investigate how much the repair process is assisting the Passive-Active collective memory, we see in Figure 4 that the addition of the duplication of coding segments repair is not significant. The PACMR10Q7 curve corresponds to the PACM curve in Figure 3, while the PACMR0 curve represents a Passive-Active collective memory which does not use the repair process.
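The strip-and-duplicate repair described above might be sketched as follows. The flat candidate-list representation, the `is_valid` predicate, and the exact duplication count (we read "duplicated seven times" as the original plus seven copies) are our simplifications, not the paper's implementation.

```python
def repair(candidates, is_valid, duplicates=7):
    """Strip invalid candidate cliques (the non-coding material) and
    duplicate the surviving coding segments `duplicates` extra times.
    Returns [] when no valid candidate survives (no repair possible)."""
    coding = [c for c in candidates if is_valid(c)]
    return coding * (duplicates + 1) if coding else []
```

With a 10% return rate, only one in ten repaired chromosomes would replace its original in the population; the rest are used solely for fitness evaluation.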


Figure 3: Comparison of best fitness per generation for no repair of chromosomes (R0), duplication of coding segments repair of chromosomes with a 10% return rate and 7 duplicates (R10Q7), and Passive-Active collective memory (PACM), which utilizes R10Q7 to drive the search agents.

6 Experiments

⁴ A coding segment is the material in the chromosome which contributes, either positively or negatively, to the evaluation of the chromosome. In this domain, the coding segments correspond to that material which was not stripped out.
⁵ Unlike the KB_d of the theorem proving domain, we do not differentiate between the knowledge being found in a prior versus the current generation.
⁶ Even if we find the optimal solution, we let the search continue until the maximum number of generations has passed.

Figure 2: S-expression for the 10 node graph.

    Strategy    Appearance
    R0          354
    R10Q7       56
    PACM        8

Table 1: Average appearance of optimal solution for different search strategies.

Figure 4: Comparison of best fitness per generation for Passive-Active collective memory (PACMR10Q7), which utilizes R10Q7 to drive the search agents, and Passive-Active collective memory (PACMR0), which utilizes no repair.

The addition of Passive-Active collective memory to the search technique significantly improves the efficiency of the search process. We want to leverage that improvement to allow clique detection in more realistic graphs. The ten node graph we use to illustrate clique detection is contrived and thus facilitates the search process, i.e. a known optimal solution exists. The search for the optimal solution for this graph is not trivial with either plain GP or STGP systems. In the Second DIMACS Challenge [Johnson and Trick, 1996], random graphs were generated as tests for the maximum clique detection problem. While the duplication of coding segments repair process is able to search such graphs, the plain STGP system will prematurely converge. One shortcoming of these graphs is that the results presented are for the maximal clique size found, if any; no data is presented for either the number or the composition of all cliques in the graph.

Finding both the maximum clique and all cliques in a graph is NP complete [Garey and Johnson, 1979]. We conjecture, without formal proof, that finding all of the cliques in a graph is "more difficult", i.e., more computationally expensive, than finding the maximum clique of a graph. To find the maximum clique of a graph of size n, one can start at size k = 2, and determine if there exists a clique of size k. If there is, then k is incremented by one, and the process is repeated until either a clique of size k is not found or k = n. There is no need to "remember" previous cliques found, unless we first check the corresponding clique of size k - 1 to see if another node can be added to it. The difficulty of finding all the cliques arises in having to remember all of the candidate cliques. In the maximum clique algorithm we are not concerned with identity, but rather with existence. When finding the maximum clique, we ask the question: is there a clique of size k? When finding all the cliques we ask: is this clique of size k the same as another one of size k?

If we want to use graphs from the DIMACS repository to test the duplication of coding segments repair process, we must implement some algorithm to generate all of the cliques. One brute force algorithm is presented in pseudocode in Figure 5. The algorithm builds candidate cliques in increasing levels of size, k. Since the clique detection problem is NP complete [Garey and Johnson, 1979], this algorithm is not guaranteed to be able to find a solution. A viable search heuristic is to detect cliques from the Passive-Active collective memory.

In this section, we experiment with a dataset from the DIMACS repository: hamming6-4.clq⁷. This dataset represents a graph which has 64 nodes, 704 edges, and a maximum clique size of 4.

⁷ ftp://dimacs.rutgers.edu/pub/challenge

    # Find all cliques of size 2
    for i = 0; i < nodes in graph; i++
      for j = 0; j < i; j++
        if edge( i, j )
          insert_clique( 2, i, j );

    # Find all cliques of size k
    for k = 3; k < nodes in graph; k++
      for c = first clique of size k-1; c != NULL; c = next clique of size k-1
        for i = 0; i < nodes in graph && i not a node in c; i++
          if i is connected to all nodes in c
            insert_clique( k, i, all_nodes( c ) );

    delete all candidate cliques which duplicate or are subsumed by others

Figure 5: Brute force deterministic clique detector algorithm.

Figure 6: Passive-Active collective memory search applied to the hamming6-4 graph. In particular, comparison of best fitness per generation for duplication of coding segments repair of chromosomes with a 10% return rate and 7 duplicates (R10Q7), and Passive-Active collective memory (PACM), which utilizes R10Q7 to drive the search agents.

From our brute force clique detector, we know that there are 464 cliques, with a maximum fitness of 1,597,424⁸. We present the results of testing both R10Q7, i.e., replace the original chromosome with the repaired one with a probability of 0.1 with the coding segment duplicated seven times during the replacement process, and PACM, i.e., add Passive-Active collective memory to piece together the set of all cliques⁹. The averaged results are shown in Figure 6. The addition of Passive-Active collective memory is clearly significant in improving the search process. However, the highest reported fitness of approximately 650,000 is only about 40% of the maximum fitness. As the learning curve has not stabilized at a plateau, we could allow the search to continue for more generations. We could also increase the population size. Both methods fail to address our implicit desire to effectively search the space in both minimal time and memory. A possible extension is to bestow further computational effort on the process agent(s). We could allow these agents to try to extend candidate cliques by either merging ones in the collective memory or by utilizing the deterministic brute force algorithm on a subset of the candidate cliques.
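The brute force detector of Figure 5 can be rendered runnable as below; this is a sketch assuming the graph is given as a set of frozenset edges, with the level-by-level growth and the final duplicate/subsumption deletion made explicit. Note the exponential memory cost of keeping every level, which is exactly the difficulty discussed above.

```python
def all_maximal_cliques(n, edges):
    """Grow cliques level by level (size 2, 3, ...), then keep only
    candidates not subsumed by a larger clique. Exponential in the worst
    case, as expected for an NP complete problem."""
    def connected(u, v):
        return frozenset((u, v)) in edges

    # All cliques of size 2 are simply the edges.
    level = {frozenset((i, j)) for i in range(n) for j in range(i)
             if connected(i, j)}
    cliques = set(level)
    # Extend each clique of size k-1 by any vertex adjacent to all its nodes.
    while level:
        nxt = set()
        for c in level:
            for i in range(n):
                if i not in c and all(connected(i, v) for v in c):
                    nxt.add(c | {i})
        cliques |= nxt
        level = nxt
    # Delete candidates subsumed by others; sets dedupe automatically.
    return {c for c in cliques if not any(c < other for other in cliques)}
```

Checking it against the ten node graph of Figure 1 (whose edge set is the union of the pairs inside the ten listed triangles) recovers exactly those ten cliques.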

⁸ For this problem, and our earlier one, α = 10 and β = 9.
⁹ Preliminary testing showed that R0 prematurely converges to a non-optimal solution.

7 Discussion

The models of collective memory search are effective if building blocks of the solution can be identified. In the theorem proving domain, valid sentences form the building blocks. In the clique detector domain, candidate cliques form the building blocks. The identification of building blocks in genetic programming is in general a difficult task [O'Reilly, 1995, Rosca and Ballard, 1996, Haynes, 1996]. In part this is due to the domain dependent nature of the alphabet, i.e. the members of the function and terminal sets¹⁰. The repair of chromosomes by the duplication of coding segments strategy holds promise in automating the detection of building blocks [Haynes, 1996]. If the system designer can identify function nodes that allow for the addition of non-coding segments without changing the semantic meaning of the chromosome, the detection of building blocks can be automated.

¹⁰ Building blocks are easier to find in GA chromosomes, but the typical string representation is the binary alphabet and of fixed length. As such, GA building blocks are at the structural level, whilst GP building blocks are at the semantic level [Haynes, 1996].

8 Conclusions

The collective memory search model is applicable in integrating results from loosely-coupled agents. Simple search agents are effective in gathering knowledge. We can increase the processing power of the search agents, but there might be physical or economical restrictions on the processing capabilities of the search agents. If there are such restrictions on the search agents, we can allow complex process agents to collate and process the raw data. We could also employ simple process agents, capitalizing on the reduced search space.

We have shown collective memory can be used to augment distributed search. It can serve as either a springboard from which further search can be launched or as a central repository from which parts of the solution can be connected. We find the Passive-Active model significantly improves the search process. It also allowed scaling up in a problem domain, while our previous method failed to scale.

9 Future Work

In our current research, the Passive-Active model of collective memory search is applied to the clique detection domain. The process agent in this domain just collates the knowledge, removing duplicates. There is scope to improve the process agent such that it explores the search space by seeing if the candidate cliques can be merged to form larger candidate cliques. While the process agent is active since it collates, it can become even more energetic by exploring the collective memory space.

References

[Chandy and Misra, 1988] K. Mani Chandy and Jayadev Misra. Parallel Program Design: A Foundation. Addison-Wesley, 1988.

[Davis, 1991] Lawrence Davis, editor. Handbook of Genetic Algorithms. Van Nostrand Reinhold, New York, NY, 1991.

[DeJong, 1990] Kenneth A. DeJong. Genetic-algorithm-based learning. In Y. Kodratoff and R.S. Michalski, editors, Machine Learning, Volume III. Morgan Kaufmann, Los Altos, CA, 1990.

[Dorigo, 1992] Marco Dorigo. Optimization, Learning and Natural Algorithms. PhD thesis, Politecnico di Milano, Italy, 1992.

[Dorigo et al., 1996] Marco Dorigo, Vittorio Maniezzo, and Alberto Colorni. The Ant System: Optimization by a colony of cooperating agents. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 26(1):29-41, 1996.

[Fennell and Lesser, 1977] Richard D. Fennell and Victor R. Lesser. Parallelism in Artificial Intelligence problem solving: A case study of Hearsay II. IEEE Transactions on Computers, C-26(2):98-111, February 1977. (Also published in Readings in Distributed Artificial Intelligence, Alan H. Bond and Les Gasser, editors, pages 106-119, Morgan Kaufmann, 1988.)

[Garey and Johnson, 1979] Michael R. Garey and David S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W.H. Freeman and Co., San Francisco, CA, 1979.

[Garland and Alterman, 1995] Andrew Garland and Richard Alterman. Preparation of multi-agent knowledge for reuse. In David W. Aha and Ashwin Ram, editors, Working Notes for the AAAI Symposium on Adaptation of Knowledge for Reuse, Cambridge, MA, November 1995. AAAI.

[Garland and Alterman, 1996] Andrew Garland and Richard Alterman. Multiagent learning through collective memory. In Sandip Sen, editor, Working Notes for the AAAI Symposium on Adaptation, Co-evolution and Learning in Multiagent Systems, pages 33-38, Stanford University, CA, March 1996.

[Goldberg, 1989] David E. Goldberg. Genetic Algorithms in Search, Optimization & Machine Learning. Addison-Wesley, Reading, MA, 1989.

[Haynes et al., 1995] Thomas Haynes, Roger Wainwright, Sandip Sen, and Dale Schoenefeld. Strongly typed genetic programming in evolving cooperation strategies. In Larry Eshelman, editor, Proceedings of the Sixth International Conference on Genetic Algorithms, pages 271-278, San Francisco, CA, 1995. Morgan Kaufmann Publishers, Inc.

[Haynes et al., 1996a] Thomas Haynes, Rose Gamble, Leslie Knight, and Roger Wainwright. Entailment for specification refinement. In Proceedings of the First Genetic Programming Conference, 1996.

[Haynes et al., 1996b] Thomas Haynes, Dale Schoenefeld, and Roger Wainwright. Type inheritance in strongly typed genetic programming. In Kenneth E. Kinnear, Jr. and Peter J. Angeline, editors, Advances in Genetic Programming 2, chapter 18. MIT Press, 1996.

[Haynes, 1996] Thomas Haynes. Duplication of coding segments in genetic programming. In Proceedings of the Thirteenth National Conference on Artificial Intelligence, Portland, OR, August 1996.

[Holland, 1975] John H. Holland. Adaptation in Natural and Artificial Systems. University of Michigan Press, Ann Arbor, MI, 1975.

[Johnson and Trick, 1996] David S. Johnson and M. A. Trick, editors. Cliques, Coloring, and Satisfiability: The Second DIMACS Challenge. DIMACS Series in Discrete Mathematics and Theoretical Computer Science. American Mathematical Society, 1996. (to appear).

[Koza, 1992] John R. Koza. Genetic Programming: On the Programming of Computers by Means of Natural Selection. MIT Press, Cambridge, MA, 1992.

[Montana, 1995] David J. Montana. Strongly typed genetic programming. Evolutionary Computation, 3(2):199-230, 1995. (Also published as BBN Technical Report #7866, Cambridge, MA, March 1994.)

[O'Reilly, 1995] Una-May O'Reilly. An Analysis of Genetic Programming. PhD thesis, Carleton University, Ottawa-Carleton Institute for Computer Science, Ottawa, Ontario, Canada, 22 September 1995.

[Rosca and Ballard, 1996] Justinian Rosca and Dana H. Ballard. Discovery of subroutines in genetic programming. In P. Angeline and K. E. Kinnear, Jr., editors, Advances in Genetic Programming 2, chapter 9. MIT Press, Cambridge, MA, USA, 1996.

[Russell and Norvig, 1995] Stuart Russell and Peter Norvig. Artificial Intelligence: A Modern Approach. Prentice Hall, 1995.

