Towards Autonomous Molecular Computers

Masami Hagiya
Department of Information Science, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
[email protected]
ABSTRACT

In this paper, we first summarize the research that has been completed in the field of DNA computing and the research problems that must be overcome if the technique of computing with molecules is to become practical. We then propose a new direction in research towards autonomous molecular computers, describe the author's work on the implementation of state machines using DNA molecules, and discuss the future of DNA computing from this perspective.
1 DNA Computing and Accompanying Research Problems
DNA computing is an interdisciplinary field of research straddling computer science and biochemistry (or biophysics). It has been an area of active investigation since Adleman's paper in Science, which described a method for solving a DHPP (directed Hamiltonian path problem) using DNA molecules [1]. In this section, we briefly review the research into DNA computing that has been performed to date, summarize the research problems that must be solved for the field to progress, and discuss the probable directions of future research in this field. In the next section, we build upon the arguments developed in the first section and propose a new direction in research, stemming from the idea of autonomous molecular computers, in connection with the author's work on the implementation of state machines using DNA molecules. We call a molecular computation autonomous if it proceeds without explicit operations performed from outside a test tube. At the end of the paper, we briefly touch upon DNA-based nano-scale construction, because autonomous computation is the key to that technology, and it is one of the promising areas to which DNA computing will be applied.
1.1 The Adleman-Lipton Paradigm
The paradigm of DNA computing, the Adleman-Lipton Paradigm, which was first proposed by Adleman [1] and later expanded by Lipton [12], is one of data-parallel computation using DNA molecules. It aims to solve combinatorial search problems such as DHPPs by generating and testing candidates (such as paths) represented by DNA molecules. In the generation step, candidates for a solution to a search problem are randomly generated using hybridization between complementary sequences of DNA. Each DNA molecule that is generated represents one candidate for a solution. Within a test tube used in ordinary molecular biology experiments, about 10^12 candidates are generated, although there are many copies of each single candidate. In the testing step, techniques of molecular biology are employed to check whether each candidate satisfies the conditions necessary for it to represent a solution to the search problem. In this step, each condition is tested by a particular operation in molecular biology, and only those molecules that pass the test are selected. Since each operation is simultaneously applied to all of the molecules in the test tube, this process can be classified as a data-parallel computation. Reif uses the designation DP-BMC (biomolecular computation by distributed parallelism) for DNA computation in the Adleman-Lipton Paradigm [16]. In DP-BMC, computation within each DNA molecule is sequential. Parallelism is achieved only because each operation is applied to all of the DNA molecules in a test tube in parallel. Since each molecule holds data independently of the other molecules, computation within each molecule is carried out completely independently.

In the field of DNA computing, there have also been attempts to move beyond generation and testing towards more general kinds of parallel computation using DNA molecules. Reif, for example, has proposed a method for simulating a PRAM (parallel random access machine), one of the abstract models for parallel computation, using DNA molecules [15]. Ogihara and Ray have proposed a method for computing boolean circuits with DNA molecules and have performed some preliminary experiments [14]. However, it has become clear in recent years that DNA computers probably will not be able to replace electronic computers altogether. More effort must be made to find the "killer app", i.e., the right kind of application in which DNA computing performs better than VLSI. In particular, applications suitable for the massive parallelism of DNA computing must be presented [3]. Among other kinds of applications, DNA fingerprinting [11] and nano-scale construction are considered promising; we briefly touch upon nano-scale construction at the end of this paper.

However, many research problems must be solved before massively parallel DNA computers can reach a practical level in any kind of application, regardless of whether they are based on the Adleman-Lipton Paradigm or on some other type of parallel computation. In particular, the following three problems are of the greatest importance:

- reliability of experimental results
- problem size
- experimental costs

Let us review these problems, together with some related research.
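For readers who think in code rather than in test tubes, the following Python sketch caricatures the generate-and-test scheme in silico: random path generation stands in for hybridization-driven generation, and each molecular-biology selection step becomes a filter over the candidate pool. The graph, the pool size and the vertex numbering are made up for illustration and are unrelated to any actual experiment.

```python
import random

# Toy in-silico caricature of the Adleman-Lipton generate-and-test scheme
# for a directed Hamiltonian path problem.  "Hybridization" is replaced by
# random path generation; each selection operation becomes a filter over
# the candidate pool.  (A real tube holds about 10^12 candidates.)

EDGES = {(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (2, 4), (3, 4)}   # made-up graph
N = 5                                                              # vertices 0..4

def random_path(max_len):
    """One candidate: a random walk along edges from the designated start vertex."""
    path = [0]
    while len(path) < max_len:
        nexts = [w for (v, w) in EDGES if v == path[-1]]
        if not nexts:
            break
        path.append(random.choice(nexts))
    return tuple(path)

# Generation step: a large pool of candidates (many duplicates, as in a tube).
pool = {random_path(N) for _ in range(100_000)}

# Testing steps: each condition is one "operation" applied to all candidates.
pool = {p for p in pool if len(p) == N}              # correct length
pool = {p for p in pool if p[-1] == N - 1}           # ends at the designated vertex
pool = {p for p in pool if len(set(p)) == N}         # visits every vertex exactly once

print(sorted(pool))    # any surviving path is a directed Hamiltonian path
```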
1.2 Reliability of Experimental Results
In order to increase the reliability of the results obtainable by DNA computing, one should take physico-chemical considerations into account in the design of tube algorithms and optimize tube protocols. For example, it is important to adjust physico-chemical conditions such as temperatures and concentrations so as to increase the yield of the desired products. It is also important to reduce experimental errors where they are unavoidable. In order to reduce errors, one should not only make each molecular biology operation more accurate but also devise a mechanism by which the whole process of computation can compensate for the errors incurred by each operation. There has been progress in the design of better data encodings that permit less mis-hybridization [4], in the reduction of errors by the iteration of operations [9], and so on.
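As a rough computational illustration of what "better encoding that permits less mis-hybridization" can mean, the sketch below scores a candidate set of code words by the smallest Hamming distance between any word and any other word or reverse complement. The words and the distance criterion are mine; encoding design work such as [4] uses more careful measures of hybridization likelihood.

```python
from itertools import product

# Crude encoding check: mis-hybridization is less likely when every code word
# is far (in Hamming distance) both from the other words and from the reverse
# complements of all words.  A stand-in for the criteria studied in [4].

COMPLEMENT = {"a": "t", "t": "a", "g": "c", "c": "g"}

def revcomp(w):
    return "".join(COMPLEMENT[b] for b in reversed(w))

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def min_separation(words):
    """Smallest distance over all word/word and word/reverse-complement pairs."""
    targets = list(words) + [revcomp(w) for w in words]
    return min(hamming(u, v) for u, v in product(words, targets) if u != v)

words = ["gattacacatta", "ccgtaatgcgtc", "tgctgcaaccgg"]   # made-up 12-mers
print(min_separation(words))   # the larger this separation, the better the encoding
```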
1.3 Problem Size
In order for DNA computing to become practical, it must be able to solve problems of a large size. Presently, however, the molecular biology experiments performed in this field aim only to certify the feasibility of the operations in a proposed method for DNA computing, and their size remains relatively small. (For example, the graph in Adleman's experiment consists of only seven vertices.) To increase the problem size, it is necessary to increase the amount of information encoded in a single molecule and to increase the number of DNA molecules in a test tube. In the former case, better encoding, which permits less mis-hybridization, is again very important. Although it is impossible to overcome the limit on the number of molecules in a test tube, one can introduce methods for approximate computation (such as evolutionary computation) into DNA computing. For example, Suyama has proposed a method for the approximate solution of a DHPP by combining locally generated paths into a Hamiltonian path [18]. Since the number of molecules in a test tube is extremely large compared with the number of processors in electronic computers, DNA computers are expected to perform as well as or better than electronic ones even in approximate computation, if one chooses appropriate applications. However, this expectation should be verified both theoretically and experimentally. There are many research issues, such as the proper implementation of the basic operations of approximate computation (mutation and crossover in a genetic algorithm) and the properties that such operations possess.

Evolutionary computation with DNA molecules is interesting with respect to the following two points:

1. Evolution in nature is also mediated by DNA molecules.

2. In-vitro selection of RNA, which is established as a method of searching for ribozymes, is a kind of evolutionary computation, although in in-vitro selection molecules are selected by chemical or physical properties. It is natural to replace such chemical or physical properties with the parameters of logical computation (Figure 1).

An interesting application of evolutionary computation with DNA molecules is proposed by the molecular computing group at the University of Memphis [5]. They present a way to implement genetic algorithms using DNA in the search for good encodings.
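To make the idea of "selection by computation" in Figure 1 concrete, here is a minimal evolutionary loop in which a random pool is repeatedly mutated and selected by a logical fitness function (the number of satisfied clauses of a small, made-up CNF formula) instead of by chemical affinity; the population size and rates are arbitrary.

```python
import random

# Minimal "selection by computation" loop (cf. Figure 1): mutate a random pool
# and select by a logical criterion rather than by chemical or physical affinity.

CLAUSES = [(1, -2, 3), (-1, 2, 4), (2, -3, -4), (1, 3, -4)]   # toy CNF over x1..x4

def satisfied(assign, clause):
    # literal l > 0 requires x_l = 1, literal l < 0 requires x_l = 0
    return any((assign[abs(l) - 1] == 1) == (l > 0) for l in clause)

def fitness(assign):
    return sum(satisfied(assign, c) for c in CLAUSES)

def mutate(assign, rate=0.2):
    return [1 - b if random.random() < rate else b for b in assign]

pool = [[random.randint(0, 1) for _ in range(4)] for _ in range(50)]   # random pool
for _ in range(20):
    pool = [mutate(a) for a in pool]                 # mutation
    pool.sort(key=fitness, reverse=True)             # selection by computation
    pool = pool[:25] * 2                             # keep and "amplify" the best half

best = max(pool, key=fitness)
print(best, fitness(best), "of", len(CLAUSES), "clauses satisfied")
```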
1.4 Experimental Costs
Each molecular biology operation employed in DNA computing requires time and biochemical resources, including DNA and enzymes. In order to decrease the cost of each operation, new enzymes, new equipment, robots, and chemical ICs (integrated circuits) are needed. However, we believe that it is more important to reduce the number of operations required in a DNA computation. In the Adleman-Lipton Paradigm, computation proceeds as DNA molecules are modified or separated by operations that are performed from outside the test tube, either by hand or by a robot. However, some kinds of molecular computation do not require such external operations. If one can make use of computation that proceeds autonomously by molecular reactions, then
the costs of operations will be greatly decreased. Autonomous computation by molecular reactions is extensively discussed in the next section.

Figure 1: Evolutionary computation with molecules. In vitro selection applies mutation and selection by affinity etc. to a random pool; evolutionary molecular computation replaces this with selection by computation.
2 Autonomous Molecular Computers
In this section, we discuss autonomous computation by molecular reactions in the light of Winfree's cellular automata and the author's work on the implementation of state machines. Finally, we give a perspective on the future of this field.
2.1 Winfree's Cellular Automata

Hybridization between complementary sequences of DNA is a powerful computational method in itself. In Adleman's work, random paths in a directed graph are generated only by hybridization between DNA molecules. This process is autonomous, because no operations are performed from outside the test tube except those that control its temperature. The power of hybridization between DNA molecules has been investigated to its limit by Winfree [19]. Winfree has proposed a computational model that employs rectangular tiles made of DX units (double crossover units), each of which consists of four DNA molecules hybridized to one another. A DX unit has four sticky ends, with which it can be hybridized to four other DX units (the left upper, left lower, right upper and right lower units) to form a planar crystal structure (Figure 2). If one prepares DX units having different sticky ends, one can play a brick-yard game with them, since they hybridize selectively to other DX units with complementary sticky ends. Beginning with a one-dimensional base pattern, a two-dimensional brick pattern is formed as bricks are placed row by row in the test tube.

Figure 2: Tiling by DX units.

Winfree showed that, by this method, it is possible to simulate the computation of one-dimensional cellular automata by a tiling reaction of DX units in the two-dimensional plane [19]. Winfree also showed that search problems such as the DHPP can be solved by having many tiling reactions proceed in parallel in a test tube. DX molecules are tiled autonomously, without any external operations. Winfree calls such a reaction one-pot, but we use the word autonomous in this paper.
Tiling of DX units constitutes a parallel computation, since more than one tiling reaction can proceed in parallel in the test tube. It is also parallel in the sense that more than one DX unit can hybridize in parallel within a single tiling reaction. Reif uses the term LP-BMC (biomolecular computation by local parallelism) for the latter kind of parallelism. To achieve LP-BMC, molecular computation must be performed autonomously, without external operations. Research on autonomous computation (LP-BMC in particular) is not only important for decreasing the experimental costs in the Adleman-Lipton Paradigm, but also has value in itself, because it aims at understanding the computational power inherent in molecular reactions.
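A rough in silico analogue of this tiling computation, written as code: each "tile" has two lower sticky ends that must match the row already assembled, and its value becomes one cell of the next row. The tile set below realizes the XOR rule, chosen only for illustration; the integer encoding of sticky ends is of course a caricature of DX chemistry.

```python
# Caricature of computing a one-dimensional cellular automaton by tiling:
# a tile sticks only where its two lower "sticky ends" match the previous
# row, and its upper value becomes one cell of the next row.

TILES = {(a, b): a ^ b for a in (0, 1) for b in (0, 1)}   # lower ends -> new cell

def next_row(row):
    """Assemble the next row by placing one tile over each adjacent pair."""
    return [TILES[(row[i], row[i + 1])] for i in range(len(row) - 1)]

row = [0, 0, 0, 1, 0, 0, 0]      # one-dimensional base pattern
for _ in range(4):
    print(row)
    row = next_row(row)
```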
2.2 DNA State Machines
2.2.1 Successive Localized Polymerization

In our previous paper, we proposed a method for implementing state machines using a single strand of DNA, where a state transition is performed by the polymerization of a hairpin structure formed by the single strand [8]. State transitions can be repeated in a test tube using simple thermal cycles, with no external operations. In this paper, we call the method successive localized polymerization. In the method, each single strand of DNA is regarded as an independent state machine. The current state of a state machine is represented by the subsequence at the 3'-end of the single strand. By virtue of the hairpin structure, the current state is hybridized to a part of the single strand representing an entry in a transition table. A state transition is then performed by polymerization of the hairpin structure, which adds the next state to the 3'-end (Figure 3). The transition table of a state machine takes the form
  stopper·state'_1·state_1 · stopper·state'_2·state_2 · ... · stopper·state'_n·state_n,

where in each pair (state'_i, state_i) of states, state_i denotes the state before a transition, and state'_i the state after the transition.

We developed a technique called polymerization stop, by which polymerization stops immediately after the next state is added to the 3'-end [8]. To the left of each pair of states is a sequence called a stopper sequence, denoted by stopper in the figure. The polymerization buffer lacks one particular base of the four used to build DNA. Assume that T is missing from the polymerization buffer. Then any repetition of A, the complement of T, can serve as a stopper sequence, because polymerization must stop when an A is encountered and its complement cannot be found. In the experiments previously reported [8], triplets AAA were used as stopper sequences.

Table 1: DNA sequences used in the experiment.
  s1:  gcacgatctaggaaa
  s2:  tcccgttctgggcct
  s3:  gtggtttgcgctcgt
  ini: ccaaa

In order for each molecule to behave as an independent state machine, intramolecular reactions must be favored so as to form hairpin structures. There are several approaches to the prohibition of intermolecular reactions, such as employing surface chemistry, which is the approach we currently take in our experiments. State transitions can be performed successively, because the hairpin structure of the previous transition is destroyed at high temperature, and a new hairpin structure for the next transition can then be formed when the tube is cooled down again. Therefore, simple thermal cycles enable autonomous state transitions. In our previous paper [8], we reported a preliminary experiment in which two successive transitions are performed by the following protocol:
  initial denaturation step, 90°C for 1 min
  extension step, 68°C for 1 min
  four cycles of
    - denaturation at 90°C for 1 min
    - cooling down on ice for 1 min
    - incubation at 40°C for 30 sec
    - extension at 68°C for 30 sec

The cooling step is intended to form hairpin structures before extension. Figure 4 shows the successive transitions expected for the sequence used in the experiment, which is of the form ini-s2-s1-s3-s2-s1 as defined in Table 1. Two bands were observed in electrophoresis of the product of the first extension step: one is that of ini-s2-s1-s2-s3-s2, and the other is that of ini-s2-s1-s2-s3-s2-s1. After the four cycles of the above protocol, these two bands disappeared and the band corresponding to ini-s2-s1-s3-s2-s1-s2-s3 emerged. We finally checked that two transitions had actually occurred by sequencing the products of the above reaction.

According to our recent work [17], successive transitions are also possible without thermal cycles, if the test tube is kept at a relatively high temperature, at which hairpin structures are repeatedly formed and destroyed. This method can be applied to solve various kinds of problems, including many NP-complete problems such as CNF-SAT, the vertex cover, the direct sum cover and the DHPP [17, 20].
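As a purely symbolic illustration of successive localized polymerization (ignoring base sequences, complementarity and hairpin geometry), the following sketch treats the 3'-terminal block of a strand as the current state and appends the next state on each "thermal cycle"; the transition table used here is made up and is not the experimental design of Table 1.

```python
# Symbolic model of successive localized polymerization: a "strand" is a list
# of blocks, with the transition table carried on the same strand and the
# current state at the 3'-end.  Each cycle appends exactly one next state.

TABLE = [("stopper", "s1", "s2"),   # in state s2, append s1
         ("stopper", "s3", "s1"),   # in state s1, append s3
         ("stopper", "s2", "s3")]   # in state s3, append s2

def one_cycle(strand, table):
    """One thermal cycle: the 3'-terminal block finds its entry in the table
    and polymerization (halted by the stopper) adds the next state."""
    head = strand[-1]                     # current state at the 3'-end
    for _stopper, nxt, state in table:
        if state == head:
            return strand + [nxt]
    return strand                         # no matching entry: no extension

strand = ["<table>", "<spacer>", "s2"]    # initial state s2 at the 3'-end
for _ in range(4):
    strand = one_cycle(strand, TABLE)
print("-".join(strand))                   # <table>-<spacer>-s2-s1-s3-s2-s1
```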
Figure 3: State transitions.
If successive state transitions are regarded as one operation (one bio-step), these problems can be solved by a fixed number of operations independent of the size of the problem. This is an example in which autonomous computation decreases the experimental costs of DNA computation.

The molecular computing group at the University of Memphis also reports work on implementing state machines with DNA [6]. They propose three methods for implementing finite state machines: (1) ligation-based, (2) hybridization-based and (3) ligation-based nondeterministic. Each method has advantages and disadvantages. An interesting proposal in their paper is the use of methylation in the ligation-based nondeterministic method, from which they expect a self-regulation that keeps the concentration of each state approximately equal. In the ligation-based methods, since the transition rules are encoded in input adapters, only one kind of state machine can be executed in a single tube. The situation is the same in the hybridization-based method, because the transition rules are represented by transition molecules in the solution. In our method, on the other hand, the transition table is carried on the DNA molecule itself, so different state machines can be executed in parallel.

2.2.2 Computing Boolean Expressions

In our previous paper, we proposed a method for computing boolean expressions by state transitions [8]. The difference between our method and others for computing boolean circuits with DNA molecules, such as that of Ogihara and Ray [14], is that not only the input to the
boolean expression but also the boolean expression itself is represented as a DNA molecule (Figure 5).
Figure 5: Input, program and state. The single strand consists of the program (boolean expression), the input data, a spacer, and the current state.

For computing a boolean expression containing n variables x_1, x_2, ..., x_n, we prepare three states, denoted by var_i, var_i^{+1} and var_i^{-1}, for each variable x_i. In state var_i, the value of variable x_i is read. State var_i^{+1} means that the value of x_i is true (1), and state var_i^{-1} means that the value of x_i is false (0). The program that computes a given boolean expression is implemented by a state machine and represented by a single-stranded DNA molecule.

Before explaining the representation of a boolean expression by DNA, let us define the representation of an input to a boolean expression. An input to a boolean expression is an assignment of a boolean value (true or false) to each variable in the boolean expression. If the true value is assigned to variable x_i, the transition from state var_i to var_i^{+1} is enabled. If the false value is assigned to x_i, the transition from state var_i to var_i^{-1} is enabled. Therefore, an input to a boolean expression containing n variables is represented by a transition table of the form

  stopper·var_1^{±1}·var_1 · stopper·var_2^{±1}·var_2 · ... · stopper·var_n^{±1}·var_n,

where the sign in each superscript is determined by the value assigned to the corresponding variable.

Figure 4: Expected extension.
A boolean expression is also represented by a transition table, which determines which variable to read after a transition caused by an input. For example, consider the following boolean expression:

  x_1 ∧ ¬x_2
This expression is evaluated as follows. First, read the value of x_1. If the value of x_1 is true, read the value of x_2; if the value of x_1 is false, output the false value as the value of the expression. If the value of x_2 is true, output the false value; if the value of x_2 is false, output the true value.

To represent the output values of boolean expressions, we prepare two other states, output^+ and output^-, which denote the true and false values of boolean expressions, respectively. The above example is implemented by the following transition table:
  stopper·var_2·var_1^{+1} · stopper·output^-·var_1^{-1} · stopper·output^-·var_2^{+1} · stopper·output^+·var_2^{-1}
The first entry in this table allows the transition from state var_1^{+1} to var_2. This means that if the value of x_1 is true, the value of x_2 is read. States output^+ and output^- are final states from which no transition is possible. They also denote the outputs of the evaluation of a boolean expression.

The representation of an input and that of a boolean expression are then concatenated, and the initial state is attached, as in Figure 5, with a spacer sequence of an appropriate length. The initial state denotes the variable whose value is read first. In the above example, the initial state is var_1. Successive transitions are then performed, and the final state reached by the transitions denotes the value of the boolean expression.

Notice that the boolean expressions that can be implemented by this method are restricted to those in which the next state is uniquely determined after reading the value of a variable. In other words, each variable is allowed to occur only once. For example, in the boolean expression

  (x_1 ∧ x_2) ∨ (¬x_1 ∧ x_3),

after reading the value of x_1, if it is true, then the next state could be either var_2 (for reading the value of x_2) or output^- (because the conjunct ¬x_1 ∧ x_3 is false). Boolean expressions in which each variable occurs at most once are called μ-formulas. To implement general boolean expressions, one has to prepare copies of variables that occur more than once. For example, if variable x_i occurs twice in a boolean expression, a copy of x_i, say x'_i, must be prepared. The value of x'_i is also given in an input and must be identical to that of x_i. A more ingenious method for implementing general boolean expressions by successive localized polymerization is given by Winfree [20]. In his paper, he also points out that not only boolean expressions but also binary decision diagrams can be implemented by our method.
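To make the evaluation of the example x_1 ∧ ¬x_2 by successive transitions concrete, here is a small sketch; the string state names, the dictionary encoding of the transition tables and the merging of the input table with the program table are only illustrative stand-ins for the DNA representation described above.

```python
# Toy simulation of evaluating the mu-formula  x1 AND (NOT x2)  by successive
# state transitions.  "var1+" stands for var_1^{+1}, "out+" for output^+, etc.

def make_input_table(assignment):
    """Input table: enables var_i -> var_i^{+1} or var_i -> var_i^{-1}."""
    return {f"var{i}": f"var{i}{'+' if v else '-'}" for i, v in assignment.items()}

# Program table for  x1 AND (NOT x2):
PROGRAM = {"var1+": "var2",    # x1 true:  go and read x2
           "var1-": "out-",    # x1 false: the expression is false
           "var2+": "out-",    # x2 true:  NOT x2 is false
           "var2-": "out+"}    # x2 false: NOT x2 is true

def evaluate(program, assignment, initial="var1"):
    table = {**make_input_table(assignment), **program}   # concatenated tables
    state = initial
    while state not in ("out+", "out-"):                  # final states
        state = table[state]                              # one transition (one bio-step)
    return state == "out+"

for x1 in (False, True):
    for x2 in (False, True):
        print(x1, x2, "->", evaluate(PROGRAM, {1: x1, 2: x2}))
```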
  trans(v_i; x, y)       = x·var_i^{+1} · y·var_i^{-1}
  trans(¬e; x, y)        = trans(e; y, x)
  trans(e_1 ∧ e_2; x, y) = concatenate(trans(e_1; first(e_2), y), trans(e_2; x, y))
  trans(e_1 ∨ e_2; x, y) = concatenate(trans(e_1; x, first(e_2)), trans(e_2; x, y))

  first(v_i)       = var_i
  first(¬e)        = first(e)
  first(e_1 ∧ e_2) = first(e_1)
  first(e_1 ∨ e_2) = first(e_1)

Table 2: Translation of μ-formulas.
The algorithm for translating μ-formulas to their representations is described by the function trans(e; x, y) in Table 2. One can obtain the representation of a μ-formula e by computing trans(e; output^+, output^-).
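A direct transcription of the recursion of Table 2 into code might look like the following sketch; the tuple representation of μ-formulas and the helper names are my own choices.

```python
# Sketch of the translation of Table 2.  A mu-formula is a nested tuple:
# ("var", i), ("not", e), ("and", e1, e2) or ("or", e1, e2).  trans(e, x, y)
# returns the (next_state, current_state) pairs of a table that evaluates e,
# ending in state x if e is true and in state y if e is false.

def first(e):
    """State in which the first variable of e is read."""
    if e[0] == "var":
        return f"var{e[1]}"
    return first(e[1])                      # "not", "and", "or": leftmost subformula

def trans(e, x, y):
    kind = e[0]
    if kind == "var":
        return [(x, f"var{e[1]}+"), (y, f"var{e[1]}-")]
    if kind == "not":
        return trans(e[1], y, x)                               # swap true and false
    if kind == "and":
        return trans(e[1], first(e[2]), y) + trans(e[2], x, y)
    if kind == "or":
        return trans(e[1], x, first(e[2])) + trans(e[2], x, y)
    raise ValueError(kind)

# The table for  x1 AND (NOT x2)  of the previous subsection:
table = trans(("and", ("var", 1), ("not", ("var", 2))), "out+", "out-")
print(table)   # [('var2', 'var1+'), ('out-', 'var1-'), ('out-', 'var2+'), ('out+', 'var2-')]
```

Reversing each pair, dict((s, n) for n, s in table), yields exactly the program table used in the evaluation sketch above.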
2.2.3 Parallel Computation by Successive Localized Polymerization

The Adleman-Lipton Paradigm of DNA computing is a paradigm for data-parallel computation (also known as SIMD, single instruction multiple data), because only data are represented as DNA molecules. In our method, on the other hand, not only data but also programs can be represented as DNA molecules. In our previous paper, we described a method for the representation of a restricted class of boolean expressions (called μ-formulas or read-once formulas) using DNA molecules. The inputs to the boolean expressions (assignments of boolean values to variables) are also encoded as DNA molecules. Since programs are represented as DNA molecules, our method provides a very flexible framework for parallel computation. In our method, one can achieve the following three types of parallel computation, depending on how one combines the data and programs on the DNA molecules.

1. An input, a program and a state are all placed on one molecule. Each molecule has its own input and program and behaves as an independent processor (MPMD, multiple program multiple data).

2. An input and a state are placed on one molecule. A program is encoded on DNA in solution in the test tube. Although there is only one kind of program, inputs are processed in parallel (SPMD, single program multiple data).

3. A program and a state are placed on one molecule. An input is encoded on DNA in solution in the test tube. More than one program runs in parallel with the same input. This kind of computation can be applied to the search for programs satisfying given input-output examples (inductive inference).

Winfree discusses inductive inference by successive localized polymerization in depth [20]. He proposes to implement GOTO programs, and shows that many problems, including NP-complete ones, can be solved by searching for desired GOTO programs. The last type of computation can be further elaborated into a dataflow type of parallel computation if a program is allowed to produce an output, which is then passed to another program as an input.
2.3 Dataflow Computation
Let us give an example of dataflow computation using the technique of polymerization stop. Figure 6 shows a program that receives x and y as inputs and produces z as an output.
Figure 6: Dataflow computation by polymerization stop.

If there exist both a sequence having x as its 3'-end and a sequence having y as its 3'-end, then the long sequence representing the program produces a sequence having z as its 3'-end. This output can be given to another program, which is running in parallel. Therefore, local parallelism (LP-BMC) is achieved. To increase the size and the complexity of this dataflow computation, it must be possible to run many dataflows in parallel and to combine them into a composite dataflow. This requires a way of separating one dataflow from the others by enclosing it in a compartment, such as a cell. Moreover, it is necessary to assemble such compartments according to an appropriate topology.
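A schematic way to think about this dataflow behaviour in code (the firing rule and the names are mine): a "program" fires as soon as strands ending in both of its inputs are present, and releases a strand ending in its output, which may in turn enable another program.

```python
# Schematic dataflow firing: a program produces its output once both of its
# inputs are present in the "tube"; outputs of one program can serve as the
# inputs of another.  Polymerization-stop details are abstracted away.

PROGRAMS = [({"x", "y"}, "z"),    # needs x and y, produces z
            ({"z", "w"}, "v")]    # needs z and w, produces v

def run(tube, programs):
    fired = True
    while fired:
        fired = False
        for inputs, output in programs:
            if inputs <= tube and output not in tube:
                tube.add(output)          # the program fires and releases its output
                fired = True
    return tube

print(run({"x", "y", "w"}, PROGRAMS))     # z and then v appear once their inputs exist
```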
We believe that the construction and assembly of compartments will become an important research topic in DNA computing. Compartments are important because they create local environments for molecular interactions. Moreover, if they are assembled and allowed to send molecules to each other, they become communicating processors. Kurtz et al. give a similar idea for organizing molecular computations in terms of "cells" [10]. In their paper, they propose a translation-based approach towards molecular computing in which tCNA (an abstract molecule inspired by tRNA) plays the central role. Among the techniques for making such compartments is that of liposomes [7]. It is well known that, under appropriate conditions, lipids autonomously form spheres in solution. Such liposomes are actually used in many applications in biology, chemistry and medicine.
2.4 From Computation to Structure Construction
It is extremely difficult to foresee the future of DNA computing, but we believe that apart from the research thread towards data-parallel computation under the Adleman-Lipton Paradigm, there must be another thread towards autonomous molecular computers. Autonomous computation by molecular reactions is, in itself, an important research topic. The greater importance of autonomous computation, however, lies in its use as a method for constructing structures on the molecular scale, rather than as a method of computation. Computation by molecular reactions, even with its massive parallelism, is much slower than computation by electronic or ionic flows, and is limited as a computational model. It is more realistic to apply the techniques of molecular computation to the construction of nano-scale structures (including those for computation). Autonomous computation is required for nano-scale construction, because it is still difficult to handle molecules individually (and almost impossible to do so within three-dimensional structures). Recently, DNA-based methods for assembling nanoparticles were reported [13, 2]. In those methods, small single-stranded oligomers are attached to nanoparticles, such as colloidal gold particles, and the self-assembly of the nanoparticles is guided either by small oligomers with sticky ends or by a long single-stranded DNA molecule on which the nanoparticles are aligned. Such research shows the possibility that, if one can construct complex structures out of DNA molecules, one can also organize other kinds of molecules guided by the structures of DNA.
Acknowledgments
The work described in this paper was performed as part of the molecular computer project supported by the
Japan Society for the Promotion of Science under the Research for the Future Program (JSPS-RFTF 96I00101). The author deeply thanks Shigeyuki Yokoyama, Kensaku Sakamoto and Daisuke Kiga for their work on state machines, and all of the other members of the molecular computer project. He also thanks Max Garzon for inviting him to GP-98 and for his comments on an earlier draft of the paper. He finally thanks Erik Winfree for his stimulating ideas, which greatly contributed to the work reported in this paper.
References
[1] Leonard M. Adleman: Molecular Computation of Solutions to Combinatorial Problems, Science, Vol.266, 1994, pp.1021-1024.
[2] A. Paul Alivisatos, Kai P. Johnsson, Xiaogang Peng, Troy E. Wilson, Colin J. Loweth, Marcel P. Bruchez Jr and Peter G. Schultz: Organization of 'nanocrystal molecules' using DNA, Nature, Vol.382, 1996, pp.609-611.

[3] Dan Boneh, Christopher Dunworth and Richard J. Lipton: Breaking DES Using a Molecular Computer, DNA Based Computers, Proceedings of a DIMACS Workshop, April 4, 1995, Princeton University, Series in Discrete Mathematics and Theoretical Computer Science, Vol.27, 1996, pp.37-65.

[4] Russel Deaton, Randy C. Murphy, Max Garzon, D. R. Franceschetti and S. E. Stevens, Jr.: Good Encodings for DNA-based Solutions to Combinatorial Problems, Second Annual Meeting on DNA Based Computers, June 10-12, 1996, DIMACS Workshop, Princeton University, Dept. of Computer Science, pp.131-140.

[5] Russel Deaton, Randy C. Murphy, J. A. Rose, Max Garzon, D. R. Franceschetti and S. E. Stevens, Jr.: A DNA-based Implementation of an Evolutionary Search for Good Encodings for DNA Computation, Proceedings of the 1997 IEEE International Conference on Evolutionary Computation (ICEC'97), 1997, pp.267-271.

[6] Max Garzon, Y. Gao, J. A. Rose, Randy C. Murphy, Russel Deaton, D. R. Franceschetti and S. E. Stevens, Jr.: In vitro Implementation of Finite-State Machines, to appear in Springer LNCS, Proc. Workshop on Implementing Automata (WIA'97).

[7] Gregory Gregoriadis ed.: Liposome Technology, Vol.I-III, CRC Press, Boca Raton, Florida, 1984.

[8] Masami Hagiya, Masanori Arita, Daisuke Kiga, Kensaku Sakamoto and Shigeyuki Yokoyama: Towards Parallel Evaluation and Learning of Boolean
μ-Formulas with Molecules, Preliminary Proceedings, 3rd DIMACS Workshop on DNA Based Computers, June 23-25, 1997, University of Pennsylvania, pp.105-114.
[9] Richard Karp, Claire Kenyon and Orli Waarts: Error-resilient DNA computations, Seventh ACM-SIAM Symposium on Discrete Algorithms, 1996, pp.458-467.

[10] Stuart A. Kurtz, Stephen R. Mahaney, James S. Royer and Janos Simon: Biological Computing, to appear in the Complexity Retrospective II.

[11] Laura F. Landweber and Richard J. Lipton: DNA2DNA Computations: A Potential "Killer App"?, Preliminary Proceedings, 3rd DIMACS Workshop on DNA Based Computers, June 23-25, 1997, University of Pennsylvania, pp.59-68.

[12] Richard J. Lipton: DNA Solution of Hard Computational Problems, Science, Vol.268, 1995, pp.542-545.

[13] Chad A. Mirkin, Robert L. Letsinger, Robert C. Mucic and James J. Storhoff: A DNA-based method for rationally assembling nanoparticles into macroscopic materials, Nature, Vol.382, 1996, pp.607-609.

[14] Mitsunori Ogihara and Animesh Ray: Simulating Boolean Circuits on DNA Computers, Proceedings of the 1st International Conference on Computational Molecular Biology, ACM Press, 1997, pp.326-331.

[15] John H. Reif: Parallel Molecular Computation, Seventh Annual ACM Symposium on Parallel Algorithms and Architectures (SPAA'95), 1995, pp.213-223.

[16] John H. Reif: Local Parallel Biomolecular Computation, Preliminary Proceedings, 3rd DIMACS Workshop on DNA Based Computers, June 23-25, 1997, University of Pennsylvania, pp.243-264.

[17] Kensaku Sakamoto, Daisuke Kiga, Ken Komiya, Hidetaka Gouzu, Shigeyuki Yokoyama, Shuji Ikeda, Hiroshi Sugiyama and Masami Hagiya: State Transitions by Molecules, submitted to 4th DIMACS Workshop on DNA Based Computers, 1998.

[18] Akira Suyama, Masanori Arita and Masami Hagiya: A Heuristic Approach for Hamiltonian Path Problem with Molecules, Proceedings of 2nd Genetic Programming (GP-97), 1997, pp.457-462.
[19] Erik Winfree, Xiaoping Yang and Nadrian C. Seeman: Universal Computation via Self-assembly of DNA: Some Theory and Experiments, Second Annual Meeting on DNA Based Computers, June 10-12, 1996, DIMACS Workshop, Princeton University, Dept. of Computer Science, pp.172-190.

[20] Erik Winfree: to be submitted to 4th DIMACS Workshop on DNA Based Computers, 1998.