Self-Modifying Networks: A Model for the Constructive Origin of Complexity George Kampis
Dept. of Theoretical Chemistry, University of Tübingen, Tübingen and
Dept. of Ethology, Eötvös University, Budapest
Introduction
In this paper we propose a non-algorithmic (yet, in some sense, realizable by computers) mechanism for the increase in complexity. It involves the production of new variables, a 'trick' that can be learned from biological evolution, and one which seems interesting enough. We believe that it can possibly lead to a new, nonconventional style of computational modeling. We shall discuss this idea and suggest applications to various abstract evolutionary processes which are of interest to AI, cognitive science, and AL. We also present a computer model based on the same idea, which shows nontrivial evolutionary properties. The essential part of the material comes from a recent book written by this author [Kampis 1991]. The book deals with many aspects of natural creative processes and offers a framework that integrates several approaches to problems of developing complexity.
Why is Complexity Important for AL?
AL promises that computers will model (or perhaps realize) life; therefore, many people believe we should be able to model all life-like phenomena. One of the most exciting of these is evolution, the spontaneous generation of new life forms. But what is evolution? Darwinism said it is selective survival and reproduction, and a corresponding shift of the genetic constitution of a population. Now, this may be true on some time scales, and we may admit these are the time scales of utmost interest for many subfields of biology, but if we consider the whole evolutionary process, from the primordial soup to man (or woman ...), it is striking to realize that the most visible, and in this sense the most important, of all characteristics of evolution is that in it complexity increases. Whether or not this can be explained, in principle, by competition (i.e. by what is often called gene kinetics) is a question we do not discuss here; let it suffice to recall that J. Maynard Smith, the leading figure of Darwinism, is one of those who consider the increase in complexity an unsolved problem [Maynard Smith 1986]. So a computational approach to evolution must also consider the problem of complexity as a challenge.
Simple and Complex Systems
Computers offer a natural framework for complexity studies anyhow. A juxtaposition of simple and complex systems would amount to comparing the degree of difficulty of their characterization, and the amount of 'interesting' information that can be learned from such a characterization. In this sense a simple system is not computationally intensive and is readily exhaustible by analysis, whereas a complex system is computationally intensive and is not readily exhaustible. Nonlinear phenomena of dynamical systems, as exemplified by chaos or cellular automata, nicely illustrate this principle. Kolmogorov complexity (information complexity) [for good reviews of the development of this field, see Löfgren 1977, 1987] and computational complexity [Garey and Johnson 1979, Wagner and Wechsung 1986] are precise mathematical notions that express essentially the same idea. What is deemed complex by these definitions is what requires lots of computer resources, that is, memory and/or runtime, to cope with. In terms of information complexity the most complex objects are those which cannot be described for a computer in any better way than to give them, with all their details, explicitly. In computational complexity the question is how computation speed depends on problem size (given a processor); the slower, the more complex. For instance, those algorithms which require more than polynomial time for their execution are practically intractable because of their demand for resources. Through these examples, the importance of computational processes with respect to complexity can be easily recognized.
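The link between complexity and computer resources can be played with directly; the sketch below uses compressed size as a crude, computable stand-in for the (uncomputable) Kolmogorov complexity, so the numbers are only illustrative upper bounds:

```python
import random
import zlib

def description_length(data: bytes) -> int:
    # Compressed size is a crude, computable upper bound on the
    # (uncomputable) Kolmogorov complexity of the data.
    return len(zlib.compress(data, 9))

regular = b"ab" * 500                            # a tiny program generates it
patternless = random.Random(0).randbytes(1000)   # no short description known

print(description_length(regular), "<", description_length(patternless))
```

A regular object shrinks to a short description; a patternless one can hardly be described in any better way than by giving it explicitly, just as the text says.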
Computers as Universal Pattern Generators
We can go even further: we can be fairly sure that a computer can represent every complex system by means of a simulation. This is so because a computer can act as a universal pattern generator. For instance, there is little doubt that one can produce, on a computer screen attached to a machine, all possible scenes, or pixel value combinations, and, further, that one can likewise produce all possible sequences of such patterns. In other words, in computer programs we can embed all states and transformations (no matter how complex they are). It would be false, however, to conclude from this that the computer is an ultimate metaphor for complexity. What happens on the screen is one thing; what makes it organized in a way typical of computers is another. There seems to exist a possibility for discovering new classes of complexity which go beyond the classical computational paradigm. Computations may be complex, but there are systems even more complex than that, systems which, unlike computers, come equipped with an inherent tendency for internal complexification. We suggest that self-modifying networks (SMN-s), to be discussed here, offer such a framework, by dealing in a new way not only with complexity, but also with dynamical laws, the determinants of complexity.
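The claim that all pixel combinations of a screen can be produced is, at small scale, directly demonstrable; a sketch (the tiny screen size and 0/1 pixel representation are, of course, illustrative):

```python
from itertools import product

# Every possible width x height black/white "screen". For a 2x2 screen
# there are 2**4 = 16 pixel combinations; the same exhaustive scheme
# works, in principle, for any screen size.
def all_patterns(width, height):
    for bits in product((0, 1), repeat=width * height):
        yield [list(bits[r * width:(r + 1) * width]) for r in range(height)]

print(len(list(all_patterns(2, 2))))  # 16
```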
What is a Computation?
It appears that computers are central to our discussion. Therefore, to fix things, let us characterize computations first. Consider, for instance, the so-called Weak Church thesis: "Every effectively definable function is λ-definable" [e.g. Yasuhara 1971, Rogers 1967]. What this means for the modeler is that there is an immediate connection between the more familiar notion of Turing Machines (TM-s) and the lambda-functions of logic, and that, in a loose sense, all models of computation are equivalent; what is more, they are directly equivalent with TM-s and with our present-day computers (many papers in [Herken 1988] discuss various aspects of this equivalence).
A TM is a finite state control system with a reading and writing head as well as a tape that can be extended in at least one direction and is operated on by the control mechanism. All a TM must be able to do, in order to be a universal computer, is to move left and right, to stay where it is, and to replace symbols on the tape by new ones. Perhaps the simplest models of such a universal computer are given in [Trakhtenbrot 1973]. A TM is an attractive formalization because it offers itself to easy study; from the Church thesis it follows that we can learn everything about computers by studying just TM-s. If we analyse what the TM model entails, we find that a TM consists of a set of primitives (which are the permissible symbols of the tape and the states of the controller), a syntax for prescribing how they are related, and a transition scheme to operate them. Wittgenstein was the first (in his discussions with Bertrand Russell) to make this clear; for an account, see his [1939]. These together define a one-dimensional language. That is, ultimately, every computer is a string processor; even cellular automata and other systems like computer networks, which are intuitively felt "multidimensional", restrict themselves to string processing modes (a somewhat counterintuitive fact we shall discuss later in detail). The best way to see why every computation is a 1D process is to recall that a computer is, according to one of its many equivalent formalizations, a calculator (a "number cruncher"). That is, whatever we do in a computer is ultimately expressed for the computer as numbers, and numbers inhabit the real line. K. Gödel developed a technique, generally called Gödel numbering, which makes this possible. In other words, computer programs do nothing but elementary arithmetic, making ones from zeroes or back. In this sense, computational processes are extremely simple and rigid. All the complexity a computer can have is complexity of the numbers.
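The TM primitives named above (tape symbols, controller states, a transition scheme) fit in a few lines; a minimal sketch, with an illustrative toy machine that appends one digit to a unary number (the rule table is made up for the example):

```python
# A minimal Turing machine: finite control + tape + head, exactly the
# primitives in the text. Rules map (state, symbol) to
# (symbol to write, head move, next state).
def run_tm(rules, tape, state="scan", blank="_", max_steps=1000):
    tape = dict(enumerate(tape))
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += {"L": -1, "R": 1, "S": 0}[move]
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

rules = {
    ("scan", "1"): ("1", "R", "scan"),   # skip over the unary digits
    ("scan", "_"): ("1", "S", "halt"),   # write one more '1', then stop
}
print(run_tm(rules, "111"))  # 1111
```

All it ever does is move left, move right, stay, and replace symbols, yet this suffices, with a large enough rule table, for universality.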
String processing appears to be a primary property of computers, and many other attributes of computing can be derived from it. The computer generation's classic, [Hofstadter 1979], a good book everybody bought but nobody really read, has nice chapters on just this.
The Problem of Representation
An advanced or adventurous computing style can hide this underlying structure of the computational process from the eye. Unlike in the above impoverished picture, our desktop computers are very flexible tools from the point of view of practical computation. This is so because some invisible encodings do a good part of the job for us, and it is only they, or rather their programmers, who must be concerned with the numbers and the transition functions. The reason why it was perhaps not pointless to recall a few largely trivial facts about computing is that they indicate that it is not the computation per se but the chosen representation (in other words, the blinking screen rather than the numbers) that carries the relevant information for the user in a modern application. Now, how is the complexity of a computation, in the scientific terms of computing, related to the complexity sensation we get through the interface? In other words, can the use of suitable interfaces enlarge the capabilities of computers with respect to their ability to deal with complexity? (That they do enlarge the scope of computers' capabilities with respect to "handiness" and the like is obvious.) It is doubtful whether it can, at least unless the question of representation is rethought radically. At the end of the paper we shall discuss how we can try to use "dirty tricks" to form new kinds of representations which in some sense go beyond what computers themselves offer. But the way computer embeddings are defined today is rather limiting. A computer must "know" in advance what to do; that is what programs are for. This gives us all the freedom (because, in a computer, unlike in the physical Universe surrounding us, what we want is what will happen), but it also imposes a constraint. The embedding, or interpretation, of every object that appears in a computer must be prespecified, either directly, as in a look-up table, or indirectly, by means of some generating function. In short, we have to think about all embeddings before we let them work. Object-oriented programming [e.g. Peterson 1987], a widespread methodology today for dealing with computational objects as objects in their own right, provides good examples of what the above strategy signifies. Object-oriented languages like C++ or Oberon [Reiser 1991] offer structures similar to those previously developed in AI under the name frame systems. The main idea is to represent objects as members of a class defined by structured templates that specify properties, inheritance, and relations [Reichgelt 1991]. (Note that "frames" in the sense of Minsky [1977] and of the so-called frame problem of AI [Brown 1987] are not exactly the same, yet they have enough in common.) In the making of an object-oriented system, foresight, specification and mapping play the key role, much as in the case of the numbers. Compilers can only work if they are themselves rigidly structured. We must conclude that every embedding inherits the most basic properties of the embedding system, and is only conceivable relative to them. In this sense, it still all depends on what arithmetic processes can do.
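A minimal sketch of the frame-like templates just mentioned; the class names are purely illustrative, but the point stands: every slot, inheritance link and relation is prespecified before any object exists:

```python
# Frame-like class templates in miniature: properties, inheritance and
# relations are all fixed in advance, the "foresight, specification and
# mapping" the text describes. Names are illustrative.
class Vehicle:                      # the template: every slot prespecified
    wheels = 4
    def describe(self):
        return f"{type(self).__name__} with {self.wheels} wheels"

class Motorcycle(Vehicle):          # inherits the template, overrides a slot
    wheels = 2

print(Motorcycle().describe())  # Motorcycle with 2 wheels
```

Nothing an instance ever does can escape what its template foresaw, which is exactly the constraint at issue.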
We must return to first principles. We have repeatedly said there are complex processes that go beyond the kind of complexity computers can be characterized with, and it is time now to give a hint about what they are like. We will perhaps be surprised to see how easy it is to find examples we all know but do not take seriously.
Non-Computational Processes
Whether there is anything beyond computers is a question many people are interested in, and debates about Chinese rooms and Emperor's minds, which apparently make the computer practitioners angry, put this question in the foreground. But the line of works that deal very seriously with this question goes back to A.N. Whitehead, co-author of the famous Principia ... [Russell and Whitehead 1912]. What I suggest to recall next owes a lot to him, in particular to his [1929]. To put it in one sentence, he pointed out that even computations are not quite computations. (Well, in fact he did not speak about computations, because there were no computers then; he spoke about doing mathematics.) In particular, a natural process is never a computation. Perhaps the best way to show this is by pointing to the physical systems which we use in order to realize given computations (your notebook PC, for instance). A basic fact is this: besides performing a computation, they always do something else, too; one form in which this occurs is what we call 'errors' (as if Nature could go wrong). What happens when a computer makes an error is that another process interferes with the computation, in a way which cannot be foreseen or accounted for in the computation itself. That is why we feel the latter breaks down (although, if we think into it, some computations are performed even by a damaged computer, because, unless completely defunct, it will produce some outputs; only these are not the outputs we wanted). In other words, in every natural process, even in ones especially built for doing nothing but computations, there is the potential of further interactions. The moment these interactions are "switched on", we transcend the computational framework. And, oh no, it doesn't help to redefine your computation in order to include the error-producing processes, for instance. Then all we would have would be a new computation, which would then produce new 'side effects', and so on. (Even a bad system can turn worse.) "Error" is but one instance of the various unexpected interactions that may occur, Whitehead says. New interactions can drive a system into new modes of processing. Technically, he concludes, in every process we deal with a potential infinitude of variables; that this infinity (at least in the sense of the number of potentially relevant variables being 'unbounded') has to be taken seriously is indicated by the above example. That is, strictly speaking, computation is but a fruitful idealization, an example of what we may call a model. But fortunately Nature appears to be 'meaningfully' stratified, and that means that often we can "slice" it safely into systems and levels. What we call computation is what is specific to such a level. That is, usually we do not have to bother with all the variables, other than just a few, maybe a handful, which we have selected. Here, of interest is the Strong Church thesis (a.k.a. Church-Turing Hypothesis, CTH): "Every process is computable." Of course by now we understand that this does not mean 'Nature is a computer'.
That would be a ridiculous statement. The hypothesis is a serious one because it suggests that, whatever process we are studying, it suffices to focus on a computational subset of all the variables that are in principle involved. All the rest is dormant, and unimportant.
Self-Modifying Networks
SMN-s are systems that go beyond the CTH by assuming a mechanism for altering the fundamental primitives and interaction modes that define a computation. The concept, introduced by the author, is based on ideas developed in close collaboration with Vilmos Csányi, Otto E. Rössler, and others. The general background of this research is reported in e.g. [Csányi 1989, Kampis and Rössler 1990], and a complete theory is given in [Kampis 1991]. Definition: An SMN is a set of interconnected elements which are individually computational but together are not. In an SMN the result of individual computations at one node leads to the change of the computation of at least one further node in the network. This relationship holds for every element of the net; therefore, on a mutualistic basis, every node can change, and so does the identity of the whole network.
Because the individual elements can be computational, the self-modification of the system occurs as a consequence of the higher organization that integrates them.
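The definition can be caricatured in a few lines; a toy sketch (node rules and the rewriting scheme are purely illustrative) in which each node is individually computational, yet its output rewrites the rule of a neighbouring node, so the identity of the network changes as it runs:

```python
# A toy self-modifying network: each node holds a rule (a function).
# A node's output REPLACES the rule of the next node, so what the
# network computes is redefined by its own running.
def make_adder(n):
    return lambda x: x + n

rules = [make_adder(1), make_adder(2), make_adder(3)]

def step(rules, x):
    new_rules = list(rules)
    for i, rule in enumerate(rules):
        y = rule(x)                                   # the node's own computation...
        new_rules[(i + 1) % len(rules)] = make_adder(y % 5)  # ...rewrites a neighbour
    return new_rules, y

rules, out = step(rules, 10)
print(out)  # 13
```

Each node, taken alone, is an ordinary computation; the mutualistic rewriting is what the individual nodes do not contain.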
What is a Network?
Regarding networks there is also the question of what we understand by them. In one word, a network is a way of looking at things that are interrelated. The paradigmatic examples of network thinking come from artificial parallel systems like computer networks (LAN-s and WAN-s), neural networks and PDP systems [Anderson and Rosenfeld 1988, Bechtel 1991], cellular automata [for a recent study: Gutowitz 1991] (and their relatives, like the connection machine [Hillis 1985] or transputers [De Carlini and Villano 1991]), and genetic algorithms (conceived as populations of processors) [Holland 1975, Goldberg 1989]. From these examples, we can extract what seems to be central to the "network paradigm". Typically, a network consists of:
- a large number of mutually interconnected elements
- individual elements which are similar or identical
- a homogeneous (often statistical) pattern of interconnections
As it is, in principle each of these assumptions could be relaxed, and we would still get a network. But that is just how things have been set for those systems we came to call "networks" today. By a common name we could call them uniform networks, and we can call the whole circle of thought the classical network paradigm. Uniform networks, or UN-s, are in fact equivalent with Turing Machines. It is easy to see that for every UN there is a global computation rule, which implies uniformity, tractability, and TM equivalence.
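A uniform network in miniature: the sketch below uses an elementary cellular automaton (rule 90, an arbitrary illustrative choice) with identical elements, homogeneous neighbourhood wiring, and one global update rule applied everywhere:

```python
# A uniform network: identical cells, homogeneous wiring (each cell sees
# its two neighbours, on a ring), ONE global rule applied everywhere.
# Rule 90: the new cell value is the XOR of its two neighbours.
def step_rule90(cells):
    n = len(cells)
    return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

row = [0, 0, 0, 1, 0, 0, 0]
for _ in range(3):
    row = step_rule90(row)
print(row)  # [1, 0, 1, 0, 1, 0, 1]
```

The single global rule is exactly what makes such a net tractable and, as the text argues, TM-equivalent.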
Examples For Uniform Networks
Cellular Automata
As defined by von Neumann [1966], cellular automata (CA) are 2D tessellations consisting of identical automaton elements on a grid. In such a formalism the direct equivalence with serial computers is obvious, and was discussed by von Neumann himself. But for the more interesting generalized CA structures called polyautomata [Smith 1976] the situation is less simple. A polyautomaton is simply an arbitrary array of arbitrary automata. At a closer look, however, these automata are not exactly as arbitrary as they could in principle be, and we shall find this condition to be necessary in order for them to be TM-like. Components of a polyautomaton are linked by a common interface and therefore must have their input and output sets matched. If we imagine two arbitrary systems, we wonder just how special a case it is that the output of the one can be taken as input by the other. Furthermore, the inputs must satisfy conditions like nondestructivity upon reception; in other words, we exclude from study all those systems which do not fit nicely to each other. Once such a situation is established we find curious phenomena. For instance:
- fixed and growing CA are mathematically equal [Burks 1961],
- there is a uniform embedding for any heterogeneous automata system.
The first statement, proven by A.W. Burks quite a long time ago, means that it makes no difference whether we consider fixed or growing automata structures, i.e. ones to which we can add further elements at will. The truth of the second depends on the existence of a "master cell" whose state set and transition function is simply the union of those of all cells (or rather, cell types) that pertain to the given net. As per the stated conditions, the list of these types must be explicitly known, and so the formation of such a "big cell", in terms of which we get a homogeneous network, is possible. How such a construction can be carried out is detailed in e.g. [Kampis 1991]. As a result, it is easy to embed CA-s in TM-s, or the other way around.
Parallel Processing
The theory of parallel processing [which started with works like Hoare 1978 or Milner 1980] focuses on issues of parallel semantics, i.e. on the question of how to assign meaning to the interaction of programs. Communication between programs is the I/O of values of their variables, and so far this is like communication of neighbours in a CA. However, here there is a direct concern with how network properties and behaviour can be built up from component properties. Equivalent processes may differ in their actual execution, and this complicates things, even if we know the component programs perfectly well. For instance, the programs x = x + 2 and x = x + 1; x = x + 1 do the same but may interact with other programs differently (depending on when the value is communicated). This indicates that relationships of timing are crucial. The "solution" to the problem of how to describe such temporal communication is not to describe it at all. Instead, protocols are introduced, the same concept we know from our university networks. A protocol defines what can be communicated, and when. Again, the crucial element is that of careful design, an imposed constraint that makes sure that things run smoothly. So, we must conclude that "classical" nets do not differ from individual TM-s at all: a curious situation. After all, a net consists of more than one computer (or computable element); why should it behave as one?
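The timing point made by the x = x + 2 versus x = x + 1; x = x + 1 example can be made concrete; in the sketch below a 'neighbour' reads x after every atomic step, so the two programs, though they compute the same final value, are distinguishable by what they communicate:

```python
# Two "equivalent" programs, modelled as lists of atomic steps. A partner
# that reads x BETWEEN the atomic steps observes different value sets:
# timing, not the function computed, distinguishes them.
def observed_values(steps, x=0):
    seen = set()
    for step in steps:
        x = step(x)
        seen.add(x)       # a neighbour reads x after each atomic step
    return x, seen

prog_a = [lambda x: x + 2]                      # x = x + 2
prog_b = [lambda x: x + 1, lambda x: x + 1]     # x = x + 1; x = x + 1

print(observed_values(prog_a))  # (2, {2})
print(observed_values(prog_b))  # (2, {1, 2})
```

A protocol, in these terms, is a rule restricting when the neighbour may read, so that the intermediate value 1 can never leak out.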
Emergent Properties in Networks
We have numerous examples of ultimately trivial systems which possess certain 'holistic' traits that bring us closer to what nets could do. The literature on emergence goes back to the good old days and there is no point in reviewing it here. Nice old examples of the literature are [Jennings 1927, Miller 1932]; a modern one is [Blitz 1992]. We have well-known examples like:
- encapsulation (e.g. the appearance of the property of being 'inside' in a heap of balls if we throw balls onto each other)
- relational properties (e.g. codes and symbols as examples of arbitrary assignments that only depend on context and convention)
- local/global interfaces (e.g. combinations of previously unrelated processes, such as collisions of trajectories of different systems)
- open/closed relations (opening a closed system to a new interaction, e.g. kicking into a swinging pendulum)
and so on. The reader who just doesn't feel familiar with these notions and is uncertain about their meaning is suggested to consult e.g. [Fodor and Lepore 1992 or Kampis 1992].
SMN-s differ from the numerous simple systems in which emergent interactions, like the ones listed above, occur accidentally. SMN-s utilize these emergent properties for reshaping system dynamics. As an application of the above emergent mechanisms, a most interesting category of emergent properties arises with the operation of adding elements freely to a system. That adding a new interaction (or component) can change the functionality of the old ones is obvious. A particular example we look at more closely is this: Imagine a chain of boxes, linked together in some electronic circuit. Every box has two sets of connecting pins. Let us start to chain boxes together by their respective pin sets. Okay. Now, if we add a new box, so that it links the first and the last elements in the chain, we can change many things: a closed loop emerges. Depending on what's inside the boxes, this can cause a short-circuit, for instance. What does it mean? It means that we can easily transform network interactions from an electronic to, say, a chemical domain by adding an electronic element. (A short-circuit is likely to make chemical changes to the elements. There is no need to view this negatively, as this may well be a desirable action for someone.) This idea is quite generalizable, and lies at the heart of how SMN-s work, emergently (but non-invasively). The general lesson is that the act of linking elements in a network can produce new variables, or, to put it differently, a free network interaction can activate what were 'dormant' or non-triggerable modes of its components. We can thereby extend the original 'computations' of the network elements, in the same way as any computation can be extended by paying respect to material structure, as we have learnt from Whitehead. But unlike errors and other unwanted cross-level interactions in a computer, this process is controllable. Similar is only the possibility of evoking a potentially infinite amount of unpredictable new information.
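The chain-of-boxes example has a simple structural core: one added link turns an open chain into a closed loop, a global property that no single box contains. A toy sketch, with boxes reduced to numbered nodes carrying one outgoing link each:

```python
# Boxes as nodes with one outgoing link. A loop (the precondition for a
# short-circuit) is a property of the whole wiring, not of any one box.
def has_cycle(links):
    # follow the outgoing links from node 0; revisiting a node means a loop
    seen, node = set(), 0
    while node in links:
        if node in seen:
            return True
        seen.add(node)
        node = links[node]
    return False

chain = {i: i + 1 for i in range(4)}      # boxes 0 -> 1 -> 2 -> 3 -> 4
print(has_cycle(chain))                   # False: an open chain
chain[4] = 0                              # ONE added link closes the loop
print(has_cycle(chain))                   # True: a short-circuit is now possible
```

The new variable ('there is a closed loop') appears without modifying any existing box, just as the text describes.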
It would completely miss the point if we tried now to analyze the "network acts" into a fixed set of computations. Sure, once you have got the network (and so everything of interest is already behind you), you can go and see what there is. But then you lose the whole process of making the network. Now, of course, if, once you have built it, the resulting network is fixed, then most of the fun is lost, because the 'wild things' will happen only once in a lifetime: namely, when you set up the network. This produces for you a new computer, defined by the newly activated network variables, if there are any (of course, if you designed your computer in advance, as in the classical network paradigm, nothing unusual will happen). However, if we allow the network to change indefinitely, the redefinition game can go on forever ...
A Class of SMN-s
A class of systems that can incorporate SMN-s has been introduced by the author under the name component-systems. A system is a component-system (CS) if it produces its own components, which come from a combinatorially defined open-ended set. CS-s have several curious properties we shall not be discussing here, such as extradynamical determination (which allows for stopping and restarting processes even if dynamical information is lost, as in e.g. cooling), self-booting (i.e. these systems start their processes immediately if we just insert the components into them, without the use-mention dichotomy familiar from logic and programming), or evolution by 'bootstrapping' (i.e. building a hill and climbing it, recursively), and so on. For a discussion, see [Kampis 1991]. But what is a CS, after all? It is but a clever mechanism, invented by Nature, to realize new network interactions over and over again. The key idea is the ongoing production of new components, or rather new types of components, by the system. In a CS the new components come from a big, in effect infinitely large, pool. Consequently, we have plenty of room for new, unpredictable interactions to emerge. In fact every production step may change the network's variables or its "computing". We propose that CS-s are natural candidates for theoretical biology/cognitive science applications, because they are easy to understand, and are also easy to implement, a question to which we return.
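A toy caricature of a CS: components are reduced to strings, and each production step combines existing components, so the set of component types is combinatorially open-ended and genuinely new types keep appearing:

```python
# A toy component-system: components drawn from an open-ended
# combinatorial set (strings over {a, b}). Each production step
# concatenates existing components, so new TYPES keep appearing.
def produce(pool):
    new = {a + b for a in pool for b in pool}
    return pool | new

pool = {"a", "b"}
for _ in range(2):
    pool = produce(pool)
print(len(pool))  # 30 component types after two production steps
```

No transition table over a fixed state set could list these types in advance; the pool itself is the only record of what the system currently is.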
The Biological Use of SMN-s and the Origin of Complexity
Molecules are perhaps the best examples of what we have just called CS-s. Molecular interactions have nothing to do with computations. They are only computable in retrospect. The tremendous success of molecular modeling does not contradict this. Only those molecular interactions can be modeled which are already known, that is, which took place at least once. In the SMN property of molecular systems the point is that there are always new interactions; the SMN hypothesis says that these are the ones the use of which makes biological systems special. Molecules interact in many ways that have little to do with what their primary 'physical' constitution dictates. More precisely, let us only speak of biological molecules, sometimes also called macromolecules. For them, other rules are valid than for small molecules. Their molecular reactions produce molecules which often cause the others to change their further reactions selectively. Part of what viral infections do, for instance, is interact in an unexpected way with existing structure (for which the cells can have no remedy). By utilizing geometrical form as a determinant of interactions, macromolecules recur to an open-ended set of variables to interact with. Here, the possibility of a new, context-dependent biological process theory is implicit. There are many other biological examples of self-modification. The "principle of function change" [Rosen 1973], or F. Jacob's evolutionary tinkering [1982], are variations on the same idea. Both assume that in biology the existing components are used in ever new ways, and that this is a major effect. Van Valen's Red Queen theory of coevolution [Van Valen 1973] predicts that evolution proceeds because a change in one species introduces a new challenge in the environment for the other species. Again: new property, new interaction.
The emergent possibilities of molecular SMN-s could be exploited in more powerful future molecular computers, which need not be restricted to the transformation of input information.
Distributed Code Systems
The idea that molecular computations cannot be reduced to the string-processing, linear modes of computing was first elaborated in detail by M. Conrad [1985]. Now molecular SMN-s suggest a more complete and fairly non-standard view of information processing. The inevitable metaphor of "genetic code" refers to a classical information definition, according to which "information" is something that can be coded, transmitted, and stored. This is the way computers use information, but it appears not to be the way molecules and other SMN components do. The notion of code implies that there is a reference frame to which the information content can be related or mapped. If, however, the molecules have an information content that depends on the other molecules that surround them, this means for information theory that there is no external reference frame in the first place, and in fact the code for the molecular information content is described in the other molecules that interact with a given molecule. Moreover, because there can be many molecular components involved in the "coding" for one molecule, and also because this code-determination takes place on a mutualistic basis for all reaction network elements, it is proper to say that in a molecular SMN we deal with a distributed code system. Distributed code systems can be expected to have new kinds of information-theoretic properties, to be mapped by future research.
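A minimal sketch of the contrast: in a distributed code there is no external codebook, and a component's information content is a function of whichever components currently surround it (the molecule and neighbour names are purely illustrative):

```python
# In a distributed code the "meaning" of a component is computed from
# its current neighbours, not looked up in a fixed external codebook.
def meaning(molecule, neighbours):
    # the surrounding molecules ARE the code for this one
    return molecule + "|" + ",".join(sorted(neighbours))

ctx1 = meaning("E", {"S1", "S2"})   # same molecule...
ctx2 = meaning("E", {"S1", "X"})    # ...different context
print(ctx1 == ctx2)  # False: no context-free information content
```

Change the reaction network around a molecule and you have changed its code, which is the mutualistic determination described above.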
SMN-s in Theory
It is not necessary (and not sufficient either) for a network to change structurally, as in a CS, in order to become an SMN. For the insufficiency part of this statement, consider growing automata (growing graphs, ... growing whatever) defined by a homogeneous growth rule incorporated in a transition scheme. As the point with SMN-s is the lack of any a priori rule for network growth, these systems are not SMN-s, and are irrelevant from our present viewpoint. SMN-s are producers, not consumers, of their own rules. In a computer, you feed the rules in, and the system uses them. In an SMN you feed components in, and after a while you get rules. To somebody who is only interested in what the rules are, the two may seem 'equivalent', but to anyone who minds the process at all, they are not.
Related Works
There are various works that deal with SMN-like structures but not using CS-like structure. These developments come mostly from theoretical computer science and they consider computers linked non-informationally. One idea is communication without a protocol. It has a predecessor in old cybernetic models such as G. Pask's 'Conversation Theory' [Pask 1975] where agents do not form previous agreements. Let us detail two of the newer ideas. Interactive Proof Theory and Its Possible Implications Interactive proofs (zero-knowledge proofs, Arthur-Merlin games, etc. [e.g. Goldwasser and Sipser 1986, Ben-Or et al. 1989]) involve unreliable interaction between computers { unreliable not in the statistical sense, but in the sense that you don't know, for instance, if your partner is lying, or if it is giving random bits while pretending to solve a problem for you, and so on: : : just as in real life: : : This research results in a fundametally new view of interacting computers and their communication, where there is no more a common language they speak. What is potentially entailed here is this: For instance, an input (or, better, the change of the nature of input) can change a computation nontrivially { not what you think the computation is, but what it actually does. For instance, let us link two computers (TM-s) by feeding the output tape of the one as input tape into the other, but such that the second expects coded integers on which to 10
perform a certain numerical function. Now if the first of these computers begins, due to its own program or due to an error, to "fool" the second by giving it something other than its natural food, natural numbers (while still using the same ones and zeroes), there will be a creative misunderstanding between them. Together they may do things that neither of them alone, nor a carefully designed net with a pre-set interface, could. As a result, the second computer will compute - well, it will compute 'nonsense' if you pretend it should still compute the same numerical function it did. But in fact it will compute something new (maybe to be 'misunderstood' by a third machine, and so on). Hence, we can end up with computers changing each other's computations without anyone having to raise a hand (or having to destroy any old code: it is just used differently). That is, after the computer (i.e. one computer) was invented by Turing, two computers will be invented now. Although many details are still missing, we can still ask: Will there be a second computer revolution?
Relativistic Networks
Einstein's relativity theory offers interesting candidates for processes that can exemplify SMN-like systems. Relativity theory is about the slowness of interaction in a system. Now, if the interactions are slow compared to the geometrical size of the system, the ends may never meet. The speed of light, for instance, imposes constraints on the degree to which different clocks can be synchronized or dealt with in a common framework. That is, the architecture of the system determines what kinds of events belong to the same system and what belongs to a different one (although both are located in the same Universe). As an extreme case, we can imagine that distant parts of a system develop as semi-isolated for a long period but develop new interactions later. Relativity theory, therefore, allows us to maintain local-global distinctions and to introduce new interactions, which, in turn, may lead to SMN-s.
In short, in a relativistic system there is, at any point, a potential that is unused but usable later. A relativistic system is anything but a computer. An interesting thing about relativity is that its logic is applicable to systems that operate far from the physical limits, and therefore the above idea is not restricted to the cosmological scale. Computer networks, for instance, suffer from exactly the same problems of timing as discovered by Einstein. A recent dissertation in computer science [Murphy 1990] showed that such effects can lead to new types of problems, some aspects of which go beyond traditional computer science and resemble our discussions of SMN-s. It seems that, in principle, we could build computer networks which utilize relativistic effects (that is, ultimately, nothing but "bad communication") to realize emergent phenomena.
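The "creative misunderstanding" between two linked machines, described above, can be sketched in a few lines. This is a hypothetical illustration (both machine functions and the codings are invented here): one machine emits a bit stream, the other insists on reading it as 8-bit integers and squaring them, and nothing in the coupling guarantees that this reading is the intended one.

```python
# A hypothetical sketch: two "machines" linked by a raw bit stream,
# with no shared protocol fixing what the ones and zeroes mean.

def machine_a(numbers, honest=True):
    """Emit a bit stream. Honestly: 8-bit binary integers. Otherwise the
    same ones and zeroes carry something else (here: a unary coding)."""
    if honest:
        for n in numbers:
            yield from format(n, "08b")
    else:
        for n in numbers:
            yield from "1" * n + "0"   # unary: n ones, then a separator zero

def machine_b(bits):
    """Decode the stream as 8-bit integers and square each one. Machine B
    has no way to know whether that reading is the intended one."""
    buf = []
    for b in bits:
        buf.append(b)
        if len(buf) == 8:
            yield int("".join(buf), 2) ** 2
            buf.clear()

print(list(machine_b(machine_a([3, 5], honest=True))))   # prints [9, 25]: the 'intended' computation
print(list(machine_b(machine_a([3, 5], honest=False))))  # prints [57121]: same coupling, a new computation
```

Neither program was rewritten between the two runs; only the unplanned relation between them changed, which is the point of the example.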
Realizations of SMN-s on a Computer
Well, you can't exactly realize a concrete, real-world SMN on a computer, because that would require a knowledge of all the system variables in advance - and then it would not be an SMN any more. But we can make computers behave like artificial SMN-s. By studying these, we can study natural SMN-s (or at least some of their generic properties) indirectly. A methodology for realizing SMN-s as programs would have to utilize interactions of several independent programs, built with a 'don't care' philosophy (that is, without specifying or designing how the one affects the other). The idea is: don't design things, lean back, and let the system work.
For instance, for realizing CS-s we could apply some form of artificial chemistry with new properties added whenever new components interact. For that we need property generators which fill the interaction/property matrix and increase its dimensionality. Such a property generator must be non-algorithmic in order to keep things flexible; for instance, it could work with the help of other programs (not the same one which does the simulation). Here we can understand that the computer can play a new kind of role. We also see why (and in what sense) SMN-s are realizable on a computer at all. Unlike in traditional computing, where the arithmetical and logical abilities of the computers are used, here the computer, as an embedding framework, serves combinatorial purposes only: namely, it brings different sources of information into relationship. What these pieces are, and what they do with each other, is not its concern.
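A minimal sketch of such a growing interaction/property matrix (all names and the choice of "properties" are this illustration's own, and plain randomness stands in for any source external to the simulation): whenever a new component type appears, the generator fills in how it interacts with every existing type, raising the matrix's dimensionality.

```python
import random

def property_generator(a, b, rng):
    # Not part of the simulation's own rules: it invents the interaction.
    return rng.choice([None, a + "+" + b, a + b])  # no reaction / complex / fusion

def extend_matrix(matrix, types, new_type, rng):
    """Admit a new component type: fill its row and column of the
    interaction/property matrix via the external generator."""
    types.append(new_type)
    for t in types:
        matrix[(t, new_type)] = property_generator(t, new_type, rng)
        matrix[(new_type, t)] = property_generator(new_type, t, rng)

rng = random.Random(0)
types, matrix = [], {}
for t in ["a", "b", "c"]:
    extend_matrix(matrix, types, t, rng)
print(len(matrix))  # prints 9: a full 3x3 matrix grew with the system
```

The simulation proper would only consult the matrix; what goes into it comes from elsewhere, which is the sense in which the scheme is "non-algorithmic" with respect to the simulation.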
A Computer Model
A first computer model utilizing the SMN philosophy was built in 1983 and published in 1987 by the author together with V. Csanyi [Kampis and Csanyi 1987]. The model simulated an emergent evolutionary process producing complex permanent structures in a system of changing composition. The system consists of a two-dimensional grid, the points of which correspond to component-types (i.e. it is a functional rather than a geometrical space). Components can interact with each other along the grid. The interactions can be of two kinds. One of them modelled aspects of chemical catalysis; the other stood for a nonspecific production of new components. The modelled process was a kind of abstract chemistry, that is, the production of new components from the old ones. Of key importance to the model was: what happens with the newly produced components? How do they interact with the rest of the system? The idea was to use random property generators (i.e. a random function to tell which other components to interact with, and how). The interpretation is randomness as ignorance. Randomness does not necessarily mean probability; it can mean that we lack information, or do not suppose anything, about the interacting agents. The model produced self-selecting evolution and the spontaneous emergence of stable structures in spite of the self-modification and the random elements. This shows that (1) self-modification is realizable and leads to interesting behaviour; (2) non-computable processes do not necessarily mean "nonsense" just because they are not computable, or "rational", for that matter. For details of the model, as well as a theory of evolution using self-modification rather than Darwinian selection, see the original source.
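The overall scheme can be conveyed by a toy program in the same spirit. This is emphatically not the published implementation - the grid size, rates, and update rule are all invented here - but it shows the two interaction kinds and the role of the random property generator: a new type's behaviour is fixed, at the moment it first appears, by random draws rather than by any pre-set rule.

```python
import random

# Toy sketch, not the Kampis-Csanyi model itself: component types on a grid,
# with catalytic copying plus nonspecific production of brand-new types.
rng = random.Random(1)
SIZE, STEPS = 8, 200
grid = [[0 for _ in range(SIZE)] for _ in range(SIZE)]
affinity = {(0, 0): 0.5}   # interaction/property "matrix"; grows as types appear
next_type = 1

for _ in range(STEPS):
    x, y = rng.randrange(SIZE), rng.randrange(SIZE)
    nx, ny = (x + rng.choice([-1, 1])) % SIZE, y   # a horizontal neighbour
    a, b = grid[x][y], grid[nx][ny]
    if rng.random() < affinity[(a, b)]:
        grid[nx][ny] = a                           # catalysis: type a replicates
    elif rng.random() < 0.05:                      # nonspecific production
        new = next_type
        next_type += 1
        for t in set(c for row in grid for c in row) | {new}:
            affinity[(t, new)] = rng.random()      # randomness as ignorance:
            affinity[(new, t)] = rng.random()      # nothing is presupposed
        grid[nx][ny] = new

print("types ever produced:", next_type)
```

Even in this crude form the composition of the system, and with it the effective rule set, drifts as the run proceeds, which is the property the original model was built to study.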
Practical Implications
A potential avenue for the use of SMN-s is complexity reduction. Complexity reduction may play an important role in cognitive science/AI for realizing memory based on directed (re)construction rather than storage. Complexity can be achieved in two ways: by explicit representation, as in a computer, or by construction, as in an SMN. The idea, to be taken seriously, is that simple systems can produce complex ones. In a chemical CS, you don't have to buy 10^10,000 chemicals to have a system with 10^10,000 states; it may be sufficient to buy five (as pointed out by O.E. Rossler [1984]). The rest of the complexity will be "manufactured" in the recursive process of newer and newer interactions. SMN-s offer to capture just this kind of productivity.
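The orders of magnitude can be checked directly; the chain picture below is this sketch's own simplification of the combinatorial point. If five component types can be combined into chains, then chains of length 14,307 alone already offer more than 10^10,000 distinct combinations, since 5^14307 > 10^10000.

```python
# Five building blocks, chains of length 14,307: more than 10^10,000
# combinations (Python integers are arbitrary-precision, so this is exact).
combinations = 5 ** 14307
assert combinations > 10 ** 10000
print(len(str(combinations)))  # prints 10001: just over 10,000 decimal digits
```

The explicit-representation route would have to enumerate these states; the constructive route only has to supply the five components and let the interactions do the rest.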
Acknowledgment
The paper was written with the support of the German Research Foundation (DFG), in the framework of a research grant entitled "New Concepts of Complexity". The support is gratefully acknowledged. The author also wishes to thank Professor O.E. Rossler for his hospitality and co-operation.
References
1. Anderson, J.A. and Rosenfeld, E. (eds.) 1988: Neurocomputing, MIT Press, Cambridge.
2. Bechtel, W. 1991: Connectionism and the Mind: An Introduction to Parallel Processing in Networks, Blackwell, Cambridge.
3. Ben-Or, M. et al. 1989: Everything Provable is Provable in Zero-Knowledge, in: Advances in Cryptology - CRYPTO '88 (ed. Goldwasser, S.), Springer LNCS 403, pp. 37-56.
4. Blitz, D. 1992: Emergent Evolution: Qualitative Novelty and the Levels of Reality, Kluwer, Dordrecht.
5. Brown, F.M. (ed.) 1987: The Frame Problem in Artificial Intelligence: Proceedings of the 1987 Workshop, Morgan Kaufmann, Los Altos, CA.
6. Burks, A.W. 1961: Computation, Behavior, and Structure in Fixed and Growing Automata, Behavioral Science 6, 5-22.
7. Conrad, M. 1985: On Design Principles of a Molecular Computer, Comm. ACM 28, 464-480.
8. Csanyi, V. 1989: Evolutionary Systems and Society: A General Theory, Duke University Press, Durham.
9. De Carlini, U. and Villano, U. 1991: Transputers and Parallel Architectures: Message-Passing Distributed Systems, Ellis Horwood, New York.
10. Fodor, J. and Lepore, E. 1992: Holism: A Shopper's Guide, Blackwell, Cambridge, Mass.
11. Garey, M.R. and Johnson, D.S. 1979: Computers and Intractability: A Guide to the Theory of NP-Completeness, Freeman, San Francisco.
12. Goldberg, D.E. 1989: Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, Reading, Mass.
13. Goldwasser, S. and Sipser, M. 1986: Arthur-Merlin Games versus Interactive Proof Systems, Proc. 18th STOC, pp. 59-68.
14. Gutowitz, H. (ed.) 1991: Cellular Automata: Theory and Experiment, MIT Press, Cambridge.
15. Herken, R. (ed.) 1988: The Universal Turing Machine: A Half-Century Survey, Oxford University Press, Oxford.
16. Hofstadter, D.R. 1979: Godel, Escher, Bach: An Eternal Golden Braid, Basic Books, New York.
17. Holland, J.H. 1975: Adaptation in Natural and Artificial Systems, U. of Michigan Press, Ann Arbor.
18. Hillis, W.D. 1985: The Connection Machine, MIT Press, Cambridge.
19. Hoare, C.A.R. 1978: Communicating Sequential Processes, Comm. ACM 21, 8-25.
20. Jacob, F. 1982: The Possible and the Actual, University of Washington Press, Seattle.
21. Jennings, H.S. 1927: Some Implications of Emergent Evolution; Diverse Doctrines of Evolution - Their Relation to the Practice of Science and of Life, The Sociological Press, Minneapolis, Minn.
22. Kampis, G. 1991: Self-Modifying Systems in Biology and Cognitive Science: A New Framework for Dynamics, Information and Complexity, Pergamon, Oxford-New York, 543 pp.
23. Kampis, G. 1992: Causal Processes in Cognition and Elsewhere, in: Proceedings of ICCS-91 (ed. Ezquerro, J.), Kluwer, in press.
24. Kampis, G. and Csanyi, V. 1987: A Computer Model of Autogenesis, Kybernetes 16, 169-181.
25. Kampis, G. and Rossler, O.E. 1990: How Many "Demons" Do We Need?, in: Cybernetics and Systems '90 (ed. Trappl, R.), World Scientific, Singapore.
26. Lofgren, L. 1977: Complexity of Descriptions of Systems: A Foundational Study, Int. J. General Systems 3, 197-214.
27. Lofgren, L. 1987: Complexity of Systems, in: Systems and Control Encyclopedia (ed. Singh, M.), Pergamon, Oxford, 704-709.
28. Maynard Smith, J. 1986: The Problems of Biology, Oxford University Press, Oxford.
29. Miller, D.L. 1932: Emergent Evolution and the Scientific Method, U. of Chicago Press, Chicago.
30. Milner, R. 1980: A Calculus of Communicating Systems, Springer LNCS 92, Berlin.
31. Minsky, M. 1977: Frame-System Theory, in: Thinking (eds. Johnson-Laird, P. and Wason, J.), CUP, Cambridge.
32. Neumann, J. von 1966: The Theory of Self-Reproducing Automata (ed. Burks, A.W.), U. of Illinois Press, Urbana-Champaign.
33. Pask, G. 1975: Communication, Cognition and Learning, North-Holland, Amsterdam.
34. Peterson, G.E. (ed.) 1987: Tutorial: Object-Oriented Computing, Computer Society Press of the IEEE, Washington.
35. Reichgelt, H. 1991: Knowledge Representation: An AI Perspective, Ablex, Norwood.
36. Reiser, M. 1991: The Oberon System: User Guide and Programmer's Manual, ACM Press, New York.
37. Rogers, H. 1967: Theory of Recursive Functions and Effective Computability, McGraw-Hill, New York.
38. Rosen, R. 1973: On the Generation of Metabolic Novelties in Evolution, in: Biogenesis, Evolution, Homeostasis (ed. Locker, A.), Springer, Berlin.
39. Rossler, O.E. 1984: Deductive Prebiology, in: Molecular Evolution and Protobiology (eds. Matsuno, K., Dose, K., Harada, K. and Rohlfing, D.L.), Plenum, New York, pp. 375-385.
40. Russell, B. and Whitehead, A.N. 1912: Principia Mathematica, Cambridge University Press, Cambridge.
41. Smith, A.R.S. 1976: Introduction to and Survey of Polyautomata Theory, in: Automata, Languages, Development (eds. Lindenmayer, A. and Rozenberg, G.), North-Holland, New York, pp. 405-422.
42. Trakhtenbrot, B.A. 1973: Finite Automata: Behavior and Synthesis, North-Holland, Amsterdam.
43. Van Valen, L. 1973: A New Evolutionary Law, Evolutionary Theory 1, 1-30.
44. Wagner, K. and Wechsung, G. 1986: Computational Complexity, Reidel, Dordrecht.
45. Whitehead, A.N. 1929: Process and Reality, Free Press, New York.
46. Wittgenstein, L. 1976: Wittgenstein's Lectures on the Foundations of Mathematics, Cambridge, 1939 (from the notes of R.G. Bosanquet, Norman Malcolm, Rush Rhees, and Yorick Smythies, ed. Diamond, C.), Cornell University Press, Ithaca.
47. Yasuhara, A. 1971: Recursive Function Theory and Logic, Academic Press, New York.