A Framework for the Automatic Identification and Extraction of Computation from Materials

Simon Harding1, James Neil2, Klaus-Peter Zauner3, Julian F. Miller4, and Kester Clegg5

1 Department of Computer Science, Memorial University, Canada, [email protected]
2 [email protected]
3 School of Electronics and Computer Science, University of Southampton, UK, [email protected]
4 Department of Electronics, University of York, UK, [email protected]
5 Department of Computer Science, University of York, UK, [email protected]

Abstract. This paper describes our intentions for advancing the field of exploiting physical systems for computation. We identify two major issues in unconventional computation. The first is finding a system that can be used as a “physical processor”. The requirements are unclear, as we have limited experience in exploiting the unknown for computation. The second is determining the most appropriate ways to program and interface to a “physical processor”; again, this stems from our lack of experience in working with systems that we have neither designed nor fully understand. We discuss a possible mechanism that will allow us to go from the identification of suitable computational materials through to the practical exploitation of their computational abilities.

1 Introduction

Today’s computing, classical computing, is a remarkable success story. However, there is a growing appreciation that it encompasses only a small subset of all computational possibilities. Several paradigms seem to define classical computing, but they may not hold for all of computation. Classically, computation is viewed mathematically, in terms of algorithms, complexity and the like, based on the underlying, essentially mathematical, model of the Turing Machine. The fact that computation is physical, that it is necessarily embodied in a device whose behaviour is guided by the laws of physics and cannot be completely captured by a closed mathematical model, is becoming more apparent as we push the bounds of those physical laws. It has been argued that by exploiting the physical properties of a system, we may be able to obtain a computational advantage. For example, as material systems are inherently parallel architectures, programming them to perform a desired computation can confer dramatic reductions in computation time. Indeed, Mills et al. [1] solve partial differential equations using an analogue physical computer in times many orders of magnitude faster than current conventional architectures, and this speed-up may be due to exploitation of the parallel features of the system. An extreme view of the benefits of material computation is that exploiting physical systems may allow for super-Turing computation; for example, Rosen argues that biological systems implement computation by exploiting material properties rather than formal programming [2].

When attempting to use a physical system for computation, it is important to be able to identify the categories of computational processes that the material and other components of the system are performing. This is a difficult task; however, if we are to understand the limits of materials we must have a framework in which we can pose, and tractably answer, questions about the computational properties of materials.

1.1 Computation Using Materials and Physical Systems

Programming physical systems to perform computation has proved difficult. Indeed, this, together with the limitations of the hardware, prevented traditional analogue computing from being practically useful. Human designers find it difficult to work with complex and non-linear systems, especially where some properties of the system being programmed are unknown. The usual response is to construct logic gates and, from these, a traditional von Neumann architecture, so that programming becomes a top-down design process leading to the desired computation. According to Conrad, this process makes us pay “The Price of Programmability” [3]: in conventional programming and design we proceed by excluding many of the processes that might lead to a solution of the problem at hand. Some physical systems, such as those of Pask [4][5] and Mills [6][7], have been successfully programmed through a process of trial and error. However, this is highly inefficient and undoubtedly restricts us to solving only the simplest problems.

1.2 Evolution In Materio

It has been shown that the unconstrained search of evolution can find configurations of a physical system that can be used for computation. Thompson found that evolution could unintentionally exploit “unknown” properties of an FPGA to perform tone discrimination [8]. More recently, Harding and Miller have shown that evolution can configure liquid crystal to act as a tone discriminator and as a robot controller [9][10]. In the liquid crystal experiments, the intention was to demonstrate that evolution could program a system that humans would have great difficulty in working with. Liquid crystal had never been used for computation in this way before, and there was no known technique for finding appropriate configurations. Harding and Miller encountered one major problem when working with the liquid crystal: the solutions were highly unstable [10]. The behaviour of the system varied dramatically, and the results would be unusable for practical purposes.

However, the work did demonstrate that evolution is able to operate even in cases where the genotype-phenotype mapping is noisy.

2 The Challenge

Previous work has highlighted several challenges in “evolution in materio”:

– It is unclear which materials are most suitable. We need materials that are configurable, that affect applied signals in useful ways, and that are robust.
– It is unknown what types of computation are most suitable to be performed by these materials.
– Materials, such as liquid crystal, do not have a “designed” interface. It is unknown which methods are most appropriate for communicating with and configuring the materials: for example, whether static or alternating voltages are better, which voltage range is most suitable, or how long a computational result takes to manifest.
– So far, material systems have been configured by applying a constant configuration pattern; however, this may not be appropriate for all systems. It may be necessary to put the physical system under some form of responsive control, in order to program it and then keep its behaviour stable.

3 The Approach

We may or may not know whether a particular material can be used to perform some form of computation. We can treat the material as a “black box” and, using evolution as a search technique, automatically discover what computations, if any, the black box can perform. The first step is to build an interface that allows us to communicate with a material (described in section 6). We will then use evolution to find a configuration to apply through this platform, and attempt to find a mapping from a given problem to an input suitable for the material, and a mapping from the material’s response to an output. If this is done correctly, we will automatically be able to tell whether a material can perform computation, and then classify the computation. This is described in more detail in section 7.
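As a concrete illustration of this loop, the sketch below treats a stand-in material as a black box and uses a simple evolutionary strategy to search for a configuration. Everything here is hypothetical: `apply_to_material` merely simulates an unknown but repeatable device (a seeded pseudo-random response stands in for the real hardware interface), and the configuration length and mutation parameters are arbitrary.

```python
import random

def apply_to_material(config, inputs):
    # Hypothetical stand-in for the hardware: applies a configuration
    # (e.g. a vector of electrode voltages) and returns a repeatable,
    # but unknown, response. A real system would drive the electrode array.
    rng = random.Random(hash((tuple(config), tuple(inputs))))
    return [rng.uniform(-1, 1) for _ in inputs]

def fitness(config, cases):
    # cases: list of (problem_input, desired_output) pairs.
    error = 0.0
    for x, target in cases:
        response = apply_to_material(config, x)
        error += abs(response[0] - target)
    return -error  # higher (closer to zero) is better

def evolve(cases, config_len=8, pop_size=20, generations=50):
    # Simple truncation-selection evolutionary strategy over configurations.
    pop = [[random.uniform(0.0, 5.0) for _ in range(config_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda c: fitness(c, cases), reverse=True)
        parents = pop[:pop_size // 2]
        children = [[g + random.gauss(0, 0.1) for g in random.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return pop[0]
```

The same loop applies unchanged whether the black box is a simulation, as here, or a physical sample behind a measurement interface.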

4 Candidate Materials

It is unclear which materials may be best for performing computation in the manner described. In the most general sense, we need materials whose physical properties can be manipulated (preferably by electrical means) and whose effect on some form of incident signal changes when their properties are altered. Miller and Downing suggested a number of candidate materials, including liquid crystal, Langmuir-Blodgett films and nano-particle suspensions [11]. Preliminary work has shown that liquid crystal is a suitable medium; however, it has many properties that appear undesirable. We could also exploit the properties of existing silicon devices, perhaps modifying them using radiation, and see if there is any advantage in evolving with such systems. Adamatzky has had success in using Belousov-Zhabotinsky reactions to perform computations [12], and such systems could also be used in this setting. We further suggest living systems, such as neurons [13], bacterial consortia [14][15] or slime moulds [16], as candidate materials. Clearly, there is a vast number of potential materials for performing computation, and it is unclear which are most suitable. We therefore expect to try a large number of candidates and to use a technique (such as that described in section 4.1) that helps guide our search for appropriate materials.

We suggest that for initial experimentation, before committing the technique to a hardware implementation, the task of automatically identifying computation should be tried in a virtual system. It is known that virtual systems such as recurrent neural networks or some cellular automata are capable of performing computation. In the first instance, our black box could contain such a system, and we would then demonstrate the ability of our approach to find a suitable “programming” technique.
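A minimal virtual black box of this kind can be built from an elementary cellular automaton; rule 110 is known to be computationally universal. The way the configuration and the input share the tape below, and the choice of read-out cells and step count, are purely illustrative assumptions.

```python
def eca_step(cells, rule):
    """One synchronous update of an elementary cellular automaton
    (wrap-around boundary). `rule` is the Wolfram rule number."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

def virtual_material(config_bits, input_bits, steps=8, rule=110):
    # The evolved "configuration" seeds part of the tape, the problem
    # "input" seeds the rest; the output is read from a fixed set of
    # cells after a few update steps. All of this layout is illustrative.
    cells = list(config_bits) + list(input_bits)
    for _ in range(steps):
        cells = eca_step(cells, rule)
    return cells[:len(input_bits)]
```

Because the virtual material is deterministic and cheap to evaluate, it lets the configuration-search and mapping-search machinery be debugged before any hardware exists.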

4.1 Automated Exploration of Materials

There exists a vast space of materials that could conceivably be exploited as physical substrates for computation. One only needs to consider the combinatorial design spaces opened up by organic chemistry or molecular biology. The materials most likely to provide a practical advantage over conventional programmable devices are those capable of complex response behaviour. Numerous inseparable component interactions typically render the simulation of such materials computationally expensive and make experimentation necessary. Autonomous experimentation techniques that combine machine learning and laboratory automation for closed-loop experimentation are well suited to probing the response phenomena of materials on a large scale. The concept of scouting, for instance, has been developed as a tool to identify the behaviour of context-sensitive components that may be utilised in computing devices [17]. Central to the scouting approach is the incremental development of an empirical response model for the explored parameter space. This empirical model serves to form expectations for measurements under heretofore unexplored experimental conditions. The degree of discrepancy between expectations and subsequent observations is used for prioritising parameter settings for further probing: the more surprising the outcome of a test, the higher the attention given to the conditions under which it was performed. Every observation is used to update the empirical model, so each measurement can have an immediate effect on the decision as to which test to perform next. The algorithm borrows from communication theory the notion that information is equivalent to ‘surprise value’, and combines it with evolutionary computation to guide the search for complex response phenomena.

5 Mitigating the Environment

A major problem when working with intrinsic evolution is separating the computation allegedly performed by the target device from that actually done by other parts of the system. As with Thompson’s FPGA exploiting subtle physical properties of the device, Layzell found that evolved circuits could rely on external factors. For example, whilst trying to evolve an oscillator, Bird and Layzell discovered that evolution was using part of the circuit as a radio antenna, picking up emissions from the environment [18]. Layzell also found that evolved circuits were sensitive to whether or not a soldering iron was plugged in (not even switched on) in another part of the room [19]. An evolved device may not be useful if it is highly sensitive to its environment in unpredictable ways, and it will not always be clear which environmental effects the system is using. It would be unfortunate to evolve a device for use in a spacecraft, only to find that it fails to work once out of range of a local radio tower. However, environmental sensitivity can also be exploited, for example when constructing sensors [18] or building circuits that change functionality in different environments [20]. For the purposes of practical computation it seems we should minimise these risks, and we will need to check the operation of evolved systems under different conditions to encourage robust behaviour. We will need to test the behaviour of a device using a different set-up in a different location. It will be important to know whether a particular configuration works only with one particular sample of a given material.
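Such cross-condition checks could be automated along the following lines. The `evaluate` callback and the tolerance threshold are hypothetical; in practice the "environments" would be physically distinct set-ups, samples and locations rather than values passed to a function.

```python
import statistics

def robustness_check(configuration, evaluate, environments, tolerance=0.1):
    """Re-test an evolved configuration under several different conditions
    (different sample, location, temperature, ...) and flag solutions whose
    performance spread exceeds a tolerance. `evaluate(configuration, env)`
    is assumed to return a scalar fitness score."""
    scores = [evaluate(configuration, env) for env in environments]
    spread = max(scores) - min(scores)
    return spread <= tolerance, statistics.mean(scores), spread
```

A configuration that passes only in the environment in which it was evolved is exactly the kind of fragile solution the text above warns about.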

6 Hardware Platform

An evolvable motherboard (EM) [19] is a circuit that can be used to investigate intrinsic evolution. Previous designs of evolvable motherboards, such as Layzell’s [19] and Crooks’ [21], have been used to evolve circuits containing traditional electronic components. The EM provides a way for a circuit to be rewired under computer control. Harding and Miller developed a “liquid crystal evolvable motherboard” (LCEM), which shares many of the properties required for the problems described in this paper. However, the existing boards are not standalone, and require a PC to run the evolutionary algorithm and to generate and record signals. For the large-scale tasks we propose, such an approach would be too expensive; we therefore suggest the approach described below as more practical.

6.1 The Specification

We need to be able to interact with the candidate materials in a number of ways; primarily this will be electrical, but we may also wish to interact optically, magnetically or physically. Using a micro-electrode array, we can apply electrical signals to a small volume of the material. The pins of the array need to be linked to circuits that can either generate or record an arbitrary signal. The material may also require connections to ground. The signal generators, and where they connect to, will need to be configurable by evolution; similarly, where in the material the circuits for recording signals will be connected. We anticipate that for dielectrophoresis of particle systems we may need to generate signals of several megahertz, and we may need to sample at a similar frequency.

Fig. 1. Overview of interfacing candidate materials to a computer.

Fig. 2. A signal capture and generating unit.

Figure 1 shows the general form of the system required for a single candidate material. As we will need to try a variety of materials, it will be important to test a number of materials in parallel. This can be done using an array of candidate units. We will also need to monitor experiments and permanently store appropriate information. The supervisor computer can also communicate with other systems located in different institutions. This will allow the behaviour of the same material system to be compared in a number of locations, minimising environmental effects.

6.2 Micro-electrode arrays

We have considered several different approaches for constructing the micro-electrode array. The first is to etch an array out of silicon and mount it in a standard chip package. This could then be used in a ZIF socket, allowing us to change the array as needed. Fabricating an array in this manner would allow small features to be produced, down to the nanometre scale. A lower-cost approach, but with a much larger feature size, is to use standard PCB fabrication techniques to produce a board with exposed contacts. To prevent corrosion, it may be beneficial to coat the electrodes in a non-reactive material such as gold. Similarly, another common technique for producing electrode arrays is to pattern them onto glass using optical lithography. Again, the electrodes can be plated with a less reactive metal to improve durability.

Fig. 3. Interfacing to the material using Genetic Programming.

6.3 Possible circuit implementation

The specification essentially describes an evolvable motherboard with signal generation and signal processing capabilities. We expect to have to work with a large number of relatively high frequency signals. We propose a system with a large number of analogue-to-digital and digital-to-analogue converters connected to a substantial amount of DRAM. A micro-controller will be responsible for copying the output of the ADCs into memory, and for playing back signals from memory, as illustrated in figure 2. This will allow us to generate high frequency, arbitrary signals easily, to sample at high frequency for short periods of time, and then to collate and analyse the data. Each pin on the micro-electrode array may not need its own signal generating/capture unit. Work with the LCEM suggested that the ability to route any pin to one of a smaller number of signal generators/recorders was sufficient. Routing the pins through an analogue switch fabric will reduce costs considerably, as fewer signal units need to be constructed. As with the LCEM, this fabric can also serve to connect pins to ground, or to static voltages.
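The routing decision itself could be placed under evolutionary control by decoding part of the genotype into switch-fabric settings. This is a simplified sketch under stated assumptions: the pin count, unit names and modulo decoding are illustrative inventions, not the LCEM's actual encoding.

```python
def decode_routing(genotype, n_pins, units):
    """Map a genotype (one integer gene per electrode pin) onto the analogue
    switch fabric: each pin is routed to a signal generator/recorder unit,
    tied to ground, or left floating. `units` lists the available units."""
    targets = list(units) + ['GND', 'FLOAT']
    return {pin: targets[gene % len(targets)]
            for pin, gene in enumerate(genotype[:n_pins])}
```

Because every gene decodes to a valid routing, mutation and crossover can never produce an unwirable configuration, which keeps the search space well behaved.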

7 The Identification of Computation

When we evolve in materio, using mappings evolved in software, how can we tell whether the material is giving us any real benefit? The lesson of evolution in materio has been that the evolved systems can be very difficult to analyse, and the principal obstacle to the analysis is the problem of separating out the computational role each component of the evolved system plays in the final result. Ideally, we want to be able to decompose the entire computation so that we can determine the role each component is playing, and hence show when the material is, or is not, playing a meaningful role in the overall computation. We want to be able to assign each component a computational classification that can be combined, in some way, with the classifications of the other components of the system, to produce a classification that reflects that of the overall computational system. One way to do this is to model and simulate the components; the problem is that our analysis can only be as good as our model. Harding [15] found reasonable similarities between the behaviour of liquid crystal and a simulation model; however, the behaviour of the materio systems investigated by Thompson [8] and Layzell [18, 19] was clearly outside what one would find in a standard simulation. This does not make modelling redundant; it just makes it significantly less useful than it is in other areas of materials science. In this section, we discuss another possible way to assess the computational role played by the materio components of an evolvable system, one based on experimentation, comparison and the concept of a computational fingerprint.

7.1 The Computational Gap

Consider the evolution in materio system shown in Fig. 3. In this example, a standard GP representation has been used to encode and decode the inputs and outputs to some evolvable material. The purpose of the encoding/decoding is to interface the domain and range of the computation to the material: necessary if the parameters programming the material (for example, input voltages) have no obvious mapping onto the domain or range. In this set-up, a question arises: is the material actually performing any function, or is the GP doing all the work? Moreover, is the material actually impeding the evolution? In essence, is there a computational gap between the computation of the GP and the computation of the entire system? If there is, then is the material providing some benefit? One logical way to try to answer these questions is to remove the evolvable material from an evolved solution. One problem with this, even if the material is performing no meaningful computation, is that the specific GP solutions will have accommodated the noise generated by the material’s inclusion. Another problem, if the material components of the system are providing a computational role, is that any part of the system could be providing that role: it could quite conceivably be the wires carrying the signals to the material, or even subtle variations in the voltages within the computer hardware; the work of Thompson and Layzell strongly indicates such possibilities. If we are to isolate the role of the material in the computation, it is clear that we will have to use a computationally inspired experimental paradigm that takes account of these complications.

7.2 The Computational Fingerprint

To solve this problem, we propose to abandon direct simulation of the material in favour of an approach that attempts to classify the computation the material is actually performing. An evolvable material has both inputs and outputs, and as such represents a function. A great deal of information about the nature of the computation is captured in the sequences of input/output pairings. By analysing these sequences, it may be possible to establish the kind of computational role being played by the material in the system. In essence, the input/output pairs form a standard regression problem: by searching for a solution to this problem in software, we may gain valuable insight into the computation the material is performing. The key to the methodology is that we are trying to find a computation in software that approximates the behaviour of the hardware. By solving problems both in materio and in a computer using an arbitrary machine-learning method, and then using the same method to regress on the input/output patterns of the material, we can form an experimental paradigm for testing the computational properties of the material. The purpose of this exercise is not to provide information about how the material is performing the computation, only to provide an understanding of the computation that the material is performing: i.e. what its computational function is. The reason is simple: simulation of the material itself may tell us little or nothing about what is actually occurring, so we ask a different question: what is the material doing? What is it capable of doing? We can use machine-learning algorithms in two ways: first, to model the behaviour of the material by regressing on its behaviour; second, to solve the entire problem from scratch within the GP framework. We can also solve the problem using the GP system alone, passing inputs straight to outputs, or trivially manipulating them in some way.
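In its simplest form, such regression might just rank a small library of candidate computations against the recorded input/output pairs. The candidate set below is purely illustrative; a real fingerprinting run would use a proper machine-learning or GP regression rather than this enumeration.

```python
# Hypothetical library of candidate computations to regress against.
CANDIDATES = {
    'identity':  lambda x: x,
    'negation':  lambda x: -x,
    'square':    lambda x: x * x,
    'threshold': lambda x: 1.0 if x > 0 else 0.0,
}

def fingerprint(io_pairs):
    """Return the candidate computation that best approximates the recorded
    input/output behaviour of the material, with its mean absolute error."""
    def error(f):
        return sum(abs(f(x) - y) for x, y in io_pairs) / len(io_pairs)
    best = min(CANDIDATES, key=lambda name: error(CANDIDATES[name]))
    return best, error(CANDIDATES[best])
```

A low residual error labels the material's function; a stubbornly high error for every candidate is itself informative, suggesting behaviour outside the modelled class.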
The purpose of all this simulation and recapitulation is threefold: to give us models of the computational gap in the evolved solution; to give us an idea of how the material is impeding or improving the search; and to let us know whether the material is, in a practical sense, actually necessary for a solution.

7.3 The Experimental Methodology

We can now compare the actual selected problem, the machine-learning solution and the machine-learning model of the material system. We can also repeat the exercise with multiple machine-learning methods. In this way, by repeated experiment, analysing the difference between the machine-learnt model of the material’s computation, the complete machine-learning solution and the error in the model of the material, an understanding of what the material is capable of doing, its computational fingerprint, can begin to be identified. We thus have a methodology for answering scientific questions about the computational possibilities of the material, whilst sidestepping the complications of direct simulation of the material. The experimental methodology will be as follows:

– Select a known and understandable problem to regress upon: for example, emulating the behaviour of a specific feed-forward or recurrent network.
– Choose a machine-learning technique capable of solving the selected problem.
– Solve the problem both in the material framework and using the machine-learning system.
– Regress the machine-learning method on the behaviour of the material component of the evolution in materio system.

This clearly gives us a lot of data, which we then need to analyse: in itself a difficult problem. Provided that the material’s behaviour can be approximated, we can determine whether the material is providing any useful computation. Moreover, if the behaviour cannot be well approximated, this is good evidence that the material is computationally important to the solution. Thus, the methodology allows us to answer the principal question: is the material computing something? The proposed methodology will allow us to determine when the material is giving us significant benefit over a software solution. This is particularly important as material systems can be highly dependent on environmental conditions, and benefits that are only marginal may be difficult to justify.
To answer more involved questions about the nature of the computation being performed by the material would require analysis of the machine-learning solution. This is, in itself, a difficult problem, but it is a far more attractive proposition than analysing the material system. Indeed, this is the assumption behind the entire methodology: that we can better analyse the computational gap by understanding the computation, not the material. Moreover, by building up a computational fingerprint through continued experimentation with the material, important information is generated that could guide actual simulation of the material. Direct simulation is obviously important to our understanding of how materials actually compute, and invaluable for the selection and design of new materials.

7.4 The computational coefficients of materials

A second approach to determining whether the material is performing computation is to look at the roles of the evolved programs used to map the inputs and outputs to the material (as shown in figure 3). The evolved programs are used to interface with the material, as it is unlikely that we will know the appropriate technique. They map the inputs and outputs from our specified structure into ones that the material can utilise. However, by doing this we run the risk that the computation is performed by the evolved programs, and not by the material. Hence a method is needed to determine how much work is contributed by the material and how much by the evolved programs. With the material, the mapping from inputs to outputs is:

inputs ⇒ GPm1 ⇒ material ⇒ GPm2 ⇒ outputs

Without the material, the mapping is:

inputs ⇒ GP1 ⇒ GP2 ⇒ outputs

If we can measure the amount of work each GP program is doing, c(GP), perhaps by looking at its (approximate) Kolmogorov complexity, we can estimate the contribution of the material, cm:

cm = (c(GP1) + c(GP2)) / (c(GPm1) + c(GPm2))

If cm is greater than 1, we can say that the material is contributing positively to the computation. If cm is less than 1, we can conclude that the material is detrimental to the computation. We can also compare cm across materials: if, on the same task, cm is larger for one material than for another, we can conclude that the material with the larger cm is better suited to that computation. Such knowledge will in turn allow us to refine our choice of materials, as we will be learning which materials can benefit a particular computational problem.
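Since Kolmogorov complexity is uncomputable, a practical sketch might approximate c(GP) by the compressed length of the program's serialised form, a standard proxy. This assumes the evolved programs can be rendered as text; the functions below are our illustration of the coefficient, not a prescribed implementation.

```python
import zlib

def c(program_text):
    """Crude proxy for the (approximate) Kolmogorov complexity of an evolved
    program: the length of its zlib-compressed textual serialisation."""
    return len(zlib.compress(program_text.encode('utf-8')))

def material_coefficient(gp1, gp2, gpm1, gpm2):
    """cm = (c(GP1) + c(GP2)) / (c(GPm1) + c(GPm2)).
    cm > 1 suggests the material is shouldering part of the computation,
    since the software-only programs had to work harder."""
    return (c(gp1) + c(gp2)) / (c(gpm1) + c(gpm2))
```

Any complexity proxy shared by both the with-material and without-material programs would do; what matters is that the same measure appears in numerator and denominator.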

8 Future Work

In this paper we have presented our ideas for a framework for furthering research into extracting computation from physical systems. We believe that this area holds great promise for delivering computational systems that can usefully complement existing ones. The area is still in its infancy, and we hope that this, and related papers, will generate lively discussion on the future of computing.

References

1. Mills, J.W., Parker, M., Himebaugh, B., Shue, C., Kopecky, B., Weilemann, C.: Empty space computes: The evolution of an unconventional supercomputer. ACM International Conference on Computing Frontiers (2006)
2. Rosen, R.: Life Itself: A Comprehensive Enquiry into the Nature, Origin and Fabrication of Life. Columbia University Press, New York (1991)
3. Conrad, M.: The price of programmability. In: The Universal Turing Machine (1988) 285–307
4. Cariani, P.: To evolve an ear: epistemological implications of Gordon Pask's electrochemical devices. In: Systems Research. Volume 3. (1993) 19–33
5. Pask, G.: The natural history of networks. In: Proceedings of International Tracts in Computer Science and Technology and their Application. Volume 2. (1959) 232–263
6. Mills, J.W.: Polymer processors. Technical Report TR580, Department of Computer Science, University of Indiana (1995)
7. Mills, J.W.: The continuous retina: Image processing with a single sensor artificial neural field network. Technical Report TR443, Department of Computer Science, University of Indiana (1995)
8. Thompson, A.: An evolved circuit, intrinsic in silicon, entwined with physics. In Higuchi, T., Iwata, M., Weixin, L., eds.: Proc. 1st Int. Conf. on Evolvable Systems (ICES'96). Volume 1259 of LNCS., Springer-Verlag (1997) 390–405
9. Harding, S., Miller, J.F.: Evolution in materio: A tone discriminator in liquid crystal. In: Proceedings of the Congress on Evolutionary Computation 2004 (CEC'2004). Volume 2. (2004) 1800–1807
10. Harding, S., Miller, J.F.: Evolution in materio: A real time robot controller in liquid crystal. In: Proceedings of the NASA/DoD Conference on Evolvable Hardware (2005)
11. Miller, J.F., Downing, K.: Evolution in materio: Looking beyond the silicon box. Proceedings of the NASA/DoD Evolvable Hardware Workshop (2002) 167–176
12. Adamatzky, A.: Computing in Nonlinear Media and Automata Collectives. Institute of Physics Publishing (2001)
13. Prasad, S., Yang, M., Zhang, X., Ozkan, C.S., Ozkan, M.: Electric field assisted patterning of neuronal networks for the study of brain functions. Biomedical Microdevices (2003) 125–137
14. Amos, M., Hodgson, D.A., Gibbons, A.: Bacterial self-organisation and computation (2005)
15. Harding, S.: Evolution In Materio. PhD thesis, University of York (2005)
16. Tsuda, S., Zauner, K.P., Gunji, Y.P.: Robot control with biological cells. In: Sixth International Workshop on Information Processing in Cells and Tissues (2005) 202–216
17. Matsumaru, N., Colombano, S., Zauner, K.P.: Scouting enzyme behavior. In Fogel, D.B., El-Sharkawi, M.A., Yao, X., Greenwood, G., Iba, H., Marrow, P., Shackleton, M., eds.: 2002 World Congress on Computational Intelligence, May 12-17, Honolulu, Hawaii, IEEE, Piscataway, NJ (2002) CEC 19–24
18. Bird, J., Layzell, P.: The evolved radio and its implications for modelling the evolution of novel sensors. In: Proceedings of the Congress on Evolutionary Computation (2002) 1836–1841
19. Layzell, P.: A new research tool for intrinsic hardware evolution. In: Proceedings of the Second International Conference on Evolvable Systems: From Biology to Hardware. LNCS 1478 (1998) 47–56
20. Stoica, A., Zebulum, R.S., Keymeulen, D.: Polymorphic electronics. In: ICES '01: Proceedings of the 4th International Conference on Evolvable Systems: From Biology to Hardware, London, UK, Springer-Verlag (2001) 291–302
21. Crooks, J.: Evolvable analogue hardware. MEng project report, The University of York (2002)
