Understanding Neural Computation in Terms of Pattern Languages

Péter András
School of Computing Science, University of Newcastle
Newcastle upon Tyne, NE1 7RU, UK
[email protected]

Abstract — There is some experimental evidence that neural information processing in some small neural systems with complex behaviour (e.g., the olfactory bulb, the crab stomatogastric ganglion) can be best described in terms of activity patterns. Recently, pattern languages were proposed as an interpretation and analysis tool for such complex systems. Here we describe a model that shows how spatio-temporal neural activity patterns can perform abstract computation. We discuss how such activity patterns can be interpreted as a language of the neural system, and how neural information processing may be done in terms of pattern languages.
I. INTRODUCTION

The nervous system of animals performs information processing in order to produce the appropriate behaviours of the animal. Although several models have been proposed over the past fifty years [5,9,11], it is still not clear how neural information processing happens. Some models emphasize the role of individual neurons and consider spike rates the essential information carriers (e.g., [9]); others emphasize the role of populations of neurons (e.g., [8]) and the role of spike timing patterns (e.g., [4]); still others consider the variation of spatio-temporal activity patterns of many neurons the key element of neural information processing (e.g., [5]). There is considerable indirect experimental evidence [5,7,14,16] indicating that neural activity patterns play an important role in the processing of information, and that consideration of individual neurons alone cannot reveal the whole story of neural information processing. For example, in the case of the crab stomatogastric ganglion, experimental results [13] show that the ganglial neurons generate a wide range of activity patterns involving some or all of the neurons, and that by looking only at the individual activities of neurons it is not possible to reconstruct many of these emerging patterns.

Recently Wolfram [21] proposed that the activities of living systems, including the nervous system, can be viewed as computational processes. In this interpretation all these computational processes can be described in terms of pattern languages, i.e., in terms of transitions between basic activity patterns. The advantage of this approach is that it uses a relatively small set of transition rules and a relatively small set of basic patterns to describe a potentially infinite variety of behaviours. The nervous system is one of the best candidates to check the validity of the pattern languages approach.

Here we describe a simple model neural system, the Sierpinski brain [1], that performs neural computation by interacting activity patterns, and we discuss how the pattern languages approach may help in untangling the mysteries of neural systems. The system that we describe can be viewed as an example of how neural computation may be interpreted in terms of pattern languages.

The rest of the paper is structured as follows. In Section II we describe pattern languages. In Section III we give a description of the Sierpinski brain. In Section IV we discuss how pattern languages may be used to understand and interpret neural activity patterns and neural computation. We close the paper with conclusions in Section V.
II. PATTERN LANGUAGES

Ideas related to pattern languages have been present in science since early times; this knowledge was recently organized into a coherent theory by Wolfram [21]. Essentially, pattern languages can be described as sets of basic patterns together with sets of pattern transition rules. A simple example is a cellular automaton, where configurations of cell states are the patterns and the rules for changing cell states are the pattern transition rules. Figure 1 shows an example of a one-dimensional cellular automaton.

Figure 1: An example of a one-dimensional cellular automaton, Rule 110 from Wolfram [21]. The square triplets in the upper rows show the determinant pattern, and the lower squares show the replacement for the middle square after a transition.

Pattern languages can be viewed as information processing tools when the input information is specified in terms of the initial pattern. Such information processing is a computational process in which input information is transformed into output information. Wolfram [21] describes examples of pattern languages that can execute simple computations such as addition and multiplication. He also suggests that many natural systems, including living tissues, organisms, and the nervous system, can be considered realisations of such pattern languages, and that their activities can be viewed as computational processes. A very important observation of Wolfram [21] is that many pattern languages are able to compute everything that is computable; in other words, they are equivalent to Turing machines. He also suggests that most natural systems should have this feature, and should be able to act as universal computational tools. A related important topic is to what extent one language can compute the behaviour of another, and what the relative computational complexity of one language is in the context of another language. Figure 2 shows an example of a pattern language represented by a one-dimensional cellular automaton.

Figure 2: The evolution of a pattern transition sequence according to a one-dimensional pattern language; the initial pattern of black and white dots is the top line of the figure.
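To make the notion of pattern transition rules concrete, the following is a minimal sketch (ours, not from [21]) of the Rule 110 automaton of Figure 1: the rule table maps each cell triplet to the replacement for its middle cell, and one step of the language rewrites the entire row.

```python
# A minimal sketch of the Rule 110 one-dimensional cellular automaton.
# Each cell looks at the triplet (left, self, right), and the rule table
# gives the replacement for the middle cell -- these triplet-to-cell
# mappings are exactly the "pattern transition rules" of the language.

# Rule 110: the 8 possible triplets map to the bits of 110 = 01101110.
RULE_110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(cells):
    """Apply one pattern transition to the whole row (periodic boundary)."""
    n = len(cells)
    return [RULE_110[(cells[i - 1], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

# Start from a single black cell and print the evolving pattern.
row = [0] * 31
row[15] = 1
for _ in range(16):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Rule 110 is a fitting example here because it is one of the simple pattern languages known to be computationally universal [21].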
Another important feature of pattern languages is that the amount of input/output computation is dwarfed in most cases by the amount of internal processing. This implies that looking purely at input/output computations may reveal very little about a pattern language, and that for a true understanding of it we need to observe and understand the internal processing of the language.

III. UNIVERSAL COMPUTATION WITH NEURAL ACTIVITY PATTERNS

The Sierpinski brain was introduced by Andras [1] as a relatively simple neural system that performs universal approximation by computing with neural activity patterns. This neural system is based on Sierpinski neural networks, an example of which is shown in Figure 3.
Figure 3: The Sierpinski neural network.
The Sierpinski network contains an excitatory-inhibitory complex that selects, at any time, one pair of excitatory neurons to fire out of three such pairs ((ax,ay), (bx,by), (cx,cy)). Each pair of neurons represents by their firing rates the coordinates of one vertex of a triangle. The integrating neurons (zx, zy) calculate a weighted combination of a random series of these coordinates. In this way the firing rates of the output neurons of the network represent at each time a point from within the triangle. The set of points generated in this manner constitutes the Sierpinski triangle corresponding to the triangle determined by the vertices represented by the pairs of excitatory neurons in the excitatory-inhibitory complex (i.e., this is the probabilistic method of Sierpinski triangle generation [3]). The spatio-temporal output pattern of a Sierpinski neural network is shown in Figure 4.

Figure 4: The spatio-temporal output pattern of a Sierpinski neural network. The horizontal and vertical coordinates are the values encoded by the firing rates of the zx and zy neurons.

Counting the intersections between two Sierpinski triangles with vertex coordinates {(0,0), (1,0), (t,1)} and {(0,1), (1,1), (x,0)} gives the values of the Sierpinski basis function s_t(x) [1]. The set of these functions can be used to perform universal approximation with respect to the closure of the set of continuous functions [1]. Using two Sierpinski neural networks that represent such triangles, we can compute an approximation of any s_t(x) value by employing synchrony detector neurons that count the synchronous firings of the corresponding output neurons of the two networks.
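The probabilistic generation method [3] is easy to state in code. The sketch below (ours) mimics what the network computes, under the assumption that the weighted combination formed by the integrating neurons (zx, zy) is the standard chaos-game step of moving halfway toward the coordinates of the selected vertex pair; the function name and the burn-in period are our choices.

```python
import random

# A sketch of the probabilistic Sierpinski triangle construction [3]
# mimicking the network: the excitatory-inhibitory complex picks one
# vertex pair at random to fire, and the integrating neurons (zx, zy)
# form a weighted combination of the chosen coordinates. We assume the
# standard chaos-game weighting: move halfway toward the chosen vertex.

def sierpinski_points(vertices, n_points, burn_in=20):
    """Generate points of the Sierpinski triangle over the given vertices."""
    zx, zy = vertices[0]                       # arbitrary starting state
    points = []
    for i in range(n_points + burn_in):
        vx, vy = random.choice(vertices)       # vertex pair selected to fire
        zx, zy = (zx + vx) / 2, (zy + vy) / 2  # integrating neurons' update
        if i >= burn_in:                       # discard the transient
            points.append((zx, zy))
    return points

# Triangle of a basis function network with parameter t = 0.3.
pts = sierpinski_points([(0.0, 0.0), (1.0, 0.0), (0.3, 1.0)], 5000)
```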
The Sierpinski brain is a combination of many such Sierpinski neural networks. The excitatory neurons representing the triangle vertex coordinates form two populations. In one of these populations the neurons do not change their firing rates, and they represent the coordinate value 0 or 1. In the other population neurons may change their firing rates to represent any value in some range (e.g., -5 to 5). Many networks formed using these neurons correspond to triangles with vertices {(0,0), (1,0), (t,1)}. These networks are the basis function networks of the brain. The input values to the Sierpinski brain are numbers, and they are represented by networks corresponding to triangles with vertices {(0,1), (1,1), (x,0)}. The input values are represented with some noise (i.e., the input-representing networks represent the values x+ξ, where ξ is drawn from a uniform distribution over [-ε, ε], and ε>0 is a small number). At all times, the basis function networks compete to be active. The synchrony detector neurons calculate many s_t(x) values corresponding to the noisy input values of the brain and the active basis function networks. The summed output of the synchrony detector neurons is the output of the Sierpinski brain. This output is given in the form of the firing rate of a final output neuron of the Sierpinski brain. Due to the noisy representation of the input values, the output of the Sierpinski brain is an average of the actual approximations of the function values calculated for the range of values [x-ε, x+ε].
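Building on the previous sketch (it reuses sierpinski_points), the following hypothetical fragment illustrates the role of the synchrony detector neurons: counting coincidences between the output points of a basis function network and an input-representing network approximates s_t(x) up to scaling. The coincidence tolerance delta, standing in for "synchronous firing", is our assumption.

```python
# A sketch of how synchrony detector neurons could estimate the basis
# function s_t(x): run two Sierpinski networks, one for the triangle
# {(0,0),(1,0),(t,1)} and one for {(0,1),(1,1),(x,0)}, and count how
# often their output points (pairs of firing rates) coincide. The
# tolerance `delta` for "synchronous firing" is our assumption.

def synchrony_count(points_a, points_b, delta=0.01):
    """Count output points of network A matched by some point of B."""
    # Bin B's points on a delta-grid so that lookup is cheap.
    grid = {(round(px / delta), round(py / delta)) for px, py in points_b}
    return sum(1 for px, py in points_a
               if (round(px / delta), round(py / delta)) in grid)

t, x = 0.3, 0.6
basis_net = sierpinski_points([(0.0, 0.0), (1.0, 0.0), (t, 1.0)], 5000)
input_net = sierpinski_points([(0.0, 1.0), (1.0, 1.0), (x, 0.0)], 5000)
s_t_of_x = synchrony_count(basis_net, input_net)  # ~ s_t(x) up to scaling
```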
Given an appropriate combination of networks representing basis functions, the output of the Sierpinski brain can approximate a particular input-output function. Considering that the Sierpinski basis functions can be used for universal approximation, this means that the Sierpinski brain can approximate many input-output functions by adjusting the numbers of the basis functions represented by its component networks. Such adjustments can be performed by a learning process based on punishment for wrong approximation, minimization of the activity of the synchrony detector neurons, and maximization of the number of networks representing basis functions whose activation helps to avoid the punishment [1]. Figure 5 shows an approximation of a sine wave function by a Sierpinski brain.

Figure 5: The approximation of a sine wave function (sin(10x)) by a Sierpinski brain. The dotted line shows the approximated function and the continuous line the approximation of it. The approximation is calculated as the average of a sliding window of 21 consecutive values centred on the argument for which the approximation of the function value is calculated (the distance between consecutive argument values was 0.002). The mean squared error of the approximation is 0.0056.
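The smoothing procedure described in the caption of Figure 5 can be written down directly; in this sketch raw_output is a hypothetical stand-in for the noisy output of the Sierpinski brain at a given argument.

```python
# A sketch of the smoothing used for Figure 5: the reported approximation
# at an argument is the average of a 21-value sliding window of raw brain
# outputs, with consecutive arguments 0.002 apart. `raw_output` is a
# hypothetical stand-in for the noisy Sierpinski brain output.

def smoothed(raw_output, x, half_width=10, dx=0.002):
    """Average raw outputs over 21 consecutive arguments centred on x."""
    values = [raw_output(x + k * dx)
              for k in range(-half_width, half_width + 1)]
    return sum(values) / len(values)
```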
The Sierpinski brain provides a relatively simple example of how neural activity patterns may be used to perform neural computation through interactions between these patterns. This artificial neural system can be described in terms of a simple pattern language, which represents inputs and internal states (i.e., the set of basis functions) by activity patterns and computes simple transitions from pairs of patterns into numbers represented by the activity pattern (i.e., firing rate) of the final output neuron of the whole system. This system is able to function as a universal computational tool (i.e., by approximating any function from the closure of the continuous functions), similarly to many natural pattern language systems.

We also note that the direct processing of input information ends with selecting the right set of triangle representations of the input, by activating the Sierpinski networks with an appropriate excitatory neuron (i.e., one that represents the input value or a slightly noisy variant of it) in their excitatory-inhibitory complex. The direct output processing happens at the final output neuron of the network. Compared to this small amount of processing linked directly to the input/output transformation, a huge amount of internal processing happens that generates the triangle-representing activity patterns and calculates their intersections. This strong domination of internal processing over input/output processing implies that, similarly to other pattern language systems, to understand the Sierpinski brain as an outside observer we need to understand its internal processing; it is not enough to focus on its input/output processing.

IV. DISCUSSION

In this section we address three issues. First, we discuss why the framework of pattern languages is attractive for analysing and understanding neuroscience data. Second, we look at what new analytical perspective on biological neural systems is implied by the pattern languages approach. Third, we discuss the implications of this approach for computational neuroscience.

A. Understanding dynamic neural activity patterns

There are several explanations of the role of complex dynamical activity patterns in the nervous system. Some proposals say that complex dynamic activity organizes itself into attractors of a dynamical system, and that depending on inputs the system jumps from one attractor to another, using these attractors to represent its output state [5]. Others suggest that the system traverses a series of such attractors that are usually active for a short time, and that occasionally, or in abnormal situations, the system may get trapped in one of these short-term attractors [18,19]. Still others say that the role of the complex dynamic activity is to produce noise in the system and contribute to the emergence of stochastic resonance within the system [6]. It has also been suggested that the noise produced by complex dynamics may provide the basis for random search for the solution of neuro-computational problems [15]. To summarize, these proposals are diverse and do not offer a coherent explanation and description of the observed complex neurodynamics.

In the case of the olfactory bulb it is possible to measure the emerging activity patterns over the set of glomeruli by using surface EEG [6] or optical recording [2].
These activity patterns change over time and represent the odours that stimulate the olfactory receptors. Their combinations corresponding to combined odour stimuli are sometimes simple additions of the single-odour-representing patterns [2], but at other times they are complex nonlinear combinations of them [5]. Despite the relative accessibility of the olfactory bulb for recordings and the large amount of experimental data about it, it is not yet clear how the bulbar networks compute the complex combinations of single odour representations.

The crab stomatogastric ganglion contains between 30 and 40 neurons, organized into two subnetworks, one regulating the activity of the gastric mill and the other the pylorus [10]. These networks produce a wide range of rhythmic output activity patterns, and in some cases they combine their activities. Although the knowledge about the individual neurons of the networks, their electrical and chemical synapses, and the effects of modulatory substances is very extensive [13], in many cases it is still unknown how these output rhythms emerge, and how the networks join and separate their activities depending on their incoming stimuli and the presence or absence of modulatory substances.

The framework of pattern languages [21] offers an opportunity to build a unifying theory to describe and explain dynamic neural activity patterns in various contexts. It is likely that by recording neural activity patterns and their transformations we can describe systems like the odour representation in the olfactory bulb and the stomatogastric ganglion of crabs in terms of pattern languages. A key difference from other approaches is that the pattern language approach puts the emphasis on the spatio-temporal activity patterns of many individual neurons, instead of emphasizing the role of single neurons or the role of large populations of neurons. This balanced approach, considering neurons both individually and as parts of larger networks, offers an opportunity to capture more information about how biological neural systems work, and consequently may provide a good framework to describe and analyse complex neural systems that are hard to handle using currently available analytical methods.
B. Input/output versus internal processing

Classical theories about neural information processing [11] stress the importance of input/output processing, and focus to a very large extent on this aspect of neural computation. The hierarchical functional theory of visual processing is a particularly good example of this approach [20]. According to this theory, at each step of visual information processing some well-defined features of the incoming information are filtered out (e.g., edges, colour spots, etc.) and later combined in order to produce the recognition of the visual scene and appropriate motor responses.

In the case of pattern languages the internal processing of information is usually much more important than the simple input/output processing. The example of the Sierpinski brain shows that the amount of activity not directly related to the received input or to the produced global output is far greater than the amount of directly related neural activity. The internal processing represents the knowledge encoded in the rules of the system. This knowledge is used to interpret the input data and to produce the output. The use of this knowledge often means long sequences of pattern transitions, making up a large amount of internal processing. If complex neural systems can be best described in terms of pattern languages, this implies that the role of internal processing in these systems must be much greater than is supposed by current theories focused on input/output transformations. Ignoring the internal processing ignores the majority of the processing and prevents the understanding of the whole system. Of course, measurements always contain data about the internal processing, but interpreting these data in terms of direct input/output processing may be very misleading. Doing the same in the case of abstract pattern languages (e.g., cellular automata, the Sierpinski brain) may easily produce diverging theories about the input/output processing of the system.

In conclusion, the adoption of pattern languages for the description and analysis of complex neural systems implies that research on these systems should focus to a large extent on the proper recording, analysis, interpretation and explanation of internal processing by activity patterns in these neural systems. Although inputs and outputs are very important, and without them the system would not work and we could not observe it, their importance should not overshadow the importance of internal processes, which are to a large extent independent of inputs and outputs. The proper analysis of the internal processes will allow the search for the transition rules of the corresponding pattern language, and a faithful description of the full system, including the knowledge about its environment encoded during its natural evolution.

C. Implications for computational neuroscience

Traditionally, computational neuroscience deals with systems described in terms of a few equations and variables, or discrete approximations of these [11,17]. These computational systems focus primarily on producing some output as a result of some incoming input. This approach fits very well with the classical theories of neural information processing.
The use of pattern languages to describe and analyse neural systems needs a new kind of computational neuroscience. This can be built to some extent on existing work on symbolic dynamics [12] and pattern languages [21]. Obviously, major new developments are needed to find the classes and types of pattern languages that best fit neuroscience data and that can be used efficiently to describe biological neural systems. Further developments will be needed to build up a large set of analytical methods in order to generate meaningful predictions quickly and reliably. Pattern languages-oriented computational neuroscience should focus to a large extent on the internal processes of neural systems, represented by the transition rules of the dynamic neural activity patterns, as sketched below. This shift from an almost exclusive focus on input/output processing to internal processing will allow the new kind of computational neuroscience to provide a systematic and robust description of complex neural systems and their computational processes. Another important implication of the pattern languages approach is that we should look at the computational power of neural systems described as pattern languages, and even more importantly at the relative computational complexity of one system in the interpretational terms of another system. This analysis may reveal how changes in the pattern dynamics of one sub-system may lead to system-level dysfunction (e.g., neurodegenerative diseases), and may also give hints about potential interventions that may restore the global functionality of the system.
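As a hint of what such analysis might look like, the sketch below (ours, in the spirit of symbolic dynamics [12]) discretises recorded population activity into symbolic patterns and tallies the observed pattern-to-pattern transitions as candidate transition rules; the binarisation threshold and all names are our assumptions, not an established method.

```python
from collections import Counter
from itertools import pairwise  # Python 3.10+

# A sketch of a first analysis step for the proposed approach:
# discretise recorded per-neuron activity into symbolic patterns and
# tally pattern-to-pattern transitions as candidate transition rules.
# The simple thresholding scheme is our assumption.

def to_symbol(frame, threshold=0.5):
    """Binarise one frame of per-neuron activity into a pattern symbol."""
    return tuple(1 if a > threshold else 0 for a in frame)

def transition_table(frames, threshold=0.5):
    """Count the observed pattern transitions in a recorded sequence."""
    symbols = [to_symbol(f, threshold) for f in frames]
    return Counter(pairwise(symbols))

# Example: three neurons recorded over five time steps.
recording = [[0.9, 0.1, 0.2], [0.8, 0.7, 0.1],
             [0.2, 0.9, 0.1], [0.1, 0.8, 0.9], [0.2, 0.1, 0.8]]
for (src, dst), count in transition_table(recording).most_common():
    print(src, "->", dst, ":", count)
```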
V. CONCLUSIONS
We have proposed a new paradigm: the application of the pattern languages [21] modelling and analysis framework to analyse
and interpret neuroscience data. This approach seems particularly feasible for the analysis of activity pattern data from small neural systems with complex behaviour. The Sierpinski brain was presented as a simple artificial example of computation with activity patterns in neural systems. This artificial system reproduces key features of pattern languages (e.g., computation by pattern interactions, universal computational power, and dominance of internal processing over input/output processing). The application of the pattern languages approach implies that we should focus on understanding the rules of internal processing of biological neural systems in terms of transitions between activity patterns. In the context of computational neuroscience, the pattern languages approach implies a change from systems described in terms of a few equations to systems described by pattern formation and pattern transformation rules. This implies a shift from a focus on input/output processing to the description and analysis of internal processing. The pattern languages approach also offers new opportunities to look at how sub-systems of neural systems communicate, how changes in pattern dynamics in one sub-system may imply global dysfunction, and how such problems may be overcome by appropriate interventions.

REFERENCES

[1] P. Andras, "The Sierpinski brain", in Proceedings of the International Joint Conference on Neural Networks 2001, pp. 654-659, 2001.
[2] L. Belluscio and L.C. Katz, "Symmetry, stereotypy, and topography of odorant representations in mouse olfactory bulbs", Journal of Neuroscience, vol. 21, pp. 2113-2122, 2001.
[3] R.L. Devaney, A First Course in Chaotic Dynamical Systems, Redwood City, CA: Addison-Wesley, 1992.
[4] A.K. Engel and W. Singer, "Temporal binding and the neural correlates of sensory awareness", Trends in Cognitive Sciences, vol. 5, pp. 16-25, 2001.
[5] W.J. Freeman, "Role of chaotic dynamics in neural plasticity", Progress in Brain Research, vol. 102, pp. 319-333, 1994.
[6] W.J. Freeman, R. Kozma, and P.J. Werbos, "Biocomplexity: adaptive behavior in complex stochastic dynamical systems", Biosystems, vol. 59, pp. 109-123, 2001.
[7] A. Gelperin, "Oscillatory dynamics and information processing in olfactory systems", The Journal of Experimental Biology, vol. 202, pp. 1855-1864, 1999.
[8] A.P. Georgopoulos, A.B. Schwartz, and R.E. Kettner, "Neuronal population coding of movement direction", Science, vol. 233, pp. 1416-1419, 1986.
[9] S. Grossberg, "Linking laminar circuits of visual cortex to visual perception: development, grouping and attention", Neuroscience and Biobehavioral Reviews, vol. 25, pp. 513-526, 2001.
[10] R.M. Harris-Warrick, E. Marder, A.I. Selverston, and M. Moulins (Eds.), Dynamic Biological Networks: The Stomatogastric Nervous System, Cambridge, MA: MIT Press, 1992.
[11] S. Haykin, Neural Networks: A Comprehensive Foundation, Englewood Cliffs, NJ: Macmillan, 1994.
[12] D. Lind and B. Marcus, An Introduction to Symbolic Dynamics and Coding, Cambridge, UK: Cambridge University Press, 1995.
[13] M.P. Nusbaum and M.P. Beenhakker, "A small-systems approach to motor pattern generation", Nature, vol. 417, pp. 343-350, 2002.
[14] F.W. Ohl, H. Scheich, and W.J. Freeman, "Change in pattern of ongoing cortical activity with auditory category learning", Nature, vol. 412, pp. 733-736, 2001.
[15] M.I. Rabinovich and H.D.I. Abarbanel, "The role of chaos in neural systems", Neuroscience, vol. 87, pp. 5-14, 1998.
[16] M.I. Rabinovich, P. Varona, and H.D.I. Abarbanel, "Nonlinear cooperative dynamics of living neurons", International Journal of Bifurcation and Chaos, vol. 10, pp. 913-933, 2000.
[17] F. Rieke, D. Warland, R. de Ruyter van Steveninck, and W. Bialek, Spikes: Exploring the Neural Code, Cambridge, MA: MIT Press, 1996.
[18] S.J. Schiff, "Forecasting brain storms", Nature Medicine, vol. 4, pp. 1117-1118, 1998.
[19] I. Tsuda, "Toward an interpretation of dynamic neural activity in terms of chaotic dynamical systems", Behavioral and Brain Sciences, vol. 24, pp. 793-810, 2001.
[20] B.A. Wandell, Foundations of Vision, Sunderland, MA: Sinauer Associates, 1995.
[21] S. Wolfram, A New Kind of Science, Champaign, IL: Wolfram Media, 2002.