
A Biologically Inspired Model of Spiking Neurons Suitable for Analog IC Design

Mircea Hulea, Automatic Control Department, Technical University “Gh. Asachi” of Iasi, Iasi, Romania

Abstract - This paper describes the structure and operation of an electronic model of spiking neurons developed for silicon implementation. Being sensitive to changes in the external medium and able to strengthen their synapses when concurrent events occur, these neurons can develop their own synaptic configuration in an activity-dependent manner. Networks of such spiking neurons are best suited to understanding the physical medium, as in speech recognition or image processing. Most neuron models use software-programmed functions to describe the physiology of the natural neuron. In this work I took a different approach: the physiology of the biological neuron is simulated using basic electronic components, which offer high computation speed at low energy consumption. To test the performance of these neurons, a tool capable of speech recognition was developed. However, the final goal of this approach is to design an analog integrated chip that could be used in any domain that requires artificial intelligence.

Keywords: associative learning, spiking neurons, artificial networks, analog design

1 Introduction

Interaction of a machine with the physical environment, in tasks such as speech and face recognition, requires solving problems of high complexity. Since devising algorithms for this kind of task is very difficult, a computing architecture that can identify patterns and discriminate between them would be easier to develop. Being able to associate temporal events when stimulated by changes in external conditions, artificial networks of spiking neurons have great potential for understanding the physical environment. Spiking neurons are an accurate model of natural neurons and can be used as a computing tool for complex problem solving. Moreover, by allowing direct observation of synaptic activity, these neurons help in understanding the interactions that take place inside biological neural networks. The neuron model presented here mimics the membrane potential, the refractory period and the natural neurons' adaptation to the stimulus. The neurons also generate positive and negative trains of impulses and can perform timing-dependent activity and synaptic strength adaptation in order to achieve a stable perception of reality.

2 Natural Neuron

The computational power of biological neural networks comes from the synapses, the connections between neurons. The ability of a neural network to associate external events is determined by structural modification of the synapses: concurrently activated synapses are strengthened, producing a gain in the power of subsequent impulses [1]. This behavior of biological synapses is the basic principle of associative learning in neural networks.

2.1 Spike Propagation

The main components of the synapse are the presynaptic and postsynaptic membranes, together with the ion channels [1]. Another critical constituent is the mediator released by the presynaptic membrane during presynaptic neuron activation, which stimulates the postsynaptic membrane and causes ion channels to open. The mediator can shorten the time until the postsynaptic neuron fires by depolarizing the postsynaptic membrane, or delay its activation by polarizing the membrane. Neurons that increase the postsynaptic membrane potential (PMP) are excitatory, and those that decrease the PMP are inhibitory. One neuron may receive stimulation from both excitatory and inhibitory neurons; on the other hand, the presynaptic membrane of a given neuron can produce only one type of mediator [1]. The basic principle of neural computation is the temporal and spatial integration of the incoming stimulation. More precisely, the PMP depends on the strength and timing of every spike received from all the presynaptic neurons. When the PMP rises above a threshold, an action potential is triggered and the postsynaptic neuron is activated. This is followed by a temporary polarization of the postsynaptic membrane below the equilibrium potential, known as the refractory period, during which only strong stimulation will make the postsynaptic neuron fire [1].

2.2 Associative Learning

Two main components contribute to the alteration of the synaptic structure. One is the number of vesicles released by the presynaptic membrane during each action potential; the other is the alignment of the postsynaptic receptors with the release sites [1], [2]. It is important to note that the mediating substances open ion channels in the postsynaptic membrane by stimulating specific receptors. For excitatory synapses the signals are carried by glutamate, which activates N-methyl-D-aspartate (NMDA) and α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA) receptors, while inhibitory impulses are carried by γ-amino-butyric acid [2], [4]. The NMDA receptors, which are fixed on the postsynaptic membrane surface from the formation of the synapse, open the Ca channels. When stimulated, these receptors have a stronger influence on the postsynaptic membrane potential than the AMPA receptors, whose role is to open the Na channels [1]. When the postsynaptic membrane is stimulated by an excitatory neurotransmitter, the AMPA receptors, which can change their position on the postsynaptic membrane surface, move towards the recently activated NMDA receptors. This improves the efficiency of receptor stimulation for the next stimulus, producing short-term potentiation (STP) of the synapse. Another mechanism observed in studies of biological neurons is the possibility of anchoring the AMPA receptors. During stimulation, the AMPA receptors move towards the NMDA receptors and, if the specific fixing conditions are met, these receptors are anchored in their current places, transforming the STP into long-term potentiation (LTP) [2]. If the anchoring mechanism is not triggered, the AMPA receptors drift away from the NMDA receptors over a period of time that determines the STP duration [2].

Although it is not known exactly when the specific reactions that transform STP into LTP occur, in this work it was assumed that the receptors are fixed during the postsynaptic action potential. Therefore, considering multiple impulses and the property mentioned above, it is clear that the concurrent synapses that participate in neuron activation will be strengthened more than the non-concurrent ones. This is the crucial feature of Hebbian, or associative, learning: concurrent events are linked together by the firing of a postsynaptic neuron. Together with post-tetanic potentiation (PTP), this feature makes a major contribution to the synaptic dynamics responsible for long-term learning in neural networks.

3 Neuron Model

The main types of biologically inspired neurons are the McGregor (integrate-and-fire) model and the conductance models [3]. The latter offer better accuracy in modeling the natural neuron, but the high complexity of the simulation lowers the computation speed [4]. In contrast, the McGregor model has a simplified spike generation mechanism, which increases the computation speed while providing an accurate approximation of the membrane potential, neuron excitation and inhibition, and adaptation to the stimulation frequency [4]. The basic circuit of this integrate-and-fire model consists of a capacitor C driven by a current I, in parallel with a resistor R. The capacitor C is charged by incoming impulses until it reaches a previously chosen threshold. This triggers the neuron activation, which is signaled by sending a spike to the following neuron [3]. A neuron model based on this principle can be implemented in software or with simple electronic circuit elements. An analog network designed using only basic circuit components such as resistors, capacitors, transistors and diodes offers a computation speed that is independent of the number of neurons. Thus, in this work the important aspects of the natural neuron physiology are modeled using the physical laws that govern the electronic components.

4 Artificial Neuron Operation

The electronic neuron was developed to model the critical features of natural synapse physiology. As mentioned in the previous chapter, the main activity of the neuron is spike generation. The other critical feature of the neuron, a consequence of the synaptic structure, is associative learning. Inside biological neural networks the information is coded by the amplitude and frequency of the impulses. The efficiency of a synapse can be modified in two activity-dependent ways: one is an alteration of the stimulation strength caused by an increase in the quantity of mediator released per neuron activation; the other is a modification of the reception sensitivity as a consequence of an alteration of the synaptic geometry [2]. Figure 1 shows the basic structure of the electronic neuron model. The input module (IM) integrates the incoming stimulation using a capacitor (CM) and, when the integrated voltage reaches the neuron action potential, activates the spike generation module (SGM). The threshold detection and the SGM activation are performed by an NPN transistor. The efficiency control module (ECM) determines the strength of the spike that stimulates the postsynaptic neuron. Another role of the ECM is to mimic the natural mechanisms of learning.

Figure 1. Basic structure of the artificial neuron.

For this work the weight of the synapse is modeled by the duration of the voltage spike: a longer electrical impulse has a stronger influence on the postsynaptic neuron input. The synaptic efficiency could also be modeled by the spike amplitude. However, circuit simulation showed that, at the same operating voltage, the discrimination power of the amplitude-based model was lower than that obtained by modeling the spike duration. Moreover, amplitude coding would limit how far the neuron operating voltage can be lowered, which contradicts the future development goals of the network. The neuron structure described above is implemented for both excitatory and inhibitory neurons. The difference between the two types is that for excitatory neurons the SGM generates impulses whose amplitude is above the equilibrium potential, while for inhibitory neurons the amplitude of the impulses is below the equilibrium potential.

4.1 Neuron Stimulation

One important feature modeled by the electronic neuron is the absolute refractory period, during which the neuron is not able to fire. This is a short time interval that begins after postsynaptic neuron activation and is followed by the refractory period, a temporary decrease of the CM voltage below the equilibrium potential. During this period the neuron can be activated only by strong presynaptic voltage pulses. The CM potential during the absolute refractory period and the refractory period is illustrated by the signal diagrams in figure 2. Waveform (a) shows the CM voltage of the postsynaptic neuron during its activation.

The input potential increases until it reaches the neuron activation threshold and is then pulled down below the equilibrium potential. This sudden decrease of the CM voltage models the absolute refractory period, the short time interval that precedes the refractory period. To make it possible to use an NPN transistor as the activation threshold detector, the equilibrium potential was set to 0.4 V, which is below the base-emitter voltage of the transistor. Another parameter, set to 0.3 V during the development of the electronic neuron, is the value reached by the CM potential during the absolute refractory period. As illustrated by signal diagram (b), the potential returns to its pre-firing value during the refractory period. Because the membrane potential is modeled using a capacitor, the voltage alterations presented above follow exponential charging and discharging curves. The integration of the incoming stimulation is performed by the postsynaptic neuron.
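The recovery from the absolute refractory floor back to the equilibrium potential can be sketched numerically. The 0.4 V equilibrium and 0.3 V refractory floor come from the text; the time constant is an illustrative assumption, not a circuit value.

```python
import math

# CM voltage after a firing event, using the values from the text:
# equilibrium potential 0.4 V, absolute-refractory floor 0.3 V.
# TAU is an assumed recovery time constant for illustration only.
V_EQ, V_REFR = 0.4, 0.3
TAU = 5e-3  # assumed time constant (s)

def recovery_voltage(t):
    """CM voltage t seconds after firing: exponential return from the
    absolute-refractory floor toward the equilibrium potential."""
    return V_EQ - (V_EQ - V_REFR) * math.exp(-t / TAU)

# Immediately after firing the capacitor sits at the refractory floor;
# several time constants later it has recovered to equilibrium.
assert abs(recovery_voltage(0.0) - V_REFR) < 1e-9
assert abs(recovery_voltage(10 * TAU) - V_EQ) < 1e-3
```

The exponential form mirrors the capacitor discharge and recharge curves that the circuit itself produces.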

Figure 2. (a) The CM voltage during neuron activation and the absolute refractory period; (b) the CM voltage during the refractory period.

Figure 3. (a), (b) The CM voltages of two presynaptic neurons; (c) the integration of the incoming stimulation from neurons (a) and (b) by a postsynaptic neuron.

As presented above, the IM module of the electronic neuron integrates the incoming stimulation using a capacitor CM whose voltage simulates the PMP. Signal diagrams (a) and (b) in figure 3 illustrate the CM potentials of two presynaptic neurons which send voltage spikes to one postsynaptic neuron. Waveform (c) shows the CM voltage of this postsynaptic neuron, illustrating the spatial and temporal integration of the incoming stimulation. The action potential of the presynaptic neuron whose input voltage is shown in signal diagram (b) activates the postsynaptic neuron. The activation of this neuron is marked by the sudden decrease of the potential, which is followed by the refractory period.
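The spatial and temporal integration on CM can be sketched as a leaky integrator: each presynaptic spike deposits a voltage step, and the capacitor leaks back toward the equilibrium potential between spikes. All constants here (step size, threshold, time constant) are illustrative assumptions, not circuit values.

```python
import math

# Leaky integration sketch of the CM voltage. V_EQ is the 0.4 V
# equilibrium from the text; V_THRESH, TAU and the per-spike step
# are assumed for illustration.
V_EQ, V_THRESH, TAU = 0.4, 0.6, 5e-3

def integrate(spikes, dv_per_spike=0.12, dt=1e-4, t_end=0.02):
    """Return (fired, final_v) for a list of spike instants (seconds)."""
    v, fired, t = V_EQ, False, 0.0
    pending = sorted(spikes)
    while t < t_end:
        v = V_EQ + (v - V_EQ) * math.exp(-dt / TAU)  # leak toward V_EQ
        while pending and pending[0] <= t:
            v += dv_per_spike                        # spatial summation
            pending.pop(0)
        if v >= V_THRESH:
            fired = True
            break
        t += dt
    return fired, v

# Two near-coincident spikes from different presynaptic neurons sum
# and fire the neuron; the same two spikes far apart decay first.
assert integrate([0.001, 0.002])[0]
assert not integrate([0.001, 0.019])[0]
```

This reproduces the behavior of waveform (c): only concurrent stimulation pushes CM over the firing threshold.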

4.2 Learning Components

One important aspect of the physiology of biological neural networks is the ability to learn from previous experience. As mentioned before, natural synapses can change their efficiency by increasing the presynaptic spike strength and by modifying the receiving sensitivity of the postsynaptic membrane. For the electronic neuron considered in this work, an important task solved by the efficiency control module, whose schematic is presented in figure 4, was the simulation of the natural mechanism for synaptic weight adjustment using only basic electronic components, while keeping the schematic as simple as possible. As mentioned in the previous chapter, short-term potentiation (STP), long-term potentiation (LTP) and post-tetanic potentiation (PTP) make an important contribution to the dynamics of the network weights. The synaptic efficiency is stored in a learning capacitor (LC) whose charge is a consequence of the neuron's previous activity. To simplify the schematic design, the maximum charge of the LC represents the minimum efficiency of the synapse, and the minimum voltage reached by the LC during normal operation of the electronic neuron models the maximum weight of the synapse. When the neuron is activated, the charge stored by the LC affects the impulse duration, the corresponding voltage being used as input by the SGM. The variation of the synaptic strength is modeled by the exponential functions that describe the capacitor's charging and discharging: the charging follows function (1), while the discharging follows function (2). For the artificial model, the operating voltage V0 and the duration of the neuron activation were fixed during the neuron design, while the parameters R and C were chosen to model each type of synaptic efficacy alteration detailed below.

V(t) = V0 [1 − exp(−t/RC)]    (1)

V(t) = V0 exp(−t/RC)    (2)
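Equations (1) and (2) can be evaluated directly. The component values below are illustrative only; in the paper, V0 and the activation duration are fixed by the design and R, C are chosen per type of efficacy alteration.

```python
import math

def v_charge(t, v0, r, c):
    """Eq. (1): capacitor charging toward the operating voltage V0."""
    return v0 * (1.0 - math.exp(-t / (r * c)))

def v_discharge(t, v0, r, c):
    """Eq. (2): capacitor discharging from V0 toward zero."""
    return v0 * math.exp(-t / (r * c))

V0, R, C = 5.0, 10e3, 100e-9   # assumed: 5 V, 10 kΩ, 100 nF → τ = 1 ms
tau = R * C
# After one time constant the capacitor reaches ~63.2 % of V0 when
# charging and falls to ~36.8 % of V0 when discharging.
assert abs(v_charge(tau, V0, R, C) - V0 * 0.632) < 0.01
assert abs(v_discharge(tau, V0, R, C) - V0 * 0.368) < 0.01
```

Choosing R and C therefore sets how quickly a synaptic weight rises or decays, which is exactly the degree of freedom the design exploits.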

Figure 4. Artificial neuron learning mechanism. The integration module (IM) determines the STP and PTP by saturating the NPN transistor, and the postsynaptic neuron (PSN) triggers the LTP. The synaptic efficacy stored by the capacitor LC is given as input to the spike generation module (SGM).

To simulate STP, the LC is temporarily discharged during neuron activation by an amount that represents the STP learning rate. The charge lost by the LC is stored in another capacitor, CS. If the specific condition for transforming the STP into LTP does not occur (in this case, the activation of the postsynaptic neuron, PSN), the LC is continuously recharged from CS to almost its pre-firing voltage. The duration of this process is given by the parameter RS. The difference between the pre-firing and post-STP voltages models the PTP of the natural neuron, which represents the presynaptic component of learning. To model another aspect of natural neuron physiology, the PTP rate, determined by RV, was chosen to be much lower than the STP rate [1]. If the postsynaptic neuron is activated during the temporary discharge of the LC, which models STP, the capacitor voltage is fixed at its current value by the sudden discharge of CS. This stops the decrease of the synaptic strength, in concordance with biological LTP. Thus, presynaptic neuron firing produces a temporary gain in the weights of the activated synapses, which is lost after a period of time if there is no postsynaptic activity. When the postsynaptic neuron reaches its firing threshold, the strengths of the incoming synapses that participate in this activation are fixed at their current values. Therefore, the synapses activated before the postsynaptic potential (PSP) remain strengthened, while the efficacies of the synapses activated after the PSP remain unchanged. Considering all of the above, the variation of the LC charge mimics the dynamics of the AMPA channels, which is the postsynaptic component of learning, as well as the increase in the quantity of mediator released from the presynaptic membrane during neuron activation. Another important feature of the artificial synapse is that the LC is charged continuously, at a very low rate, by the reverse current of the diode D, making the synaptic strength decrease in the absence of activity. This property was introduced to mimic synaptic depression. Although it is not known whether natural inhibitory synapses are capable of plasticity [4], in this model STP and LTP were implemented for inhibitory neurons as well.
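At event level, the LC/CS interplay described above can be sketched as follows. A high LC voltage means a weak synapse and a low LC voltage a strong one; the step sizes are illustrative assumptions, not circuit values.

```python
# Event-level sketch of the learning-capacitor (LC) dynamics.
# High LC voltage = weak synapse, low LC voltage = strong synapse.
class Synapse:
    STP_STEP = 1.0     # temporary drop of LC per presynaptic spike (assumed)
    PTP_RESIDUE = 0.1  # part of the STP drop never recovered (assumed)

    def __init__(self, v_lc=5.0):
        self.v_lc = v_lc   # learning-capacitor voltage (weight store)
        self.v_cs = 0.0    # charge parked on CS during the STP window

    def presynaptic_spike(self):
        """STP: LC is temporarily discharged; the charge is held on CS."""
        self.v_lc -= self.STP_STEP
        self.v_cs += self.STP_STEP

    def window_elapsed(self):
        """No postsynaptic spike: LC recharges from CS to almost its
        pre-firing value; the small residue models PTP."""
        self.v_lc += self.v_cs - self.PTP_RESIDUE
        self.v_cs = 0.0

    def postsynaptic_spike(self):
        """LTP: CS is suddenly discharged, freezing LC at its current
        (strengthened) value."""
        self.v_cs = 0.0

# A synapse followed by a postsynaptic spike keeps the full STP gain
# (LTP); one that is not keeps only the small PTP residue.
ltp, ptp = Synapse(), Synapse()
ltp.presynaptic_spike(); ltp.postsynaptic_spike()
ptp.presynaptic_spike(); ptp.window_elapsed()
assert ltp.v_lc == 4.0              # full strengthening retained
assert abs(ptp.v_lc - 4.9) < 1e-9   # only the PTP residue remains
```

The slow diode-driven recharge (synaptic depression) could be added as one more event that raises `v_lc` at a very low rate.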

To illustrate the ability of these electronic synapses to change their efficiency, figure 5 shows the input voltage of a postsynaptic neuron stimulated by three presynaptic neurons. The stimulation consists of a set of three excitatory impulses, the second of which activates the postsynaptic neuron. The significant difference between the influences of the first and third spikes is therefore easy to spot on the input potential of the postsynaptic neuron. For the first spike the strength increase was caused by LTP, triggered by the activation of the postsynaptic neuron, while for the third spike the synaptic efficacy was increased only by the PTP process.

Figure 5. The difference between LTP and PTP: the first increase in CM voltage is produced by a synapse potentiated by LTP, while the voltage increase after firing is a consequence of a synapse strengthened by PTP.

5 Artificial Neuron Implementation

The first step in modeling the natural neuron physiology was SPICE simulation of the circuit operation, which provided good results for the neuron development; for large networks, however, the simulation became time consuming [5]. A board design therefore represented a better solution for studying and testing the network under real conditions. However, because of its large physical dimensions, this hardware implementation becomes difficult to handle for large networks of such neurons. As previously mentioned, the artificial neuron schematic was developed using only electronic components such as transistors, diodes, resistors and capacitors, which are suitable for silicon implementation [6], [7]. This will help in designing an analog integrated chip with the same operation as the board, while taking advantage of a substantially reduced size, which is one of the future work goals. An important aspect of this approach is that the neuron model uses nanofarads of capacitance, which is more than a single die can economically include in a normal fabrication process [6]. The manufacture of the integrated chip therefore requires a special dielectric such as barium strontium titanate [9]. Another aspect to be taken into consideration is the replacement of the LC with a floating-gate transistor, which represents a better solution for on-chip non-volatile storage of the synaptic weights [10]. With this approach, the integration of a network with 10 000 neurons would take about four square centimeters of die area.

6 Vowel Discrimination Experiment

The biological ear splits the audio signal into frequency channels using the resonance of different segments of the basilar membrane and sends channel-dependent impulses to the brain [11]. Experiments on human subjects showed that speech can be understood even when only some of these channels are presented to the subjects [8]. Thus, each sound can be identified by its specific audio frequencies and, since spoken vowels are among these sounds, the experiment presented below focuses on this basic aspect of speech recognition.
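The frequency-channel idea can be sketched with a single-bin DFT per channel: a channel "fires" when the signal contains enough energy at its centre frequency, standing in for a basilar-membrane resonance. The channel frequencies, sample rate and threshold are illustrative assumptions.

```python
import cmath, math

FS = 8000                   # assumed sample rate (Hz)
CHANNELS = [400, 800, 1600] # assumed channel centre frequencies (Hz)

def channel_amplitude(signal, f):
    """Single-bin DFT: amplitude of the signal at frequency f."""
    n = len(signal)
    acc = sum(x * cmath.exp(-2j * math.pi * f * k / FS)
              for k, x in enumerate(signal))
    return 2 * abs(acc) / n

def active_channels(signal, threshold=0.5):
    """One boolean per channel: does its band carry significant energy?"""
    return [channel_amplitude(signal, f) > threshold for f in CHANNELS]

# A synthetic "vowel" built from 400 Hz and 800 Hz partials activates
# the first two channels only.
t = [k / FS for k in range(800)]   # 0.1 s of signal
vowel = [math.sin(2 * math.pi * 400 * tk) + math.sin(2 * math.pi * 800 * tk)
         for tk in t]
assert active_channels(vowel) == [True, True, False]
```

In the experiment that follows, this channel-activation step is replaced by a microcontroller that injects the spikes directly.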

6.1 Network Structure

To test the analog design of the learning mechanism developed for this artificial neuron, a network of excitatory neurons able to discriminate spoken vowels was considered. The simple topology of this neural network, shown in figure 6, was implemented to illustrate the main principles of concurrent event association. The presynaptic neurons aFN1, aFN2, eFN1 and eFN2 were stimulated by a microcontroller programmed to simulate the activation of frequency channels. Each channel activates one of the presynaptic neurons, and the interval between the spikes generated for one vowel is 5 ms. This time interval helps to illustrate the ability of the artificial network to associate concurrent events. The training is performed by repetitive stimulation of the network input layer with the pair of spikes specific to each vowel; for example, the neurons eFN1 and eFN2 are activated for the vowel 'e'. At the beginning of the experiment, all the synaptic strengths were set to the minimum, meaning that they have no influence on the postsynaptic neurons when the presynaptic neurons are activated.
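The training principle can be run as a toy simulation: synapses that are active together when their output neuron fires are strengthened by LTP, while synapses active onto a silent neuron gain only the small PTP residue. The synapse-to-neuron wiring below is an assumption inferred from figure 6, and the learning rates are illustrative.

```python
# Toy run of the association training. Names follow figure 6; the
# wiring and the gain values are assumptions for illustration.
SYNAPSES = {"eS1": ("eFN1", "aN"), "eS2": ("eFN1", "eN"),
            "eS3": ("eFN2", "eN"),
            "aS1": ("aFN1", "aN"), "aS2": ("aFN1", "eN"),
            "aS3": ("aFN2", "aN")}
LTP_GAIN, PTP_GAIN = 1.0, 0.1   # assumed per-iteration gains

def train(weights, vowel_inputs, fired_output, epochs=10):
    """Strengthen active synapses; LTP if their target neuron fired."""
    for _ in range(epochs):
        for name, (src, dst) in SYNAPSES.items():
            if src in vowel_inputs:
                gain = LTP_GAIN if dst == fired_output else PTP_GAIN
                weights[name] += gain
    return weights

w = {name: 0.0 for name in SYNAPSES}
train(w, {"eFN1", "eFN2"}, "eN")   # present vowel 'e'; eN fires
# Concurrent synapses onto the firing neuron eN end up much stronger
# than eS1, which was active but did not drive its target neuron.
assert w["eS2"] == w["eS3"] > w["eS1"] > 0.0
assert w["aS1"] == 0.0             # unused synapses stay unchanged
```

The same run with the 'a' channels and aN as the firing neuron reproduces the symmetric case described in the experimental results.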

Figure 6. Network architecture for the classification of simulated frequency channels. The output neurons (aN, eN) perform the association of the concurrently activated input neurons (aFN1, aFN2, eFN1, eFN2).

6.2 Experimental Results

During stimulation of the network through the channels specific to the vowel 'e', the strengths of the synapses eS1, eS2 and eS3 increase in an activity-dependent manner, as illustrated by the signal diagrams in figure 7. Waveform (a) shows the influence of the concurrently activated synapses eS2 and eS3 on the input of neuron eN during the last iteration of the network training. As can be observed, the strength of these synapses reaches a value that triggers the activation of the postsynaptic neuron. This increase in synaptic efficacy is due to the concurrent firing of the neurons eFN1 and eFN2, the latter activating the postsynaptic neuron eN. On the other hand, the synapse eS1, which stimulates the postsynaptic neuron aN during the eFN1 action potential, is strengthened less than the concurrent ones. The input potential of neuron aN is shown by diagram (b) in figure 7, where the voltage step illustrates the activation of eS1. Therefore, after training, the synapse eS2, which participates in a postsynaptic neuron firing, is strengthened more than the synapse eS1, which did not participate in a neuron activation. The main factor contributing to this difference in efficacy gain between the synapses eS1 and eS2 is the LTP process triggered by the action potential of the postsynaptic neuron eN. The artificial neural network behaves in the same way when stimulated through the channels specific to the vowel 'a'. After training, the concurrently activated synapses aS1 and aS3, because of the firing of the postsynaptic neuron aN, are strengthened more than the singly activated synapse aS2, which stimulates the non-activated neuron eN.

7 Conclusions

This paper describes a new biologically inspired electronic model of neurons. The main goal of this design is to obtain large neural networks without the disadvantage of increased computation time. The model uses simple electronic components, which makes it suitable for analog integrated chip design while providing a good simulation of the crucial features of the natural neuron. The basic electronic components can be seen as high-complexity program functions governed by physical laws and executed in parallel, which, linked in a circuit, provide the overall operation of the neuron. The same neuron behavior can be obtained when this circuit is simulated in software, but in that case the computation time increases substantially. The analog design of the electronic neuron offers almost unlimited power for temporal event discrimination. External events are reflected in the neural network activity, and concurrent stimulation modifies the synaptic configuration of the network. The artificial synaptic efficiency depends on the charge of a capacitor, which is a simple solution for modeling the activity-dependent alteration of the synaptic geometry. As shown by the classification experiment, the concurrently activated synapses become more efficient than the synapses activated in isolation, while the sensitivity of the unused ones remains the same or decreases. Thus, the network can develop its own topology, which depends only on its previous activity. Considering that the synaptic configuration of the brain is a consequence of the experience gained over time under the physical laws of the natural environment, it is possible for an electronic network, with electricity as its source of energy, to build the same synaptic configuration as a biological one. This would be useful in neuroscience, when the activity of different parts of the brain must be simulated, or for developing the control modules of intelligent machines.

Figure 7. The CM voltages of the postsynaptic neurons during stimulation through the 'e' channels after network training: (a) eN is activated by the synapses eS2 and eS3; (b) the influence of the synapse eS1 on the neuron aN.

8 References

[1] R. O'Reilly and Y. Munakata, "Computational Explorations in Cognitive Neuroscience", Cambridge MA: The MIT Press, pp. 32-37, September 2000.

[2] M. Baudry, J.L. Davis and R. Thompson, "Advances in Synaptic Plasticity", Cambridge MA: The MIT Press, pp. 133-137, November 1999.

[3] W. Maass, C.M. Bishop, "Pulsed Neural Networks", Cambridge MA: The MIT Press, pp. 23-42, November 1998.

[4] W. Swiercz, K.J. Cios, "A New Synaptic Plasticity Rule for Networks of Spiking Neurons", IEEE Transactions on Neural Networks, vol. 17, issue 1, pp. 94-99, January 2006.

[5] M. Hulea, "A Biologically Inspired Rule for Electronic Synapses Plasticity", SACCS 2007 Proceedings, Editura Politehnium, Iasi, Romania, pp. 89-93, November 2007.

[6] A. Hastings, "The Art of Analog Layout", Prentice-Hall, Upper Saddle River, New Jersey, pp. 77-84, 2001.

[7] C. Saint, J. Saint, "IC Layout Basics: A Practical Guide", McGraw-Hill, New York, pp. 143-222, 2002.

[8] P.C. Loizou, M. Dorman, "On the Number of Channels Needed to Understand Speech", University of Texas at Dallas, pp. 1-15.

[9] M. Azuma, "Barium Strontium Titanate Integrated Circuit Capacitors and Process for Making the Same", US Patent 6285048, pp. 14.

[10] P. Hastler, J. Dugger, "Analog VLSI Implementations of Neural Networks", The Handbook of Brain Theory and Neural Networks, Cambridge MA: The MIT Press, pp. 1-3, November 2002.

[11] K.N. Stevens, "Acoustic Phonetics", Cambridge MA: The MIT Press, pp. 204-214, January 1999.
