NOLTA, IEICE Invited Paper

Silicon neuronal networks towards brain-morphic computers

Takashi Kohno 1 a), Jing Li 1 b), and Kazuyuki Aihara 1 c)

1 Institute of Industrial Science, University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo 153-8505, Japan

a) [email protected]
b) [email protected]
c) [email protected]

Received December 24, 2013; Published July 1, 2014

Abstract: Neuromorphic systems are designed by mimicking, or taking inspiration from, the nervous system, which realizes robust, autonomous, and power-efficient information processing with a highly parallel architecture. They are candidates for next-generation computing systems, which are expected to achieve advanced information processing ability through power-efficient, parallel architectures. A silicon neuronal network is a neuromorphic system with the most detailed level of analogy to the nervous system. It is a network of silicon neurons connected via silicon synapses; these are electronic circuits that reproduce the electrophysiological activity of neuronal cells and synapses, respectively. There is a trade-off between fidelity to neuronal and synaptic activities on the one hand and the simplicity and power consumption of the circuit on the other. Power-efficient, simple silicon neurons assume uniform spikes, but biophysical experimental data suggest that the variety of spikes arriving at a synapse may play a role in the information processing in the brain. In this article, we review our design approach for silicon neuronal networks in which uniform spikes are not assumed. The simplicity of the circuits comes from mathematical techniques of qualitative neuronal modeling. Though our circuits are neither simpler nor lower in power consumption than the silicon neurons above, they are expected to be more appropriate for silicon neuronal networks applied to brain-morphic computing.

Key Words: silicon neuronal networks, neuromorphic system, brain-morphic computing, qualitative neuron models

Nonlinear Theory and Its Applications, IEICE, vol. 5, no. 3, pp. 379–390
© IEICE 2014
DOI: 10.1587/nolta.5.379

1. Introduction
The information-processing capacity of digital computing systems has expanded drastically over the past half-century. Its major driving force was the rapid evolution of the semiconductor fabrication process, which has been slowing over the past ten years because of restrictions imposed by physical laws. This situation is one of the factors behind the growing importance of parallelism in digital computing systems as a key to the further enhancement of their information-processing capacity. The number of parallel processing units in recent digital computing systems is increasing rapidly. For example, general-purpose Graphics Processing Units (GPUs) have brought several thousands of processing units even into personal computers, and efficient interconnection among a massive number of processors is a crucial technology for high-performance supercomputers, including the Titan, Sequoia, and K systems [1], which are composed of about 1 million processors. Because massive parallelism is proving its potential, it can be imagined that the number of processing units in a computing system might reach, or even exceed, the number of neurons in a human brain in the future.

Another stream of parallel computing architecture is formed by neuromorphic systems. This is a class of systems whose principles mimic, or are inspired by, the information processing scheme of the nervous system. The nervous system is a robust, autonomous, and power-efficient information processing system constructed from networks of neuronal cells. Neuromorphic systems aim to inherit this parallelism and these advantageous properties. A silicon neuronal network is a neuromorphic hardware system with the most detailed level of analogy to the nervous system. It is an artificial neuronal network composed of silicon neuron and synapse circuits that respectively reproduce the electrophysiological activities of neuronal cells and synapses in real time or faster. Because it reproduces the most general structure of the nervous system, it is expected to be a universal platform for neuromorphic systems. In 1952, the cornerstone work by Hodgkin and Huxley [2] showed that the electrophysiological activity of a neuronal cell membrane can be modeled quantitatively by differential equations that describe the dynamics of the cell membrane's ionic conductances. Silicon neurons that implement ionic-conductance models in analog integrated circuits have been reported to precisely reproduce the original neuronal cell's activity [3–6].
However, because ionic-conductance models are in principle composed of complex nonlinear equations with a relatively large number of variables, solving their equations requires complex, power-hungry circuitry. Because analog integrated circuits are sensitive to fabrication mismatch, analog silicon neuron circuits require post-fabrication configuration by externally applied parameter voltages or currents. One of the most serious problems in this class of circuits is the difficulty of finding appropriate parameters, which arises from the complexity of the circuitry. One research group proposed utilizing a heuristic method, the differential evolution algorithm, and reported successful results [3]. Though it takes a relatively long time to find an appropriate parameter set, it makes this class of circuit a powerful solution in application fields where a carbon-copy silicon neuron of a specific cell is required, such as neuromorphic biomedical devices and hybrid systems. Another class of silicon neurons, by contrast, implements neuron models that are far simpler than the ionic-conductance ones. They originate from the Integrate-and-Fire (I&F) model, whose simplicity is attributed to a bold approximation of the neuronal spiking process: the reset of a state variable that corresponds to the membrane potential. Here, neuronal spikes are assumed to always have the same shape and magnitude. A variety of I&F-model-based silicon neurons with simple, low-power circuitry have been developed [7–10]. They incorporate circuits for dynamics slower than the spiking process, which modulate the spiking activity and produce more complex behavior, including spike-frequency adaptation and autonomous bursting. Because of the simplicity and low power consumption of their circuitry, the number of silicon neurons integrated in a silicon chip can be increased. For example, orientation detection was realized in a silicon neuronal network chip with 32 silicon neurons [11].
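The reset approximation at the heart of the I&F family can be sketched in a few lines. The following leaky integrate-and-fire simulation is only illustrative; the constants (tau, v_th, the stimulus values) are placeholders, not taken from any of the cited circuits.

```python
def lif_spike_times(i_stim, t_max=1.0, dt=1e-4,
                    tau=0.02, v_rest=0.0, v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: tau * dv/dt = -(v - v_rest) + i_stim.
    The spike itself is not modeled; crossing v_th simply resets v,
    so every 'spike' is implicitly identical in shape and magnitude."""
    v = v_rest
    spikes = []
    for step in range(int(t_max / dt)):
        v += dt / tau * (-(v - v_rest) + i_stim)
        if v >= v_th:          # threshold crossing stands in for the spike
            spikes.append(step * dt)
            v = v_reset        # reset replaces the spike downstroke
    return spikes

# A subthreshold stimulus never fires; a stronger sustained
# stimulus yields a higher firing rate.
assert lif_spike_times(0.5) == []
assert len(lif_spike_times(3.0)) > len(lif_spike_times(1.5)) > 0
```

The point of the sketch is structural: all information a downstream synapse could receive is the list of threshold-crossing times, which is exactly the uniform-spike assumption discussed above.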
Qualitative neuronal modeling has elucidated that there are more than two types of spike-generation dynamics [12, 13], which cover the Class I and II neurons of Hodgkin's classification [14]. When a sustained current stimulus is applied, Class I neurons can fire over a wide range of frequencies depending on the strength of the stimulus, and the frequency can be arbitrarily low when the stimulus is sufficiently weak. Class II neurons, on the other hand, fire in a relatively narrow range of frequencies and cannot fire at very low frequencies. The Hopf bifurcation produces the Class II property in the Hodgkin-Huxley model and is thought to be one of the major spike-generation dynamics in Class II neurons. Because the Hopf bifurcation requires at least two system variables, the I&F-based models above cannot realize this type of Class II neuron. The Izhikevich model [15] has two-variable spike-generation dynamics whose equations are simplified by the reset of state variables. Several silicon neurons [16–18] implement this model, which is becoming a major trend because it can approximately reproduce a variety of neuronal activities. Dedicated digital circuits that calculate this model efficiently have also been developed [19–21]. We proposed designing a silicon neuron model using mathematical techniques similar to those of qualitative neuronal modeling, but without the reset of state variables [22–27]. Here, the simplicity of the circuit is realized by the model's affinity to the characteristic curves of the circuits. The concept and advantages of our design approach are explained in the second section. Our analog and digital silicon neuron circuits are reviewed in Secs. 3 and 4, respectively, and then the conclusion follows.
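The reset-based two-variable dynamics mentioned above can be made concrete with a forward-Euler sketch of the Izhikevich model. The parameter set (a = 0.02, b = 0.2, c = −65, d = 8) is the regular-spiking set commonly quoted in [15]; the step size, stimulus values, and duration below are illustrative choices, not from any cited implementation.

```python
def izhikevich(i_stim, t_max=300.0, dt=0.25,
               a=0.02, b=0.2, c=-65.0, d=8.0):
    """Izhikevich model (regular-spiking parameters), Euler integration.
    dv/dt = 0.04 v^2 + 5v + 140 - u + I,  du/dt = a (b v - u);
    spiking is approximated by resetting (v, u) when v reaches 30 mV."""
    v, u = c, b * c
    n_spikes = 0
    for _ in range(int(t_max / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_stim)
        u += dt * a * (b * v - u)
        if v >= 30.0:          # reset stands in for the spike downstroke
            v, u = c, u + d
            n_spikes += 1
    return n_spikes

assert izhikevich(0.0) == 0       # rest: stable equilibrium, no spikes
assert izhikevich(10.0) > 0       # sustained stimulus: tonic spiking
assert izhikevich(10.0) >= izhikevich(5.0)
```

Even with two variables, the spike downstroke is still replaced by an instantaneous reset; this is the simplification our design approach avoids.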

2. Qualitative silicon neuronal modeling
The dynamics of neuronal activities has been elucidated and classified from the viewpoint of nonlinear dynamics theory [12, 13]. Topological structures in phase portraits and bifurcation diagrams provide the tools to describe its texture. Meanwhile, it is still not well clarified which features of neuronal activities play the key roles in the information processing in the nervous system. Thus, silicon neuron and synapse models for brain-morphic information processing systems have to be designed carefully so that they inherit the features of neuronal activity as authentically as possible. As mentioned above, neuronal spikes are assumed to always have the same shape and magnitude in the I&F-based models, including the Izhikevich model, although biophysical experimental results suggest that the variation in the spikes may play important roles in the information processing in the nervous system [28, 29]. In addition, it has been pointed out that structures in the phase portrait of the spike-generation system can critically affect the activity of neuronal networks. If a Class I neuron model has a structure named a narrow channel and meets several requirements, a network of such neurons connected by gap junctions (GJs) produces a wide variety of complex behavior, including intermittent chaos [30, 31], even though the isolated neuron model is an apparently regular Class I neuron. These facts suggest that the trade-off between the inherited features and the simplicity of the model may have another point of compromise, one that provides an advantageous solution particularly for silicon neuronal networks applied to brain-morphic computing. In a series of our works [22–27, 32, 33], we proposed designing a silicon neuron model by compiling a qualitative neuron model using formulae that can be implemented efficiently by electronic circuits.
Because silicon neuron models designed with this approach do not approximate the spiking dynamics by resetting state variables, they can produce the graded responses of Class II neurons, in which the spike shape depends on the stimulus. We have designed and verified two types of silicon neuron circuits. One is implemented in ultra-low-power CMOS (Complementary Metal Oxide Semiconductor) analog very-large-scale integrated (aVLSI) circuit technology. The other is implemented by digital arithmetic circuits in a field-programmable gate array (FPGA) device.

3. An analog silicon neuron
We have designed a silicon neuron model [23] that can realize several classes of neuronal activities, including Class I and II in Hodgkin's classification, spike-frequency adaptation, square-wave bursting, and elliptic bursting. Its equations are composed of the characteristic curves of basic, mature, low-power CMOS circuits: the differential pair, the transconductance amplifier, and the τ-cell [34], where the field-effect transistors (FETs) are operated in their subthreshold domain. The equations of this model are as follows:

Cv dv/dt = −g(v) + fm(v) − n − q + Ia + Istim,   (1)
dn/dt = (fn(v) − n)/τn,   (2)
dq/dt = (fq(v) − q)/τq,   (3)

where variables v and n represent the membrane potential and the abstracted activity of ionic channels, respectively. They compose the fast subsystem that produces the spiking dynamics. Variable q represents the slow hyperpolarizing current of the slow negative feedback. Parameters Cv , Ia , τn , and


Fig. 1. Elemental circuits in our silicon neuron. Circuits for (a) fx (v) and (b) g(v). (c) τ -cell (integrator) circuit. All the transistors operate in their sub-threshold domain.

τq are the membrane capacitance, a constant leak current, and the time constants of n and q, respectively. Functions fx(v) (x = m, n, or q) and g(v) are the idealized characteristics of the differential pair and the transconductance amplifier, whose circuits are illustrated in Figs. 1(a) and (b), and are described as follows:

fx(v) = Mx / (1 + exp(−(κ/UT)(v − δx))),   (4)
g(v) = S (1 − exp(−(κ/UT)(v − θ)/2)) / (1 + exp(−(κ/UT)(v − θ)/2)),   (5)

where UT is the thermal voltage (approximately 26 mV at room temperature) and κ is the capacitive-coupling ratio, which depends on the fabrication process and the operating condition of the MOSFETs (between 0.6 and 1.0 in most cases). Parameters Mx, δx, S, and θ are specified by externally applied voltages VMx, δx, VS, and θ in Figs. 1(a) and (b). Equations (1)–(3) are solved by a circuit whose block diagram is drawn in Fig. 2. Here, in the fast subsystem (the lower block), which is responsible for the spiking dynamics, the outputs of the current generators fm(v), g(v), and Ia are integrated by capacitor Cv, whose voltage corresponds to v, whereas the output of the other current generator fn(v) is integrated by a τ-cell (Fig. 1(c)). They solve Eqs. (1) and (2), respectively. The last equation of our model, Eq. (3), is solved in the q-block, where the output of current generator fq(v) is integrated by a τ-cell. The τ-cell is a well-known integrator circuit that solves the following equation:

dIout/dt = (Iin − Iout)/τ,   (6)

Fig. 2. Block diagram of our analog VLSI silicon neuron. The spiking dynamics is produced by the fast subsystem (lower block). The q-block produces slow dynamics that modulates it. The voltage clamp amplifier (top block) provides a means to draw the v-, n-, and q-nullclines experimentally.

where τ is a time constant controlled by current Iτ in Fig. 1(c). Currents Iτv and Iτn as well as parameter Ia are specified by voltages applied externally to integrated V-I converters. Stimulus current Istim is applied externally by Vstim in the same way. We fabricated this circuit in the Taiwan Semiconductor Manufacturing Company (TSMC) 0.35 μm mixed-signal CMOS process. In the fast subsystem (Eqs. (1) and (2) with q assumed to be 0), the v-nullcline is cubic-like and the n-nullcline is sigmoidal. Appropriate selection of parameters produces saddle-node on invariant circle, Hopf, or saddle-loop homoclinic bifurcations. It is known that a spiking system is Class I (or Class II) when its spiking dynamics, a limit cycle in the phase plane, emerges via a saddle-node on invariant circle (or a Hopf) bifurcation. Figures 3(a)–(d) list bifurcation diagrams of v and the spiking frequency obtained from experimental results. In the Class I mode, an equilibrium is replaced by a limit cycle near Vstim = 1 mV (Fig. 3(a)). As shown in Fig. 3(b), the frequency of the limit cycle increases smoothly from 0 Hz. In experiments, we could reduce it down to about 1 Hz, but when the frequency was low the spike intervals varied considerably due to noise. In the Class II mode, bistability between an equilibrium and a limit cycle was observed. When the stimulus was increased, the equilibrium was replaced by the limit cycle when Vstim was about 19.5 mV, whereas when the stimulus was decreased, the limit cycle vanished when Vstim = 18.5 mV (Fig. 3(c)). This conforms to the typical structure in Class II neurons, in which a limit cycle emerges via a subcritical Hopf bifurcation when the stimulus is increased and vanishes via a saddle-node bifurcation of limit cycles at a lower value of the stimulus when it is decreased. As shown in Fig. 3(d), the firing frequency could not be decreased below 30 Hz. The slow variable q appears on the right-hand side of Eq. (1) with a negative coefficient.
Because fq(v) is a monotonically increasing sigmoidal curve, it provides a negative feedback to the fast subsystem. When the q-block is activated, the dynamical structure of our silicon neuron model can be illustrated in the v-q plane, where the bifurcation diagram of the fast subsystem with bifurcation parameter q is drawn together with the q-nullcline. A v-q plane is shown in Fig. 4(a), where another type of bistability exists in the fast subsystem. An equilibrium vanishes via a saddle-node bifurcation when q is decreased, and a limit cycle emerges via a saddle-loop homoclinic bifurcation before that. An n-v phase plane when bistability exists (q = 30 pA) is illustrated in Fig. 4(b). These figures were generated by numerical methods applied to Eqs. (1)–(3) using the XPPAUT software. Because dq/dt is positive (negative) above (below) the q-nullcline, the system state alternates between the spiking state (the limit cycle) and the silent state (the equilibrium). Autonomous bursting cells in the pre-Bötzinger complex [35] and heart interneurons in the leech [36] are known to have this kind of dynamics, called square-wave bursting. The Hindmarsh-Rose (1984) model [37], a qualitative neuron model, also has such dynamics. An


Fig. 3. Bifurcation diagrams of the fast subsystem. Red (blue) curves represent limit cycles (equilibria). (a) and (b) are in the Class I mode; (c) and (d) are in the Class II mode. In (e), a limit cycle emerges via a saddle-loop homoclinic orbit bifurcation. Bistability between a limit cycle and an equilibrium exists in the Class II mode and in (e).

Fig. 4. (a) A q-v plane of Eqs. (1)–(3). (b) An n-v phase plane of the fast subsystem of Eqs. (1)–(3) when q is fixed at 30 pA.

experimental result in the square-wave bursting mode of our circuit is shown in Fig. 5(a). The frequency of a limit cycle is zero at the saddle-loop homoclinic bifurcation point and increases smoothly as the parameter is changed. Thus, spike-frequency adaptation is realized if the q-nullcline in the v-q plane of square-wave bursting is modified so that the integral of dq/dt over a limit cycle is zero near the bifurcation point. Figure 5(b) shows an experimental result in the spike-frequency


Fig. 5. Experimental results of our analog silicon neuron circuit. (a) Square-wave bursting, (b) spike-frequency adaptation, and (c) elliptic bursting modes.

adaptation mode of our circuit. Here, stimulus current Istim is a positive step current that starts at t = 40 ms. Elliptic bursting is another class of bursting activity, in which a Class II spiking system has slow negative-feedback currents. The bistability in the Class II mode plays a key role, in a similar way to square-wave bursting. As described above, the fast subsystem of our silicon neuron can realize Class II dynamics, and we realized elliptic bursting experimentally, as shown in Fig. 5(c). In the above results, the power consumption of the silicon neuron circuit was not larger than 50 nW. Experimental results in the square-wave bursting mode were briefly reported in [38], and detailed results are to be published in the near future.
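The circuit-friendly nonlinearities of Eqs. (4) and (5) are easy to check numerically. The sketch below uses κ = 0.7 and unit amplitudes as illustrative values (not the fabricated circuit's parameters) and verifies that Eq. (5) is the familiar tanh characteristic of the subthreshold transconductance amplifier.

```python
import math

UT = 0.026   # thermal voltage, ~26 mV at room temperature
KAPPA = 0.7  # capacitive-coupling ratio, 0.6-1.0 in most cases (assumed)

def f_x(v, M=1.0, delta=0.0):
    """Differential-pair characteristic, Eq. (4): a sigmoid in v."""
    return M / (1.0 + math.exp(-(KAPPA / UT) * (v - delta)))

def g(v, S=1.0, theta=0.0):
    """Transconductance-amplifier characteristic, Eq. (5)."""
    e = math.exp(-(KAPPA / UT) * (v - theta) / 2.0)
    return S * (1.0 - e) / (1.0 + e)

# Eq. (5) is algebraically identical to a scaled tanh curve:
v = 0.015
assert abs(g(v) - math.tanh((KAPPA / UT) * v / 4.0)) < 1e-12
# Both characteristics saturate: f_x to [0, M], g to [-S, S].
assert 0.0 < f_x(-0.3) < 1e-3 and 1.0 - 1e-3 < f_x(0.3) < 1.0
assert abs(g(1.0) - 1.0) < 1e-3 and abs(g(-1.0) + 1.0) < 1e-3
```

It is this saturating, sigmoid/tanh-shaped behavior that lets the model equations be mapped almost directly onto the characteristic curves of the differential pair and transconductance amplifier, rather than requiring extra circuitry to synthesize arbitrary nonlinearities.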

4. A digital silicon neuronal network
We designed a simple silicon neuron model [22, 33], the digital spiking silicon neuron (DSSN) model, that can be simulated efficiently by dedicated digital arithmetic circuits. Because the multiplier is one of the most resource-consuming circuits, the number of multiplications is reduced in this model, which is described as follows:

dv/dt = (φ/τ)(f(v) − n + I0 + Istim),   (7)
dn/dt = (1/τ)(g(v) − n),   (8)

where φ and I0 are constants and τ is a time constant. Stimulus current is represented by Istim. Functions f(v) and g(v) are

f(v) = an(v + bn)^2 − cn   when v < 0,
     = −ap(v − bp)^2 + cp   when v ≥ 0,   (9)
g(v) = kn(v − pn)^2 + qn   when v < r,
     = kp(v − pp)^2 + qp   when v ≥ r,   (10)

where ax, bx, cx, kx, px, qx, and r (x = n or p) determine the form of the nullclines. This model has spiking dynamics only, where the cubic-like form of the v-nullcline is realized by a piecewise-parabolic curve to reduce the number of multiplications between variables. The parameters are selected from powers of 2 or sums of two such numbers, so that each multiplication of a constant and a variable can be implemented by one shifter, or by two shifters and one adder. As a consequence, this model can be implemented with a single multiplier, the same number as in the implementation of the Izhikevich model in [19]. Figure 6 is the n-v phase plane where the nullclines in the Class I, II, and I∗ modes are drawn.

Fig. 6. An n-v phase plane of our digital silicon neuron model. The Class I, II, and I∗ modes share the common v-nullcline.

In [33], we reported simulation results of a network in which the silicon neurons are connected by simple GJ models. The network produced spatio-temporal chaos, depending on the resistance of the GJs (Rgj), only when the silicon neurons were in the Class I∗ mode. The largest Lyapunov exponent λ is plotted in Fig. 7. The neurons spiked synchronously when Rgj was sufficiently small and also when it was larger than about 22; spatio-temporal chaos, including intermittent chaos, was observed between these two regions, which conforms to the behavior of the GJ-connected Class I∗ neuron network in [31].

Fig. 7. The largest Lyapunov exponent of a gap-junction connected network composed of 20 silicon neurons. Chaotic behavior is observed only when the silicon neurons are in the Class I∗ mode.

In [22], we constructed a 256-neuron all-to-all connected network of DSSNs on an FPGA device. The silicon neurons are connected to each other via silicon synapses whose model has dynamics similar to those of chemical synapses. Its equation is

dIs/dt = α(1 − Is)   when [T] = 1,
       = −β Is   when [T] = 0,   (11)

where Is is the synaptic current and constants α and β correspond to the forward and backward rates of synaptic receptors. Transmitter release is represented by [T]; while the membrane potential v of the pre-synaptic neuron exceeds 0, [T] = 1, and otherwise [T] = 0. This synaptic model reflects the spike magnitude of pre-synaptic neurons in T[T], the time length

Fig. 8. Time length of transmitter release T[T] in our silicon synapse model when only one pre-synaptic silicon neuron is spiking periodically in response to sustained current Istim. The pre-synaptic silicon neuron is in (a) the Class I mode and (b) the Class II mode. In (c), the pre-synaptic silicon neuron is the Izhikevich model in the Class II mode. Only in (b) does T[T] reflect Istim.

Fig. 9. Successful recall rate in the associative memory task, calculated over 10 trials. The input vectors were generated by flipping the sign of randomly selected elements in a stored vector. The horizontal axis represents the number of these elements, the Hamming distance between the input and a stored vector, divided by its dimension (256).

when [T] = 1. Because the DSSN in the Class II mode has a graded response to the stimulus (the magnitude of spikes depends on the stimulus), when the pre-synaptic neuron is in the Class II mode, the information of the stimulus it receives is transmitted to the post-synaptic neuron. In Fig. 8, T[T] is plotted when sustained current Istim is applied to the pre-synaptic neuron. We can see this "analog" transmission, in which T[T] depends on Istim, particularly at the onset of spiking (see (b)); it does not exist in the Class I mode or in the Izhikevich model in its Class II mode (see (a) and (c)). Figure 9 summarizes the performance of our silicon neuronal network composed of 256 neurons as an associative memory. Here, 4 mutually orthogonal vectors were stored by correlation learning. Each vector is composed of 256 elements, and each element takes the value 1 or −1. The figure shows the rate of successful recall when a stored vector with errors is presented to the network. The input vectors were generated by flipping the sign of randomly selected elements in a stored vector. The horizontal axis, the error rate, was calculated by dividing the number of these elements by the total number of elements (256); it is identical to the Hamming distance between the input and stored patterns divided by 256. The blue (green) curve plots the success rate when all the DSSNs are in the Class I (II) mode. It can be seen that the network could recall the stored pattern at a larger error rate when the silicon neurons were in the Class II mode than when they were in the Class I mode. It has not been elucidated whether the "analog" transmission is the key to this performance boost; this will

be studied in future work. It was also reported that the silicon neurons spiked synchronously when the network recalled a stored pattern successfully, and that the synchronicity was low otherwise. Thus, we can determine whether the output of this associative network is one of the stored patterns or not. This feature is particularly advantageous when patterns are stored by unsupervised learning rules such as the Hebbian and spike-timing-dependent plasticity (STDP) rules. Silicon neuronal networks with these learning rules were developed and reported in [32].
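The synapse dynamics of Eq. (11) can be sketched directly. In the snippet below, the presynaptic trace, dt, α, and β are illustrative placeholders (the FPGA implementation uses its own fixed-point constants); the point demonstrated is that a longer suprathreshold interval T[T] drives the synaptic current Is higher, which is how spike width can reach the post-synaptic neuron.

```python
def synapse_current(v_pre, dt=1e-4, alpha=50.0, beta=20.0):
    """Silicon synapse of Eq. (11): dIs/dt = alpha * (1 - Is) while the
    presynaptic membrane potential exceeds 0 ([T] = 1), and -beta * Is
    otherwise ([T] = 0). Euler integration over a presynaptic v trace."""
    i_s, trace = 0.0, []
    for v in v_pre:
        if v > 0.0:                      # transmitter release: [T] = 1
            i_s += dt * alpha * (1.0 - i_s)
        else:                            # exponential decay: [T] = 0
            i_s += dt * (-beta * i_s)
        trace.append(i_s)
    return trace

# A wider suprathreshold pulse (longer T_[T]) drives Is higher, so a
# graded presynaptic spike leaves a graded trace on the synapse.
narrow = [0.1] * 50 + [-0.1] * 950     # 5 ms above 0 V, then silent
wide   = [0.1] * 200 + [-0.1] * 800    # 20 ms above 0 V, then silent
assert max(synapse_current(wide)) > max(synapse_current(narrow)) > 0.0
```

With an I&F-style presynaptic neuron every pulse would have identical width, so Is would saturate to the same peak regardless of the stimulus; the graded response of the Class II DSSN is what makes the "analog" channel of Fig. 8(b) possible.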

5. Conclusion
In a series of our works, we proposed a design approach for silicon neuronal networks that provides another point of compromise in the trade-off between the proximity to networks of neuronal cells and the simplicity of the circuits. Based on qualitative neuronal modeling and its mathematical techniques, dedicated models for circuit implementation are designed by constructing appropriate structures in the phase portraits and the bifurcation diagrams. These models do not approximate the spiking dynamics by resetting state variables and can therefore reproduce the graded responses of Class II neurons, which are not supported by the I&F-based models. The power consumption of 50 nW in our analog silicon neuron is larger than that of sophisticated I&F-based silicon neurons [7, 9], though it is smaller than that of conductance-based silicon neurons [4, 6]. The circuit sizes of our digital silicon neuron and of the implementations of the Izhikevich model [19–21] are expected to be rather similar, because they use the same number of multipliers. We expect our solution to the trade-off to be appropriate for silicon neuronal networks applied to brain-morphic computing, because assuming uniform spikes, ignoring the biophysical experimental data [28, 29], may be an excessive simplification. Two silicon neuron circuits designed with a similar approach were reported in [39]; one of them has Class I and the other has Class II spiking dynamics. They are far simpler and consume less power than our implementation, which is achieved by designing dedicated circuits for each spiking dynamics. Our silicon neurons are appropriate for a universal silicon neuronal network in which the firing property of the silicon neurons has to be "programmable"; on the other hand, the power consumption and the circuit size can be reduced by those circuits in networks where the firing property can be fixed before fabrication.

Acknowledgments
This study was partially supported by the JST PRESTO program, a Grant-in-Aid for Scientific Research (A) 25240045 from the Ministry of Education, Culture, Sports, Science and Technology of the Japanese Government, and the Aihara Project, the FIRST program from JSPS, initiated by CSTP.

References
[1] "Top 500 supercomputer sites," http://www.top500.org/.
[2] A.L. Hodgkin and A.F. Huxley, "A quantitative description of membrane current and its application to conduction and excitation in nerve," The Journal of Physiology, vol. 117, no. 4, pp. 500–544, August 1952.
[3] F. Grassia, L. Buhry, T. Levi, J. Tomas, A. Destexhe, and S. Saïghi, "Tunable neuromimetic integrated system for emulating cortical neuron models," Frontiers in Neuroscience, vol. 5, no. 134, pp. 1–12, December 2011.
[4] S. Renaud, J. Tomas, Y. Bornat, A. Daouzli, and S. Saïghi, "Neuromimetic ICs with analog cores: an alternative for simulating spiking neural networks," in Proceedings of IEEE International Symposium on Circuits and Systems 2007, pp. 3355–3358, May 2007.
[5] M.F. Simoni and S.P. DeWeerth, "Two-dimensional variation of bursting properties in a silicon-neuron half-center oscillator," IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 14, no. 3, pp. 281–289, September 2006.
[6] M.F. Simoni, G.S. Cymbalyuk, M.E. Sorensen, R.L. Calabrese, and S.P. DeWeerth, "A multiconductance silicon neuron with biologically matched dynamics," IEEE Transactions on Biomedical Engineering, vol. 51, no. 2, pp. 342–354, February 2004.
[7] J.V. Arthur and K.A. Boahen, "Silicon-neuron design: A dynamical systems approach," IEEE Transactions on Circuits and Systems I, vol. 58, no. 5, pp. 1034–1043, May 2011.
[8] G. Indiveri, F. Stefanini, and E. Chicca, "Spike-based learning with a generalized integrate and fire silicon neuron," in Proceedings of IEEE International Symposium on Circuits and Systems 2010, pp. 1951–1954, May 2010.
[9] P. Livi and G. Indiveri, "A current-mode conductance-based silicon neuron for address-event neuromorphic systems," in Proceedings of IEEE International Symposium on Circuits and Systems 2009, pp. 2898–2901, May 2009.
[10] G. Indiveri, "A low-power adaptive integrate-and-fire neuron circuit," in Proceedings of IEEE International Symposium on Circuits and Systems 2003, vol. 4, pp. 820–823, May 2003.
[11] E. Chicca, A.M. Whatley, P. Lichtsteiner, V. Dante, T. Delbruck, P.D. Giudice, R.J. Douglas, and G. Indiveri, "A multichip pulse-based neuromorphic infrastructure and its application to a model of orientation selectivity," IEEE Transactions on Circuits and Systems I, vol. 54, no. 5, pp. 981–993, May 2007.
[12] J. Rinzel and G.B. Ermentrout, "Analysis of neural excitability and oscillations," in Methods in Neuronal Modeling, 2nd ed., C. Koch and I. Segev, Eds. Cambridge, MA: MIT Press, ch. 7, pp. 251–291, 1998.
[13] E.M. Izhikevich, Dynamical Systems in Neuroscience. Cambridge, MA: The MIT Press, 2007.
[14] A.L. Hodgkin, "The local electric changes associated with repetitive action in a non-medullated axon," The Journal of Physiology, vol. 107, no. 2, pp. 165–181, March 1948.
[15] E.M. Izhikevich, "Which model to use for cortical spiking neurons?" IEEE Transactions on Neural Networks, vol. 15, no. 5, pp. 1063–1070, September 2004.
[16] V. Rangan, A. Ghosh, V. Aparin, and G. Cauwenberghs, "A subthreshold aVLSI implementation of the Izhikevich simple neuron model," in Proceedings of IEEE Engineering in Medicine and Biology Conference 2010, pp. 4164–4167, August–September 2010.
[17] A. van Schaik, C. Jin, A. McEwan, and T.J. Hamilton, "A log-domain implementation of the Izhikevich neuron model," in Proceedings of IEEE International Symposium on Circuits and Systems 2010, pp. 4253–4256, May 2010.
[18] J.H.B. Wijekoon and P. Dudek, "A CMOS circuit implementation of a spiking neuron with bursting and adaptation on a biological timescale," in Proceedings of IEEE Biomedical Circuits and Systems Conference, pp. 193–196, November 2009.
[19] A. Cassidy, J. Georgiou, and A.G. Andreou, "Design of silicon brains in the nano-CMOS era: Spiking neurons, learning synapses and neural architecture optimization," Neural Networks, vol. 45, pp. 4–26, September 2013.
[20] M. Ambroise, T. Levi, S. Joucla, B. Yvert, and S. Saïghi, "Real-time biomimetic central pattern generators in an FPGA for hybrid experiments," Frontiers in Neuroscience, vol. 7, no. 215, doi:10.3389/fnins.2013.00215, November 2013.
[21] M. Ambroise, T. Levi, and S. Saïghi, "Leech heartbeat neural network on FPGA," in Biomimetic and Biohybrid Systems (Living Machines 2013), Lecture Notes in Computer Science, vol. 8064, pp. 347–349, July–August 2013.
[22] J. Li, Y. Katori, and K. Aihara, "An FPGA-based silicon neuronal network with selectable excitability silicon neurons," Frontiers in Neuroscience, vol. 6, no. 183, pp. 1–13, December 2012.
[23] T. Kohno and K. Aihara, "A mathematical-structure-based aVLSI silicon neuron model," in Proceedings of the 2010 International Symposium on Nonlinear Theory and its Applications, pp. 261–264, September 2010.
[24] T. Kohno and K. Aihara, "A design method for analog and digital silicon neurons –mathematical-model-based method–," AIP Conference Proceedings, vol. 1028, pp. 113–128, July 2008.
[25] M. Sekikawa, T. Kohno, and K. Aihara, "An integrated circuit design of a silicon neuron and its measurement results," Journal of Artificial Life and Robotics, vol. 13, no. 1, pp. 116–119, December 2009.
[26] T. Kohno and K. Aihara, "Mathematical-model-based design method of silicon burst neurons," Neurocomputing, vol. 71, no. 7–9, pp. 1619–1628, March 2008.
[27] T. Kohno and K. Aihara, "A MOSFET-based model of a class 2 nerve membrane," IEEE Transactions on Neural Networks, vol. 16, no. 3, pp. 754–773, May 2005.
[28] H. Alle and J.R.P. Geiger, "Combined analog and action potential coding in hippocampal mossy fibers," Science, vol. 311, pp. 1290–1293, March 2006.
[29] Y. Shu, A. Hasenstaub, A. Duque, Y. Yu, and D.A. McCormick, "Modulation of intracortical synaptic potentials by presynaptic somatic membrane potential," Nature, vol. 441, pp. 761–765, June 2006.
[30] S. Tadokoro, Y. Yamaguti, H. Fujii, and I. Tsuda, "Transitory behaviors in diffusively coupled nonlinear oscillators," Cognitive Neurodynamics, vol. 5, no. 1, pp. 1–12, March 2011.
[31] H. Fujii and I. Tsuda, "Itinerant dynamics of class I∗ neurons coupled by gap junctions," Lecture Notes in Computer Science, vol. 3146, pp. 140–160, 2004.
[32] J. Li, Y. Katori, and T. Kohno, "Hebbian learning in FPGA silicon neuronal network," in Proceedings of The 1st IEEE/IIAE International Conference on Intelligent Systems and Image Processing 2013, pp. 83–90, September 2013.
[33] T. Kohno and K. Aihara, "Digital spiking silicon neuron: Concept and behaviors in GJ-coupled network," in Proceedings of International Symposium on Artificial Life and Robotics 2007, pp. OS3-6, January 2007.
[34] A. van Schaik and C. Jin, "The tau-cell: a new method for the implementation of arbitrary differential equations," in Proceedings of IEEE International Symposium on Circuits and Systems 2003, pp. 569–572, May 2003.
[35] C.A.D. Negro, S.M. Johnson, R.J. Butera, and J.C. Smith, "Models of respiratory rhythm generation in the pre-Bötzinger complex. III. Experimental tests of model predictions," Journal of Neurophysiology, vol. 86, no. 1, pp. 59–74, July 2001.
[36] T. Malashchenko, A. Shilnikov, and G. Cymbalyuk, "Bistability of bursting and silence regimes in a model of a leech heart interneuron," Physical Review E, vol. 84, pp. 041910-1–8, October 2011.
[37] J.L. Hindmarsh and R.M. Rose, "A model of neuronal bursting using three coupled first order differential equations," Proceedings of the Royal Society B, vol. 221, no. 1222, pp. 87–102, March 1984.
[38] M. Sekikawa and T. Kohno, "A laboratory experiment of a half center oscillator integrated circuit," in Nonlinear Dynamics of Electronic Systems 2013, July 2013.
[39] A. Basu and P.E. Hasler, "Nullcline-based design of a silicon neuron," IEEE Transactions on Circuits and Systems I, vol. 57, no. 11, pp. 2938–2947, November 2010.
