Easy-to-use GPU acceleration of neural network simulations with GeNN
James Paul Turner, Esin Yavuz, Thomas Nowotny
University of Sussex, School of Engineering and Informatics, CoNNeCT Group
INCF Neuroinformatics, Reading, 03/09/2016
General-Purpose Graphics Processing Unit (GPGPU) for neural network simulations
● Central Processing Unit (CPU) simulations
  – CPU cores cannot keep getting faster... Solution: use more cores!
● Graphics Processing Unit (GPU) simulations
  – High performance, low cost, thanks to the insatiable video games market
General-Purpose Graphics Processing Unit (GPGPU) for neural network simulations
+ Large neural network simulations approaching real-time performance
- Can be quite difficult to write and optimise GPU code:
  – Maximising device utilisation
  – Maximising memory bandwidth
The typical user probably doesn’t care about these details!
GPU-enhanced Neural Networks
● GeNN is a C++ source library that produces GPU-accelerated code
● GeNN uses a code generation approach
● GeNN is fast, flexible and easy to use
● GeNN is open source and cross-platform (Linux, Mac and Windows)
● Most simulation parameters are known up-front and can be hardcoded, saving valuable register and shared memory space
● The user need only define the neuron models, synapse models and connectivity, while GeNN generates the technical and error-prone GPU code
● The code is automatically optimised and load-balanced for the given network configuration and available hardware
1a. Defining Neuron Models

neuronModel n;
n.varNames.push_back("V");
n.varTypes.push_back("scalar");
n.varNames.push_back("U");
n.varTypes.push_back("scalar");
n.pNames.push_back("a");
n.pNames.push_back("b");
n.pNames.push_back("c");
n.pNames.push_back("d");

n.simCode = " \
    if ($(V) >= 30.0) { \n\
        $(V) = $(c); \n\
        $(U) += $(d); \n\
    } \n\
    $(V) += (0.04 * $(V) * $(V) + 5.0 * $(V) \n\
        + 140.0 - $(U) + $(Isyn)) * DT; \n\
    $(U) += ($(a) * ($(b) * $(V) - $(U))) * DT; \n\
";
n.thresholdConditionCode = "$(V) >= 29.99";
nModels.push_back(n);
const unsigned IZHIKEVICH = nModels.size() - 1;
Izhikevich EM: Simple model of spiking neurons. IEEE Transactions on neural networks, 2003 14(6), 1569-1572.
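The simCode above is a forward-Euler discretisation (time step DT) of the Izhikevich model equations, with the reset applied when the membrane potential reaches 30 mV:

```latex
\begin{aligned}
\frac{dv}{dt} &= 0.04v^2 + 5v + 140 - u + I_{\mathrm{syn}} \\
\frac{du}{dt} &= a\,(bv - u) \\
\text{if } v \ge 30 \text{ mV:} &\quad v \leftarrow c, \quad u \leftarrow u + d
\end{aligned}
```

The threshold condition code tests against 29.99 rather than 30.0, presumably so the spike is detected robustly despite floating-point rounding in the reset check.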
1b. Defining Pre-Synapse Models

weightUpdateModel s;
s.varNames.push_back("g");
s.varTypes.push_back("scalar");
s.pNames.push_back("Epre");
s.pNames.push_back("Vslope");

s.simCodeEvnt = " \
    $(addtoinSyn) = ($(g) * tanh(($(V_pre) \n\
        - $(Epre)) / $(Vslope))) * DT; \n\
    $(updatelinsyn); \n\
";
s.evntThreshold = "$(V_pre) > $(Epre)";
weightUpdateModels.push_back(s);
const unsigned GRADED_SYN = weightUpdateModels.size() - 1;
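The event code above corresponds to a graded synapse that transmits continuously while the presynaptic potential is above the threshold Epre:

```latex
\frac{d\,\mathrm{inSyn}}{dt}
= g \tanh\!\left(\frac{V_{\mathrm{pre}} - E_{\mathrm{pre}}}{V_{\mathrm{slope}}}\right),
\qquad \text{applied only while } V_{\mathrm{pre}} > E_{\mathrm{pre}}
```

The evntThreshold string tells GeNN when to evaluate the event code, so transmission is skipped entirely for subthreshold presynaptic neurons.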
1c. Defining Post-Synapse Models

postSynapseModel ps;
ps.varNames.push_back("g");
ps.varTypes.push_back("scalar");
ps.pNames.push_back("tau");
ps.pNames.push_back("E");

ps.postSynDecay = " \
    $(inSyn) *= exp(-DT / $(tau)); \n\
";
ps.postSynToCurrent = "$(inSyn) * ($(E) - $(V))";
postSynModels.push_back(ps);
const unsigned EXP_DECAY = postSynModels.size() - 1;
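This implements an exponentially decaying synaptic state s (held in inSyn) driving a current towards the reversal potential E; the multiplication by exp(-DT/tau) is the exact one-step solution of the decay equation, not an Euler approximation:

```latex
\frac{ds}{dt} = -\frac{s}{\tau},
\qquad s(t + \mathrm{DT}) = s(t)\, e^{-\mathrm{DT}/\tau},
\qquad I_{\mathrm{syn}} = s\,(E - V)
```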
2. Defining the Network

void modelDefinition(NNmodel &model)
{
    model.setName("MyModel");
    model.setDT(0.1);
    double inParam[…], inInit[…];
    model.addNeuronPopulation("In", 100, N_MODEL, inParam, inInit);
    double outParam[…], outInit[…];
    model.addNeuronPopulation("Out", 100, N_MODEL, outParam, outInit);
    double preInit[…], preParam[…], postInit[…], postParam[…];
    model.addSynapsePopulation("InOut", PRE_MODEL, DENSE, GLOBALG, NO_DELAY,
                               POST_MODEL, "In", "Out",
                               preInit, preParam, postInit, postParam);
}
3. Simulation Control Code

Outside of GeNN code, the user will:
● Define input patterns and currents
● Define connectivity matrices and conductance values
● Save, visualise, analyse and use the output

Then use the GeNN-generated functions to:
● Copy the data from host memory to device memory
● Use stepTimeGPU() to integrate one step
● Copy variable and spike data back to the host, as needed
Connectivity Options
● Synapse data: dense or sparse representation?
● Spike evaluation: per-postsynaptic or per-presynaptic neuron?
Benchmark: Izhikevich Model
● Pulse-coupled network of Izhikevich neurons
● Population is 80% excitatory and 20% inhibitory
● 1000 connections per neuron
● Random conductances with sparse connectivity
● Random parameters within a range
● Random input to every neuron at each time step
Izhikevich EM: Simple model of spiking neurons. IEEE Transactions on Neural Networks, 2003 14(6), 1569-1572.
Benchmark: Izhikevich Model
Adapted from: Yavuz E, Turner J, Nowotny T: GeNN: a code generation framework for accelerated brain simulations. Scientific Reports, 2016 6:18854.
Benchmark: Insect Olfaction Model
[Network diagram: populations of 1000, 100, 20 and 100 neurons, with plastic ('learning') connections]
Nowotny T, Huerta R, Abarbanel HD, Rabinovich MI: Self-organization in the olfactory system: one shot odour recognition in insects. Biological Cybernetics, 2005 93(6), 436-446.
Benchmark: Insect Olfaction Model
Adapted from: Yavuz E, Turner J, Nowotny T: GeNN: a code generation framework for accelerated brain simulations. Scientific Reports, 2016 6:18854.
Software Compatible with GeNN
● The SpineCreator interface for SpineML
  Richmond P, Cope A, Gurney K, Allerton DJ: From model specification to simulation of biologically constrained networks of spiking neurons. Network: Computation in Neural Systems, 2012 23(4), 167-182.
● The BRIAN 2 Python simulation tool
  Brette R, Goodman DF: Brian: a simulator for spiking neural networks in Python. Front. Neuroinform., 2008 2.
Work in Progress
● OpenCL code generation
  – GeNN currently generates CUDA code (NVIDIA)
  – OpenCL also serves other devices (AMD, Intel, FPGA)
● Multi-GPU with load-balancing
  – Use more than one accelerator device optimally
  – Nearby neurons are ‘nearby’ in hardware
● Numerical analysis of GPU simulations
  – Rounding errors in parallel hardware
  – Model stability and integration method
Acknowledgements
Core team: Thomas Nowotny, Esin Yavuz, James Turner
Marcel Stimberg, Dan Goodman, Alex Cope, Alan Diamond

https://github.com/genn-team/genn