“Neural Engineering: Computation, Representation, and Dynamics in Neurobiological Systems”
Chris Eliasmith, Univ. of Waterloo, Waterloo, Canada
Charles H. Anderson, Washington Univ. School of Medicine, St. Louis, MO
Supported by the Mathers Foundation and the McDonnell Center for Higher Brain Function.
http://compneuro.uwaterloo.ca
MIT Press Jan. 2003
I also want to acknowledge my interactions with David Van Essen over the past 19 years and the support he has given me over the last 10.
Implementation
“Theorems about Turing machines and universal computation are only true in the limit of infinite resources and infinite time.” (John Denker and Yann LeCun, Bell Labs)
"Natural versus Universal Probability, Complexity, and Entropy," in IEEE Workshop on the Physics of Computation, pp. 122-127, 1992.
Implementation issues are therefore very important!!!
The First Question To Ask Is …
How do neuronal systems represent analog, real-valued variables?
Why? Because Neuronal Systems Process Analog Signals
Neuronal Systems are Analog
1. Sensory Inputs (Vestibular – Visual)
   Vestibular signals: angular head acceleration, linear head acceleration, 6 degrees of freedom, ~1000 neurons.
2. Motor Outputs
   ~100 degrees of freedom, ~700 muscles, ~1,000,000 motor neurons.
3. Internal Cortical/Cognitive (Probabilities)
   Some have suggested there are ~100 neurons per degree of freedom in cortical circuits.
So how do digital and neuronal systems represent real numbers?
Bit Representation
System Variables → A/D Converter → Bit Variables → Bit Representation
Neuronal Representation
System Variables → Neuronal Encoding → Neuron Variables
Neuron Variables → Neuronal Linear Decoding → Linear Decoding Representation
• Population Vector: Georgopoulos et al.
• Linear Temporal Filters: Bialek et al.
• Optimal Linear Decoding Vectors
[Figure: Encoding: neuron tuning curves, firing rate (0 to 700 spikes/s) vs. x over [-1, 1]. Decoding (Linearity): linear estimate vs. actual value of x.]
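To make the encode/decode step concrete, here is a minimal Matlab sketch (my own illustration, not the book's Gen_ensemble.m; the LIF parameters, intercept and max-rate ranges, and the ridge term standing in for noise are all assumptions):

% Minimal sketch: encode a scalar x with N LIF rate neurons, then decode it
% with optimal linear decoders found by (regularized) least squares.
N = 100;                                  % number of neurons
x = linspace(-1, 1, 201);                 % values of the represented variable
e = sign(randn(N,1));  e(e==0) = 1;       % preferred directions (+1 or -1)
xint = -0.95 + 1.9*rand(N,1);             % intercepts spread over (-1,1) (assumed)
amax = 200 + 200*rand(N,1);               % maximum rates 200-400 Hz (assumed)
tau_ref = 0.002;  tau_RC = 0.02;          % LIF refractory and membrane constants (assumed)
Jmax  = 1 ./ (1 - exp((tau_ref - 1./amax)/tau_RC));   % current giving rate amax
alpha = (Jmax - 1) ./ (1 - xint);         % gain so the neuron turns on at its intercept
Jbias = 1 - alpha .* xint;                % bias current (threshold normalized to 1)
J = diag(alpha)*(e*x) + Jbias*ones(1, numel(x));      % soma current for each neuron and x
A = zeros(N, numel(x));                   % tuning curves a_i(x)
on = J > 1;
A(on) = 1 ./ (tau_ref - tau_RC*log(1 - 1./J(on)));    % LIF steady-state rate
Gamma = A*A' + (0.1*max(A(:)))^2 * numel(x) * eye(N); % ridge term stands in for noise
d = Gamma \ (A*x');                       % optimal linear decoders
xhat = d' * A;                            % linear estimate of x
plot(x, x, 'k--', x, xhat, 'b');  xlabel('x');  legend('Actual', 'Estimate');

The dashed line is the identity; with a population of this size the linear estimate should lie close to it, as in the figure above.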
Decoding Errors
Encode → Neuron Ensemble Responses (+Noise, +Spikes) → Decode
1) Noise ~ 1/sqrt(N)
2) Static distortion ~ 1/N
3) Residuals from filtered spikes ~ 1/sqrt(N)
[Log-log plot: RMS error vs. number of neurons (10^0 to 10^3); distortion falls as 1/N, noise as 1/N^(1/2), and the total error is their combination.]
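A rough Matlab sketch of that scaling (again my own illustration; the independent-noise model and the 20% noise level are assumptions chosen to give an S/N of roughly 5):

% Sketch: RMS decoding error vs. ensemble size, with independent noise added
% to the firing rates before decoding.
tau_ref = 0.002;  tau_RC = 0.02;
x  = linspace(-1, 1, 201);
Ns = [8 16 32 64 128 256 512];
rmsErr = zeros(size(Ns));
for m = 1:numel(Ns)
    N = Ns(m);
    e = sign(randn(N,1));  e(e==0) = 1;
    xint = -0.95 + 1.9*rand(N,1);
    amax = 200 + 200*rand(N,1);
    Jmax  = 1 ./ (1 - exp((tau_ref - 1./amax)/tau_RC));
    alpha = (Jmax - 1) ./ (1 - xint);
    Jbias = 1 - alpha .* xint;
    J = diag(alpha)*(e*x) + Jbias*ones(1, numel(x));
    A = zeros(N, numel(x));  on = J > 1;
    A(on) = 1 ./ (tau_ref - tau_RC*log(1 - 1./J(on)));
    sigma = 0.2*max(A(:));                        % noise level giving S/N of roughly 5
    Gamma = A*A' + sigma^2 * numel(x) * eye(N);   % noise-aware decoders
    d = Gamma \ (A*x');
    Anoisy = A + sigma*randn(size(A));            % independent rate noise
    rmsErr(m) = sqrt(mean((x - d'*Anoisy).^2));
end
loglog(Ns, rmsErr, 'o-', Ns, rmsErr(1)*sqrt(Ns(1)./Ns), 'k--');
xlabel('Number of Neurons');  ylabel('RMS error');
legend('Simulated', '1/sqrt(N) reference');

On the log-log plot the simulated RMS error should track the 1/sqrt(N) reference once the noise term dominates the static distortion.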
Bits vs. Neurons
• Bits (almost infinite S/N ratio):
  1. Precision scales as 2^N (N = # bits).
  2. Fragile and costly in power.
• Neurons (S/N ratio ~5:1):
  1. Precision scales as sqrt(N) (N = # neurons).
  2. Fails gracefully: runs slower and/or loses precision…
Neurons work at the level of bits!!!
3 Key Elements of Framework 1) Neuronal Representation of Variables (Population codes). 2) Neural Implementation of Transformations (Synaptic coupling weights). 3) Mapping of Neuronal Dynamics onto Linear Control Theory (Spikes and Post-synaptic filters).
Neuronal Representations
• Neuronal Variables: neuron outputs (as instantaneous firing rates or spike trains).
• Higher-level Variables: mathematical objects (scalars, vectors, functions, vector fields, probability functions, etc.).
REPRESENTATION
(1) The higher-level variables are encoded into the neuronal variables using nonlinear spike-generation rules,
(2) and decoded from the neuron variables using linear decoding, where a weighted average is taken over neural populations and over time, i.e. linear filters.
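Written out, the encode/decode pair looks like the following (a sketch in common NEF notation; the book's exact symbols may differ: G_i is the neuron's nonlinear response function, alpha_i and e_i its gain and preferred-direction vector, J_i^bias its bias current, d_i its linear decoder, and h(t) the post-synaptic filter):

encoding:  a_i(x) = G_i\big[\, \alpha_i \,\langle e_i,\, x \rangle + J_i^{\mathrm{bias}} \,\big]
decoding:  \hat{x} = \sum_i a_i(x)\, d_i \quad\text{(over the population)}, \qquad \hat{x}(t) = \sum_{i,n} h(t - t_{in})\, d_i \quad\text{(over time, from spikes at } t_{in}\text{)}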
Examples
Moving on to Spiking Neurons
Leaky Integrate and Fire Spiking Model
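Before the response-curve figure, a minimal Matlab sketch of the steady-state LIF rate curve for an On/Off pair (membrane and refractory time constants, gains, and biases are illustrative assumptions; the Off response is drawn downward to mirror the figure's negative axis):

% Steady-state firing rate of a leaky integrate-and-fire (LIF) neuron as a
% function of soma current, shown for an On/Off pair driven by the input x.
tau_RC  = 0.02;      % membrane time constant (s), assumed
tau_ref = 0.002;     % refractory period (s), assumed
lif = @(J) 1 ./ (tau_ref - tau_RC*log(1 - 1./max(J, 1 + 1e-9))) .* (J > 1);
x    = linspace(-1, 1, 201);        % input variable
Jon  =  10*x + 2;                   % 'On' neuron: current rises with x (assumed gain/bias)
Joff = -10*x + 2;                   % 'Off' neuron: current falls with x
plot(x, lif(Jon), 'b', x, -lif(Joff), 'r', x, lif(Jon) - lif(Joff), 'k--');
xlabel('Input x');  ylabel('Firing Rate');
legend('On', 'Off (plotted downward)', 'Difference');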
[Figure: LIF response curves, firing rate vs. input voltage for an On neuron, an Off neuron, and their difference (Diff), over inputs from -1 to 1; vertical axis from -200 to 200 spikes/s.]
Reminder to run Matlab
[Simulation figures: spike trains and decoded signals for an On-Off pair and for 100-neuron ensembles, with Ensemble 2 driven by Ensemble 1 ("2 <-- 1"), plotted against time (sec) over 0.25 s and 0.1 s windows.]
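A self-contained Matlab sketch in the same spirit as these simulations (my own illustration, not the output of Runsim.m; the time constants, gains, input frequency, and the two-neuron decoders fit by least squares are assumptions): it encodes a sine wave into spikes with an On/Off LIF pair and decodes it by filtering the spikes with an exponential PSC.

% Spiking On-Off LIF pair: encode a sine into spikes, decode with PSC-filtered
% spikes and linear decoders fit to the pair's static rate curves (illustrative).
dt = 1e-4;  T = 0.25;  t = 0:dt:T;
xsig = sin(2*pi*10*t);                              % input signal (assumed 10 Hz sine)
tau_RC = 0.02;  tau_ref = 0.002;  tau_syn = 0.005;  % assumed time constants
e = [1; -1];  alpha = [12; 12];  Jbias = [2; 2];    % On/Off gains and biases (assumed)
lif = @(J) 1 ./ (tau_ref - tau_RC*log(1 - 1./max(J, 1 + 1e-9))) .* (J > 1);
% Fit linear decoders to the static rate curves of the pair.
xs = linspace(-1, 1, 101);
A  = lif(alpha*ones(1,101) .* (e*xs) + Jbias*ones(1,101));
d  = (A*A' + 10*eye(2)) \ (A*xs');
% Run the spiking simulation.
V = zeros(2,1);  refr = zeros(2,1);  spk = zeros(2, numel(t));
for n = 1:numel(t)
    J = alpha .* (e*xsig(n)) + Jbias;               % soma currents
    V = V + (J - V)*dt/tau_RC .* (refr <= 0);       % leaky integration, held in refractory
    refr = refr - dt;
    fired = V >= 1;                                 % normalized threshold
    spk(fired, n) = 1/dt;                           % spike as unit-area impulse
    V(fired) = 0;  refr(fired) = tau_ref;
end
h = exp(-(0:dt:5*tau_syn)/tau_syn);  h = h/sum(h*dt);   % exponential PSC, area 1
xhat = zeros(1, numel(t));
for i = 1:2
    f = conv(spk(i,:), h)*dt;  xhat = xhat + d(i)*f(1:numel(t));
end
plot(t, xsig, 'k--', t, xhat, 'b');  xlabel('time (sec)');  legend('x(t)', 'decoded');

With only two neurons the decoded trace is crude; the 100-neuron ensembles in the figures above smooth out both the spike residuals and the static distortion.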
Linear decoding can do more than support representation.
Population Representation
• Diversity of neuronal responses is important!
[Figure (repeated): Encoding: neuron tuning curves; Decoding (Linearity): linear estimate vs. actual value of x.]
Population Representation
• Linear decoders also support functions of x.
Extracting Functions
[Figure: Extracting Functions: the same neuron tuning curves (Encoding) decoded with different linear decoders to estimate quadratic and cubic functions of x; Decoding (Linearity) panels show estimate vs. actual.]
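The same machinery extracts a function of x simply by fitting the decoders to the target function instead of to x itself; a minimal Matlab sketch for f(x) = x^2 (neuron parameters are the same illustrative assumptions used earlier):

% Decode a function of x (here x.^2) from the population activities by fitting
% the linear decoders to the target function rather than to x itself.
tau_ref = 0.002;  tau_RC = 0.02;
N = 100;  x = linspace(-1, 1, 201);
e = sign(randn(N,1));  e(e==0) = 1;
xint = -0.95 + 1.9*rand(N,1);  amax = 200 + 200*rand(N,1);
Jmax  = 1 ./ (1 - exp((tau_ref - 1./amax)/tau_RC));
alpha = (Jmax - 1) ./ (1 - xint);  Jbias = 1 - alpha .* xint;
J = diag(alpha)*(e*x) + Jbias*ones(1, numel(x));
A = zeros(N, numel(x));  on = J > 1;
A(on) = 1 ./ (tau_ref - tau_RC*log(1 - 1./J(on)));
f = x.^2;                                            % target function (quadratic)
Gamma = A*A' + (0.1*max(A(:)))^2 * numel(x) * eye(N);
df = Gamma \ (A*f');                                 % function decoders
plot(x, f, 'k--', x, df'*A, 'b');
xlabel('x');  legend('x^2 (actual)', 'estimate');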
[Figure: neuron activities a_i(x); their principal components ψ_n(x), ψ_1(x) through ψ_4(x); and the activities a_i(x) plotted along the principal component axes ψ_2, ψ_3, ψ_4.]
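The principal components can be obtained directly from the matrix of tuning curves; a Matlab sketch (illustrative parameters as before; the SVD of the activity matrix supplies basis functions psi_n(x), and the decay of the singular values indicates how many independent functions of x the population can support):

% Principal components of the population: the SVD of the tuning-curve matrix
% gives the basis functions psi_n(x) the ensemble represents best, and the
% singular-value spectrum shows how many functions of x it can support.
tau_ref = 0.002;  tau_RC = 0.02;
N = 100;  x = linspace(-1, 1, 201);
e = sign(randn(N,1));  e(e==0) = 1;
xint = -0.95 + 1.9*rand(N,1);  amax = 200 + 200*rand(N,1);
Jmax  = 1 ./ (1 - exp((tau_ref - 1./amax)/tau_RC));
alpha = (Jmax - 1) ./ (1 - xint);  Jbias = 1 - alpha .* xint;
J = diag(alpha)*(e*x) + Jbias*ones(1, numel(x));
A = zeros(N, numel(x));  on = J > 1;
A(on) = 1 ./ (tau_ref - tau_RC*log(1 - 1./J(on)));
[U, S, V] = svd(A, 'econ');                 % principal components of a_i(x)
psi = V(:, 1:4);                            % first four basis functions psi_n(x)
figure;  plot(x, psi);  xlabel('x');
legend('\psi_1(x)', '\psi_2(x)', '\psi_3(x)', '\psi_4(x)');
figure;  semilogy(diag(S), 'o-');  xlabel('component n');  ylabel('singular value');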
Computing Functions by Linear Decoding
Moving on
3 Key Elements of Framework 1) Neuronal Representation of Variables (Population codes). 2) Neural Implementation of Transformations (Synaptic coupling weights). 3) Mapping of Neuronal Dynamics onto Linear Control Theory (Spikes and Post-synaptic filters).
Transformations
Neuron Coupling Weights
• Decode the incoming spikes.
• Apply the transformation.
• Encode the result back into soma currents that generate the output spikes.
Learning coupling weights is complex!!
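In equation form, this collapse of decode-transform-encode into a single set of weights reads roughly as follows (a sketch in common NEF notation, for a linear transformation M of the presynaptic variable; alpha_i and e_i are the postsynaptic gain and encoding vector, d_j the presynaptic decoders):

\omega_{ij} = \alpha_i \,\big\langle e_i,\; M\, d_j \big\rangle

So the weight matrix is never an unstructured N x N array; it factors into encoding vectors and (transformed) decoding vectors.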
3 Key Elements of Framework 1) Neuronal Representation of Variables (Population codes). 2) Neural Implementation of Transformations (Synaptic coupling weights). 3) Mapping of Neuronal Dynamics onto Linear Control Theory (Spikes and Post-synaptic filters).
Modern Control Theory
[Block diagram: u(t) → B → integrator 1/s → x(t), with feedback through A, i.e. dx/dt = A x(t) + B u(t).]

Neural Dynamics and Control Theory
• Modern linear control theory: the integrator transfer function h(s) = 1/s
  u(t) → B → 1/s → x(t), feedback A
• Neural dynamics: the synaptic filter h(s) = 1/(1 + s*tau_syn)
  u(t) → B' → 1/(1 + s*tau) → x(t), feedback A'
Neural Dynamics
u(t) → B' → 1/(1 + s*tau) → x(t), feedback A'
• Filter - Integrator
• Oscillator
Changing the coefficients in the A' matrix changes the dynamics!!
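Concretely, the mapping works like this (a standard result of this framework, sketched here): if the desired higher-level dynamics are dx/dt = A x + B u, but the physical integration is done by the synaptic filter h(s) = 1/(1 + s*tau_syn) rather than by a perfect integrator 1/s, then the matrices the network must implement are

A' = \tau_{\mathrm{syn}} A + I, \qquad B' = \tau_{\mathrm{syn}} B.

For example, a neural integrator (A = 0, B = 1) needs unity recurrent feedback, A' = I, and an input scaling B' = tau_syn.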
Neuron Coupling Weights
• Decode the incoming spikes.
• Apply the transformation.
• Encode the result back into soma currents that generate the output spikes.
Learning coupling weights is complex!!
Synthesis (Putting it together)
[Diagram: the higher-level description (encoding, decoding, dynamics matrices) is mapped onto the neuronal-level description: input spikes → hsyn(t) PSCs → synaptic weights/dendrites → neuron soma → output spikes, with recurrent connections.]

Synthesis (details)
[The same diagram, with the encoding, decoding, and dynamics matrices shown in detail.]
Applications
• Sensory Systems:
  1. Visual System (Brandon Westover, David Van Essen)
  2. Sound Localization in Barn Owl (Brian Fischer)
  3. Vestibular System (Dora Angelaki)
• Motor Systems:
  1. Neural Integrator (Chris Eliasmith) {S. Seung}
  2. Lamprey (Chris Eliasmith)
  3. Arm movements (Zoran Nenadic)
• Cortical Systems:
  1. V1 (Brandon Westover, Greg DeAngelis and David Van Essen)
  2. MT (Harris Nover and Greg DeAngelis)
  3. PDFs (John Clark, Mike Barber)
Major Unanswered Questions
1. How to incorporate learning.
2. Efficient implementation of contextual (multiplicative) interactions.
3. What is the source of the Poisson-like fluctuations observed in cortical and cerebellar Purkinje cells?
Summary: A Unified Approach to Neuroscience
• Applicable to sensory-motor-cortical systems, from insects to humans.
• Links engineering and neuroscience.
• Bridges high-level and low-level modeling.
• A “compiler” for building model circuits.
http://compneuro.uwaterloo.ca
Published by MIT Press, Jan. 2003
Neural Simulations in Matlab (http://compneuro.uwaterloo.ca)
1. Gen_ensemble.m: Creates neuronal ensembles that represent vector spaces.
2. Setupsim.m: Sets up a simulation from a text file describing the circuit.
3. Runsim.m: Runs the simulation.
C. H. Anderson, Chris Eliasmith and John Harwell
Dynamic Filter
[Block diagram: Ensemble 1 represents x(t) and Ensemble 2 represents k(t); Ensemble 4 represents the pair [y, k] and outputs k*y, which feeds back to Ensemble 3, the output y(t).]
Dynamic Filter.txt (circuit description file)
4 Ensembles
  Ensemble 1  X_N100D1  (x)
  Ensemble 2  Y_N100D1  (k)
  Ensemble 3  Z_N100D1  (y)
  Ensemble 4  X_N100D2  (y,k)  Quad 01 00  %% Ensemble 4 will output the function k*y
Connections
  Ensemble 1 (x): External 1
  Ensemble 2 (k): External 2  %% Feedback gain control
  Ensemble 3 (y): Vector 1, Matrix 0.1  %% Scale this as 1-k to keep the output (y) within range.
                  Function 4, Matrix 1
  Ensemble 4 (k,y): Vector 2, Matrix 1 0; Vector 3, Matrix 0 1  %% Hidden layer neurons for multiplication
End
Outputs 3
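For intuition, here is a minimal Matlab sketch of the higher-level dynamics this circuit file appears to describe, ignoring spikes entirely (this is my reading of the file, not the output of Runsim.m; the synaptic time constant, input signal, and gain schedule are assumptions): y is driven by 0.1*x plus the k*y feedback computed by Ensemble 4, both passed through a first-order synaptic filter, so k controls the effective filter.

% Higher-level sketch of the Dynamic Filter circuit: y is driven by 0.1*x and
% by the feedback term k*y, both passed through a first-order synaptic filter.
% This is an interpretation of the circuit file above, not a spiking simulation.
dt = 1e-4;  T = 0.5;  t = 0:dt:T;
tau_syn = 0.01;                        % assumed post-synaptic time constant (s)
x = sin(2*pi*5*t);                     % example input signal
k = 0.5*(t > 0.25);                    % gain control: steps from 0 to 0.5 (example)
y = zeros(size(t));
for n = 1:numel(t)-1
    drive = 0.1*x(n) + k(n)*y(n);      % input scaled by 0.1 plus k*y feedback
    y(n+1) = y(n) + dt/tau_syn * (drive - y(n));   % first-order synaptic filter
end
plot(t, x, 'k:', t, y, 'b', t, k, 'r--');
xlabel('time (sec)');  legend('x(t)', 'y(t)', 'k(t)');

Under this reading, pushing k toward 1 lengthens the effective time constant (tau_syn/(1-k)) and raises the DC gain (0.1/(1-k)), which is why the file's comment scales the input to keep y within range.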