PSYC 6256 Principles of Neural Coding
2. Signal Detection Theory
J. Elder
Signal Detection Theory
Signal detection theory provides a method for characterizing human performance in detecting, discriminating, and estimating signals. For noisy signals, it provides a method for identifying the optimal detector (the ideal observer) and for expressing human performance relative to this. The theory has its origins in radar detection, and was developed through the 1950s and onward by Peterson, Birdsall, Fox, Tanner, Green & Swets.
Example 1
The observer sits in a dark room. On every trial, a dim light is flashed with 50% probability. The observer indicates whether she believes the light was flashed or not. This is a yes/no detection task.
Noise
In this example, the information useful for the task is the light energy of the stimulus. By the time the stimulus information is received by decision centres in the brain, it will be corrupted by many sources of noise:
photon noise
isomerization noise
neural noise
Many of these noise sources are Poisson in nature: the dispersion increases with the mean.
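As an illustration of this Poisson property, here is a minimal sketch (the mean counts are made up, not values from the lecture) showing that the variance of a Poisson count equals its mean, so the dispersion grows as the square root of the mean:

```python
# Minimal sketch: Poisson-distributed counts have variance equal to
# their mean, so noise dispersion grows with signal strength.
import numpy as np

rng = np.random.default_rng(0)

for mean_count in [2.0, 10.0, 50.0]:        # illustrative mean counts
    counts = rng.poisson(lam=mean_count, size=100_000)
    print(f"mean={counts.mean():6.2f}  variance={counts.var():6.2f}")
```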
Equal-Variance Gaussian Case
It is often possible to approximate this noise as Gaussian-distributed, with the same variance for both stimulus conditions. Then the noise is independent of the signal state.
Discriminability d′
$$p(x \mid S = s_L) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(x-\mu_L)^2}{2\sigma^2}\right)$$

$$p(x \mid S = s_H) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(x-\mu_H)^2}{2\sigma^2}\right)$$

$$d' = \frac{\text{signal separation}}{\text{signal dispersion}} = \frac{\mu_H - \mu_L}{\sigma}$$
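A small sketch of this definition (all parameter values below are assumed for illustration): d′ estimated from simulated equal-variance Gaussian internal responses as separation over pooled dispersion:

```python
# Sketch: estimate d' = (mu_H - mu_L) / sigma from simulated
# equal-variance Gaussian internal responses.
import numpy as np

rng = np.random.default_rng(1)
mu_L, mu_H, sigma = 0.0, 1.5, 1.0            # true d' = 1.5 (assumed)

x_noise = rng.normal(mu_L, sigma, 100_000)   # no-flash trials
x_signal = rng.normal(mu_H, sigma, 100_000)  # flash trials

# Pool the two samples to estimate the common standard deviation.
pooled_sd = np.sqrt(0.5 * (x_noise.var() + x_signal.var()))
d_prime = (x_signal.mean() - x_noise.mean()) / pooled_sd
print(f"estimated d' = {d_prime:.3f}")       # ~1.5
```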
Criterion Threshold
The internal response is often approximated as a continuous variable, called the decision variable. But to yield an actual decision, this has to be converted to a binary variable (yes/no). A reasonable way to do this is to define a criterion threshold z:

$$x \ge z \rightarrow \text{'yes'}, \qquad x < z \rightarrow \text{'no'}$$
Effect of Shifting the Criterion

[Figure: hit and false-alarm rates under several placements of the criterion.]
How did we calculate these numbers?
With the equal-variance Gaussian model above, the hit and false-alarm rates determine d′:

$$d' = z_{FA} - z_{HIT} = \frac{\mu_H - \mu_L}{\sigma}$$

where $z_{HIT} = (z - \mu_H)/\sigma$ and $z_{FA} = (z - \mu_L)/\sigma$ express the criterion in standard units relative to the signal and noise distributions. Equivalently, $d' = \Phi^{-1}\big(p(\text{HIT})\big) - \Phi^{-1}\big(p(\text{FA})\big)$, where $\Phi^{-1}$ is the inverse normal CDF.
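A sketch of this calculation (the hit and false-alarm rates below are invented for illustration), converting the two rates through the inverse normal CDF:

```python
# Sketch: recover d' from hit and false-alarm rates,
# d' = Phi^{-1}(p_HIT) - Phi^{-1}(p_FA).
from scipy.stats import norm

p_hit, p_fa = 0.84, 0.16                     # illustrative rates
d_prime = norm.ppf(p_hit) - norm.ppf(p_fa)
print(f"d' = {d_prime:.3f}")                 # ~1.99, since Phi^{-1}(0.84) ~ 1
```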
What is the right criterion to use?
Suppose the observer wants to maximize the expected number of times they are right. Then the optimal decision rule is to always select the state s with higher probability for the observed internal response x:
$$\frac{p(x \mid s_H)}{p(x \mid s_L)} \ge 1 \rightarrow \text{'yes'}, \qquad \frac{p(x \mid s_H)}{p(x \mid s_L)} < 1 \rightarrow \text{'no'}$$

The 'likelihood ratio test'.
This is the maximum likelihood detector. For the equal-variance case, this means that the criterion is the average of the two signal levels:

$$z = \frac{1}{2}(\mu_L + \mu_H)$$
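A quick numerical check (illustrative parameters, not course values) that for equal variances the likelihood ratio test is the same rule as thresholding at the midpoint:

```python
# Sketch: for equal-variance Gaussians, p(x|s_H)/p(x|s_L) >= 1 is
# equivalent to x >= (mu_L + mu_H) / 2.
import numpy as np
from scipy.stats import norm

mu_L, mu_H, sigma = 0.0, 1.5, 1.0
x = np.linspace(-3.0, 4.5, 1000)             # grid of internal responses

lr_says_yes = norm.pdf(x, mu_H, sigma) >= norm.pdf(x, mu_L, sigma)
midpoint_says_yes = x >= 0.5 * (mu_L + mu_H)
print(np.array_equal(lr_says_yes, midpoint_says_yes))   # True
```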
Optimal Performance
The performance of the maximum likelihood observer for this yes/no task is given by

$$p(\text{correct}) = p(\text{HIT}) = p(\text{CORRECT REJECT}) = \frac{1}{2}\,\mathrm{erfc}\!\left(-\frac{d'}{2\sqrt{2}}\right)$$
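The following sketch (signal levels assumed for illustration) verifies this expression by Monte Carlo simulation of the midpoint-criterion observer:

```python
# Sketch: Monte Carlo check of p(correct) = (1/2) erfc(-d'/(2*sqrt(2)))
# for the maximum likelihood observer with the midpoint criterion.
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(2)
mu_L, mu_H, sigma = 0.0, 1.0, 1.0            # d' = 1 (assumed)
z = 0.5 * (mu_L + mu_H)                      # midpoint criterion
n = 500_000

p_hit = np.mean(rng.normal(mu_H, sigma, n) >= z)        # p(HIT)
p_cr = np.mean(rng.normal(mu_L, sigma, n) < z)          # p(CORRECT REJECT)
p_correct_sim = 0.5 * (p_hit + p_cr)                    # equal priors

d_prime = (mu_H - mu_L) / sigma
p_correct_theory = 0.5 * erfc(-d_prime / (2 * np.sqrt(2)))
print(p_correct_sim, p_correct_theory)                  # both ~0.691
```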
Bias
For this optimal decision rule, the two types of error are balanced: p(FA) = p(MISS). Observers that use a different criterion make unbalanced errors; such observers have lower p(correct) and are said to be biased.
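A short sketch (illustrative values) comparing the balanced, optimal criterion with a biased one:

```python
# Sketch: a shifted (biased) criterion unbalances the error types
# and lowers p(correct).
from scipy.stats import norm

mu_L, mu_H, sigma = 0.0, 1.0, 1.0            # assumed signal levels

for z in [0.5, 1.2]:                         # optimal vs biased criterion
    p_fa = 1 - norm.cdf(z, mu_L, sigma)      # 'yes' on noise trials
    p_miss = norm.cdf(z, mu_H, sigma)        # 'no' on signal trials
    p_correct = 1 - 0.5 * (p_fa + p_miss)    # equal priors
    print(f"z={z:.1f}  p(FA)={p_fa:.3f}  p(MISS)={p_miss:.3f}  "
          f"p(correct)={p_correct:.3f}")
```

At z = 0.5 the errors match (0.309 each); at z = 1.2 they diverge and p(correct) drops from 0.691 to 0.653.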
ROC Curves
Suppose the experiment is repeated many times under different instructions. The first time, the observer is instructed to be extremely stringent in their criterion, only reporting ‘yes’ when they are 100% sure the light was flashed. On subsequent repetitions, the observer is instructed to gradually relax their criterion.
ROC Curves
As the criterion threshold is swept from right to left, p(HIT) increases, but p(FA) also increases. The resulting plot of p(HIT) vs. p(FA) is called a receiver operating characteristic (ROC).
[Figure: ROC curves for increasing d′; the chance diagonal corresponds to d′ = 0.]
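A minimal sketch of this construction (the d′ values are illustrative): sweeping the criterion traces out the ROC as (p(FA), p(HIT)) pairs, and the area under the curve grows with d′:

```python
# Sketch: trace out the ROC by sweeping the criterion z from right to
# left for a few values of d' (noise at 0, signal at d', unit sigma).
import numpy as np
from scipy.stats import norm

z = np.linspace(5.0, -5.0, 201)              # criterion, right to left
for d_prime in [0.0, 1.0, 2.0]:
    p_fa = norm.sf(z, loc=0.0)               # p(x >= z | noise)
    p_hit = norm.sf(z, loc=d_prime)          # p(x >= z | signal)
    # Trapezoidal area under the ROC; d' = 0 gives the chance
    # diagonal (area 0.5), larger d' gives a more bowed curve.
    auc = np.sum(0.5 * (p_hit[1:] + p_hit[:-1]) * np.diff(p_fa))
    print(f"d'={d_prime:.1f}  ROC area ~ {auc:.3f}")
```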
ROC Curves
Note that d′ remains fixed as the criterion is varied. d′ is therefore criterion-invariant, a pure reflection of the signal-to-noise ratio.
Example 2: Motion Direction Discrimination
Britten et al. (1992) used a random dot kinematogram. Signal dots all move in one direction, either up or down; noise dots move in random directions.
100% Coherence
30% Coherence
5% Coherence
0% Coherence
The Middle Temporal Area (MT/V5)

(image: www.thebrain.mcgill.ca)
Experimental Details
The signal direction was always the preferred or anti-preferred direction of the cell. What kind of task is this? Note that now there is external noise as well as internal noise. To calculate neural discrimination performance, the neuron was assumed to be paired with an identical neuron tuned to the opposite direction of motion.
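A sketch of this neuron/anti-neuron construction, with assumed Poisson mean counts rather than Britten et al.'s measured rates:

```python
# Sketch: the neuron's counts for motion in its preferred direction
# are compared against counts from a hypothetical identical neuron
# tuned to the opposite direction; ties are split at chance.
import numpy as np

rng = np.random.default_rng(3)
rate_pref, rate_anti = 20.0, 12.0     # assumed mean spike counts
n = 200_000

pref = rng.poisson(rate_pref, n)      # actual neuron, preferred motion
anti = rng.poisson(rate_anti, n)      # mirror neuron, same stimulus

p_correct = np.mean(pref > anti) + 0.5 * np.mean(pref == anti)
print(f"neural p(correct) ~ {p_correct:.3f}")
```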
[Figure: hit rate vs. false alarm rate (ROC) for the neuron, computed from responses to the preferred vs. anti-preferred direction, compared with behaviour.]
Priors
Note that if the probabilities of the two signal states are not equal, the maximum likelihood observer will be suboptimal. In this case we must make use of the posterior ratio.
$$\frac{p(s_H \mid x)}{p(s_L \mid x)} \ge 1 \rightarrow \text{'yes'}, \qquad \frac{p(s_H \mid x)}{p(s_L \mid x)} < 1 \rightarrow \text{'no'}$$

This is the maximum a posteriori (MAP) rule.
MAP Inference
Using Bayes' rule, we obtain:

$$\frac{p(s_H \mid x)}{p(s_L \mid x)} = \frac{p(x \mid s_H)\, p(s_H)}{p(x \mid s_L)\, p(s_L)}$$

Thus we simply scale the likelihoods by the priors.
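For the equal-variance Gaussian case, scaling by the priors simply shifts the criterion. A sketch (priors and signal levels assumed for illustration; the closed-form z* follows from setting the posterior ratio to 1):

```python
# Sketch: with unequal priors, the MAP rule is still a threshold on x,
# shifted to z* = (mu_L + mu_H)/2 + sigma^2/(mu_H - mu_L) * ln(p_L/p_H).
import numpy as np
from scipy.stats import norm

mu_L, mu_H, sigma = 0.0, 1.0, 1.0
p_H, p_L = 0.2, 0.8                          # flash is rare (assumed)

z_map = 0.5 * (mu_L + mu_H) + sigma**2 / (mu_H - mu_L) * np.log(p_L / p_H)

# Check against the posterior ratio directly on a grid of responses.
x = np.linspace(-4.0, 5.0, 1000)
post_ratio = (norm.pdf(x, mu_H, sigma) * p_H) / (norm.pdf(x, mu_L, sigma) * p_L)
print(np.array_equal(post_ratio >= 1, x >= z_map))   # True
```

With the flash rare (p(s_H) = 0.2 here), the criterion shifts rightward: the observer demands stronger evidence before saying 'yes'.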
Loss and Risk
Maximizing p(correct) is not always the best thing to do. How would you adjust your criterion if you were: a venture capitalist trying to detect the next Google? A pilot looking for obstacles on a runway?
Loss Function
In general, different types of correct decision or action will yield different payoffs, and different types of errors will yield different costs. These differences can be accounted for through a loss function. Let a(x) represent the action of the observer, given internal response x. Then $L\big(s, a(x)\big)$ represents the cost of taking action a, given world state s.
The Ideal Observer
The Ideal Observer uses the decision rule that minimizes the expected loss, also known as the risk R(a | x):

$$R(a \mid x) = \sum_s L\big(s, a(x)\big)\, p(s \mid x) \;\propto\; \sum_s L\big(s, a(x)\big)\, p(x \mid s)\, p(s)$$

The normalizing factor p(x) does not depend on the action, so minimizing either expression selects the same action.
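A minimal sketch of risk minimization under an assumed, asymmetric loss (a miss costs ten times a false alarm, loosely in the spirit of the pilot example; all numbers are illustrative):

```python
# Sketch: choose the action minimizing expected loss R(a|x).
from scipy.stats import norm

mu_L, mu_H, sigma = 0.0, 1.0, 1.0
p_H, p_L = 0.5, 0.5                          # equal priors (assumed)

# loss[s][a]: cost of action a when the world is in state s.
loss = {"sL": {"no": 0.0, "yes": 1.0},       # false alarm costs 1
        "sH": {"no": 10.0, "yes": 0.0}}      # miss costs 10

def risk(a, x):
    """Expected loss R(a|x) of action a given internal response x."""
    post_H = norm.pdf(x, mu_H, sigma) * p_H
    post_L = norm.pdf(x, mu_L, sigma) * p_L
    return (loss["sH"][a] * post_H + loss["sL"][a] * post_L) / (post_H + post_L)

for x in [-2.5, -1.0, 0.0, 1.0]:
    best = min(("yes", "no"), key=lambda a: risk(a, x))
    print(f"x={x:+.1f}  R(yes|x)={risk('yes', x):.3f}  "
          f"R(no|x)={risk('no', x):.3f}  ->  '{best}'")
```

Note how the costly miss pushes the effective criterion well below the unbiased midpoint of 0.5: the observer says 'yes' even at x = -1.0.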
Example 3: Slant Estimation