Bayesian Dynamic System Estimation Using Markov Chain Monte Carlo Simulation
Brett Ninness, Khoa T. Tran and Christopher M. Kellett University of Newcastle, Australia
December 15, 2014
Motivating Example
Compare Prediction Error vs. Conditional Mean estimates

Prediction Error Estimate:

    θ̂_PE = arg min_θ  Σ_{t=1}^{N} ( y_t − ŷ_t(θ) )²                    (1)

Conditional Mean Estimate:

    θ̂_CM = E{θ | Y} = ∫ θ p(θ | Y) dθ                                  (2)

where Y = {y_1, y_2, …, y_N}.
Motivating Example
An output error model

    G(θ) = (θ_1 + θ_2 q^{−1} + … + θ_5 q^{−4}) / (1 + θ_6 q^{−1} + … + θ_9 q^{−4})

To estimate θ = [θ_1 θ_2 … θ_9] from the input–output data Z = [u y]:

    ŷ_t = G_θ(q) u_t
    y_t = ŷ_t + e_t

Given e_t ∼ U[−0.3, 0.3], the posterior distribution of θ becomes

    p(θ | Y) ∝ p(θ) p(Y | θ) = p(θ) ∏_{t=1}^{N} U_{[−0.3,0.3]}(y_t − ŷ_t)        (3)

Figure: input u and output y over time.
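The posterior in (3) is simple to evaluate pointwise: the uniform likelihood is a product of indicator functions, so it is zero unless every residual lies inside the noise support. A minimal Python sketch, where the filter recursion and the flat prior are illustrative choices not taken from the slides:

```python
import numpy as np

def oe_predict(theta, u):
    """Simulate y_hat_t = G_theta(q) u_t for
    G(theta) = (th1 + th2 q^-1 + ... + th5 q^-4) / (1 + th6 q^-1 + ... + th9 q^-4)."""
    b, a = theta[:5], theta[5:]          # numerator taps, denominator taps a1..a4
    y_hat = np.zeros(len(u))
    for t in range(len(u)):
        acc = sum(b[i] * u[t - i] for i in range(5) if t - i >= 0)
        acc -= sum(a[j] * y_hat[t - 1 - j] for j in range(4) if t - 1 - j >= 0)
        y_hat[t] = acc
    return y_hat

def log_posterior(theta, u, y, log_prior, bound=0.3):
    """log p(theta | Y) up to a constant for e_t ~ U[-bound, bound]: the
    product of uniform densities in (3) is constant on its support, so only
    the indicator (all residuals inside the support) matters."""
    residuals = y - oe_predict(theta, u)
    if np.any(np.abs(residuals) > bound):
        return -np.inf
    return log_prior(theta)
```

The hard boundary is what makes this posterior awkward for gradient-based methods and natural for the sampling schemes discussed next.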
Motivating Example
Very short and noisy step response

    y_t = G_θ(q) u_t + e_t,    e_t ∼ U[−0.3, 0.3]

Figure: Noise-free response G_θ(q)u_t (red) vs. simulated measurement y_t (blue).
Prediction Error Estimate vs. Conditional Mean Estimate
Bayesian estimation yields the minimum mean square error estimator

Figure: Estimated frequency responses of the same system using the two methods.
(a) Prediction error estimates versus truth (bold). (b) Conditional mean estimates
versus truth (bold).
Computational Challenge of Bayesian Estimation
Using numerical integration

The conditional mean estimate

    E{θ | Y} = ∫ θ p(θ | Y) dθ

is a multidimensional integral.

Using Simpson's rule?  evals = 30^9 = 1.9683 × 10^13

Figure: posterior density p(θ | Y).
Monte Carlo Integration
Using random sampling

Build a random number generator θ_k ∼ p(θ | Y); then the central limit theorem gives

    (1/M) Σ_{k=1}^{M} f(θ_k) − E{f(θ)}  →^d  N(0, σ²/M)                 (4)

However, σ² must be finite:

    σ² = σ_0² ( 1 + 2 Σ_{i=1}^{∞} corr( f(θ_0), f(θ_i) ) )              (5)

where σ_0² = var{f(θ_0)} with θ_0 ∼ p(θ | Y).
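Equation (4) can be checked numerically on a toy integrand with a known answer; the target and f below are illustrative stand-ins, not the system posterior:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy check of (4): theta ~ N(2, 1) and f(theta) = theta^2,
# so E{f(theta)} = var + mean^2 = 5.
M = 200_000
theta = rng.normal(2.0, 1.0, size=M)
f = theta ** 2

estimate = f.mean()                      # (1/M) sum_k f(theta_k)
stderr = f.std(ddof=1) / np.sqrt(M)      # CLT error bar, sqrt(sigma^2 / M)
```

With i.i.d. draws the error bar shrinks at the 1/√M rate; equation (5) is the warning that correlated MCMC draws inflate σ² and slow this down.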
Monte Carlo Integration
Using random sampling

Figure: Sample histograms of θ (bars) against the true density (line). Near-i.i.d.
samples give perfect convergence; strongly correlated samples give an inaccurate
histogram.
Metropolis Algorithm
Random sampling using Markov chains

Let π(θ) be the target distribution we want to draw samples from. Take any number
generator γ(θ_{k+1} | θ_k) and apply an acceptance probability

    α(θ_{k+1} | θ_k) = 1 ∧ [ π(θ_{k+1}) / π(θ_k) · γ(θ_k | θ_{k+1}) / γ(θ_{k+1} | θ_k) ]    (6)

If we choose θ_{k+1} − θ_k ∼ N(0, s_k Σ_k) then γ(θ_{k+1} | θ_k) = γ(θ_k | θ_{k+1}),
and the γ ratio in (6) cancels.

Figure: Narrow vs. wide Gaussian proposals against the target density.
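With the symmetric Gaussian proposal the γ ratio in (6) cancels, leaving a plain random-walk Metropolis step. A 1-D sketch with a standard-normal target standing in for p(θ | Y):

```python
import numpy as np

def metropolis(log_pi, theta0, steps, scale, rng):
    """Random-walk Metropolis: the proposal N(theta_k, scale^2 I) is
    symmetric, so alpha = 1 ^ pi(theta_{k+1}) / pi(theta_k)."""
    theta = np.asarray(theta0, dtype=float)
    lp = log_pi(theta)
    chain = np.empty((steps, theta.size))
    for k in range(steps):
        proposal = theta + scale * rng.standard_normal(theta.size)
        lp_prop = log_pi(proposal)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept with probability alpha
            theta, lp = proposal, lp_prop
        chain[k] = theta
    return chain

rng = np.random.default_rng(1)
chain = metropolis(lambda th: -0.5 * float(th @ th), np.zeros(1), 50_000, 1.0, rng)
```

Working in log densities avoids underflow when π is a long product such as (3).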
Adaptive Metropolis Sampling
Tuning s_k and Σ_k

Optimise s_k using the Robbins–Monro approximation

    log s_{k+1} = log s_k + ( α(θ_{k+1} | θ_k) − α* ) / k               (7)

Calculate the sample variance Σ_k by the recursive equation

    Σ_{k+1} = ((k − 1)/k) Σ_k + (1/(k + 1)) (θ_{k+1} − θ̄_k)(θ_{k+1} − θ̄_k)^T    (8)

with α* ∈ [0.2, 0.4].

Figure: adapted proposal contours over posterior samples.
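A sketch of the scale adaptation (7); for brevity the covariance recursion (8) is omitted and the proposal covariance is fixed to the identity, which is an assumption made only for this example:

```python
import numpy as np

def adaptive_metropolis(log_pi, theta0, steps, rng, alpha_star=0.3):
    """Random-walk Metropolis with Robbins-Monro tuning of log s_k per (7).
    A full implementation would also update Sigma_k via the recursion (8);
    here Sigma_k = I for simplicity."""
    theta = np.asarray(theta0, dtype=float)
    lp = log_pi(theta)
    log_s = 0.0
    chain = np.empty((steps, theta.size))
    for k in range(1, steps + 1):
        proposal = theta + np.exp(log_s) * rng.standard_normal(theta.size)
        lp_prop = log_pi(proposal)
        alpha = min(1.0, np.exp(lp_prop - lp))
        if rng.uniform() < alpha:
            theta, lp = proposal, lp_prop
        log_s += (alpha - alpha_star) / k     # Robbins-Monro step (7)
        chain[k - 1] = theta
    return chain, np.exp(log_s)

rng = np.random.default_rng(2)
chain, s_final = adaptive_metropolis(lambda th: -0.5 * float(th @ th),
                                     np.zeros(1), 50_000, rng)
```

The 1/k gain makes the adaptation diminish over time, which is what keeps the adapted chain's stationary distribution at the target.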
Slice Sampling
Transform π into a uniform distribution on a "slice"

Introduce h ∼ φ(h | θ) and form

    φ(h, θ) = π(θ) φ(h | θ)                                             (9)

The marginal density of θ ∼ φ is

    φ(θ) = ∫ π(θ) φ(h | θ) dh = π(θ)                                    (10)

If φ(h | θ) = U[0, π(θ)] = 1/π(θ) for all 0 ≤ h ≤ π(θ), then

    φ(h, θ) = 1_Υ,    Υ = {(h, θ) | 0 ≤ h ≤ π(θ)}                       (11)

Sampling from π(θ) ⇔ uniform sampling from Υ.
Slice Sampling
Transform π into a uniform distribution on a "slice"

    h_k ∼ φ(h | θ_k) = U[0, π(θ_k)]
    θ_{k+1} ∼ φ(θ | h_k) = U[S(h_k)]
    S(h_k) = {θ | π(θ) ≥ h_k}

Figure: The slice Υ = {(h, θ) | 0 ≤ h ≤ π(θ)}, with h_k drawn below π(θ_k) and
θ_{k+1} drawn uniformly from S(h_k).
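A 1-D slice sampler implementing the two draws above; locating S(h_k) by stepping-out and shrinkage is a standard implementation choice (Neal's scheme), not a detail taken from the slides:

```python
import numpy as np

def slice_sample(pi, theta0, steps, w, rng):
    """1-D slice sampling: h_k ~ U[0, pi(theta_k)], then theta_{k+1}
    uniform on S(h_k) = {theta : pi(theta) >= h_k}, located by
    stepping out an interval of width w and shrinking on rejection."""
    theta = float(theta0)
    chain = np.empty(steps)
    for k in range(steps):
        h = rng.uniform(0.0, pi(theta))
        lo = theta - w * rng.uniform()     # randomly placed initial interval
        hi = lo + w
        while pi(lo) > h:                  # step out until outside the slice
            lo -= w
        while pi(hi) > h:
            hi += w
        while True:                        # shrink until a point lands inside
            prop = rng.uniform(lo, hi)
            if pi(prop) >= h:
                theta = prop
                break
            if prop < theta:
                lo = prop
            else:
                hi = prop
        chain[k] = theta
    return chain

rng = np.random.default_rng(3)
chain = slice_sample(lambda x: np.exp(-0.5 * x * x), 0.0, 20_000, 2.0, rng)
```

Unlike Metropolis there is no tunable acceptance rate; every iteration moves, and the width w only affects how much work the step-out loop does.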
Slice Sampling
Uniform sampling on line segments

Figure: Exploring the slice set S(h_k) along Cartesian directions vs. along the
eigen-directions e_k obtained from Σ_k.
Simulation Study
Compare Slice sampling vs. Adaptive Metropolis

Same dynamics: y_t = G_θ(q) u_t + e_t, e_t ∼ U[−0.3, 0.3].

The test models include Chebyshev low-pass filters of orders 1–5.

The performance indicator is an average squared error

    ‖p̂_M(θ) − π(θ)‖² = (1/n_θ) Σ_{i=1}^{n_θ} ∫ [ p̂_M(θ^(i)) − π(θ^(i)) ]² dθ^(i)    (12)

at different sample sizes M. The true target density is approximated with
2 × 10^9 samples.
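A sketch of the error indicator (12) for a single marginal, comparing a sample histogram against a reference density on a common grid; the standard normal stands in for the true target here:

```python
import numpy as np

rng = np.random.default_rng(4)
edges = np.linspace(-5.0, 5.0, 101)
centres = 0.5 * (edges[:-1] + edges[1:])
width = edges[1] - edges[0]

samples = rng.normal(0.0, 1.0, size=100_000)
p_hat, _ = np.histogram(samples, bins=edges, density=True)   # histogram estimate
pi_ref = np.exp(-0.5 * centres**2) / np.sqrt(2.0 * np.pi)    # reference density

# Integral of the squared difference, approximated by a Riemann sum
err = np.sum((p_hat - pi_ref) ** 2) * width
```

Averaging this quantity over the n_θ marginals gives (12); the expected error falls roughly as 1/(M · bin width) for near-independent samples.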
Simulation Study
Convergence rate of Adaptive Metropolis and Slice Sampling

Figure: Error ‖p̂(·) − π(·)‖ against iterations (10^6 to 10^8) for model orders
1–5: (a) Adaptive Metropolis, (b) Slice Sampling.
Questions & Feedback
Further Research
Metropolis within Slice Sampling

Step 1: Select a random slice h ∼ U[0, π(θ)]
Step 2: Propose a new sample θ′ ∼ N(θ, sΣ)
Step 3: Keep θ′ if π(θ′) ≥ h; otherwise, keep the old sample.

The proposal scale is

    s = −( 2 log h + n_θ log(2π) + log|Σ| ) / ( n_θ χ²_{n_θ}(β) )

Figure: π(θ) with the proposal N(θ, sΣ) drawn at slice level h.
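Steps 1–3 can be sketched directly. Note that accepting θ′ when π(θ′) ≥ h with h ∼ U[0, π(θ)] accepts with probability min(1, π(θ′)/π(θ)), i.e. exactly a symmetric-proposal Metropolis step. The slide's adaptive formula for s is not reproduced; s and Σ are fixed inputs in this sketch:

```python
import numpy as np

def metropolis_within_slice(pi, theta0, steps, s, Sigma, rng):
    """Step 1: slice level h ~ U[0, pi(theta)].
    Step 2: propose theta' ~ N(theta, s * Sigma).
    Step 3: keep theta' iff pi(theta') >= h, else keep theta."""
    theta = np.asarray(theta0, dtype=float)
    L = np.linalg.cholesky(s * np.asarray(Sigma, dtype=float))
    chain = np.empty((steps, theta.size))
    for k in range(steps):
        h = rng.uniform(0.0, pi(theta))
        proposal = theta + L @ rng.standard_normal(theta.size)
        if pi(proposal) >= h:
            theta = proposal
        chain[k] = theta
    return chain

rng = np.random.default_rng(5)
chain = metropolis_within_slice(lambda th: np.exp(-0.5 * float(th @ th)),
                                np.zeros(1), 50_000, 1.0, np.eye(1), rng)
```

The appeal of the hybrid is that the slice-level acceptance test needs no extra density evaluations beyond the Metropolis ratio itself.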
Simulations
Quality of convergence

Figure: Estimated posterior marginals of a_1 (probability vs. a_1 over [−3, 1]):
(a) Metropolis Sampling (AM), (b) Slice Sampling (FSS),
(c) Metropolis + Slice Sampling (AMSS).
Simulations
Speed of computation

Figure: CPU time for model orders 1–5 under Metropolis, Metropolis + Slice, and
Slice Sampling (bar labels: 38.8 s, 76.1 s, 377 s).
Further research
Robust MCMC simulation techniques

- Is there a common framework for the whole range of MCMC algorithms available?
- How much do we gain from combining multiple MCMC strategies in one computing solution?
- How do we best utilise parallel computing in MCMC simulation?
Further research
Simulated Annealing

Figure: Tempered density vs. target density as β increases through 0.01,
0.027826, 0.077426, 0.21544, 0.59948, 1.
Further research
Simulated Tempering

Figure: Tempered densities for β = 0.01, 0.031623, 0.1, 0.31623, 1.
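Tempering replaces the target π with π^β for β ∈ (0, 1]: small β flattens the density so a sampler can cross between modes. A sketch on an assumed bimodal example; the mixture below is illustrative, not the slides' density:

```python
import numpy as np

def target(x):
    # Illustrative bimodal density (unnormalised): two well-separated modes.
    return 0.5 * np.exp(-0.5 * (x + 20.0) ** 2) + 0.5 * np.exp(-0.5 * (x - 20.0) ** 2)

betas = [0.01, 0.031623, 0.1, 0.31623, 1.0]
x = np.linspace(-60.0, 80.0, 1401)
tempered = {b: target(x) ** b for b in betas}   # unnormalised tempered densities

def valley_depth(b):
    """Height of the valley between the modes relative to the peak:
    much larger at small beta, which is what eases mode hopping."""
    d = tempered[b]
    return d[np.argmin(np.abs(x))] / d.max()
```

Simulated tempering treats β itself as a sampled variable, letting the chain heat up to jump between modes and cool back down to collect samples from the target.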
Further research
Direction Sampling
Limitations
The 1% chance of failure

- Fails to converge to the target distribution in approximately 1% of cases.
- No indication of such failure is given besides visual inspection of the raw data.
- The target distribution is evidently multimodal in these cases.
Limitations
The 1% chance of failure

Figure: Error ε_e versus iteration (up to 10^6) for AM, MSS, AMSS and FSS in a
failure case; ε_e remains near 0.2–0.25 rather than decaying.
Slice Sampling
Uniform sampling on line segments

Step 1: Select a direction e_k to define the interval I_k = {θ_k + r e_k} ∩ S(h_k)
Step 2: Measure the width of the interval I_k
Step 3: Sample uniformly from the interval: θ ∼ U[I_k]

Figure: π(θ_k), the level h_k, and the interval I_k along direction e_k.