Data reduction for Particle Filters


Data reduction for Particle Filters
Christian MUSSO (ONERA) and Nadia OUDJANE (EDF/R&D)

1. The filtering problem: motivations
2. Gauss-Legendre quadrature
3. Application to the filtering problem
4. Simulation results

C. MUSSO - ONERA and N. OUDJANE - EDF/R&D

GDR, 10 février 2006, Paris

1. The filtering problem: motivations


The filtering problem

⊲ State process (signal): (X_n)_{n≥0} ∈ E = R^d, a Markov chain ∼ (π_0, Q_n)
  Ex: X_n = F_n(X_{n-1}, W_n)

⊲ Observation process: (Y_n)_{n≥0} ∈ R^q with Y_n = h(X_n) + V_n, V_n ∼ g_n

⊲ Computing the optimal filter (π_n)_{n≥0} ∈ P(E):
  π_n(dx) = P[ X_n ∈ dx | Y_{1:n} ] with Y_{1:n} = (Y_1, ⋯, Y_n)

1. The filtering problem: motivations


Interacting Particle Filter (IPF)

π^N_{n-1} --(1) Sampled Prediction--> π^N_{n|n-1} = S^N(Q_n π^N_{n-1}) --(2) Correction--> π^N_n = Ψ_n · π^N_{n|n-1}

(1.a) Sampling: from π^N_{n-1} = { ω^1_{n-1} δ_{ξ^1_{n-1|n-2}} + ⋯ + ω^N_{n-1} δ_{ξ^N_{n-1|n-2}} }, draw { ξ^1_{n-1}, ⋯, ξ^N_{n-1} }

(1.b) Evolution through Q_n: π^N_{n|n-1} = (1/N) δ_{ξ^1_{n|n-1}} + ⋯ + (1/N) δ_{ξ^N_{n|n-1}}

(2) Correction with Y_n: ω^i_n ∝ Ψ_n(ξ^i_{n|n-1}), so that π^N_n = { ω^1_n δ_{ξ^1_{n|n-1}} + ⋯ + ω^N_n δ_{ξ^N_{n|n-1}} }
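The three steps above can be sketched as a minimal bootstrap particle filter. This is a generic illustration, not the filter from the slides: the scalar model, the function names and the numerical values are ours.

```python
import numpy as np

rng = np.random.default_rng(1)

def ipf_step(particles, weights, evolve, likelihood, y):
    """One IPF cycle: (1.a) resample, (1.b) evolve through Q_n,
    (2) reweight with the likelihood Psi_n(.) = p(y | .)."""
    N = len(particles)
    # (1.a) Sampling: draw N particles from the weighted mixture pi_{n-1}^N
    idx = rng.choice(N, size=N, p=weights)
    # (1.b) Evolution: propagate each particle through the Markov kernel Q_n
    predicted = evolve(particles[idx])
    # (2) Correction: weights proportional to Psi_n(xi_i) at observation y
    w = likelihood(y, predicted)
    return predicted, w / np.sum(w)

# Illustrative scalar model: X_n = 0.9 X_{n-1} + W_n, Y_n = X_n + V_n
evolve = lambda x: 0.9 * x + rng.standard_normal(len(x))
likelihood = lambda y, x: np.exp(-0.5 * (y - x) ** 2)

N = 1000
particles = rng.standard_normal(N)
weights = np.full(N, 1.0 / N)
for y in [0.3, 0.1, -0.2]:              # a short observation record
    particles, weights = ipf_step(particles, weights, evolve, likelihood, y)
estimate = np.dot(weights, particles)    # <pi_n^N, x>
```

Resampling at every step, as here, is the plain bootstrap variant; the RPF used later in the slides regularizes this resampling step.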

1. The filtering problem: motivations


Exponential growth of the error with time

⊲ The error of particle filters grows exponentially with the number of observations:

sup_{‖φ‖=1} E[ |⟨π_n − π'_n, φ⟩| | Y_{t_1}, ⋯, Y_{t_n} ] ≤ Σ_{k=1}^{n} C^{n-k+1} δ

where δ is a bound on the local error and C is a positive real.

⊲ We propose to reduce the number of observations while keeping the same amount of information
• to avoid the exponential growth of the error with time
• to reduce the computing time
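The bound above is a geometric series, δ C (C^n − 1)/(C − 1), so for C > 1 it is exponential in n. A quick numerical illustration, where the values of C and δ are arbitrary placeholders rather than constants from the slides:

```python
# Geometric-series form of the bound sum_{k=1}^{n} C^(n-k+1) * delta.
# C and delta below are arbitrary illustrative values.
C, delta = 1.1, 0.01

def bound(n):
    return delta * sum(C ** (n - k + 1) for k in range(1, n + 1))

def closed_form(n):
    return delta * C * (C ** n - 1) / (C - 1)

for n in (10, 50, 100):
    assert abs(bound(n) - closed_form(n)) < 1e-9 * closed_form(n)
print(bound(10), bound(100))   # the bound explodes as n grows
```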

2. Gauss-Legendre quadrature

Definitions

⊲ Legendre polynomial Ψ_m of degree m ∈ {1, ⋯, n}:
  Σ_{t=t_1}^{t_n} Ψ_m(t) t^j = 0 for any integer 0 ≤ j ≤ m − 1

⊲ Lagrange polynomials Φ_i of degree m − 1, 1 ≤ i ≤ m:
  Φ_i(t) = 1 if t = T_i, and Φ_i(t) = 0 if t = T_j ≠ T_i

⊲ (T_1, ⋯, T_m) are the roots of Ψ_m

2. Gauss-Legendre quadrature

Gauss-Legendre quadrature for discrete sums: result

⊲ Gauss-Legendre quadrature for discrete sums:
• t_1 < ⋯ < t_n positive reals, m ∈ {1, ⋯, n}
• f : R → R with continuous derivatives up to order 2m

Σ_{t=t_1}^{t_n} f(t) = Σ_{i=1}^{m} ‖Φ_i‖_2^2 f(T_i) + E_m(f)  with  E_m(f) = (1/(2m)!) ‖Ψ_m‖_2^2 f^{(2m)}(ξ), where t_1 < ξ < t_n

⇒ If f is a polynomial of degree p ≤ 2m − 1, then any sum of n values of f is equal to a weighted sum of m ≤ n values of f

⊲ ‖Φ_i‖_2^2 is of order n/m
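The rule can be reproduced numerically: a sketch that builds the discrete Legendre polynomial Ψ_m by Gram-Schmidt, takes its roots as nodes T_i and ‖Φ_i‖_2² as weights, and checks exactness on a polynomial of degree 2m − 1. The function name and the test polynomial are ours, not from the slides.

```python
import numpy as np

def discrete_gauss_legendre(t, m):
    """Nodes T_i and weights w_i = ||Phi_i||_2^2 such that
    sum_t f(t) == sum_i w_i f(T_i) for polynomials of degree <= 2m - 1."""
    # Gram-Schmidt on the monomials under the discrete inner product
    # <p, q> = sum_k p(t_k) q(t_k), yielding the degree-m orthogonal
    # polynomial Psi_m (discrete analogue of the Legendre polynomial).
    basis = []
    for deg in range(m + 1):
        p = np.poly1d([1.0] + [0.0] * deg)          # monomial t^deg
        for q in basis:
            coeff = float(np.sum(p(t) * q(t)) / np.sum(q(t) ** 2))
            p = p - coeff * q
        basis.append(p)
    psi_m = basis[-1]
    T = np.sort(psi_m.roots.real)                   # nodes T_1 < ... < T_m
    # Weights w_i = sum_t Phi_i(t)^2 with Phi_i the Lagrange polynomial
    # of degree m - 1 satisfying Phi_i(T_j) = delta(i = j).
    w = np.empty(m)
    for i in range(m):
        others = np.delete(T, i)
        phi_i = np.poly1d(np.poly(others) / np.prod(T[i] - others))
        w[i] = np.sum(phi_i(t) ** 2)
    return T, w

# Exactness check: n = 20 points, m = 3 nodes, f of degree 2m - 1 = 5
t = np.arange(1.0, 21.0)
T, w = discrete_gauss_legendre(t, m=3)
f = lambda x: x ** 5 - 2 * x ** 3 + x + 1
print(np.sum(f(t)), np.dot(w, f(T)))                # equal up to roundoff
```

Gram-Schmidt on raw monomials is fine at this scale; for large n or m a three-term recurrence would be better conditioned.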

Gauss-Legendre quadrature for discrete sums: example

Σ_{i=1}^{m} ‖Φ_i‖_2^2 = n,  and  ‖Φ_i‖_2^2 = n ∫_0^1 Φ̃_i(t) dt + o(1/n)

Example: m = 3 with t_{k+1} − t_k = 1:

‖Φ_1‖_2^2 = 4n(n^2 − 4) / (3(3n^2 − 7))
‖Φ_2‖_2^2 = ‖Φ_3‖_2^2 = 5n(n^2 − 1) / (6(3n^2 − 7))

• T_1 = (n + 1)/2,  T_2 = T_1 + τ,  T_3 = T_1 − τ,  with τ = sqrt((3n^2 − 7)/20)

3. Application to the filtering problem

Initial filtering problem

⊲ Initial filtering problem:
  X_{t_0} ∼ π_0
  Ẋ_t = f(X_t)  for t ≥ 0
  Y_{t_k} = h(X_{t_k}) + σ ε_k  for 1 ≤ k ≤ n,

• π_0 is a given probability distribution on R^d (modelling the error on X_{t_0})
• (ε_1, ⋯, ε_n) are i.i.d. ∼ N(0, 1)
• g : R^+ → R such that g(t) = h(X(t)) for all t ∈ R^+ has its derivatives continuous up to order 2m

3. Application to the filtering problem

Building a new set of observations

⊲ New observations (Ỹ_1, ⋯, Ỹ_m):
  Ỹ_i = (1/‖Φ_i‖_2^2) Σ_{t=t_1}^{t_n} Φ_i(t) Y_t  for all i ∈ {1, ⋯, m}

⊲ The new set of observations is related to the state process (X):
  Ỹ_i = h(X_{T_i}) + (σ/‖Φ_i‖_2) ε_i  for all i ∈ {1, ⋯, m}, where (ε_1, ⋯, ε_m) are approximately i.i.d. ∼ N(0, 1)

⊲ The Fisher matrix I associated with (Y_{t_1}, ⋯, Y_{t_n}) is approximately equal to the Fisher matrix Ĩ associated with (Ỹ_1, ⋯, Ỹ_m): I ≈ Ĩ

Properties of this new set of observations

Ỹ_i = (1/‖Φ_i‖_2^2) Σ_{t=t_1}^{t_n} Φ_i(t) Y_t

- i.i.d. Gaussian, since the Lagrange polynomials are orthogonal:
  Σ_{t=t_1}^{t_n} Φ_i(t) Φ_j(t) = ‖Φ_i‖_2^2 δ(i = j)

- No loss of information (Fisher matrix), by Gauss quadrature (GQ):

  I = (1/σ^2) Σ_{t=t_1}^{t_n} (∂h/∂X_t)(∂h/∂X_t)^T  ≈  (1/σ^2) Σ_{i=1}^{m} ‖Φ_i‖_2^2 (∂h/∂X_t)(∂h/∂X_t)^T |_{t=T_i}  =  I(Ỹ)
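A numerical sketch of the construction for the equally spaced case t = 1, ⋯, n with m = 3, using the closed-form nodes T_1 = (n+1)/2, T_{2,3} = T_1 ± τ from the quadrature example. The trajectory g, the noise level σ and the seed are illustrative choices of ours.

```python
import numpy as np

n, m, sigma = 200, 3, 0.5                  # sigma and n are illustrative
t = np.arange(1.0, n + 1)                  # equally spaced times t_k = k
tau = np.sqrt((3 * n ** 2 - 7) / 20.0)
c = (n + 1) / 2.0
T = np.array([c - tau, c, c + tau])        # the three quadrature nodes

def phi(i, x):
    """Lagrange polynomial Phi_i of degree m - 1 on the nodes T."""
    out = np.ones_like(x, dtype=float)
    for j in range(m):
        if j != i:
            out *= (x - T[j]) / (T[i] - T[j])
    return out

g = lambda x: 1e-6 * x ** 3 - 0.01 * x     # smooth h(X_t), degree <= m
rng = np.random.default_rng(0)
Y = g(t) + sigma * rng.standard_normal(n)  # raw observations Y_t

norms = np.array([np.sum(phi(i, t) ** 2) for i in range(m)])   # ||Phi_i||_2^2
Y_tilde = np.array([np.sum(phi(i, t) * Y) for i in range(m)]) / norms

# Y~_i = g(T_i) + (sigma / ||Phi_i||_2) eps_i: the noise shrinks by ||Phi_i||_2
print(Y_tilde - g(T))                      # residuals, of order sigma*sqrt(m/n)
print(sigma / np.sqrt(norms))              # reduced noise standard deviations
```

With n = 200 and m = 3, ‖Φ_i‖_2² is of order n/m, so the reduced noise standard deviation is roughly σ√(m/n), about eight times smaller than σ here.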

3. Application to the filtering problem

New filtering problem

⊲ Initial filtering problem ≈ "data-reduced filtering problem":
  X_{t_0} ∼ π_0
  Ẋ_t = f(X_t)  for t ≥ 0
  Ỹ_i = h(X_{T_i}) + σ̃_i ε_i  with  σ̃_i = σ/‖Φ_i‖_2
where the observation noises (ε_1, ⋯, ε_m) are approximately i.i.d. ∼ N(0, 1).

⊲ The new standard deviation σ̃_i is smaller than the original one σ

⊲ The modified filter is updated less frequently than the original filter, but it remains in a sense "piecewise recursive"

New filtering problem: design

- Data reduction rate (choice of m): depends on the variation of the observation process and on the desired computing-time reduction (m = n gives the original problem)
- Computing cost of the roots:
  • t_{k+1} − t_k = Δt : computed off-line
  • t_{k+1} − t_k ≠ Δt : computed on-line (Laguerre method), very fast
- Piecewise recursive: choice of n

4. Simulation results

Bearing-only tracking: problem formulation

⊲ The goal is to estimate the target state X_t = (x(t), ẋ(t), y(t), ẏ(t))′:
  X_0 ∼ π_0
  Ẋ_t = F X_t  for t ≥ 0
  Y_{t_k} = arctan( (x(t_k) − x_o(t_k)) / (y(t_k) − y_o(t_k)) ) + σ ε_k  for 1 ≤ k ≤ n
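The state equation and the bearing measurement can be sketched directly; the helper names and the observer position below are illustrative. The bearing is measured from the y-axis, matching arctan((x − x_o)/(y − y_o)).

```python
import numpy as np

# Constant-velocity dynamics X_dot = F X for X = (x, x_dot, y, y_dot)'
# and the noisy bearing measurement; names and values are illustrative.
F = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0, 0.0]])

def propagate(X, dt):
    """Exact CV transition exp(F * dt): F is nilpotent, so it truncates."""
    return (np.eye(4) + dt * F) @ X

def bearing(X, obs_pos, sigma=np.deg2rad(1.0), rng=None):
    """Noisy bearing Y_tk = arctan((x - x_o)/(y - y_o)) + sigma * eps_k."""
    z = np.arctan2(X[0] - obs_pos[0], X[2] - obs_pos[1])
    if rng is not None:
        z += sigma * rng.standard_normal()
    return z

X = np.array([10e3, 10.0, 10e3, 10.0])   # target at (10 km, 10 km), 10 m/s
print(np.rad2deg(bearing(X, obs_pos=(0.0, 0.0))))   # 45.0 (deg)
```

Using np.arctan2(dx, dy) keeps the correct quadrant, which a plain arctan of the ratio would lose.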

4. Simulation results

Simulation parameters

• Initial observer position: (x_o(1), y_o(1)) = (0, 0)
• Observer velocity norm: 20 m/s
• Legs angle: 45°
• Initial target position: (x(1), y(1)) = (10 km, 10 km)
• Target velocity: (ẋ(t), ẏ(t)) = (10 m/s, 10 m/s)
• Number of observations: n = 800
• Initial law of the state: X_0 ∼ N(M, Σ) with Σ = diag[(5 km, 10 m/s, 5 km, 10 m/s)^2] and M ∼ N(X_0, Σ)
• Standard deviation of the observation noise: σ = 1°

4. Simulation results

SMISE of the x-position estimate w.r.t. time

⊲ RPF with N = 5000 particles and DRPF with N = 30000 particles. The computing cost of the RPF is more than five times the computing cost of the DRPF.

[Figure: x-position standard deviation (m) versus time (s), over 0-800 s, comparing the PCRB, the RPF, and the DRPF]

4. Simulation results

SMISE of the x-velocity estimate w.r.t. time

⊲ RPF with N = 5000 particles and DRPF with N = 30000 particles. The computing cost of the RPF is more than five times the computing cost of the DRPF.

[Figure: x-velocity standard deviation (m/s) versus time (s), over 0-800 s, comparing the PCRB, the RPF, and the DRPF]


Conclusion

⊲ We have proposed a method that reduces the data in filtering problems while keeping the amount of information approximately unchanged

• the computing time is drastically reduced, which allows the number of particles to be increased

• the Monte Carlo approximation error, and its propagation over time, are reduced

⊲ The new filter showed good performance in simulations when applied to the bearing-only tracking problem

