A Fast Resampling Scheme for Particle Filters

Tiancheng Li #*1, Tariq P. Sattar #2, Dedong Tang +3

# Centre for Automated and Robotics NDT, London South Bank University, 103 Borough Road, London, SE1 0AA, UK
* School of Mechatronics, Northwestern Polytechnical University, 127 Youyi Xilu, Xi'an, 710072, P. R. China
+ Chongqing University of Science and Technology, Chongqing Huxi University Town, Chongqing 401331, P. R. China
1 [email protected]
2 [email protected]
3 [email protected]
Abstract—An unbiased resampling method that is fast to compute is proposed for particle filters. Our approach differs from other methods in two respects. First, the number of particles is not fixed but varies around a reference value. Second, it is a deterministic sampling procedure, since no random numbers are used. The core idea is simply to replicate each particle as many times as the rounded product of the reference number and the weight of that particle. As an extension, the application of random numbers in resampling is discussed. Simulations show that our approach achieves estimation accuracy comparable to traditional resampling methods while being faster.
I. INTRODUCTION

This paper focuses on the resampling implementation for particle filters. Since importance resampling (proposed earlier in [1]) was applied in the bootstrap filter [2], Sequential Monte Carlo approaches (SMC, commonly referred to as particle filters, PF), which combine importance sampling (Sequential Importance Sampling, SIS) and resampling (Sequential Importance Resampling, SIR) to implement Bayesian estimation, have been widely applied to the optimal state estimation of dynamical systems [3, 4]. Generally speaking, the PF approximates the state density p(x_k) via a set of random particles with associated nonnegative weights

p(x_k) \approx \sum_{i=1}^{N_k} w_{k,i} \, \delta_{x_{k,i}}(x_k), \quad \text{s.t.} \; \sum_{i=1}^{N_k} w_{k,i} = 1, \; w_{k,i} \geq 0    (1)
where \{x_{k,i}, w_{k,i}\}_{i=1,2,\ldots,N_k} represent the states and weights of the particles respectively, N_k is the total number of particles at time k, and \delta_x(\cdot) denotes the Dirac delta mass located at x. The weights are chosen using the principle of SIS, i.e.

w_k \propto w_{k-1} \, \frac{p(y_k \mid x_k) \, p(x_k \mid x_{k-1})}{q(x_k \mid x_{1:k-1}, y_k)}    (2)
where y_k is the measurement at time k and q(\cdot) is the proposal importance density. It is well known that after a few iterations of the particle propagation process the weight concentrates on only a few particles while most particles have negligible weight; this is the so-called sample degeneracy. It is one of the inherent faults of SIS, see also [5]. A very intuitive idea to counteract this problem is resampling, which eliminates particles that have small weights and concentrates on particles with large weights.

Since the importance of resampling was first demonstrated in the context of Sequential Monte Carlo (SMC, commonly referred to as particle filters) [2], quite a few resampling methods have been proposed to enhance particle filters, typically multinomial resampling [2], stratified resampling [6], systematic resampling [6, 7] and residual resampling [8]. In these methods, the size of the particle set is kept the same before and after resampling. However, this is not mandatory; several schemes have been proposed to reweight or reallocate particles and thereby break the restriction of a constant sample size, such as reallocation [9] and the particle merging and splitting procedure [10]; see also the review [11]. In particular, systematic resampling is widely used since it minimizes the Monte Carlo (MC) variation (that is, it gives a closer match between the resampled number of each particle and its weight) and is known to be extremely fast to compute [12]. In this paper, we present a new scheme which is as fast as systematic resampling and also minimizes the MC variation. A simulation demonstration is given as well.

In practice, as a general principle, resampling should preserve the original particle distribution if no additional information is taken into account in the process. More precisely, the expected number of times n_i that each particle is resampled should be proportional to its weight w_i, i.e.

E(n_i \mid w_{1:M}) = N w_i    (3)

where M is the number of particles before resampling. This constraint is also known as the unbiasedness condition [8, 9].

II. THE PROPOSED METHOD

Generally speaking, reducing the use of random numbers in the resampling process helps to reduce computation and also to achieve a lower MC variance. One intuitive idea is to resample without using any random numbers at all. As such, we propose an unbiased deterministic sampling procedure for resampling, termed rounding-copy. The Matlab®-style pseudo-code of our approach is given in Algorithm 1: M particles {X_i, W_i}, i = 1, ..., M, and the reference number N of particles are the input, and N* particles {x_j, w_j}, j = 1, ..., N*, are the output. The weights W_i are assumed to have been normalized before resampling, i.e. \sum_{i=1}^{M} W_i = 1.
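For comparison, the systematic resampling mentioned above can be implemented as follows. This is a minimal, standard MATLAB sketch (not the authors' code), assuming a row vector w of M normalized weights and a desired output size N; it returns the indices of the selected particles and draws a single random number per call.

% Standard systematic resampling (shown for comparison with the proposed scheme).
function idx = systematic_resample(w, N)
    edges = cumsum(w);               % cumulative sum of the normalized weights
    edges(end) = 1;                  % guard against round-off error
    u = (rand + (0:N-1)) / N;        % one random offset, then a regular grid of N points
    idx = zeros(1, N);
    j = 1;
    for i = 1:N
        while u(i) > edges(j)        % advance to the particle whose weight bin contains u(i)
            j = j + 1;
        end
        idx(i) = j;                  % the i-th output particle is a copy of particle j
    end
end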
Closely related schemes include the residual systematic resampling (RSR) and the deterministic resampling [5]. The RSR still uses random numbers, while the deterministic resampling is free of random numbers but requires a partitioning of the state space that is sensitive to the dimensionality. In contrast, our approach uses no random numbers and no additional sampling strategy at all, which makes it extremely concise and fast to compute. This, however, comes at the price of a fluctuating number of particles, as indicated by Remark 1 below, although the fluctuation is unbiased and limited in scope. In fact, as will be shown in our simulation, the number of particles does not fluctuate far from the reference N, much less than the extreme scope [-M/2, M/2]. Despite its simplicity and freedom from randomness, the rounding-copy scheme may still suffer from the side effect of resampling, namely sample impoverishment, just like general resampling methods. To mitigate this, resampling only at selected times [11] and compensation strategies such as roughening [2, 3] are feasible and helpful for our approach.
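As an illustration of the roughening mentioned above, one common form (following the idea in [2]; this sketch and its tuning constant K are not prescribed by the present paper) adds small Gaussian jitter to the resampled particles, scaled by the spread of the particle cloud:

% Roughening sketch: jitter the resampled particles to restore diversity.
% x is a d-by-Nstar matrix of particle states, K a small tuning constant.
function x = roughen(x, K)
    [d, Nstar] = size(x);
    E = max(x, [], 2) - min(x, [], 2);   % per-dimension spread of the particle cloud
    sigma = K * E * Nstar^(-1/d);        % jitter standard deviation in each dimension
    x = x + sigma .* randn(d, Nstar);    % independent Gaussian jitter on every particle
end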
Algorithm 1  The pseudo-code of our approach

[{x_j, w_j}, j = 1, ..., N*] = resampling[{X_i, W_i}, i = 1, ..., M, N]
N* = 0
FOR i = 1:M
    n_i = round(N × W_i)
    IF n_i >= 1
        FOR h = 1:n_i
            N* = N* + 1
            x_{N*} = X_i
        END
    END
END
FOR j = 1:N*
    w_j = 1/N*
END

A. Unbiasedness

As shown in Algorithm 1, each particle is resampled exactly n_i = [N × W_i] times (i.e. round(N × W_i) in Algorithm 1), where the symbol [·] denotes the nearest integer to its argument. This forms a purely deterministic sampling scheme that introduces no randomness. For the rounding operation we have E([N × W_i]) = N W_i, which satisfies the "unbiasedness" condition (3). Summing both sides over i, it is obtained that

E(N*) = N    (4)

Further, we have the scope of N* as follows:

Remark 1. max(N - M/2, 0) ≤ N* ≤ N + M/2.
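To make this concrete, the following MATLAB lines give a compact vectorized equivalent of Algorithm 1 applied to hypothetical weights (the variable names and sizes are illustrative only) and empirically check the bounds stated in Remark 1:

M = 1000; N = 1000;                    % input size and reference output size (illustrative)
W = rand(1, M); W = W / sum(W);        % hypothetical normalized weights
X = randn(1, M);                       % hypothetical one-dimensional particle states

n = round(N * W);                      % n_i = [N × W_i], number of copies of each particle
x = repelem(X, n);                     % replicate particle i exactly n_i times
Nstar = numel(x);                      % actual output size, fluctuating around N
w = ones(1, Nstar) / Nstar;            % uniform weights after resampling

% Check of Remark 1: max(N - M/2, 0) <= N* <= N + M/2
assert(Nstar >= max(N - M/2, 0) && Nstar <= N + M/2);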