An Improved MCMC Particle Filter Based on Greedy Algorithm for Video Object Tracking

Song Wang, Huiyuan Wang and Xiufen Wang
School of Information Science and Engineering, Shandong University, Jinan 250100

Abstract- In this paper, an improved MCMC (Markov Chain Monte Carlo) particle filter for video tracking is proposed. MCMC plays an important role in video tracking and is therefore widely used in this field. However, its high computational complexity still makes it very difficult to satisfy the requirements of real-time applications. To solve this problem, the concept of greedy algorithms is adopted in this study. Experimental results show that the proposed approach performs well in both tracking robustness and computational efficiency.

I. INTRODUCTION
Recently, many algorithms for visual object tracking, such as Kalman filters, particle filters, meanshift, and MCMC particle filters, have been proposed and applied to surveillance, robotics, human-machine interfaces, etc. Most algorithms can be classified into parametric and nonparametric methods. As a nonparametric method, meanshift [1] tracking has gained much attention and is widely used. However, it is well known that meanshift cannot predict the object position in the next frame. On the other hand, parametric methods such as Kalman filters [2] and particle filters (PF) [3-5] can deal with overlap between objects and can predict the next position. Their limitations are that Kalman filters only work under the hypothesis of a linear system with Gaussian noise, and particle filters suffer from particle degeneracy and impoverishment. To solve these problems, MCMC algorithms [6-11] are usually applied: they replace the traditional importance-sampling step of particle filters with a Markov chain Monte Carlo sampling step, which leads to more effective algorithms than conventional PF. Even so, the MCMC method has shortcomings of its own, one of which is that its high time cost makes real-time processing very hard to achieve. In this paper, the idea of greedy algorithms is introduced into the MCMC PF to reduce the time cost while keeping the tracking performance unchanged.
II. MCMC PARTICLE FILTERS

A. Bayesian Target Tracking

Target tracking can be expressed as Bayesian filtering. We can recursively update the posterior distribution $p(X_t \mid Z_{1:t})$ over the states of all the $n$ targets $X_{i,t}, i = 1, \ldots, n$, given all observations up to and including time $t$, according to:

$$p(X_t \mid Z_{1:t}) = C^{-1}\, p(Z_t \mid X_t) \int p(X_t \mid X_{t-1})\, p(X_{t-1} \mid Z_{1:t-1})\, dX_{t-1} \qquad (1)$$

where $C$ is a normalization constant. The likelihood $p(Z_t \mid X_t)$ expresses the measurement model, that is, the probability of observing the measurement $Z_t$ given the state $X_t$ at time $t$. The motion model $p(X_t \mid X_{t-1})$ predicts the state $X_t$ at time $t$ given the previous state $X_{t-1}$.
B. MCMC Particle Filters

By equation (1), we can compute the distribution $p(X_t \mid Z_{1:t})$ in theory. However, in many applications this distribution is analytically intractable. In order to address this problem, MCMC is used in a sequential setting to obtain an approximation of this distribution, as in [3]. This is achieved by using a set of unweighted particles to represent the density $p(X_{t-1} \mid Z_{1:t-1})$:

$$p(X_{t-1} \mid Z_{1:t-1}) \approx \frac{1}{N} \sum_{n=1}^{N} \delta\!\left(X_{t-1} - X_{t-1}^{(n)}\right) \qquad (2)$$

Then the particle filter propagates the $N$ particles, maintained as a Markov chain over time, to approximate $p(X_t \mid Z_{1:t})$. Thus equation (1) becomes:

$$p(X_t \mid Z_{1:t}) \approx C^{-1}\, p(Z_t \mid X_t) \sum_{n=1}^{N} p\!\left(X_t \mid X_{t-1}^{(n)}\right) \qquad (3)$$

In MCMC particle filters, these samples are drawn iteratively through a Markov chain. That is, MCMC is a strategy for generating samples $X_{i,t}, i = 1, \ldots, n$, while exploring the state space $X_t$ using a Markov chain mechanism.
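As a rough illustration of equations (2)-(3), the following Python sketch propagates a set of unweighted particles through a motion model and evaluates the likelihood of each propagated particle. The Gaussian random-walk motion model and the example likelihood are placeholder assumptions, not the models used in this paper.

```python
import numpy as np

def propagate_and_weigh(particles, measurement, likelihood, motion_std=2.0, rng=None):
    """One prediction/evaluation step in the spirit of equations (2)-(3).

    particles   : (N, d) array, unweighted samples of p(X_{t-1} | Z_{1:t-1})
    measurement : observation Z_t, passed through to `likelihood`
    likelihood  : callable approximating p(Z_t | X_t) up to a constant
    motion_std  : std-dev of a placeholder Gaussian random-walk motion model
    """
    rng = rng or np.random.default_rng()
    # Motion model p(X_t | X_{t-1}): here a simple Gaussian random walk.
    predicted = particles + rng.normal(0.0, motion_std, size=particles.shape)
    # Measurement model p(Z_t | X_t), evaluated per particle as in eq. (3).
    weights = np.array([likelihood(x, measurement) for x in predicted])
    return predicted, weights

# Example with a 1-D state and a Gaussian likelihood around the measurement:
parts = np.zeros((100, 1))
lik = lambda x, z: np.exp(-0.5 * ((x[0] - z) / 5.0) ** 2)
pred, w = propagate_and_weigh(parts, measurement=3.0, likelihood=lik)
```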
C. Metropolis-Hastings

The Markov chain can be constructed using random-walk algorithms such as Metropolis-Hastings (MH) or the Gibbs sampler. We prefer the Metropolis-Hastings algorithm in this paper. For each frame, a Markov chain is constructed by drawing $N = N_{burn} + N_{mix}$ samples to approximate the recursive Bayesian filtering distribution with the help of MH. $N_{burn}$ is the number of burn-in samples and $N_{mix}$ is the number of samples required to reach the mixing time after burn-in. The MH algorithm [11-13] for multiple objects is described below. At time instant $t-1$, the MH sampler is initialized by choosing a sample from the current Markov chain and moving all the targets according to their state evolution. The result is used as the seed $X^0$ for the new chain.
1. Begin with the state of the previous sample, $X^* = X^{n-1}$.
2. Select a target object $m^*$ from the target proposal distribution.
3. Propose a new configuration for $X^*$ by sampling a new configuration for object $m^*$ from the proposal distribution (state evolution), while keeping all other
objects fixed.
4. Compute the acceptance ratio:

$$a = \min\left(1, \frac{p(Z_t \mid X_t^*)}{p(Z_t \mid X_t)}\right) \qquad (4)$$

Because the other objects remain unchanged when the selected object evolves its state, the acceptance ratio reduces to:

$$a = \min\left(1, \frac{p(Z_t \mid X_{t,m^*}^*)}{p(Z_t \mid X_{t,m^*})}\right) \qquad (5)$$

5. Add the $n$th sample to the Markov chain. If $a \geq 1$, add the proposed configuration: $X^n = X^*$. If not, add the proposed configuration with probability $a$; if it is rejected, add the previous configuration: $X^n = X^{n-1}$ [13].

From the MH algorithm, we know that this mechanism is constructed so that the chain spends more time in the most important regions. In addition, accepting a sample with probability $a$ keeps the algorithm from getting stuck in local maxima.

In the above basic MH algorithm, two problems are difficult to deal with:
1. The long Markov chain. The $N$ samples are used to recover the posterior distribution $p(X_t \mid Z_{1:t})$, so if the Markov chain is not long enough, it will probably not explore the full state space $X_t$. But a long Markov chain leads to a high time cost.
2. Monitoring the acceptance rate. If the acceptance rate is too high, the chain is probably not mixing well, while if it is too low, too many candidate draws are rejected and the algorithm is too inefficient.
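A minimal Python sketch of steps 1-5 above for a multi-object configuration may make the mechanism concrete. The per-object likelihood function and the Gaussian single-object proposal are illustrative assumptions, not the models of this paper.

```python
import numpy as np

def mh_chain(seed, n_objects, likelihood, n_burn=50, n_mix=5, prop_std=3.0, rng=None):
    """Metropolis-Hastings over multi-object configurations (steps 1-5).

    seed       : (n_objects, d) initial configuration X^0
    likelihood : callable giving p(Z_t | X_{t,m}) up to a constant, per object
    Returns the chain with the burn-in samples discarded.
    """
    rng = rng or np.random.default_rng()
    chain = [seed.copy()]
    x = seed.copy()
    for _ in range(n_burn + n_mix):
        m = int(rng.integers(n_objects))          # step 2: pick one target
        proposal = x.copy()                       # step 1: start from X^{n-1}
        proposal[m] += rng.normal(0.0, prop_std, size=x.shape[1])  # step 3
        # Step 4: the ratio reduces to the selected object, as in eq. (5).
        a = min(1.0, likelihood(proposal[m], m) / likelihood(x[m], m))
        # Step 5: accept with probability a, otherwise keep the old sample.
        if rng.random() < a:
            x = proposal
        chain.append(x.copy())
    return chain[n_burn + 1:]
```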
III. AN IMPROVED MCMC PF WITH GREEDY ALGORITHM

To solve the above problems in the MH algorithm, we introduce the idea of greedy algorithms into MCMC in this section.

A. Greedy Algorithms

The basic idea behind greedy algorithms [14] is to build up a solution from smaller pieces. Unlike other approaches, however, greedy algorithms keep only the best solution they find as they go along. Greedy algorithms are fast, generally linear to quadratic, and require little extra memory. Unfortunately, they are not always correct; but when they do work, they are often easy to implement and fast to execute. The main steps of greedy algorithms can be described as follows (a small worked example is sketched below):
(1) Build a model of the problem according to the requirements.
(2) Divide the main problem into smaller sub-problems.
(3) Obtain the optimal solution of every sub-problem based on the greedy criterion.
(4) Combine the optimal solutions of all the sub-problems to form the final approximate solution.
Figure 1 illustrates the main idea of greedy algorithms.

Figure 1 Idea illustration of greedy algorithms
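As a concrete and intentionally simple illustration of steps (1)-(4), the following Python sketch applies the greedy criterion to the classic coin-change problem; it is not part of the tracking algorithm itself.

```python
def greedy_change(amount, denominations=(25, 10, 5, 1)):
    """Greedy coin change: at each step take the largest coin that fits.

    Each coin choice is one sub-problem solved by the greedy criterion
    (step 3); the collected coins form the combined solution (step 4).
    """
    coins = []
    for d in sorted(denominations, reverse=True):
        while amount >= d:
            amount -= d
            coins.append(d)
    return coins  # approximate in general; optimal for canonical coin systems

# Example: 63 -> [25, 25, 10, 1, 1, 1]
print(greedy_change(63))
```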
B. Using Greedy Algorithms in MCMC for Object Tracking

In MCMC particle filters, a Markov chain is constructed to approximate the posterior distribution $p(X_t \mid Z_{1:t})$ with exact samplers such as Metropolis-Hastings. To develop our idea further, it is necessary to look into the detailed behavior of the MH algorithm. MH explores areas of high likelihood by proposing a change to one target at a time, automatically accepting samples that increase the overall likelihood and accepting samples that decrease it only with probability $a$. An acceptance ratio greater than 1 (before truncation by the min operator in equation (5)) always means that a sample is accepted because it increases the overall likelihood; in terms of the frame, it means that the new sample is better than the last one. Most of the time, the position of such a new sample is near the actual position of the object. We can therefore expect an even better sample if the next proposal is taken from the current sample along the direction of the last accepted change, until the proposed position moves past the actual position, as shown in Figure 2. In Figure 2, a is the accepted sample and A is the actual position of the target; the arrow shows the direction of the accepted move. If we keep sampling along this direction, we keep getting better samples until the position overshoots the actual position. Based on this hypothesis, we can improve MCMC using the idea of the greedy algorithm.

Figure 2 Example of actual tracking
The improved sampling step works as follows. If the last sample of an object was accepted with $a > 1$, greedy sampling is applied to the same object: the new proposal is drawn along the direction of the last accepted change. If the acceptance ratio does not exceed 1, the next sample is drawn in the conventional way. Everything else proceeds as in the common MH algorithm until $N$ samples are obtained. Figure 3 shows the diagram of this improved approach, where Num denotes the index of the object whose last sample was accepted with $a > 1$. The proposed approach makes full use of the special features of video sequences to draw samples that explore the state space more thoroughly within the MH algorithm. With greedy sampling, the particles we obtain usually lie in higher-likelihood areas than with conventional sampling. In this way, the chain approximates the posterior distribution with fewer samples and thus improves the overall acceptance ratio (a sketch of the greedy proposal is given below).

Figure 3 Diagram of the improved approach (initialize the Markov chain; choose a target n; if n equals Num, draw a greedy sample, otherwise a common sample; iterate N times)
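A minimal Python sketch of the greedy proposal described above, assuming a 2-D position state and a per-object configuration array; the step size and the fallback Gaussian proposal are illustrative assumptions, not values from the paper.

```python
import numpy as np

def propose(x_prev, m, last_dir, num, rng, step=1.0, prop_std=3.0):
    """Greedy or conventional proposal for object m (cf. Figure 3).

    x_prev   : (n_objects, 2) current configuration
    last_dir : (2,) direction of the last accepted move of object `num`
    num      : index of the object whose last sample had a > 1, or None
    """
    proposal = x_prev.copy()
    if num is not None and m == num:
        # Greedy sample: continue along the last accepted direction.
        proposal[m] += step * last_dir
    else:
        # Common sample: random-walk proposal as in plain MH.
        proposal[m] += rng.normal(0.0, prop_std, size=2)
    return proposal
```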
IV. EXPERIMENTS AND DISCUSSION

The proposed approach was implemented on real-life data in our experiments. We use the color and binary information of the foreground and the background as the features of the observation model. The frame images are segmented with the method described in [9]. The test sequence was downloaded from the internet and its resolution normalized to 320×240. In the video sequence, two racing cars pass through the camera view successively; the cars run at high speed and turn left after entering the scene. We performed moving-object tracking using the conventional MCMC approach and the proposed approach respectively, and then compared the results. Selected frames from the resulting sequences are shown in Figure 4 to Figure 6. In the first two groups of frames (Figure 4 and Figure 5), we set the length of the Markov chain to $N = N_{burn} + N_{mix} = 50 + 5$, and in the last group (Figure 6) to $N = N_{burn} + N_{mix} = 100 + 25$.
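The paper does not give the exact form of its color/foreground observation model, so the following Python sketch is only a hypothetical stand-in: a likelihood based on the Bhattacharyya similarity between a candidate region's color histogram and a reference histogram, a common choice in color-based tracking.

```python
import numpy as np

def color_likelihood(frame_hist, ref_hist, sigma=0.1):
    """Hypothetical color observation model p(Z_t | X_t).

    frame_hist, ref_hist : normalized color histograms (same shape)
    Returns a likelihood that peaks when the histograms match.
    """
    # Bhattacharyya coefficient in [0, 1]; 1 means identical histograms.
    bc = np.sum(np.sqrt(frame_hist * ref_hist))
    distance = np.sqrt(max(0.0, 1.0 - bc))
    return np.exp(-distance**2 / (2.0 * sigma**2))

# Example with two similar 8-bin histograms:
h1 = np.full(8, 1 / 8)
h2 = np.array([0.2, 0.1, 0.1, 0.15, 0.1, 0.1, 0.15, 0.1])
print(color_likelihood(h1, h2))
```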