HYPOTHESES PRUNING IN JPDA ALGORITHM FOR MULTIPLE TARGET TRACKING IN CLUTTER∗∗

K. M. Alexiev, P. D. Konstantinova

25A Acad. G. Bonchev Str., Sofia, Bulgaria, [email protected]

Multiple target tracking in heavy clutter is a challenging task. Many algorithms have been proposed in recent years to solve this problem. One of the most effective and practical is the Joint Probabilistic Data Association (JPDA) algorithm. This paper discusses several aspects of this algorithm. Its most time-consuming (combinatorial) part is hypothesis generation and hypothesis score calculation. Most hypotheses are insignificant, with negligible effect on the final result: the choice of the best hypothesis. In this case it is useful to reduce the number of generated hypotheses, and the paper shows how to do this. The obtained results are applicable to all real-time JPDA algorithms and their modifications (e.g., IMM JPDA). Keywords: multiple target tracking, JPDA

1. Introduction

Multiple target tracking in heavy clutter is a challenging task. It differs from the standard state estimation problem in that the measurement origin is also uncertain. When new measurements are obtained, the association between the measurement list and the track list requires the estimation algorithm to test which measurement-to-track correspondence is correct, while simultaneously estimating the target states. Sometimes, when there are closely spaced targets, multiple tracks may share the same measurement(s). Joint events are formed by creating all possible combinations of track-measurement assignments, and the probabilities of these joint events are calculated. The expressions for the joint events incorporate the probabilities of track existence of the individual tracks, as well as an efficient approximation of the cluster volume and an a priori probability of the number of clutter measurements in each cluster. From these probabilities the data association and track existence probabilities of the individual tracks are obtained.

Several approaches have been proposed to solve the described data association problem [5]. The simplest is the so-called nearest neighbor (NN) approach, which associates the gated measurement with minimum distance to the track file under consideration. The strongest neighbor method can be regarded as a modification of the NN method. The JPDA algorithm is an extension of the Probabilistic Data Association method, which allows the possibility that a measurement may have originated from one of a number of candidate tracks or from clutter. In each scan JPDA partitions tracks into clusters, where tracks in each cluster share common measurements. It generates all possible joint measurement-to-track assignments and calculates the a posteriori probability of each joint event. From these probabilities, the data association coefficients of each track are calculated and then used to update the track estimates. The multiple hypothesis tracking (MHT) method exhaustively enumerates all possible hypotheses over a number of the most recent frames and chooses the most likely one.

The JPDA algorithm is the most effective of the approaches described above, and it can be applied successfully to multiple closely spaced targets even in the presence of heavy clutter. But JPDA is rather complex, because it creates a joint event for each possible combination of measurement origins. The number of joint events can grow very rapidly in a dense clutter situation, and JPDA then requires a fairly large amount of computation to evaluate the weighting probabilities. To improve this situation, the paper studies the problem of hypothesis generation. An extension of the algorithm in our previous work [1] is proposed. Instead of enumerating all feasible hypotheses, we propose to use a ranked assignment approach to find only the first K best hypotheses. The question is how many hypotheses K should be found: the value of the threshold K has to be optimal with respect to a criterion. In this paper a probabilistic approximate measure of the necessary number of hypotheses is given.

The paper is organized as follows. The next section briefly describes the common JPDA algorithm. Section 3 motivates the choice of the probabilistic threshold. Section 4 presents simulation results.

2. JPDA algorithm and K-best hypotheses

When several closely spaced targets form a cluster, the standard JPDA algorithm [5] generates all feasible hypotheses and computes their scores. Every hypothesis meets two important constraints:

∗∗ The research reported in this paper is partially supported by the Bulgarian Ministry of Education and Science under grants I-1205/2002 and I-1202/2002, and by the Center of Excellence BIS21 under grant ICA1-2000-70016.

a) no target can create more than one measurement;
b) no measurement can be assigned to more than one target.

The set of all feasible hypotheses includes the 'null' hypothesis and all its derivatives. Considering all possible assignments, including the 'null' assignments, is important for the optimal calculation of the assignment probabilities [6]. The probability of hypothesis $H_l$ is computed by the expression

$$P'(H_l) = \beta^{N_M - (N_T - N_{nD})} (1 - P_D)^{N_{nD}} P_D^{N_T - N_{nD}} \prod_{(i,j) \in H_l} g_{ij}, \qquad (1)$$

where $\beta$ is the probability density of false returns and

$$g_{ij} = \frac{e^{-d_{ij}^2/2}}{(2\pi)^{M/2} \sqrt{|S|}}$$

is the probability density that measurement $j$ originates from target $i$. Here $N_M$ is the total number of measurements in the cluster, $N_T$ is the total number of targets, $d_{ij}$ is the statistical distance, $N_{nD}$ is the number of not detected targets, $M$ is the measurement vector size, and $S$ is the innovation covariance matrix. The step ends with the standard normalization

$$P(H_l) = \frac{P'(H_l)}{\sum_{l=1}^{N_H} P'(H_l)},$$
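To make the score computation concrete, the following sketch evaluates $g_{ij}$ and the unnormalized score of Eq. (1) in Python; the function names and the representation of a hypothesis as a target-to-measurement dictionary are illustrative assumptions, not part of the original algorithm description.

```python
import numpy as np

def gate_likelihood(z, z_pred, S):
    """g_ij of Eq. (1): Gaussian density that measurement z originates from
    the target whose predicted measurement is z_pred (innovation covariance S)."""
    M = z.shape[0]                      # measurement vector size
    nu = z - z_pred                     # innovation
    d2 = nu @ np.linalg.solve(S, nu)    # squared statistical distance d_ij^2
    return np.exp(-d2 / 2.0) / ((2.0 * np.pi) ** (M / 2.0) * np.sqrt(np.linalg.det(S)))

def hypothesis_score(assignment, g, beta, PD, N_M, N_T):
    """Unnormalized score P'(H_l) of one joint event, per Eq. (1).
    assignment: dict {target i: measurement j}, detected targets only
    (a hypothetical data layout chosen for this sketch)."""
    N_nD = N_T - len(assignment)        # number of not-detected targets
    score = (beta ** (N_M - (N_T - N_nD))
             * (1.0 - PD) ** N_nD
             * PD ** (N_T - N_nD))
    for i, j in assignment.items():
        score *= g[i, j]                # product of g_ij over the assigned pairs
    return score
```

Normalizing a list of such scores by their sum then yields the probabilities $P(H_l)$ above.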

where $N_H$ is the total number of hypotheses. To compute, for a fixed $i$, the association probability $p_{ij}$ that observation $j$ originates from track $i$, we take the sum of the probabilities of those hypotheses in which this event occurs:

$$p_{ij} = \sum_{l \in L_j} P(H_l), \quad j = 1, \ldots, m_i(k), \quad i = 1, \ldots, N_T,$$

where $L_j$ is the set of indices of all hypotheses that include the event mentioned above, $m_i(k)$ is the number of measurements falling in the gate of target $i$, and $N_T$ is the total number of targets in the cluster. For every target the 'merged' combined innovation is computed:

$$\nu_i(k) = \sum_{j=1}^{m_i(k)} p_{ij} \nu_{ij}(k).$$
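As an illustration, the sketch below accumulates $p_{ij}$ from a list of scored hypotheses and forms the merged innovation; the data layout (hypotheses as dictionaries, per-target innovation lists) is an assumption made for this example.

```python
import numpy as np

def association_probabilities(hypotheses, probs, N_T, N_M):
    """p_ij: sum of the normalized probabilities P(H_l) over all hypotheses
    H_l that assign measurement j to target i."""
    p = np.zeros((N_T, N_M))
    for H, P in zip(hypotheses, probs):   # H: dict {target i: measurement j}
        for i, j in H.items():
            p[i, j] += P
    return p

def merged_innovation(p, nu_i, i):
    """Combined innovation for target i: sum_j p_ij * nu_ij.
    nu_i[j] is the innovation of measurement j against target i's prediction."""
    return sum(p[i, j] * nu_i[j] for j in range(len(nu_i)))
```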

The most time-consuming part of the algorithm is hypothesis generation and score computation. The number of all feasible hypotheses increases exponentially with $N_M$. To avoid these overwhelming computations we take into consideration only a small part of all feasible hypotheses: those with the highest scores. Let us suppose that the first K hypotheses (with the highest scores) are under consideration. In order to find the first K best hypotheses we use an algorithm due to Murty [2], optimized by Miller et al. [3]. This algorithm gives a set of solutions to the assignment problem [4], ranked in increasing order of cost. Every solution of the assignment problem represents a sum of elements of the cost matrix. To define the cost matrix correspondingly, we take the logarithm of both sides of (1). On the left-hand side we obtain the logarithm of the hypothesis probability and, on the right-hand side, a sum of logarithms of the partitioning elements:

$$\ln P'(H_l) = (N_M - (N_T - N_{nD})) \ln \beta + N_{nD} \ln(1 - P_D) + (N_T - N_{nD}) \ln P_D + \sum_{(i,j) \in H_l} \ln g_{ij}.$$
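A minimal sketch of Murty's ranked-assignment scheme is given below, using `scipy.optimize.linear_sum_assignment` as the underlying assignment solver; the force/forbid partitioning follows Murty's idea, while constants such as `BIG` and the subproblem bookkeeping are assumptions of this illustration (the optimized variant of Miller et al. [3] adds refinements not shown here).

```python
import heapq
import numpy as np
from scipy.optimize import linear_sum_assignment

BIG = 1e9  # stands in for "forbidden" entries; assumed far larger than any real cost

def _solve(cost):
    """Solve one assignment subproblem; return (total cost, pairs) or None if infeasible."""
    rows, cols = linear_sum_assignment(cost)
    total = cost[rows, cols].sum()
    if total >= BIG:               # a forbidden cell was forced into the solution
        return None
    return total, list(zip(rows, cols))

def murty_k_best(cost, K):
    """Return up to K assignments of the cost matrix, cheapest first (Murty [2])."""
    cost = np.asarray(cost, dtype=float)
    best = _solve(cost)
    if best is None:
        return []
    counter = 0                    # tie-breaker so the heap never compares lists
    heap = [(best[0], counter, best[1], cost)]
    ranked = []
    while heap and len(ranked) < K:
        total, _, pairs, mat = heapq.heappop(heap)
        ranked.append((total, pairs))
        # Partition the popped subproblem: for each position t, forbid the t-th
        # pair and force pairs 0..t-1, producing disjoint subproblems.
        for t, (i, j) in enumerate(pairs):
            sub = mat.copy()
            sub[i, j] = BIG                      # forbid this particular pair
            for fi, fj in pairs[:t]:             # force each earlier pair by
                sub[fi, :] = BIG                 # blocking its row and column
                sub[:, fj] = BIG                 # everywhere except the pair itself
                sub[fi, fj] = mat[fi, fj]
            sol = _solve(sub)
            if sol is not None:
                counter += 1
                heapq.heappush(heap, (sol[0], counter, sol[1], sub))
    return ranked
```

Running `murty_k_best` on the negative-log cost matrix described next yields the K hypotheses of highest probability without enumerating the full hypothesis set.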

We construct a cost matrix from the negative logarithms of these elements. Then the optimal (minimum-cost) solution of the assignment problem with this cost matrix coincides with the hypothesis of highest probability. In order to use any of the widespread assignment algorithms, as well as the algorithm [1] for finding the K best hypotheses, the cost matrix has to be padded to a square matrix. The values in the added columns are chosen appropriately, so that these columns do not influence the optimal solution. Let us suppose that the algorithm finds the K best assignments, i.e. those with the highest probabilities. The normalization (the transformation from likelihood function to probability) can then be done by the equation

$$P(H_l) = \frac{P'(H_l)}{\sum_{l=1}^{K} P'(H_l)}.$$
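One possible construction of such a cost matrix is sketched below. Dropping the terms $N_M \ln\beta + N_T \ln(1-P_D)$, which are identical for every hypothesis, the log-score becomes a sum of per-assignment terms, giving the per-cell cost used here; the dummy 'not detected' columns and the constant `BIG` are assumptions of this illustration rather than the authors' exact construction.

```python
import numpy as np

BIG = 1e9  # forbidden (non-gated) target-measurement pairs

def jpda_cost_matrix(g, beta, PD):
    """Cost matrix for the ranked-assignment search (a sketch).

    g[i, j] holds g_ij for gated target-measurement pairs and 0 otherwise.
    Row i = target i; column j < N_M = measurement j; column N_M + i is the
    'target i not detected' dummy with zero cost, so the added columns cannot
    influence the optimal solution.  scipy's solver accepts the rectangular
    matrix directly; classic solvers would pad it to square.
    """
    N_T, N_M = g.shape
    C = np.full((N_T, N_M + N_T), BIG)
    sub = np.full((N_T, N_M), BIG)
    gated = g > 0
    # per-assignment cost: -[ln PD + ln g_ij - ln beta - ln(1 - PD)]
    sub[gated] = -np.log(PD * g[gated] / (beta * (1.0 - PD)))
    C[:, :N_M] = sub
    C[np.arange(N_T), N_M + np.arange(N_T)] = 0.0   # not-detected dummies
    return C
```

Feeding this matrix to `murty_k_best` above returns the K hypotheses in decreasing order of probability; exponentiating the negated total costs recovers scores proportional to $P'(H_l)$ (the dropped constant terms cancel), ready for the normalization over K.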

One important question of practical significance is how to choose the number of generated and evaluated hypotheses. The value of K has to be sufficiently small to ensure acceleration of the algorithm and, at the same time, must not be so small that it distorts the computed assignment probabilities. If, for example, the score of each of these hypotheses differs from any of the others by no more than one order of magnitude, it would not be possible to truncate any significant part of the hypotheses. If, however, the prevailing share of the total score is concentrated in a small percentage of the total number of hypotheses, then considering only this small percentage becomes very attractive. The analysis of the hypothesis score distribution shows that the scores of feasible hypotheses decrease very rapidly, and some 1-5 percent of them cover more than 95 percent of the total score sum. One possible expression for terminating the hypothesis generation process is given in [1]:

$$H(n) - H(n+1) < \alpha \cdot H(n),$$

where $\alpha$ is a small threshold coefficient.
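Read literally, this criterion stops the K-best generation as soon as the score gap between consecutive ranked hypotheses falls below the fraction $\alpha$ of the current score. A sketch of that literal reading follows; the iterator interface and the safety cap are assumptions of this illustration, not details from [1].

```python
def generate_until_converged(ranked_scores_iter, alpha=0.05, K_max=100):
    """Consume hypothesis scores H(1) >= H(2) >= ... from an iterator and stop
    once H(n) - H(n+1) < alpha * H(n), per the criterion quoted from [1]."""
    scores = []
    for H in ranked_scores_iter:
        scores.append(H)
        if len(scores) >= 2 and scores[-2] - scores[-1] < alpha * scores[-2]:
            break                    # consecutive scores are close enough: stop
        if len(scores) >= K_max:     # safety cap, an assumption of this sketch
            break
    return scores
```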
