Multitarget Tracking Metrics for SIAP Systems

W. Dale Blair
Georgia Institute of Technology
Atlanta, Georgia 30332-0857
[email protected]

Abstract - Effective modeling and simulation as well as system-level performance metrics are critical to the successful development and deployment of multiple target tracking algorithms. However, the performance assessment methodology and metrics are often ad hoc. Algorithm developers and engineers must view modeling and simulation and performance assessment as part of the challenge of developing algorithms that perform effectively when deployed in the field. The performance assessment function should be distinct and separate from the real-world models and the algorithms, and the interface between that function and the real-world models and the algorithms should be controlled. Single Integrated Air Picture (SIAP) track metrics are discussed for performance assessment of multiplatform-multisensor-multitarget tracking systems.

Keywords: Tracking, Multiple Target Tracking, SIAP, Target

1 Introduction
Since the error sources associated with tracking maneuvering targets and with measurement-to-track association for multiple targets are extremely complex, analytical calculation of performance is impossible for nearly all practical problems. Thus, one must resort to Monte Carlo simulations to assess the performance of multiple target tracking algorithms. Therefore, effective modeling and simulation and system-level performance metrics are critical to the successful development and deployment of multiple target tracking algorithms. However, many legacy simulations neglect the effects of sensor resolution and measurement-to-track association. In many cases, the modeling and simulation is performed by the algorithm developer, and it is effectively an “inverse” of the solution. Furthermore, the performance assessment methodology and metrics are often ad hoc. Effective modeling and simulation requires an understanding of the problem and of the impact of the environment and hardware on the tracking algorithms. Algorithm developers and engineers must view modeling and simulation and performance assessment as part of the challenge of developing algorithms that perform effectively when deployed in the field. Along these lines, modeling and simulation for tracking algorithms can be decomposed into the “real-world,”
the test article algorithms, and the performance metrics, as illustrated in Figure 1. The architecture of the modeling and simulation program is very important. One of the most important features of a successful modeling and simulation program is a clear distinction between the “real-world” models and the signal processing and tracking algorithms. The models in the real-world section should be separate and different from those included in the algorithms section. Information and the associated parameters that are not known to the tracking algorithms should be represented in the real-world section, and access to that information should be controlled by a restricted interface; any real-world information needed by the algorithms should be reconstructed or estimated from the data available through that restricted interface. The interface between the signal processing and the tracking should be restricted to that anticipated in the actual sensor system. The communications from the sensor trackers to the network-level trackers should be restricted to reflect those anticipated in the actual network. The performance assessment function should also be distinct and separate from the real-world models and the algorithms, and the interface between that function and the real-world models and the algorithms should be controlled. A common mistake in this area is the scoring of trackers immediately after a measurement update of the track filters. Scoring trackers immediately after a measurement update gives a false expectation of performance. The scoring times and other scoring parameters should be based on the system performance metrics and set separately from the tracking algorithms, as in [1].
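To make the last point concrete, the following is a minimal sketch of decoupling scoring from the tracker: the performance assessment function owns the scoring schedule and predicts each reported track state to the scheduled scoring time with a simple constant-velocity model. The function name, state layout, and scoring grid here are illustrative assumptions, not part of the benchmark in [1].

```python
import numpy as np

# Hypothetical scoring schedule owned by the performance assessment function,
# set independently of when the track filters happen to receive measurements.
SCORING_TIMES = np.arange(0.0, 300.0, 5.0)  # illustrative 5 s scoring grid

def predict_to_scoring_time(x, P, t_update, t_score):
    """Propagate a constant-velocity track state [px, py, vx, vy] and its
    covariance from the last filter update time to the scheduled scoring
    time, so tracks are not scored immediately after a measurement update."""
    dt = t_score - t_update
    F = np.array([[1.0, 0.0, dt, 0.0],
                  [0.0, 1.0, 0.0, dt],
                  [0.0, 0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0]])
    return F @ x, F @ P @ F.T
```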
2 SIAP Track Metrics
A performance assessment process can be viewed as a four-step process, as illustrated in Figure 2 [2]. First, the estimates and truth are collected at specified scoring times. For the track metrics, the truth includes the kinematic states (i.e., position and velocity) of all objects at the specified times, and the estimates include the track state estimates and covariances for all reported tracks. Second, the tracks are assigned uniquely to the truth. This is accomplished by the unique assignment of the track state estimates to the kinematic truth using a kinematic distance measure or a measurement-based purity measure. Track-to-truth assignments are performed at the scoring times and used to determine assigned, spurious, and redundant tracks.
A cost matrix is generated detailing the cost of assigning each track to each currently active truth object, or alternatively the cost of not assigning a track to any of the truth objects (the guard value). If any cost exceeds a gate value, that track-to-truth assignment is not allowed. Typically, the cost matrix represents a Euclidean or chi-square distance. The assignments are usually made using one of the following
algorithms: modified Jonker-Volgenant assignment, one-sided Bertsekas auction, or greedy nearest neighbor. Third, using the track-to-truth assignment of the second step, the performance metrics are computed for each truth object with an assigned track. Fourth, the summary statistics are then computed.
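As an illustration of the second step, the sketch below (hypothetical helper names, illustrative gate and guard values) builds a chi-square cost matrix from track position estimates and their covariances and makes a unique greedy nearest-neighbor assignment; a modified Jonker-Volgenant or auction algorithm would operate on the same gated cost matrix.

```python
import numpy as np

def chi_square_cost(track_pos, track_pos_cov, truth_pos):
    """Squared Mahalanobis distance between a track position estimate and a
    truth position, using the track's reported position covariance."""
    d = np.asarray(track_pos) - np.asarray(truth_pos)
    return float(d @ np.linalg.solve(track_pos_cov, d))

def assign_tracks_to_truth(tracks, truths, gate=16.0, guard=9.0):
    """Build the track-to-truth cost matrix and assign greedily (nearest
    neighbor).  Pairs whose cost exceeds the gate are disallowed; pairs whose
    cost exceeds the guard are left unassigned (non-assignment is cheaper).
    `tracks` is a list of (position, position_covariance); `truths` is a list
    of positions.  Returns a dict mapping track index -> truth index."""
    cost = np.full((len(tracks), len(truths)), np.inf)
    for i, (pos, cov) in enumerate(tracks):
        for j, truth_pos in enumerate(truths):
            c = chi_square_cost(pos, cov, truth_pos)
            if c <= gate:
                cost[i, j] = c
    assignments, used_tracks, used_truths = {}, set(), set()
    for i, j in sorted(np.ndindex(*cost.shape), key=lambda ij: cost[ij]):
        if cost[i, j] > guard:
            break  # all remaining pairs cost more than non-assignment
        if i not in used_tracks and j not in used_truths:
            assignments[i] = j
            used_tracks.add(i)
            used_truths.add(j)
    return assignments
```

Tracks left unassigned by this step are the candidates for the spurious and redundant classifications discussed below.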
Figure 1. Modeling and Simulation Architecture for Multiplatform Multiple Target Tracking
Figure 2. Four-Step Performance Assessment Process

Categories of the SIAP track metrics are given in Figure 3. The metrics are the means by which the performance of the individual sensors and of the networked sensors as a whole may be evaluated in different environments and scenarios. The metrics are computed during run-time execution, and most are stored as simple “roll-ups” that sum the values across runs and are typically averaged. However, some metrics (for example, track initiation times) are stored on a run-by-run basis. Figure 4 gives a list of metrics that are computed and stored during run-time execution.
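A hedged sketch of the roll-up storage just described: most metrics are accumulated as running sums and counts that are averaged after all Monte Carlo runs, while a few are kept run by run. The class and method names below are illustrative assumptions.

```python
from collections import defaultdict

class MetricRollUp:
    """Run-time metric storage: simple roll-ups (sums and counts averaged
    across Monte Carlo runs) plus metrics kept on a run-by-run basis."""

    def __init__(self):
        self.sums = defaultdict(float)
        self.counts = defaultdict(int)
        self.per_run = defaultdict(list)

    def accumulate(self, name, value):
        """Roll-up storage: only the running sum and count are retained."""
        self.sums[name] += value
        self.counts[name] += 1

    def record_per_run(self, name, value):
        """Run-by-run storage, e.g., track initiation times."""
        self.per_run[name].append(value)

    def average(self, name):
        return self.sums[name] / self.counts[name]
```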
The tracker metrics are the “generic” tracking metrics that gauge the ability of the sensors and/or hosts to manage tracks on objects in the scenario. The objects to be tracked consist of the assignable objects, which by default are every object in the scenario that may be tracked, and a subset of those objects called the scorable objects. The tracks are classified into four main types: assigned, scored, spurious, and redundant. The classification occurs through the assignment algorithm applied to the track-to-truth cost matrix. Position and velocity errors (mean, standard deviation, root-mean-square) and the covariance consistencies (Mahalanobis distances) are calculated for the tracks as well; a sketch of these computations is given after Figure 3. Finally, sensors that share a network message may be analyzed with the group-wide metrics.

• Accuracy
  – Position
  – Velocity
  – Covariance
• Completeness
• Ambiguity
  – Redundant
  – Spurious
• Continuity
  – Breaks
  – Switches
• Timeliness
• Cross-Platform Commonality
  – State Vector Differences
  – Non-common Track Numbers

Figure 3. Categories of SIAP Track Metrics
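The sketch below (hypothetical function, assuming the first three state elements are position) illustrates the accuracy and covariance consistency computations for one truth object and its assigned track across the scoring times: mean, standard deviation, and RMS of the position error, and the normalized (squared Mahalanobis) state error used to judge covariance consistency.

```python
import numpy as np

def score_assigned_track(est_states, est_covs, truth_states):
    """Accuracy and covariance consistency for one truth object and its
    assigned track over N scoring times.  est_states and truth_states are
    N x d arrays (first three elements assumed to be position); est_covs is
    an N-long sequence of d x d covariances."""
    errors = np.asarray(est_states) - np.asarray(truth_states)
    pos_err = np.linalg.norm(errors[:, :3], axis=1)
    # Squared Mahalanobis distance of the full state error; for a consistent
    # filter its average is near the state dimension d.
    nees = np.array([e @ np.linalg.solve(P, e) for e, P in zip(errors, est_covs)])
    return {
        "position_mean": pos_err.mean(),
        "position_sigma": pos_err.std(),
        "position_rms": np.sqrt(np.mean(pos_err ** 2)),
        "consistency_mean": nees.mean(),
    }
```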
• Assignable Objects
• Scorable Objects
• Assigned Track Count
• Assigned Track Completeness Ratio
• Scored Track Count
• Scored Track Completeness Ratio
• Spurious Track Count
• Spurious Track Ratio
• Redundant Track Count
• Redundant Track Ratio
• Total Track Count
• Track Initiation Time
• Longest Duration Track
• Cumulative Track Switches
• Cumulative Track Breaks
• Position/Velocity Errors (Mean, Sigma, RMS)
• Covariance Consistency (Position, Velocity, Combined)
• Group-Wide Non-Common Track Numbers
• Group-Wide Track Position Estimate Differences
• Group-Wide Track Velocity Estimate Differences
Figure 4. List of Tracker Metrics
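As a hedged illustration of how some of the Figure 4 counts combine, the ratios below use plausible definitions (assigned or scored tracks per trackable object, spurious or redundant tracks per reported track); the exact SIAP definitions may differ, and the function is an assumption rather than the standard's formula.

```python
def track_count_ratios(assigned, scored, spurious, redundant,
                       assignable_objects, scorable_objects):
    """Illustrative completeness and ambiguity ratios built from the counts
    listed in Figure 4; these definitions are assumptions, not the SIAP
    standard's exact formulas."""
    total_tracks = assigned + spurious + redundant
    return {
        "assigned_completeness_ratio": assigned / assignable_objects if assignable_objects else 0.0,
        "scored_completeness_ratio": scored / scorable_objects if scorable_objects else 0.0,
        "spurious_track_ratio": spurious / total_tracks if total_tracks else 0.0,
        "redundant_track_ratio": redundant / total_tracks if total_tracks else 0.0,
    }
```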
References

[1] W. D. Blair, G. A. Watson, T. Kirubarajan, and Y. Bar-Shalom, “Benchmark for Radar Resource Allocation and Tracking Targets in the Presence of ECM,” IEEE Trans. Aerospace and Electronic Systems, Oct. 1998, pp. 1097-1114.

[2] O. E. Drummond, Personal Communication, March 2008.