On the Performance Evaluation of Tracking Systems using Multiple Pan-Tilt-Zoom Cameras

X. Desurmont1, J-B. Hayet2, C. Machy1, J-F. Delaigle1 and B. Macq3

1 Multitel ASBL, Parc Initialis – Rue Pierre et Marie Curie 2, B-7000 Mons, Belgium.
2 Institut Montefiore, University of Liège, Building B28, B-4000 Liège, Belgium.
3 Communications and Remote Sensing Laboratory, UCL, Louvain-la-Neuve, Belgium.
ABSTRACT
Object tracking from multiple Pan-Tilt-Zoom (PTZ) cameras is an important task. This paper deals with the evaluation of the results of such a system. The evaluation is conducted by first considering the characterization of the PTZ parameters and then the trajectories themselves. The camera parameters will be evaluated through homography errors; the trajectories will be evaluated according to location and mis-identification errors.
Keywords: performance evaluation, tracking, multi-view, PTZ cameras, metrics.
1. INTRODUCTION
Accurate object tracking in multi-view video sequences from Pan-Tilt-Zoom (PTZ) cameras is an important task in many applications such as video surveillance [3], traffic monitoring, marketing and sport analysis. It basically combines the multi-view matching and fusion of video images, using a 3D model of the scene, with the positioning of mobile objects within this scene using tracking algorithms. Testing video-analysis algorithms is very important in the academic and industrial communities in order to analyze and enhance these technologies. To enable objective performance evaluation, multiple steps are required. Firstly, video sequences must be available. Secondly, the Video Content Analysis (VCA) technique to be evaluated must generate results from the test sequences. Thirdly, ground truth needs to be available. Then, the ground truth needs to be compared with the generated results, which requires an unambiguous definition of the metrics. Finally, the evaluation results for each video sequence are combined and presented to the user. The Performance Evaluation of Tracking and Surveillance (PETS) workshop has dealt with these issues since 2000. It proposes datasets for surveillance, sports, smart meetings, etc. However, until recently there were no datasets for PTZ cameras with full camera ground truth available. Such a dataset has been presented in [1] from the TRICTRAC1 project. It supplies the video processing community with synthetic, high-definition video content from PTZ cameras, together with 3D ground truth including the parameters of the cameras and of the mobile objects. However, the corresponding performance metrics have not yet been presented. In this paper, we focus on the performance evaluation methodologies and facilities designed for the TRICTRAC project to test the ability to recover the trajectories of each player
1 http://www.multitel.be/trictrac
of a soccer game from the video streams of all the fixed or PTZ cameras. In section 2, we present the ground truth data, i.e. the result that the video analysis algorithms should recover. Metrics are designed to highlight the typical errors of vision algorithms and operate on different, independent levels. In section 3, we present the first level: the homography undergone by the image when the PTZ camera parameters change. The next part is devoted to the second level: the evaluation of the 2D trajectories of mobile objects in each video stream, and then directly the positions of the tracked objects in the 3D space of the world (the stadium). Finally, we show the results of an implementation of this performance evaluation through an example, and conclude about the real interest of our proposal.
2. DATASET AND GROUND TRUTH
The framework for evaluating VCA is described in figure 1. The first requirement for the evaluation of a VCA algorithm is to have video data. Indeed, to enable proper benchmarking against other algorithms, it makes sense to evaluate algorithms on standard video data [2]. For the TRICTRAC project, public sequences were available, such as those from the IST Inmove2 project (see figure 2), but they were shot with static cameras. Real PTZ sequences were found, but for legal reasons they cannot be made public (and thus are not "standard"); moreover, their ground truth was not available. For these two reasons, synthetic videos were produced.
Figure 1. Performance evaluation framework.
Figure 2. Inmove PETS sequences.
The basic scenario of these sequences is fully described in previous work [1]; basically, there were 9 cameras (see figure 3) around a stadium, focused on a soccer game (see figure 4).
2.1. Ground truth
The ground truth of the dataset was created automatically during rendering. It describes only the mobile objects. It consists of the parameters of the active cameras and of the positions, in 3D and in the screen view (2D), of the players. A typical frame description is shown in figure 5. The world coordinates are expressed in meters (the centre of the stadium is
2 Project IST INMOVE (IST-2001-37422): http://www.inmove.org
(0,0,0), the ground plane is Y=0). The time is expressed as yyyy/mm/dd@hh:mm:ss.microseconds, and there is a bijection between time and frame number (every 40 milliseconds there is a new frame, i.e. the frame rate is 25 fps).
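This time-frame bijection can be sketched as follows (the sequence origin used here is a hypothetical value, not taken from the dataset):

```python
from datetime import datetime, timedelta

FRAME_PERIOD_MS = 40  # 25 fps: one frame every 40 milliseconds

def frame_to_time(start, frame):
    """Timestamp of a frame number, given the (hypothetical) sequence origin."""
    return start + timedelta(milliseconds=FRAME_PERIOD_MS * frame)

def time_to_frame(start, t):
    """Inverse mapping: the bijection makes this exact at 25 fps."""
    return round((t - start) / timedelta(milliseconds=FRAME_PERIOD_MS))

start = datetime(2005, 7, 8, 12, 0, 0)  # hypothetical origin, not from the dataset
print(frame_to_time(start, 250))        # 250 frames = 10 s after the origin
print(time_to_frame(start, datetime(2005, 7, 8, 12, 0, 10)))  # 250
```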
Figure 3. Location of the cameras.
Figure 4. Synthetic image from the camera 5.
camera 4
  5.69682,0,0,0  0,7.59575,0,0  0,0,-0.99999,-1.99999  0,0,1,0
  0.390245,-2.90758e-09,0.920711,28.7194  0.340928,0.928917,-0.144503,12.2661
  0.855264,0.370288,0.362506,50.2468  0,0,0,1
tracking 0
person 19.2547,0,-39.8366  0.0416068,0.00513261
…/…
Figure 5. XML format of the ground truth of the TRICTRAC dataset.
The position of each object (in 2D and in 3D) is on the ground plane (Y=0). The following equation (1) gives the relation between the 3D positions and the 2D positions of the players: in homogeneous coordinates, a ground-plane point p_{3D} = (X, Z, 1)^T projects to the image point p_{2D} = (u, v, 1)^T through the homography H:

p_{2D} \sim H \, p_{3D}    (1)
The ground truth defines the result that the tested system should output. To summarize, this result is composed of the parameters of the cameras and the positions of the objects.
3. EVALUATION METRICS
The metrics operate on different levels. The first level is the homography undergone by the image when the PTZ camera parameters change. The second level is the evaluation of the 2D trajectories of the mobile objects in each video stream. The last level directly evaluates the positions of the tracked objects in the 3D space of the world (the stadium).
3.1 Level 1: Homography
As presented in the ground truth section, the first-level parameters are the view matrix and the projection matrix; errors are seen as differences from the ground truth. Usually, the algorithms we want to evaluate output the homography matrix (the product of the view matrix and the projection matrix). The homography is defined in 2D space as a mapping between a point on the ground plane as seen from one camera and the same point on the ground plane as seen from a second camera. It can thus be seen as a method for determining object positions in an image or video with the correct pose. Figure 6 shows typical problems that occur in homography determination.
Figure 6. Typical configuration errors: a) translation in the plane, b) rotation, c) translation in 3D space.
A first approach for evaluating the homography determination would be to compare the parameters of the ground truth with the results. One should consider which parameters are relevant: the homography itself, or its decomposition into the view and projection matrices, in order to describe errors in a "human friendly" representation (e.g. the angle error between the ground truth camera and the estimated camera). Several methods [3, 4] can perform this decomposition, such as Singular Value Decomposition (SVD). However, metrics directly derived from these parameters are not very representative of the performance of the system: for the same homography error, the distance of points from the real ground plane to the estimated one can vary a lot. E.g. in figure 7, the (3D Euclidean) distance error of corresponding points is low in (a) and, on the contrary, high in (b) for the same rotation error.
Figure 7. Different "distance errors" resulting from the same rotation error with different centers.
The conclusion is that, for the evaluation of the homography, one should focus on the "working region", and especially on the distance error in positioning the points of the working region in 3D space (all positions are at Y=0). In our case, the "working region" is the set of points of the stadium ground plane that are visible in the current screen image. Thus we propose the metric H_error to characterize the error of the homography determination (see (2)).
H_{error} = \frac{\sum_{GT \in \mathrm{working\,region}} d\left(H_{GT}^{-1} \cdot GT,\; H_{RS}^{-1} \cdot GT\right)}{\sum_{GT \in \mathrm{working\,region}} 1}    (2)
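Equation (2) can be sketched in code as follows; this is a minimal illustration that assumes the working region has already been sampled as ground-plane points in homogeneous coordinates, and that the inverse homographies are given as 3x3 nested lists:

```python
import math

def apply_h(h, p):
    """Apply a 3x3 homography (nested lists) to a homogeneous point (x, y, w)."""
    return [sum(h[i][j] * p[j] for j in range(3)) for i in range(3)]

def d(a, b):
    """The distance of equation (2): Euclidean after dehomogenisation."""
    return math.hypot(a[0] / a[2] - b[0] / b[2], a[1] / a[2] - b[1] / b[2])

def h_error(h_gt_inv, h_rs_inv, working_region):
    """Mean displacement of the working-region points mapped through the
    inverse ground-truth and result homographies, as in equation (2)."""
    errors = [d(apply_h(h_gt_inv, p), apply_h(h_rs_inv, p)) for p in working_region]
    return sum(errors) / len(errors)

# Hypothetical example: the estimated homography shifts everything by 0.5 m in x
H_id = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
H_shift = [[1.0, 0.0, 0.5], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
grid = [[float(x), float(y), 1.0] for x in range(5) for y in range(5)]
print(h_error(H_id, H_shift, grid))  # every point is displaced by 0.5, mean 0.5
```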
with d(a, b) = \sqrt{(a_x / a_z - b_x / b_z)^2 + (a_y / a_z - b_y / b_z)^2}.
Some homography estimation algorithms [5] output a covariance matrix of the uncertainty of H (a 9x9 matrix). In this case there is an estimate of the covariance of (x, y) on the ground plane. Thus we can compare the local displacement error d(H_{GT}^{-1} \cdot GT, H_{RS}^{-1} \cdot GT) with this variance, and conclude about the capacity of the system to estimate its own homography error.
3.2 Level 2: Tracking
The output of a tracking system is the set of trajectories of the objects in the scene. As described by Smith et al. [6], a good tracker has key properties such as:
• tracking objects well: placing the correct number of trackers at the correct locations in each frame;
• identifying objects well: tracking individual objects consistently over a long period of time.
Typical errors thus concern locations and identities. When the tracking system mismatches two objects because of an inversion, this is seen as an identification error. On the other hand, when the position given by the system differs slightly from the ground truth, it is considered a location error. Figure 8 shows the two possible errors. Sometimes it is not obvious whether to classify an error as a mis-identification or as a location drift; it really depends on the interpretation.
Figure 8. a) Location error between the ground truth (continuous line) and the result (dashed line); b) identity error between the ground truth (continuous lines) and the result (dashed lines).
In order to be robust, a metric should give the score of the best objective interpretation of the result with respect to the ground truth, and it should clearly distinguish location errors from identity errors.
3.2.1 Related work on track matching
As discussed in the previous sub-section, the taxonomy of tracking errors is mainly composed of location and identity mistakes. In the simple case where identity errors within a track are impossible (e.g. in a medical laboratory, when one wants to track mice by painting color markers on their backs), there is still a global matching between ground truth tracks and result tracks to perform. The idea is to compute a distance matrix between all ground truth tracks and all result tracks, and then to find the set of matches between them that minimizes the sum of distances. This general problem is known as the Linear Assignment Problem (LAP); an optimized algorithm to solve it has been presented in [7] and is known as the "Hungarian algorithm". In the general case (with the possibility of identity mistakes), to evaluate tracking performance one should match elements of the ground truth with elements of the results. Brown et al. [8] propose a tracking evaluation framework to determine a match between result tracks and ground truth tracks; however, this match is not necessarily the best one. Smith et al. [6] propose an improvement of [8]. Needham et al. [9] propose many matching methods to compare trajectories that may differ by spatial or temporal bias; they also bring up the possibility of using dynamic programming to align trajectories.
3.2.2 Proposed method
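The global track matching of the previous subsection, and the initialization of the method described below, both reduce to a linear assignment. The following brute-force sketch stands in for the Hungarian / Jonker-Volgenant algorithm of [7] and is only practical for a handful of tracks; the distance matrix is a hypothetical example:

```python
from itertools import permutations

def best_track_matching(dist):
    """dist[i][j]: distance between ground-truth track i and result track j.
    Return the (assignment, total_cost) that minimizes the summed distance.
    Brute force over all permutations; a real LAP solver scales much better."""
    n = len(dist)
    best = None
    for perm in permutations(range(n)):
        cost = sum(dist[i][perm[i]] for i in range(n))
        if best is None or cost < best[1]:
            best = (perm, cost)
    return best

# Hypothetical distance matrix for 3 ground-truth and 3 result tracks
dist = [[1.0, 9.0, 5.0],
        [8.0, 2.0, 7.0],
        [6.0, 4.0, 0.5]]
print(best_track_matching(dist))  # ((0, 1, 2), 3.5): the diagonal matching
```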
The basic idea of the proposed method is to find the best interpretation of the results according to the ground truth. We first present the set of all possible interpretations, and then an optimized method to build the best one without testing all the possibilities (which would be computationally too expensive). We limit the taxonomy of errors to mis-identification and location errors. As described in figure 8 b), tracking algorithms can mis-identify a target at any time (usually, most mis-identifications start when several targets are close together). Thus the possible interpretations consist of supposing a mis-identification in every possible segment of the trajectory; e.g. in a scene with n objects in the result and m in the ground truth, over o images, there are p possibilities of mis-identification:

p = \frac{\max(n, m)!}{(n - m)!}    (3)

We define the cost of a location error as the distance (e.g. Euclidean) between the two points, and the cost of a mis-identification as 4 (this number should be chosen according to the user's requirement of what is considered the maximum acceptable distance for a location error). For each possibility we can compute a global score of the performance of the system under this interpretation: the sum of the distances between ground truth and result points, plus the sum of the costs of the mis-identifications. We then keep only the best interpretation. With this method we obtain, in essence, an explanation of the errors that arise during the tracking; it is quite similar to the ideas of [10, 11].
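The global score of one interpretation (summed location distances plus a fixed cost per mis-identification) can be sketched as follows; the per-frame assignment encoding and the positions are hypothetical simplifications:

```python
import math

MISID_COST = 4.0  # cost of one mis-identification, as chosen in the text

def interpretation_score(gt, res, assignment):
    """Global score of one interpretation.  gt[t][i] and res[t][j] are (x, y)
    positions at key frame t; assignment[t][i] is the result index matched to
    ground-truth identity i.  Score = summed location distances, plus
    MISID_COST for every key frame where the assignment changes."""
    location = sum(
        math.dist(gt[t][i], res[t][assignment[t][i]])
        for t in range(len(gt))
        for i in range(len(gt[t]))
    )
    switches = sum(
        1 for t in range(1, len(assignment)) if assignment[t] != assignment[t - 1]
    )
    return location + MISID_COST * switches

# Two objects over two key frames (hypothetical positions)
gt = [[(0.0, 0.0), (10.0, 0.0)], [(0.0, 1.0), (10.0, 1.0)]]
res = [[(0.0, 0.0), (10.0, 0.0)], [(0.0, 1.0), (10.0, 1.0)]]
print(interpretation_score(gt, res, [(0, 1), (0, 1)]))  # 0.0: perfect match
print(interpretation_score(gt, res, [(0, 1), (1, 0)]))  # 24.0: 20 location + one swap
```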
For optimization, we propose to limit the set of tested possibilities by building a first interpretation and then enhancing it recursively in order to minimize the global cost. We propose to use Dynamic Programming (DP) to find the lowest-cost interpretation (see figure 10).
The first interpretation is initialized by minimizing the distance cost between points, without taking the cost of mis-identification into account (see figure 9). We then propagate the possibilities of removing these mis-identifications, and finally keep the best possibility. The next subsection shows with an example that the number of possibilities is drastically lowered by this method (it is proportional to the number of ambiguities in the trajectories, which is lower than (3)).
3.2.3 Example
In this section, we show an example of the use of the proposed method. Figure 9 describes, in a space-time graph, the movement of 3 objects. The ground truth is displayed as plain grey shapes. Around time t3, objects 2 and 3 cross; around time t5, objects 1 and 3 cross. The result of the tracking system is displayed on the same graph with black contour shapes. The usual interpretation made by a user is that the tracking system fails at time t5 because of a mis-identification. We are going to recover this with the proposed method.
Figure 9. Graph view of the ground truth and results of tracking of 3 objects.
1) First interpretation: the summed association costs per key frame are
Cost_t1 = 0.9; Cost_t2 = 1.1; Cost_t3 = 1.1; Cost_t4 = 1.1; Cost_t5 = 1.2; Cost_t6 = 1.2; Cost_total = 6.6
The cost of mis-identification: in the "matrix view", a mis-identification shows up as a swap between vectors from one key frame to the next; here it appears at (t2,t3), (t3,t4) and (t4,t5). Thus 3 x 4 = 12.
Interpretation summary: this interpretation costs 12 + 6.6 = 18.6 and stands for "the system started a mis-identification at time t3, finished it at time t4, and started another mis-identification at time t5".
2) "Removed mis-identification" possibilities: there are basically three mis-identifications to try to remove: (t2,t3), (t3,t4) and (t4,t5).
Removing (t2,t3): the data association matrices minimizing the global cost of association are
t1 = t2 = t3 = t4 = (1 0 0; 0 1 0; 0 0 1);  t5 = t6 = (0 0 1; 0 1 0; 1 0 0).
The costs of the association matrices:
Cost_t1 = 0.3+0.4+0.2; Cost_t2 = 0.4+0.4+0.3; Cost_t3 = 0.3+0.9+0.8; Cost_t4 = 0.4+0.3+0.4; Cost_t5 = 0.3+0.5+0.4; Cost_t6 = 0.3+0.5+0.4
Cost_t1 = 0.9; Cost_t2 = 1.1; Cost_t3 = 1.8; Cost_t4 = 1.1; Cost_t5 = 1.2; Cost_t6 = 1.2; Cost_total = 7.3
The cost of mis-identification: it appears at (t4,t5); thus 1 x 4 = 4.
Interpretation summary: this interpretation costs 4 + 7.3 = 11.3 and stands for "the system started a mis-identification at time t5". Note that removing (t2,t3) automatically removes (t3,t4).
Removing (t4,t5): the data association matrices minimizing the global cost of association are
t1 = t2 = t3 = t4 = t5 = (1 0 0; 0 1 0; 0 0 1);  t6 = (0 0 1; 0 1 0; 1 0 0).
The costs of the association matrices:
Cost_t1 = 0.3+0.4+0.2; Cost_t2 = 0.4+0.4+0.3; Cost_t3 = 0.3+0.9+0.8; Cost_t4 = 0.4+0.3+0.4; Cost_t5 = 0.7+0.5+0.6; Cost_t6 = 0.3+0.5+0.4
Cost_t1 = 0.9; Cost_t2 = 1.1; Cost_t3 = 1.8; Cost_t4 = 1.1; Cost_t5 = 1.8; Cost_t6 = 1.2; Cost_total = 7.9
The cost of mis-identification: it appears at (t5,t6); thus 1 x 4 = 4.
Interpretation summary: this interpretation costs 4 + 7.9 = 11.9 and stands for "the system started a mis-identification at time t6".
Removing (t5,t6): the data association matrices minimizing the global cost of association are
t1 = t2 = t3 = t4 = t5 = t6 = (1 0 0; 0 1 0; 0 0 1).
The costs of the association matrices:
Cost_t1 = 0.3+0.4+0.2; Cost_t2 = 0.4+0.4+0.3; Cost_t3 = 0.3+0.9+0.8; Cost_t4 = 0.4+0.3+0.4; Cost_t5 = 0.7+0.5+0.6; Cost_t6 = 6.1+0.5+6.3
The cost of mis-identification: here none appears.
Interpretation summary: this interpretation costs 19.8 and stands for "the system has not made any mis-identification". The best interpretation is therefore the second one, with cost 11.3: a single mis-identification starting at time t5, which matches the usual interpretation of figure 9.
The same result can be found directly on a graph (figure 10), where the horizontal axis stands for the key frames in chronological order and the vertical axis for the possible local association interpretations at each key frame. The lower row stands for the first interpretation 1) and the other rows for the other possible interpretations. On the right of each matrix, the minimum global cost to arrive at this matrix along the best path from the first key frame is shown (this is the basic requirement of the DP optimization). At the last column, the end of the best path is highlighted as having the lowest cost; the DP algorithm then backtracks the best interpretation (in the figure, the cost of the best path is in bold).
Figure 10. Representation of the paths for the 4 interpretations studied; with DP it is possible to compute them in an optimized way.
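The DP search of figure 10 can be sketched as a Viterbi-style recursion: the state at each key frame is the candidate assignment (a permutation), switching between assignments pays the mis-identification penalty, and each state pays its local association cost. The per-frame costs below are hypothetical:

```python
SWAP_COST = 4.0  # mis-identification cost, as in the text

def best_interpretation(frame_costs):
    """frame_costs[t][perm]: association cost of assignment `perm` (a tuple)
    at key frame t.  Viterbi-style DP, paying SWAP_COST whenever the
    assignment changes between consecutive key frames; returns the
    (total cost, assignment sequence) of the cheapest interpretation."""
    perms = list(frame_costs[0].keys())
    # best[p] = (cost of the cheapest path ending in assignment p, that path)
    best = {p: (frame_costs[0][p], [p]) for p in perms}
    for t in range(1, len(frame_costs)):
        new_best = {}
        for p in perms:
            # cheapest predecessor, with a swap penalty when it differs from p
            prev_cost, q = min(
                (best[q][0] + (SWAP_COST if q != p else 0.0), q) for q in perms
            )
            new_best[p] = (prev_cost + frame_costs[t][p], best[q][1] + [p])
        best = new_best
    return min(best.values())

# Hypothetical two-object example: the identity mapping is cheap at t1 but
# expensive afterwards, so the best path swaps once (paying SWAP_COST).
ident, swap = (0, 1), (1, 0)
frame_costs = [{ident: 0.1, swap: 5.0},
               {ident: 4.9, swap: 0.1},
               {ident: 5.0, swap: 0.1}]
cost, path = best_interpretation(frame_costs)
print(round(cost, 6), path)  # 4.3 [(0, 1), (1, 0), (1, 0)]
```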
3.3 Level 3: Fusion of cameras and 3D tracking
The last level of evaluation combines the fusion of the cameras and extends the positions of the mobile objects to 3D. This fusion is generally done inside the "3D tracking algorithm". For example, the implementation of the TRICTRAC tracking system is shown in figure 11. It is distributed: all video processing algorithms associated with one video stream are embedded in one particular machine, and the outputs from the different cameras are collected on a central machine. More specifically, on each camera, Generic Manifold (GM) local trackers handle the 2D tracking of targets in the image. At the same time, a rectification module estimates the homography matrix that maps 2D image coordinates to 2D ground coordinates. This output, for each tracker and each camera, is sent to the main PC. In this central PC, the data are first associated with targets and then filtered by classical Kalman techniques to produce the filtered trajectories Pt.
Figure 11. Framework of the TRICTRAC 3D tracker.
Basically, the proposed metric is the same as for 2D tracking but extended to 3D: the distance between objects is computed in the 3D world.
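A minimal sketch of the level-3 location cost, assuming object positions are given directly in world coordinates:

```python
import math

def location_cost_3d(gt_pos, res_pos):
    """Level-3 location cost: Euclidean distance between the ground-truth and
    the estimated object positions, in world coordinates (meters)."""
    return math.dist(gt_pos, res_pos)

# e.g. an estimate 3 m off along X and 4 m off along Z is 5 m away
print(location_cost_3d((0.0, 0.0, 0.0), (3.0, 0.0, 4.0)))  # 5.0
```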
4. CONCLUSION AND FUTURE WORK
In this paper we have presented the performance evaluation of a tracking system using multiple PTZ cameras. This evaluation is conducted by first considering the characterization of the PTZ parameters and then the trajectories themselves. We have shown that searching for the best interpretation (by minimizing a global cost) gives a plausible explanation of the mis-identification errors that occurred during the tracking process, and we have shown an optimized way to implement this search. In future work, we will extend the set of tracking errors that are handled by adding non-detections and false detections of objects.
Acknowledgment: This work is supported by the Walloon Region within the scope of the TRICTRAC project.
5. REFERENCES
[1] X. Desurmont, J-B. Hayet, J-F. Delaigle, J. Piater, B. Macq, "TRICTRAC Video Dataset: Public HDTV Synthetic Soccer Video Sequences With Ground Truth", Workshop on Computer Vision Based Analysis in Sport Environments (CVBASE), 2006.
[2] X. Desurmont, R. Wijnhoven, E. Jaspert, O. Caignard, M. Barais, W. Favoreel, J-F. Delaigle, "Performance Evaluation of Real-Time Video Content Analysis Systems in the CANDELA Project", Conference on Real-Time Imaging IX, part of the IS&T/SPIE Symposium on Electronic Imaging, San Jose, CA, USA, 16-20 January 2005.
[3] R. Cucchiara, A. Prati, R. Vezzani, "Advanced Video Surveillance with Pan Tilt Zoom Cameras", Sixth IEEE International Workshop on Visual Surveillance, Graz, 2006.
[3] P. Sturm, S. Maybank, "On Plane-Based Camera Calibration: A General Algorithm, Singularities, Applications", IEEE Conference on Computer Vision and Pattern Recognition, 1999.
[4] P. Gurdjos, P. Sturm, "Methods and Geometry for Plane-Based Self-Calibration", Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 491-496, 2003.
[5] A. Criminisi, I. Reid, A. Zisserman, "A Plane Measuring Device", Proc. BMVC, UK, September 1997.
[6] K. Smith, D. Gatica-Perez, J.-M. Odobez, S. Ba, "Evaluating Multi-Object Tracking", Proc. IEEE Conf. on Computer Vision and Pattern Recognition, Workshop on Empirical Evaluation Methods in Computer Vision (CVPR-EEMCV), San Diego, June 2005.
[7] R. Jonker, A. Volgenant, "A Shortest Augmenting Path Algorithm for Dense and Sparse Linear Assignment Problems", Computing, vol. 38, no. 4, December 1987.
[8] L. M. Brown, A. W. Senior, Y. Tian, J. Connell, A. Hampapur, C. Shu, H. Merkl, M. Lu, "Performance Evaluation of Surveillance Systems Under Varying Conditions", Proceedings IEEE International Workshop on PETS, pp. 1-8, Breckenridge, CO, USA, January 2005.
[9] C. J. Needham, R. D. Boyle, "Performance Evaluation Metrics and Statistics for Positional Tracker Evaluation", Proc. Computer Vision Systems: Third International Conference (ICVS 2003), vol. 2626, pp. 278-289, Graz, Austria, April 2003.
[10] D. Lopresti, "Performance Evaluation for Text Processing of Noisy Inputs", Symposium on Applied Computing, pp. 759-763, Santa Fe, NM, USA, March 2005.
[11] X. Desurmont, R. Sebbe, F. Martin, C. Machy, J-F. Delaigle, "Performance Evaluation of Frequent Events Detection Systems", Ninth IEEE International Workshop on Performance Evaluation of Tracking and Surveillance, New York, June 2006.