Modeling Quality of Information in Multi-sensor Surveillance Systems

M. Anwar Hossain, Pradeep K. Atrey and Abdulmotaleb El Saddik
Multimedia Communications Research Laboratory, University of Ottawa
800 King Edward, Ottawa, Ontario, Canada
{anwar, patrey, abed}@mcrlab.uottawa.ca

1-4244-0832-6/07/$20.00 ©2007 IEEE.

Abstract

Current surveillance systems use multiple sensors and media processing techniques to record and detect information of interest in terms of events. Assessing the quality of information (QoI) of a surveillance system is an important task, as misleading information may lead to suspicion, undesired consequences, and unwanted invasion of privacy. In this paper, we propose a model to characterize QoI in multi-sensor surveillance systems in terms of four quality parameters: accuracy, certainty, timeliness and integrity. The proposed model is extendable to include other quality parameters if deemed necessary for different task-specific scenarios, such as the ambient intelligence environment, which aims to provide context-aware personalized services to the people living in that environment. To demonstrate the utility of the proposed method, we provide experimental results in a surveillance system designed for identifying authorized entry in the observation area.

1. Introduction

With the occurrences of crimes, security violations, and threats, multi-sensor surveillance systems are increasingly being deployed in public places, schools, buses, retail centers, and other premises. The aim of such systems is to record media streams and provide high-level information about the happenings in the environment. In the surveillance paradigm, the information of interest is expressed as events (or information items). An event describes a physical reality consisting of one or more living or non-living entities involved in one or more activities over a period of time, and at a particular location [2]. For example, a typical event could be, "Mr X shouted in the lobby today at 11.00 pm". From the perspective of the surveillance personnel (i.e., users), surveillance information should be of high quality. The Quality of Information, denoted QoI, is a measure of how valuable the information is to the user [1]. In the context of surveillance, QoI refers to the value of information that may help identify criminals, prevent crimes, provide leads, help citizens feel safe, deter vandalism, and so on. On the contrary, low-quality information may lead to false alarms, undesired consequences, or invasion of personal liberty, unlike the case of traditional information service providers such as search engines. Therefore, it is paramount to measure the quality, or goodness, of surveillance information.

The measurement of QoI in a surveillance system is a challenging task, as it depends on several factors. We identify these factors as accuracy, certainty, timeliness, and integrity. Each of these factors, in turn, involves different assumptions, analyses, and computations. Based on these factors, we propose a model to characterize QoI in a surveillance system. Our proposed model provides two alternative approaches to QoI computation: one computes the quality at the level of each information item by considering all the quality factors, and then aggregates the computed quality values to find the overall QoI; the other first computes each quality factor over all the information items and subsequently aggregates these factors to evaluate the overall QoI of the system. To the best of our knowledge, our approach is the first to consider QoI in a surveillance paradigm with an effort to model and formalize the concept.

Although our proposed model considers four quality factors as the basis of QoI computation in the surveillance domain, it may be extended to include other quality factors for other scenarios. For example, it may be applied to evaluate the quality of information in an ambient intelligence (AmI) environment, which promotes a paradigm where humans are surrounded by an intelligent multi-sensor electronic environment capable of recognizing and responding to their needs [7]. Such an environment must provide high-quality information anytime, anywhere, and on any portable device [8]. To this end, we envisage that our QoI model can be explored further to include contextual parameters of the AmI environment; we aim to explore this in the future.

The remainder of this paper is organized as follows. Section 2 comments on related literature. The problem formulation is presented in section 3, followed by a detailed description of our proposed model in section 4. The experimental results are elaborated in section 5. The conclusion, along with the scope of our future work, is presented in section 6.
2. Related Work

The focus of this paper is to model QoI as a comprehensive measurement for assessing the quality of information provided by a surveillance system. The measurement of such quality also falls in the general category of performance measurement. Therefore, we first review existing work related to the performance evaluation of surveillance systems. A significant amount of work has addressed performance evaluation issues in surveillance systems, directed mostly towards evaluating the performance of motion detection [17, 11], object detection [14, 12, 13], tracking [5, 4, 17], event detection [20], and so on. In the following, we comment on some of these works.

Motion detection often constitutes the core requirement of any visual surveillance system. Therefore, an increasing amount of research is geared towards evaluating the performance of motion detection algorithms. Some of these works are based on manually edited ground truth (GT), comparing the output of the algorithms with the GT. However, manually editing the GT is time-consuming and takes considerable effort. Tools such as ViPER [6] and ODViS [9] can help generate the GT through semi-automatic means. The work in [17] illustrates an automatic GT generation process that uses real image sequences and superimposes computer-generated humans into the sequence. The authors also propose several error metrics for a quantitative evaluation of motion detection algorithms, such as missing foreground pixels, added foreground pixels, and rate of misclassification. Research presented in [11] proposes an object-based motion detection approach instead of a pixel-based approach, and uses the F-measure method instead of the ROC [16] curve method for the optimization of evaluation parameters. The F-measure may be explained as the weighted harmonic mean of the Precision and Recall [3] metrics, often used in evaluating the performance of information retrieval. While these approaches provide different mechanisms to evaluate motion detection techniques, we are interested in a high-level understanding of the overall information provided by the system as a whole.

There are surveillance systems that primarily focus on the detection and recognition of objects, as well as the evaluation of their performance. The objects of interest may be any living or non-living entity, such as a person, a vehicle, or a tree. Research in [12] proposes seven metrics for evaluating the output of object detection algorithms against the ground truth: area-based recall for frame, area-based precision for frame, average fragmentation, average object area recall, average detected box area precision, localized object count recall, and localized output box count recall. For determining the ground truth in video data, they use the ViPER tool [6]. The work in [13] creates the correspondence between the GT and the detected objects by minimizing the distance between their centroids; the authors use the false positive track rate, the false negative track rate, the average position error, the average area error, and the object detection lag as the set of performance metrics. Another recent work [14] provides objective metrics for evaluating the performance of different object detection methods, considering several types of errors (such as detection failures, false alarms, splits, merges, and split/merges) when comparing the output of the video detector with the manually edited GT. For a surveillance scenario where object detection is the primary focus, these approaches may provide means to assess the quality of object detection among many alternatives. However, the information needs of different surveillance systems can vary; therefore, a more comprehensive approach to evaluating the overall quality is paramount.

Evaluation research targeting object tracking also faces the challenge of manual GT generation. The authors in [5] propose an alternative: using pseudo-synthetic video to evaluate tracking performance. They adopt the technique of inserting an increasing number of agents into the scene, thereby synthetically varying the perceptual complexity of the tracking task, and use several metrics, such as tracker detection rate, false alarm rate, track detection rate, and track fragmentation, for evaluating the results of object detection and tracking. The work in [4] uses correspondence metrics to map between GT objects and tracker result objects. The authors propose metrics for both frame-based and object-based evaluation. The metrics used for frame-based evaluation are true negative, true positive, false negative, and false positive; in object-based evaluation there is no true negative, so the tracker detection rate and the false alarm rate are calculated instead. Research in [17] uses metrics consisting of the hit rate, miss rate, and false attempts. Like the other evaluation techniques, these approaches concentrate on evaluating some aspect of surveillance activities and on providing a traditional performance evaluation methodology.

The research in [20] presents an evaluation method for selecting an appropriate event detection solution from different candidate solutions. The proposal emphasizes evaluating the detection of an event as a whole, not the individual constituent modules of an event (such as object tracking, background update strategy, etc.). The authors adopt a multi-step approach for evaluating event detection solutions, consisting of the precise definition and description of events in a particular scenario, the collection of realistic datasets containing occurrences and non-occurrences of events, and a manual definition of GT. The approach also includes a definition of a performance metric on the ground truth data for computing a score, testing the solutions on the selected realistic datasets, and an evaluation using the metric. The authors demonstrated their approach in a public transportation scenario (a subway network) and defined several event groups, such as warnings, alarms, and critical alarms, that included events such as proximity, dropping of an object, walking on rails, crossing the rails, and others. This work targets early evaluation of different event detection solutions before actual deployment. However, it does not specify criteria for evaluating the overall information quality of a surveillance system from the perspective of surveillance users.

There are domains of research other than surveillance that address information quality, such as information retrieval systems, distributed information sources, products, and services. The work in [15] proposes that five simple steps may maintain the quality of information created by a virtual community of practice: accountability as a basis for reputation, a thematic focus and culture, a sense of trust and identity through personal profile pages, a common memory or knowledge repository developed in collaboration, and membership criteria to keep the level of discourse high and on topic. Research done in [10] proposes information quality benchmarks for evaluating product and service performance. This benchmark builds on their earlier work [19], which determines several dimensions of information quality, such as accessibility, appropriate amount of information, believability, completeness, conciseness, consistent representation, ease of manipulation, freedom from errors, interoperability, objectivity, relevancy, reputation, security, timeliness, understandability, and value-added. Not all these factors may be of interest for assessing information quality in every domain; however, they prove useful as an extended set of metrics. We also investigated some of these factors in proposing our QoI model for surveillance systems.

In essence, all the above works focus mostly on evaluating the accuracy of the respective task. Our work differs in that we aim to evaluate the QoI provided by surveillance systems from a high-level requirements point of view. We observe that accuracy is just one of the key factors for measuring the quality of surveillance information; other factors also contribute to the overall QoI of the surveillance system. We aim to fill this gap by providing a comprehensive model that evaluates QoI in terms of several quality factors, including accuracy.

3. Problem Formulation

We consider a typical surveillance scenario to state the problem. The surveillance system in consideration deploys multiple sensors at a location. It also uses multiple media processing units that process the captured media streams in order to detect and record events of interest. The events may be occurring in real time or may be detected from previously captured media streams. It is imperative to determine the quality factors of the detected events that would provide confidence in the information delivered by the surveillance system. We express such confidence in terms of QoI, and describe the problem of determining the QoI of a surveillance system as follows:

1. Let S be a surveillance system designed to provide a set of information items (e.g. events) I_r = {I_1, I_2, ..., I_r}, r being the total number of information items. Also, let the surveillance system S utilize a set M_n = {M_1, M_2, ..., M_n} of n ≥ 1 media streams obtained from heterogeneous sensors.

2. Let q_{b,j} (1 ≤ b ≤ k, 1 ≤ j ≤ r) be the bth quality factor for the jth information item, where k is the number of quality factors. Also, let q_b be the corresponding quality factor contributing towards the computation of the quality of information at the system level. In both cases, the values of q_{b,j} and q_b lie between 0 and 1.

Our objective is to determine:

1. The quality Q_j, 1 ≤ j ≤ r, of each individual information item.
2. The overall quality of information, QoI, of the surveillance system S.
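The formulation above can be captured in a small data structure. The following Python sketch (the names are ours, purely illustrative) represents the k × r matrix of quality factors and the constraint that every value lies in [0, 1]:

```python
# Illustrative sketch of the problem setup: a k x r matrix q, where
# q[b][j] in [0, 1] is the b-th quality factor of the j-th information item.

K = 4  # quality factors: accuracy, certainty, timeliness, integrity
R = 2  # information items, e.g. I1 = face detected, I2 = face recognised

# Rows are quality factors, columns are information items.
q = [[0.0] * R for _ in range(K)]

def is_valid(q):
    """Check the constraint that every quality value lies in [0, 1]."""
    return all(0.0 <= v <= 1.0 for row in q for v in row)
```

The functions f1, f2, f3 that map this matrix to Q_j and to the overall QoI are defined in section 4.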
4. Proposed Model

4.1 Overview

We propose a hierarchical model to compute the QoI of a surveillance system deployed in different scenarios. The cornerstone of our model is to consider several quality factors (such as accuracy, certainty, etc.) that provide a high-level understanding of the quality of information delivered by a surveillance system. The computation of the overall QoI of such a system is derived by computing these factors at different levels of abstraction. For example, the value of each quality factor may be computed at each information item level, or over all the information items that the system provides. Therefore, two alternative approaches may be adopted to compute the overall QoI. The choice of one approach over the other may be influenced by the requirements of the particular surveillance system. Figure 1 shows a schematic view of these two approaches.

Figure 1(a) shows the information item level quality computation. We assume that multiple sensors provide heterogeneous media streams. Feature extraction and subsequent event detection from those streams may be performed by the media processors attached to the sensors, or by the event processors employed by the surveillance framework. The details of the event detection mechanisms are beyond the scope of this paper. The term q_{b,j} (1 ≤ b ≤ k, 1 ≤ j ≤ r) denotes the bth quality factor for the jth information item. For each information item I_j (1 ≤ j ≤ r), the quality factors q_{b,j} are computed according to the formalisms provided in section 4.2. These factors are combined to calculate the quality of information at the individual item level (Q_j, 1 ≤ j ≤ r) as follows:

    Q_j = f1(q_{1,j}, q_{2,j}, ..., q_{k,j})    (1)

where f1 is an integration function, described in section 4.3. Finally, the overall QoI is obtained through an integration function that considers the values of Q_j resulting from all the information items. At a higher level, this function may be denoted as:

    QoI = f2(Q_1, Q_2, ..., Q_r)    (2)

where Q_r is the overall quality of the rth information item I_r and f2 is an integration function, the details of which are provided in section 4.3.

Figure 1(b) shows the second approach to QoI computation. Here we propose the computation of the individual quality factors (q_b, 1 ≤ b ≤ k) over all the information items (such as the accuracy factor for all the items). Again, the values of these factors are combined through an integration function to obtain the overall QoI:

    QoI = f3(q_1, q_2, ..., q_k)    (3)

where f3 is another integration function, also detailed in section 4.3. We now describe how the quality factors are computed.

4.2 Quality factors

Our proposed model is based on four quality factors (k = 4): accuracy (q_1), certainty (q_2), timeliness (q_3), and integrity (q_4). In this section, we describe how the quality factors q_{b,j} (1 ≤ b ≤ k, 1 ≤ j ≤ r) at the information item level and the quality factors q_b (1 ≤ b ≤ k) over all the information items at the system level are computed.

4.2.1 Accuracy

Accuracy refers to the degree to which the observed information conforms to reality. For example, in the surveillance domain, the accuracy of event detection is the ratio of the number of correctly detected events to the total number of events that occurred in the environment; the correctly detected events are those that correspond to actual events in the environment. Therefore, the accuracy q_{1,j} at the jth information item level and q_1 at the system level (over all the information items) are computed as:

    q_{1,j} = E_{C,j} / E_{T,j}    (4)

    q_1 = (1/r) * sum_{j=1}^{r} q_{1,j}    (5)

where E_{C,j} is the number of correctly detected instances of the jth event by the system over a period of time, and E_{T,j} is the total number of instances of the jth event over that period.

4.2.2 Certainty

Certainty represents the measure of confirmation of the information. In surveillance systems, the certainty of the information is the output of event detectors in the form of a probability score. The event detectors may employ different methods (e.g. trained classifiers) to provide this score. The score eventually represents the confirmation level of the identified events, be they true positives, false positives, true negatives or false negatives. We compute q_{2,j} and q_2 using the following equations:

    q_{2,j} = Avg(Prob(I_j | M_n))    (6)

    q_2 = (1/r) * sum_{j=1}^{r} q_{2,j}    (7)

In the above equations, the term Prob(I_j | M_n) represents the probability of existence of the information item I_j (e.g. the occurrence of an event) based on the set M_n of n media streams, and Avg is a function that averages the certainty level of an individual information item over a period of time.
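To make the two computation schemes concrete, the following Python sketch implements equations (1)–(3) under the linear weighted-sum fusion that the paper adopts in section 4.3 (the function names and weight arguments are ours):

```python
def scheme1(q, factor_weights, item_weights):
    """Scheme 1 (eqs. 1-2): fuse the factors into per-item qualities Q_j,
    then fuse the Q_j into the overall QoI. q[b][j] is factor b of item j."""
    r = len(q[0])
    Q = [sum(w * q[b][j] for b, w in enumerate(factor_weights))
         for j in range(r)]                                     # eq. (1)
    return sum(w * Qj for w, Qj in zip(item_weights, Q))        # eq. (2)

def scheme2(q, factor_weights):
    """Scheme 2 (eq. 3): average each factor over all items first,
    then fuse the system-level factors q_b into the overall QoI."""
    r = len(q[0])
    q_sys = [sum(row) / r for row in q]                         # q_b over all items
    return sum(w * qb for w, qb in zip(factor_weights, q_sys))  # eq. (3)
```

Note that with linear fusion and equal item weights (w_j = 1/r), the two schemes coincide, which is consistent with the experiment in section 5 reporting the same overall QoI for both approaches.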
Figure 1. Schematic view of the QoI computation hierarchy. (a) Scheme 1: information item-level quality computation — the media streams M1, M2, ..., Mn feed information item (event) detection and quality factor computation, yielding I1(q_{1,1}, ..., q_{k,1}) through Ir(q_{1,r}, ..., q_{k,r}); the per-item qualities Q1, Q2, ..., Qr are then fused into the overall QoI. (b) Scheme 2: system-wide quality factor computation — the same media streams feed information item (event) detection; the quality factors q1, q2, ..., qk are computed over all items and fused into the overall QoI.
4.2.3 Timeliness

Timeliness is a measure of the information being available at the desired time. For example, if an unexpected event occurs at 8:02 pm, the system should be capable of reporting that event within an acceptable time delay. Suppose the system is expected to detect the jth information item (or event) within time T of its occurrence. If the system instead takes time T + Δ, Δ being the delay, then the timeliness q_{3,j} of this information item may be measured as:

    q_{3,j} = T / (T + Δ)    (8)

Note that q_{3,j} is averaged over time for the jth information item. Also, if the detection time does not exceed T (i.e. Δ ≤ 0), the timeliness falls within the expected limit and hence q_{3,j} = 1 for that timestamp. The factor q_3 at the system level (over all the information items) is computed as:

    q_3 = (1/r) * sum_{j=1}^{r} q_{3,j}    (9)

4.2.4 Integrity

Integrity refers to the fact that the available information has not been manipulated; as an example, the occurrence of an event has been detected based on authorized and untampered media streams. Manipulations of data or information may occur at different levels, such as at the time of capturing the data or of delivering the information. Therefore, integrity can be computed by determining the similarity of the original and the obtained data or information over a period. Precisely, the integrity q_{4,j} of the jth information item may be computed as:

    q_{4,j} = Dist(I_{S_j}, I_{O_j})    (10)

where Dist is a function computing the cosine-based similarity between the source data or information I_{S_j} and the obtained data or information I_{O_j}, so that identical, untampered data yields a value of 1. At the system level, q_4 over all the information items may be calculated as:

    q_4 = (1/r) * sum_{j=1}^{r} q_{4,j}    (11)

It should be mentioned that other quality factors, such as reliability, consistency, interoperability, objectivity, relevancy, and security, may also be incorporated into our proposed model. However, many of them depend on the system and architectural perspective of the specific application in mind and are hence left uncovered; we aim to explore these related factors in the future.

4.3 Overall QoI

The quality of information at the individual item level is computed using function f1 (given in equation 1), which is stated below:

    Q_j = sum_{b=1}^{k} w_b × q_{b,j}    (12)

where k = 4, 1 ≤ j ≤ r, r is the number of information items, and w_b is the weight of the bth quality factor. The value of each weight w_b lies between 0 and 1 and the
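Equations (8) and (10) can be sketched as follows. The cosine-similarity stand-in for Dist operates on feature vectors of the source and obtained data; that representation is our illustrative choice, as the paper does not fix one:

```python
import math

def timeliness(T, delay):
    """Eq. (8): q_{3,j} = T / (T + delta); equals 1 when there is no delay."""
    return 1.0 if delay <= 0 else T / (T + delay)

def integrity(source_vec, obtained_vec):
    """Eq. (10), reading Dist as cosine similarity between feature vectors
    of the source and the obtained data: 1 means the data is untampered."""
    dot = sum(a * b for a, b in zip(source_vec, obtained_vec))
    norm = (math.sqrt(sum(a * a for a in source_vec))
            * math.sqrt(sum(b * b for b in obtained_vec)))
    return dot / norm
```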
sum of all the values of w_b is 1. To illustrate this weight assignment, consider a surveillance system that is more concerned with the certainty of the information than with its timeliness; in this scenario, the certainty factor may be assigned a higher weight than the timeliness factor. The overall quality of information, QoI, is calculated in the two schemes as given below.

In scheme 1 (see figure 1(a)), we compute QoI by integrating the quality values Q_1 to Q_r of the individual information items. Let w_j, 1 ≤ j ≤ r, be the weight assigned to the jth information item. The fusion function f2 (as mentioned in equation 2) may be computed as:

    QoI = sum_{j=1}^{r} w_j × Q_j    (13)

Note that, in equation (13), we assume for simplicity that all the information items in a surveillance system are of equal importance. Relaxing this assumption would enable the extension of the proposed model, which will be explored in the future.

In scheme 2 (see figure 1(b)), the QoI is computed by integrating the quality factors q_1 to q_4 at the system level. The fusion function f3 (as mentioned in equation 3) is shown below:

    QoI = sum_{b=1}^{k} w_b × q_b    (14)

Note that, similarly to equation (12), sum_{b=1}^{k} w_b = 1. Also note that, for integration, we adopt a linear weighted-sum fusion strategy as in [18], assigning normalized weights to the different quality factors.

5. Experimental Results

To show the utility of our model for the computation of QoI, we present preliminary experimental results in a simple multimedia surveillance environment. The events of interest (information items) in our case are:
I1: the detection of a human face in the environment.
I2: the recognition of the detected human face.

5.1 System setup

We installed a demo surveillance system consisting of one camera (a Logitech webcam) at the entrance of our research lab. The camera provides the media stream M1. We used face detection and identification software for the detection of the information items (I1 and I2) based on the media stream M1. We note that the experimental setup at this stage is primitive; we aim to focus more on this issue in the future.

5.2 Information item processing

The media stream M1 is processed by the system in real time to identify the two information items I1 and I2. For ten different individuals, we first trained the system with just one image (face) of each person captured by the camera. We then asked six individuals, instead of all ten, to walk into the lab. This may have provided false positive data for some matchings, which is a good way to test the system's efficiency. The same sequence was performed six times for each person. We recorded the test data for both the face detection and identification events. Figure 2 shows some of the image samples, consisting of the training face, correctly detected faces, and an incorrectly detected face.

Figure 2. Sample test faces: a training image and three test samples, two correctly identified and one incorrectly identified.

5.3 QoI computation

We show here the computed values of the quality factors and the overall QoI provided by the demo surveillance system based on our proposed schemes. Table 1 summarizes the computed values at each level (information item level and system level). We provide a brief discussion of these results in the following section.

Table 1. QoI computation

Parameters  | Approach 1 | Approach 2 | Remarks
q1,1        | 0.8611     | -          | Accuracy factor for I1
q1,2        | 0.3888     | -          | Accuracy factor for I2
q2,1        | 0.679      | -          | Certainty factor for I1
q2,2        | 0.679      | -          | Certainty factor for I2
q3,1        | 0.8563     | -          | Timeliness factor for I1
q3,2        | 0.8563     | -          | Timeliness factor for I2
q4,1        | 1          | -          | Integrity factor for I1
q4,2        | 1          | -          | Integrity factor for I2
q1          | -          | 0.625      | Accuracy factor over all the information items
q2          | -          | 0.679      | Certainty factor over all the information items
q3          | -          | 0.8563     | Timeliness factor over all the information items
q4          | -          | 1          | Integrity factor over all the information items
Q1          | 0.8198     | -          | Assuming w1=0.5, w2=0.3, w3=0.1, w4=0.1
Q2          | 0.5837     | -          | Assuming w1=0.5, w2=0.3, w3=0.1, w4=0.1
Overall QoI | 0.7018     | 0.7018     | QoI of the system

5.4 Discussions

The test data consists of 36 records based on 6 individuals. The face detection event I1 is correctly performed 31 times out of 36, which gives an accuracy factor q1,1 of 0.8611 for information item I1 according to equation (4). Face recognition, I2, is correctly performed by the system 14 times out of 36, producing a value of 0.3888 for q1,2. In the case of the certainty factor, the system does not provide a percentage value for face detection I1; however, it provides a certainty percentage for face recognition I2. This value is averaged to obtain q2,2 as 0.679, and as we do not have q2,1, we assign it the same value as q2,2. Similarly, for the timeliness factor, we only receive values for I2 in the attempts at face recognition. From these values, we take the average as the expected time (T in equation 8) to identify the specified event and calculate q3,2 for all the samples over the testing period. For simplicity, we assume q3,1 to have the same value as q3,2, as the system does not provide this value. Finally, we consider that the data and information are not tampered with and that integrity is maintained; with this assumption, q4,1 = q4,2 = 1.

We then compute q1 to q4 using equations (5), (7), (9), and (11), respectively. Once all the quality factor values are calculated, we assign a sample weight to each quality factor (as shown in table 1) and compute Q1 = 0.8198 and Q2 = 0.5837 using equation (12). The values of Q1 and Q2 give the quality at the information item level. The final QoI is computed as 0.7018 (using equations (13) and (14) separately) for both proposed approaches. The QoI value is the same in both cases because we used the same weight values in both; these could differ as per requirements. Note that the quality of I1 is higher than that of I2, as the system responded better to I1. In some systems, one information item may be of higher interest, and weights could be assigned accordingly at the information item level. Furthermore, there could be situations where different weights need to be assigned to both the information items and the quality factors; this would have an impact on the overall calculation of QoI. We will explore this in the future.

We demonstrated the proposed QoI calculation methodology with a simple surveillance scenario, as building a complex surveillance system was not the purpose of this research. Instead, we wanted to emphasize how to consider several high-level factors in computing the quality of surveillance information.
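The Table 1 numbers can be reproduced from the raw counts reported above. The short script below recomputes equations (4), (12), (13) and (14) with the paper's weights; small last-digit differences versus the table come from rounding:

```python
# Recompute the Table 1 values from the raw counts reported in section 5.4.
w = [0.5, 0.3, 0.1, 0.1]    # weights: accuracy, certainty, timeliness, integrity
q = [                       # columns: [I1, I2]
    [31 / 36, 14 / 36],     # accuracy, eq. (4): ~0.8611 and ~0.3888
    [0.679, 0.679],         # certainty (q_{2,1} copied from q_{2,2})
    [0.8563, 0.8563],       # timeliness (q_{3,1} copied from q_{3,2})
    [1.0, 1.0],             # integrity (data assumed untampered)
]

Q = [sum(wb * q[b][j] for b, wb in enumerate(w)) for j in (0, 1)]  # eq. (12)
qoi1 = sum(Q) / len(Q)                            # eq. (13), equal item weights
q_sys = [sum(row) / len(row) for row in q]        # eqs. (5), (7), (9), (11)
qoi2 = sum(wb * qb for wb, qb in zip(w, q_sys))   # eq. (14)
```

With linear fusion and equal item weights, qoi1 and qoi2 agree, matching the 0.7018 reported for both approaches.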
6. Conclusions

This paper proposes a comprehensive model for evaluating the quality of information delivered by a surveillance system. We emphasize evaluating such quality because it may provide increased confidence and trust in surveillance. The proposed model is based on several quality factors: accuracy, certainty, timeliness, and integrity. Using these factors, we provide two alternative approaches to compute the overall QoI of the surveillance system in consideration; our model allows computing QoI at the information item level as well as at the system level. At this stage, our proposed QoI model applies to static computation of QoI; that is, experiments need to be done to compute QoI prior to deploying the surveillance system. However, we are currently working on dynamically evaluating the QoI, which will enable a user to measure the QoI
at any running stage of the system. This will take into consideration the ambient context and environment model, the recorded history, and any upgrades of the sensors and the accompanying media processing algorithms. Nevertheless, we believe that our proposed work may be considered a benchmark for evaluating the QoI of surveillance systems.
References

[1] Information quality. Wikipedia, http://en.wikipedia.org/wiki/Information_quality.
[2] P. K. Atrey, M. S. Kankanhalli, and R. Jain. Information assimilation framework for event detection in multimedia surveillance systems. ACM Multimedia Systems Journal, online version published in 2006.
[3] R. Baeza-Yates and B. Ribeiro-Neto. Modern Information Retrieval. Addison Wesley / ACM Press, New York, 1999.
[4] F. Bashir and F. Porikli. Performance evaluation of object detection and tracking systems. In Proceedings of the 9th IEEE International Workshop on PETS, pages 7–14, New York, USA, June 2006.
[5] J. Black, T. Ellis, and P. Rosin. A novel method for video tracking performance evaluation. In Joint IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance (VS-PETS), pages 125–132, Nice, France, October 2003.
[6] D. Doermann and D. Mihalcik. Tools and techniques for video performance evaluation. In International Conference on Pattern Recognition (ICPR), volume 4, pages 167–170, Copenhagen, Denmark, September 2000.
[7] K. Ducatel, M. Bogdanowicz, F. Scapolo, J. Leijten, and J.-C. Burgelman. Scenarios for ambient intelligence in 2010. IST Advisory Group final report, 2001. ftp://ftp.cordis.lu/pub/ist/docs/istagscenarios2010.pdf.
[8] L. Feng, P. M. G. Apers, and W. Jonker. Towards context-aware data management for ambient intelligence. In F. Galindo, M. Takizawa, and R. Traunmüller, editors, 15th Int. Conf. on Database and Expert Systems Applications (DEXA), volume LNCS 3180, pages 422–431, Berlin, August 2004. Springer-Verlag.
[9] C. Jaynes, S. Webb, R. M. Steele, and Q. Xiong. An open development environment for evaluation of video surveillance systems. In IEEE International Workshop on Performance Evaluation of Tracking and Surveillance (PETS), pages 32–39, Copenhagen, Denmark, June 2002.
[10] B. Kahn, D. Strong, and R. Wang. Information quality benchmarks: product and service performance. Communications of the ACM, 45(4):184–192, April 2002.
[11] N. Lazarevic-McManus, J. Renno, and G. A. Jones. Performance evaluation in visual surveillance using the F-measure. In 4th ACM International Workshop on Video Surveillance and Sensor Networks (VSSN), pages 45–52, California, USA, October 2006.
[12] V. Mariano, J. Min, J.-H. Park, R. Kasturi, D. Mihalcik, H. Li, D. Doermann, and T. Drayer. Performance evaluation of object detection algorithms. In 16th International Conference on Pattern Recognition (ICPR), volume 3, pages 965–969, Los Alamitos, CA, USA, September 2002.
[13] S. Muller-Schneiders, T. Jager, H. Loos, and W. Niem. Performance evaluation of a real-time video surveillance system. In Joint IEEE International Workshop on Visual Surveillance and Performance Evaluation of Tracking and Surveillance (VS-PETS), pages 137–143, Beijing, China, October 2005.
[14] J. Nascimento and J. Marques. Performance evaluation of object detection algorithms for video surveillance. IEEE Transactions on Multimedia, 8(4):761–774, August 2006.
[15] A. Neus. Managing information quality in virtual communities of practice. In E. Pierce and R. Katz-Haas, editors, 6th International Conference on Information Quality at MIT, Boston, MA, USA, 2001. Sloan School of Management.
[16] F. Oberti, A. Teschioni, and C. S. Regazzoni. ROC curves for performance evaluation of video sequences processing systems for surveillance applications. In IEEE International Conference on Image Processing, volume 2, pages 949–953, Kobe, Japan, October 1999.
[17] T. Schlogl, C. Beleznai, M. Winter, and H. Bischof. Performance evaluation metrics for motion detection and tracking. In 17th International Conference on Pattern Recognition (ICPR), volume 4, pages 519–522, California, USA, August 2004.
[18] J. Wang, M. Kankanhalli, W.-Q. Yan, and R. Jain. Experiential sampling for video surveillance. In ACM Workshop on Video Surveillance (IWVS), Berkeley, CA, USA, November 2003.
[19] R. Wang and D. Strong. Beyond accuracy: what data quality means to data consumers. Journal of Management Information Systems, 12(4):5–34, 1996.
[20] F. Ziliani, S. Velastin, F. Porikli, L. Marcenaro, T. Kelliher, A. Cavallaro, and P. Bruneaut. Performance evaluation of event detection solutions: the CREDS experience. In IEEE Conference on Advanced Video and Signal Based Surveillance, pages 201–206, Como, Italy, September 2005.