Integrating RFID on Event-based Hemispheric Imaging for Internet of Things Assistive Applications

V. Kolias, I. Giannoukos
National Technical University of Athens
{vkolias, igiann}@medialab.ntua.gr

C. Anagnostopoulos, I. Anagnostopoulos
University of the Aegean
[email protected], [email protected]

V. Loumos, E. Kayafas
National Technical University of Athens
{loumos, kayafas}@cs.ntua.gr

ABSTRACT

Automatic surveillance of a scene, in a broad sense, comprises one of the core modules of pervasive applications. Typically, multiple cameras are installed in an area to identify events through image processing techniques, which however present limitations in terms of object occlusion, noise, lighting conditions, image resolution and computational cost. To overcome such limitations and increase recognition accuracy, the video sensor output can be complemented by Radio Frequency Identification (RFID) technology, which is ideal for the unique identification of objects. In this paper we examine the feasibility of integrating RFID with hemispheric imaging video cameras. After a brief description and discussion of related research regarding RFID location, video surveillance and their integration, we examine the factors that would render such a system feasible in terms of hardware, software and their environments. The advantages and limitations of each technology and of their integration are also presented, to conclude that their combination could lead to a robust detection of objects and their interactions within an environment. Finally, some possible applications of such an integration are presented.

Categories and Subject Descriptors
I.4.8 [Image Processing and Computer Vision]: Scene Analysis – sensor fusion, tracking.

General Terms
Measurement, Performance, Design, Experimentation, Security and Theory.

Keywords
RFID, Hemispheric Imaging, Surveillance, Pervasive Systems, Internet of Things (IoT)

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. PETRA’10, June 23–25, 2010, Samos, Greece. Copyright © 2010 ACM ISBN 978-1-4503-0071-1/10/06... $10.00.

1. INTRODUCTION

Remote activity monitoring across large environments, such as government facilities, public buildings or industrial sites, in real time is a prerequisite for various monitoring applications. Such monitoring can also prove a useful addition in assistive environments of smaller size, like houses and offices. Modern video-based surveillance systems, which employ powerful real time analysis techniques, are widely deployed and can be useful to this end as well. However, the use of multiple cameras to provide surveillance coverage over a wider area, while ensuring object visibility over a large range of depths, introduces the need to coordinate the cameras in order to detect events of interest, which increases system complexity.

Radio Frequency Identification (RFID) technology is arguably the ideal solution for object identification. It has successfully been used in a large variety of applications, like enterprise supply chain management for inventorying, tracking and, of course, object identification. RFID may also prove useful for pervasive computing, by providing identity to virtually everything. It is not accidental that RFID, along with wireless sensor and nanotechnologies, has been combined to form what is known as the Internet of Things [1]. For nearly every pervasive computing application, another vital requirement is real time locating, which emanates from the inherent need for just-in-time actionable information. Over the years, many systems have addressed the problem of automatic location sensing with various techniques.

In this paper we elaborate on the feasibility of developing a hybrid system combining video surveillance feeds and RFID, in order to provide a solid system for the automatic identification and tracking of objects in the output of a video camera. The use of hemispheric imaging cameras that maximize the area coverage of a surveillance system is examined, in order to eliminate the need for multiple cameras, and the factors that influence the purpose of such a system are identified.

The rest of this paper is organized as follows. In Section 2 we provide an overview of RFID and video processing technologies along with other competitive or complementary technologies. In Section 3, related research pertaining to surveillance with RFID sensors, video cameras and their combination is provided. Section 4 describes the hardware and software components that are necessary for such a system and discusses its advantages, disadvantages and possible implementation issues. Section 5 describes possible applications of this integration. This paper concludes in Section 6.

2. UNDERLYING TECHNOLOGIES

2.1 RFID Overview

One of the pivotal enabling technologies of pervasive and ubiquitous computing is RFID. Grouped under the broad category of automatic identification technologies, RFID is used as a generic term to describe a system where the identity (in the form of a unique serial number) of an object is transmitted wirelessly, using radio waves. A typical RFID system is composed of: (a) the RFID tag, which contains a digital number associated with the physical object it is attached to, and (b) the RFID reader (also known as interrogator), which is usually connected to a backend database. The reader is also equipped with an antenna, a transceiver and a processor, and broadcasts a radio signal in order to query the tag and read its contents.

The two most important characteristics of an RFID system, each encapsulating differences in range, data transfer and transmission under certain environmental conditions, are: (a) the energy resources and computational capabilities of the RFID tags and (b) the operating frequency. According to the first characteristic, RFID tags are distinguished into passive and active, as well as their combinations. Active tags incorporate a battery and can transmit signals autonomously over a long operating range with high performance, but they are expensive and usually large in size. On the other hand, passive tags require an external source to provoke signal transmission, which is acquired using either inductive coupling or electromagnetic capture, and they communicate with the reader by utilizing load modulation or electromagnetic backscatter. Passive tags are widely used and in many cases they are preferred over active ones due to their low cost, small size and practically unlimited lifetime. Additionally, RFID systems can be categorized into four operating frequency classes: (a) Low, (b) High, (c) Ultra High and (d) Super High Frequency (or Microwave). As the frequency increases, range and data transfer rates also increase, but penetration through water and materials such as metal decreases.

The necessity of establishing uniform engineering criteria, methods, processes and practices for the defining characteristics of RFID systems led to the proposal of various standards, two of which are more prominent. EPCglobal [2] defines a combined method of classifying tags that specifies unique identification numbers (Electronic Product Codes), frequencies, coupling methods, types of keying and modulation, information storage capacity and modes of interoperability, among others. Similarly, ISO (jointly with IEC) [3] developed standards for identification, communication between the reader and the tag, data protocols for the middleware, and testing, compliance and safety.
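To make the frequency classification above concrete, the following is a minimal sketch that encodes indicative band and read-range figures (typical values only; exact figures vary by regulation, tag type and vendor, and are not taken from this paper) together with a crude selection rule reflecting the range-versus-penetration trade-off discussed above.

```python
# Indicative figures only: typical bands and read ranges for the four RFID
# frequency classes; exact values depend on regulation, tag type and vendor.
RFID_BANDS = {
    "LF":        {"frequency": "125-134 kHz", "typical_range_m": 0.1},
    "HF":        {"frequency": "13.56 MHz",   "typical_range_m": 1.0},
    "UHF":       {"frequency": "860-960 MHz", "typical_range_m": 10.0},
    "Microwave": {"frequency": "2.45 GHz",    "typical_range_m": 30.0},
}

def pick_band(required_range_m, near_metal_or_liquid=False):
    """Crude selection helper: choose the lowest band whose typical range covers
    the requirement, since lower frequencies penetrate liquids and metal better."""
    for name in ("LF", "HF", "UHF", "Microwave"):
        if RFID_BANDS[name]["typical_range_m"] >= required_range_m:
            if near_metal_or_liquid and name in ("UHF", "Microwave"):
                continue  # higher bands suffer most near metal and liquids
            return name
    return "Microwave"

print(pick_band(5.0))                             # -> 'UHF'
print(pick_band(0.5, near_metal_or_liquid=True))  # -> 'HF'
```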

2.1.1 Identification and RFID

A fundamental requirement of pervasive systems in general is the ability to uniquely identify things and/or entities. RFID satisfies this requirement by nature. There are also several other technologies that serve this purpose, each having advantages and disadvantages, but they are steadily being replaced by RFID in most application areas.

The barcode, still the most widely used product tracking method in supply chain management and the cheapest identification solution, is an optical, machine readable, write-once representation of an object category. The most important weaknesses of barcode technology are the inability to provide extra information regarding a single object (two-dimensional symbologies deal with this issue, however) and the requirement that the bar-coded object be in line-of-sight (LOS). Card technologies are another category for the identification of objects and entities, and include magnetic cards, smart cards and optical cards. Usually embedded in a credit card-sized plastic card, they encompass either a magnetic stripe or an integrated circuit and have greater storage capacities. In case they incorporate a microprocessor, they also have increased processing capabilities, which allows them to be used in demanding applications such as security (with great limitations, however). On the downside, most cards either require contact or need to be very close to the reader. The costs also increase significantly in proportion to the features of the card in use. RFID balances efficiency and cost-effectiveness, while it alleviates the need for the tag to be in LOS. This is the reason why RFID is widely used in access control, anti-counterfeiting, and tracking and tracing, among others.

2.1.2 Real time locating and RFID

Although RFID was primarily designed for identification, over the last decade research efforts have also focused on the use of RFID for real time locating. There are plenty of competitive and/or complementary real time locating technologies to RFID, each of which differs in accuracy, precision, complexity and cost, among other factors. An overview of these techniques is presented in [4].

Wi-Fi (also referred to as IEEE 802.11), the technology used for wireless device interconnection, is probably the ideal solution for locating devices equipped with Wi-Fi tags, like laptops and PDAs, that are already connected to a wireless network. Wi-Fi real time locating, however, is highly dependent on network infrastructure, has serious scalability issues and may introduce some burden on the network, not to mention the influence of various environmental conditions like obstacles, temperature and humidity, common to most wireless technologies. Another real time locating technology similar to Wi-Fi is Bluetooth, the wireless networking standard designed for low power consumption and communication in a personal area network. Bluetooth is standardized, widely adopted, multipurpose and relatively accurate. Nevertheless, the range of Bluetooth access points is rather short and, because of the inquiry process, the positioning delay is relatively high. As with Wi-Fi, Bluetooth tags are not suited for very small objects. Ultra Wideband (UWB) is another radio technology that can be used at low energy levels for short-range, high bandwidth communications. UWB systems provide high accuracy that can reach a few centimeters; however, interference of UWB signals by metallic and liquid materials is a constant problem and their cost is prohibitive, at least for small scale applications. ZigBee is a low-cost, low-power, proprietary wireless mesh networking standard. Since it is standardized, interoperability of equipment from different manufacturers is guaranteed. It also has excellent performance in low signal-to-noise ratio environments and it is fault tolerant. Nonetheless, ZigBee routers have short range and their signal has low penetration through walls and other obstacles.

As far as RFID is concerned, both passive and active tags can be used for real time locating. Passive tags can be acquired at a very low cost and can be attached to almost everything. They also facilitate high read rates (approx. 1500 tags per second). However, they have low tolerance to harsh environmental conditions and they require the presence of multiple readers and antennas in order to cover wider areas. Active tags, on the other hand, improve accuracy and tolerance but pose serious maintenance challenges, since they have a limited lifetime. Currently, there is no single best real time location sensing technique. Each technology has its own distinct characteristics when applied in real environments and the choice is clearly a matter of trade-off between accuracy, precision, system complexity and suitability for a given environment.

2.2 Video Surveillance

Video cameras, in the form of closed circuit television (CCTV), have been widely used in surveillance applications, where a human operator evaluates the captured events and raises alerts. These systems are installed mainly in public spaces, where security is the major concern. However, the human factor involved in the procedure greatly influences surveillance effectiveness, due to fatigue or lack of concentration. Intelligent Surveillance (IS) systems have been developed to automatically detect objects, track a person and recognize an event, in order to react upon any abnormal behavior taking place in a scene. IS systems are being extensively researched in the literature and solutions have been proposed for a number of applications, especially in cases where a human operator cannot be afforded. Recently, these systems have been considered for pervasive applications, including ambient home, e-health and e-care systems.

However, IS systems still present certain limitations [5]. Several cameras are required to cover a large space, even if they are equipped with wide angle lenses. In this case, extensive installation modifications limit the acceptance of video surveillance by a wide audience, despite the fact that in some cases installing cameras can prove useful, such as in home-assisting environments for the elderly. Furthermore, the image processing modules of these systems [6] are sensitive to complex environmental changes and object occlusions, that is, when a targeted object hides behind another. Additionally, video cameras, which usually have a lower resolution than still cameras, cannot robustly detect small or distant objects. The Charge-Coupled Device (CCD) that video cameras are equipped with also presents noise, by design, which in turn hinders object location performance. Finally, multi-camera environments and the storage capabilities they require lead to extra cost for the acquisition of a high-end PC and storage arrays. Therefore, other solutions should be considered that would limit the cost of these systems and increase their effectiveness, in order to become more appealing for wider audiences.

2.2.1 Hemispheric Cameras

Although super wide angle cameras have been available for many years, they have only recently become mainstream. A hemispheric camera is capable of covering an entire room by generating two 180º panoramic views simultaneously and of recording a scene in detail without losing any event occurring within its field of view. Such cameras are equipped with high speed processors and CCDs, in order to provide high quality images at a low cost. A hemispheric camera attached to a room ceiling can cover a surveillance space of about 60 m2 [7]. Consequently, the number of cameras required to cover a space is greatly reduced, along with the costs of acquiring multiple cameras and the computational needs for their coordination under a given scenario.

The main disadvantage of hemispheric imaging is the barrel distortion it presents, especially at the borders of the images. Nevertheless, many methods have been proposed in the literature regarding perspective correction of the produced images. Providers of hemispheric cameras have integrated such methods in their embedded processors, thus performing perspective correction of the video signal in real time. Therefore, hemispheric imaging cameras have become suitable for pervasive applications, since no additional CPU resources are required to correct the hemispheric images. The surveillance procedure can be conducted even on a personal computer with relatively slow processing capabilities. In Fig. 1, the image produced by a hemispheric lens camera is shown [7]. On the right side of this figure, the hemispheric image acquired from the camera is presented. The embedded processor of the camera then enables real time perspective correction of the barrel effect. Furthermore, virtual Pan, Tilt and Zoom (PTZ) is offered, as well as surveillance in a corrected panoramic display mode showing the two halves of a room.
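To illustrate the kind of perspective correction such cameras perform in their embedded processors, the following is a minimal dewarping sketch. It assumes an ideal equidistant fisheye model (r = f·θ) and nearest-neighbour sampling; real products such as [7] use calibrated lens models in firmware, so this is only an approximation for illustration.

```python
import numpy as np

def fisheye_to_panorama(img, out_w=720, out_h=180):
    """Unwrap a ceiling-mounted hemispheric (fisheye) frame into a 360-degree
    panoramic strip, assuming an ideal equidistant projection r = f * theta.
    `img` is an HxWx3 array whose circular image fills the frame."""
    h, w = img.shape[:2]
    cx, cy = w / 2.0, h / 2.0            # assume optical centre at image centre
    radius = min(cx, cy)                 # image-circle radius (90 deg off-axis)
    f = radius / (np.pi / 2.0)           # equidistant model: r = f * theta

    # Output grid: azimuth 0..2*pi across columns, off-axis angle 0..pi/2 down rows
    az = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
    theta = np.linspace(0.0, np.pi / 2.0, out_h)
    az_g, th_g = np.meshgrid(az, theta)

    # Back-project every panorama pixel to its source pixel in the fisheye image
    r = f * th_g
    src_x = np.clip(np.round(cx + r * np.cos(az_g)).astype(int), 0, w - 1)
    src_y = np.clip(np.round(cy + r * np.sin(az_g)).astype(int), 0, h - 1)
    return img[src_y, src_x]
```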

Figure 1 - A hemispheric camera video feed and the result of perspective correction

3. RELATED WORK

3.1 Location Sensing with RFID

Real time locating is a research area that dates back to the Second World War and has intensified in recent years, especially for indoor spaces. However, the possibility of using RFID for real time locating has been examined only during the last decade. In [8] the authors investigate the suitability of RFID for locating objects and describe a location sensing prototype system based on RFID, called LANDMARC. They use active RFID tags organized in a grid array in order to serve as reference points that help location calibration. Since the products they used did not provide the signal strength directly, they had to estimate it, thus introducing unnecessary processing time and errors. The authors report that the 50th percentile error distance is around 1 meter, while the maximum error distances are less than 2 meters. An improvement on the above approach is proposed in [9]. The authors point out the unnecessary computation caused by including all reference tags as candidates, as well as the increase of the error rate with the density of the reference tags. With their improvements, the mechanism can reach higher accuracy with reduced computational load, since it simply computes the target coordinates from the positions of the neighboring tags and their assigned weighting values.
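As an illustration of the nearest reference tag idea behind LANDMARC [8], the following is a minimal sketch (not the authors' implementation): the tracking tag is compared with the reference tags in signal-strength space, and its position is estimated as a weighted centroid of the k most similar reference tags.

```python
import numpy as np

def knn_reference_locate(track_rss, ref_rss, ref_pos, k=4):
    """Estimate a tracking tag's 2-D position from per-reader signal strengths,
    following the k-nearest reference tag idea of LANDMARC [8].

    track_rss : (n_readers,)         signal strengths of the tracking tag
    ref_rss   : (n_refs, n_readers)  signal strengths of each reference tag
    ref_pos   : (n_refs, 2)          known (x, y) positions of the reference tags
    """
    # Euclidean distance in signal-strength space to every reference tag
    e = np.linalg.norm(ref_rss - track_rss, axis=1)
    nearest = np.argsort(e)[:k]                # k most similar reference tags
    w = 1.0 / (e[nearest] ** 2 + 1e-9)         # weight by inverse squared distance
    w /= w.sum()
    return w @ ref_pos[nearest]                # weighted centroid
```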

An older radio frequency location sensing system is SpotON [10], which is based on a radio signal strength aggregation algorithm for 3-D location sensing. SpotON uses custom built tags which can be located by homogeneous sensor nodes without a central control; in order to improve accuracy and precision, the density of tags and the correlation of multiple measurements are exploited. In [11] the authors propose a navigation, mapping and localization approach using the combination of a laser range scanner, a robot and RFID. They present a sensor model that allows the computation of the likelihood of tag detections given the relative pose of the tag with respect to the robot. Through practical experiments they demonstrate that their system can build accurate 3D maps of RFID tags, and they further illustrate that the resulting maps can be used to accurately localize the robot and moving tags. In [12] the authors propose a method for identifying the best RFID tag position by employing an intelligent tag strength prediction approach using the back propagation learning algorithm of neural networks. They report that the proposed approach achieves a prediction accuracy of about 90% in their experiments.

3.2 Video Surveillance

In the past few years, research has shifted from still images to video sequence processing in order to segment and track objects, classify events and analyze behavior. Video cameras act as intelligent image sensors that estimate various characteristics, in an attempt to bridge the semantic gap, that is, to link the description of an action to the pattern analysis conducted on the image sequence.

Video surveillance has been successfully applied in a number of situations. First of all, cameras have been installed to serve as image sensors and act as a non-intrusive means of increased security [13]. Secondly, video processing methods have been extensively used for industrial applications [14]. Industrial inspection systems include, among others, textile quality production [15], metal product finishing [16], glass manufacturing [17], machine parts [18] and printing products [19]. Another field in which video surveillance has been successfully applied is the development of Intelligent Transportation Systems [20]. These include automatic lane finding [21] and license plate recognition systems [22]. Video surveillance has also recently found increased use in behavioral analysis [23][24][25]. Finally, many e-health and e-care applications have benefited from the use of video cameras [26].

Typical video processing techniques include background extraction, object locating and tracking. Background extraction is used in inspection applications to detect change in a scene. A comprehensive review of related techniques is presented in [27]. Additionally, object segmentation and tracking methods comprise core modules of video surveillance [28]. Although video processing technology is widely adopted, it presents certain limitations. The use of low resolution cameras hinders the recognition of small objects. Typically, events in a scene involve human interaction with small items, therefore recognizing both the people and the items could lead to a high level classification of behavior. Additionally, covering a large space (e.g. a whole room) efficiently requires multiple cameras, which in turn leads to increased computational cost and storage requirements for fusing and spatially registering the video data. Finally, object occlusion seriously reduces the effectiveness of intelligent surveillance systems.
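Background extraction, mentioned above as a typical building block, can be illustrated with a minimal running-average sketch (one of many possible schemes; see [27] for a proper survey): the model keeps an exponentially weighted average of past frames and flags pixels that deviate from it.

```python
import numpy as np

class RunningAverageBackground:
    """Minimal background-extraction sketch: maintain an exponentially weighted
    average of past frames and mark pixels that deviate from it as foreground.
    Expects single-channel (grayscale) frames with values in [0, 255]."""

    def __init__(self, alpha=0.05, threshold=25.0):
        self.alpha = alpha          # adaptation rate of the background model
        self.threshold = threshold  # intensity difference marking a changed pixel
        self.background = None

    def apply(self, frame):
        frame = frame.astype(np.float64)
        if self.background is None:
            self.background = frame
            return np.zeros(frame.shape, dtype=bool)
        mask = np.abs(frame - self.background) > self.threshold
        # Adapt the model only where the scene is considered background,
        # so slow illumination changes are absorbed but moving objects are not.
        self.background[~mask] += self.alpha * (frame[~mask] - self.background[~mask])
        return mask
```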

3.3 RFID with Video Integration

The integration of RFID with video is not a novelty. Recent proposals in the literature have combined RFID data and video sequences to present multimedia applications for various purposes. Specifically, the authors in [29] propose a system for monitoring learners’ behavior using an RFID reader and a video camera. The aim of this work is to provide personalized learning services to students, that is, a ubiquitous learning room. To this end, an RFID reader is attached to the bottom surface of a desk and a camera is installed beside the learner to capture the student’s activities. A linear sensor data fusion technique is used along with a rule-based system to categorize the learner’s activity into away, distracted, sleeping or studying a particular book (e.g. English literature or Mathematics). In a similar work [30], the authors employ an RFID reader and multiple image sensors, in the form of two wide and two narrow field-of-view cameras, to recognize activities of daily living. The RFID reader takes the form of a bracelet that the person wears and has a detection range of about 10–15 centimeters. To detect and track the person in the image sequences, a k-means clustering algorithm is used to adapt the background, along with a Probabilistic Appearance Model (PAM) that represents people’s color appearance in terms of Gaussian mixture models, and Haar-wavelet based human feature detectors. In this work, a hierarchical recognition scheme is proposed by building a dynamic Bayesian network (DBN) that encompasses both coarse and fine level activity recognition. In [31] the authors also propose a method for activity recognition that incorporates a video camera and RFID sensors. In this work, the object appearance models are trained without manual labeling, and a dynamic Bayesian network is designed to systematically integrate common-sense knowledge, the RFID sensor data and the vision sensor data, in order to classify kitchen activities (e.g. preparing a specific recipe).

The above-mentioned proposals employ short range RFID sensors and wearable readers to achieve activity recognition. In this paper we examine the feasibility of utilizing long range UHF RFID readers in conjunction with hemispheric lens cameras, in order to alleviate any requirement for action by the end user; wearable devices would also possibly introduce a level of discomfort to the end user. Finally, the work in [32] involves locating nomadic objects, on which an RFID tag is attached, and displaying them to the user in a video sequence. The method of this work can be used either online, for real-time object location on a user’s mobile device, or offline for post processing. In our paper we discuss the prospects of extending this approach with event recognition capabilities, in order for it to be applicable to a wider spectrum of scenarios.

4. INTEGRATING RFID WITH HEMISPHERIC VIDEO CAMERAS

As clarified in Section 3, sensing location in real time is far from being a trivial task, let alone with the combination of two distinct and diverse technologies, namely RFID and video processing. The essential components that would render such a system feasible are depicted in Figure 2 and can be described as follows:

4.1.1 Hardware

RFID tags: We examine the use of small, inexpensive passive RFID tags that can be attached to nearly everything. The choice of passive tags over active ones alleviates the need for maintenance and increases the life expectancy and number of “taggable” objects. On the downside, the processing capabilities of such tags are limited and they lose their autonomy, since they have to be prompted in order to communicate. However, their identification feature alone constitutes a powerful tool in an indoor assistive environment that can be exploited in various ways. RFID tags are distributed throughout a given space on objects of interest that need either to be identified or located. Instead of focusing on determining the absolute position of any given tag, finding its relative or symbolic position would be easier and less computationally demanding, without losing its usefulness; a minimal sketch of this zone-level approach is given after this subsection.

RFID Readers: An RFID reader constitutes the identification and location engine of the system. Depending on the manufacturer, readers may encompass special software for the determination of tag velocity and direction derived from measurements and/or for the communication with other devices. An important consideration is the number and the placement of the readers and their antennas. Given the uncertainty of tag locations in many cases, the optimization of reader and/or antenna placement is a critical step in order to maximize the powering region in a designated area. This task is typically solved by trial and error; however, in small indoor assistive environments like hospital or house rooms, it is relatively easier.

Hemispheric Camera: As already mentioned, a high resolution hemispheric camera has a horizontal field of view of 360º and can therefore significantly reduce bandwidth consumption, coordination computations and storage space, since recording and/or transmitting happens only when an event is occurring and only the relevant parts of the scene are saved. Consequently, mounting a hemispheric camera is as simple as finding the location with the maximum coverage in a room, which is usually the center of the ceiling in a square or rectangular room.

Processing Unit: All data eventually have to be gathered somewhere in order to be processed. The actual processing unit can be anything from a server to a PDA, depending on the computational requirements of the application.
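The following is a minimal sketch of the symbolic (zone-level) locating mentioned under RFID tags above, assuming each reader antenna port is mapped to a named zone of the room; the zone names, EPC values and RSSI figures are hypothetical.

```python
# Hypothetical mapping from reader antenna ports to symbolic zones of a room.
ANTENNA_ZONE = {1: "bed area", 2: "doorway", 3: "medicine cabinet"}

def symbolic_positions(tag_reads):
    """Reduce raw tag reads to a symbolic position per tag: the zone of the
    antenna that heard the tag with the strongest signal.
    `tag_reads` is an iterable of (epc, antenna_port, rssi_dbm) tuples."""
    best = {}
    for epc, port, rssi in tag_reads:
        if epc not in best or rssi > best[epc][1]:
            best[epc] = (port, rssi)
    return {epc: ANTENNA_ZONE.get(port, "unknown") for epc, (port, _) in best.items()}

# Example: tag E200-0001 is heard loudest by antenna 3 -> "medicine cabinet"
reads = [("E200-0001", 1, -68.0), ("E200-0001", 3, -52.5), ("E200-0002", 2, -60.1)]
print(symbolic_positions(reads))
```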

4.1.2 Software

Middleware: The software that resides between the RFID reader, the hemispheric camera and the application that encapsulates the business logic for a given scenario. It is necessary for making the application independent of the specific RFID and surveillance technologies and for providing real time information to disparate applications, e.g. a drug inventory system in a hospital and a pharmaceutical supplier’s system. It handles both RFID and video data.

Client Application: The software that interacts with the end user and the middleware; it contains all the rules and policies that model the real life business objects and their interactions. In a hospital environment, for example, the decision to call a nurse when the event of a patient leaving his bed occurs, or to automatically place medication orders when their quantity reaches a certain limit, is clearly a matter of business intelligence.

Integration Component: The integration process is the real time fusion of RFID and surveillance video data, in order to provide combined information to applications seamlessly. This process is highly dependent on the hardware capabilities and, more specifically, on the data provided by the RFID reader. If the RFID reader provides tag location information, then the integration is reduced to combining those data with the video feed. Otherwise, methods for location determination based on data like time of arrival, time difference of arrival, angle of arrival, time of flight and received signal strength should be incorporated into the integration process. The integration component can be either a standalone component or part of the middleware or client application.
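As an illustration of the simpler case mentioned above, where the reader already reports a (symbolic) location for each tag, the following sketch associates RFID reads with video detections by zone and time window; the field names and the one-second window are assumptions, not part of the paper.

```python
from dataclasses import dataclass

@dataclass
class TagRead:            # one RFID observation (hypothetical fields)
    epc: str
    zone: str             # symbolic zone reported by the reader/antenna
    timestamp: float      # seconds since some common clock

@dataclass
class VideoDetection:     # one object/person detection from the hemispheric camera
    track_id: int
    zone: str             # image region mapped onto the same symbolic zones
    timestamp: float

def fuse(reads, detections, max_dt=1.0):
    """Attach an RFID identity to each video detection if a tag was read in the
    same zone within `max_dt` seconds; unmatched detections remain anonymous."""
    events = []
    for det in detections:
        epc = next((r.epc for r in reads
                    if r.zone == det.zone and abs(r.timestamp - det.timestamp) <= max_dt),
                   None)
        events.append((det.track_id, det.zone, det.timestamp, epc))
    return events
```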


Figure 2 – An overview of the components of a system integrating RFID with surveillance video data

4.2 Environment Conditions

The conditions of the surrounding environment play a major role in the accuracy of both the RFID and the video apparatus. Although controllable, the environmental conditions in an indoor space may greatly influence the measurements of RFID readers and hemispheric cameras, producing erroneous results. Factors such as temperature, humidity, vibration and shock may alter the location reported by the reader, and lighting conditions may render the camera useless.

Depending on the manufacturer, the typical operating temperature for an RFID inlay ranges from -25º C to 70º C. For demanding applications such as heat sterilization of medical items, there are industrial tags available in the market that will withstand temperatures as high as 250º C. Another, relatively insignificant, factor influencing RFID is humidity, which mostly affects signal scattering. However, perhaps the most important factor to be considered when deploying RFID is the penetration of the signal through metal and liquids. Liquids have the effect of absorbing or reflecting a carrier wave; the degree of this influence is determined by the operating frequency band used. Metallic obstacles also reflect or refract radio signals, especially at ultra high frequencies, which greatly degrades performance in environments containing ferrous materials.

As far as video surveillance is concerned, the most crucial factor that hinders robust identification is time-varying lighting conditions. Specifically, during the course of a day, illumination in a scene changes from natural to artificial lighting, or varies due to weather conditions. Nevertheless, indoor surveillance can mitigate the effects of environmental lighting, since it can be moderately controlled. A highly complex environment (e.g. the existence of multiple objects in a scene) can also decrease automatic surveillance performance. Moreover, by design, a camera CCD inserts noise in the sensor readings. Therefore, discriminating a target (an object or a person) from the background can produce unwanted artifacts, which may render the surveillance procedure ineffective. Finally, an intelligent monitoring system that uses a camera cannot distinguish targets that are occluded, for instance a person hidden behind another.

Regarding environmental conditions, the RFID and video surveillance technologies can complement each other and partially reduce each other's unwanted effects. The RFID technology can robustly and uniquely identify an object, or a person, in a scene. Therefore, object occlusion and spurious artifacts due to noise, lighting conditions and environment complexity can be dealt with.

4.3 Security Issues

The process of integrating RFID with video, besides the potential it introduces, carries along the security challenges met in the use of each of its components separately. Especially when sensitive data are involved, such as in e-health applications, a system integrating RFID with video necessitates the assurance of high availability, integrity, trust and confidentiality. Since most security threats against the RFID component of the system stem from its wireless nature, possible attacks that can be launched against it can generally be classified into layers [34], the most important of which are: (a) physical layer attacks, which involve direct action against the actual equipment, (b) network layer attacks, which involve the interruption of the communication between the RFID reader and the tags, and (c) application layer attacks, which involve the modification of the actual data of the tags, readers, cameras or their underlying systems (such as databases). Another classification can be made in terms of availability, integrity and confidentiality:

4.3.1 Availability

Readiness to provide the desired services when expected is perhaps the most critical challenge in the design process of any system, and it can be lost after some sort of denial of service (DoS) attack against the system as a whole or against one of its individual components. The most common attacks on the availability of the system are physical attacks, jamming of the air interface and masquerading as a reader. Since tags, readers and cameras are exposed in the environment in most application scenarios, a malicious entity can simply destroy them. Physical attacks can be dealt with by armoring the RFID readers and cameras, or by placing the tags in such a way that destroying a tag would be equivalent to destroying the item carrying it. Integrating RFID with video enables each component to act as complementary to the other, thus enabling the identification of a physical attack relatively easily. On the other hand, attacks such as jamming the air interface or masquerading as the reader are more difficult to deal with, since their effects cannot be captured from the video data.

4.3.2 Integrity

In any indoor environment, RFID tags are also exposed to attacks aiming to tamper with their data, such as their unique identification number or the measurement data that are collected by the readers for location determination. Tag and reader masquerading, man-in-the-middle attacks, signal strength corruption, signal synthesis and replaying are the most common attacks.

4.3.3 Confidentiality

Since the fusion of RFID and video can produce sensitive information of extreme importance, such as the location and the condition of an item, it should be available strictly to authorized individuals, entities or processes. Confidentiality can be easily compromised via active or passive eavesdropping. Malicious entities can monitor the message traffic between the tags and the readers and analyze its patterns (passive eavesdropping), or try to crack the security keys exchanged during the process (active eavesdropping).

5. INTEGRATION APPLICATIONS

Answering questions like who, where, when and how in real time is beyond doubt a radical improvement to the efficiency of a large number of applications. The identification of tag-carrying objects, as well as the time of identification, can certainly be provided by RFID, while semantics for events can be added via video together with human intervention. Despite the efforts made so far, the determination of tag location in real time can certainly be improved. Therefore, the combination of RFID and video would greatly contribute to this end and ultimately lay the foundations of the pervasive applications of the future. What differentiates the combined use of RFID and video in various applications is essentially the determination of the events of interest.

An example application of the integration of RFID with video is presented in Figure 3. A hemispheric camera is mounted on the ceiling of the entrance of a building and the antennas of an RFID reader are placed on the two adjacent walls of the entrance. People carrying RFID tags are automatically recognized and logged, while people without tags are photographed for future reference. The following sections describe some scenarios where this kind of integration could be very useful.


Figure 3 – An example of the integration of RFID with a hemispheric video camera in the entrance of a building.

5.1 Event Logging and Behavior Analysis in Internet of Things (IoT) Applications

Events recognized by video processing technologies, linked with identification information provided by RFID tags, can be logged and further processed in order to infer the behavior of the individuals monitored. This application can be extremely useful in building better customer relationships in stores (e.g. a supermarket). By categorizing clients' behavior in conjunction with data mining techniques, products could be rearranged in order to optimize the browsing activity. The identification of clients' behavior could also lead to novel personalized commercial techniques. Similarly, data processing techniques can be applied to exhibitions (e.g. a museum or an art gallery). In this case, the interaction of the visitors with the exhibits can be monitored, enabling the administrators to take actions (e.g. rearranging the exhibits) in order to improve the visitor experience. The combination of video with RFID could also prove useful for storing digital memories, leading to the creation of a digital personal journal that takes into account static knowledge about objects as well as situational observations and historical data acquired through various knowledge sources like the Web.

5.2 Healthcare Environments


The healthcare industry incorporates several sectors dedicated to effectively providing services and products for the prevention or treatment of health incidents of individuals. Whether provided in hospitals, clinics or nursing homes, health services can greatly benefit from tools and applications that incorporate automated identification, location and event detection.

Locating individuals: Determining the location of healthcare personnel, patients or visitors is a primary concern for the immediate reaction to emergency situations. Knowing the relative or symbolic position, or the presence, of healthcare personnel can optimize the response to any call for assistance. Tracking the movement of patients can immediately trigger an alert in the event of a patient leaving the premises or entering a restricted area.

Managing throughput: By optimizing the allocation of personnel, rooms and equipment, phenomena like extended waiting times or overcrowding would be minimized.

Inventorying of supplies and equipment: Since RFID readers can scan multiple tags simultaneously, keeping inventory can be effortless and more efficient. Also, since the level of supplies can be monitored, orders could be placed automatically when supplies reach a critical level, avoiding situations where important medical resources run out; a minimal sketch of such a reorder rule is given after this list.

Avoiding drug peddling and theft of equipment: Smuggling of expensive and illegal substances or drugs is not uncommon in hospitals, even by authorized personnel. By employing RFID tags on objects of interest, events like the removal of an item from its original location, and the persons involved, would be immediately made known to the personnel authorized for that purpose.

Smart cabinets: Patients can place their medications inside smart shelves in their homes or hospital rooms and get notified about the right time to take any of them, as well as about their proper dosages. Patients can even be notified in time when they have to visit their local pharmacy to get additional supplies [33].
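The automatic reorder rule mentioned under inventorying above can be sketched as follows; the item names and critical levels are hypothetical, and the inventory counts are assumed to come from periodic RFID reader scans.

```python
# Hypothetical per-item critical levels for RFID-inventoried hospital supplies.
CRITICAL_LEVEL = {"saline-500ml": 20, "gauze-pack": 50}

def reorder_list(inventory_counts):
    """Return the items whose tagged stock has fallen below its critical level.
    `inventory_counts` maps an item name to the number of distinct tags seen
    in the latest periodic inventory scan."""
    return [item for item, level in CRITICAL_LEVEL.items()
            if inventory_counts.get(item, 0) < level]

print(reorder_list({"saline-500ml": 12, "gauze-pack": 75}))  # -> ['saline-500ml']
```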

5.3 Indoor Surveillance

The combined results of video and RFID tag surveillance can be used to increase security. The integrated surveillance solution can monitor activities in a scene and issue alerts when security has been breached. When identifying people, data provided by RFID can be verified via face detection from the video. In a home application, specific events or event sequences can be detected, in order for the system to automatically change state. For instance, the system can automatically enter security mode in the absence of the residents and return to normal mode on their return. Therefore, on trespassing, the system would record intruder activities and would be able to provide live coverage of the incident.

Moreover, indoor surveillance with both RFID and hemispheric cameras can facilitate the realization of pervasive applications for increased well-being. Automating trivial daily activities, such as turning devices on or off, can be easily accomplished by sensing the closeness of people to objects. This can also optimize energy consumption. Also, by applying personalization techniques, such a system could adapt to the unique needs of its users, thus leading to a truly smart home.

6. CONCLUSIONS

Recent advances in RFID and super wide angle video camera technologies enable the integration of RFID systems and hemispheric imaging video cameras, in order to produce accurate information on the state of their environments and the events taking place inside them. This paper examined the fundamental characteristics of RFID and hemispheric imaging technologies, as well as the factors that influence the feasibility of their integration. After discussing the related work in the fields of automatic identification, real time positioning, video surveillance and their combination, this paper examined the requirements of such an integration in terms of hardware, software and environmental conditions, along with the security challenges it introduces.

Integrating processing capabilities into everyday objects commences the next level of human-computer interaction, where practically everything gains autonomy and human intervention is minimized. The increased availability and evolution of wearable machine-readable tags accelerates the unification of the physical with the digital world and helps to blur the distinctive characteristics of the underlying technologies utilized in a variety of environments.

7. REFERENCES

[1] ITU Internet Reports 2005: The Internet of Things, Genève, ITU, 2005, accessed April 2010 from www.itu.int/osg/spu/publications/internetofthings/
[2] EPCglobal Inc., EPCglobal Standards Overview, accessed April 2010 from http://www.epcglobalinc.org/standards/
[3] International Standards Organization, ISO/IEC 18000, Information technology - Radio frequency identification for item management, April 2010.
[4] Liu, H., Darabi, H., Banerjee, P., Liu, J., "Survey of Wireless Indoor Positioning Techniques and Systems", IEEE Transactions on Systems, Man, and Cybernetics 37(6), pp. 1067-1080, 2007.
[5] W. Hu, T. Tan, L. Wang, and S. Maybank, "A survey on visual surveillance of object motion and behaviors", IEEE Transactions on Systems, Man, and Cybernetics, 34(3), 2004.
[6] I. Lutkebohle, "Capabilities and Limitations of Visual Surveillance", 22nd Chaos Communication Congress, Berlin, Germany, December 27-30, 2005.
[7] Mobotix AG, Hemispheric Q24, accessed April 2010 from http://www.mobotix.com/other/content/view/full/25611
[8] L. M. Ni, Y. Liu, Y. C. Lau, and A. P. Patil, "LANDMARC: Indoor location sensing using active RFID", Wireless Networks, vol. 10, no. 6, pp. 701-710, Nov. 2004.
[9] Guang-yao Jin, Xiao-yi Lu, Myong-Soon Park, "An Indoor Localization Mechanism Using Active RFID Tag", Proceedings of the IEEE International Conference on Sensor Networks, Ubiquitous, and Trustworthy Computing (SUTC'06), Vol. 1, pp. 40-43, June 5-7, 2006.
[10] J. Hightower, R. Want, and G. Borriello, "SpotON: An indoor 3D location sensing technology based on RF signal strength", Univ. of Washington, Seattle, Tech. Rep. UW CSE 2000-02-02, Feb. 2000.
[11] D. Haehnel et al., "Mapping and Localization with RFID Technology", ICRA'04, pp. 1015-1020, April 2004.
[12] M. Jo and H. Y. Youn, "Intelligent recognition of RFID tag position", Electronics Letters, vol. 44, no. 4, pp. 308-310, Feb. 2008.
[13] Teddy Ko, "A survey on behavior analysis in video surveillance for homeland security applications", 37th IEEE Applied Imagery Pattern Recognition Workshop (AIPR), pp. 1-8, 2008.
[14] S. Benhimane, H. Najafi, M. Grundmann, Y. Genc, N. Navab and E. Malis, "Real-time object detection and tracking for industrial applications", International Conference on Computer Vision Theory and Applications, Funchal, Portugal, January 2008.
[15] C. Anagnostopoulos, I. Anagnostopoulos, D. Vergados, G. Kouzas, E. Kayafas, V. Loumos, and G. Stassinopoulos, "High performance computing algorithms for textile quality control", Mathematics and Computers in Simulation, 60:389-400, 2002.
[16] K. Kantola, R. Tenno, "Machine vision in detection of corrosion products on SO2 exposed ENIG surface and an in situ analysis of the corrosion factors", Journal of Materials Processing Technology, Volume 209, Issue 5, pp. 2707-2714, March 2009.
[17] Xiangqian Peng, Youping Chen, Wenyong Yu, Zude Zhou, Guodong Sun, "An online defects inspection method for float glass fabrication based on machine vision", International Journal of Advanced Manufacturing Technology 39(11), pp. 1180-1189, 2008.
[18] Jun Sun, Qiao Sun and Brian Surgenor, "An Adaptive Machine Vision System for Parts Assembly Inspection", Advances in Computational Algorithms and Data Analysis, pp. 185-198, 2008.
[19] T. Torres, J. M. Sebastian, R. Aracil, L. M. Jimenez, and O. Reinoso, "Automated Real-Time Visual Inspection System for High-Resolution Superimposed Printings", Image and Vision Computing, vol. 16, pp. 947-958, 1998.
[20] V. Kastrinaki, M. Zervakis, K. Kalaitzakis, "A survey of video processing techniques for traffic applications", Image and Vision Computing, Volume 21, Issue 4, April 2003.
[21] A. S. Huang, D. Moore, M. Antone, E. Olson, and S. Teller, "Multisensor lane finding in urban road networks", Proceedings of Robotics: Science and Systems, Zurich, Switzerland, June 2008.
[22] C.-N. Anagnostopoulos, I. Anagnostopoulos, V. Loumos, E. Kayafas, "A License Plate-Recognition Algorithm for Intelligent Transportation System Applications", IEEE Transactions on Intelligent Transportation Systems 7(3), pp. 377-392, 2006.
[23] P. Turaga, R. Chellappa, V. S. Subrahmanian, O. Udrea, "Machine recognition of human activities: A survey", IEEE Transactions on Circuits and Systems for Video Technology, vol. 18, no. 11, pp. 1473-1488, 2008.
[24] Z. Zhou, X. Chen, Y. Chung, Z. He, T. Han, J. Keller, "Activity analysis, summarization and visualization for indoor human activity monitoring", IEEE Transactions on Circuits and Systems for Video Technology 18(11), pp. 1489-1498, 2008.
[25] Thomas B. Moeslund, Adrian Hilton, Volker Kruger, "A survey of advances in vision-based human motion capture and analysis", Computer Vision and Image Understanding, Volume 104, Issues 2-3, pp. 90-126, November-December 2006.
[26] Thierry Pun, Patrick Roth, Guido Bologna, Konstantinos Moustakas, and Dimitrios Tzovaras, "Image and Video Processing for Visually Handicapped People", EURASIP Journal on Image and Video Processing, vol. 2007, Article ID 25214, 12 pages, 2007.
[27] R. J. Radke, S. Andra, O. Al-Kofahi, and B. Roysam, "Image change detection algorithms: A systematic survey", IEEE Transactions on Image Processing, vol. 14, no. 3, pp. 294-307, March 2005.
[28] Yilmaz, A., Javed, O., and Shah, M., "Object tracking: A survey", ACM Computing Surveys 38, 4, Dec. 2006.
[29] Hui-Huang Hsu, Zixue Cheng, Tongjun Huang, Qiu Han, "Behavior Analysis with Combined RFID and Video Information", Ubiquitous Intelligence and Computing, pp. 176-181, 2006.
[30] Park, S., Kautz, H., "Hierarchical recognition of activities of daily living using multi-scale, multi-perspective vision and RFID", 4th IET International Conference on Intelligent Environments, Seattle, WA, July 2008.
[31] Wu, J., Osuntogun, A., Choudhury, T., Philipose, M., and Rehg, J. M., "A Scalable Approach to Activity Recognition based on Object Use", Proceedings of the 11th IEEE International Conference on Computer Vision (ICCV'07), pp. 1-8, 2007.
[32] Liu, H., Darabi, H., Banerjee, P., Liu, J., "Survey of Wireless Indoor Positioning Techniques and Systems", IEEE Transactions on Systems, Man, and Cybernetics 37(6), pp. 1067-1080, 2007.
[33] D. Wan, "Magic medicine cabinet: A situated portal for consumer healthcare", Proceedings of the International Symposium on Handheld and Ubiquitous Computing, Karlsruhe, Germany, September 1999.
[34] A. Mitrokotsa, M. R. Rieback and A. S. Tanenbaum, "Classification of RFID Attacks", Proc. Int'l Workshop on RFID Technology, pp. 73-86, 2008.

[25] Thomas B. Moeslund, Adrian Hilton, Volker Kruger, A survey of advances in vision-based human motion capture and analysis, Computer Vision and Image Understanding, Volume 104, Issues 2-3, Special Issue on Modeling People: Vision-based understanding of a person's shape, appearance, movement and behaviour, November-December 2006, Pages 90-126 [26] Thierry Pun, Patrick Roth, Guido Bologna, Konstantinos Moustakas, and Dimitrios Tzovaras, “Image and Video Processing for Visually Handicapped People,” EURASIP Journal on Image and Video Processing, vol. 2007, Article ID 25214, 12 pages, 2007 [27] R. J. Radke, S. Andra, O. Al-Kofahi, and B. Roysam, “Image change detection algorithms: A systematic survey,” IEEE Trans. Image Processing,vol. 14, no. 3, pp. 294–307, March 2005. [28] Yilmaz, A., Javed, O., and Shah, M. 2006. Object tracking: A survey. ACM Comput. Surv. 38, 4 (Dec. 2006) [29] Hui-Huang Hsu, Zixue Cheng, Tongjun Huang, Qiu Han 2006. Behavior Analysis with Combined RFID and Video Information, Ubiquitous Intelligence and Computing (Book). pp. 176-181. [30] Park, S., Kautz, H. (2008). Hierarchical recognition of activities of daily living using multi-scale, multi-perspective vision and RFID. The 4th IET International Conference on Intelligent Environments, Seattle, WA, July 2008 [31] Wu, J., Osuntogun, A., Choudhury, T., Philipose, M., and Rehg, J.M., A Scalable Approach to Activity Recognition based on Object Use. In: Proc. of the 11th Int. IEEE Conference on Computer Vision, ICCV'07, pp. 1-8. (2007) [32] Liu, H., Darabi, H., Banerjee, P., Liu, J.: Survey of Wireless Indoor Positioning Techniques and Systems. IEEE Trans on Systems, Man & Cybernetics 37(6), 1067–1080 (2007) [33] D. Wan. Magic medicine cabinet: A situated portal for consumer healthcare. In Proceedings of the International Symposium on Handheld and Ubiquitous Computing, Karlsruhe, Germany, September 1999. [34] A. Mitrokotsa, M. R. Rieback and A.S. Tanenbaum, “Classification of RFID Attacks”, Proc. Int'l Workshop on RFID Technology, pp. 73-86, 2008.