Concept of data processing in multi-sensor system for perimeter protection

R. Dulski*, M. Kastek, P. Trzaskawka, T. Piątkowski, M. Szustakowski, M. Życzkowski
Institute of Optoelectronics, Military University of Technology, ul. gen. Sylwestra Kaliskiego 2, 00-908 Warsaw, Poland

ABSTRACT

The nature of recent terrorist attacks and military conflicts, as well as the necessity to protect bases, convoys and patrols, gave a serious impetus to the development of more effective security systems. The zone-sensor concepts of perimeter protection widely used so far will be replaced in the near future by multi-sensor systems. Such systems can utilize day/night cameras, uncooled IR thermal cameras as well as millimeter-wave radars detecting radiation reflected from the target. The ranges of detection, recognition and identification for all targets depend on the parameters of the sensors used and on the observed scene itself. Apart from the sensors, the most important elements that influence system effectiveness are intelligent data analysis and a proper data fusion algorithm. A multi-sensor protection system allows a significant improvement of the probability of intruder detection. The concept of data fusion in a multi-sensor system is introduced. It is based on an image fusion algorithm which allows visualizing and tracking intruders under any conditions.

Keywords: IR cameras, perimeter protection, detection range, sensor fusion

1. INTRODUCTION

Security systems for the protection of strategically important objects and critical infrastructure (such as military bases, airfields, harbours, fuel tanks, water reservoirs and the like) have been developed for over 20 years. However, the growing threat of terrorist attacks has given new momentum to the development of effective protection systems able to respond to the increasing capabilities and resources of terrorist groups. Technological progress in both hardware and software makes it possible to create and test security systems that meet all the recent requirements. Devices used for many years in security systems, such as daylight (VIS) cameras with CCD sensors [1], are now paired with low-cost infrared cameras with uncooled microbolometer focal plane arrays. The application of both camera types makes the system less dependent on scene illumination, as the IR camera detects the thermal contrast between an object and its background. Furthermore, it provides day/night observation capability and increases the probability of threat detection, recognition and identification [2-3]. Both camera types are, however, sensitive to weather conditions (e.g. fog water particles not only block the visual spectrum but also reduce the transmission of the atmosphere in the 3-5 μm spectral band) [4-5]. As a result, yet another sensor should be applied, capable of undisturbed operation in harsh weather conditions. This requirement can be met by a microwave ground radar [6]. A system employing complementary devices operating in three different spectral bands (microwave, visual and infrared) is able to operate in any given weather conditions. It also follows the principle of a dual, passive/active intruder detection technique [7].
Another important benefit resulting from the application of cameras operating in different spectral bands is the possibility to apply image fusion and advanced data processing, which leads to automatic intruder detection and tracking.

*[email protected]; phone +48 22 6 839 383; fax +48 22 6 668 8950; wat.edu.pl

2. OPERATING PRINCIPLE OF A SECURITY SYSTEM

The idea of a multi-sensor security system and its effective area of surveillance is presented in Fig. 1 [8].

Figure 1. Idea of multi-sensor system for perimeter protection.

The radar sensor has the longest theoretical detection range, so it should detect the intruder first [9]. During system operation the radar data are constantly updated and possible intruder locations are transferred to the camera control units. The cameras lock on the target and begin tracking it. Recorded video data are analyzed by the system in real time, along with other data that influence the detection process. At the final stage of data processing all available sensor data are combined and the synthesized information is presented in graphic form on the operator's panel. A general view of a single multi-sensor platform utilizing all the aforementioned sensors is presented in Fig. 2.

Figure 2. Multi-sensor platform: radar dome and pan-tilt-zoom camera module (above).

The sensor platform is located at an appropriate height to ensure the desired system parameters, especially the required detection range.

During initial testing of the data fusion algorithms the platform was mounted 6 meters above ground level. At this height the following human-target detection ranges were obtained for the particular sensors: radar sensor – 780 meters, VIS camera – 760 meters, IR camera – 480 meters. The effectiveness and functionality of the system do not depend only on the applied sensors. The implemented control software is equally important, especially the detection algorithms and the graphical user interface (Fig. 3).
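Under simplifying assumptions, the geometry of cueing a camera from a radar plot (slant range and bearing relative to the mast) can be sketched as follows. The function name, the flat-scene model and the pan/tilt convention are illustrative assumptions, not the system's actual control code; only the 6-meter mast height and the 780-meter radar range come from the field test above.

```python
import math

def cue_camera(range_m: float, bearing_deg: float, mast_height_m: float = 6.0):
    """Convert a radar plot (slant range, bearing) into pan/tilt angles
    for a camera mounted on the same mast (hypothetical geometry)."""
    # The pan axis simply follows the radar bearing.
    pan = bearing_deg
    # Ground distance from the mast foot, assuming a flat scene.
    ground = math.sqrt(max(range_m ** 2 - mast_height_m ** 2, 0.0))
    # Negative tilt: the camera looks slightly down from the mast at the target.
    tilt = -math.degrees(math.atan2(mast_height_m, ground))
    return pan, tilt

# A target at the measured 780 m radar detection range, bearing 45 degrees.
pan, tilt = cue_camera(780.0, 45.0)
```

At these ranges the tilt correction is a fraction of a degree, which is why mast height mainly affects line-of-sight coverage rather than pointing.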

Figure 3. A snapshot of the system operator's console: digital map with overlaid actual camera observation zones and live feed from the video camera (bottom right).

The system control software (data transfer between sensors, control, visualization) is based on the Nexus software package developed by FLIR Systems [13]. The pure Nexus system does not provide all the functions required by the described security system. Image fusion, the target detection and tracking algorithms and the final data synthesis are performed by separate, dedicated software. It also performs internal system testing and reports all detected malfunctions of key system components. The control software also supervises the final information presented to the system operator; it reduces the amount of information by removing redundant or insignificant data and automatically triggers the alarms.

3. CONCEPT OF DATA SYNTHESIS IN A SECURITY SYSTEM

Effective exploitation (by a system operator) of all the information available in all data channels requires effective synthesis of the sensor data [14-16]. The final effectiveness of a multi-sensor system should be (and usually is) better than that of a simple set of individual, independent sensors. The real benefit of a multi-sensor setup is achieved only when the sensors provide information complementary to each other. A multi-sensor system diminishes the possibility of losing a target track due to loss of signal (e.g. when one sensor loses sight of a target, the others may still see it). Another example of complementary sensor operation results from the different spectral bands and operating principles of the particular sensors. For example, a radar sensor and an IR camera have different properties (e.g. field of view, spectral band, spatial resolution). In a multi-sensor setup an omni-directional sensor (radar) can be used to detect the target and then cue the high-resolution camera for final target identification. As already mentioned, the effectiveness of particular sensors depends on many factors, such as weather conditions, background properties, distance and countermeasures used by an intruder. It may happen that a single sensor has to operate in conditions far from optimal. The application of different sensors, each influenced differently by these external conditions, assures constant proper system operation. Passive IR sensors (and VIS cameras) do not provide distance information; only angular dimensions can be extracted. The combination of a radar sensor and a camera gives both the distance and the target size data.

Data synthesis is a multi-layer process [17]. In the case of the perimeter protection system, initial data processing has to be performed and the target status has to be determined. The estimation of target status comprises the characterization of the object itself and of its movement. Object characterization is a process of data evaluation (not only of sensor signals) that leads to complete target recognition: detection, direction, classification and identification [18]. Detection is defined as confirmation of the presence of the target. Classification means that the detected target is assigned to one of the pre-determined classes of objects (e.g. human, tank, APC, truck). The identification level is achieved when a precise object description within its class can be made (human: armed assailant; tank: light). A higher level of target discrimination imposes higher requirements on sensor resolution and on the signal-to-noise ratio at its output. The application of particular data synthesis algorithms in a multi-sensor security system depends on the overall system concept of operation. For example, object tracking usually relies on raw sensor data processing performed by a central system processing unit, whereas identification algorithms use complex data analysis at every functional level of the system.
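The discrimination levels described above form a strict hierarchy, which could be represented, for illustration only, by a simple data structure; the type and field names below are assumptions, not part of the system's software.

```python
from dataclasses import dataclass
from enum import IntEnum
from typing import Optional

class DiscriminationLevel(IntEnum):
    # Higher levels impose higher demands on sensor resolution and SNR.
    DETECTION = 1        # presence of the target confirmed
    CLASSIFICATION = 2   # target assigned to a pre-determined class
    IDENTIFICATION = 3   # precise description within the class

@dataclass
class TargetStatus:
    level: DiscriminationLevel
    target_class: Optional[str] = None   # e.g. "human", "tank", "APC", "truck"
    detail: Optional[str] = None         # e.g. "armed assailant", "light"

# A target that has been classified but not yet identified.
status = TargetStatus(DiscriminationLevel.CLASSIFICATION, target_class="human")
```

Using an ordered enum makes the "higher level, higher requirements" relation directly comparable in code.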

Figure 4. Concept of data flow in a multi-sensor system.

The general scheme of operation is presented in Fig. 5; the main stages and algorithms of data synthesis in the proposed perimeter protection system are described below.

[Diagram: radar DETECTION of the INTRUDER (max. 1000 m) provides DIRECTION data; radar location, GPS and image data enter DATA FUSION; the IR and VIS channels undergo IMAGE FUSION; IDENTIFICATION (optionally in the image) is performed at ranges up to 100 m.]

Figure 5. Concept of sensor data synthesis using radar, VIS and IR cameras.

When the radar sensor detects a possible target [6], the range and bearing data are used to direct the cameras mounted on the sensor platform. The image analysis module matches the fields of view of the IR and VIS cameras (Fig. 6). At this moment the image fusion process (synthesis of images) is initiated.

Figure 6. Snapshot of the control software screen showing the digital map of the protected area with the camera location, the fields of view of the system cameras (blue area – VIS camera, red area – IR camera) and the corresponding live images.

It can be seen from the above image (Fig. 6) that the cameras have different fields of view and resolutions. The synthesis of such images can be performed in the following way [18-20]:

f̃ = f_i^VIS ⊕ f̃_i^IR(s, θ, t),   (1)

where f̃_i^IR(s, θ, t) is the processed image f_i^IR from the IR camera after the following transformations: resizing by a factor s, shifting by a vector t and rotating by an angle θ. In order to overlay the two images, the values of s, t and θ have to be calculated, as well as the synthesis operator ⊕. When the aforementioned coefficients are determined, the image synthesis is performed using the discrete wavelet transform (DWT) [14, 17]. For a visual image I_VIS and an infrared image I_IR, the DWT-based image synthesis algorithm can be described as [21]:

f = ω⁻¹(φ(ω(I_VIS), ω(I_IR))),   (2)

where ω is the DWT, ω⁻¹ is the inverse DWT, φ is a certain rule of image synthesis and f is the final synthesized image. Every image is analyzed and, as a result, areas in which the presence of a target is suspected are marked. The IR image is "binarized" using pre-defined detection thresholds, which yields a mask distinguishing certain objects. The mask is created by assigning a high brightness level (white) to any area above the detection threshold and a low level (black) to the rest of the image. In order to distinguish a target from the background, the detection threshold (a temperature value) is calculated adaptively; objects with temperatures lower than the threshold are treated as background. Algorithms defining the threshold brightness value are, in the case of visual images, usually based on histogram analysis. For thermal images the application of the probability density function describing the occurrence of certain temperature values is more appropriate. This function is discretized in the temperature domain, yielding a histogram of the occurrence of certain temperature ranges. During the initial tests the histogram-based analysis was not yet applied; an arbitrary threshold at 60% of the dynamic range was used instead. In the next step of image processing a chosen color is attributed to the target mask from the IR image [8] and the mask is overlaid onto the visual image (Fig. 7).

Figure 7. Concept of image synthesis: thermal image (left), temperature threshold mask (middle) and VIS image with overlaid mask (right).
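A minimal sketch of the binarization and overlay steps described above, using the arbitrary 60%-of-dynamic-range threshold from the initial tests; the function names and the toy frames are illustrative assumptions, not the system's implementation.

```python
def threshold_mask(ir, fraction=0.6):
    """Binarize an IR frame: pixels above `fraction` of the frame's dynamic
    range become white (255, suspected target), the rest black (0)."""
    lo = min(min(row) for row in ir)
    hi = max(max(row) for row in ir)
    thr = lo + fraction * (hi - lo)
    return [[255 if px > thr else 0 for px in row] for row in ir]

def overlay(vis, mask, color=(255, 0, 0)):
    """Paint masked pixels of a grayscale VIS frame with a marker color."""
    return [[color if m == 255 else (v, v, v)
             for v, m in zip(vrow, mrow)]
            for vrow, mrow in zip(vis, mask)]

# Toy 3x3 frames: the warm intruder occupies the centre pixel.
ir  = [[20, 22, 21], [23, 95, 24], [22, 21, 20]]
vis = [[100, 100, 100], [100, 40, 100], [100, 100, 100]]
mask = threshold_mask(ir)     # only the hot pixel exceeds 60% of the range
fused = overlay(vis, mask)    # that pixel is painted red in the VIS frame
```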

The final image is created by appropriately merging the image obtained according to the above procedure with the IR image (Fig. 8).


Figure 8. Final result of image fusion algorithm: IR image, VIS image and the result of image fusion.
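The DWT-based synthesis of equation (2) can be sketched with a one-level Haar transform. The max-magnitude rule used for φ, the even image dimensions and the function names are assumptions for illustration; the paper leaves the wavelet and the fusion rule φ unspecified [14, 17].

```python
def _fwd(v):
    """1-D Haar step: averages (approximation) followed by differences (detail)."""
    h = len(v) // 2
    return ([(v[2*i] + v[2*i+1]) / 2 for i in range(h)] +
            [(v[2*i] - v[2*i+1]) / 2 for i in range(h)])

def _inv(v):
    """Inverse of _fwd: rebuild the pixel pairs from (average, difference)."""
    h, out = len(v) // 2, []
    for a, d in zip(v[:h], v[h:]):
        out += [a + d, a - d]
    return out

def _separable(img, f):
    """Apply a 1-D transform to every row, then to every column."""
    rows = [f(list(r)) for r in img]
    cols = [f(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

def fuse(vis, ir):
    """Eq. (2): f = inverse-DWT of phi(DWT(I_VIS), DWT(I_IR)), where phi keeps,
    per coefficient, the value of larger magnitude (an assumed rule)."""
    wv, wi = _separable(vis, _fwd), _separable(ir, _fwd)
    merged = [[a if abs(a) >= abs(b) else b for a, b in zip(ra, ri)]
              for ra, ri in zip(wv, wi)]
    return _separable(merged, _inv)
```

Because the row and column transforms are separable and linear, applying the inverse step to rows and then columns exactly undoes the forward pass, so a frame fused with itself is recovered unchanged.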

4. INTRUDER TRACKING

Two tracking algorithms were implemented in the presented security system [22-24]. The first is a standard motion detection algorithm working in the visual data channel (Fig. 9), provided by the Nexus system.

Figure 9. Control software screen with marked motion detection area and event log window.

The second algorithm uses IR image data and was created and implemented to enhance the capabilities of the presented system. This algorithm will be described in detail. A multi-sensor security system must have a minimal, guaranteed reaction time, and in this respect it can be treated as a real-time system. For this reason, during the research, emphasis was put on implementing a real-time tracking method. Tracking methods that meet this requirement include the feature-based Mean-Shift algorithm and the gradient-based Sum-of-Squared-Differences (SSD) algorithm. During simulation tests it was found that the Mean-Shift algorithm is not effective when the target occupies a small number of pixels (as often happens in IR imaging). As a result, the SSD tracking algorithm was finally implemented in the system. The SSD algorithm finds the target location by analyzing the differences between two subsequent frames. The changes in target position are estimated by calculating spatial and temporal gradients. The SSD coefficient defines the difference between two image fragments. Both fragments have to be the same size and are usually rectangular. Assuming that two image fragments (further referred to as windows) have dimensions (2h+1) by (2h+1) and their centers are at coordinates (x, y) and (u, v) respectively, their SSD coefficient can be calculated according to the following equation:

SSD = ∑_{i,j} [f_{n−1}(x + i, y + j) − f_n(u + i, v + j)]²,   (3)

where i, j ∈ [−h, h] denote the location of a point with respect to the centers of both compared fragments, and h is a coefficient representing the size of the object. The presented algorithm has good short-term (frame-by-frame) efficiency, but its long-term effectiveness may be compromised, because small disturbances, target occlusions, collisions or noise may disturb the tracking. The test results show, however, that this algorithm can be successfully applied for target tracking in IR images. Sample tracks of an object on real IR image data are presented in Fig. 10.

Figure 10. Intruder tracking by SSD algorithm.
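The SSD search of equation (3) can be sketched as follows. This is a toy, exhaustive-search illustration; the window size h, the search radius and the function names are assumptions, not details from the system.

```python
def ssd(prev, curr, x, y, u, v, h):
    """SSD coefficient of eq. (3) between a (2h+1)x(2h+1) window centred
    at (x, y) in the previous frame and at (u, v) in the current one."""
    return sum((prev[y + j][x + i] - curr[v + j][u + i]) ** 2
               for i in range(-h, h + 1) for j in range(-h, h + 1))

def track(prev, curr, x, y, h=1, search=2):
    """Re-locate the target: the window position in the current frame that
    minimises the SSD within a small search area around (x, y)."""
    candidates = [(u, v) for u in range(x - search, x + search + 1)
                         for v in range(y - search, y + search + 1)]
    return min(candidates, key=lambda p: ssd(prev, curr, x, y, p[0], p[1], h))

# Toy 7x7 frames: a single hot pixel moves one step to the right.
prev = [[0] * 7 for _ in range(7)]; prev[3][3] = 9
curr = [[0] * 7 for _ in range(7)]; curr[3][4] = 9
new_pos = track(prev, curr, 3, 3)   # the tracker recovers the shift: (4, 3)
```

The frame-by-frame nature of the minimisation is also what makes the method vulnerable to the occlusions and noise mentioned above: a single bad match shifts the window centre used for the next frame.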

Additionally, in order to ensure high quality of the thermal images, a special algorithm for IR image enhancement was implemented. Its operation will be described in the next chapter.

5. ENHANCEMENT OF INFRARED IMAGES

Specific features of thermal images make them difficult for a human operator to perceive [21]. The interpretation of such an image can be very subjective, greatly influenced by the thermal properties of the observed objects and background. Even optimal IR camera settings for a given observation (focus, temperature range, picture framing) do not guarantee successful target detection. This problem can be significantly reduced by applying one of the image enhancement methods [25]. Methods that work well for visual images (e.g. histogram-based ones) do not usually render satisfactory results for thermal images. Much better results can be obtained using methods based on adaptive modifications of the image histogram. The method applied in the presented system uses an algorithm which compresses those intensity levels that rarely occur in the processed image. This is achieved by conditionally treating similar intensity levels as identical. In the first step the image histogram is calculated and the threshold value for the compression procedure is determined. Compressed intensity levels are transformed into new values using a look-up table (LUT). Then the histogram is stretched in order to fully utilize the available tonal range. As a result, the intensity levels in the final image are more uniformly distributed. The effect of image enhancement according to the above method is presented in Fig. 11 [26, 27].

Figure 11. Thermal image before (left) and after the application of image enhancing algorithm (right).
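The histogram compression and stretching described above might be sketched as follows. The threshold rule (a fixed fraction of the histogram peak) and the merge-into-neighbour LUT are assumed details, since the paper does not specify how the compression threshold is determined.

```python
def enhance(img, levels=256, thr_frac=0.05):
    """Adaptive histogram compression sketch: rarely occurring intensity
    levels are merged with their neighbours via a LUT, then the remaining
    levels are stretched over the full tonal range."""
    flat = [px for row in img for px in row]
    hist = [0] * levels
    for px in flat:
        hist[px] += 1
    thr = thr_frac * max(hist)   # compression threshold from the histogram
    # Build the LUT: frequent levels get a new slot, rare ones reuse the last.
    lut, new = [0] * levels, 0
    for lv in range(levels):
        if hist[lv] > thr:
            new += 1
        lut[lv] = new
    # Stretch the compressed levels to fully utilize the tonal range.
    lo = min(lut[px] for px in flat)
    hi = max(lut[px] for px in flat)
    span = max(hi - lo, 1)
    return [[(lut[px] - lo) * (levels - 1) // span for px in row] for row in img]

# Toy frame with two occupied levels: they end up at opposite ends of the range.
enhanced = enhance([[10, 10, 200], [10, 200, 200]])
```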

The final step of the data synthesis process performed in the presented security system is referencing the radar and GPS data to the digital map of the monitored area.

6. CONCLUSIONS

The application of data fusion techniques significantly improves the functional parameters of a security system. Day and night capability and the fusion of IR and VIS images make the resulting image clearer and easier for a human operator to comprehend [11]. Detected targets can be easily marked and tracked. The applied enhancement of IR images additionally improves the image quality, especially in harsh weather conditions. Effective motion detection and target tracking capabilities allow for the constant monitoring of intruder activity and choosing the best course of action. The integrated control center, consisting of a computer, displays and software, provides the operator with raw and fused sensor data, and thanks to a simple graphical user interface the system handling is easy and intuitive (Fig. 12). The system design based on the Nexus architecture [13] allows for scalability and easy sensor and system integration.

Figure 12. Integrated control center of a multi-sensor security system.

The data fusion method developed for the proposed security system is a universal one and can be applied in any system for perimeter protection. Its application in a specific system requires, however, some optimization, taking into account the choice of sensors, the operating conditions and the expected efficiency. This paper presents the results of a research project financed from the state science budget for the years 2010 – 2012.

REFERENCES

[1] Szustakowski M., Ciurapinski W. M., Życzkowski M., "Trends in optoelectronic perimeter security sensors", Proc. SPIE 6736 (2007).
[2] Życzkowski M., Szustakowski M., Kastek M., Ciurapiński W. M., Sosnowski T., "Module multisensor system for strategic objects protection", WIT Transactions on Information and Communication Technologies, Vol. 42, 123-132 (2009).
[3] Dulski R., Szustakowski M., Kastek M., Ciurapiński W., Trzaskawka P., Życzkowski M., "Infrared uncooled cameras used in multi-sensor systems for perimeter protection", Proc. SPIE Vol. 7834, 783416 (2010).
[4] Kastek M., Sosnowski T., Piątkowski T., "Passive infrared detector used for detection of very slowly moving or crawling people", Opto-Electronics Review 16 (3), 328-335 (2008).
[5] Madura H., "Method of signal processing in passive infrared detectors for security systems", WIT Transactions on Modelling and Simulation 46, 757-768 (2007).
[6] Baker C. J., Griffiths H. D., "Bistatic and Multistatic Radar Sensors for Homeland Security", www.natoasi.org/sensors2005/papers/baker.pdf.
[7] Cory P. et al., "Radar-Based Intruder Detection for a Robotic Security System", www.nosc.mil/robots/pubs/spie3525b.pdf.
[8] Ciurapiński W., Dulski R., Kastek M., Bieszczad G., Trzaskawka P., "Data fusion concept in multispectral system for perimeter protection of stationary and moving objects", Proc. SPIE Vol. 7481, 748111 (2009).
[9] Dulski R., Kastek M., Bieszczad G., Trzaskawka P., Ciurapiński W., "Data fusion used in multispectral system for critical protection", WIT Transactions on Information and Communication Technologies, Vol. 42, 165-173 (2009).
[10] Holst C., "Testing of infrared imaging systems", JVC, New York (1995).
[11] Dulski R., Niedziela T., "Verification of the correctness of thermal imaging modeling", Optica Applicata Vol. XXXI No 1, 193-202 (2001).
[12] Dulski R., Madura H., Piątkowski T., Sosnowski T., "Analysis of a thermal scene using computer simulations", Infrared Physics & Technology 49, 257-260 (2007).
[13] "NEXUS FLIR Networked Systems", www.flir.com.
[14] Klein L. A., "Sensor and Data Fusion. Concepts and Applications", SPIE (1993).
[15] Steinberg A., "Sensor and Data Fusion", The Infrared & Electro-Optical Systems Handbook, Vol. 8, Bellingham (1993).
[16] Szustakowski M., Ciurapinski W. M., Życzkowski M., Palka N., Kastek M., Dulski R., Bieszczad G., Sosnowski T., "Multispectral system for perimeter protection of stationary and moving objects", Proc. SPIE Vol. 7481, 74810D (2009).
[17] Hall D. L., Llinas J., "An introduction to multisensor data fusion", Proc. IEEE, Vol. 85 No. 1, 6-23 (1997).
[18] Lipton A. et al., "Critical Asset Protection, Perimeter and Threat Detection Using Automated Video Surveillance", www.objectvideo.com.
[19] Riley T., Moira S., "Image Fusion Technology for Security and Surveillance Application", Proc. SPIE 6402 (2006).
[20] Smith M., Heather J. P., "Review of Image Fusion Technology in 2005", Proc. SPIE 5782 (2005).
[21] Lipton A. et al., "Moving Target Detection and Classification from Real-Time Video", Proc. IEEE Workshop on Applications of Computer Vision (1998).
[22] Isard M., Blake A., "Contour tracking by stochastic propagation of conditional density", European Conference on Computer Vision, 343-356 (1996).
[23] Hager G., Belhumeur P., "Efficient region tracking with parametric models of geometry and illumination", IEEE Trans. Pattern Anal. Mach. Intell. 20 (10), 1025-1039 (1998).
[24] Fukunaga K., "Introduction to Statistical Pattern Recognition", 2nd ed., Academic Press (1990).
[25] Dulski R., "Enhancement of the quality of IR images", Proc. of the Advanced Infrared Technology and Applications AITA 9, Leon, 271-274 (2008).
[26] Dulski R., Sosnowski T., Kastek M., Trzaskawka P., "Enhancing image quality produced by IR cameras", Proc. SPIE Vol. 7834, 783415 (2010).
[27] Życzkowski M., Szustakowski M., Ciurapiński W., Pałka N., Kastek M., "Integrated optoelectronics security system for critical infrastructure protection", Przegląd Elektrotechniczny, Vol. 86 (10), 157-160 (2010).
