
Automatic decision support in heterogeneous sensor networks

Robert Kozma*a, Tim Tanigawaa, Orges Furxhia,b, Sergi Consula,b

a Center for Large-Scale Integrated Optimization and Networks (CLION), The University of Memphis, 310 FedEx Institute of Technology, 365 Innovation Drive, Memphis, TN 38152, USA
b Department of Electrical & Computer Engineering, The University of Memphis, Memphis, TN 38152, USA

ABSTRACT

There is a need to model complementary aspects of various data channels in distributed sensor networks in order to provide efficient tools of decision support in rapidly changing, dynamic real life scenarios. Our aim is to develop an autonomous cyber-sensing system that supports decision making based on the integration of information from diverse sensory channels. Target scenarios include dismounts performing various peaceful and/or potentially malicious activities. The studied test bed includes a Ku band high bandwidth radar for high resolution range data and a K band low bandwidth radar for high Doppler resolution data. We embed the physical sensor network in the cyber network domain to achieve robust and resilient operation in adversary conditions. We demonstrate the operation of the integrated sensor system using artificial neural networks for the classification of human activities.

Keywords: Sensor network, Cyber sensing, Radar imaging, Dismount, Information integration, Decision support, Neural network.

1. INTRODUCTION

There is a need to model complex interactions in distributed sensor networks in order to provide efficient tools of decision support in rapidly changing, dynamic real life scenarios. In practical terms, decisions must be made rapidly based on the combination of diverse sensory channels, which presents formidable challenges to efficient decision support. Hence, we experience the paradoxical situation of being able to sense various events in physical space without having the capability of processing them efficiently to support robust decisions in cyber space. Optimally functioning sensor networks should not be rigid; rather, they should allow flexible adaptation to changing conditions in real time. Networks with hub structure are widespread in the cyber domain and have the important advantage of being resistant to random attacks. They have, however, their Achilles' heel; namely, well-targeted malicious attacks aimed at their major hub nodes can paralyze the operation of the overall network. Resilient sensor networks cannot entirely depend on any predefined network hierarchy. On the contrary, the desired sensor network should be reconfigurable, combining short-range connections characterizing localized events with medium- and long-range connections responsible for large-scale synchronous responses across the overall system. Such networks have been studied in computer systems, the world-wide-web, communication networks, and various areas of society [1-4]. These technological advances have important implications for the development of resilient sensor networks. There is a broad range of research activities in various academic and government laboratories on sensor networks involving advanced radar imaging, especially related to the characterization of human activities in various real life scenarios [5-7]. In this work we address the question of the basic building block of such a sensor network, i.e., the interactive pair of two sensory channels with diverse sensory modalities.

*[email protected]; phone +1-901-678-4968; fax +1-901-678-2480; www.clion.memphis.edu

The present paper is organized as follows. We start with the description of the employed sensors, including the high-resolution radar imaging facility ARSIL. Next we describe the experimental setup of our sensors and introduce the applied signal processing tools and algorithms for decision support to characterize the observed human dismount activity. The studied sensors include a Ku band high bandwidth radar and a K band low bandwidth radar system. The first sensor provides high resolution range data and the second provides high resolution Doppler data. The developed system can be supplemented in the future with additional sensors and sensor modalities, including optical cameras, acoustic sensors, infrared sensors, and others [8-10]. In the following section, we introduce the classification results of experiments with the various sensors. Finally, we analyze the complementary features of the various sensors and discuss their role in producing a robust, integrated decision support system in the cyber domain.

2. DESCRIPTION OF THE EXPERIMENTAL SENSOR SYSTEMS

2.1 Radar imaging facility

The Advanced Radar Imaging and Sensor Integration Laboratory (ARSIL) is a newly established facility that is part of the University of Memphis Center for Large-Scale Integrated Optimization and Networks (CLION). It was made possible by a partnership between the Air Force Research Laboratory (AFRL), Sensors Directorate, Dayton, OH and the University of Memphis. ARSIL conducts research aimed at developing pervasive sensor systems capable of monitoring the movement of humans and vehicles in challenging recognition situations with high noise and clutter. The facility includes an anechoic radar chamber capable of radar cross section (RCS) and radar imaging measurements. The chamber can be used for device testing and algorithm development. It includes Ku-band and X-band radar systems, Agilent network analyzers (1-50 GHz), and Scientific Atlanta azimuth and elevation positioners. The reconfigurable anechoic chamber has dimensions of 16 ft x 36 ft and a ceiling height of 9 ft. The reconfigurable feature allows simulation of uncluttered, semi-cluttered, and cluttered environments. Figure 1 shows some images of the radar imaging chamber.

Figure 1. Snapshots of the radar imaging facility; (left) transmitter end of the chamber with reconfigurable absorber panels; (right) Chamber wall and target area in the back.

2.2 High bandwidth radar system

The high bandwidth radar system operates in the Ku band and is a frequency modulated continuous wave (FMCW) radar. The frequency is swept across 4 GHz of bandwidth starting from 13.4 GHz. The radar system is implemented using an HP8510 network analyzer. A functional diagram of this system is shown in Figure 2. During the frequency sweep, the I and Q channels of the vertically polarized receiver are sampled with 401 points. The sweep time is set to the minimum available on the 8510, 92 ms. The data for each sweep is transferred to a computer through the GPIB data bus. After each transfer the data is time stamped with the computer system time. Data from a new sweep is acquired every 342 ms. This long time between sweeps makes the collected data unsuitable for Doppler processing; however, the data provides high range resolution information.

Figure 2. Functional diagram of the high bandwidth FMCW radar system implemented in an HP8510 network analyzer.
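As described in Section 4.1 below, range profiles are obtained by Fourier transforming the I and Q data of each sweep. The following minimal sketch (not the authors' code; the array name iq and the Hamming window are assumptions) illustrates how one 401-point sweep over the 4 GHz bandwidth maps to a range profile with roughly 3.75 cm bins.

```python
# Minimal sketch: range profile from one swept-frequency sweep of the Ku-band
# radar. Assumes `iq` is a length-401 complex array of I/Q samples collected
# across the 4 GHz sweep; names and the window choice are illustrative.
import numpy as np

C = 3e8           # speed of light, m/s
BANDWIDTH = 4e9   # swept bandwidth, Hz
N_POINTS = 401    # samples per sweep

def range_profile(iq: np.ndarray):
    """Return (range_axis_m, magnitude) for one frequency sweep."""
    # A window reduces range sidelobes; Hamming is an assumed, common choice.
    windowed = iq * np.hamming(len(iq))
    profile = np.abs(np.fft.ifft(windowed))
    # Range bin spacing c / (2B) gives ~3.75 cm for B = 4 GHz.
    dr = C / (2 * BANDWIDTH)
    ranges = np.arange(len(iq)) * dr
    return ranges, profile

# Example with synthetic data standing in for a recorded sweep.
iq = np.exp(1j * 2 * np.pi * np.random.rand(N_POINTS))
r, p = range_profile(iq)
print(f"range resolution: {C / (2 * BANDWIDTH) * 100:.2f} cm")
```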

2.3 Low bandwidth radar system

The low bandwidth radar system operates in the K band and is an FMCW radar. At the core of this radar system is a K-band radar transceiver from MACOM (MACS-007802-0M1R1V) with a 10 dB horn antenna. The transceiver is a Gunnplexer with an electronically controlled bandwidth of 300 MHz centered around 24.125 GHz. The system consists of the transceiver, a ramp generator, amplifier and filtering stages, and a data acquisition card. A functional diagram of this system is shown in Figure 3.

Figure 3. Functional diagram of the low bandwidth FMCW radar system implemented with the MACOM transceiver.

The ramp signal ranges from 0.5 V to 10 V and its frequency is 1 kHz. The filtering and amplifying stages condition the baseband signal from the receiver for sampling by the data acquisition card. The data acquisition is synced to the ramp generator, and for each ramp 200 data points are collected and time stamped with the computer's system time. The relatively high frequency of the ramp signal allows for range and Doppler processing of the signal. However, for the experiments we conducted, only the Doppler data contains useful information, since the range resolution is low due to the low RF bandwidth. The system described above is a bench-top system and was used to collect the data for this paper because it offers reconfigurable parameters such as modulation and sampling. The researchers at CLION have developed a similar system that they call the pocket radar.

The pocket radar is a short range 24 GHz radar system for range and speed measurements. The system is capable of integrating other sensors that detect vibration, infrared, or light [8,10]. The current system provides a highly integrated and autonomous pocket radar. It has a K-band MACOM radar transceiver with a miniature horn antenna, integrated with a custom-designed board for power management, signal conditioning, and modulation. Wireless data transmission is performed using an EasySen board, see Figure 4. The pocket radar operates on a 9 V battery with a current consumption of 200 mA in active mode. Additional sensors can be added (infrared, vibration, light, etc.) for distributed multi-sensory monitoring. Its physical dimensions are 4 x 1.5 x 2 inches, and it weighs 350 g.

Figure 4. Pocket radar photo illustrating the K-band radar transceiver with horn antenna, EasySen wireless transmission board, and custom-designed power management and signal processing board. The length of the unit as shown is 4 inches.

Table 1. Parameters of the pocket radar

Parameter                            Value
Frequency                            24.125 GHz
Bandwidth                            300 MHz
Waveform                             FMCW (chirp)
Pulse Repetition Frequency (PRF)     1000 Hz
Samples per Pulse                    118
Number of Chirps                     8
Coherent Integration Time (CIT)      8 ms
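The following short sketch computes quantities implied by the Table 1 parameters using standard FMCW relations; these derived values are our own illustration and are not listed in the paper.

```python
# Sketch: quantities implied by the Table 1 parameters, using standard FMCW
# relations (the paper does not list these derived values explicitly).
C = 3e8                 # speed of light, m/s
FC = 24.125e9           # center frequency, Hz
B = 300e6               # swept bandwidth, Hz
PRF = 1000.0            # chirps per second, Hz
CIT = 8e-3              # coherent integration time, s (8 chirps at 1 kHz)

wavelength = C / FC                     # ~1.24 cm
range_res = C / (2 * B)                 # 0.5 m, the "low" range resolution noted in the text
velocity_res = wavelength / (2 * CIT)   # ~0.78 m/s for an 8 ms CIT
v_max = wavelength * PRF / 4            # ~3.1 m/s maximum unambiguous velocity

print(f"wavelength          : {wavelength * 100:.2f} cm")
print(f"range resolution    : {range_res:.2f} m")
print(f"velocity resolution : {velocity_res:.2f} m/s")
print(f"max unambiguous velocity: {v_max:.2f} m/s")
```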

The pocket radar is undergoing a revision, and future functionality will include reconfigurable modulation, synchronized sampling, reconfigurable sampling rates, improved signal conditioning, integrated wireless, and on-board signal processing. With this added functionality the pocket radar can serve as a node in a distributed sensor network.

3. EXPERIMENTAL SETUP

3.1 Activities

We conducted a series of experiments in the semi-anechoic chamber to detect and classify different human activities. We recorded four types of movements: striking, patrolling, pointing a gun, and throwing an object. A description of the activities is provided in Table 2. The activities were performed by four subjects. The subjects continuously changed their position slightly with respect to the sensors during their performance to create additional data diversity. Data was collected with both radar systems simultaneously, and the two data streams were registered offline using the timestamps in the data. Each subject performed the same activity continuously for approximately seven minutes, during which time 1250 data sequences were recorded.

Table 2. Description of the performed activities

Activity              Description
Striking              Swinging a hammer in the up and down directions
Patrolling            Walking back and forth
Pointing a gun        Aiming a gun at an object while standing
Throwing an object    Swinging of the arm as if to throw an object

3.2 Sensor placement

A schematic for the placement of the sensors is shown in Figure 5. Besides capturing different sensory modes, the sensors were also placed in such a way that each captures a different aspect of the activity. This simulates the distributed nature of the network. The data was time stamped to simulate the GPS synchronization of a distributed sensor network.

Figure 5. Placement of the sensors relative to each other and the performance area
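The two radars were not hardware-synchronized; as noted above, their data streams were registered offline using computer timestamps. A minimal sketch of one way such nearest-timestamp registration could be implemented is shown below (function and variable names are illustrative, not the authors' code).

```python
# Minimal sketch (assumed implementation): offline registration of two
# timestamped sensor streams by nearest-timestamp matching.
import numpy as np

def register_streams(t_range, t_doppler, max_offset=0.342):
    """For each range-radar timestamp, find the closest Doppler-radar frame.

    t_range, t_doppler : 1-D arrays of timestamps in seconds (computer clock).
    max_offset         : reject pairs farther apart than one 342 ms frame.
    Returns a list of (range_index, doppler_index) pairs.
    """
    pairs = []
    for i, t in enumerate(t_range):
        j = int(np.argmin(np.abs(t_doppler - t)))
        if abs(t_doppler[j] - t) <= max_offset:
            pairs.append((i, j))
    return pairs

# Example: two clocks ticking every ~342 ms with a small relative offset.
t_a = np.arange(0, 10, 0.342)
t_b = np.arange(0.05, 10, 0.342)
print(register_streams(t_a, t_b)[:3])
```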

4. SIGNAL PROCESSING AND COMPUTATIONAL APPROACH

4.1 Data acquisition and signal preprocessing

The range data radar has a bandwidth of 4 GHz, providing a range resolution of 3.75 cm. This range resolution is sufficient for capturing most human activities. For example, during the striking activity the size of the human varies from 40 cm (arms level with the body) to 1 m (arms extended away from the body). Range data is obtained from the raw I and Q channel radar data by Fourier transforming the data. The sweep time for the data collection is 92 ms and the data transfer time to the computer is approximately 250 ms. Therefore, approximately three sweeps can be acquired per second. If the range data is plotted versus time, each activity displays distinct features. However, because of the low "frame rate" of 3 Hz, many motions are not detected. In addition, because of the relatively long sweep time of 92 ms, motions faster than about 10 Hz are averaged within a sweep and their features are lost. Range data vs. time are shown in Figures 6.a through 6.d for all four activities.

The Doppler data radar has a sweep time of 1 ms. If 93 sweeps are collected and Doppler data is extracted, the velocity resolution is 0.0675 m/s. This velocity resolution is sufficient to capture most human motions during the activities of interest; for example, the maximum velocity measured during our data collections was less than 2 m/s (see Figures 6.e through 6.h). The number of sweeps was chosen to match the sweep time of the range data radar. For the same reason, 93 sweeps were recorded every 342 ms. Because of this, when the Doppler data is plotted versus time many motions are missed and the plots appear discontinuous. Doppler data vs. time are shown in Figures 6.e through 6.h. These data were recorded simultaneously with the data plotted in 6.a through 6.d. The Doppler data is obtained from the raw radar data by way of a two dimensional Fourier transform and a projection onto the velocity axis. Since the Doppler data radar is an FMCW radar, both range and Doppler data are available. However, the range data has very low resolution (0.5 m) and does not contain any useful information. For this reason the data is reduced in dimensionality by averaging the Doppler data across all the range bins of interest.

The classification of the activities is performed with the help of artificial neural networks. The preprocessed range versus time and Doppler versus time data are supplied as inputs to the appropriate networks. We chose to use 10 time frames per input, corresponding to a monitoring time of approximately 3.4 seconds. For this purpose, a window of 3.4 seconds (10 frames) is selected and reshaped into a column vector. The entries of the column vector become the input to the respective neural networks.
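The following sketch illustrates the two preprocessing steps just described: extraction of a Doppler (velocity) profile from a block of sweeps by a two dimensional Fourier transform and projection onto the velocity axis, and the stacking of 10 consecutive frames into a single column vector. Array shapes and names are assumptions; this is not the authors' code.

```python
# Sketch of the preprocessing described above (assumed shapes and names):
# Doppler extraction from the K-band radar and stacking of ten consecutive
# frames into one feature vector for the neural network.
import numpy as np

def doppler_profile(sweeps: np.ndarray) -> np.ndarray:
    """sweeps: (n_sweeps, n_samples) array, e.g. 93 sweeps of 200 samples each.

    Returns a 1-D Doppler (velocity) profile: 2-D FFT to a range-Doppler map,
    then averaging over the range bins of interest, as described in the text.
    """
    rd_map = np.fft.fftshift(np.abs(np.fft.fft2(sweeps)), axes=0)
    return rd_map.mean(axis=1)       # project onto the velocity axis

def make_input_vector(frames: list) -> np.ndarray:
    """Stack 10 consecutive preprocessed frames (~3.4 s) into one column vector."""
    assert len(frames) == 10
    return np.concatenate(frames).reshape(-1, 1)

# Example with synthetic data standing in for ten 342 ms Doppler acquisitions.
frames = [doppler_profile(np.random.randn(93, 200)) for _ in range(10)]
x = make_input_vector(frames)
print(x.shape)   # (930, 1) here; the paper's Doppler network used 730 inputs,
                 # i.e. fewer retained velocity bins per frame than this example.
```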

Figure 6. (a) through (d): high resolution range data vs. time for the four activities; from left to right the activities are striking, patrolling, pointing a gun, and throwing. (e) through (h): high resolution Doppler data vs. time for the same four activities, recorded at the same time as the data shown in (a)-(d).

4.2 Neural network classification and classification fusion

We trained a neural network for each of the sensors. The neural network for the range data radar takes 1010 inputs, has two hidden layers, one with 63 neurons and one with 21 neurons, and has 4 outputs corresponding to the 4 activity classes. The neural network for the Doppler data radar takes 730 inputs and likewise has two hidden layers, one with 63 neurons and one with 21 neurons, and 4 outputs. For both networks all the activation functions are sigmoid, the training method is scaled conjugate gradient back propagation, and the performance measure is mean square error with regularization. We had 1400 training instances, 300 validation instances, and 300 testing instances. The classifications from the two sensor neural networks were fused together through a third neural network with 8 inputs (the 4 outputs from each of the two sensor neural networks), one hidden layer with 10 neurons, and 4 outputs. The fusion neural network can be augmented with additional inputs from the outputs of classifiers for other sensors. This classifier approach is illustrated in Figure 7. In the next section we present the results from each of the sensor neural networks and from the fusion neural network.
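A rough sketch of this classifier arrangement is given below using scikit-learn. Scaled conjugate gradient training and mean square error with regularization are not available in that library, so the lbfgs solver and its default loss are used as stand-ins; the data arrays X_range, X_doppler, and y are assumed to exist with the dimensions quoted above.

```python
# Sketch of the two sensor classifiers and the fusion classifier described in
# the text, using scikit-learn as a stand-in for the original training method.
# X_range: (n, 1010), X_doppler: (n, 730), y: (n,) class labels are assumed.
import numpy as np
from sklearn.neural_network import MLPClassifier

range_net = MLPClassifier(hidden_layer_sizes=(63, 21), activation='logistic',
                          solver='lbfgs', alpha=1e-3, max_iter=2000)
doppler_net = MLPClassifier(hidden_layer_sizes=(63, 21), activation='logistic',
                            solver='lbfgs', alpha=1e-3, max_iter=2000)
fusion_net = MLPClassifier(hidden_layer_sizes=(10,), activation='logistic',
                           solver='lbfgs', alpha=1e-3, max_iter=2000)

def train_and_fuse(X_range, X_doppler, y):
    """Train the two sensor networks, then the fusion network on their outputs."""
    range_net.fit(X_range, y)
    doppler_net.fit(X_doppler, y)
    # The fusion network sees the 4 + 4 class scores from the sensor networks.
    fused_inputs = np.hstack([range_net.predict_proba(X_range),
                              doppler_net.predict_proba(X_doppler)])
    fusion_net.fit(fused_inputs, y)
    return fusion_net.predict(fused_inputs)
```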

Figure 7. Classification fusion approach. Outputs from the sensor neural networks are used as inputs by the fusion neural network. The output of the fusion neural network is the final classification.

5. EXPERIMENTAL RESULTS AND DISCUSSION

5.1 Results of the range data radar classifier

The results of the neural network classifier trained with range data are shown in Table 3, which gives the confusion matrix for the test data set. The most confused classes are striking and throwing. This is also observed in the data patterns shown in Figure 6: the patterns in 6.a and 6.d have the most irregular features of all. Patrolling, on the other hand, was not confused as much, probably due to its periodic pattern. Pointing a gun was also classified well, probably due to the non-varying nature of the data (see Figure 6.c).

Table 3. Confusion matrix for the test data of the range data radar classifier. Rows are input classes, columns are output classes, and percentages are relative to the total number of inputs in each class.

Input class       Striking   Patrolling   Pointing gun   Throwing   Total inputs
Striking            71.2%       2.7%         11.0%         15.1%         73
Patrolling           4.3%      94.2%          0.0%          1.4%         69
Pointing gun         6.2%       0.0%         88.9%          4.9%         81
Throwing            16.2%       0.0%          5.4%         78.4%         74

5.2 Results of the Doppler data radar classifier

The results of the neural network classifier trained with Doppler data are shown in Table 4, which gives the confusion matrix for the test data set. The most confused classes again were striking and throwing. This is also observed in the data patterns shown in Figures 6.e and 6.h, which resemble each other. Patrolling was not confused as much and had the highest correct classification rate of all the activities, again probably due to its periodic pattern. Pointing a gun was also classified well but was often confused with striking, probably because the subject sometimes turned too fast when switching pointing positions (see Figure 6.g) and those turns were confused with the swinging of the arm when striking. Overall, this classifier performed poorly compared to the range data classifier due to the large confusion between the three activities striking, pointing a gun, and throwing.

Table 4. Confusion matrix for the test data of the Doppler data radar classifier. Rows are input classes, columns are output classes, and percentages are relative to the total number of inputs in each class.

Input class       Striking   Patrolling   Pointing gun   Throwing   Total inputs
Striking            39.4%       2.8%         16.9%         40.8%         71
Patrolling           2.6%      89.5%          2.6%          5.3%         76
Pointing gun        15.4%       3.8%         71.8%          9.0%         78
Throwing            36.0%       5.3%          4.0%         54.7%         75

5.3 Results from the classification fusion

The outputs of the two sensor classifiers were supplied to the fusion neural network. The confusion matrix for the test data set is shown in Table 5. The classification rates are higher than those of the other two classifiers. This is due to the complementary, nearly orthogonal nature of the two sensors that were used, but it can also be attributed to the distributed nature of the sensor setup. For example, the data in Figures 6.b and 6.d look similar whereas 6.f and 6.h look different; likewise, 6.h and 6.e look similar whereas 6.a and 6.d look different. The actions of pointing a gun and throwing were again sometimes classified as striking; however, the confusion is smaller than for the single sensor classifiers (compare with Tables 3 and 4).

Table 5. Confusion matrix for the test data of the fused classifier. Rows are input classes, columns are output classes, and percentages are relative to the total number of inputs in each class.

Input class       Striking   Patrolling   Pointing gun   Throwing   Total inputs
Striking            82.9%       2.6%          9.2%          5.3%         76
Patrolling           0.0%     100.0%          0.0%          0.0%         73
Pointing gun         3.0%       3.0%         90.9%          3.0%         66
Throwing             1.2%       0.0%          0.0%         98.8%         85

The data presented in Tables 3 through 5 is summarized in the plot shown in Figure 8. The fusion classifier performed better than the single sensor classifiers across all the classes.

Figure 8. Illustration of the performance of each classifier for each of the four classes. For each activity pattern (striking, patrolling, gun pointing, throwing) the results obtained by the multi-modal fusion are better than the results for the individual classifiers.
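For reference, the row-normalized percentages reported in Tables 3 through 5 can be computed from true and predicted labels as in the sketch below; the label arrays here are random placeholders, so this is illustrative code, not the authors' implementation or data.

```python
# Sketch: row-normalized confusion matrix of the kind shown in Tables 3-5,
# computed from true and predicted labels (y_true, y_pred are assumed arrays).
import numpy as np

CLASSES = ["Striking", "Patrolling", "Pointing gun", "Throwing"]

def confusion_percentages(y_true, y_pred, n_classes=4):
    counts = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        counts[t, p] += 1
    totals = counts.sum(axis=1, keepdims=True)          # inputs per true class
    return 100.0 * counts / np.maximum(totals, 1), totals.ravel()

# Example with random labels just to show the output format.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 4, 300)
y_pred = rng.integers(0, 4, 300)
perc, totals = confusion_percentages(y_true, y_pred)
for name, row, n in zip(CLASSES, perc, totals):
    print(f"{name:13s}" + "".join(f"{v:7.1f}%" for v in row) + f"   n={n}")
```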

6. CONCLUSION

We have studied the role of complementary information from multiple sensors in the classification performance of a distributed sensor system. We constructed a simple distributed sensor network with two sensor nodes and an overall decision module. One sensor provides range information over time and the other provides velocity information over time. Data from each sensor is preprocessed and used as input to a decision module based on artificial neural networks. The outputs of the individual sensor classifiers are used as input to a fusion neural network, which provides the desired classification. We demonstrated that the fusion neural network performs better than any of the individual sensor neural networks when the sensors capture complementary features. In the future we plan to extend the sensor network with a range of sensor modalities, including optical/video, acoustic, and vibration sensors.

Acknowledgements: The present work has been supported in part by the AFOSR Cognition and Decision Support program, Dr. Jay Myung, PM.

REFERENCES

[1] Watts, D. J. and Strogatz, S. H., "Collective dynamics of 'small-world' networks," Nature 393, 440-442 (1998).
[2] Barabasi, A.-L. and Albert, R., "Emergence of scaling in random networks," Science 286, 509-512 (1999).
[3] Albert, R. and Barabasi, A.-L., "Statistical mechanics of complex networks," Rev. Mod. Phys. 74, 47-97 (2002).
[4] Newman, M., Barabasi, A.-L. and Watts, D. J., eds., The Structure and Dynamics of Networks, Princeton Studies in Complexity, Princeton University Press, Princeton, NJ (2006).
[5] Kim, Y. and Ling, H., "Human activity classification based on micro-Doppler signatures using a support vector machine," IEEE Transactions on Geoscience and Remote Sensing 47(5) (2009).
[6] Thiel, M. and Sarabandi, K., "Ultrawideband multi-static scattering analysis of human movement within buildings for the purpose of stand-off detection and localization," IEEE Transactions on Antennas and Propagation 59(4) (2011).
[7] Jansson, T. P., Kostrzewski, A. A. and Ternovskiy, I. V., "Real-time ATR for unattended visual sensor wireless networks," Proc. SPIE 4393, 166-172 (2001).
[8] Kozma, R., Wang, L., Iftekharuddin, K., McCracken, E., et al., "Multi-modal sensor system integrating COTS technology for surveillance and tracking," IEEE Int. Radar Conf. (RADAR 2010), Arlington, VA, May 11-14, 2010, IEEE Press.
[9] Iftekharuddin, K., Khan, M. M. R., McCracken, E., Wang, L. and Kozma, R., "Autonomous wireless radar sensor mote integrating a Doppler radar into a sensor mote and its application in surveillance and target material classification," Proc. SPIE 8134, 813403 (2011).
[10] Kozma, R., Wang, L., Iftekharuddin, K., McCracken, E., Khan, M., Bhurtel, S. and Demirer, R. M., "Radar-enabled collaborative sensor network integrating COTS technology for surveillance and tracking," Special Issue on Collaborative Sensor Networks, Sensors 11 (2012).
