
Detection and Classification of Finer-Grained Human Activities Based on Stepped-Frequency Continuous-Wave Through-Wall Radar

Fugui Qi †, Fulai Liang †, Hao Lv, Chuantao Li, Fuming Chen and Jianqi Wang *

Department of Medical Electronics, School of Biomedical Engineering, Fourth Military Medical University, Xi'an 710032, China; [email protected] (F.Q.); [email protected] (F.L.); [email protected] (H.L.); [email protected] (C.L.); [email protected] (F.C.)
* Correspondence: [email protected]; Tel.: +86-29-8477-9259
† The first two authors, Fugui Qi and Fulai Liang, contributed equally to this work and should be regarded as co-first authors.

Academic Editor: Vittorio M. N. Passaro
Received: 30 March 2016; Accepted: 10 June 2016; Published: 15 June 2016

Abstract: The through-wall detection and classification of human activities are critical for anti-terrorism, security, and disaster rescue operations. An effective through-wall detection and classification technology is proposed for finer-grained human activities such as piaffe, picking up an object, waving, jumping, standing with random micro-shakes, and breathing while sitting. A stepped-frequency continuous-wave (SFCW) bio-radar sensor is first used to conduct through-wall detection of finer-grained human activities. Then, a comprehensive range accumulation time-frequency transform (CRATFR) based on inverse weight coefficients is proposed, which aims to strengthen the micro-Doppler features of finer activity signals. Finally, in combination with the effective eigenvalues extracted from the CRATFR spectrum, an optimal self-adaption support vector machine (OS-SVM) based on prior human position information is introduced to classify the different finer-grained activities. At a fixed position (3 m) behind a wall, the classification accuracies for six activities performed by eight individuals were 98.78% and 93.23%, respectively, for the two scenarios defined in this paper. In the position-changing experiment, an average classification accuracy of 86.67% was obtained for five finer-grained activities (excluding breathing) of eight individuals within 6 m behind the wall for the most practical scenario, a significant improvement over the 79% accuracy of the current method.

Keywords: finer-grained human activity; comprehensive range accumulation; human micro-Doppler; through-wall; support vector machine

Sensors 2016, 16, 885; doi:10.3390/s16060885

1. Introduction

In recent years, non-contact penetrating detection and classification technology directed toward human activity has attracted broad research attention due to its promising and crucial value in practical applications such as public security and protection, anti-terrorism operations, and disaster rescue [1–3]. Undoubtedly, if finer-grained human activities (e.g., waving, jumping while staying in roughly one spot, picking up an object, or standing in roughly one spot with random micro-movements) behind a wall can be detected without contact and classified accurately, soldiers or police officers could grasp detailed information about the actions of a target without risking direct confrontation. Furthermore, at disaster sites where the environment is temporarily hazardous or impossible for rescuers to approach, such as after a chemical explosion or a seismic landslide, if the remote detection of trapped individuals through walls or chemical smoke can be realized and human motion features can be identified, the rescuers will be provided with critical motion state information of the

www.mdpi.com/journal/sensors


trapped individuals in advance. Therefore, the through-wall detection, analysis, and classification of finer-grained human activities would be very helpful in decision-making and in formulating rescue strategies, greatly improving rescue efficiency and operational effectiveness.

Bio-radar is a novel kind of radar that combines biomedical engineering and radar technology, and it aims to detect vital signs in a non-contact way through non-metallic obstacles such as clothes and walls [4]. Ultra-wideband (UWB) bio-radar has gradually become a research hotspot in human location and motion detection [5,6] due to its advantages of strong anti-jamming capability, good penetrability, and high range resolution. A newer form of UWB radar, known as stepped-frequency continuous-wave (SFCW) radar, transmits a series of discrete tones in a stepwise manner, covering the radar bandwidth over time to realize the UWB. It therefore has several unique advantages over conventional impulse radio ultra-wideband (IR-UWB) radar [7,8], such as a higher dynamic range and mean power (yielding longer range and better resolving power), high reliability, stability of the radar signal, and relatively easy implementation. Accordingly, this paper develops an SFCW radar to conduct the through-wall detection of different finer-grained human activities.

To date, relying on the time-frequency transform as an effective analysis tool, some complex human activities have already been classified under free-space conditions, mainly based on the differences in the corresponding micro-Doppler features caused by limb movement, such as the classification of waving, standing, breathing, and stooping [1]; the classification of one-arm-swing, two-arm-swing, and no-arm-swing walking [9]; and the classification of armed and unarmed personnel [10]. The experimental results show that the foregoing methods realize highly accurate classification in free space.
However, penetrating detection is necessary in practical applications such as anti-terrorism actions and disaster rescue. If the through-wall radar echo is greatly attenuated or the objective is far from the radar, the micro-Doppler features of the motion become very weak and the signal-to-noise ratio (SNR) is greatly reduced, which may reduce the recognizability of those features and decrease the classification accuracy [11–13]. Therefore, a reasonable and effective enhancement of the weak micro-Doppler features of finer-grained human activity radar signals is crucial.

Because the useful information modulated by body movements is distributed across different range bins of the UWB radar signal [14], the most effective way to enhance the micro-Doppler signal is to make full use of the radar signals in the different range bins. Currently, the common methods perform range averaging [15,16] or range accumulation [17] before time-frequency analysis; these are often used for the detection of human respiration and heartbeat signals, which feature small movement amplitudes, strong regularity, and good stability. However, for finer-grained human activities, which feature large amplitude changes and are non-stationary and weakly periodic, the movement of each scattering center (each limb structure) of the moving body is roughly distributed across different range bins. Consequently, both common methods are ruled out, because they are likely to miss or distort the feature information of the motions. Therefore, considering the UWB radar signal characteristics of human activity, this paper proposes a comprehensive range accumulation time-frequency transform (CRATFR) method based on an inverse weight coefficient, which strengthens the micro-Doppler features of human activity signals by making full use of the time-frequency information of the different scattering center signals in the different range bins of the UWB radar signals.
After motion detection and feature extraction, our intention is to accurately classify different finer-grained human activities at any position. In this paper, a mature support vector machine (SVM) was adopted for activity classification. However, due to the influence of factors such as through-wall attenuation and changes in detection range, it is inevitable that micro-Doppler features will undergo attenuation in any time-frequency distribution. As a consequence, one activity at a given range may have eigenvalues derived from the micro-Doppler spectrogram similar to those of another activity at a different range. If an overall SVM is applied, the possibility of misjudging the corresponding activities at different distance positions is greatly increased. Thus, we are faced with another challenge:


how can the possibility of misjudging different finer-grained human activities as a result of distance changes be reduced? To address this issue, this research introduces an optimal self-adaption support vector machine (OS-SVM) based on prior information about the human position to classify the different finer-grained human activities. Since UWB radar is capable of accurate through-wall positioning [6,18,19] of the moving body, a corresponding SVM group for the classification of motion states at different positions can be established based on the corresponding eigenvalue data of finer-grained human activities. During prediction, the radar first senses the environment through the wall. As a UWB radar, it has very good range resolution and can therefore locate the target in range very accurately. Then, the optimal SVM classifier tailored to that specific range is selected and applied self-adaptively.

Based on the above analysis, this paper finally proposes a through-wall detection and classification technology for finer-grained human activity based on SFCW radar, mainly comprising the acquisition of human activity information, the CRATFR, the extraction of eigenvalues, and pattern recognition based on the OS-SVM. In this paper, six representative finer-grained human activities are classified by the proposed technology: piaffe (walking on the spot), picking up an object, waving, jumping, standing with random micro-shakes, and breathing while sitting (all of these activities are performed while the human stays roughly in place).

This paper is organized as follows: Section 2 describes the SFCW radar through-wall detection system and the data acquisition environment. Section 3 introduces the methods for finer-grained human activity radar signal processing and classification.
Section 4 comprises the performance analysis and verification of the through-wall detection and classification technology for finer-grained human activity using specific experiments, including a through-wall fixed-point experiment and a through-wall variable-position experiment.

2. SFCW Radar Through-Wall Detection System for Finer-Grained Human Activities and Experimental Establishment

The SFCW radar system is illustrated in Figure 1. The detailed operating parameters are as follows: operation bandwidth, 0.5–3.5 GHz; adjustable number of transmitter step points, 101–301; transmitter frequency sampling interval, 30 MHz; maximum transmitting power, 10 dBm; dynamic range, >72 dB; analog-to-digital conversion (ADC) accuracy, >12 bit; sampling rate, 250 Hz; and maximum unambiguous range, 5 m. The plane logarithmic spiral antenna system consists of a single-transmitting, double-receiving antenna array. Cross polarization is adopted between the transmitting and receiving antennas. In the subsequent processing, we use only the one channel, derived from whichever of the two receiving antennas has the better performance.

When detecting human motion, the principles of the SFCW radar may be described as follows: assuming that the pulse repetition interval (PRI) of the SFCW signal is T, the transmitting pulse width is T, the initial work frequency is f_0, the frequency step increment is Δf, and the number of frequency steps is N, the stepped-frequency signal S_t(t) may be written as follows:

$$ S_t(t) = \sum_{n=0}^{N-1} \operatorname{rect}\!\left(\frac{t-\left(n+\tfrac{1}{2}\right)T}{T}\right)\exp\left\{ j\left[2\pi\left(f_0+n\Delta f\right)t+\varphi_0\right]\right\} \qquad (1) $$

where $\operatorname{rect}(t/T)=\begin{cases}1, & -T/2 \le t < T/2\\ 0, & \text{else}\end{cases}$, $n = 0,1,2,\ldots,N-1$ indicates the modulation period, and $\varphi_0$ refers to the initial phase of the transmitted signal.

Assuming that the radial distance of the objective relative to the radar is R(t), the target echo can be described as follows:

$$ S_r(t) = \sum_{n=0}^{N-1} \operatorname{rect}\!\left(\frac{t-\left(n+\tfrac{1}{2}\right)T-\tau(t)}{T}\right)\rho_n \exp\left\{ j\left[2\pi\left(f_0+n\Delta f\right)\left(t-\tau(t)\right)+\varphi_0\right]\right\} \qquad (2) $$

where $\rho_n$ refers to the backscattering coefficient of the objective and is assumed constant within the radar observation period, so it is set uniformly as $\rho$; $\tau(t) = 2R(t)/c$ refers to the time delay of the objective at R(t) relative to the transmitting moment, and c refers to the speed of light.

Frequency mixing and a low-pass filtering process are conducted on the radar echo, with a sampling interval T, so that one sampling point is obtained within each modulation period. Assuming that the objective's range remains unchanged within the duration of one frame signal, the zero intermediate frequency echo of the objective can be discretized as:

$$ S(m,n) = \rho \exp\left\{-j2\pi\left(f_0+n\Delta f\right)\tau\left((m-1)NT+nT\right)\right\} = \rho \exp\left\{-j2\pi\left(f_0+n\Delta f\right)\tau(m,n)\right\} \qquad (3) $$

where $m = 1,2,3,\ldots,M$ indexes the sampling frames (M being their total number), and $\tau(m,n) = \tau((m-1)NT+nT)$ refers to the sampling of $\tau(t)$ at the $(m-1)NT+nT$ moment.

Figure 1. Schematic diagram of the SFCW radar system.

The formula above is the common form of the SFCW radar echo. If m is a fixed value, the target echo can be regarded as the sampling of different frequencies at the same moment. Then, a one-dimensional time-range image of the objective motion can be obtained through computation of the inverse discrete Fourier transform (IDFT), so as to conduct Doppler analysis on the time-range image sequence of the objective range bins.

A through-wall detection experiment of human activity by the SFCW radar is shown in Figure 2a. The subject is on one side of the laboratory brick wall (which is about 30 cm thick) and faces the radar, which is placed against the wall. During each acquisition of data, only one subject stands on one side of the wall. The subject executes specific movements according to personal motion habits and characteristics.
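As a concreteness check, the discretized echo of Equation (3) and its IDFT range compression can be sketched numerically. This is an illustrative simulation, not the authors' code: the 3 m target range and unit backscatter coefficient are assumptions, while the step parameters follow the paper.

```python
import numpy as np

# Sketch (not the authors' code): zero-IF SFCW echo of Equation (3) for a
# static point target, range-compressed with an IDFT over the frequency steps.
# f0 = 0.5 GHz, delta_f = 30 MHz, N = 101 follow the paper; the 3 m target
# range and unit backscatter coefficient (rho = 1) are assumptions.
c = 3e8                          # speed of light (m/s)
f0, delta_f, N = 0.5e9, 30e6, 101
R = 3.0                          # assumed target range (m)
tau = 2 * R / c                  # two-way delay

n = np.arange(N)
# One frame (fixed m) of Equation (3): S(n) = rho * exp(-j*2*pi*(f0 + n*delta_f)*tau)
S = np.exp(-1j * 2 * np.pi * (f0 + n * delta_f) * tau)

# IDFT over the N frequency steps yields the range profile; a peak bin maps
# to range via bin_size = c / (2 * N * delta_f) (about 4.95 cm here).
profile = np.abs(np.fft.ifft(S))
peak_bin = int(np.argmax(profile))
estimated_R = peak_bin * c / (2 * N * delta_f)
print(f"peak bin {peak_bin}, estimated range {estimated_R:.2f} m")
```

Note that the maximum unambiguous range of this model, c/(2Δf) = 5 m, matches the system parameter listed above.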


Figure 2. (a) Through-wall detection of human activity. (b) Preprocessed SFCW radar signal of piaffe-type movement.

3. Analysis and Classification of Finer-Grained Human Activities

If the weak and fuzzy micro-Doppler time-frequency features caused by through-wall attenuation can be enhanced effectively, efficient recognition of finer-grained human activity can certainly be realized by extracting reasonable eigenvalues and adopting an appropriate pattern recognition method. This section first introduces the working principles of the CRATFR method based on an inverse weight coefficient and then demonstrates its enhancement effects. Next, eight eigenvalues that reflect the motion characteristics are selected according to the features of the SFCW radar signal spectrum of finer-grained human activity, and the extraction method and its significance are briefly described. Finally, this section introduces the working principles of the proposed OS-SVM pattern recognition and classification method based on prior human position information.

3.1. Principles of Comprehensive Range Accumulation Time-Frequency Transform (CRATFR) Based on Inverse Weight Coefficient

During the detection of human activity using the UWB radar, different delay echoes obtained from objectives at different distances from the UWB radar can be acquired at the same time. These echoes are stored in a two-dimensional data matrix R after being amplified and sampled:

$$ R = \{ r(m)[n] : m = 1, \ldots, M,\ n = 1, \ldots, N \} \qquad (4) $$

where m and n indicate the range and time indexes, respectively. M refers to the quantity of sampling points along the distance axis and determines the radar detection distance. N refers to the quantity of sampling points along the time axis, which, together with the sampling frequency, determines the total time of the data. The matrix can be regarded as a series of signals along the range axis: $sig = \{sig_i(t) : i = 1, \ldots, N\}$.

As shown in Figure 2b, a preprocessed SFCW radar signal of the piaffe activity is obtained after conducting successive preprocessing steps such as background removal and low-pass filtering (cutoff frequency, 60 Hz) on the original echo. Obviously, the motion features are mainly distributed in the range scope of 2.5–3.8 m, referred to as the effective range scope. The motion changes can be clearly seen in the preprocessed echo image, and each scattering center of the human body spans several range bins during movement. As Smith et al. indicated, when using a time-frequency transform to extract micro-Doppler features for motion feature analysis [20], the use of different range bin signals is very important.
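The preprocessing chain mentioned above (background removal, then low-pass filtering at 60 Hz with the 250 Hz slow-time sampling rate) might be sketched as follows. Mean subtraction as the background remover and a windowed-sinc FIR filter are our assumptions; the paper does not specify its exact filters.

```python
import numpy as np

# Sketch of the preprocessing chain described in the text: background removal,
# then a 60 Hz low-pass filter along slow time (fs = 250 Hz). Mean subtraction
# and the windowed-sinc FIR are assumptions, as are the matrix dimensions.
rng = np.random.default_rng(0)
fs, f_cut = 250.0, 60.0
M, Nt = 64, 1000                      # range bins x slow-time samples (assumed)
data = rng.normal(size=(M, Nt))       # stand-in for the raw echo matrix R

# Background removal: subtract each range bin's slow-time mean, suppressing
# static clutter such as the wall and stationary objects.
no_bg = data - data.mean(axis=1, keepdims=True)

# Windowed-sinc low-pass FIR (cutoff 60 Hz), applied along the time axis.
taps = 101
t = np.arange(taps) - (taps - 1) / 2
h = np.sinc(2 * f_cut / fs * t) * np.hamming(taps)
h /= h.sum()
filtered = np.apply_along_axis(lambda x: np.convolve(x, h, mode="same"), 1, no_bg)
print(filtered.shape)
```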


To verify the advantages of the proposed CRATFR method based on inverse weight coefficients, this paper takes the conventional method (time-frequency transform (TFR) based on range averaging) as the reference method. The basic principles of the reference and proposed methods are as follows:

1. TFR based on the range average method. In this approach, the different range bin signals extracted within the effective range scope at a fixed interval are first averaged to obtain sig_average, and then a time-frequency analysis is performed:

$$ sig_{average} = \frac{1}{k}\sum_{i=1}^{k} sig_{i \cdot N}, \qquad i = 1, \ldots, k \qquad (5) $$

where N indicates the length of the interval and k indicates the total number of range bin signals.

2. CRATFR based on the inverse weight coefficient. In this case, the joint time-frequency-range representation (JTFRR) [21] is first obtained, as shown in Figure 3.


Figure 3. The joint time-frequency-range representation of the UWB radar signal.

Then, the time-frequency transform of each range bin signal is performed to obtain the time-frequency representation (TFR). The TFRs of the different range bins are collected in order to obtain the JTFRR three-dimensional spectrum of the complete human-activity UWB radar signal. As shown in Figure 3, the three coordinate axes indicate the range, time, and frequency, respectively. Then, the TFRs obtained from each range bin signal are accumulated along the range axis according to the corresponding inverse weight coefficients. Finally, the CRATFR of the whole motion signal is obtained.

To enhance the micro-Doppler features formed by the motion of the limbs, this paper adopts an inverse weight coefficient to conduct the comprehensive range accumulation of the different time-frequency spectra, as shown in Equation (6):

$$ s_d = \omega_1 s_1 + \omega_2 s_2 + \cdots + \omega_n s_n \qquad (6) $$

where $\omega_i$ indicates the weight corresponding to the TFR of range bin i, while $s_i$ indicates the corresponding TFR. For i = 1, 2, ..., n, n indicates the number of range bins within the effective range scope.

The selection of the weight coefficient ω is based on the energy characteristics of the SFCW radar finer-grained human activity signal. Taking the piaffe signal shown in Figure 2b as an example, within the effective range of motion features, the middle part of the range axis mainly comes from torso


movement, with its large scattering area, and features strong energy. The energy gradually weakens as the range moves away from the middle, because the signal there arises mainly from limb movements (i.e., arms and legs), with their small scattering areas. When the signal through the wall is attenuated or the objective moves away from the radar, the micro-Doppler features caused by the arms and legs attenuate sharply. Hence, we apply a larger weight to the range bin TFRs with weaker energy, and a smaller weight to the range bin TFRs with stronger energy. Based on this method, the micro-Doppler features afforded by the movement of the limbs are enhanced effectively in the time-frequency domain.

In this paper, a short-time Fourier transform (STFT) (0.42 s Hanning window) is applied to generate the corresponding TFR for each range bin signal. Because the absolute value of the output channels is taken before the amplitude-based background removal, only positive frequencies appear in the Doppler spectra.

Comprehensive time-frequency analyses based on the reference and proposed methods were conducted for the piaffe activity at a distance of 4 m behind the wall, with the results shown in Figure 4. As can be seen from Figure 4, more distinguishable, regular, and stronger motion features can be extracted at the high-frequency positions by the proposed method than by the reference method, owing to its micro-Doppler enhancement effect and its excellent immunity to interference from noise and clutter. Further experiments indicated that the advantages of this method are especially pronounced in long-range through-wall detection.
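The two accumulation strategies of Equations (5) and (6) can be contrasted on toy data. The STFT implementation, the synthetic "torso" and "limb" bins, and the exact normalization of the inverse weights (proportional to the reciprocal of each bin's energy, summing to 1) are our assumptions; the paper leaves the weighting rule implicit.

```python
import numpy as np

# Sketch of Equations (5) and (6): plain range-averaged TFR vs. the CRATFR
# inverse-weight accumulation. Toy signals and the 1/energy weighting rule
# are assumptions, not the authors' exact implementation.
rng = np.random.default_rng(1)
fs, win = 250.0, 105                 # 0.42 s Hanning window at 250 Hz, as in the text
t = np.arange(0, 4, 1 / fs)

def stft_mag(x, win_len=win, hop=25):
    w = np.hanning(win_len)
    frames = [x[i:i + win_len] * w for i in range(0, len(x) - win_len, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)).T      # (freq, time)

# Toy range bins: a strong low-frequency "torso" bin and a weak
# high-frequency "limb" bin.
torso = 5.0 * np.sin(2 * np.pi * 2 * t)
limb = 0.5 * np.sin(2 * np.pi * 40 * t) + 0.05 * rng.normal(size=t.size)
tfrs = np.stack([stft_mag(torso), stft_mag(limb)])    # (bins, freq, time)

# Equation (5): plain average over range bins.
tfr_avg = tfrs.mean(axis=0)

# Equation (6): inverse-energy weights, so weak bins get larger weights.
energy = tfrs.sum(axis=(1, 2))
w = (1.0 / energy) / (1.0 / energy).sum()
tfr_cratfr = np.tensordot(w, tfrs, axes=1)
print(w)
```

In this sketch the weak limb bin receives the larger weight, which is the intended enhancement effect.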

Figure 4. Time-frequency spectra of the SFCW radar signal for piaffe at a distance of 4 m behind the wall, based on different range accumulation methods: (a) Reference method and (b) Proposed method.

Finally, the CRATFR spectrograms of six different finer-grained human activities from one particular subject are shown in Figure 5. The CRATFR spectrogram based on the inverse weight coefficient is capable of clearly distinguishing the rhythmic changes due to different activities, and the micro-Doppler features formed by trunk movement are obviously different from those formed by limb movements. The specific performance can be divided into the following categories:

1. The human body stays roughly in place during finer-grained human activities. Since the scattering area of the human trunk is the largest, the strongest part of the radar echo spectrogram comes from the low-frequency Doppler signals formed by torso motion. At the same time, the periodic micro-Doppler signals above the torso arise mainly from the legs and arms.
2. Except for breathing while sitting, the Doppler frequencies of the other five activities basically range from 0 to 7 Hz.
3. In terms of the micro-Doppler characteristics, the highest frequencies from piaffe and jumping can reach more than 60 Hz, but piaffe has more regular micro-Doppler features. The spectral features due to picking up an object are generally similar to those of waving, with the highest


frequency around 25 Hz, while its micro-Doppler features occupy a larger spectral area than those of waving, since waving only involves the movement of a single arm within a small range, in addition to body movement. Compared with the other four activities, standing with random micro-shakes and breathing while sitting exhibit fewer micro-Doppler features. However, the former has stronger Doppler features, accompanied by discontinuous weak Doppler features formed by the random shaking of the arms.
4. The activity periods of all six activities differ significantly.

Figure 5. CRATFR spectrograms of six finer-grained human activities performed at 3 m behind a wall: (a) piaffe; (b) picking up an object; (c) waving; (d) jumping; (e) standing with random micro-shakes; and (f) breathing while sitting.

3.2. Feature Extraction

According to the CRATFR spectrograms shown in Figure 5, different activities reveal interesting and distinct features. To facilitate the extraction of distinct characteristics from the spectra of different activities, noise submerged in the Doppler signal is removed according to threshold values obtained by conducting a contrastive analysis of background signals and the measured signals of human activity. As shown in Figure 6, the rest of the spectrogram then substantially comprises only activity information.


Figure 6. Some of the eight features extracted from the CRATFR spectrogram.

Based on the analysis of the spectra of the different activities above, eight features are selected to characterize the Doppler signatures; some of these are illustrated in Figure 6: (1) the offset of the Doppler signal; (2) the bandwidth (BW) of the micro-Doppler signal; (3) the main period of human body activity; (4) the secondary period of human body activity; (5) the standard deviation (STD) of the valuable portion of the CRATFR (non-zero rows); (6) the ratio of the total BW of the Doppler signal to the period of limb activity; (7) the area ratio of the limb characteristics to the torso characteristics in the total valuable portion of the CRATFR (non-zero values); and (8) the STD of the (upper) envelope of the CRATFR.

Characteristic (1) is related to the speed of limb motion, and faster swings of the arms or legs will cause a larger BW. Factor (2) describes the frequency range formed by limb movement, namely, the speed variation range, while (3) describes the time required to complete a motion. Feature (4) describes the cycle formed by minor limb movements apart from the overall motion cycle, and (5) is related to the dynamic range of the activity.
Characteristic (6) shows the spectral distribution features of the whole motion, which may be large if the activity has a short cycle and intense motion, such as piaffe, or small, as when picking up an object. Feature (7) is related to the scattering areas and amplitudes of the limb and trunk movements involved in the motion, and (8) reveals the non-stationarity of the motion.

3.3. Classification by Optimal Self-Adaption Support Vector Machine (OS-SVM) Based on Prior Human Position Information

Having described the extraction of features, we next turn to classification by the SVM in combination with the eigenvectors. As a classifier [22,23] for solving nonlinear boundary problems, the SVM is mainly used to optimize and generate a model from training-data features and the corresponding known category labels. Based on this model, the testing data can be classified accurately through the support vectors. We adopted the mature LibSVM 3.1 algorithm developed by Chang and Lin [24], which realizes automatic multi-class classification. During parameter selection, a radial basis function was selected in advance as the kernel function of the SVM on the basis of experimental results and the characteristics of this kernel function [25]. For the two other parameters, the penalty parameter c and the kernel parameter g, we utilized four-fold cross-validation to conduct automatic selection based on the minimum-classification-error principle.

To classify different human activities at changing positions, the current conventional method first acquires an overall SVM through training on all data at all positions. Then, data at any position are classified based on this overall SVM. However, taking piaffe as an example (Figure 7), as the through-wall detection range increases, the high-frequency micro-Doppler signals that are clearly distinguishable in finer-grained human activities will

gradually attenuate. This will easily cause the misjudgment of different activities at different range positions. Therefore, based on the pre-existing mature technology for human through-wall range positioning via UWB radar, this paper proposes a new classification method by OS-SVM based on prior human position information.
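The four-fold selection of c and g described above can be sketched with scikit-learn, whose SVC class wraps the same LibSVM library the authors used; the grid values and the synthetic two-class feature data below are illustrative assumptions, not the paper's data.

```python
# Sketch (not the authors' code) of four-fold cross-validated selection of the
# penalty parameter c and RBF kernel parameter g, as described in Section 3.3.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Toy stand-in for 8-eigenvalue feature vectors from two activity classes.
X = np.vstack([rng.normal(0.0, 1.0, (40, 8)), rng.normal(3.0, 1.0, (40, 8))])
y = np.array([0] * 40 + [1] * 40)

# Exhaustive search over (c, g) with four-fold cross-validation, i.e. selection
# by minimum mean classification error across the folds.
grid = GridSearchCV(
    SVC(kernel="rbf"),
    param_grid={"C": [0.5, 1.0, 5.278, 10.0], "gamma": [0.1, 1.0, 1.741]},
    cv=4,
)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```

The grid here includes the paper's reported optima (c = 5.278, g = 1.741) only as example grid points; on real feature data the search would of course be run over a denser grid.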

Figure 7. Spectrograms of the piaffe activity at different positions: (a) 3 m, (b) 4 m, (c) 5 m, and (d) 6 m.

Toschematic classify different at changing positions, the current conventional method A diagramhuman of the activities new classification method is shown in Figure 8. In the training first acquires overall through training on all the stage, the data the obtained at SVM different standard rangebased positions R “data rR1 , at R2 ,all . . . ,positions. Ri , . . . Rn´Then, 1 , Rn s are classification of data at any position is realized based on the overall SVM. However, taking piaffe used to establish the corresponding pattern recognition models Model = [Model1 , Model2 , . . . , Modelas i, . . . ,Modeln ´1 , Modeln ]. In the testing stage, according to the prior human position information r obtained from the SFCW radar, the best model—Model Best —is determined automatically based on the principle of proximity. Then, the classification of the motion feature data at this position will be conducted based on Model Best . In this research, the R values at the standard position are as follows: R “ rR1 , R2 , R3 , R4 s “ r3, 4, 5, 6s (unit: m), and the pattern recognition models are Model = [Model1 , Model2 , Model3 , Model4 ] During testing, for example, if the prior human position information r is equal to 4.6, Rbest “ R3 is obtained according to themin p|Ri ´ r|q principle, and the optimal model Model Best “ Model3 . Then, the human activity data at position r will be classified based on Model3 .



Figure 8. Operation diagram of OS-SVM based on prior human position information.

4. Experimental Results and Discussion

This section presents the verification and discussion of the effectiveness and superiority of the through-wall detection and classification technology for finer-grained human activities, based on a classification experiment of six finer-grained human activities at a fixed through-wall position and a second experiment of five finer-grained human activities at variable positions.

4.1. Pattern Recognition Training Feature Set Generation

When using the SVM algorithm to conduct data training and classification, the eight features extracted from the CRATFR spectrograms are taken as the eigenvalue group. As a special case of the changing-position mode, an experiment at a fixed position is used to verify the effectiveness of the new CRATFR algorithm and the eigenvalues obtained in this paper. During data acquisition at the fixed position, the subject faces the radar behind the wall at a distance of 3 m (±0.3 m) from the radar.
The data set is obtained by extracting the features of the measured data acquired from eight subjects performing six activities. Each subject performed each activity twelve times, for a period of 8 s per repetition. Thus, the total size of the feature set is 576. The mean values of the eight features of the six activities are shown in Table 1. Clearly, they are quite different, and therefore they may be reliably and reasonably applied in the recognition and classification system.

Table 1. Mean values of the eight selected features.

Activity | Feature (1) | Feature (2) | Feature (3) | Feature (4) | Feature (5) | Feature (6) | Feature (7) | Feature (8)
(1)      | 72.15       | 48.96       | 1.10        | 1.77        | 66.32       | 67.84       | 4.94        | 18.25
(2)      | 38.63       | 24.13       | 2.64        | 2.79        | 72.28       | 15.14       | 1.88        | 9.33
(3)      | 32.75       | 21.80       | 1.92        | 1.54        | 45.50       | 18.83       | 1.18        | 8.59
(4)      | 85.84       | 62.54       | 2.60        | 1.95        | 67.73       | 45.85       | 4.40        | 23.51
(5)      | 18.68       | 6.52        | 3.32        | 3.02        | 76.23       | 7.82        | 0.65        | 2.71
(6)      | 7.60        | 0           | 3.47        | 2.88        | 17.76       | 3.39        | 0           | 2.58

For the classification experiment of the different activities at varying through-wall positions, data from five activities (excluding breathing) at positions of 4, 5, and 6 m (±0.3 m) were also acquired, in addition to the existing 3 m data described above. During data collection, every activity at each position was recorded twelve times. Ultimately, 1920 feature sets were obtained.
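As an illustration of how such eigenvalues can be obtained, the sketch below derives rough analogues of features (2), (5), and (8) from a synthetic time-frequency magnitude matrix; the exact feature definitions used by the authors are not reproduced here, and the Doppler-axis spacing is an assumption.

```python
# Illustrative sketch (not the authors' exact definitions) of three of the
# eight eigenvalues, computed from a CRATFR-like spectrogram matrix S
# (rows = Doppler frequency bins, columns = time frames). A synthetic
# spectrogram stands in for real radar data.
import numpy as np

rng = np.random.default_rng(1)
S = np.abs(rng.normal(size=(128, 200)))
S[:30, :] = 0.0                       # emulate frequency bins with no signal
freqs = np.linspace(-64, 64, 128)     # assumed Doppler axis (Hz)

active = S.sum(axis=1) > 0                            # non-zero rows
bandwidth = freqs[active].max() - freqs[active].min() # ~ feature (2), BW
std_nonzero = S[active].std()                         # ~ feature (5), STD
upper_envelope = np.max(S, axis=0)                    # per-frame peak value
std_envelope = upper_envelope.std()                   # ~ feature (8)
print(round(bandwidth, 1), std_nonzero > 0, std_envelope > 0)
```

In practice each 8 s recording yields one such 8-element eigenvalue vector, which is what the SVM consumes.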

4.2. Fixed-Position Human Activity Classification

In this test, we used 3/4 of the data obtained at the fixed position as the training data set, and the remainder was used to generate the validation data set. When using LibSVM, two scenarios can occur during the selection of the training and validation data (Figure 9). The first scenario selects eight repetitions from each subject as the training data and the remaining four repetitions as the validation data. The second scenario selects all the repetitions of six subjects as the training data and uses the data from the remaining two subjects as the validation data. The latter more closely reflects a practical application, and is thus more realistic and practical, because it classifies the activities of unknown people based on the information of known people.

The final judgment of accuracy and the corresponding optimal parameters c and g are obtained through the comprehensive averages of the four-fold cross-validations: the data were divided into four quarters for training and testing, and each fold's error is obtained by validating on one quarter of the data while training on the remainder.

To facilitate the graphic representation of the classification results, the six activities (piaffe, picking up an object, waving, jumping, standing with random micro-shakes, and breathing while sitting) are numbered 1–6, respectively.
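The two selection scenarios can be sketched as index splits over the 8 subjects × 12 repetitions of one activity; the indexing is illustrative, not the authors' code.

```python
# Sketch of the two train/validation selection scenarios for the
# fixed-position data: scenario 1 splits by repetition within every subject;
# scenario 2 holds out whole (unseen) subjects.
subjects, repetitions = list(range(8)), list(range(12))
samples = [(s, r) for s in subjects for r in repetitions]  # one activity's trials

# Scenario 1: eight repetitions per subject for training, four for validation.
train1 = [(s, r) for s, r in samples if r < 8]
valid1 = [(s, r) for s, r in samples if r >= 8]

# Scenario 2: all repetitions of six subjects for training, two unseen
# subjects for validation (the subject-independent, more realistic split).
train2 = [(s, r) for s, r in samples if s < 6]
valid2 = [(s, r) for s, r in samples if s >= 6]

print(len(train1), len(valid1), len(train2), len(valid2))
```

Note that in scenario 2 the training and validation subject sets are disjoint, which is what makes it the harder, more practical test.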

Figure 9. Schematic diagrams of the two selection scenarios: (a) the first scenario; (b) the second scenario.

According to the parameter selection results from the four-fold cross-validations shown in Figures 10a and 11a, the best average classification accuracy of the six activities for the first scenario is 98.78%, with the corresponding best parameter selections c = 5.278 and g = 1.741. The best accuracy for the second scenario is 93.23%, with the best parameter selections c = 5.278 and g = 1.0. The first scenario has a slightly smaller validation error than the second one, probably because the target source of the testing data is the same as that of the training data in the first scenario, so that they share a certain similarity.
The classification error distributions of the four-fold cross-validations for the two scenarios are shown in Figures 10b and 11b. The first scenario is more stable than the second one. For the more realistic case, i.e., the second scenario, activities (2) (picking up an object) and (3) (waving) tended to be confused, probably because the Doppler features of picking up an object are similar to those of waving. Finally, the breathing-while-sitting activity (6) has the smallest classification error in both scenarios, because the Doppler signature of breathing is so small that it is clearly distinct from the other motions.
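The error distributions of Figures 10b and 11b are, in effect, confusion matrices of predicted versus actual class; the following is a minimal sketch with made-up predictions rather than real radar classifications.

```python
# Sketch of how per-activity error distributions can be tallied as a
# confusion matrix over the six activity classes (1-6).
import numpy as np

n_classes = 6
actual    = [1, 1, 2, 2, 3, 3, 4, 5, 6, 6]
predicted = [1, 1, 3, 2, 3, 3, 4, 5, 6, 6]   # one (2) -> (3) confusion

conf = np.zeros((n_classes, n_classes), dtype=int)
for a, p in zip(actual, predicted):
    conf[p - 1, a - 1] += 1                  # rows: predicted, columns: actual

error_rate = 1.0 - np.trace(conf) / conf.sum()
print(conf[2, 1], round(error_rate, 2))      # the (2)->(3) cell, overall error
```

Off-diagonal cells such as conf[2, 1] capture exactly the (2)/(3) confusion discussed above.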

[Figure 10b data: first-scenario SVM average error = 1.22%; fold errors: Error1 = 2.08%, Error2 = 0.69%, Error3 = 0.69%, Error4 = 1.39%]

Figure 10. Classification results from four-fold cross-validation for the first scenario. (a) Parameter selection results; (b) classification error distribution of the four-fold cross-validation (the horizontal axis represents the actual activity class, numbered 1–6; the vertical axis represents the predicted activity class, numbered 1–6).

66

Predict PredictClass Class

55 44 33 22 11 000 0

11

22

33 44 Actual Class Class Actual (b) (b)

55

66

Figure 11. Classification results from four-fold cross-validation for the second scenario. (a) Parameter selection results; (b) classification error distribution of the four-fold cross-validation.

4.3. Changing-Position Human Activity Classification

In the classification tests of human activities with changing positions, to meet the requirements of practical application, we focus only on the second scenario proposed in this paper. Further, because of the sharp signal attenuation that occurs with increasing through-wall detection range, it is difficult for the weak signal due to breathing while sitting to show clear and distinguishable features in the time-frequency spectrum. Thus, the other five activities were selected for testing.
During the experiment, the reference method (i.e., the conventional overall SVM classification method) was applied to classify all the data at all positions for the eight subjects. The data from six subjects at all positions were used as the training data to obtain the overall model. When testing, the overall model was used to classify the data of the other two individuals at any position. In contrast, the proposed OS-SVM method was able to achieve more efficient classification of the four groups of data at positions of 3, 4, 5, and 6 m (±0.3 m), because each Modeli at Ri results from the training data of six persons at the corresponding position, and the data from the remaining two subjects serve as validation data. Ultimately, the final average classification accuracy was obtained from the four-fold cross-validation.
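The difference between the two protocols is essentially a difference in how the training sets are assembled; the following is a data-layout sketch under assumed subject/position indexing, with no real classifier attached.

```python
# Sketch contrasting the two evaluation protocols of Section 4.3: the
# reference method trains one overall model on six subjects' data pooled
# across every position, while OS-SVM trains one model per standard position
# and validates each on the two held-out subjects at that same position.
positions = [3, 4, 5, 6]                        # standard ranges (m)
train_subj, valid_subj = list(range(6)), [6, 7]

# Reference method: a single pooled training set across all positions.
overall_train = [(s, p) for s in train_subj for p in positions]

# OS-SVM: one training set per position; validation stays position-matched.
per_position_train = {p: [(s, p) for s in train_subj] for p in positions}
per_position_valid = {p: [(s, p) for s in valid_subj] for p in positions}

print(len(overall_train), len(per_position_train[3]), len(per_position_valid[6]))
```

Because each per-position model only ever sees feature vectors from its own range, its training distribution matches the validation distribution more closely than the pooled model's does, which is the intuition behind the accuracy gain reported below.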

As shown in Figure 12, for the classification of human activities at changing positions within 6 m, the reference method produced a higher average classification error rate of 21.88%, whereas the method proposed in this paper afforded excellent performance, with an average error of 13.33% (the mean of 8.54%, 13.74%, 12.29%, and 18.75% for the four distances), as shown in Figure 13. In other words, the classification accuracies of the reference and proposed methods are 78.12% and 86.67%, respectively. This is because the training model of the OS-SVM is more similar to the eigenvalues of the actual motion signals in the corresponding positions than those in the reference method.
Meanwhile, increase of through-wall the classification error rate with the increase of through-wall detection range,atthe classification rate18.75% increases from increases gradually, from 8.54% at 3 m, 13.37% 4 m, 12.29% at 5error m, and at 6gradually, m. This trend 8.54% 3 m, 13.37% 4 m, 12.29% at through-wall 5 m, and 18.75% at 6 m. This trend occurs because the increase occursatbecause the at increase of the range inevitably weakens the Doppler highof the through-wall inevitablythese weakens the Doppler high-frequency and reduces these frequency features range and reduces feature differences among the features different activities, thus feature differences among the different activities, thus increasing the misjudgment rate. increasing the misjudgment rate. Common SVM Average Error=21.88% Error1=16.25% Error2=30.83% Error3=11.67% Error4=28.75%


Figure 12. Four-fold cross-validation error distribution of the reference method at changing positions.

[Figure 13 panels: per-position average classification errors of the proposed method: 8.54% (3 m), 13.74% (4 m), 12.29% (5 m), and 18.75% (6 m)]

Figure 13. Four-fold cross-validation error distributions of the proposed method at different positions: (a) 3 m, (b) 4 m, (c) 5 m, and (d) 6 m.

Furthermore, through the overall observation of Figure 13a–d, the probability of classification error caused by confusion between the (1) piaffe and (4) jumping motions is the highest. We think that this is because the high- and low-frequency features of the two activities are relatively strong and similar. Further, according to the results obtained at 5 and 6 m, we found that activity (5), standing with random micro-shakes, is easily misjudged as (2), picking up an object, or (3), waving, as shown in Figure 13c,d. This shows that finer-grained human activities with fewer and weaker high-frequency micro-Doppler features are harder to classify when the through-wall detection range increases.

5. Conclusions

In this paper, we first selected SFCW UWB radar to conduct the through-wall detection of six common finer-grained human activities according to actual detection environments that would be encountered in applications such as anti-terrorism activities and security. We then introduced a comprehensive range-accumulation time-frequency transform (CRATFR) method based on an inverse weight coefficient, according to the characteristics of the finer-grained human activity signals detected by the SFCW radar. This method can make full use of the time-frequency information of different range bins of the UWB radar signal, so as to enhance the micro-Doppler time-frequency spectral features of the finer-grained human activities. The results indicate that different activities show obvious feature differences in their time-frequency spectra.

After the time-frequency analysis of human activities, we classified the different motions by extracting eight effective eigenvalues from the spectra resulting from the CRATFR analysis and utilizing SVM. At a position 3 m behind a wall, the classification accuracies for six activities of eight persons were 98.78% and 93.23%, respectively, for the two scenarios defined in this paper. Furthermore, to correct the deficiency wherein a conventional SVM will easily misjudge activities at different through-wall ranges, this paper proposed a novel classification method, the optimal self-adaption support vector machine (OS-SVM), based on prior human position information. Accordingly, the model included in the OS-SVM better captures the time-frequency features of actual motion signals in the corresponding positions.

At the same time, it avoids the misjudgment that easily occurs because of the similarity of the features of different activities at different positions, which is caused by the weakening of the high-frequency Doppler signals. Experimentally, within the 6 m range, the average classification accuracy for five activities of eight subjects is 86.67%, while the recognition accuracy of the current common method is only 78.12%. Thus, this research is of great significance for the through-wall detection and classification of finer-grained human activities. With further research, it should provide important information for practical applications such as anti-terrorism activities.


Acknowledgments: This work was supported by the National Science & Technology Pillar Program (No. 2014BAK12B02) and the National Natural Science Foundation of China (No. 61327805).

Author Contributions: The authors acknowledge the participants for helping with data acquisition. Thanks are given to Fulai Liang, Hao Lv, Chuantao Li, Fuming Chen, and Jianqi Wang for the suggested corrections.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Fairchild, D.P.; Narayanan, R.M. Micro-Doppler radar classification of human motions under various training scenarios. Proc. SPIE 2013, 8734.
2. Bryan, J.; Kwon, J.; Lee, N.; Kim, Y. Application of ultra-wide band radar for classification of human activities. IET Radar Sonar Navig. 2012, 6, 172–179.
3. Ritchie, M.; Fioranelli, F.; Griffiths, H. Multistatic human micro-Doppler classification of armed/unarmed personnel. IET Radar Sonar Navig. 2015, 9, 857–865.
4. Zhang, Y.; Jing, X.; Jiao, T.; Zhang, Z.; Lv, H.; Wang, J. Detecting and identifying two stationary-human-targets: A technique based on bioradar. In Proceedings of the 2010 First International Conference on Pervasive Computing Signal Processing and Applications (PCSPA), Harbin, China, 17–19 September 2010.
5. Amin, M.; Sarabandi, K. Special issue on remote sensing of building interior. IEEE Trans. Geosci. Remote Sens. 2009, 47, 1267–1268.
6. Yarovoy, A.; Ligthart, L.; Matuzas, J.; Levitas, B. UWB radar for human being detection. IEEE Aerosp. Electron. Syst. Mag. 2006, 21, 10–14.
7. Yarovoy, A.; Zhuge, X.; Savelyev, T.; Ligthart, L. Comparison of UWB technologies for human being detection with radar. In Proceedings of the European Radar Conference (EuRAD 2007), Munich, Germany, 10–12 October 2007; pp. 295–298.
8. Liu, L.; Liu, S. Remote detection of human vital sign with stepped-frequency continuous wave radar. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 775–782.
9. Lyonnet, B.; Ioana, C.; Amin, M.G. Human gait classification using micro-Doppler time-frequency signal representations. In Proceedings of the 2010 IEEE Radar Conference, Washington, DC, USA, 10–14 May 2010; pp. 915–919.
10. Fioranelli, F.; Ritchie, M.; Griffiths, H. Classification of unarmed/armed personnel using the NetRAD multistatic radar for micro-Doppler and singular value decomposition features. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1933–1937.
11. Kim, Y.; Ling, H. Human activity classification based on micro-Doppler signatures using an artificial neural network. In Proceedings of the 2008 IEEE Antennas and Propagation Society International Symposium, San Diego, CA, USA, 5–11 July 2008; pp. 1–4.
12. Kim, Y.; Ling, H. Human activity classification based on micro-Doppler signatures using a support vector machine. IEEE Trans. Geosci. Remote Sens. 2009, 47, 1328–1337.
13. Zenaldin, M.; Narayanan, R.M. Features associated with radar micro-Doppler signatures of various human activities. Proc. SPIE 2015, 9461.
14. Lv, H.; Lu, G.; Jing, X.; Wang, J. A new ultra-wideband radar for detecting survivors buried under earthquake rubbles. Microw. Opt. Technol. Lett. 2010, 52, 2621–2624.
15. Li, W.Z.; Li, Z.; Lv, H.; Lu, G.; Zhang, Y.; Jing, X.; Li, S.; Wang, J. A new method for non-line-of-sight vital sign monitoring based on developed adaptive line enhancer using low centre frequency UWB radar. Prog. Electromagn. Res. 2013, 133, 535–554.
16. Li, Z.; Li, W.; Lv, H.; Zhang, Y.; Jing, X.; Wang, J. A novel method for respiration-like clutter cancellation in life detection by dual-frequency IR-UWB radar. IEEE Trans. Microw. Theory Tech. 2013, 61, 2086–2092.
17. Zhu, Z.; Wang, X.; He, Y. Preliminary study of UWB-radar echo signal and recognition methods about life's microwaving characteristic. Appl. Res. Comput. 2010, 27, 597–599.
18. Li, J.; Zeng, Z.; Sun, J.; Liu, F. Through-wall detection of human being's movement by UWB radar. IEEE Geosci. Remote Sens. Lett. 2012, 9, 1079–1083.
19. Wang, Y.; Yu, X.; Zhang, Y.; Lv, H.; Jiao, T.; Lu, G.; Li, Z.; Li, S.; Jing, X.; Wang, J. Detecting and monitoring the micro-motions of trapped people hidden by obstacles based on wavelet entropy with low centre-frequency UWB radar. Int. J. Remote Sens. 2015, 36, 1349–1366.
20. Smith, G.E.; Ahmad, F.; Amin, M.G. Micro-Doppler processing for ultra-wideband radar data. Proc. SPIE 2012, 8361.
21. Chen, V.C. Joint time-frequency analysis for radar signal and imaging. In Proceedings of the 2007 IEEE International Geoscience and Remote Sensing Symposium, Barcelona, Spain, 23–28 July 2007; pp. 5166–5169.
22. Cristianini, N.; Shawe-Taylor, J. An Introduction to Support Vector Machines and Other Kernel-Based Learning Methods; Cambridge University Press: Cambridge, UK, 2000.
23. Friedman, J.; Hastie, T.; Tibshirani, R. The Elements of Statistical Learning; Springer: Berlin, Germany, 2001.
24. Chang, C.-C.; Lin, C.-J. LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. 2011, 2, 27.
25. Gutta, S.; Huang, J.R.; Jonathon, P.; Wechsler, H. Mixture of experts for classification of gender, ethnic origin, and pose of human faces. IEEE Trans. Neural Netw. 2000, 11, 948–960.
