2014 American Control Conference (ACC) June 4-6, 2014. Portland, Oregon, USA
Target Detection and Target Type & Motion Classification: Comparison of Feature Extraction Algorithms

Yue Li1, Asok Ray1, Thomas A. Wettergren2

Keywords: Feature extraction; Pattern classification; Sonar sensing; Surveillance in ocean environment
Abstract— This paper addresses sensor network-based surveillance for target detection and target type & motion classification. The performance of target detection and classification could be compromised (e.g., due to high rates of false alarm and misclassification) because of inadequacies of feature extraction from (possibly noisy) sensor data and of the subsequent pattern classification over the network. A feature extraction algorithm, called symbolic dynamic filtering (SDF), is investigated for solving the target detection & classification problem. In this paper, the performance of SDF is compared with two commonly used feature extractors, namely, Cepstrum and principal component analysis (PCA). Each of these three feature extractors is executed in conjunction with three well-known pattern classifiers, namely, k-nearest neighbor (k-NN), support vector machine (SVM), and sparse representation classification (SRC). Results of numerical simulation are presented based on a dynamic model of target maneuvering and passive sonar sensing in the ocean environment. These results show that SDF has a consistently superior performance for all tasks – target detection and target type & motion classification.
1 Y. Li and A. Ray are with the Department of Mechanical Engineering, Pennsylvania State University, University Park, PA 16802-1412, USA (email: [email protected], [email protected]).
2 T. A. Wettergren is with the Naval Undersea Warfare Center (NUWC), Newport, RI 02841-1708, USA, and also with the Department of Mechanical Engineering, Pennsylvania State University, University Park, PA 16802-1412, USA (email: [email protected]).

I. INTRODUCTION

Major tools of target detection & classification include adaptation of signal amplitude thresholds [1], Bayesian estimation [2], and pattern recognition [3]. Recently, there has been much interest in classification of target type & motion over wireless sensor networks under uncertain environments. In the context of target detection & classification using acoustic sensors in the ocean environment, Scrimger et al. [4] analyzed a set of 50 source spectra obtained from merchant ships among passenger/ferries, cargo ships, and tankers, where both mean spectra and source-level histograms were largely insensitive to the ship class. Rajagopal et al. [5] reported target classification of different vessels with passive sonar sensors that were subjected to multiple sources of noise radiated from surface ships. Gorman [6] developed a neural network learning procedure to classify the sonar signals between two undersea targets. Azimi et al. [7] introduced wavelet packets into the neural network learning algorithm for target classification in low-SNR situations. Kang et al. [8] analyzed the frequency spectrum signature of passive sonars from the perspective of pattern recognition and compared back propagation neural network (BPNN) and
k-nearest neighbor (k-NN) methods on the features extracted from the Welch power spectral densities for multiple target types. Jin et al. [3] investigated target detection followed by target type classification within a hierarchical structure of unattended ground sensors (UGS), based on symbolic dynamic filtering (SDF) [9]. In SDF, the sensor time series data are symbolized to construct probabilistic finite state automata (PFSA), and low-dimensional feature vectors are obtained from the PFSA. Mallapragada et al. [10] also used SDF for feature extraction to develop a language measure-theoretic tool of pattern classification to identify the type and motion of mobile robots.

Following the work of Bahrampour et al. [11], this paper makes a comparative evaluation of three feature extraction tools, namely, Cepstrum-based, PCA-based, and SDF-based, for target detection & classification in ocean environments by using passive sonar sensing systems. To evaluate the performance of the feature extraction algorithms, all three feature extractors have been tested on the same sets of data, and the extracted features are used in conjunction with three pattern classification tools, namely, k-nearest neighbor (k-NN) [12], support vector machines (SVM) [12], and sparse representation classification (SRC) [13].

The paper is organized into five sections including the present section. Section II describes the dynamic model of moving targets used for numerical simulation. Section III presents the algorithms of the different feature extraction methods. Section IV presents the results of target detection & classification based on an ensemble of time series generated by the dynamic model introduced in Section II. Section V summarizes and concludes this paper with recommendations for future research.

II. DYNAMIC MODEL OF MOVING TARGETS

Passive sonar sensors (e.g., an array of hydrophones) record changes in the ambient acoustic pressure around the device. Since this pressure field is usually analyzed spectrally, commonly used modeling approaches in both military applications [14] and laboratory studies [15] consider the acoustic field as an ensemble of its spectral components. Although this approach is useful for short-duration signals as well as for signals that are comprised of a simple single tone, it may not be effective for capturing the intermittently occurring subtle changes in the pressure levels that are attributable to target motion, especially for maneuvering
targets (i.e., targets that change their speed and/or direction).

This paper assumes that the targets are moving in the ocean environment and that a typical target emits a statistically stationary noise signature. A sensor, located at a position x, receives an acoustic signature y_tgt(x, t) from the target. Then, the received signal at the sensor site is obtained as

y_rec(x, t) = y_tgt(x, t) + n(t)    (1)

where n(t) is the ambient noise. The rationale for the assumption of additive noise in Eq. (1) is that, over a given region of interest, the noise in the received signal is independent of the sensor location x. Furthermore, the noise is assumed to be uniform over a frequency band [15]. However, the signature y_tgt(x, t) of a target signal is a function of the time-varying distance d_tgt(x, t) between the sensor and the moving target, where the target position x_T(t) is a known function of time. Then, the distance to the target is simply given by

d_tgt(x, t) = d(x, x_T(t))

where d(·, ·) is the standard Euclidean distance function. The time series y_tgt(x, t) is generated over a finite set of time steps in the time interval of length T = nδt, which is discretized as

t ∈ T_d ≜ {t_0, t_0 + δt, t_0 + 2δt, ..., t_0 + nδt}

Assuming that the acoustic signature being radiated by the target is statistically stationary over the time interval of length T, the source signature is converted into an effective temporal realization via a standard inverse fast Fourier transform. For each time step in the desired discrete time interval T_d, the distance from the source to the target is computed as d_tgt(x, t). There are two dominant effects of this distance on the received signature. The first effect accrues from the fact that the source level is attenuated by the acoustic medium, where the attenuation loss is assumed to follow the inverse-square law due to spherical spreading, as it is dominant at the short ranges of interest [16]. The second effect accrues from the fact that there is a time delay that varies as a function of the distance due to the time of propagation. It is necessary to compute the received discrete-time interval T_r, which may not be identical to a simple translation of the source time interval (i.e., T_r ≠ T_d + a constant). The sampling time instants of the received signal are computed as

T_r ≜ {(t_0 + d_tgt(x, t_0)/c), (t_0 + δt + d_tgt(x, t_0 + δt)/c), ..., (t_0 + nδt + d_tgt(x, t_0 + nδt)/c)}    (2)

where c is the nominal speed of sound propagation in the medium at the point of interest (e.g., the default value is c = 1500 m/s). The effects of the signal attenuation are realized by computing the source signal degradation at each time step in T_d as

y_tgt(x, t) = S(t) L(d_tgt(x, t))    (3)

where S(t) is the source energy level of the target (i.e., the inverse FFT of the source spectra) and L(d) is the spherical spreading loss for distance d. The resulting degraded signal levels in y_tgt(x, t) are then mapped to the corresponding times in T_r. Finally, a cubic spline interpolation of the resulting signal levels is performed from the nonuniform time steps in T_r to a set of uniform time steps over the same interval. Then, a random realization of the noise component is generated numerically for addition to the resulting signature to create the modeled received acoustic signal.

Fig. 1. Acoustic Signal for a Single Target: (a) sensor target scenario; (b) target nominal acoustic signature; (c) acoustic signal recorded at sensor; (d) power spectral density of captured signals.

The time series of acoustic noise is generated according to its power spectral density characteristics, where different types of targets have noise spectra that are distinguishable from each other. In this paper, there are two types of targets that emit acoustic noise distinguishable by their power spectral density distributions. An example of the received acoustic signal for a single target passing by a sensor site is presented in Fig. 1.

III. FEATURE EXTRACTORS

This section introduces three different feature extraction algorithms, namely, Cepstrum [13], PCA [12], and SDF [9]. Although the details of these algorithms have been well explained in earlier publications, this paper focuses on the underlying concepts of each feature extraction method for time series data recorded by passive sonar sensors.

A. Cepstrum for Feature Extraction
Cepstrum-based feature extraction has been widely used in speech recognition and acoustic signal classification [17]. The advantage of Cepstrum is that it equalizes the importance of different frequency ranges of the sensor signal by taking the logarithm in the frequency domain instead of operating on the original linear scale [18].
In this paper, all small values of the frequency coefficients are discarded before taking the inverse Fourier transform, and the first Nc components are used to generate the Cepstrum features for classification. The goal here is to enhance the computational efficiency at each sonar sensor node, while retaining the critical frequency components of the signal. Algorithm 1 for Cepstrum-based feature extraction is presented below.
Fig. 2. Underlying concept of finite state automata (FSA): partitioning of the preprocessed sensor data with the alphabet Σ = {0, 1, 2, 3} generates a symbol sequence, which drives a finite state machine with states Q = {A, B, C, D}.
Algorithm 1: Cepstrum for feature extraction
Input: Time series data set x ∈ R^{1×N}; cut-off sample N_f (where N_f ≤ N); and dimension of the Cepstrum feature N_c (where N_c ≤ N_f)
Output: Extracted Cepstrum-based feature p ∈ R^{1×N_c} of the time series x
1: Compute the magnitude of the FFT, |F(ω)|, of the given time series, where ω = 1, ..., N
2: Store the first N_f frequency components and discard the rest
3: Compute f_c(t) = ℜ(F^{-1}(log |F(ω)|)), t = 1, ..., N_f, where F(ω) is the Fourier transform of the signal f(t), F^{-1} is the inverse Fourier transform, and ℜ(z) denotes the real part of a complex scalar z
4: Compute the feature p = [f_c(1) f_c(2) ... f_c(N_c)]
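As an illustration, a minimal Python sketch of Algorithm 1 (using NumPy) is given below. The function name and the small offset added before the logarithm are implementation conveniences introduced here, not part of the original algorithm.

```python
import numpy as np

def cepstrum_features(x, Nf, Nc):
    """Minimal sketch of Algorithm 1: real-cepstrum features of a 1-D time series.

    x  : 1-D array of length N (sensor time series)
    Nf : number of low-frequency FFT components retained (Nf <= N)
    Nc : dimension of the returned Cepstrum feature (Nc <= Nf)
    """
    X = np.abs(np.fft.fft(x))                       # step 1: magnitude spectrum |F(w)|
    X = X[:Nf]                                      # step 2: keep the first Nf components
    # step 3: real part of the inverse FFT of the log-magnitude spectrum
    fc = np.real(np.fft.ifft(np.log(X + 1e-12)))    # small offset guards against log(0)
    return fc[:Nc]                                  # step 4: first Nc cepstral coefficients

# usage on a synthetic signal (100 Hz sampling, as in the simulations of Section IV)
if __name__ == "__main__":
    t = np.arange(0, 50, 0.01)
    x = np.sin(2 * np.pi * 20 * t) + 0.1 * np.random.randn(t.size)
    p = cepstrum_features(x, Nf=500, Nc=30)
    print(p.shape)                                  # (30,)
```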
B. Principal Component Analysis for Feature Extraction

The principal component analysis (PCA) for feature extraction has been widely used in diverse applications [12]. Compared to Cepstrum, which only takes advantage of the information embedded in a single time series, PCA extracts the signal information by projecting the training data into a low-dimensional feature space [12]. Algorithm 2 for PCA-based feature extraction is presented below.

Algorithm 2: Principal component analysis for feature extraction
Input: Training time series data sets x_j ∈ R^{1×N}, j = 1, ..., M; tolerance η ∈ (0, 1); and test time series data set x ∈ R^{1×N}
Output: Extracted feature vector p ∈ R^{1×m} for the time series x
1: Construct the "centered version" training data matrix X ∈ R^{M×N}, where each row x_j has zero mean
2: Compute the matrix S = (1/M) X X^T
3: Compute the normalized eigenvectors {v_i} of S with their corresponding eigenvalues {λ_i} in decreasing order of magnitude
4: Compute the normalized eigenvectors u_i = (1/√(M λ_i)) X^T v_i
5: Find the smallest m ≤ M such that Σ_{i=1}^{m} λ_i > η Σ_{i=1}^{M} λ_i
6: Construct the (N × m) projection matrix W = [u_1, u_2, ..., u_m]
7: Generate the (1 × m) reference pattern p = x W
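A minimal Python sketch of Algorithm 2 follows. The eigenvalue clipping and the default value of `eta` are implementation conveniences assumed here; they are not prescribed by the algorithm.

```python
import numpy as np

def pca_features(X_train, x_test, eta=0.95):
    """Minimal sketch of Algorithm 2 (PCA-based feature extraction).

    X_train : (M, N) array of training time series, one series per row
    x_test  : (N,) test time series
    eta     : tolerance in (0, 1), fraction of eigenvalue mass to retain
    """
    M, N = X_train.shape
    Xc = X_train - X_train.mean(axis=1, keepdims=True)   # step 1: zero-mean rows
    S = (Xc @ Xc.T) / M                                   # step 2: (M x M) matrix
    lam, V = np.linalg.eigh(S)                            # step 3: eigen-decomposition
    lam, V = lam[::-1], V[:, ::-1]                        # decreasing eigenvalues
    lam = np.clip(lam, 1e-12, None)                       # guard against round-off
    frac = np.cumsum(lam) / np.sum(lam)
    m = int(np.searchsorted(frac, eta)) + 1               # step 5: smallest adequate m
    W = (Xc.T @ V[:, :m]) / np.sqrt(M * lam[:m])          # steps 4 and 6: (N x m) projection
    return x_test @ W                                     # step 7: (1 x m) feature
```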
C. Symbolic Dynamic Filtering (SDF) This subsection presents pertinent information regarding a recently developed pattern recognition tool, called symbolic dynamic filtering (SDF) [9], which has been used for target detection & classification in a variety of physical applications. In SDF, a time series of sensor signals is first converted into a symbol sequence that, in turn, leads to the construction of a probabilistic finite state automaton (PFSA) [19][20][21][22]. Figure 2 illustrates the concept of constructing (deterministic) finite state automata (FSA) from (possibly preprocessed) time series.
1) Symbolization of Time Series: This step requires partitioning (also known as quantization) of the time series data of the measured signal. The signal space is partitioned into a finite number of cells that are labeled as symbols, i.e., the number of cells is identically equal to the cardinality |Σ| of the (symbol) alphabet Σ. As an example for the one-dimensional time series in Fig. 2, the alphabet Σ = {α, β, γ, δ}, i.e., |Σ| = 4, and three partitioning lines divide the ordinate (i.e., y-axis) of the time series profile into four mutually exclusive and exhaustive regions. Considerations for the choice of alphabet size |Σ| include the maximum discrimination capability of a symbol sequence and the associated computational complexity. As the partitioning tool for the ensemble of time series data, this paper makes use of maximum entropy partitioning (MEP), which maximizes the entropy of the generated symbols. In MEP, the information-rich cells of a data set are partitioned finer and those with sparse information are partitioned coarser, i.e., each cell contains an (approximately) equal number of data points under MEP.

2) Construction of probabilistic finite state automata (PFSA): D-Markov machines are models of probabilistic languages, where the future symbol is causally dependent on the (most recently generated) finite set of (at most) D symbols. Thus, D-Markov machines form a proper subclass of PFSA with applications in various fields of research such as anomaly detection [9] and robot motion classification [10]. A D-Markov chain is a statistically (quasi-)stationary stochastic process S = ··· s_{-1} s_0 s_1 ···, where the probability of occurrence of a new symbol depends only on the last D symbols, i.e.,

P[s_n | s_{n-1} ··· s_{n-D} ···] = P[s_n | s_{n-1} ··· s_{n-D}]

Words of length D on a symbol string are treated as the states of the D-Markov machine before any state-merging is executed. As a consequence of having D = 1, the number of states is equal to the number of symbols, i.e., |Q| = |Σ|, where the set of all possible states is denoted as Q = {q_1, q_2, ..., q_{|Q|}} and |Q| is the number of (finitely many) states. The (estimated) state transition probabilities are defined as

p(q_k | q_l) ≜ N(q_l, q_k) / Σ_{i=1,...,|Q|} N(q_l, q_i),   ∀ q_k, q_l ∈ Q    (4)

where N(q_l, q_k) is the total count of events when q_k occurs adjacent to q_l in the direction of motion. Having computed all of these probabilities p(q_k | q_l) ∀ q_k, q_l ∈ Q, the (estimated) state transition probability matrix of the PFSA is given as

      [ p(q_1 | q_1)       ···   p(q_{|Q|} | q_1)     ]
Π  =  [      ⋮              ⋱           ⋮             ]    (5)
      [ p(q_1 | q_{|Q|})   ···   p(q_{|Q|} | q_{|Q|}) ]

By an appropriate choice of partitioning, it is ensured that the resulting Markov chain model yields an irreducible stochastic matrix Π. The rationale is that, under statistically stationary conditions, the probability of every state being reachable from any other state within finitely many transitions must be strictly positive [23]. Algorithm 3 for SDF-based feature extraction is presented below.
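To complement Algorithm 3 (listed next), the following minimal Python sketch illustrates the construction for D = 1: maximum entropy partitioning via empirical quantiles, symbolization, and estimation of Π by Eqs. (4) and (5). The use of the stationary distribution of Π as the low-dimensional feature vector is an assumption made here for illustration only; the listing of Algorithm 3 stops at the construction of Π.

```python
import numpy as np

def mep_partition(y, alphabet_size):
    """Maximum entropy partitioning: cell boundaries at empirical quantiles, so each
    of the |Sigma| cells holds (approximately) the same number of data points."""
    probs = np.linspace(0, 1, alphabet_size + 1)[1:-1]
    return np.quantile(y, probs)                      # |Sigma| - 1 partition lines

def symbolize(x, boundaries):
    """Map each sample to the index of the cell it falls into (symbols 0..|Sigma|-1)."""
    return np.searchsorted(boundaries, x)

def transition_matrix(symbols, alphabet_size):
    """Estimate Pi for D = 1 (states = symbols), following Eqs. (4) and (5):
    row-normalized counts of consecutive symbol pairs."""
    counts = np.zeros((alphabet_size, alphabet_size))
    for a, b in zip(symbols[:-1], symbols[1:]):
        counts[a, b] += 1
    counts += 1e-12                                   # avoid division by zero rows
    return counts / counts.sum(axis=1, keepdims=True)

def sdf_feature(train_series, test_series, alphabet_size=4):
    """Sketch of the SDF procedure: learn a common MEP partition from the pooled
    training series, symbolize the test series, and build Pi. The stationary
    distribution of Pi is returned as a |Sigma|-dimensional feature (an assumption
    made here for illustration)."""
    pooled = np.concatenate(train_series)
    boundaries = mep_partition(pooled, alphabet_size)
    s = symbolize(test_series, boundaries)
    Pi = transition_matrix(s, alphabet_size)
    # stationary distribution: left eigenvector of Pi associated with eigenvalue 1
    w, V = np.linalg.eig(Pi.T)
    p = np.real(V[:, np.argmax(np.real(w))])
    return p / p.sum(), Pi
```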
Algorithm 3: Symbolic dynamic filtering for feature extraction
Input: Training time series data sets x_j ∈ R^{1×N}, j = 1, ..., M; a test time series data set x ∈ R^{1×N}; and number of symbols |Σ|
Output: Extracted SDF-based feature vector p ∈ R^{1×|Σ|} for the time series x
1: Initialize y = ∅
2: for j = 1 to M do
3:    y = y ∪ x_j
4: end for
5: Partition y using MEP to obtain the (common) partition vector ℘
6: Use ℘ on the test data set x to obtain the symbol string s
7: Construct the (irreducible) state transition probability matrix Π by using Eqs. (4) and (5)

IV. RESULTS AND DISCUSSION

This section presents the results of target detection & classification based on the ensemble of (sonar sensor) time series generated from a dynamic model of moving targets in a simulated ocean environment. First, the performance of the three feature extraction algorithms is compared on different binary classification problems for the single-sensor case. Then, the detection and classification capability of a single passive sonar sensor is tested by numerical simulation on a sensor network for target detection, classification, and tracking.

A. Binary Classifications of a Single Sensor

This subsection discusses the target detection & classification problem for the single-sensor case. One passive sonar sensor is trained and tested for three binary classification problems: (i) detection of target presence, (ii) classification of the target noise type (after it is detected), and (iii) classification of the target motion (in conjunction with (ii)) from the convexity or concavity of the motion curve. Three commonly used classifiers have been applied for target detection & classification in conjunction with each of the feature extractors described in the preceding section.

1) Target Simulation and Generation of Data Sets: In the simulation, a passive sonar sensor is deployed at the central bottom position of a given surveillance area (500 m × 250 m). Four target classes are generated by the dynamic model introduced in Section II. Each target class is a combination of one of two distinct types of target noise and one of two different interpretations of target curved motion, as depicted in Fig. 3. Targets randomly travel inside the surveillance region at a fixed speed of 3 m/s (approximately 6 knots), while the passive sonar sensor records the time series of acoustic amplitude at a sampling frequency of 100 Hz for every case. Each target class contains a data set of 100 simulation cases. In addition, 400 cases with no target present (i.e., pure environmental noise) are generated for the target detection problem. In this section, these three classification problems are treated separately, because the main objective is to compare the performance of Cepstrum, PCA, and SDF as feature extractors; treating the detection & classification problems separately yields an unambiguous comparison.

Fig. 3. Sensor interpretations of curved target motion: (a) convex from the view of the sensor; (b) concave from the view of the sensor. In both cases, the turning angle is uniformly distributed from π/2 to 3π/4.

Fig. 4. Different noise types of moving targets: (a) target noise type I, Gaussian with mean 20 Hz and standard deviation 5 Hz; (b) target noise type II, Gaussian with mean 22 Hz and standard deviation 5 Hz.

2) Classification Results of the Single Sensor Case: A passive sonar sensor is deployed to collect acoustic signals for detection and classification of moving targets. Features extracted from the acoustic signals by each of the PCA, Cepstrum, and SDF algorithms for each task are respectively assigned into training and test sets. The extracted features are then tested with three classifiers: (i) support vector machines (SVM) [12], (ii) k-nearest neighbor (k-NN) [12], and (iii) sparse representation classification (SRC) [24]. While SRC has been recently introduced as a pattern classification tool [25], SVM and k-NN are well-known standard tools of pattern classification [3] [24]. Design parameters are optimized for each of the three classifiers. A linear kernel with a maximum of 2000 iterations is chosen for the SVM classifier [12]; the neighborhood size k = 3 is chosen for the k-NN classifier [12]; and an upper bound of error ε = 0.1 is chosen for the SRC classifier [24]. Decisions of target detection & classification are made as explained below.
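The following Python sketch (using scikit-learn) shows how the stated classifier settings translate into code: a linear-kernel SVM capped at 2000 iterations and a k-NN classifier with k = 3. The feature matrices below are synthetic placeholders standing in for the extracted feature vectors, and the SRC classifier is omitted because it additionally requires an ℓ1-minimization solver with the error bound ε = 0.1.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

# Placeholder feature matrices: rows are feature vectors produced by one of the
# three extractors (Cepstrum, PCA, or SDF); labels are 0/1 for a binary task.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 8)), rng.integers(0, 2, 200)
X_test = rng.normal(size=(50, 8))

# SVM with a linear kernel and a cap of 2000 iterations, as stated in the text
svm = SVC(kernel="linear", max_iter=2000).fit(X_train, y_train)
# k-NN with neighborhood size k = 3
knn = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

print(svm.predict(X_test)[:5], knn.predict(X_test)[:5])
```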
Fig. 5. Simulation results for target detection and classification in a sensor network (legend: target trajectory, sensors with convex view, sensors with concave view): (a) Case I: all nodes with correct results; the entry and exit locations lie between two adjacent nodes with different results. (b) Case II: one node with concave view misclassifies itself as convex view. (c) Case III: one node with convex view misclassifies itself as concave view.

TABLE I
CONFUSION MATRICES FOR TARGET DETECTION

                                   k-NN                          SVM                           SRC
                         Target Present  No Target    Target Present  No Target    Target Present  No Target
Cepstrum  Target Present       200            0             200            0             200            0
          No Target              0          200               0          200               0          200
PCA       Target Present       200            0             200            0               0          200
          No Target            174           26             119           81               0          200
SDF       Target Present       200            0             200            0             200            0
          No Target              0          200               0          200               0          200

TABLE II
CONFUSION MATRICES FOR TARGET NOISE TYPE CLASSIFICATION

                          k-NN                 SVM                  SRC
                    Type I   Type II     Type I   Type II     Type I   Type II
Cepstrum  Type I      100         0        100         0        100         0
          Type II       0       100          0       100          0       100
PCA       Type I       85        15         53        47         56        44
          Type II      90        10         53        47         35        65
SDF       Type I       93         7        100         0        100         0
          Type II       0       100          0       100          0       100

TABLE III
CONFUSION MATRICES FOR TARGET MOTION CLASSIFICATION

                           k-NN                  SVM                   SRC
                    Convex   Concave     Convex   Concave     Convex   Concave
Cepstrum  Convex       67        33         63        37         92         8
          Concave      21        79         20        80         23        77
PCA       Convex      100         0         99         1         25        75
          Concave     100         0         43        57          0       100
SDF       Convex       80        20         81        19         81        19
          Concave       3        97         12        88          5        95
Tables I, II, and III present averaged confusion matrices related to the tasks of target detection, target type classification, and target motion classification, respectively. The results show that SDF, as a feature extractor, clearly yields superior performance for all three tasks. While Cepstrum works well for target detection and target type classification, its performance is slightly degraded for target motion classification. In all three tasks, the PCA-based feature extraction appears to be inadequate in terms of achieving high accuracy regardless of the choice of a classifier.
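For instance, reading Table I for PCA with the k-NN classifier, 174 of the 200 no-target cases are declared as targets, i.e., a false alarm rate of 87%, whereas Cepstrum and SDF produce no false alarms for target detection.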
B. Application in a Sensor Network for Local Target Tracking

The results of target detection and target type & motion classification are presented under simulated scenarios with multiple sensors. To generate these results, 12 passive sonar sensors are placed in the (300 m × 300 m) simulated surveillance region, as displayed in Fig. 5. For each of the simulation runs, there is one target traveling across the surveillance region, as shown by the blue curves in Fig. 5. Each sensor first performs target detection. If a sensor node makes a positive detection, then it conducts target noise type classification. In the simulation, all sensors record the results for these two tasks of target detection and type classification. Subsequently, each sensor node reports the results of target motion classification. It is seen from the results of the three simulation runs that the target trajectory information (e.g., entry location, exit location, and motion curve pattern) can be predicted from the reports generated by the sensor nodes.

V. SUMMARY, CONCLUSIONS, AND FUTURE WORK

This paper addresses sensor network-based surveillance in the ocean environment, and the objective is to achieve high probabilities of correct decisions for target detection and target type & motion classification with low false alarm rates. To this end, the performance of three feature extraction algorithms, namely, Cepstrum, Principal Component Analysis (PCA), and Symbolic Dynamic Filtering (SDF), has been compared. Each of these feature extraction algorithms has been executed in conjunction with three classification algorithms, namely, k-Nearest Neighbor (k-NN), Support Vector Machines (SVM), and Sparse Representation Classification (SRC). The results show that the performance of SDF-based feature extraction is consistently superior to that of PCA-based feature extraction and is, on the average, better than that of Cepstrum in terms of successful detection, false alarm, and overall correct classification rates.

Much theoretical and experimental research is needed before the presented algorithms of target detection and classification can be used in practice. In this regard, a few key topics of future research are delineated below.
(i) Comparison with additional methods of feature extraction (i.e., besides Cepstrum and PCA) in conjunction with other classification algorithms (i.e., besides k-NN and SVM).
(ii) Development of a rigorous field test procedure for validating robustness of SDF-based feature extraction under different conditions (e.g., sensor configuration and presence of multiple targets) and data types (i.e., other than acoustic amplitude).

REFERENCES

[1] K. Mukherjee, A. Ray, T. Wettergren, S. Gupta, and S. Phoha, "Real-time adaptation of decision thresholds in sensor networks for detection of moving targets," Automatica, vol. 47, no. 1, pp. 185–191, 2011.
[2] C. Kreucher and B. Shapo, "Multitarget detection and tracking using multisensor passive acoustic data," Oceanic Engineering, IEEE Journal of, vol. 36, no. 2, pp. 205–218, 2011.
[3] X. Jin, S. Sarkar, A. Ray, S. Gupta, and T. Damarla, "Target detection and classification using seismic and PIR sensors," IEEE Sensors Journal, vol. 12, no. 6, pp. 1709–1718, 2012.
[4] P. Scrimger and R. M. Heitmeyer, "Acoustic source-level measurements for a variety of merchant ships," tech. rep., DTIC Document, 1988.
[5] R. Rajagopal, B. Sankaranarayanan, and P. Ramakrishna Rao, "Target classification in a passive sonar-an expert system approach," in Acoustics, Speech, and Signal Processing, 1990. ICASSP-90., 1990 International Conference on, pp. 2911–2914, IEEE, 1990.
[6] R. P. Gorman and T. J. Sejnowski, "Analysis of hidden units in a layered network trained to classify sonar targets," Neural Networks, vol. 1, no. 1, pp. 75–89, 1988.
[7] M. R. Azimi-Sadjadi, D. Yao, Q. Huang, and G. J. Dobeck, "Underwater target classification using wavelet packets and neural networks," Neural Networks, IEEE Transactions on, vol. 11, no. 3, pp. 784–794, 2000.
[8] C. Kang, X. Zhang, A. Zhang, and H. Lin, "Underwater acoustic targets classification using welch spectrum estimation and neural networks," in Advances in Neural Networks–ISNN 2004, pp. 930–935, Springer, 2004.
[9] A. Ray, "Symbolic dynamic analysis of complex systems for anomaly detection," Signal Processing, vol. 84, pp. 1115–1130, July 2004.
[10] G. Mallapragada, A. Ray, and X. Jin, "Symbolic dynamic filtering and language measure for behavior identification of mobile robots," IEEE Transactions on Systems, Man, and Cybernetics Part B: Cybernetics, vol. 42, pp. 647–659, June 2012.
[11] S. Bahrampour, A. Ray, S. Sarkar, T. Damarla, and N. Nasrabadi, "Performance comparison of feature extraction algorithms for target detection and classification," Pattern Recognition Letters, vol. 34, pp. 2126–2134, December 2013.
[12] C. Bishop, Pattern Recognition. New York, NY, USA: Springer, 2006.
[13] N. Nguyen, N. Nasrabadi, and T. Tran, "Robust multi-sensor classification via joint sparse representation," in Proceedings of the 14th International Conference on Information Fusion (FUSION), pp. 1–8, 5-8 July 2011.
[14] D. Wagner, Naval Operations Analysis, 3rd ed. New York, NY, USA: U.S. Naval Institute, 2009.
[15] R. Urick, Principles of Underwater Sound, 3rd ed. New York, NY, USA: McGraw Hill, 2009.
[16] L. Kinsler, A. Frey, A. Coppens, and J. Sanders, Fundamentals of Acoustics, 4th ed. New York, NY, USA: Wiley, 1999.
[17] D. Childers, D. Skinner, and R. Kemerait, "The cepstrum: A guide to processing," Proceedings of the IEEE, vol. 65, no. 10, pp. 1428–1443, 1977.
[18] A. Oppenheim and R. Schafer, Digital Signal Processing. Prentice Hall, NJ, USA, 1975.
[19] G. Pola and P. Tabuada, "Symbolic models for nonlinear control systems: Alternating approximate bisimulations," SIAM Journal of Control and Optimization, vol. 48, no. 2, pp. 719–733, 2009.
[20] E. Vidal, F. Thollard, C. de la Higuera, F. Casacuberta, and R. Carrasco, "Probabilistic finite-state machines - Part I and Part II," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, pp. 1013–1039, 2005.
[21] M. Vidyasagar, "The complete realization problem for hidden Markov models: a survey and some new results," Mathematics of Control, Signals, and Systems, vol. 23, pp. 1–65, December 2011.
[22] P. Adenis, Y. Wen, and A. Ray, "An inner product space on irreducible and synchronizable probabilistic finite state automata," Mathematics of Control, Signals, and Systems, vol. 23, no. 1, pp. 281–310, January 2012.
[23] A. Berman and R. Plemmons, Nonnegative Matrices in the Mathematical Sciences. Philadelphia, PA, USA: SIAM Publications, 1994.
[24] N. H. Nguyen, N. M. Nasrabadi, and T. D. Tran, "Robust multi-sensor classification via joint sparse representation," in Information Fusion (FUSION), 2011 Proceedings of the 14th International Conference on, pp. 1–8, IEEE, 2011.
[25] J. Wright, A. Yang, A. Ganesh, S. Sastry, and Y. Ma, "Robust face recognition via sparse representation," The IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 2, pp. 210–227, 2008.