Optical Engineering 45(7), 077002 (July 2006)

Synthetic aperture radar target detection using a neural network with fractal dimension

Yu-Chang Tzeng, National United University, Department of Electronics Engineering, Miao-Li, Taiwan

Kun-Shan Chen, National Central University, Center for Space and Remote Sensing Research, Chung-Li, Taiwan

Abstract. The fractal dimension is used in conjunction with a neural network to quantify the effects of the chaotic behavior of radar clutter on the geometric aspects of target detection by synthetic aperture radar. To demonstrate the effectiveness of the proposed method, results are compared with those of the conventional constant-false-alarm-rate algorithm and the neural network technique. It is shown that the use of the fractal dimension substantially improves detection performance as measured by several figures of merit: the detection rate, the false-detection rate, and the loss detection rate. © 2006 Society of Photo-Optical Instrumentation Engineers.

[DOI: 10.1117/1.2227019]

Subject terms: target detection; SAR; fractal dimension; neural network. Paper 050634R received Aug. 9, 2005; revised manuscript received Dec. 11, 2005; accepted for publication Dec. 13, 2005; published online Jul. 20, 2006.

1 Introduction

Automatic detection of targets in a radar image is an important yet complicated problem with many applications, such as assessing the battlefield situation over large areas to locate individual targets on land, at sea, or in the air. To detect a target in the presence of heavy clutter, the radar echoes from the target have to compete with the return from the clutter. Various approaches have been proposed for automatic target detection, based on statistical techniques such as knowledge-based and model-based techniques, physical principles, detection theory, neural networks, genetic algorithms, and multiresolution processing. Among them, neural network approaches1–4 are widely adopted on account of their adaptivity, optimality, ruggedness, robustness, and parallelism, to name a few advantages. Khabou and Gader1 adopted an entropy-optimized shared-weight neural network for automatic target detection: they used the entropy as a means of evaluating features and added an entropy maximization term to the objective function. Qu et al.2 utilized a structure-context-based fuzzy neural network (SCBFNN) for automatic target detection. The SCBFNN utilizes a fuzzy measure to depict the network's output objective function and uses the targets' structure context to confine the neighborhood weighting and to retain target attributes such as profile and shape. Perlovsky et al.3 introduced a model-based neural network whose adaptive learning is based on a priori knowledge (the physical laws of electromagnetic scattering), with adaptations to target detection. The network combines deterministic and statistical aspects of the data; its interconnections provide for fuzzy association between the data and the models, and its learning mechanism provides for the adaptation of model parameters.
Leung et al.4 used a GA-RBF neural network to reconstruct the sea-clutter dynamical system from a time series of radar images without requiring a priori knowledge, and to enhance the ability to detect small objects embedded in the sea. The genetic algorithm (GA) was first used to search for the optimum values of the radial basis function (RBF) parameters to obtain an optimal reconstruction of the sea-clutter dynamics based on an RBF neural network. Other researchers adopted multiresolution processing for clutter modeling and target detection. Subotic et al.5 used multiresolution processing of SAR images to exploit the signature differences between natural clutter and man-made objects to detect targets; they developed statistical models for the multiresolution SAR signatures of both. Greer et al.6 developed a maximum-likelihood multiresolution-based approach to the laser radar image processing problem, using the expectation-maximization (EM) algorithm to fit a multiresolution Haar wavelet basis to laser radar range data, yielding a computationally efficient and numerically robust procedure. To detect a target embedded in clutter, the radar echoes from the target have to compete with the echoes from the clutter. The radar echo from clutter is usually assumed to follow some probability distribution based on its statistical properties.3 In recent studies, however, clutter has been shown to be chaotic in a way that may be modeled by a nonlinear deterministic dynamical system.4 As a result, clutter modeling becomes a problem of chaotic system reconstruction from time series measurements. Let {x_i} be a chaotic time series generated by a k-dimensional dynamical system. According to the Takens embedding theorem,7 reconstructing the dynamics from a chaotic time series becomes a single-step prediction problem4

x_i = f(x_{i-1}, ..., x_{i-m})  (1)

for i ≥ m and m ≥ 2k + 1, in which f is the mapping function and m is the embedding dimension. The theorem states that a dynamical system can be reconstructed from a sequence of observations. For radar images lacking a time series, the clutter may be treated as a spatial chaotic system because of chaotic scattering and nonlinear wave phenomena.8 Therefore, the clutter signal can be reconstructed in the spatial domain instead of the time domain. Based on these arguments, in the next section the fractal dimension is used to quantify the chaotic behavior of the clutter in its geometric aspects, and subsequently to reconstruct the clutter signal. A dynamic learning neural network (DLNN)9 applied to reconstruct the spatial chaotic clutter model is used in conjunction with the estimated fractal dimension to perform detection under various clutter depths. In Sec. 3, experimental results of the proposed approach are demonstrated and compared with those of the conventional constant-false-alarm-rate (CFAR) algorithm and the neural network technique. Finally, some conclusions are drawn from this study.

2 Automatic Target Detector

Before describing the proposed approach, we briefly outline the problem of automatic target detection to facilitate the descriptions that follow. The received signal at the i'th line and j'th sample of an image may be expressed as

H1: x_ij = s_ij + n_ij,  (2a)

H0: x_ij = n_ij,  (2b)

where s_ij is the target signal and n_ij is the clutter process. The conventional CFAR detector requires determining a threshold for a given probability of false alarm (PFA). Therefore, signal detection can be formulated as a binary decision problem

x_ij ∈ H1 if x_ij > η; H0 if x_ij ≤ η,  (3)

where η is the decision threshold on the received signal. Note that PFA = P(x ≥ η) under the hypothesis H0. The threshold may be set to a desired value from the distribution of the clutter signal. In what follows, we apply the DLNN to reconstruct the clutter dynamical model for target detection without requiring a time series of radar images. The DLNN is a modified multilayer perceptron (MLP) with two modifications: (1) every node in the input layer and in all hidden layers is fully connected to the output layer, and (2) the activation function is removed from each output node, permitting the output of the network to be expressed as a linear function of the output weight vector in terms of the polynomial basis function10

y = Wz,  (4)

where W is a long weight matrix comprising the weights connected to the network outputs, and z is a long input vector comprising the activations within the network. Hence, the Kalman filtering technique11,12 can be directly applied to solve the linear equation (4), which in turn adjusts the weights of the neural network and limits the training error to a preselected bound. To apply the DLNN to target detection, in the training phase a subimage containing no target is selected to serve as the training samples. In order to approximate the spatially chaotic system (the clutter dynamical model), the training pattern is arranged so that the clutter signal x_ij is the desired output and a w × w window centered at x_ij contains the input signals. Thus, the predicted clutter signal x̂_ij can be obtained by applying the selected window to the trained neural network. When a received signal carrying target information is to be predicted, the resulting prediction error is inherently large. Thus, the target detection problem can be characterized as a binary decision problem

x_ij ∈ H1 if ε_ij = |x_ij − x̂_ij| > α; H0 if ε_ij = |x_ij − x̂_ij| ≤ α,  (5)

where α is the decision threshold on the prediction error. The probability distribution for the hypothesis H0 can be estimated numerically using the histogram of the clutter prediction error. However, in Eq. (1) a suitable embedding dimension m has to be determined in order to reconstruct the clutter signals effectively. The more complex the nonlinear clutter model, the larger the embedding dimension m should be; this entails a larger window size and a more extensive computational effort to train the neural network. The condition m ≥ 2k + 1 is sufficient but not necessary for dynamic reconstruction. The procedure of finding a suitable m is called embedding, and the minimum integer m that achieves dynamic reconstruction is called the embedding dimension. A reliable way to estimate it is the method of false nearest neighbors.13 In this method, a systematic survey of the data points and their neighbors is made in dimension k = 1, then k = 2, and so on. When the apparent neighbors stop being "unprojected" by adding more coordinates to Eq. (1), that value of k yields an estimate of the embedding dimension m. To ease the computational burden, we use the fractal dimension8 to quantify the chaotic behavior of the radar clutter in its geometric aspects. For simplicity, the fractal dimension is determined by box counting. The fractal dimension D is defined to be the number that satisfies14

N_r = lim_{r→0} γ r^(−D),  (6)

where r is the side length of the boxes that cover the space occupied by the geometric objects under consideration, N_r is the number of boxes needed to contain all the points of the geometric objects, and γ is a proportionality constant. Taking the logarithm of both sides of Eq. (6) gives

D = lim_{r→0} [ −(log N_r)/(log r) + (log γ)/(log r) ].  (7)

As r becomes very small, the second term in Eq. (7) goes to zero, and we may approximate

D = −lim_{r→0} (log N_r)/(log r).  (8)
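The false-nearest-neighbors test mentioned above can be sketched as follows. This is a minimal illustration, not code from the paper: the distance-ratio tolerance, the test series, and the function name are assumptions.

```python
import numpy as np

def false_nearest_neighbors(x, k, ratio_tol=15.0):
    """Fraction of false nearest neighbors at embedding dimension k.

    A neighbor in dimension k is declared 'false' if it separates
    sharply once the (k+1)-th delay coordinate is added.
    """
    n = len(x) - k  # number of delay vectors with one extra sample available
    # delay vectors of dimension k: v_i = (x_i, ..., x_{i+k-1})
    vecs = np.array([x[i:i + k] for i in range(n)])
    false_count = 0
    for i in range(n):
        d = np.linalg.norm(vecs - vecs[i], axis=1)
        d[i] = np.inf                     # exclude the point itself
        j = int(np.argmin(d))             # nearest neighbor in dimension k
        extra = abs(x[i + k] - x[j + k])  # separation added by the new coordinate
        if d[j] > 0 and extra / d[j] > ratio_tol:
            false_count += 1
    return false_count / n

# Increase k until the FNN fraction drops toward zero; that k estimates m.
rng = np.random.default_rng(0)
x = np.sin(0.3 * np.arange(500)) + 0.01 * rng.standard_normal(500)
for k in range(1, 5):
    print(k, false_nearest_neighbors(x, k))
```

For a noisy sine wave the fraction is large at k = 1 (points with similar values but opposite phase look like neighbors) and drops once the embedding unfolds the trajectory.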

There exist a number of techniques to estimate the fractal dimension of an image. Among them, the differential box-counting (DBC) technique14 has proved to be computationally the least complex and easy to implement.
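A minimal sketch of the box-counting estimate of Eqs. (6)–(8), assuming a square grayscale patch, is given below. The grid sizes and the two-point log-log fit are illustrative choices, and `fractal_dimension` is not code from the paper.

```python
import numpy as np

def fractal_dimension(img, grid_sizes=(3, 9)):
    """Differential box-counting estimate of the fractal dimension D.

    For each grid size s: count n_r(i, j) = u - l + 1 per s x s cell
    (u, l: max/min gray level in the cell), sum the counts to N_r,
    then fit the slope of log N_r against log(1/r), with r = s / M,
    which approximates Eq. (8) as r -> 0."""
    M = img.shape[0]
    log_inv_r, log_N = [], []
    for s in grid_sizes:
        N_r = 0
        for i in range(0, M - s + 1, s):
            for j in range(0, M - s + 1, s):
                cell = img[i:i + s, j:j + s]
                N_r += int(cell.max()) - int(cell.min()) + 1
        log_inv_r.append(np.log(M / s))  # log(1/r)
        log_N.append(np.log(N_r))
    slope, _ = np.polyfit(log_inv_r, log_N, 1)  # slope ~ D
    return slope

# Sanity check: a perfectly flat gray patch behaves like a smooth
# surface, so its estimated dimension should be close to 2.
flat = np.full((9, 9), 100, dtype=np.uint8)
print(fractal_dimension(flat))
```

With only two grid sizes the fit is exact through two points; in practice more grid sizes give a more stable slope estimate.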


Fig. 1 Experimental results (case 1): (a) original SAR image, (b) CFAR method, (c) DLNN method, and (d) DLNN + fractal dimension.

An M × M image is partitioned into grids of size s × s, where M/2 ≥ s > 1, and r = s/M. At grid (i, j), let the minimum and maximum gray levels of the image in this grid be l and u, respectively. The number of boxes spanning the minimum and maximum gray levels at grid (i, j) is then

n_r(i, j) = u − l + 1.  (9)

The total number of boxes in the whole region of interest is simply the sum over all grids:

N_r = Σ_{i,j} n_r(i, j).  (10)

Using Eqs. (9) and (10) in Eq. (8), the fractal dimension d_ij at each image pixel can be estimated for hypothesis testing as described below. When a received signal containing a target is predicted, its fractal dimension is expected to be greater than that of the clutter. Therefore, target detection becomes a binary decision problem

x_ij ∈ H1 if d_ij > β; H0 if d_ij ≤ β,  (11)

where β is the decision threshold on the fractal dimension, and PFA = P(d ≥ β) is the probability of false alarm under the hypothesis H0. The probability distribution for the hypothesis H0 can be estimated numerically using the histogram of the clutter fractal dimension. The performance of an automatic target detector may now be assessed by the following evaluation measures: the detection rate (DR), the false-detection rate (FDR), and the loss detection rate (LDR). Let the standard area (A_s) be the area occupied by the target, the detected area (A_d) be the detected target area, and the correct area (A_c) be the area of the overlap between the standard and detected areas. The measures are then defined as follows2:

DR = A_c / A_s,  (12a)

FDR = |A_d − A_c| / A_s,  (12b)

LDR = |A_s − A_c| / A_s.  (12c)

Fig. 2 Same as Fig. 1, but for case 2.

The detection performance measures of Eq. (12) are used to test the proposed method, and the results are given in the next section.

3 Results and Discussion

In this section, we evaluate the proposed detector using the MSTAR SAR image data set.15 The images were acquired at X band with 1-ft spatial resolution. There are a total of 17 image sets, each containing a particular target imaged at a certain depression angle. Each data set contains hundreds of target aspect poses (azimuth angles) for that target at the given depression angle. Details of the target and radar imaging geometry, radar parameters, and target parameters are given in Ref. 15. In this study, a total of four test images, representing high degrees of difficulty for detection, were selected to verify and validate the proposed approach. To start the neural network, a set of 32 × 32 subimages over the selected area was used as a training sample


Fig. 5 Original image (a), and effects of threshold setting on feature extraction based on the neural network with fractal dimension: (b) 98%, (c) 97%, (d) 96%, (e) 95%.

Fig. 3 Same as Fig. 1, but for case 3.

to approximate the clutter dynamical model, as described previously. Following common neural network practice, the training pattern is arranged with the clutter signal x_ij as the desired output and a 3 × 3 window centered at x_ij as the input signals. Once the neural network completes its learning phase, it is ready for the processing phase. The next step is fractal dimension estimation. The fractal dimension d_ij at the i'th line and j'th sample of the test image is computed from a 9 × 9 (M = 9) window centered at pixel x_ij. To apply the differential box-counting (DBC) technique, the 9 × 9 window is further partitioned into a grid of 3 × 3 subwindows (s = 3). The probability of false alarm, PFA, is set to 0.02 for all methods under comparison; other values may be chosen by trial and error or by the user. To serve as a reference, the conventional CFAR algorithm is also applied to all cases. Figures 1–4 display the original SAR image (a) and the images detected by CFAR (b), DLNN (c), and the present method (d) for the different targets considered in this study. In all four cases, the conventional CFAR produces poor results; the target structure is difficult to identify, owing to its inability to suppress the strong clutter surrounding the target [Fig. 1(a)]. The proposed method, namely the neural network with fractal dimension, presents the least noise over the whole image, implying that the detected target is more energy-concentrated rather than scattered over the surroundings. Without the fractal information, the DLNN performs satisfactorily except that some background noise remains; the payoff is better preservation of target structures, as seen by comparing Figs. 1(c) and 1(b). The balance between the back-

Table 1 Detection performance comparison of various methods (case 1).

Method   CFAR     DLNN     DLNN + Fractal
A_d      367      334      336
A_c      224      241      274
DR       67.6%    74.8%    85.1%
FDR      44.4%    28.9%    19.3%
LDR      30.4%    25.2%    14.9%
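As a sanity check on Eq. (12), the DLNN + fractal column of Table 1, together with A_s = 322 as stated in the discussion, reproduces the printed percentages. The helper function below is illustrative, not code from the paper.

```python
def detection_metrics(A_s, A_d, A_c):
    """Detection rate, false-detection rate, and loss detection rate
    per Eq. (12), from the standard, detected, and correct areas."""
    DR = A_c / A_s
    FDR = abs(A_d - A_c) / A_s
    LDR = abs(A_s - A_c) / A_s
    return DR, FDR, LDR

# Case 1, DLNN + fractal column of Table 1 (A_s = 322)
dr, fdr, ldr = detection_metrics(322, 336, 274)
print(f"DR={dr:.1%} FDR={fdr:.1%} LDR={ldr:.1%}")  # DR=85.1% FDR=19.3% LDR=14.9%
```

Note that when A_c ≤ A_s, DR and LDR are complementary (DR + LDR = 1), which is a quick consistency check on any reported pair.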

Fig. 4 Same as Fig. 1, but for case 4.
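The threshold selection behind these comparisons — choosing η, α, or β so that the clutter-only statistic exceeds it with probability PFA — amounts to taking the (1 − PFA) quantile of the H0 histogram. A minimal sketch follows; the Gaussian clutter model and the sample values are purely illustrative assumptions.

```python
import numpy as np

def threshold_for_pfa(clutter_stats, pfa=0.02):
    """Decision threshold as the (1 - PFA) quantile of the clutter-only
    statistic, so that P(stat >= threshold | H0) is approximately PFA."""
    return np.quantile(clutter_stats, 1.0 - pfa)

rng = np.random.default_rng(1)
# synthetic clutter fractal dimensions (illustrative distribution only)
clutter_d = rng.normal(loc=2.2, scale=0.1, size=10_000)
beta = threshold_for_pfa(clutter_d, pfa=0.02)
print(f"beta = {beta:.3f}, empirical PFA = {(clutter_d >= beta).mean():.3f}")
```

In practice the clutter statistic would come from a target-free subimage, exactly as the training samples for the DLNN do.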


Table 2 Detection performance comparison of various methods (case 2).

Method   CFAR     DLNN     DLNN + Fractal
A_d      383      385      330
A_c      169      195      239
DR       46.7%    53.9%    66.0%
FDR      59.1%    52.5%    25.1%
LDR      53.3%    46.1%    34.0%

Table 4 Detection performance comparison of various methods (case 4).

Method   CFAR     DLNN     DLNN + Fractal
A_d      339      331      333
A_c      174      221      216
DR       52.4%    66.6%    65.1%
FDR      49.7%    33.1%    35.2%
LDR      47.6%    33.4%    34.9%

ground noise removal and structure preservation may be adjusted by setting the threshold value, as discussed in connection with Fig. 5. The same observations can be drawn from Figs. 2–4. Overall, the DLNN with fractal dimension offers a higher detection rate, a lower false-detection rate, and a lower loss detection rate. Comparisons of the performance measures among the three methods are given in Tables 1–4. Table 1 indicates that in case 1 (A_s = 322), CFAR gives only a 67.6% detection rate, a 44.4% false-detection rate, and a 30.4% loss detection rate. When the DLNN is used, the detection rate increases to 74.8%, the false-detection rate drops to 28.9%, and the loss detection rate improves to 25.2%, all superior to CFAR. When the fractal detection method is applied, the detection rate is further improved to 85.1%, the false-detection rate is reduced to 19.3%, and the loss detection rate is also reduced, to 14.9%. The improvement by these performance measures is substantial and consistent. As already emphasized, setting different thresholds in Eq. (11) may produce appreciably different detection measures. This is illustrated in the last two cases (cases 3 and 4), seen in Tables 3 and 4, where DR and FDR slightly deteriorate when the fractal dimension is used, as compared to DLNN alone. Combining visual inspection and statistical measures, it is obvious that the use of fractal information with the neural network yields advantages in combating

Table 3 Detection performance comparison of various methods (case 3).

Method   CFAR     DLNN     DLNN + Fractal
A_d      333      332      331
A_c      292      308      295
DR       65.9%    69.5%    66.6%
FDR      9.3%     5.4%     8.1%
LDR      34.1%    30.5%    33.4%


heavy clutter and thus effectively extracting target features. The threshold setting is adjusted to preserve the target structure while suppressing background noise. Figure 5 shows the detected images for thresholds varying from 95% to 98%: a lower threshold generally produces a noisier image but preserves the target structure better. The optimum threshold has yet to be determined; it seems to be related to the target size and the image resolution, and certainly requires further study.

4 Conclusions

In this paper, the fractal dimension has been used to quantify the effects of the chaotic behavior of clutter on the geometric aspects of target detection. A clutter signal may be modeled by a nonlinear deterministic dynamical system, and a dynamic learning neural network (DLNN) has been shown to be able to reconstruct the clutter signal in conjunction with fractal information. The proposed approach has been applied to SAR images from the MSTAR public release data set for target detection, and the detection performances of the conventional CFAR algorithm, the DLNN, and the fractal detection method have been compared. The results clearly suggest that the use of fractal information in a neural network is an efficient and effective means of SAR feature extraction from heavy clutter.

References

1. M. A. Khabou and P. D. Gader, "Automatic target detection using entropy optimized shared-weight neural network," IEEE Trans. Neural Netw. 11(1), 186–193 (2000).
2. J. Qu, C. Wang, and Z. Wang, "Structure-context based fuzzy neural network approach for automatic target detection," in IEEE Int. Geoscience and Remote Sensing Symp. Proc. (IGARSS 2003), Vol. 2, pp. 767–769 (2003).
3. L. I. Perlovsky, W. H. Schoendorf, B. J. Burdick, and D. M. Tye, "Model-based neural network for target detection in SAR imagery," IEEE Trans. Image Process. 6(1), 203–216 (1997).
4. H. Leung, N. Dubash, and N. Xie, "Detection of small objects in clutter using a GA-RBF neural network," IEEE Trans. Aerosp. Electron. Syst. 38(1), 98–117 (2002).
5. N. S. Subotic, B. J. Thelen, J. D. Gorman, and M. F. Reiley, "Multiresolution detection of coherent radar targets," IEEE Trans. Image Process. 6(1), 21–35 (1997).
6. D. R. Greer, I. Fung, and J. H. Shapiro, "Maximum-likelihood multiresolution laser radar range imaging," IEEE Trans. Image Process. 6(1), 36–46 (1997).
7. F. Takens, "Detecting strange attractors in turbulence," in Proc. Symp. on Dynamical Systems and Turbulence, D. A. Rand and L. S. Young, Eds., Springer, Berlin (1981).
8. R. C. Hilborn, Chaos and Nonlinear Dynamics, Oxford Univ. Press, New York (1994).


9. Y. C. Tzeng, K. S. Chen, W. L. Kao, and A. K. Fung, "A dynamic learning neural network for remote sensing applications," IEEE Trans. Geosci. Remote Sens. 32(5), 1096–1102 (1994).
10. M. S. Chen and M. T. Manry, "Conventional modeling of the multilayer perceptron using polynomial basis functions," IEEE Trans. Neural Netw. 4(1), 164–166 (1993).
11. R. G. Brown and P. Y. C. Hwang, Introduction to Random Signals and Applied Kalman Filtering, Wiley, New York (1983).
12. G. J. Bierman, Factorization Methods for Discrete Sequential Estimation, Academic Press, New York (1977).


13. H. D. I. Abarbanel, Analysis of Observed Chaotic Data, Springer-Verlag, New York (1996).
14. N. Sarkar and B. B. Chaudhuri, "An efficient differential box-counting approach to compute fractal dimension of images," IEEE Trans. Syst. Man Cybern. 24(1), 115–120 (1994).
15. Center for Imaging Science, MSTAR SAR Database, http://cis.jhu.edu/data.sets/MSTAR/.

Biographies and photographs of the authors are not available.
