TARGET DETECTION IN MULTIPLE-VIEWING THROUGH-THE-WALL RADAR IMAGING

Christian Debes¹, Moeness G. Amin² and Abdelhak M. Zoubir¹

¹ Signal Processing Group, Technische Universität Darmstadt, Darmstadt, Germany
² Center for Advanced Communications, Villanova University, Villanova, PA, USA

The work by Moeness Amin is supported by ONR, grant no. N00014-07C-0413.

Index Terms— Through-the-wall, radar imaging, array signal processing, synthetic aperture radar, image fusion

1. INTRODUCTION

Electromagnetic waves passing through a medium are subject to amplitude and phase distortion. There is an increasing need to develop algorithms that combat wall impairing effects, permitting detection, classification, and localization of humans behind walls [1]. Seeing optically obscured objects behind obstacles such as walls, doors, and other visually opaque materials is a powerful tool for a variety of civilian, law enforcement, and military applications. It has recently been sought in rescue missions, searching for victims and survivors in fires, earthquakes and avalanches. It also allows behind-the-wall detection and surveillance of suspected criminals and outlaws. Typical sensing for indoor targets is based on the emission and reception of radio frequency (RF) signals. RF waveforms below S-band are capable of penetrating building walls, but the resulting observations are sensitive to a wide range of factors, including layout geometry, interior and exterior wall thicknesses, wall EM scattering signatures, the existence of doors and windows, and clutter objects within the building.

In many urban sensing situations it is possible to have access to different sides of the structure, allowing multi-viewing imaging. Fusing a set of multi-viewing through-the-wall (TTW) radar images into a single reference image has recently become a topic of interest. In [2], each recorded radar image was binarized by applying a direct thresholding scheme, and image fusion was then performed by pixel-wise multiplication. A pixel therefore corresponds to a target only if a strong reflection at that particular position is obtained in each radar image. This method shows improved results compared to processing only a single image: the effect of clutter, which has different RF signatures at different views, is strongly reduced (see the sketch at the end of this section).

In this paper we perform image fusion using hypothesis testing rather than simple image thresholding schemes. We evaluate TTW radar image statistics and, based on these results, derive a constant false-alarm rate detector. We hereby assume imaging through concrete walls with targets detectable from at least one view.
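For reference, the following is a minimal sketch of the binarize-and-multiply fusion described for [2], assuming the radar images are already co-registered and normalized to [0, 1]; the threshold value and array names are illustrative assumptions, not values from [2].

```python
import numpy as np

def multiplicative_fusion(images, threshold=0.5):
    """Binarize each co-registered radar image and fuse by pixel-wise
    multiplication (baseline scheme of [2]).

    images    : list of 2-D arrays with intensities normalized to [0, 1]
    threshold : direct binarization threshold (illustrative value)
    """
    binary = [(img > threshold).astype(np.uint8) for img in images]
    fused = binary[0]
    for b in binary[1:]:
        fused = fused * b  # a pixel survives only if it is strong in every view
    return fused

# Toy usage with two random "views"
rng = np.random.default_rng(0)
view_front = rng.random((64, 64))
view_side = rng.random((64, 64))
fused = multiplicative_fusion([view_front, view_side], threshold=0.9)
```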

2. RADAR IMAGE STATISTICS

Fusion strategies for multiple-location synthetic aperture radar (SAR) images using classical detection schemes have been investigated by Papson and Narayanan in [3]. They assume all magnitude images to be independent and identically distributed (i.i.d.), and both the null and alternative hypotheses are described by Gaussian distributions. We evaluated typical indoor TTW radar imaging scenarios and found that noise and clutter closely follow a Rayleigh distribution, whereas samples corresponding to reflections from targets can roughly be approximated by a Gaussian distribution. Depending on the amount of clutter, the scale parameter of the Rayleigh distribution typically takes values √0.03 < σ1 < √0.05. More details on this will be presented in the full paper.

3. HYPOTHESIS TESTS

Based on our experimental results we model the null and alternative hypotheses as
\[
p(B(i,j)\,|\,H_0) = T\!\left( \frac{B(i,j)}{\sigma_1^2} \, e^{-\frac{B(i,j)^2}{2\sigma_1^2}} \right)
\]
\[
p(B(i,j)\,|\,H_1) = T\!\left( \frac{1}{\sqrt{2\pi}\,\sigma_0} \, e^{-\frac{(B(i,j)-\mu_0)^2}{2\sigma_0^2}} \right)
\]
where T(·) is an operation for truncation and normalization such that
\[
\int_0^1 p(\varphi\,|\,H_i)\,d\varphi = \int_{-\infty}^{\infty} p(\varphi\,|\,H_i)\,d\varphi = 1, \quad i = 0, 1.
\]
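A minimal numerical sketch of these truncated-and-normalized densities follows, assuming SciPy is available. The value σ1 = √0.04 is taken from Section 4; the Gaussian target parameters μ0 and σ0 are illustrative assumptions, not values given in the paper.

```python
import numpy as np
from scipy import stats

def truncated_pdf(dist, x, lo=0.0, hi=1.0):
    """T(.): truncate a density to [lo, hi] and renormalize it so that it
    integrates to one over that interval."""
    mass = dist.cdf(hi) - dist.cdf(lo)
    pdf = dist.pdf(x) / mass
    pdf[(x < lo) | (x > hi)] = 0.0
    return pdf

sigma1 = np.sqrt(0.04)      # Rayleigh scale for noise/clutter (H0), value used in Section 4
mu0, sigma0 = 0.7, 0.1      # Gaussian target parameters under H1 (illustrative assumption)

x = np.linspace(0.0, 1.0, 501)
p_h0 = truncated_pdf(stats.rayleigh(scale=sigma1), x)        # p(B(i,j) | H0)
p_h1 = truncated_pdf(stats.norm(loc=mu0, scale=sigma0), x)   # p(B(i,j) | H1)

# Both truncated densities integrate to (approximately) one over [0, 1]
dx = x[1] - x[0]
print(p_h0.sum() * dx, p_h1.sum() * dx)
```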

Assuming all images from multiple views to be i.i.d., we obtain the likelihood ratio (LR) as
\[
\mathrm{LR}(i,j) = \prod_{m=1}^{M} \frac{p(B_m(i,j)\,|\,H_1)}{p(B_m(i,j)\,|\,H_0)} \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \gamma
\]

where M is the total number of radar images to be fused. The fused binary image can then be calculated as
\[
F(i,j) = \begin{cases} 1, & \mathrm{LR}(i,j) > \gamma \\ 0, & \mathrm{LR}(i,j) \le \gamma. \end{cases}
\]
Using the Neyman-Pearson theorem [4] we obtain a constant false-alarm rate detector by evaluating
\[
\alpha = \int_{\gamma}^{\infty} f_L(L\,|\,H_0)\,dL,
\]
where α is the false-alarm rate and f_L(L|H0) is the likelihood function of LR(i,j) under H0. For i.i.d. images, γ is given by
\[
\gamma = \left( \frac{p(\beta\,|\,H_1)}{p(\beta\,|\,H_0)} \right)^{M},
\]
where β, assuming noise and clutter to follow a Rayleigh distribution, can be calculated from the tail of the H0 density (by solving the false-alarm integral for the Rayleigh case) as
\[
\beta = \sqrt{-2\ln(\alpha)\,\sigma_1^2}.
\]
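A compact sketch of the resulting fusion detector is given below, assuming the truncated densities defined above; the function name, the numerical guard, and the Gaussian target parameters are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy import stats

def cfar_fusion(images, alpha, sigma1, mu0, sigma0):
    """Likelihood-ratio fusion of M co-registered TTW radar images at a
    constant false-alarm rate alpha; a sketch of the test in Section 3.

    images : list of M 2-D arrays with intensities in [0, 1]
    """
    M = len(images)
    h0 = stats.rayleigh(scale=sigma1)        # noise/clutter model under H0
    h1 = stats.norm(loc=mu0, scale=sigma0)   # target model under H1

    # Truncating both densities to [0, 1] only rescales them by constants,
    # since the image intensities already lie in [0, 1].
    c0 = h0.cdf(1.0) - h0.cdf(0.0)
    c1 = h1.cdf(1.0) - h1.cdf(0.0)

    eps = 1e-12                              # numerical guard against division by zero
    lr = np.ones_like(images[0], dtype=float)
    for B in images:
        lr *= (h1.pdf(B) / c1) / np.maximum(h0.pdf(B) / c0, eps)

    # CFAR threshold: beta from the Rayleigh tail under H0, then
    # gamma = (p(beta|H1) / p(beta|H0))**M for M i.i.d. images.
    beta = np.sqrt(-2.0 * np.log(alpha) * sigma1 ** 2)
    gamma = ((h1.pdf(beta) / c1) / max(h0.pdf(beta) / c0, eps)) ** M

    return (lr > gamma).astype(np.uint8)
```

For two aligned views, a call such as cfar_fusion([front, side], 0.01, np.sqrt(0.04), mu0, sigma0) would produce a fused binary map analogous to F(i,j) in Section 4, up to the illustrative target parameters.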

4. EXPERIMENTAL RESULTS

The wideband beamforming algorithm for TTW radar imaging presented in [5] is used to form the B-scan images (downrange vs. crossrange cuts through the 3D scene of interest). We consider the scenario shown in Figure 1(a), consisting of a metal sphere on a table, a chair, and a metal dihedral on a column. A typical background-subtracted B-scan image obtained at the height of the table legs is shown in Figure 1(b). The four table legs can be detected in the lower left quarter, along with strong noise and clutter contributions across the radar image.

Fig. 1. Experimental setup: (a) 3D scene of interest; (b) typical B-scan image (crossrange vs. downrange, in ft).

The setup from Figure 1 is now used to obtain two TTW radar images at the table-leg height, one from the front wall and one from the side wall. After performing a rotation operation to align the two sensor-specific coordinate systems, the detection scheme presented in Section 3 is used to perform image fusion with σ1 = √0.04. Figure 2 shows the receiver operating characteristic obtained when using the proposed detection scheme on each individual image (M = 1) as well as on both radar images (M = 2). It can be seen that the fusion of images from multiple locations improves the probability of detection for a given constant false-alarm rate.

Fig. 2. Receiver operating characteristic (pD vs. pFA) for Image 1, Image 2, and both images fused.

In Figure 3(d) we show, as an example, the binary fused image F(i,j) for a false-alarm rate α = 1%. We observe an improvement compared to the results obtained when using the detection scheme on only one of the images, shown in Figure 3(a) and (b), and compared to the simple pixel-wise multiplication of the two individual binary images, shown in Figure 3(c).

Fig. 3. Fusion results, α = 1%: (a) front wall image; (b) side wall image; (c) simple multiplication; (d) LR test (crossrange vs. downrange, in ft).
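As an illustration of how such fused maps can be scored, a small self-contained sketch follows. The 90-degree grid rotation standing in for the paper's coordinate alignment and the availability of a ground-truth target mask are assumptions made for this example.

```python
import numpy as np

def align_side_view(side_view, k=1):
    """Rotate the side-wall B-scan into the front-wall coordinate frame.
    A plain 90-degree grid rotation is an assumption standing in for the
    paper's (unspecified) coordinate alignment."""
    return np.rot90(side_view, k)

def empirical_rates(detection_map, truth_mask):
    """Empirical probability of detection and false alarm for one binary map,
    given a ground-truth target mask on the same grid (assumed available)."""
    det = detection_map.astype(bool)
    truth = truth_mask.astype(bool)
    pd = (det & truth).sum() / max(truth.sum(), 1)
    pfa = (det & ~truth).sum() / max((~truth).sum(), 1)
    return pd, pfa

# Sweeping alpha in the fusion detector and collecting (pFA, pD) pairs for
# M = 1 and M = 2 then yields ROC curves of the kind compared in Fig. 2.
```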

5. FULL PAPER

The full paper will contain a more detailed treatment of the radar image statistics in different indoor scenarios to motivate our assumptions. Further, the hypothesis test will be demonstrated in more detail, and we will provide more information about the radar system we used and the wideband beamforming algorithm applied to form the B-scan images.

6. REFERENCES

[1] D. D. Ferris and N. C. Currie, "A survey of current technologies for through-the-wall surveillance (TWS)," in Proceedings of the SPIE, 1998, vol. 3577, pp. 62–72.

[2] F. Ahmad and M. G. Amin, "Wideband synthetic aperture imaging for urban sensing applications," Journal of the Franklin Institute, March 2008.

[3] S. Papson and R. M. Narayanan, "Multiple location SAR/ISAR image fusion for enhanced characterization of targets," in Proceedings of the SPIE, May 2005, vol. 5788, pp. 128–139.

[4] S. M. Kay, Fundamentals of Statistical Signal Processing, Volume 2: Detection Theory, Prentice Hall PTR, 1998.

[5] F. Ahmad, M. G. Amin, and S. A. Kassam, "Through-the-wall wideband synthetic aperture beamformer," in IEEE Antennas and Propagation Society International Symposium, 2004, vol. 3, pp. 3059–3062.
