Adaptive Pulse Compression via MMSE Estimation

SHANNON D. BLUNT, Member, IEEE
University of Kansas

KARL GERLACH, Fellow, IEEE
U.S. Naval Research Laboratory

Radar pulse compression involves the extraction of an estimate of the range profile illuminated by a radar in the presence of noise. A problem inherent to pulse compression is the masking of small targets by large nearby targets due to the range sidelobes that result from standard matched filtering. This paper presents a new approach based upon a minimum mean-square error (MMSE) formulation in which the pulse compression filter for each individual range cell is adaptively estimated from the received signal in order to mitigate the masking interference resulting from matched filtering in the vicinity of large targets. The proposed method is compared with the standard matched filter and least-squares (LS) estimation and is shown to be superior over a variety of stressing scenarios.

Manuscript received January 4, 2005; revised February 7, 2005; released for publication August 24, 2005. IEEE Log No. T-AES/42/2/863680. Refereeing of this contribution was handled by P. Lombardo. This work was supported by the United States Office of Naval Research (ONR 31). Authors' addresses: S. D. Blunt, Dept. of Electrical Engineering and Computer Science, Eaton Hall, 1520 West 15th St., Rm. 3034, University of Kansas, Lawrence, KS 66015, E-mail: ([email protected]); K. Gerlach, U.S. Naval Research Laboratory, Washington, D.C. 20375.

I. INTRODUCTION
In active sensing applications such as radar pulse compression [1], ultrasonic nondestructive evaluation for structural integrity [2]-[3], biomedical imaging [4], and seismic estimation [5], it is desired to detect the presence, and subsequently the location, of scatterers (e.g., targets, discontinuities, etc.) within a particular medium (e.g., free space, the human body, etc.). To accomplish this, a directed signal is transmitted into the medium where it is reflected back to the receiver by the scatterers in the medium. The respective delays associated with the reflected signals that arrive at the receiver constitute a spatial profile that is representative of the respective ranges to the scatterers.

Due to practical power constraints, the transmitted signal often takes the form of a rectangular pulse that is phase or frequency modulated in order to obtain high spatial resolution, which is inversely proportional to the bandwidth of the modulated pulse. After transmission, the modulated pulse (or waveform) is reflected by scatterers in the medium such that the received signal at the sensor is comprised of delayed, attenuated versions of the transmitted waveform. A filter matched to the transmitted waveform is then typically used to extract the high-resolution spatial profile from the received signal, which also contains noise. In radar terminology, this extraction process is known as pulse compression [1].

For a solitary "point" scatterer in the presence of white Gaussian noise, the matched filter (i.e., matched to the transmitted signal and system noise) has been shown [1] to maximize the output signal-to-noise ratio (SNR) and thereby maximize (in some sense) the detectability of scatterers in noise. The matched filter correlates the delayed, attenuated versions of the transmitted waveform as they arrive at the receiver, thereby providing the range and relative complex amplitudes of the scatterers. The correlation of the matched filter with the waveform is, in fact, the autocorrelation of the waveform. The waveform autocorrelation (and more generally the waveform ambiguity function [1]) reveals the inherent ability of the particular waveform to extract a single scatterer in the presence of other nearby scatterers, which is measured by the relative autocorrelation sidelobes (or range sidelobes). For example, the Lewis-Kretschmer P3 discrete polyphase code [6], whose normalized autocorrelation is the solid line in Fig. 1 (for the case of waveform length N = 30), has range sidelobes that are approximately 20 dB lower than the peak. As this example illustrates, the presence of a large scatterer in a sidelobe can mask the presence of a small scatterer that is aligned with the autocorrelation peak. For a given waveform, the matched filter performance for closely-spaced scatterers is fundamentally limited due to the masking problem of the waveform autocorrelation.

Fig. 1. Correlation of receive filters with transmitted waveform for Lewis-Kretschmer P3 code (N = 30).

To partially alleviate the masking problem, long mismatched filters [7-12] have been proposed which suppress the sidelobes somewhat at the cost of a small loss in SNR, which is acceptable if some small scatterers can be unmasked and subsequently detected. However, the mismatched filter can only partially mitigate masking because the sidelobe levels still scale with the power of the large target(s) and edge effects due to the long filter length can exacerbate target eclipsing. Furthermore, good performance for the mismatched filters is limited to certain classes of waveforms, which can make them more easily identifiable (this is of concern to military radar applications where low probability of intercept (LPI) may be required).

Conceptually, in order to completely eliminate masking in the matched filter context (for a general waveform), a receive filter for a given range cell must be closely matched to the transmitted waveform while also canceling the masking interference from each of the potential nearby scatterers. Hence, the receive filter must be adaptive to the received signal since the range profile is unknown a priori and the appropriate receive filter will be unique for each individual range cell.

The matched filter as well as mismatched filters can be implemented as an analog or digital filter. In the digital domain, the spatial profile of the medium is represented as a discrete impulse response, which may be relatively long depending upon the application. Therefore, it is often desired to estimate only a relatively small portion of the impulse response in order to satisfy computation and memory constraints. We denote this portion of the impulse response as the processing window, which corresponds to a similar portion of the received signal. For this received signal formulation, least-squares (LS) approaches have
been proposed [13-16] that decouple neighboring range cells in order to eliminate masking. It has been shown that the LS approach is optimal in the mean-square error (MSE) sense when the additive noise is white [17]. However, an accurate LS estimate requires that no significant scatterers exist in close proximity outside of the processing window because the existence of these scatterers can cause severe misestimation (as will be shown). This misestimation occurs because scatterers outside of the processing window are not accounted for in the LS signal model.

The first work to consider adaptivity in the range domain was done by Gabriel [18, 19] for continuous linear frequency modulation (LFM) waveforms. In this work, adaptive range processing was performed using the discretized returns from many pulses to form a sample covariance matrix and thereby null the interference from neighboring targets such that range superresolution could be obtained for the inverse synthetic aperture radar (ISAR) imaging application. Additional work on range superresolution can be found in [20]. Other methods using range adaptivity by subtracting the effects of large targets (as opposed to nulling) via the CLEAN algorithm can be found in [21] and [22]. In the work presented here we consider the adaptive suppression of range sidelobes at the nominal range resolution (i.e., not superresolution).

Let the radar waveform be represented (or well approximated) on receive by an N-length polyphase code sequence where each element of the code sequence is a complex sample of the transmitted waveform which is sampled at the Nyquist rate. A new technique is introduced here in the spirit of the matched filter whereby the appropriate N-length receive filter for each particular range cell is estimated adaptively from the received signal using a minimum mean-square error (MMSE) formulation [17]. Hence a different adaptive receive filter, which is optimized in some sense, is used for each individual range cell. In terms of the radar application, it can therefore be viewed as a means of achieving "adaptive" pulse compression. The algorithm resulting from the MMSE formulation, known as reiterative MMSE (RMMSE) and partially presented in [23] and [24], is shown to accurately estimate small targets (low SNR) in the presence of considerably larger targets (several orders of magnitude) and is rather robust to Doppler mismatch. Furthermore, the fact that RMMSE is implemented as an N-length adaptive receive filter similar to the matched filter indicates that RMMSE does not suffer from the edge effects encountered by long mismatched filters or the out-of-window scatterer problem experienced by the LS method.

We observe that the problem addressed here is the accurate estimation of the radar range profile at the resolution determined by the radar bandwidth and is
therefore not related to spectral superresolution as has been addressed, for instance, as a means to remove cross-range ambiguities in SAR imaging due to large clutter discretes [25-27].

The primary measure of "goodness" for our adaptive methodology will be the MSE metric associated with estimating the ground truth complex amplitudes of the range profile. In contrast, the characterization of a standard pulse compressor uses metrics such as mismatch loss, integrated sidelobe level, peak sidelobe level, etc. to quantify the performance of a given waveform/pulse compressor implementation. These metrics provide the designer with a sense of how well a waveform/pulse compressor will work in a variety of less than ideal conditions such as Doppler mismatch or large/small target scenarios. Lacking a more exact statistical representation of the input range profile, these metrics are necessary. However, if certain second-order moments related to the ground truth were known, a more effective matched filtering solution at a given range cell (i.e., maximizing SNR) would be possible (the Wiener filter). Furthermore, it can be shown that the Wiener filter for a particular range cell provides the best estimate of the range cell complex amplitude (i.e., MMSE). Adaptive pulse compression (APC) is a methodology whereby the second-order moments for each range cell are estimated and used to approximate the MMSE filter solution. Thus the important metric for the proposed approach is how closely (in the MSE sense) the ground truth (unknown complex range profile) amplitudes can be estimated.

The remainder of this work is organized as follows. Section II introduces the discrete domain representation of the matched filter along with the LS signal model. Section III develops the RMMSE algorithm. In Section IV the operation of the RMMSE algorithm is discussed as pertaining to matching to the received signal and the unmasking of small scatterers. Some practical implementation issues are discussed in Section V along with an implementation strategy that employs the matrix inversion lemma to reduce the computational load. Simulation results are presented in Section VI.

II. DISCRETE SIGNAL MODEL

Matched filtering has been shown [1] to maximize the received SNR of a solitary point scatterer in the presence of additive white Gaussian noise (AWGN) by convolving the received return signal with a complex-conjugated, time-reversed copy of the transmitted waveform. Matched filtering can be represented in the discrete domain as the operation

$$\hat{x}_{\mathrm{MF}}(\ell) = \mathbf{s}^H \tilde{\mathbf{y}}(\ell) \qquad (1)$$
where x̂_MF(ℓ) is the matched filter estimate of the ℓth delayed sample of a length-L section of the range profile impulse response for ℓ = 0, ..., L−1 (i.e., the processing window), the length-N vector s = [s_0 s_1 ⋯ s_{N−1}]^T represents the sampled version of the transmitted waveform, ỹ(ℓ) = [y(ℓ) y(ℓ+1) ⋯ y(ℓ+N−1)]^T is a vector of N contiguous samples of the complex received signal, and (·)^T and (·)^H are the transpose and complex-conjugate transpose (or Hermitian) operations, respectively. Each discrete sample of the received signal can be expressed as

$$y(\ell) = \tilde{\mathbf{x}}^T(\ell)\,\mathbf{s} + v(\ell) \qquad (2)$$
where x̃(ℓ) = [x(ℓ) x(ℓ−1) ⋯ x(ℓ−N+1)]^T are N contiguous samples of the range profile impulse response and v(ℓ) is additive noise. The matched filter output of (1) can therefore be rewritten as

$$\hat{x}_{\mathrm{MF}}(\ell) = \mathbf{s}^H \mathbf{A}^T(\ell)\,\mathbf{s} + \mathbf{s}^H \tilde{\mathbf{v}}(\ell) \qquad (3)$$
where ṽ(ℓ) = [v(ℓ) v(ℓ+1) ⋯ v(ℓ+N−1)]^T and

$$\mathbf{A}(\ell) = \begin{bmatrix}
x(\ell) & x(\ell+1) & \cdots & x(\ell+N-1) \\
x(\ell-1) & x(\ell) & \ddots & \vdots \\
\vdots & \ddots & \ddots & x(\ell+1) \\
x(\ell-N+1) & \cdots & x(\ell-1) & x(\ell)
\end{bmatrix} \qquad (4)$$

is a collection of N length-N sample-shifted snapshots (in the columns) of the impulse response. From (4), it can be seen that if x(ℓ) is a solitary point scatterer, then (3) effectively reduces to (s^H s x(ℓ) + s^H ṽ(ℓ)) so that only SNR determines the detectability of the scatterer. Hence, the matched filter is optimal in this case. However, whenever any of the off-diagonal elements of A(ℓ) are large relative to x(ℓ), the matched filter will mask the true value of x(ℓ) regardless of its SNR.

To alleviate the masking problem, LS solutions have been proposed in [7]-[16]. Note that the optimal mismatched filter [7-12] is effectively subsumed by the LS formulation. The LS formulation approximates the length-(L+N−1) received signal vector y that corresponds to the processing window as

$$\mathbf{y}_{\mathrm{LS}} = \mathbf{S}\mathbf{x} + \mathbf{v} \qquad (5)$$
where x = [x(0) x(1) ⋯ x(L−1)]^T is the vector containing the L samples of the range profile impulse response that fall within the processing window, v = [v(0) v(1) ⋯ v(L+N−2)]^T is a vector of L+N−1 additive noise samples, and the convolution of the transmitted waveform with the range profile impulse response is approximated as the matrix multiplication

$$\mathbf{S}\mathbf{x} = \begin{bmatrix}
s_0 & 0 & \cdots & 0 \\
\vdots & s_0 & & \vdots \\
s_{N-1} & \vdots & \ddots & 0 \\
0 & s_{N-1} & & s_0 \\
\vdots & & \ddots & \vdots \\
0 & \cdots & 0 & s_{N-1}
\end{bmatrix} \mathbf{x} \qquad (6)$$
in which S is a banded (L+N−1) × L matrix. The LS model of (5) and (6) has been applied to radar pulse compression [7-16], ultrasonic nondestructive evaluation [2, 3], biomedical imaging [4], and seismic estimation [5]. In general, the LS formulation yields the length-L range profile estimate as

$$\hat{\mathbf{x}}_{\mathrm{LS}} = (\mathbf{S}^H\mathbf{S})^{-1}\mathbf{S}^H\mathbf{y}. \qquad (7)$$
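To make the discrete model concrete, the following Python/NumPy sketch synthesizes a received signal according to (5)-(6) and then forms both the matched filter output of (1) (normalized by N, as is done for the NMF in Section VI) and the LS estimate of (7). The scatterer placement, amplitudes, and noise level are illustrative assumptions only, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N, L = 30, 100                                   # code length and processing window (assumed sizes)
n = np.arange(N)
s = np.exp(1j * np.pi * n**2 / N)                # Lewis-Kretschmer P3 code, cf. (25)

# Hypothetical range profile: one large and one small scatterer
x = np.zeros(L, dtype=complex)
x[40], x[45] = 100.0, 1.0
noise = 0.1 * (rng.standard_normal(L + N - 1) + 1j * rng.standard_normal(L + N - 1)) / np.sqrt(2)

# Banded (L+N-1) x L convolution matrix S of (6)
S = np.zeros((L + N - 1, L), dtype=complex)
for col in range(L):
    S[col:col + N, col] = s

y = S @ x + noise                                # received signal model (5)

# Matched filter (1): correlate each length-N snapshot of y with s, then normalize by N
x_mf = np.array([np.vdot(s, y[ell:ell + N]) for ell in range(L)]) / N

# Least-squares estimate (7)
x_ls = np.linalg.solve(S.conj().T @ S, S.conj().T @ y)
```

In this sketch the small scatterer at cell 45 sits in the range sidelobes of the large one in x_mf, while x_ls recovers it provided no strong scatterers lie just outside the modeled window, which is exactly the failure mode discussed next.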
For the received signal model of (5), it can be shown that the LS solution of (7) is optimal in the MSE sense when the additive noise is white [17] and when y ≈ y_LS. However, the latter assumption is known to fail whenever significant scatterers exist within N−1 samples outside of the processing window. The result is severe misestimation of the range profile impulse response. In the following section the RMMSE algorithm is developed. For the MMSE signal model, we revisit the matched filter formulation and note that when ℓ = 0 in (4), the matched filter model does account for nearby scatterers outside of the processing window (the same is also true for ℓ = L−1).

III. REITERATIVE MINIMUM MEAN-SQUARE ERROR ESTIMATION

While the standard matched filter maximizes SNR for a solitary point target in noise, it is known to mask the presence of small scatterers that lie in the proximity of a large scatterer. To completely mitigate the effects of masking requires that the receive filter be adaptively estimated from the received signal for each individual impulse response coefficient. One way in which this can be done is to successively alternate between estimating the range profile impulse response and the respective receive filters. In this paper the alternating estimation strategy is applied in the MMSE formulation [17], which is a Bayesian estimation approach that takes prior information into account to improve estimation accuracy. The RMMSE algorithm employs MMSE estimation in successive stages in which the impulse
response estimate from the previous stage is used as a priori information for the current stage. Note that this involves the reprocessing of data in an open-loop manner as opposed to closed-loop processing for which there is a desired signal with which to compare.

First, we construct the signal model. From (2) and (3), the collection of N samples of the received return signal can be expressed as

$$\tilde{\mathbf{y}}(\ell) = \mathbf{A}^T(\ell)\,\mathbf{s} + \tilde{\mathbf{v}}(\ell). \qquad (8)$$
This is the same received signal model used in the matched filter formulation. To develop the MMSE estimator, the matched filter s in (1) is replaced with the N × 1 MMSE filter, denoted w(ℓ), in which the exact form of the MMSE filter is dependent upon the particular range cell x(ℓ) to be estimated and hence is unique for each delay index ℓ. Thereafter, the standard MMSE cost function [17]

$$J(\ell) = E\left[\,\left|x(\ell) - \mathbf{w}^H(\ell)\,\tilde{\mathbf{y}}(\ell)\right|^2\,\right] \qquad (9)$$
is minimized for each individual delay index ℓ = 0, 1, ..., L−1, where E[·] denotes expectation. Note that E[x(ℓ)] = x(ℓ) as the impulse response is assumed to be relatively stationary over the length of the waveform. Also, we assume that neighboring impulse response terms are uncorrelated. As usual, the MMSE cost function is minimized by differentiating with respect to w*(ℓ) and then setting the result equal to zero. The MMSE filter is found to take the form

$$\mathbf{w}(\ell) = \left(E\left[\tilde{\mathbf{y}}(\ell)\,\tilde{\mathbf{y}}^H(\ell)\right]\right)^{-1} E\left[\tilde{\mathbf{y}}(\ell)\,x^{*}(\ell)\right] \qquad (10)$$
where * denotes complex conjugation. After substituting for ỹ(ℓ) from (8) and assuming that the impulse response is uncorrelated with the noise, we obtain

$$\mathbf{w}(\ell) = \rho(\ell)\,\left(\mathbf{C}(\ell) + \mathbf{R}\right)^{-1}\mathbf{s} \qquad (11)$$

where ρ(ℓ) = |x(ℓ)|² and R = E[ṽ(ℓ)ṽ^H(ℓ)] is the N × N noise covariance matrix. Any prior information regarding the noise can be employed via R, and as such this formulation subsumes the "whitening matched filter" described in [1]. Upon assuming neighboring impulse response terms are uncorrelated, the (i, j)th element of the N × N matrix C(ℓ) = E[A^T(ℓ) s s^H A*(ℓ)] is

$$c_{i,j}(\ell) = \sum_{n=\kappa_L}^{\kappa_U} \rho(\ell - n + i - 1)\, s_n\, s^{*}_{\,n-i+j} \qquad (12)$$

in which κ_L = max{0, i − j} is the summation lower bound and κ_U = min{N − 1, N − 1 + i − j} is the upper bound. The full matrix C(ℓ) can be written as

$$\mathbf{C}(\ell) = \sum_{n=-N+1}^{N-1} \rho(\ell + n)\,\mathbf{s}_n \mathbf{s}_n^H \qquad (13)$$

where s_n contains the elements of the waveform s shifted by n samples and the remainder
zero-filled, e.g., s_2 = [0 0 s_0 ⋯ s_{N−3}]^T and s_{−2} = [s_2 ⋯ s_{N−1} 0 0]^T. From (13) it can be seen that C(ℓ) is positive semi-definite because it is comprised of 2N−1 rank-1 positive semi-definite matrices. Hence, as R is positive definite, C(ℓ) + R is also positive definite and thereby invertible. However, as discussed in a later section, steps must be taken to prevent C(ℓ) from becoming ill-conditioned, which may occur as a result of alternating estimation.

In its current state the MMSE filter for a given impulse response coefficient is a function of the powers of the surrounding range cells, which in practice are unavailable. We account for this lack of prior knowledge by initially assuming that the noise is negligible and setting all the initial impulse response estimates to be equal. Therefore, the initialization-stage MMSE filter reduces to the form

$$\hat{\mathbf{w}} \approx \left(\sum_{n=-N+1}^{N-1} \mathbf{s}_n \mathbf{s}_n^H\right)^{-1}\mathbf{s} \qquad (14)$$

where ŵ is invariant with respect to the delay ℓ. Hence, the initialization-stage MMSE filter can be precomputed and then implemented in the same way as the traditional matched filter. Interestingly enough, it is found that due to the lack of any prior knowledge, the initialization-stage MMSE filter has a normalized cross correlation with the transmitted waveform that is quite similar to the normalized matched filter autocorrelation, as demonstrated in Fig. 1 for the example of a length N = 30 P3 waveform [6]. Note that the RMMSE formulation is not specific to a particular waveform, as was demonstrated by the results in [24] for a random-phase waveform. However, while it has been found to yield very good performance for polyphase waveforms, the use of nonlinear waveforms, which may result in scalloping losses, is a topic of future research.

The initial estimate of the impulse response found by applying the initialization-stage MMSE filter ŵ is used as a priori information for the first reiteration-stage MMSE filter. This is done by employing the MMSE filter formulation from (11) in which the respective powers of the range cells ρ̂(ℓ) are taken from the results of the initialization stage. A similar procedure is repeated for each successive stage. It has been found via simulation that the number of stages required is dependent upon the SNR of the large scatterers as well as their density. The RMMSE algorithm does especially well when the range profile is somewhat sparsely parameterized (i.e., highly spiky), as is often the case with high-resolution applications. This is to be expected as the reiteration-stage MMSE filters are somewhat similar to adaptive sidelobe cancelers for array processing applications [28] in which the number of degrees of freedom with which to suppress nearby interfering scatterers is limited by the length of the waveform N.

As can be seen in (11) and (13), the length of the processing window decreases by 2(N−1) range cells at each stage because N−1 impulse response terms at each end of the processing window are employed to update the estimate of the remaining terms within the processing window. As the RMMSE algorithm has been found to require only a few (i.e., 2-4) stages, this reduction in the processing window size is negligible if L ≫ N. Conversely, if it is feasible to slightly increase the size of the processing window, M(N−1) additional impulse response terms at the beginning and end of the processing window can be included, where M is the number of desired stages. Note that M is a system design parameter and any value of M > 1 will yield performance superior to the matched filter in terms of unmasking small scatterers. Also, the 2(N−1) range cell estimates at each stage are not lost but are simply not further updated and therefore in the worst case (at the outermost edges) are estimated with the same accuracy as the standard matched filter.

Fig. 2. Operation of RMMSE algorithm for 3 stages.

Fig. 2 illustrates the operation of the RMMSE estimator for three stages. The four basic steps of the algorithm are specified in Table I, and a simplified code sketch of the processing loop is given after the table.

TABLE I
Operation of RMMSE Algorithm

1. Collect the received samples {y(−(M−1)(N−1)), ..., y(L−1+M(N−1))}, which comprise the length-L processing window along with the (M−1)(N−1) impulse response coefficients prior to the processing window and the M(N−1) coefficients after the processing window.
2. Apply the initialization-stage RMMSE matched filter w_0(ℓ) = ŵ from (14) to y(ℓ) in the same manner as (1) to obtain the initial impulse response coefficient estimates {x̂_1(−(M−1)(N−1)), ..., x̂_1(L−1+(M−1)(N−1))}.
3. Compute the power estimates ρ̂_1(ℓ) = |x̂_1(ℓ)|² for the impulse response delay indices ℓ = −(M−1)(N−1), ..., L−1+(M−1)(N−1), which are used to compute the first reiteration-stage filters w_1(ℓ) as in (11); then apply w_1(ℓ) to obtain {x̂_2(−(M−2)(N−1)), ..., x̂_2(L−1+(M−2)(N−1))}.
4. Repeat step 3, changing the indices where appropriate, until the desired length-L processing window is reached.
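The following Python/NumPy sketch is one possible reading of the steps in Table I for a single pulse; it is a simplified interpretation, not the authors' implementation. The function names, the default parameters, the fixed exponent α (cf. Section V-B), and the treatment of range cells outside the processing window (here simply taken at the noise-power level rather than collecting extra samples) are all assumptions made for illustration.

```python
import numpy as np

def shifted_codes(s):
    """The 2N-1 shifted, zero-filled copies s_n of the code used in (13), n = -(N-1), ..., N-1."""
    N = len(s)
    out = np.zeros((2 * N - 1, N), dtype=complex)
    for k, n in enumerate(range(-(N - 1), N)):
        if n >= 0:
            out[k, n:] = s[:N - n]
        else:
            out[k, :N + n] = s[-n:]
    return out

def rmmse(y, s, n_stages=3, sigma2=1e-6, alpha=1.5):
    """Simplified sketch of the RMMSE stages of Table I for a single pulse.

    y: received samples of length L+N-1; s: transmitted code of length N.
    sigma2 (noise power) and alpha (Section V-B exponent) are assumed values.
    """
    N = len(s)
    L = len(y) - N + 1
    sn = shifted_codes(s)                        # rows are the s_n of (13)
    R = sigma2 * np.eye(N)

    # Initialization stage: fixed filter of (14), applied like the matched filter in (1)
    w0 = np.linalg.solve(sn.T @ np.conj(sn), s)
    x_hat = np.array([np.conj(w0) @ y[l:l + N] for l in range(L)])

    for _ in range(n_stages - 1):                # reiteration stages (steps 3 and 4)
        rho = np.abs(x_hat) ** alpha             # compressed power estimates, Section V-B
        rho = np.concatenate([np.full(N - 1, sigma2), rho, np.full(N - 1, sigma2)])
        x_new = np.empty(L, dtype=complex)
        for l in range(L):
            p = rho[l:l + 2 * N - 1]             # rho(l-N+1), ..., rho(l+N-1)
            C = (sn.T * p) @ np.conj(sn)         # C(l) from (13)
            w = rho[l + N - 1] * np.linalg.solve(C + R, s)   # MMSE filter (11)
            x_new[l] = np.conj(w) @ y[l:l + N]
        x_hat = x_new
    return x_hat
```

With s generated as in (25) and a received vector y of length L+N−1 (for instance the synthetic y from the earlier matched filter/LS sketch), rmmse(y, s) returns the length-L adaptive estimate.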
IV. DISCUSSION OF THE RMMSE ALGORITHM

This section discusses in a heuristic sense the RMMSE algorithm's matching and unmasking capability for the cases of a solitary point scatterer in noise and for a small scatterer masked by a large nearby scatterer. It is assumed that the initialization stage (using a fixed, matched receive filter) has provided some initial range cell estimates and that a properly chosen waveform is employed which possesses sufficient autocorrelation properties (e.g., say −20 dB sidelobes relative to the peak). Also, the noise is assumed to be white Gaussian.
A. MMSE Filter for a Solitary Point Scatterer

For a point scatterer in noise with no other nearby scatterers, we can approximate C(ℓ) from (13) as

$$\mathbf{C}(\ell) = \rho(\ell)\,\mathbf{s}_0\mathbf{s}_0^H + \lambda \sum_{\substack{n=-N+1 \\ n \neq 0}}^{N-1} \mathbf{s}_n\mathbf{s}_n^H \qquad (15)$$

where λ = max{δρ(ℓ), σ_v²/N} is the expected value of the range cells surrounding the single scatterer. The value δρ(ℓ) is the maximum sidelobe level resulting from the initialization stage (or perhaps the first reiteration stage) when the scatterer is quite large (e.g., δ ≈ −20 dB and ρ(ℓ) has an SNR greater than 20 dB), and σ_v²/N is the normalized noise power for when the scatterer has low SNR or previous stages have already driven the sidelobes into the noise. Inserting (15) into (11), the MMSE filter formulation for a solitary point scatterer is

$$\mathbf{w}(\ell) = \rho(\ell)\left[\rho(\ell)\,\mathbf{s}_0\mathbf{s}_0^H + \lambda \sum_{\substack{n=-N+1 \\ n \neq 0}}^{N-1} \mathbf{s}_n\mathbf{s}_n^H + \sigma_v^2\mathbf{I}\right]^{-1}\mathbf{s}_0. \qquad (16)$$

Under the assumption that the given waveform s has sufficiently low autocorrelation sidelobes, the vectors s_n and s_m for n ≠ m can be approximated as orthogonal. Therefore, applying the matrix inversion lemma [29] to (16) and simplifying yields

$$\mathbf{w}(\ell) \approx \left(\frac{\rho(\ell)}{\sigma_v^2 + \rho(\ell)N}\right)\mathbf{s}_0 - \sum_{\substack{n=-N+1 \\ n \neq 0}}^{N-1} \left(\frac{\delta\lambda\rho(\ell)}{\sigma_v^2\left(\sigma_v^2 + \lambda(N - |n|)\right)}\right)\mathbf{s}_n. \qquad (17)$$

Finally, due to the influence of δλ, the summation term of (17) will be quite small and hence the first term will dominate, resulting in

$$\mathbf{w}(\ell) \approx \left(\frac{\rho(\ell)}{\sigma_v^2 + \rho(\ell)N}\right)\mathbf{s} \qquad (18)$$

where we note that s_0 = s. From (18) we see that the MMSE filter for the solitary scatterer approximately reduces to a scaled version of the normalized matched filter, and for a scatterer with a nominally detectable SNR (say ≥ 10 dB), becomes simply w ≈ (1/N)s (i.e., the normalized matched filter). Thus w^H(x(ℓ)s) ≈ x(ℓ), which is the complex amplitude of the ℓth range cell, and the resulting mismatch error will be small. This is intuitively satisfying as the MMSE estimate of a solitary point target in noise, having no interference from any other nearby targets, would maximize the SNR of the target in order to minimize the estimation error.

B. MMSE Filter for a Masked Scatterer

Fig. 3. Correlation of matched and MMSE receive filters with transmitted waveform for Lewis-Kretschmer P3 code (N = 30).

To examine the capability of the MMSE filter to unmask a small scatterer we consider an example. For a particular range cell of interest, there exists a very large scatterer (60 dB SNR) 5 range cells prior. Fig. 3 presents the respective autocorrelations and
cross correlations of the normalized matched filter, the initialization-stage RMMSE filter, and the first reiteration-stage RMMSE filter for the range cell of interest when a length N = 30 P3 code is used. While the standard matched filter and the initialization-stage RMMSE yield similar results, the reiteration-stage RMMSE has a deep null placed −5 range cells from the peak. Hence, the RMMSE algorithm adaptively mitigates the range sidelobe interference that results from the large scatterer, thereby enabling the presence of a small scatterer to be detected.

V. IMPLEMENTATION ISSUES

This section addresses the implementation of the RMMSE algorithm in a robust and computationally efficient manner. One approach to enhance computational efficiency is to employ parallel processing. Also, the nature of the RMMSE algorithm enables the use of a rank-1 update at each successive range cell, thereby greatly reducing computation. Robustness of the RMMSE algorithm can be achieved by effectively compressing the dynamic range of the range cell estimates as well as setting a lower bound on the range cell estimates such that none are allowed to go to zero.
A. Computational Efficiency

Beyond the initialization stage, it is necessary to invert the N × N matrix C(ℓ) + R for every range cell to be estimated, which requires O(N³L) operations at each stage. This can cause a computational burden whenever N or L is large. However, the inversion of the roughly L matrices (when L ≫ N) can be divided into smaller groups and computed in parallel. For this reason, the computational complexity scales with the number of parallel processors and hence can be more computationally feasible given enough processors. Also, note that RMMSE can be implemented with the standard matched filter as the first stage, whereby additional stages of the algorithm need only be applied in regions surrounding large targets in order to minimize computation.

The computational burden can be further reduced by employing a variation of the matrix inversion lemma [29] to invert C(ℓ) + R in sequential fashion (assuming R has the form σ_v²I, where σ_v² is the noise power). We write C(ℓ) + R in the following form:

$$\mathbf{C}(\ell) + \mathbf{R} = \begin{bmatrix} a & \mathbf{d}^H \\ \mathbf{d} & \mathbf{Q} \end{bmatrix} \qquad (19)$$

where a is a scalar, d is (N−1) × 1, and Q is (N−1) × (N−1). From (13), for the next contiguous range cell, the matrix C(ℓ+1) + R can be written as

$$\mathbf{C}(\ell+1) + \mathbf{R} = \begin{bmatrix} \mathbf{Q} & \mathbf{g} \\ \mathbf{g}^H & b \end{bmatrix} \qquad (20)$$

where b is a scalar, g is (N−1) × 1, and Q is the same matrix from (19). Examining (19) and (20), the similarity of C(ℓ) and C(ℓ+1) can be exploited as follows by using the matrix inversion lemma. First, we apply the permutation matrix

$$\mathbf{P} = \begin{bmatrix} \mathbf{0}_{N-1}^T & 1 \\ \mathbf{I}_{N-1} & \mathbf{0}_{N-1} \end{bmatrix} \qquad (21)$$

in which I_{N−1} is the (N−1) × (N−1) identity matrix and 0_{N−1} is an (N−1) × 1 vector of zeros, to obtain

$$\mathbf{D} = \mathbf{P}^T\left(\mathbf{C}(\ell) + \mathbf{R}\right)\mathbf{P} = \begin{bmatrix} \mathbf{Q} & \mathbf{d} \\ \mathbf{d}^H & a \end{bmatrix}. \qquad (22)$$

If we then define d̃ = [d^T 0]^T and g̃ = [g^T 0]^T, the matrix C(ℓ+1) + R can be written as

$$\mathbf{C}(\ell+1) + \mathbf{R} = \mathbf{D} + (\tilde{\mathbf{g}} - \tilde{\mathbf{d}})\mathbf{e}_N^T + \mathbf{e}_N(\tilde{\mathbf{g}} - \tilde{\mathbf{d}})^H + (b - a)\,\mathbf{e}_N\mathbf{e}_N^T \qquad (23)$$

where e_N = [0 ⋯ 0 1]^T has length N. Given (23), it is straightforward to show using the matrix inversion lemma that

$$\left(\mathbf{C}(\ell+1) + \mathbf{R}\right)^{-1} = \mathbf{D}^{-1} - \mathbf{D}^{-1}\mathbf{U}\left(\boldsymbol{\Gamma}^{-1} + \mathbf{V}^H\mathbf{D}^{-1}\mathbf{U}\right)^{-1}\mathbf{V}^H\mathbf{D}^{-1} \qquad (24)$$

in which Γ = diag{1, 1, (b − a)}, U = [(g̃ − d̃)  e_N  e_N], and V = [e_N  (g̃ − d̃)  e_N]. Hence, given (C(ℓ) + R)^{-1} computed for a single range cell at each stage, the RMMSE filter for all other range cells can be calculated with O(N²L) operations instead of O(N³L). Therefore, for M stages and L ≫ N, the RMMSE algorithm requires O((M−1)N²L) operations.
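As a numerical illustration of the sequential update in (19)-(24), the following Python sketch builds C(ℓ)+R and C(ℓ+1)+R from (13) for an arbitrary power profile, obtains the second inverse from the first via the permutation of (22) and the rank-3 correction of (23)-(24), and checks the result against direct inversion. The code length, power profile, and noise level are assumptions chosen only for this check.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 8                                              # short code just for this check (assumed)
s = np.exp(1j * np.pi * np.arange(N) ** 2 / N)     # P3 code of (25)
sigma2 = 0.01
rho = rng.uniform(0.1, 10.0, size=3 * N)           # arbitrary assumed power profile

def s_shift(n):
    """Zero-filled shift s_n of the code, as used in (13)."""
    sn = np.zeros(N, dtype=complex)
    if n >= 0:
        sn[n:] = s[:N - n]
    else:
        sn[:N + n] = s[-n:]
    return sn

def C_plus_R(ell):
    """C(ell) + R from (13), with R = sigma2*I and ell indexing into rho."""
    C = sigma2 * np.eye(N, dtype=complex)
    for n in range(-(N - 1), N):
        C += rho[ell + n] * np.outer(s_shift(n), np.conj(s_shift(n)))
    return C

ell = N                                            # keeps ell +/- (N-1) inside rho
Cl, Cl1 = C_plus_R(ell), C_plus_R(ell + 1)
Cl_inv = np.linalg.inv(Cl)

# (22): D = P^T (C(ell)+R) P cycles the first row/column to the end,
# so D^{-1} is the same permutation of (C(ell)+R)^{-1}.
perm = np.r_[1:N, 0]
D_inv = Cl_inv[np.ix_(perm, perm)]

# (23)-(24): rank-3 correction via the matrix inversion lemma (assumes b != a)
d = Cl[1:, 0]                                      # from (19)
g = Cl1[:-1, -1]                                   # from (20)
b, a = Cl1[-1, -1], Cl[0, 0]
e = np.zeros(N)
e[-1] = 1.0
gd = np.append(g - d, 0.0)
U = np.column_stack([gd, e, e])
V = np.column_stack([e, gd, e])
Gamma_inv = np.diag([1.0, 1.0, 1.0 / (b - a)])
Cl1_inv_update = D_inv - D_inv @ U @ np.linalg.inv(Gamma_inv + V.conj().T @ D_inv @ U) @ V.conj().T @ D_inv

print(np.allclose(Cl1_inv_update, np.linalg.inv(Cl1)))   # expected: True
```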
B. Robustness

From (13) it can be seen that C(ℓ) + R is positive definite and therefore invertible. However, note that if ρ(ℓ) ≫ σ_v² and ρ(ℓ) ≫ λ in (16), the matrix C(ℓ) + R can become ill-conditioned. Also, when ρ(ℓ) does not have sufficient SNR to be extracted from the noise (or when there is simply no scatterer to be extracted), the power estimate ρ̂(ℓ) in (18) can be quite small which, after successive stages, can approach zero, thereby causing ill-conditioning as well. As both of these issues relate to the dynamic range of the range cell estimates, they can be resolved by employing a simple heuristic modification to the RMMSE formulation.

A heuristic approach to prevent ill-conditioning is to partially compress the dynamic range of the power estimates of the range cells and the noise power. This is accomplished by replacing ρ̂(ℓ) = |x̂(ℓ)|² with ρ̂(ℓ) = |x̂(ℓ)|^α and (under the white noise assumption) replacing σ_v² with σ_v^α, for 0 ≤ α ≤ 2. For the case of large SNR scatterers, using α < 2 reduces the effective SNR and thereby alleviates the possibility of ill-conditioning that was previously mentioned for (16). Conversely, when there are no scatterers present, the range cell estimates with ρ̂(ℓ) ≪ σ_v²/N may be driven towards zero. However, when α is reduced, the resulting values of ρ̂(ℓ) are not as small, thereby hindering their convergence towards zero. It has been found based upon extensive experimentation via simulation that values of 1.1 ≤ α ≤ 1.7 with 2 to 4 stages of the RMMSE algorithm (including the initialization stage) tend to yield the best results. Furthermore, α should be set at the high end (near 1.7) initially to quickly drive down the sidelobes from large SNR scatterers and then decreased (to near 1.1 at the final stage). This is analogous to the adaptive step-size parameter often used in closed-loop iterative algorithms such as least mean square (LMS) [30]. It is a topic of future research to determine if optimal values of α can be found as a function of the surrounding range cell estimates.

Finally, an additional heuristic approach that can be used to alleviate possible ill-conditioning is to set a lower bound upon the magnitudes of the range cell estimates. This is done so that slightly larger values of α can be used to drive down the sidelobes from large scatterers more quickly without driving smaller range cell estimates to zero.
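A minimal sketch of the two robustness heuristics just described, assuming a simple floor value and leaving the α schedule to the caller; the floor level and function name are illustrative choices, not values prescribed by the paper beyond the stated 1.1-1.7 range for α.

```python
import numpy as np

def compressed_powers(x_hat, sigma2, alpha, floor_db=-80.0):
    """Dynamic-range-compressed powers and noise term for the next RMMSE stage (Section V-B).

    alpha is typically scheduled from ~1.7 (early stages) down to ~1.1 (final stage);
    floor_db is an assumed lower bound that keeps small estimates from collapsing to zero.
    """
    rho = np.abs(x_hat) ** alpha                    # |x_hat|^alpha replaces |x_hat|^2
    rho = np.maximum(rho, 10.0 ** (floor_db / 10.0))
    return rho, sigma2 ** (alpha / 2.0)             # sigma_v^alpha under the white noise assumption
```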
VI. SIMULATION RESULTS

To demonstrate the robust performance of the RMMSE algorithm for radar pulse compression, we compare RMMSE, LS, and the standard matched filter for five cases. The first case consists of a low SNR point target (scatterer) in noise, which is the primary motivation for using standard matched filtering. The second case is typical of the scenario often addressed for LS estimation techniques and consists of an impulse response with a single high SNR target in noise, in which the matched filter is known to suffer from range sidelobes that may mask nearby small targets. The third case has a second large target that resides at a range just prior to the processing window. In this case, the accuracy of the LS estimator is expected to degrade significantly since the received signal model does not account for this region. In the fourth case, the three approaches are compared for a single point target with Doppler shift. The final case is the most stressing, with several large and small targets distributed in range with potentially severe Doppler shifts. The respective MSE performance of the different approaches for the five cases is presented in Table II.

TABLE II
MSE Performance

                             NMF       LS       RMMSE
Case 1: Low SNR target     −15 dB   −14 dB    −16 dB
Case 2: High SNR target    −29 dB   −73 dB    −67 dB
Case 3: Outside window     −28 dB   −30 dB    −70 dB
Case 4: Fast target        −25 dB   −28 dB    −28 dB
Case 5: Many fast targets  −31 dB   −42 dB    −43 dB

The waveform used for all five cases is the length N = 30 polyphase modulated Lewis-Kretschmer P3 code [6], which upon receive (after down-conversion to baseband) is defined as

$$s(n) = \exp\left(\frac{j\pi n^2}{N}\right), \qquad n = 0, 1, \ldots, N-1 \qquad (25)$$

and whose autocorrelation properties were illustrated by the solid line in Fig. 1. Note that the RMMSE algorithm is waveform independent, as evidenced by its use with random polyphase waveforms [24] as well. The ground truth radar range profiles consist of targets in noise. The additive noise is modeled as zero-mean complex Gaussian. For all cases the final processing window of the ground truth impulse response consists of L = 100 range cells. The RMMSE results are compared with the ground truth impulse response, as well as with the results obtained from using LS estimation and the normalized matched filter (NMF), which is normalized by N so that results are consistent with the other estimators.
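A short Python sketch of the P3 waveform in (25) and its normalized autocorrelation (the quantity plotted as the solid line in Fig. 1); the dB conversion is added here for convenience and is not part of the paper's definitions.

```python
import numpy as np

N = 30
n = np.arange(N)
s = np.exp(1j * np.pi * n**2 / N)                  # Lewis-Kretschmer P3 code, (25)

# Normalized autocorrelation (matched filter response to the waveform itself)
acorr = np.correlate(s, s, mode="full") / N        # lags -(N-1), ..., (N-1); np.correlate conjugates its 2nd arg
acorr_db = 20 * np.log10(np.abs(acorr) + 1e-12)    # ~0 dB peak, sidelobes roughly 20 dB lower
```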
A. Case 1: Low SNR Point Target

For the low SNR point target scenario we examine the case of a target that has an SNR of 0 dB (before extraction from noise). It is known that the matched filter yields the optimum output SNR, so it provides a means of determining how close the RMMSE algorithm comes to optimal for this case. Note that in many radar applications it is desired to detect targets with much lower SNR, which necessitates either increased transmitter power (which may or may not be feasible) or the use of much longer waveforms. The RMMSE algorithm can be implemented in the same manner regardless of the length of the waveform.

For the current example scenario, two stages of the RMMSE algorithm are employed (i.e., one reiteration stage), with α = 1.1. Fig. 4 presents the range profile estimates from RMMSE and NMF along with the ground truth, in which it can be seen that both methods are able to extract the target (or conversely to suppress the noise). In terms of MSE, NMF achieves −15 dB, LS attains −14 dB, and the MSE for the RMMSE method is −16 dB and −16 dB for the two stages, respectively. As the SNR is low, the second stage of RMMSE has little effect, which was to be expected.

Fig. 4. Extraction of low SNR point target from noise using Lewis-Kretschmer P3 code (N = 30).

The deviation of RMMSE from the optimum target SNR can be determined by computing the mismatch loss which, for comparison with the NMF, is found as

$$\text{Mismatch Loss} = \frac{\left|\mathbf{w}^H\mathbf{s}\right|^2}{\left|\frac{1}{N}\mathbf{s}^H\mathbf{s}\right|^2}. \qquad (26)$$
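For reference, the ratio in (26) can be evaluated directly from an adaptive filter w and the code s; this short helper is a straightforward transcription with a dB conversion added (negative values indicate a loss relative to the normalized matched filter).

```python
import numpy as np

def mismatch_loss_db(w, s):
    """Evaluate the ratio in (26) in dB; negative values indicate loss."""
    N = len(s)
    num = np.abs(np.vdot(w, s)) ** 2            # |w^H s|^2  (np.vdot conjugates its first argument)
    den = np.abs(np.vdot(s, s) / N) ** 2        # |(1/N) s^H s|^2  (= 1 for a unit-modulus code)
    return 10 * np.log10(num / den)
```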
The mismatch loss of the final stage of RMMSE for the range cell of the target is found to be 0.38 dB. Hence, RMMSE achieves nearly a perfect match on the target. Similar results have been found for RMMSE when using much longer waveforms to extract much smaller targets.

B. Case 2: High SNR Point Target

Whereas the matched filter yields the optimum SNR of a point target in noise, it produces range sidelobes whenever the output SNR of the target exceeds the autocorrelation sidelobes of the transmitted waveform. For the scenario in which the noise power is −60 dB down from the target, we compare the performance of NMF, RMMSE, and LS. Two stages of the RMMSE algorithm are employed with the reiteration stage using α = 1.5. Fig. 5 illustrates the results for this case in which, as expected, NMF suffers from range sidelobes. The RMMSE algorithm and LS, however, are able to estimate the range profile down to the level of the noise and as such both closely overlap ground truth. In terms of MSE, the NMF achieves −29 dB, LS realizes −73 dB, and the RMMSE algorithm has −31 dB after the first stage and reaches −67 dB after the second stage.

Fig. 5. Results for high SNR point target in noise using Lewis-Kretschmer P3 code (N = 30).

C. Case 3: High SNR Target Outside of the Processing Window

For the scenario just described, the LS and RMMSE estimators perform almost identically. However, when a significant target is present just outside of the processing window, such as is depicted in Fig. 6 for the final processing window and surrounding range cells, the LS estimator is expected to degrade substantially. The noise power is again set −60 dB down from the targets. Three stages of the RMMSE algorithm are employed with α = 1.6 and 1.1 for the two reiteration stages.

Fig. 6. Extended ground truth with high SNR target outside of processing window.

Fig. 7. Results for high SNR target outside of processing window using Lewis-Kretschmer P3 code (N = 30).

The results are presented in Fig. 7, in which the matched filter possesses range sidelobes from both of the targets and the LS technique suffers from severe misestimation because it cannot account for the target prior to the processing window. The RMMSE algorithm, on the other hand, performs the same as in the previous case, estimating the range profile down to the noise floor. In terms of MSE, the NMF achieves −28 dB, LS only attains −30 dB, and the RMMSE algorithm has −30 dB after the first stage, −56 dB after the second stage, and −70 dB after the third stage.
As compared with the previous case, the extra stage for RMMSE was necessary. The reason for this is that the respective range sidelobes for the two targets overlap and thus an additional iteration was needed to completely decouple their effects in order to completely mitigate the range sidelobes in the region near range cell index ℓ = 20. Note that the improvement by RMMSE is enabled because it can incorporate knowledge of range cell estimates from outside the processing window. Also, it is shown in Section VI-E that targets that lie just outside of the range interval for which RMMSE is applied do not degrade its estimation accuracy within the processing window.

D. Case 4: High SNR Target with Large Doppler Shift

To determine the RMMSE algorithm's tolerance to Doppler, we compare performance for a high SNR point target that possesses a Doppler shift of 40° in phase over the length of the waveform, with the noise power set to −60 dB with respect to the target. This Doppler shift is analogous to a Mach 5 target illuminated by a 1 ms pulse from an X-band radar. Three stages of RMMSE are employed with α = 1.5 and 1.2 for the two reiteration stages.

Fig. 8. Results for high SNR target with substantial Doppler shift using Lewis-Kretschmer P3 code (N = 30).

The results for NMF, LS, and RMMSE are depicted in Fig. 8. NMF exhibits the usual range sidelobes. However, LS reveals substantial performance degradation relative to the case without Doppler due to the modeling error that results from Doppler mismatch. The RMMSE algorithm, while degraded somewhat as a result of Doppler mismatch, is still able to significantly outperform either NMF or LS. The MSE results for the three techniques are −25 dB for NMF, −28 dB for LS, and −27 dB, −28 dB, and −28 dB for the three stages of RMMSE. Note that the Doppler mismatch dictates the minimum achievable MSE and this is why RMMSE and LS have the same final MSE yet RMMSE has much lower sidelobes.

E. Case 5: Dense Target Scenario with Doppler

The final case we examine is a stressing scenario with numerous targets separated in range with greatly varying powers and random Dopplers. Large targets exist just outside of the processing window as well as outside of the range interval to which RMMSE is applied. The noise power is set to −60 dB with respect to unity. The targets possess Doppler shifts that are at most ±10° over the length of the waveform. For this case, four stages of the RMMSE algorithm are employed with α = 1.5, 1.3, and 1.1 for the reiteration stages.

Fig. 9. Results for dense target scenario with Doppler using Lewis-Kretschmer P3 code (N = 30).

Fig. 9 depicts the results from the different techniques along with the ground truth, in which it is found that for both LS and NMF there are targets that remain
masked and are therefore undetectable. The RMMSE algorithm, on the other hand, performs far better and is even able to resolve the −40 dB target at range cell index ℓ = 30 from the noise and the much larger neighboring targets. In terms of MSE, the NMF attains only −31 dB, LS does better with −42 dB, while the RMMSE algorithm has −32 dB after the first stage and maintains −43 dB after the second stage and through the fourth stage. The minimum MSE is limited by Doppler mismatch on the large targets, which is the reason it does not decrease after the second stage. However, the RMMSE algorithm does continue to uncover the smaller targets, as is evident from Fig. 9.
VII. SUMMARY

Pulse compression is used primarily in sensing applications such as radar, seismic estimation, biomedical imaging, and ultrasonic nondestructive evaluation, as a means of obtaining high spatial resolution. After reception, the range profile estimate is typically obtained by either matched filtering with respect to the transmitted waveform or LS estimation. However, matched filtering results in sidelobes due to the autocorrelation of the transmitted waveform, and LS estimation can suffer from severe misestimation due to inherent approximations in its formulation. In contrast, the RMMSE algorithm is proposed, which adaptively estimates the MMSE filter that matches the actual received return signal for each individual range cell. The result is a robust estimator that mitigates autocorrelation sidelobes to the level of the noise.

The RMMSE algorithm has been shown in a variety of scenarios to exhibit very small mismatch loss for small targets in noise and is also robust to Doppler mismatch. In fact, the RMMSE algorithm is capable of unmasking very small targets in the presence of several closely-spaced large targets possessing considerable Dopplers. This methodology was demonstrated using polyphase-coded waveforms (discrete-time phase modulated). It remains a topic of future research to determine the performance of the proposed method with continuous-time phase-modulated waveforms (such as linear or nonlinear FM). Unlike discrete-time waveforms, the continuous-time waveforms may result in a scalloping loss whenever the discrete model of the range profile and the actual received return signal are not perfectly synchronous.

Additional topics of future research are the extensions to multiple pulses and antenna elements (i.e., Doppler-range, space-range, and space-time-range adaptive processing), adaptively accounting for Doppler mismatch of individual targets, fast approximations to RMMSE for real-time computation, and adaptive separation/pulse compression of simultaneously received waveforms in the same spectrum. This latter topic is concerned with the effective detection of the range profile illuminated by a given radar in the presence of other radars that share the same spectrum.

REFERENCES

[1] Skolnik, M. I. Introduction to Radar Systems (3rd ed.). New York: McGraw-Hill, 2001, 339-369.
[2] O'Brien, M. S., Sinclair, A. N., and Kramer, S. M. High resolution deconvolution using least-absolute-values minimization. In Proceedings of the Ultrasonics Symposium, Dec. 4-7, 1990, 1151-1156.
[3] Suh, D-M., Kim, W-W., and Chung, J-G. Ultrasonic inspection of studs (bolts) using dynamic predictive deconvolution and wave shaping. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, 46, 2 (Mar. 1999), 457-463.
[4] Misaridis, T. X., Gammelmark, K., Jorgensen, C. H., Lindberg, N., Thomsen, A. H., Pedersen, M. H., and Jensen, J. A. Potential of coded excitation in medical ultrasound imaging. Ultrasonics, 38 (2000), 183-189.
[5] Yarlagadda, R., Bednar, J. B., and Watt, T. L. Fast algorithms for lp deconvolution. IEEE Transactions on Acoustics, Speech, and Signal Processing, ASSP-33, 1 (Feb. 1985), 174-182.
[6] Lewis, B. L., and Kretschmer, F. F. Linear frequency modulation derived polyphase pulse compression codes. IEEE Transactions on Aerospace and Electronic Systems, AES-18, 5 (Sept. 1982), 637-641.
[7] Treitel, S., and Robinson, E. A. The design of high-resolution digital filters. IEEE Transactions on Geoscience Electronics, GE-4, 1 (June 1966), 25-38.
[8] Ackroyd, M. H., and Ghani, F. Optimum mismatched filter for sidelobe suppression. IEEE Transactions on Aerospace and Electronic Systems, AES-9 (Mar. 1973), 214-218.
[9] Baden, J. M., and Cohen, M. N. Optimal peak sidelobe filters for biphase pulse compression. In Proceedings of the IEEE International Radar Conference, May 1990, 249-252.
[10] Baden, J. M., and Cohen, M. N. Optimal sidelobe suppression for biphase codes. In Proceedings of the National Telesystems Conference, Mar. 1991, 127-131.
[11] Sato, R., and Shinriki, M. Simple mismatched filter for binary pulse compression code with small PSL and small S/N loss. IEEE Transactions on Aerospace and Electronic Systems, 39, 2 (Apr. 2003), 711-718.
[12] Blinchikoff, H. J. Range sidelobe reduction for the quadriphase codes. IEEE Transactions on Aerospace and Electronic Systems, 32, 2 (Apr. 1996).
[13] Felhauer, T. Digital signal processing for optimum wideband channel estimation in the presence of noise. IEE Proceedings, Pt. F, 140, 3 (June 1993), 179-186.
[14] Song, S. M., Kim, W. M., Park, D., and Kim, Y. Estimation theoretic approach for radar pulse compression processing and its optimal codes. Electronics Letters, 36, 3 (Feb. 2000), 250-252.
[15] Zrnic, B., Zejak, A., Petrovic, A., and Simic, I. Range sidelobe suppression for pulse compression radars utilizing modified RLS algorithm. In Proceedings of the IEEE International Symposium on Spread Spectrum Techniques and Applications, vol. 3, Sept. 1998, 1008-1011.
[16] Sarkar, T. K., and Brown, R. D. An ultra-low sidelobe pulse compression technique for high performance radar systems. In Proceedings of the IEEE National Radar Conference, May 1997, 111-114.
[17] Kay, S. M. Fundamentals of Statistical Signal Processing: Estimation Theory. Upper Saddle River, NJ: Prentice-Hall, 1993, 219-286 and 344-350.
[18] Gabriel, W. F. Superresolution techniques in the range domain. In Proceedings of the IEEE International Radar Conference, May 1990, 263-267.
[19] Gabriel, W. F. Superresolution techniques and ISAR imaging. Naval Research Laboratory Memorandum Report 6714, Sept. 21, 1990.
[20] Liao, X., and Bao, Z. Radar target recognition using superresolution range profiles as features. In Proceedings of the SPIE International Symposium on Multispectral Image Processing, vol. 3545, Sept. 1998, 397-400.
[21] Tsao, J., and Steinberg, B. D. Reduction of sidelobe and speckle artifacts in microwave imaging: the CLEAN technique. IEEE Transactions on Antennas and Propagation, 36, 4 (Apr. 1988), 543-556.
[22] Bose, R., Freedman, A., and Steinberg, B. D. Sequence CLEAN: A modified deconvolution technique for microwave imaging of contiguous targets. IEEE Transactions on Aerospace and Electronic Systems, 38, 1 (Jan. 2002), 89-97.
[23] Blunt, S. D., and Gerlach, K. A novel pulse compression scheme based on minimum mean-square error reiteration. In Proceedings of the IEEE International Radar Conference, Sept. 3-5, 2003, 349-353.
[24] Blunt, S. D., and Gerlach, K. Adaptive pulse compression. In Proceedings of the IEEE International Radar Conference, Apr. 26-29, 2004, 271-276.
[25] Fante, R. L. Adaptive nulling of SAR sidelobe discretes. IEEE Transactions on Aerospace and Electronic Systems, 35, 4 (Oct. 1999), 1212-1218.
[26] DeGraaf, S. R. Sidelobe reduction via adaptive FIR filtering in SAR imagery. IEEE Transactions on Image Processing, 3, 3 (May 1994), 292-301.
[27] DeGraaf, S. R. SAR imaging via modern 2-D spectral estimation methods. IEEE Transactions on Image Processing, 7, 5 (May 1998), 729-761.
[28] Van Trees, H. L. Optimum Array Processing. New York: Wiley, 2002.
[29] Moon, T. K., and Stirling, W. C. Mathematical Methods and Algorithms for Signal Processing. Upper Saddle River, NJ: Prentice-Hall, 2000, 258-264.
[30] Haykin, S. Adaptive Filter Theory. Upper Saddle River, NJ: Prentice-Hall, 2002.
Shannon D. Blunt (S’96–M’02) received the B.S., M.S., and Ph.D. degrees in electrical engineering in 1999, 2000, and 2002, respectively, from the University of Missouri—Columbia (MU). From 2002 to 2005 he was with the Radar Division of the U.S. Naval Research Laboratory in Washington, D.C. In 2005 he joined the faculty of the Department of Electrical Engineering and Computer Science at the University of Kansas. Dr. Blunt received the Donald K. Anderson Graduate Student Teaching award in electrical engineering from MU in 2000 and the MU Outstanding Graduate Student award in electrical engineering in 2001. He also received the 2004 Naval Research Laboratory Alan Berman Research Publication Award. He is a member of Eta Kappa Nu and Tau Beta Pi. His research interests are in adaptive signal processing for radar and communications with an emphasis on waveform diversity techniques.
Karl Gerlach (M’81–F’02) was born in Chicago, IL. He received his B.S. in 1972 from the University of Illinois, Urbana, and his M.S. and D.Sc. from George Washington University, Washington, D.C., in 1975 and 1981, respectively, all in electrical engineering. Since 1972, he has been employed by the Naval Research Laboratory in Washington, D.C. From 1972 to 1976, he worked on experimental submarine communications systems and from 1976 to the present he has been with the Radar Division where his research interests include adaptive signal processing and space-based radar. Dr. Gerlach was the 1986 recipient of the IEEE AESS Radar Systems Panel award.