4496
IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, VOL. 51, NO. 8, AUGUST 2013
Double-Channel Bistatic SAR System With Spaceborne Illuminator for 2-D and 3-D SAR Remote Sensing

Robert Wang, Senior Member, IEEE, Yunkai Deng, Member, IEEE, Zhimin Zhang, Yunfeng Shao, Jixiang Hou, Gang Liu, and Xiayi Wu

Abstract—This paper presents a double-channel hybrid bistatic synthetic aperture radar (SAR) system. It can be operated with an available illuminator (e.g., a spaceborne or airborne SAR system) to acquire 2-D and 3-D microwave images for remote-sensing applications, and it can serve as a test system for the validation of complex bistatic acquisitions, novel synchronization algorithms, and advanced imaging techniques. In this paper, we demonstrate the time and phase synchronization strategies, a fast time-domain imaging algorithm, 2-D bistatic SAR imaging, and 3-D bistatic stereo radargrammetry. With the proposed bistatic system, the acquired bistatic image has much better sensitivity than its monostatic counterpart, which facilitates the detection and recognition of targets based on their characteristic bistatic scattering behaviors. Finally, the experiment results highlight the differences between monostatic and bistatic SAR images.

Index Terms—Bistatic synthetic aperture radar (BiSAR), stereo radargrammetry SAR.
I. INTRODUCTION

BISTATIC synthetic aperture radar (BiSAR) is characterized by different locations for the transmitter and receiver and hence offers considerable capability, reliability, and flexibility in designing BiSAR missions [1]–[11]. Compared to a monostatic SAR system, the bistatic configuration brings additional benefits, such as frequent monitoring, resolution enhancement, reduced vulnerability for military applications, reduced costs by using existing illuminators of opportunity with several receive-only systems, and the possibility of forward- or backward-looking SAR imaging [1]–[17]. Bistatic systems with stationary illuminators (transmitters) appear interesting in this context, as they allow small and lightweight receive-only unmanned aerial vehicles to produce bistatic SAR images. The passive receivers in the stationary configuration with a feasible illuminator could be used to monitor the
Manuscript received July 19, 2012; revised January 16, 2013; accepted March 1, 2013. Date of publication May 13, 2013; date of current version July 22, 2013. This work was supported by the "Hundred Talents Program" of the Chinese Academy of Sciences. The authors are with the Space Microwave Remote Sensing System Department, Institute of Electronics, Chinese Academy of Sciences, Beijing 100190, China (e-mail: [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]; [email protected]). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TGRS.2013.2252908
places of interest and to acquire further multidimensional information, such as bistatic stereo radargrammetry measurements, stereo-radargrammetric/interferometric SAR, resolution enhancement, and multibaseline interferometric SAR [3], [15]. This paper proposes a double-channel bistatic receiving system that works with available illuminators, e.g., airborne or spaceborne SAR systems. It was built by the Space Microwave Remote Sensing System Department, Institute of Electronics, Chinese Academy of Sciences (IECAS). The system can flexibly work at L and C bands. To validate the system, several bistatic SAR experiments were successfully performed in 2011 by the Space Microwave Remote Sensing System Department, IECAS. In the experiments, an L-band spaceborne SAR was operated as the separate transmitter in stripmap mode, and the double-channel receiver was mounted on a hill, as shown in Fig. 1. This bistatic configuration can be termed a moving/stationary configuration [5], [13], [14], [16]. Similar hybrid bistatic systems and experiments have been presented in [5], [13], [14], and [16]. In 2006, a C-band stationary receiver for hybrid SAR experiments was developed by the Remote Sensing Laboratory at the Universitat Politecnica de Catalunya, with ENVISAT and ERS-2 used as transmitters [16]. Subsequently, in December 2007, Fraunhofer FHR performed a hybrid bistatic experiment in which a stationary transmitter was used and FHR's phased array multifunctional imaging radar (PAMIR) acted as a moving receiver [5]. In the summer of 2009, the Center for Sensorsystems, University of Siegen, developed an X-band stationary receiver that works together with TerraSAR-X [13], [14].
In the double-channel receiving system, one antenna is oriented directly toward the transmitter to receive the transmitted pulse from the satellite and to trigger the other receiving channel, whose antenna is steered toward the illuminated scene. In this way, the receiving channel for the echo signal can be synchronized in time and phase by the direct channel for the transmitted pulse. For this moving/stationary configuration, only the moving platform contributes to the azimuth modulation, whereas the stationary platform introduces a range offset to the range migration trajectories of targets at the same range. The offset is determined by the azimuth position of the different targets with respect to the stationary platform [5]. It is the space-
0196-2892/$31.00 © 2013 IEEE
Fig. 1. Imaging geometry in the spaceborne-stationary configuration.

Fig. 2. System configuration of the double-channel bistatic SAR system.

Fig. 3. Digital and monitoring setups.
variant offset that makes frequency-domain processing difficult. Therefore, in this paper, we use a fast time-domain algorithm to focus the acquired raw data. Furthermore, using the two data sets acquired with different incidence angles, a 3-D stereo-radargrammetric image can be obtained. Moreover, from the obtained bistatic and monostatic SAR images, the different presentation of the same scene between the monostatic and bistatic images is highlighted, which is caused by the differences in the illumination geometry and scattering mechanisms. Furthermore, bistatic data can be combined with monostatic data to obtain a highly informative set of multiangle observations. For example, the clear presentation of the road and crossway in the bistatic image will facilitate the detection and recognition of targets based on their characteristic bistatic scattering behavior. Based on this system, one can further explore the bistatic scattering behavior [3] and, in the future, monitor land subsidence and its induced geological hazards with bistatic SAR interferometry and tomography [8]–[11].

This paper is organized as follows. In Section II, the primary system configuration is described. Section III presents the imaging strategy and approaches. In Section IV, 2-D and 3-D bistatic SAR images are shown to highlight the different scattering behaviors between the bistatic and monostatic configurations. Finally, the conclusion is given in Section V.

II. SYSTEM CONFIGURATION AND PERFORMANCE ANALYSIS

A. Hardware Configuration

The designed system consists of two channels, Ch1 and Ch2, as shown in Figs. 2 and 3. Ch1 is used to receive the direct signal from the illuminator, i.e., the spaceborne SAR. Ch2 is designed to receive the echo signal reflected from the illuminated scene.
In the double-channel receiving system, two identical horn antennas are used, each designed with a beamwidth of 15° in both directions and a receiving gain of 15 dB, as listed in Table I. Signals from Ch1 and Ch2 are recorded simultaneously. Owing to data-rate limitations, continuous data acquisition is not possible. Therefore, appropriate receiving gates have to
be defined, which is achieved by pulse synchronization using the direct-path signal. A precise timer runs continuously in the receiver, and the starting time of each receiving gate is recorded as auxiliary data.

TABLE I
SYSTEM PARAMETERS

Receiving channels (Ch1 and Ch2):
  Antenna receive gain: 15 dB
  Carrier frequency: 1.25 GHz
  Sampling frequency: 533 MHz
  Antenna size: 0.567 m (range) × 0.423 m (azimuth)
  Altitude: 343 m
Spaceborne transmitter:
  Bandwidth: 62 MHz
  Altitude: 650 km

Fig. 4. Pulse synchronization using the direct signal.

B. Synchronization Issue

Bistatic synchronization (i.e., geometry, phase, and time) is an essential step in bistatic SAR data acquisition [5]–[7], [12]. Any deviation between the two oscillators causes a residual modulation of the recorded azimuth signal and a drift of the receiver's receiving window with respect to that of the transmitter, and it may prevent proper recording of the echo signal [6], [7]. In [5] and [12], the windowed data acquisition is synchronized with the directly received satellite pulses; after this synchronization signal, PAMIR runs on its own clock, and the recorded synchronization chirp is also used for phase synchronization. In [6] and [7], the TerraSAR-X and TanDEM-X systems employ a mutual exchange of radar pulses to achieve phase synchronization, and local pulse repetition intervals (PRIs) are introduced to synchronize the receiving window.

In the presented experiment, the horn antenna of Ch1 is steered directly toward the spaceborne transmitter, so it captures the direct pulse from the illuminator with high SNR and can detect it accurately. The detected direct pulse is used as the time-synchronization signal. In this strategy, the A/D Sample Module runs continuously, and its samples are sent to the Synchronization Module and the FPGA Processing Module at the same time. Fig. 4 shows the pulse-synchronization scheme used to trigger the receiver's internal PRF, which determines the timing of the echo window.

After the system reset, the Synchronization Module compares the A/D value with a threshold (the threshold should be slightly lower than the maximum A/D value at the beginning of the frame and much higher than the noise amplitude). If the A/D value exceeds the threshold, the module generates a PRF signal and a HOLD signal. The high level of the PRF signal lasts for several clock cycles, and the HOLD signal lasts until the entire frame of data has been sampled. The Synchronization Module suspends the comparison while the HOLD signal is asserted and resumes it once the HOLD signal is deasserted. Furthermore, interfering signals must be dealt with, since they might otherwise cause a false trigger; in our system, such signals are filtered by an analog band-pass filter and suppressed by a comparator and a digital phase-locked loop. After the echo channel is triggered, both channels start simultaneously to record the sampled echo signal and the whole transmitted direct pulse. Using the recorded direct pulse, phase synchronization can be implemented precisely. Geometry synchronization is achieved by precise steering of the beam orientation of Ch2 together with prior knowledge of the orbit and looking angle of the illuminator.

C. NESZ

The noise-equivalent sigma zero (NESZ) is a measure of the sensitivity of the system to areas of low radar backscatter [3], [23], [24]. It is the value of the backscatter coefficient corresponding to a signal-to-noise ratio of unity (SNR = 1), and it therefore represents the lowest backscatter coefficient in the illuminated scene that the SAR system is able to detect. It can be defined as

$$\mathrm{NESZ} = \frac{(4\pi)^3\, r_{0R}^2\, r_{0T}^2\, k\, T_n\, F\, L}{P_{TAV}\, G_T\, G_R\, \lambda^2\, A_{\mathrm{cell}}\, T_a} \tag{1}$$
where $r_{0R}$ and $r_{0T}$ are the closest slant ranges from the target to the receiver and transmitter, respectively; $P_{TAV}$ represents the average transmit power; $G_T$ and $G_R$ are the gains of the transmitter and receiver; $\lambda$ is the wavelength; $T_a$ denotes the joint integration time; $A_{\mathrm{cell}}$ is the area of the resolution cell of the bistatic image; $k$ is the Boltzmann constant; $T_n$ is the total effective system noise temperature; $F$ is the receiver noise figure; and $L$ is the system loss factor.

Using the parameters listed in Table I, the NESZ is shown in Fig. 5. The NESZ of the bistatic SAR image varies from −79 to −51 dB between near and far range, as shown in Fig. 5(a). This wide variation is caused by the incidence angle of the stationary receiver, which ranges from 65° to 87°. The bistatic NESZ is more than 24 dB better than that of the monostatic spaceborne acquisition, and in the near range it is 52 dB better. This improvement is mainly due to the much shorter range from the stationary receiver to the scene. Thus, this acquisition configuration with a stationary receiver has much better sensitivity than its monostatic counterpart [3].

D. Resolution

Resolution is the degree to which two or more point targets of approximately equal amplitude and arbitrary constant phase can be separated in one or more physical dimensions, such
Fig. 5. NESZ for the imaged scene. The azimuth axis denotes the imaged scene position in the along-track direction; the range axis is the imaged scene position in the across-track direction. (a) NESZ for the bistatic SAR image. (b) NESZ difference between bistatic and monostatic SAR images.

Fig. 6. Two-dimensional resolution for this bistatic configuration. The azimuth axis denotes the imaged scene position in the along-track direction; the range axis is the imaged scene position in the across-track direction. (a) Ground range resolution. (b) Azimuth resolution.
as range and Doppler [18], [19]. For monostatic SAR, the ground-range and azimuth resolutions can be defined as $c/(2 B_r \sin\theta_{inc})$ and $v/B_a$, respectively, where $c$ is the speed of light, $B_r$ is the bandwidth of the transmitted chirp signal, $\theta_{inc}$ denotes the incidence angle (identical for transmitter and receiver in the monostatic case), and $v$ is the effective velocity of the platform. In a bistatic scenario, however, the definition becomes more complex, since the intersection of the iso-range surfaces with the ground surface is an ellipse with the transmitter and receiver as foci rather than a circle. The bistatic ground-range resolution is defined as the projection of the slant-range resolution onto the gradient direction of the slant range [3]; the gradient gives the direction of maximum change in slant range. Therefore, the bistatic ground-range resolution can be formulated as [3]
$$\rho_g = \frac{c}{B_r\,|\nabla R_B|} = \frac{c}{B_r\,(\sin\theta_T + \sin\theta_R)} \tag{2}$$
where $\theta_T$ and $\theta_R$ are the incidence angles of the transmitter and receiver, respectively. The azimuth resolution can be obtained using a similar approach to the range-resolution derivation. It can be formulated as

$$\rho_a = \frac{1}{T_a\,|\nabla f_{Dop}|} = \frac{1}{T_a}\left(\frac{v_T}{\lambda r_{0T}} + \frac{v_R}{\lambda r_{0R}}\right)^{-1} \tag{3}$$
where $f_{Dop}$ is the Doppler history. Using the parameters listed in Table I, the ground-range resolution varies from 3.0 to 4.6 m, as shown in Fig. 6(a). The resolution of targets located near the receiver deteriorates along the ground swath as the incidence angle of the receiver increases. With a synthetic aperture time of 2.84 s, the azimuth resolution ranges from 9.62 to 9.7 m, as shown in Fig. 6(b).

III. IMAGING TECHNIQUES

A. Signal Model

Assume a moving spaceborne SAR system as the illuminator, with the proposed double-channel stationary system used as the receiver. One channel records the signal directly from the illuminator, and the other receives the echo signal reflected from the illuminated scene. For channel Ch1, the received signal after demodulation is

$$g_D(\tau, t) = s_l\!\left(t - \frac{R_D(\tau)}{c}\right)\exp\!\left(-j 2\pi \frac{R_D(\tau)}{\lambda}\right) \tag{4}$$
where $s_l(t)$ represents the transmitted signal and $R_D(\tau)$ denotes the instantaneous slant range from the transmitter to Ch1, given by

$$R_D(\tau) = \sqrt{r_D^2 + v^2\tau^2} \tag{5}$$

where $r_D$ is the range of closest approach from the illuminator to channel Ch1, i.e., the zero-Doppler slant range. For simplicity, the corresponding zero-Doppler time is taken as the azimuth time origin, i.e., $\tau = 0$ when the instantaneous slant range from the illuminator to Ch1 equals the range of closest approach.

The received signal $g_E$ from a point target located at $(\tau_{0T}, r_{0R})$ after demodulation is given by

$$g_E(\tau, t; \tau_{0T}, r_{0R}) = w(\tau - \tau_{cb})\, w_s(\tau_{0T})\, \sigma(\tau_{0T}, r_{0R})\, s_l\!\left(t - \frac{R_T(\tau) + R_R(\tau_{0T}, r_{0R})}{c}\right)\exp\!\left(-j 2\pi \frac{R_T(\tau) + R_R(\tau_{0T}, r_{0R})}{\lambda}\right) \tag{6}$$

where $\tau_{0T}$ is the zero-Doppler time of the transmitter; $w(\tau - \tau_{cb})$ represents the composite antenna pattern centered on azimuth time $\tau_{cb}$, simplified as uniform illumination over the ground; the second window function $w_s$ determines the extension of the illuminated area; $\sigma(\tau_{0T}, r_{0R})$ is the backscattering coefficient of the point target located at $(\tau_{0T}, r_{0R})$; $R_R(\tau_{0T}, r_{0R})$ is the stationary slant range from the receiver (i.e., Ch2) to the point target; and $R_T(\tau)$ represents the instantaneous slant range from the transmitter to the point target. They are given by

$$R_R(\tau_{0T}, r_{0R}) = \sqrt{r_{0R}^2 + v^2\tau_{0T}^2} \tag{7}$$

$$R_T(\tau) = \sqrt{r_{0T}^2 + v^2(\tau - \tau_{0T})^2}. \tag{8}$$

B. Fast Time-Domain Processing

The main principle of the fast backprojection (FBP) algorithm is to reuse part of the coherent summation of an adjacent pixel (e.g., $(p+1, q)$ or $(p-1, q)$) to replace the corresponding part of the summation for pixel $(p, q)$. Assuming there are $N$ range lines and $M$ samples per range line, the whole range-compressed data set is divided into $\sqrt{N}$ blocks along the azimuth direction, and each block is treated as a sub-aperture. Each sub-image is then focused from its corresponding sub-aperture by the traditional backprojection (BP) algorithm. Finally, the sub-images are combined into the final image with a secondary phase compensation. Fig. 7 is the block diagram of the proposed FBP algorithm.
The red part represents the sub-image processing and the green part the final-image processing. This principle was proposed in [20], and we modified the previous algorithm in two ways. First, the proposed FBP algorithm uses the direct signal as the matched-filter reference to carry out the range compression. Second, it adds a secondary phase correction to improve the image quality. The first modification has three advantages. First, the algorithm relaxes the required measurement precision of the satellite orbit and the receiver position,
Fig. 7. Block diagram of the FBP algorithm.
because the phase of the direct signal contains the geometry information of the satellite orbit, the scene, and the receiver position; using this information, the algorithm reduces the effect of measurement errors. Second, fast backprojection uses an approximate range instead of the precise range to reduce the computational complexity, which degrades the image quality, and the effect of the approximation grows with the magnitude of the range cell migration. Since the direct signal carries the range-cell-migration information, the proposed method reduces the effect of this approximation. Third, there is a path delay due to temporal and spatial atmospheric inhomogeneity; using the direct signal as the matched-filter reference in the range compression reduces the effect of the atmospheric delay, because the delay is almost the same for the direct signal and the echo.

The advantage of the second modification is as follows. Fast backprojection uses the approximate range instead of the precise range to reduce the computational complexity. The difference between the approximate and precise ranges is on the order of several centimeters, which is much less than the range resolution but comparable to the carrier wavelength. The difference therefore affects the magnitude only slightly, but it seriously affects the phase; because BP is based on coherent accumulation, this phase error degrades the BP result significantly and is unacceptable, especially for X-band SAR. Another characteristic of the difference between the approximate and precise ranges is that it is nearly constant within each sub-block, so the phase can be corrected by the secondary phase correction, reducing the approximation-induced phase error.
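The role of the direct signal as the matched-filter reference can be sketched numerically. In the minimal sketch below, the chirp parameters, gate length, and sample delays are illustrative values (not the experiment's), and the "recorded" direct pulse is synthesized rather than captured by Ch1:

```python
import numpy as np

# Illustrative chirp parameters (not the experiment's exact values).
fs, Br, Tp = 150e6, 62e6, 10e-6
t = np.arange(0, Tp, 1 / fs)
chirp = np.exp(1j * np.pi * (Br / Tp) * (t - Tp / 2) ** 2)

n_gate = 4096
direct = np.zeros(n_gate, complex)      # Ch1: recorded direct pulse
echo = np.zeros(n_gate, complex)        # Ch2: scene echo
d0, d1 = 200, 1400                      # sample delays inside the gate
direct[d0:d0 + t.size] = chirp
echo[d1:d1 + t.size] = 0.3 * chirp      # one point scatterer

# Range compression: correlate the echo with the *recorded direct pulse*.
ref = np.conj(np.fft.fft(direct, n_gate))
rc = np.fft.ifft(np.fft.fft(echo, n_gate) * ref)
peak = int(np.argmax(np.abs(rc)))
# The peak lands at the echo-minus-direct delay, i.e. the bistatic range
# referenced to the direct path (cf. R_T + R_R - R_D in the signal model).
assert peak == d1 - d0
```

Because the reference is the recorded direct pulse rather than a nominal chirp replica, any delay or phase drift common to both channels (oscillator offset, atmospheric path delay) cancels in the compressed output.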
Assuming that the slant-range and azimuth resolutions are $\rho_r$ and $\rho_a$, and that the closest slant range of the processed scene is $R_{0R}$, the position of the target $(\tau_{nT}, r_{mR})$ is $(x_n, y_m)$, given by

$$x_n = n\rho_a, \quad n = 1, 2, \ldots, N; \qquad y_m = R_{0R} + m\rho_r, \quad m = 1, 2, \ldots, M. \tag{9}$$
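The final image grid (9) and the interlaced sub-image grids (10) can be constructed as below; the grid sizes, resolutions, and oversampling ratios are illustrative assumptions, not values from the experiment:

```python
import numpy as np

N, M = 64, 256                 # azimuth lines, range samples (illustrative)
rho_a, rho_r = 9.6, 3.5        # azimuth / slant-range resolution [m]
R0R = 800.0                    # closest slant range of the scene [m]
alpha1, alpha2 = 2, 2          # azimuth / range oversampling ratios
sqN = int(np.sqrt(N))

# Final image grid, eq. (9).
x = np.arange(1, N + 1) * rho_a
y = R0R + np.arange(1, M + 1) * rho_r

# Interlaced sub-image grid of sub-aperture j, eq. (10): coarser in azimuth
# by sqrt(N)/alpha1 and shifted by j*rho_a/alpha1 so the grids interleave.
def subgrid(j):
    n = np.arange(1, sqN * alpha1 + 1)
    m = np.arange(1, M * alpha2 + 1)
    xj = (sqN * n + j) * rho_a / alpha1
    yj = R0R + m * rho_r / alpha2
    return xj, yj

x0, y0 = subgrid(0)
x1, y1 = subgrid(1)
assert np.isclose(x1[0] - x0[0], rho_a / alpha1)   # interleaved azimuth offset
```

The coarse azimuth spacing of each sub-grid matches the degraded azimuth resolution of a single sub-aperture, which is what allows each sub-image to be evaluated cheaply before the final coherent combination.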
Step 1: Sub-image processing.

1) First, the range-compressed data are oversampled along the range direction by zero-padding the spectrum, using the properties of the Fourier transform.

2) Second, the whole range-compressed data set is divided into blocks along the azimuth direction, each block, treated as a sub-aperture, containing $\sqrt{N}$ range lines. Let the final image grid be $(x_n, y_m)$ and the $j$th sub-image grid be $(x_n^j, y_m^j)$. Their relationship, which can be referred to as the interlaced sub-image grid method, is

$$x_n^j = \left(\sqrt{N}\,n + j\right)\rho_a/\alpha_1, \quad n = 1, 2, \ldots, \sqrt{N}\alpha_1; \qquad y_m^j = R_{0R} + m\rho_r/\alpha_2, \quad m = 1, 2, \ldots, M\alpha_2 \tag{10}$$

where $\alpha_1$ and $\alpha_2$ are the oversampling ratios along the azimuth and range directions, respectively. Each sub-image can be focused by traditional BP using the corresponding data block. Taking the $j$th sub-image as an example, the sub-image azimuth focusing is formulated as

$$d_s(j, n, m) = \sum_{i=0}^{\sqrt{N}-1} d_r\!\left(j, i, \mathrm{index}\!\left(\tau_i; x_n^j, y_m^j\right)\right)\,\phi_c\!\left(\tau_i; x_n^j, y_m^j\right) \tag{11}$$

where $d_s(j, n, m)$ is an element of the $j$th sub-image, $d_r(j, i, \mathrm{index}(\tau_i; x_n^j, y_m^j))$ is an element of the $j$th range-compressed data block, $\phi_c(\tau_i; x_n^j, y_m^j)$ is the compensation phase, and $\tau_i$ is the azimuth time of the $i$th range line, with

$$\mathrm{index}\!\left(\tau; x_n^j, y_m^j\right) = \mathrm{round}\!\left(R_{RD}\!\left(\tau; x_n^j/v, y_m^j\right) F_s \beta / c\right) \tag{12}$$

$$\phi_c\!\left(\tau_i; x_n^j, y_m^j\right) = \exp\!\left(j 2\pi f_0\, R_{RD}\!\left(\tau_i; x_n^j/v, y_m^j\right)/c\right) \tag{13}$$

$$R_{RD}\!\left(\tau_i; x_n^j/v, y_m^j\right) = \sqrt{\left(y_m^j\right)^2 + v^2\left(\tau_i - x_n^j/v\right)^2} + \sqrt{\left(y_m^j\right)^2 + \left(x_n^j\right)^2} - \sqrt{r_D^2 + v^2\tau_i^2} \tag{14}$$

where $\mathrm{round}(\cdot)$ rounds its argument to the nearest integer, $F_s$ is the range sampling rate, and $\beta$ is the range oversampling ratio.

Step 2: Obtain the final image.

In this step, the final image is obtained from the series of sub-images computed in Step 1. For each position $(x_n, y_m)$ of the final image grid $(n, m)$, the approximated point $(n_j, m_j)$ must be found in each sub-image, and the sub-image values are summed after the secondary phase correction. Finding $(n_j, m_j)$ in the proposed FBP algorithm can be formulated as the optimization problem

$$\min_{m_j, n_j} \int_{\tau_{s_j}}^{\tau_{e_j}} \left| R_{RD}(\tau; x_n/v, y_m) - R_{RD}\!\left(\tau; x_{n_j}^j/v, y_{m_j}^j\right) \right| d\tau, \quad n_j = 0, 1, \ldots, \sqrt{N}\alpha_1, \quad m_j = 0, 1, \ldots, M\alpha_2 \tag{15}$$

where $\tau_{s_j}$ and $\tau_{e_j}$ are the start and end azimuth times of the $j$th data block. The approximated closed-form solution of (15) is (see Appendix A)

$$\begin{cases} n_j = \mathrm{round}\!\left((n - j/\alpha_1)\middle/\left(\sqrt{N}/\alpha_1\right)\right)\cdot \sqrt{N}/\alpha_1 + j/\alpha_1 \\[4pt] m_j = m - \dfrac{\left(v\tau - x_{n_j}^j\right)\rho_a}{y_m\,\rho_r}\left(n_j - n\right). \end{cases} \tag{16}$$

Assuming $(n_j, m_j)$ is the solution for the $j$th sub-image, the secondary phase correction can be formulated as (see Appendix B)

$$\phi_s\!\left(\tau_{c_j}; n_j, m_j\right) = \exp\!\left(j 2\pi f_0\left[R_{RD}\!\left(\tau_{c_j}; x_n/v, y_m\right) - R_{RD}\!\left(\tau_{c_j}; x_{n_j}^j/v, y_{m_j}^j\right)\right]/c\right) \tag{17}$$

where $\tau_{c_j}$ is the middle azimuth time of the $j$th data block. The final-image azimuth focusing can then be formulated as

$$\mathrm{image}(n, m) = \sum_{j=0}^{\sqrt{N}-1} d_s\!\left(j, n_j, m_j\right)\,\phi_s\!\left(\tau_{c_j}; n_j, m_j\right). \tag{18}$$

Fig. 8. Geometry of the bistatic stereo radargrammetry SAR configuration.
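The backprojection core of (11)–(14) and the sub-aperture split can be illustrated on synthetic data. For clarity, this sketch backprojects each sub-aperture directly onto the final pixel, which is equivalent to exact BP split into $\sqrt{N}$ blocks; the speedup of the full FBP comes from first evaluating each sub-image on its coarse interlaced grid (10) and then applying the secondary phase correction (17). All geometry values below are made-up illustrative numbers:

```python
import numpy as np

# Toy spaceborne/stationary geometry (illustrative, not the paper's values).
c, f0, fs, v = 3e8, 1.25e9, 150e6, 7500.0
r_D = 7.0e5                          # transmitter-to-Ch1 closest range [m]
N, n_rng = 64, 512                   # azimuth lines, range bins
tau = (np.arange(N) - N // 2) * 0.004
x_t, y_t = 3.0, 7.0e5                # point target in the (x, y) image grid

def R_RD(tau, x, y):
    """Direct-signal-referenced bistatic range history (cf. eq. (14))."""
    RT = np.sqrt(y**2 + (v * tau - x)**2)    # transmitter -> target
    RR = np.sqrt(y**2 + x**2)                # stationary receiver -> target
    RD = np.sqrt(r_D**2 + (v * tau)**2)      # direct path (the reference)
    return RT + RR - RD

# Synthetic range-compressed data: one point target per azimuth line.
data = np.zeros((N, n_rng), complex)
r_hist = R_RD(tau, x_t, y_t)
bins = np.round(r_hist * fs / c).astype(int) % n_rng
data[np.arange(N), bins] = np.exp(-2j * np.pi * f0 * r_hist / c)

def bp_block(lines, taus, x, y):
    """Backproject one sub-aperture onto a pixel (nearest-neighbour, (11)-(13))."""
    r = R_RD(taus, x, y)
    idx = np.round(r * fs / c).astype(int) % n_rng
    return np.sum(lines[np.arange(taus.size), idx] *
                  np.exp(2j * np.pi * f0 * r / c))

# Two-step structure: focus each sqrt(N)-line sub-aperture, then sum coherently.
sub = int(np.sqrt(N))
def fbp_pixel(x, y):
    return sum(bp_block(data[j*sub:(j+1)*sub], tau[j*sub:(j+1)*sub], x, y)
               for j in range(sub))

target, off = fbp_pixel(x_t, y_t), fbp_pixel(x_t + 500.0, y_t)
assert abs(target) > 5 * abs(off)    # the true pixel focuses coherently
```

At the true pixel, the compensation phase (13) exactly cancels the data phase, so all contributions add in phase; at a displaced pixel the residual phase ramp across the aperture destroys the coherent sum.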
C. Three-Dimensional Imaging

Because the direct signal is used as the matched-filter reference to compress the echo in the range direction, the imaging geometry differs from the conventional one. The position of satellite $S_1$ is $S_1(x_{s1}, y_{s1}, z_{s1})$ and the position of satellite $S_2$ is $S_2(x_{s2}, y_{s2}, z_{s2})$. The position of the synchronization receiver is $A(0, y_A, z_A)$, and the position of the echo receiver is $B(0, y_B, z_B)$. The position of the target on the ground is $T(\tau_T, y, z)$. The geometry of the configuration is shown in Fig. 8.
For track 1:

$$R_{D1}(\tau_T) = \sqrt{(y_{S1} - y_A)^2 + (z_{S1} - z_A)^2 + v^2\tau_T^2} \tag{19}$$

$$R_{R1}(\tau_T, y, z) = \sqrt{(y_B - y)^2 + (z_B - z)^2 + v^2\tau_T^2} \tag{20}$$

$$R_{T1}(y, z) = \sqrt{(y_{S1} - y)^2 + (z_{S1} - z)^2} \tag{21}$$

$$R_{RD1}(\tau_T, y, z) = R_{T1}(y, z) + R_{R1}(\tau_T, y, z) - R_{D1}(\tau_T) = \sqrt{(y_{S1} - y)^2 + (z_{S1} - z)^2} + \sqrt{(y_B - y)^2 + (z_B - z)^2 + v^2\tau_T^2} - \sqrt{(y_{S1} - y_A)^2 + (z_{S1} - z_A)^2 + v^2\tau_T^2}. \tag{22}$$

For track 2:

$$R_{D2}(\tau_T) = \sqrt{(y_{S2} - y_A)^2 + (z_{S2} - z_A)^2 + v^2\tau_T^2} \tag{23}$$

$$R_{R2}(\tau_T, y, z) = \sqrt{(y_B - y)^2 + (z_B - z)^2 + v^2\tau_T^2} \tag{24}$$

$$R_{T2}(y, z) = \sqrt{(y_{S2} - y)^2 + (z_{S2} - z)^2} \tag{25}$$

$$R_{RD2}(\tau_T, y, z) = R_{T2}(y, z) + R_{R2}(\tau_T, y, z) - R_{D2}(\tau_T) = \sqrt{(y_{S2} - y)^2 + (z_{S2} - z)^2} + \sqrt{(y_B - y)^2 + (z_B - z)^2 + v^2\tau_T^2} - \sqrt{(y_{S2} - y_A)^2 + (z_{S2} - z_A)^2 + v^2\tau_T^2}. \tag{26}$$
Combining (22) and (26) yields (27), shown at the bottom of the page. Solving the equations in (27), one can obtain the true $y$ and $z$.

IV. EXPERIMENT RESULTS

To validate the focusing performance of the developed bistatic signal model and processing approaches, the processing was performed on real spaceborne/stationary bistatic SAR data. The data were collected by the proposed double-channel bistatic receiving system, with the L-band spaceborne illuminator working in stripmap mode. To obtain the bistatic 3-D image, two flight experiments were performed, in which the transmitter, i.e., the L-band spaceborne illuminator, was operated at two different incidence angles of 47.6° and 36.6°, with a time interval of seven days.

A. Two-Dimensional Imaging Experiment

Using the proposed signal model and the developed fast time-domain processing approach, the focused bistatic SAR
Fig. 9. (a) Spaceborne monostatic SAR image. (b) Bistatic SAR image processed by the proposed approach. (c) Optical image from Google Earth.
image is shown in Fig. 9(b). For comparison, the corresponding monostatic spaceborne SAR image and an optical image of the processed scene are shown in Fig. 9(a) and (c).
$$\begin{cases} \sqrt{(y_{S1} - y)^2 + (z_{S1} - z)^2} + \sqrt{(y_B - y)^2 + (z_B - z)^2 + v^2\tau_T^2} = R_{RD1}(\tau_T, y, z) + \sqrt{(y_{S1} - y_A)^2 + (z_{S1} - z_A)^2 + v^2\tau_T^2} \\[4pt] \sqrt{(y_{S2} - y)^2 + (z_{S2} - z)^2} + \sqrt{(y_B - y)^2 + (z_B - z)^2 + v^2\tau_T^2} = R_{RD2}(\tau_T, y, z) + \sqrt{(y_{S2} - y_A)^2 + (z_{S2} - z_A)^2 + v^2\tau_T^2} \end{cases} \tag{27}$$
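A minimal numeric sketch of solving the system (27) for $(y, z)$ is given below, using a Newton iteration with a numeric Jacobian. The two-track satellite, receiver, and target coordinates are invented for illustration; in practice the measured bistatic ranges come from the two focused images, and the initial guess would come from the 2-D image geometry:

```python
import numpy as np

# Hypothetical geometry (not the experiment's): two tracks, receivers A and B.
v = 7500.0
S1 = np.array([0.0, -3.0e5, 6.5e5])   # satellite track 1 (y, z) at closest pass
S2 = np.array([0.0, -1.5e5, 6.5e5])   # satellite track 2
A  = np.array([0.0, -50.0, 343.0])    # synchronization receiver
B  = np.array([0.0, 0.0, 343.0])      # echo receiver

def rrd(S, y, z, tau):
    """Eqs. (22)/(26): bistatic range referenced to the direct path."""
    RT = np.hypot(S[1] - y, S[2] - z)
    RR = np.sqrt((B[1] - y)**2 + (B[2] - z)**2 + (v * tau)**2)
    RD = np.sqrt((S[1] - A[1])**2 + (S[2] - A[2])**2 + (v * tau)**2)
    return RT + RR - RD

def solve_yz(rho1, rho2, tau, y0=3000.0, z0=100.0, iters=30):
    """Newton iteration with a forward-difference Jacobian for system (27)."""
    p = np.array([y0, z0])
    for _ in range(iters):
        f = np.array([rrd(S1, p[0], p[1], tau) - rho1,
                      rrd(S2, p[0], p[1], tau) - rho2])
        J = np.empty((2, 2))
        for k in range(2):
            dp = np.zeros(2); dp[k] = 1e-3
            f2 = np.array([rrd(S1, *(p + dp), tau) - rho1,
                           rrd(S2, *(p + dp), tau) - rho2])
            J[:, k] = (f2 - f) / 1e-3
        p = p - np.linalg.solve(J, f)
    return p

# Forward-simulate a target, then recover it from the two range measurements.
y_true, z_true, tau = 4000.0, 120.0, 0.01
rho1 = rrd(S1, y_true, z_true, tau)
rho2 = rrd(S2, y_true, z_true, tau)
y_hat, z_hat = solve_yz(rho1, rho2, tau)
```

The two tracks must have sufficiently different incidence angles for the Jacobian of (27) to be well conditioned, which is exactly why the two acquisitions were made at different incidence angles.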
Fig. 10. Comparison of monostatic and bistatic SAR images. (a) Monostatic spaceborne SAR image. (b) Bistatic SAR image. (c) Optical image from Google Earth.
For the spaceborne/stationary configuration, only the moving platform contributes to the azimuth modulation, whereas the stationary platform introduces a range offset to the range migration trajectories of targets at the same range [5]. The offset is determined by the azimuth position of the different targets with respect to the stationary platform. It is this space-variant offset that makes frequency-domain processing difficult [5]; it can, however, be handled by the fast time-domain imaging algorithm. Because a stationary receiver was used, the illumination pattern of the horn antenna becomes clearly visible. In addition, large shadows are present in the upper-left area of Fig. 9(b) due to the small depression angle of the horn antenna (10°).

In this paper, we further show the differences between the monostatic and bistatic SAR images in the two performed spaceborne/stationary bistatic SAR experiments. For more detail, the common areas in the monostatic and bistatic SAR images, marked by the solid lines, are zoomed and compared with the corresponding optical image in Fig. 10. Fig. 10 shows a zoomed area bearing evidence of the SNR difference between the monostatic and bistatic SAR images. In the bistatic [Fig. 10(b)] and optical [Fig. 10(c)] images, in the region marked by the red capital letter A, the road and crossway are clearly visible.
However, they are not visible in the monostatic spaceborne image [Fig. 10(a)]. This different presentation of the same scene is caused by the differences in the illumination geometry and scattering mechanisms, which are a direct consequence of the different incidence angles and heights of the transmitter and receiver [23]. In the performed experiments, the incidence angle of the spaceborne illuminator remains almost constant along the whole composite swath, whereas the incidence angle of the stationary receiver ranges from 65° to 87°. Another interesting example highlighting the difference between monostatic and bistatic scattering behaviors can be seen in the area marked by the red capital letter B in Fig. 10: a square area is visible in the bistatic SAR and optical images but invisible in the monostatic SAR image.

B. Three-Dimensional Imaging Experiment

Using the proposed 3-D imaging approach, we can obtain the elevation heights from the two bistatic SAR images. The obtained elevation information of the illuminated scene was mapped onto the 2-D bistatic SAR image, as shown in Fig. 11. Fig. 11(a) shows the surface model generated from the elevation estimates (converted to height relative to the reference point at the scene center). In the upper-left and upper-right
A PPENDIX A The approximated close-form solution for (15) can be derived as follows. In (15), the slant range difference can be formulated as
j j RRD (τ ; x n /v, ym ) − RRD τ ; x n j /v, ym j 2 + v 2τ 2 = ym2 + v 2 (τ − x n /v)2 + ym2 + x n2 − r D
2
2 j j ym j + v 2 τ − x n j /v − (a)
+ =
2 j ym j
+
j 2 xn j
2 − r D + v 2τ 2
2
2 j j ym j + v 2 τ − x n j /v ym2 + v 2 (τ − x n /v) −
2 j . (28) + ym2 + x n2 − ym2 + x n j 2
Fig. 11. Bistatic stereoradargrammetry SAR image. (a) Digital elevation model information. (b) Bistatic SAR image.

parts of Fig. 11(a), it can be seen that the height of the small hill can be identified.

V. CONCLUSION

An innovative double-channel hybrid bistatic SAR system for 2-D and 3-D microwave remote sensing has been built and validated by two spaceborne/stationary bistatic experiments. In this paper, we proposed time and phase synchronization strategies and a fast time-domain imaging algorithm. Based on the built instrument and the proposed imaging algorithm, we obtained well-focused bistatic SAR images. The obtained bistatic and monostatic SAR images present the same scene differently, which is caused by the differences in the illumination geometry and the scattering mechanisms. Furthermore, bistatic data can be combined with monostatic data to obtain a highly informative set of multiangle observations. For example, the clear presentation of the road and crossway in the bistatic image will facilitate the detection and recognition of targets based on their characteristic bistatic scattering behavior. Moreover, the clear boundary information of the field facility, as shown in Fig. 10, would be helpful for the segmentation and classification of natural and man-made targets. Based on this system, we will further explore the bistatic scattering behavior and monitor land subsidence and its induced geological hazards with bistatic SAR interferometry and tomography in the future.

For further simplification, we expand (28) in a first-order Taylor series around x_{n_j}^j = x_n as

R_{RD}(\tau; x_n/v, y_m) - R_{RD}(\tau; x_{n_j}^j/v, y_{m_j}^j)
  = \sqrt{y_m^2 + v^2(\tau - x_n/v)^2}
    - \left( \sqrt{(y_{m_j}^j)^2 + v^2(\tau - x_n/v)^2} - \frac{v(\tau - x_n/v)\,(x_{n_j}^j - x_n)}{\sqrt{(y_{m_j}^j)^2 + v^2(\tau - x_n/v)^2}} \right).   (29)

Due to the fact that |y_m - y_{m_j}^j| \ll y_m, one can obtain

\sqrt{(y_{m_j}^j)^2 + v^2(\tau - x_n/v)^2} \approx \sqrt{y_m^2 + v^2(\tau - x_n/v)^2}.   (30)

Thus, (29) can be approximated as

R_{RD}(\tau; x_n/v, y_m) - R_{RD}(\tau; x_{n_j}^j/v, y_{m_j}^j) = \frac{v(\tau - x_n/v)\,(x_{n_j}^j - x_n)}{\sqrt{y_m^2 + v^2(\tau - x_n/v)^2}}.   (31)
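As a numeric sanity check of the linearization in (29)–(31), the following sketch (toy values of our choosing, not the paper's parameters) compares the exact difference of the first bracketed pair in (28) against the first-order term of (31), with y_{m_j}^j = y_m so that (30) holds exactly:

```python
import math

# Spot-check (toy values, ours) of (31): the exact difference of the first
# bracketed pair in (28) vs. its linearization in (x_{n_j} - x_n).
v, tau = 7000.0, 0.3          # platform speed (m/s) and azimuth time (s), assumed
y_m = 6.0e5                   # range coordinate (m), assumed
x_n, x_nj = 1000.0, 1012.0    # pixel azimuth position and a nearby grid point (m)

def g(x):
    # first square-root term of (28)
    return math.sqrt(y_m**2 + v**2 * (tau - x / v)**2)

exact = g(x_n) - g(x_nj)                               # exact difference
linear = v * (tau - x_n / v) * (x_nj - x_n) / g(x_n)   # first-order term, cf. (31)

# the residual is second order in (x_nj - x_n), hence small for close grid points
assert abs(exact - linear) < 1e-2 * abs(exact)
```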
Substituting (31) into (15) yields

\min_{m_j, n_j} \left\{ \frac{v(\tau_c - x_n/v)\,(x_{n_j}^j - x_n)}{\sqrt{(y_m)^2 + v^2(\tau_c - x_n/v)^2}} \right\}, \quad n_j = 0, 1, \ldots, \sqrt{N}\alpha_1, \; m_j = 0, 1, 2, \ldots, M\alpha_2.   (32)

Furthermore, (32) can be expressed as

\min_{m_j, n_j} \left\{ \frac{v(\tau_c - x_n/v)}{\sqrt{(y_m)^2 + v^2(\tau_c - x_n/v)^2}} \times (x_{n_j}^j - x_n) \right\}, \quad n_j = 0, 1, \ldots, \sqrt{N}\alpha_1, \; m_j = 0, 1, 2, \ldots, M\alpha_2.   (33)
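At fixed subaperture j, the minimization in (32)–(33) amounts to snapping the pixel index to the nearest point of a coarse azimuth grid with spacing \sqrt{N}/\alpha_1, which the rounding rule of (34) below computes in closed form. A minimal sketch (variable names and toy values are ours, not the paper's):

```python
import math

# Toy check that closed-form rounding, cf. (34), picks the coarse-grid point
# minimizing |x_{n_j} - x_n|, which is what (32)-(33) require at fixed j.
N = 256          # number of azimuth pixels (assumed)
alpha1 = 2.0     # azimuth oversampling factor (assumed)
step = math.sqrt(N) / alpha1     # coarse-grid spacing sqrt(N)/alpha_1

def nearest_by_search(n, j):
    # brute-force minimization over the shifted coarse grid k*step + j/alpha_1
    candidates = [k * step + j / alpha1 for k in range(2 * N)]
    return min(candidates, key=lambda nj: abs(nj - n))

def nearest_by_rounding(n, j):
    # closed-form equivalent, cf. (34)
    return round((n - j / alpha1) / step) * step + j / alpha1

for n in (0, 37, 100, 255):
    for j in (0, 1, 3):
        assert abs(nearest_by_search(n, j) - nearest_by_rounding(n, j)) < 1e-9
```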
Thus, we can obtain

n_j = \mathrm{round}\left( \frac{n - j/\alpha_1}{\sqrt{N}/\alpha_1} \right) \cdot \frac{\sqrt{N}}{\alpha_1} + \frac{j}{\alpha_1}.   (34)

Based on (29) and (30), we have

R_{RD}(\tau; x_n/v, y_m) - R_{RD}(\tau; x_{n_j}^j/v, y_{m_j}^j)
  = \sqrt{y_m^2 + v^2(\tau - x_n/v)^2}
    - \left( \sqrt{(y_{m_j}^j)^2 + v^2(\tau - x_n/v)^2} - \frac{v(\tau - x_n/v)\,(x_{n_j}^j - x_n)}{\sqrt{y_m^2 + v^2(\tau - x_n/v)^2}} \right).   (35)

Using the same method, we expand (28) in a first-order Taylor series around y_{m_j}^j = y_m as

R_{RD}(\tau; x_n/v, y_m) - R_{RD}(\tau; x_{n_j}^j/v, y_{m_j}^j)
  = \sqrt{y_m^2 + v^2(\tau - x_n/v)^2} + \frac{v(\tau - x_n/v)\,(x_{n_j}^j - x_n)}{\sqrt{y_m^2 + v^2(\tau - x_n/v)^2}}
    - \left( \sqrt{y_m^2 + v^2(\tau - x_n/v)^2} + \frac{y_m\,(y_{m_j}^j - y_m)}{\sqrt{y_m^2 + v^2(\tau - x_n/v)^2}} \right)
  = \frac{v(\tau - x_n/v)\,(x_{n_j}^j - x_n) - y_m\,(y_{m_j}^j - y_m)}{\sqrt{y_m^2 + v^2(\tau - x_n/v)^2}}.   (36)

Letting

R_{RD}(\tau; x_n/v, y_m) - R_{RD}(\tau; x_{n_j}^j/v, y_{m_j}^j) = 0   (37)

yields

m_j = m - \frac{(v\tau - x_{n_j}^j)\,\rho_a}{y_m\,\rho_r}\,(n_j - n).   (38)

In summary

\begin{cases}
n_j = \mathrm{round}\left( \dfrac{n - j/\alpha_1}{\sqrt{N}/\alpha_1} \right) \cdot \dfrac{\sqrt{N}}{\alpha_1} + \dfrac{j}{\alpha_1} \\[2mm]
m_j = m - \dfrac{(v\tau - x_{n_j}^j)\,\rho_a}{y_m\,\rho_r}\,(n_j - n).
\end{cases}   (39)

APPENDIX B

Substituting (11) into (18) yields

\mathrm{image}(n, m) = \sum_{i=0}^{N-1} dr\bigl(i, \mathrm{index}(\tau_i; x_n, y_m)\bigr)\,\phi_c(\tau_i; x_n, y_m)
  = \sum_{j=0}^{\sqrt{N}-1} \sum_{i=0}^{\sqrt{N}-1} dr\bigl(j, i, \mathrm{index}(\tau_i; x_n, y_m)\bigr)\,\phi_c(\tau_i; x_n, y_m)   (40)

where

\phi(\tau_i, \tau_{c_j}; x_{n_j}^j, y_{m_j}^j, n_j, m_j) = \phi_c(\tau_i; x_{n_j}^j, y_{m_j}^j)\,\phi_s(\tau_{c_j}; n_j, m_j).   (41)

Since (x_{n_j}^j, y_{m_j}^j) is the approximate integer solution of (37), the following condition can generally be satisfied:

\left| R_{RD}(\tau; x_n/v, y_m) - R_{RD}(\tau; x_{n_j}^j/v, y_{m_j}^j) \right| \le \rho_r.   (42)

Hence

dr\bigl(j, i, \mathrm{index}(\tau_i; x_n, y_m)\bigr) \approx dr\bigl(j, i, \mathrm{index}(\tau_i; x_{n_j}^j, y_{m_j}^j)\bigr).   (43)

Substituting (11) and (13) into (41) yields

\phi(\tau_i, \tau_{c_j}; x_{n_j}^j, y_{m_j}^j, n_j, m_j)
  = \phi_c(\tau_i; x_{n_j}^j, y_{m_j}^j) \cdot \phi_s(\tau_{c_j}; n_j, m_j)
  = \exp\bigl\{ j 2\pi f_0 R_{RD}(\tau_i; x_{n_j}^j/v, y_{m_j}^j)/c \bigr\}
    \cdot \exp\bigl\{ j 2\pi f_0 \bigl[ R_{RD}(\tau_{c_j}; x_n/v, y_m) - R_{RD}(\tau_{c_j}; x_{n_j}^j/v, y_{m_j}^j) \bigr]/c \bigr\}
  = \exp\bigl\{ j 2\pi f_0 \bigl[ R_{RD}(\tau_i; x_{n_j}^j/v, y_{m_j}^j) - R_{RD}(\tau_{c_j}; x_{n_j}^j/v, y_{m_j}^j) + R_{RD}(\tau_{c_j}; x_n/v, y_m) \bigr]/c \bigr\}
  = \exp\bigl\{ j 2\pi f_0 R_{RD}(\tau_i, \tau_{c_j}; x_{n_j}^j/v, y_{m_j}^j, x_n/v, y_m)/c \bigr\}   (44)

where

R_{RD}(\tau_i, \tau_{c_j}; x_{n_j}^j/v, y_{m_j}^j, x_n/v, y_m)
  = R_{RD}(\tau_i; x_{n_j}^j/v, y_{m_j}^j) - R_{RD}(\tau_{c_j}; x_{n_j}^j/v, y_{m_j}^j) + R_{RD}(\tau_{c_j}; x_n/v, y_m).   (45)

Using (14) in (45) gives

R_{RD}(\tau_i, \tau_{c_j}; x_{n_j}^j/v, y_{m_j}^j, x_n/v, y_m)
  = \sqrt{(y_{m_j}^j)^2 + v^2(\tau_i - x_{n_j}^j/v)^2} + \sqrt{(y_{m_j}^j)^2 + (x_{n_j}^j)^2 + v^2\tau_i^2} - r_D
    - \left[ \sqrt{(y_{m_j}^j)^2 + v^2(\tau_{c_j} - x_{n_j}^j/v)^2} + \sqrt{(y_{m_j}^j)^2 + (x_{n_j}^j)^2 + v^2\tau_{c_j}^2} - r_D \right]
    + \sqrt{(y_m)^2 + v^2(\tau_{c_j} - x_n/v)^2} + \sqrt{(y_m)^2 + (x_n)^2 + v^2\tau_{c_j}^2} - r_D
  = \sqrt{(y_{m_j}^j)^2 + v^2(\tau_i - x_{n_j}^j/v)^2} - \sqrt{(y_{m_j}^j)^2 + v^2(\tau_{c_j} - x_{n_j}^j/v)^2} + R_{RD}(\tau_i; x_n/v, y_m).   (46)
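The first-order expansion in (47) below relies on the square-root difference being well approximated by its linearization around \tau_i = \tau_{c_j}. A quick numeric spot-check with toy values (our choices, not the paper's geometry):

```python
import math

# Numeric spot-check (toy values, ours) of the linearization used in (47):
# sqrt(y^2 + v^2 (t - x/v)^2) differs between t_i and t_c by approximately
# v * (v*t_c - x) * (t_i - t_c) / sqrt(y^2 + v^2 (t_c - x/v)^2).
v, y, x = 7000.0, 6.0e5, 1.2e3   # speed (m/s), range (m), azimuth position (m)
t_c = 0.400                       # subaperture center time tau_cj (s)
t_i = 0.401                       # a pulse time tau_i close to the center (s)

def R(t):
    return math.sqrt(y**2 + v**2 * (t - x / v)**2)

exact = R(t_i) - R(t_c)                            # sqrt-difference in (46)
linear = v * (v * t_c - x) * (t_i - t_c) / R(t_c)  # first-order term, cf. (47)

# the residual is second order in (t_i - t_c), hence small within a subaperture
assert abs(exact - linear) < 1e-2 * abs(exact)
```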
Equation (46) can be simplified by expanding it in a first-order Taylor series around \tau_i = \tau_{c_j}

R_{RD}(\tau_i, \tau_{c_j}; x_{n_j}^j/v, y_{m_j}^j, x_n/v, y_m)
  = R_{RD}(\tau_i; x_n/v, y_m) + \frac{v\,(v\tau_{c_j} - x_{n_j}^j)\,(\tau_i - \tau_{c_j})}{\sqrt{(y_{m_j}^j)^2 + v^2(\tau_{c_j} - x_{n_j}^j/v)^2}}
  \approx R_{RD}(\tau_i; x_n/v, y_m).   (47)

The substitution of (47) into (44) leads to

\phi(\tau_i, \tau_{c_j}; x_{n_j}^j, y_{m_j}^j, n_j, m_j) \approx \exp\bigl\{ j 2\pi f_0 R_{RD}(\tau_i; x_n/v, y_m)/c \bigr\} = \phi_c(\tau_i; x_n, y_m).   (48)
Furthermore, using (43) and (48), one obtains

\widehat{\mathrm{image}}(n, m) \approx \sum_{j=0}^{\sqrt{N}-1} \sum_{i=0}^{\sqrt{N}-1} dr\bigl(j, i, \mathrm{index}(\tau_i; x_{n_j}^j, y_{m_j}^j)\bigr)\,\phi_c(\tau_i; x_n, y_m).   (49)

Using the traditional BP algorithm, the azimuth focusing for the target at (x_n, y_m) can be formulated as

\mathrm{image}(n, m) = \sum_{i=0}^{N-1} dr\bigl(i, \mathrm{index}(\tau_i; x_n, y_m)\bigr)\,\phi_c(\tau_i; x_n, y_m)
  = \sum_{j=0}^{\sqrt{N}-1} \sum_{i=0}^{\sqrt{N}-1} dr\bigl(j, i, \mathrm{index}(\tau_i; x_n, y_m)\bigr)\,\phi_c(\tau_i; x_n, y_m).   (50)

Thus

\widehat{\mathrm{image}}(n, m) \approx \mathrm{image}(n, m).   (51)
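The chain (40) → (49) → (51) can be illustrated with a toy back-projection sum (our construction with assumed parameters, not the paper's code or geometry): simulate a point target, focus it with the direct N-pulse sum of (50), then regroup the same sum into \sqrt{N} subapertures of \sqrt{N} pulses as in (40). With the data indices kept exact, i.e., condition (42)/(43) satisfied by construction, the regrouped sum reproduces the direct one, which is the content of (51); the paper's speedup additionally comes from reusing coarse-grid samples and subaperture phases.

```python
import cmath
import math

# Toy illustration (our construction, assumed parameters) of regrouping the
# direct back-projection sum (50) into sqrt(N) x sqrt(N) subaperture form (40),
# for a simulated point target located exactly at the focused pixel (x_n, y_m).
N = 64                            # pulses; sqrt(N) = 8 subapertures of 8 pulses
f0, c, v = 9.65e9, 3.0e8, 7000.0  # carrier (Hz), light speed (m/s), speed (m/s)
x_n, y_m = 50.0, 6.0e5            # focused pixel position (m)

def R(t):
    # simplified two-way hyperbolic range history, a stand-in for R_RD
    return 2.0 * math.sqrt(y_m**2 + v**2 * (t - x_n / v)**2)

taus = [(i - N / 2) * 1e-3 for i in range(N)]                    # slow time (s)
dr = [cmath.exp(-2j * math.pi * f0 * R(t) / c) for t in taus]    # target echo phase
phi_c = [cmath.exp(2j * math.pi * f0 * R(t) / c) for t in taus]  # matched phase

# (50): direct azimuth sum over all N pulses
direct = sum(d * p for d, p in zip(dr, phi_c))

# (40)/(49): the same sum regrouped into sqrt(N) subapertures; per-pulse data is
# reused unchanged here, so only the grouping differs
sq = int(math.sqrt(N))
grouped = sum(dr[j * sq + i] * phi_c[j * sq + i]
              for j in range(sq) for i in range(sq))

# coherent gain of N at the target pixel, identical for both orderings, cf. (51)
assert abs(direct - N) < 1e-6
assert abs(grouped - direct) < 1e-9
```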
ACKNOWLEDGMENT

The authors would like to thank their colleagues, Mr. G. Liu, Mr. X. Wu, Mr. D. Xie, and Dr. H. Shun, for their important contributions to this paper.
REFERENCES
[1] R. Wang, Y. K. Deng, O. Loffeld, H. Nies, I. Walterscheid, T. Espeter, J. Klare, and J. H. G. Ender, “Processing the azimuth-variant bistatic SAR data by using monostatic imaging algorithms based on 2-D principle of stationary phase,” IEEE Trans. Geosci. Remote Sens., vol. 49, no. 10, pp. 3504–3520, Oct. 2011.
[2] O. Loffeld, H. Nies, V. Peters, and S. Knedlik, “Models and useful relations for bistatic SAR processing,” IEEE Trans. Geosci. Remote Sens., vol. 42, no. 10, pp. 2031–2038, Oct. 2004.
[3] G. Krieger and A. Moreira, “Spaceborne bi- and multistatic SAR: Potential and challenges,” IET Radar Sonar Navigat., vol. 153, no. 3, pp. 184–198, Jun. 2006.
[4] D. Massonnet, “Capabilities and limitations of the interferometric cartwheel,” IEEE Trans. Geosci. Remote Sens., vol. 39, no. 3, pp. 506–520, Mar. 2001.
[5] R. Wang, O. Loffeld, Y. L. Neo, H. Nies, I. Walterscheid, T. Espeter, J. Klare, and J. H. G. Ender, “Focusing bistatic SAR data in airborne/stationary configuration,” IEEE Trans. Geosci. Remote Sens., vol. 48, no. 1, pp. 452–465, Jan. 2010.
[6] G. Krieger, A. Moreira, H. Fiedler, I. Hajnsek, M. Werner, M. Younis, and M. Zink, “TanDEM-X: A satellite formation for high-resolution SAR interferometry,” IEEE Trans. Geosci. Remote Sens., vol. 45, no. 11, pp. 3317–3341, Nov. 2007.
[7] M. Cherniakov, Bistatic Radar: Emerging Technology. New York, NY, USA: Wiley, 2007.
[8] A. Rucci, A. Ferretti, A. Monti Guarnieri, and F. Rocca, “Sentinel 1 SAR interferometry applications: The outlook for submillimeter measurements,” Remote Sens. Environ., vol. 120, pp. 156–163, Feb. 2012.
[9] F. Bovenga, J. Wasowski, D. O. Nitti, R. Nutricato, and T. Chiaradia, “Using COSMO/SkyMed X-band and ENVISAT C-band SAR interferometry for landslides analysis,” Remote Sens. Environ., vol. 119, pp. 172–285, Apr. 2012.
[10] T. Strozzi, P. Teatini, and L. Tosi, “TerraSAR-X reveals the impact of the mobile barrier works on Venice coastland stability,” Remote Sens. Environ., vol. 113, no. 12, pp. 2682–2688, Dec. 2009.
[11] R. Lanari, F. Casu, M. Manzo, and P. Lundgren, “Applications of the SBAS-DInSAR technique to fault creep: A case study of the Hayward fault, California,” Remote Sens. Environ., vol. 109, no. 1, pp. 20–28, Jul. 2007.
[12] I. Walterscheid, T. Espeter, A. R. Brenner, J. Klare, J. H. G. Ender, H. Nies, R. Wang, and O. Loffeld, “Bistatic SAR experiments with PAMIR and TerraSAR-X: Setup, processing, and image results,” IEEE Trans. Geosci. Remote Sens., vol. 48, no. 8, pp. 3268–3279, Aug. 2010.
[13] H. Nies, F. Behner, S. Reuter, O. Loffeld, and R. Wang, “SAR experiments in a bistatic hybrid configuration for generating PolInSAR data with TerraSAR-X illumination,” in Proc. 8th Eur. Conf. Synth. Aperture Radar, Jun. 2010, pp. 1–4.
[14] F. Behner and S. Reuter, “HITCHHIKER: Hybrid bistatic high resolution SAR experiment using a stationary receiver and TerraSAR-X transmitter,” in Proc. 8th Eur. Conf. Synth. Aperture Radar, Jun. 2010, pp. 1–4.
[15] A. Renga and A. Moccia, “Performance of stereoradargrammetric methods applied to spaceborne monostatic-bistatic synthetic aperture radar,” IEEE Trans. Geosci. Remote Sens., vol. 47, no. 2, pp. 544–560, Feb. 2009.
[16] P. Lopez-Dekker, J. J. Mallorqui, P. Serra-Morales, and J. Sanz-Marcos, “Phase synchronization and Doppler centroid estimation in fixed receiver bistatic SAR systems,” IEEE Trans. Geosci. Remote Sens., vol. 46, no. 11, pp. 3459–3471, Nov. 2008.
[17] Y. L. Neo, F. H. Wong, and I. G. Cumming, “A two-dimensional spectrum for bistatic SAR processing using series reversion,” IEEE Geosci. Remote Sens. Lett., vol. 4, no. 1, pp. 93–96, Jan. 2007.
[18] Y. L. Neo, F. Wong, and I. G. Cumming, “Processing of azimuth-invariant bistatic SAR data using the range Doppler algorithm,” IEEE Trans. Geosci. Remote Sens., vol. 46, no. 1, pp. 14–21, Jan. 2008.
[19] N. J. Willis, Bistatic Radar. Norwood, MA, USA: Artech House, 1991.
[20] L. M. H. Ulander, H. Hellsten, and G. Stenstrom, “Synthetic-aperture radar processing using fast factorized back-projection,” IEEE Trans. Aerosp. Electron. Syst., vol. 39, no. 3, pp. 760–776, Jul. 2003.
[21] F. H. Wong and T. S. Yeo, “New applications of nonlinear chirp scaling in SAR data processing,” IEEE Trans. Geosci. Remote Sens., vol. 39, no. 5, pp. 946–953, May 2001.
[22] M. D. Desai and W. K. Jenkins, “Convolution back projection image reconstruction for spotlight mode synthetic aperture radar,” IEEE Trans. Image Process., vol. 1, no. 4, pp. 505–517, Oct. 1992.
[23] M. Rodriguez-Cassola, S. V. Baumgartner, G. Krieger, and A. Moreira, “Bistatic TerraSAR-X/F-SAR spaceborne-airborne SAR experiment: Description, data processing, and results,” IEEE Trans. Geosci. Remote Sens., vol. 48, no. 2, pp. 781–794, Feb. 2010.
[24] M. Younis, S. Huber, A. Patyuchenko, and F. Bordoni, “Performance comparison of reflector- and planar-antenna based digital beam-forming SAR,” Int. J. Antennas Propag., vol. 2009, Art. no. 614931, pp. 1–13, Oct. 2009.
Robert Wang (M’07–SM’12) received the B.S. degree in control engineering from the University of Henan, Kaifeng, China, in 2002, and the Dr. Eng. degree from the Graduate University of Chinese Academy of Sciences, Beijing, China, in 2007. He joined the Center for Sensorsystems, University of Siegen, Siegen, Germany, in 2007. He has been involved in the following projects: the TerraSAR-X/PAMIR hybrid bistatic SAR experiment, the PAMIR/stationary bistatic SAR experiment, the PAMIR/stationary bistatic SAR experiment with nonsynchronized oscillators, 3-D/4-D SAR tomography for high-resolution information extraction and monitoring of the Earth’s dynamics, and millimeter-wave FMCW SAR data processing. In addition, he has been involved in several SAR projects for Fraunhofer FHR. He is the author of a tutorial entitled “Results and progresses of advanced bistatic SAR experiments” presented at the European Radar Conference 2009 and a co-author of a tutorial entitled “Progress in bistatic SAR concepts and algorithms” presented at EUSAR 2008. He has authored more than 90 papers since 2003, of which more than 30 have appeared in peer-reviewed journals. His current research interests include monostatic and bistatic SAR imaging, multibaseline monostatic and bistatic SAR interferometry, high-resolution spaceborne SAR systems and data processing, airborne SAR motion compensation, FMCW SAR systems, and millimeter-wave SAR systems. Dr. Wang has been a Research Fellow with the Spaceborne Microwave Remote Sensing System Department, Institute of Electronics, Chinese Academy of Sciences, since 2011, where he is currently funded by the “100 Talents Programme of The Chinese Academy of Sciences.” Since 2012, he has been a Co-Principal Investigator of the Helmholtz-CAS Joint Research Group concerning Space-borne Microwave Remote Sensing for Prevention and Forensic Analysis of Natural Hazards and Extreme Events.
He has contributed to invited sessions on bistatic SAR at the European Conference on Synthetic Aperture Radar (EUSAR) in 2008 and 2010 and served as a Session Chair at EUSAR 2012.
Yunkai Deng (M’11) received the M.S. degree in electrical engineering from the Beijing Institute of Technology, Beijing, China, in 1993. He joined the Institute of Electronics, Chinese Academy of Sciences (IECAS), Beijing, in 1993, where he worked on antenna design, microwave circuit design, and spaceborne/airborne SAR technology. He has been the Leader of several spaceborne/airborne SAR programs and has developed key technologies for spaceborne/airborne SAR. He is currently a Research Scientist, a member of the scientific board, and the Director of the Spaceborne Microwave Remote Sensing System Department, IECAS. His current research interests include spaceborne/airborne SAR technology for advanced modes, multifunctional radar imaging, and microwave circuit design. Since 2012, he has been a Principal Investigator of the Helmholtz-CAS Joint Research Group concerning Space-borne Microwave Remote Sensing for Prevention and Forensic Analysis of Natural Hazards and Extreme Events. He has authored more than 100 papers since 2002, of which more than 40 have appeared in peer-reviewed journals.
Zhimin Zhang received the B.S. degree in electrical engineering from the Beijing Institute of Technology, Beijing, China, in 1992, and the M.S. degree from the Graduate University of Chinese Academy of Sciences, Beijing, China, in 1995. He joined the Institute of Electronics, Chinese Academy of Sciences, in 1995, where he worked on radar system design and signal processing. His current research interests include spaceborne/airborne SAR technology for advanced modes, real-time signal processing, and multifunctional radar imaging.
Yunfeng Shao received the Bachelor’s degree from Shanghai Jiaotong University, Shanghai, China, in 2009. In September 2009, he enrolled at the Institute of Electronics, Chinese Academy of Sciences (IECAS), where he is currently working toward the Ph.D. degree with the Department of Space Microwave Remote Sensing System. His current research interests include bistatic SAR imaging algorithms, stereoscopic bistatic SAR, bistatic InSAR, and TomoSAR processing.
Jixiang Hou received the Bachelor’s degree from Southeast University, Nanjing, China, in 2009. He is currently pursuing the Master’s degree with the Department of Space Microwave Remote Sensing System, Institute of Electronics, Chinese Academy of Sciences, Beijing, China. His current research interests include synchronization in bistatic SAR, SAR system design, and high-speed digital signal processing.
Gang Liu was born in Shandong, China, on May 15, 1986. He received the B.S. degree in electronic and information engineering from Sichuan University, Chengdu, China, in 2009. He is currently pursuing the Ph.D. degree in electronic and information engineering with the Department of Space Microwave Remote Sensing System, Institute of Electronics, Chinese Academy of Sciences, Beijing, China. His current research interests include interferometric synthetic aperture radar signal processing.
Xiayi Wu received the B.S. degree in automatic test and control and the M.S. degree from the Harbin Institute of Technology, Harbin, China, in 1999 and 2007, respectively. He joined the Institute of Electronics, Chinese Academy of Sciences, Beijing, China, in 1999, where he worked on auto-testing system design and signal processing. His current research interests include spaceborne/airborne SAR technology for advanced modes and radar signal processing.