Frequency domain algorithm for bistatic SAR

Vincent Giroux (1), Hubert Cantalloube (1), Franck Daout (2)

(1) ONERA, Chemin de la Hunière, 91761 Palaiseau CEDEX, France; tel: +33 1 69 93 62 04; email: [email protected], [email protected]
(2) GEA, University Paris 10, PST Ville d'Avray, 1 chemin des Vallières, 92410 Ville d'Avray, France; email: [email protected]

Vincent Giroux's PhD thesis is funded by CNES (Centre National d'Études Spatiales).
Abstract

Bistatic SAR uses a transmitter and a receiver flying on distinct platforms. There are two main configuration types. In the time invariant configuration, the velocity vectors are identical and parallel to the ground; thus the equidistance lines and the lines of equal radial velocity are simply translated along the trajectory as time grows. In the non time invariant configuration, the velocity vectors are not equal. There are two principal kinds of algorithms for computing (bistatic) SAR images: time domain synthesis, such as fast back projection, and frequency domain synthesis, such as the ω-k algorithm. The latter is more computationally efficient; in particular, in the monostatic case, it is used for computing images with wide bandwidth. In the bistatic case, the ω-k algorithm was proposed for the time invariant configuration; the main idea is to use a coordinate system in which the expression of the phase is linear. In this article, we adapt this idea to the non time invariant case.
1 Introduction

The principle of the ω-k algorithm is to process the information in the frequency domain. The azimuth matched filter (which performs the aperture synthesis) consists of an original phase shift followed by a mapping from the data frequency domain to the image frequency domain. In the time invariant case, [1] and [2] propose to build the image in a coordinate system different from the cylindrical coordinate system used in the monostatic case. That coordinate system differs along only one dimension (using the fact that the velocity vectors are equal). In the non time invariant case (see figure 1), we need a coordinate system that differs on both axes; it depends on the average squint angle. In the first section, we present the squinted coordinate system used to build the image. In the second section, we show that this coordinate system allows the image to be computed efficiently. In the third section the algorithm is summarised, and in the last section we present an application of the algorithm to a non time invariant case.

2 Squinted coordinate system
The aim of this section is to present the squinted coordinate system in which we process the ω-k algorithm. In the monostatic case, the squinted coordinate system of a point P is defined by projecting P onto the trajectory along the average squint angle θ0: this gives an azimuth and a distance (see figure 2). In the monostatic case, the image can be built in any squinted reference. In the remainder of this article, we call C0 = 2 cos(θ0) the squint.
Figure 1: bistatic configuration for the non time invariant case. The imaged zone is in front of the receiver.

Figure 2: monostatic case: representation of the different coordinate systems (cylindrical, squinted with C0 = 2 cos(θ0), and Cartesian).
Now, we fit this idea to the bistatic case. Let us assume that the ground is a plane denoted Σ. We denote δ(P, u) the sum of the distances from a point P of the ground to the transmitter and to the receiver at the long time u, which corresponds to the displacement of both platforms (though named long time, u is a distance along the trajectory). (x, y) is a Cartesian coordinate system of Σ. We can evaluate δ(P, u) and its derivatives in x, y and u in this Cartesian coordinate system (we write ∂u· for the partial derivative ∂·/∂u). If we denote C0 the medium squint, a new coordinate system (α, β) of Σ can be defined by:

∂u δ(P, β(P)) = −C0,   α(P) = δ(P, β(P)),   P ∈ Σ    (1)
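Under the rectilinear uniform trajectory assumption, equation (1) can be solved numerically point by point. The sketch below is a minimal illustration, not the paper's implementation: the trajectories, the point P and the bracketing interval given to `brentq` are assumptions, loosely inspired by the simulation of section 5.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative rectilinear uniform trajectories (assumed values; unit
# direction vectors, so u is a distance along each trajectory).
T0 = np.array([-1000.0, 0.0, 2000.0])               # transmitter start
dT = np.array([1.0, 0.0, 0.0])                      # transmitter direction
R0 = np.array([1000.0, 0.0, 1000.0])                # receiver start
dR = np.array([0.0, 250.0, -10.0]) / np.hypot(250.0, 10.0)

def delta(P, u):
    """delta(P, u): sum of the distances from P to both platforms."""
    return (np.linalg.norm(P - (T0 + u * dT))
            + np.linalg.norm(P - (R0 + u * dR)))

def d_delta_du(P, u, h=1e-2):
    """Central finite difference of delta with respect to u."""
    return (delta(P, u + h) - delta(P, u - h)) / (2.0 * h)

def squinted_coords(P, C0, u_lo=0.0, u_hi=4.0e4):
    """Equation (1): beta solves d_u delta(P, beta) = -C0, then alpha."""
    beta = brentq(lambda u: d_delta_du(P, u) + C0, u_lo, u_hi)
    return delta(P, beta), beta

P = np.array([11000.0, 0.0, 0.0])                   # a ground point of Sigma
alpha, beta = squinted_coords(P, C0=0.9)
```

With this geometry, α lands near 2.1e4 m, consistent with the range axis of the figures in section 5.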
There are two important questions:
- given a point P, is there a unique couple (α, β)?
- conversely, given a couple (α, β), can we find a unique point P?

If we assume rectilinear uniform trajectories, with possibly different velocity vectors, the direct problem has a unique solution. However, even in the monostatic case, the solutions of the inverse problem are not unique: there is the left-right ambiguity. In the following, we assume that the correspondence between P and (α, β) is one to one in the region of interest (e.g. the intersection of the antenna footprints). Thus, we can express the Jacobian of the coordinate change from (x, y) to (α, β). We define Jxy(P) and Jαβ(P):

Jxy(P) = [∂α x(P), ∂α y(P); ∂β x(P), ∂β y(P)]    (2)

Jαβ(P) = [∂x α(P), ∂y α(P); ∂x β(P), ∂y β(P)]    (3)

Jαβ(P) can be evaluated by differentiating (1), and Jxy(P) by inverting Jαβ(P).
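In practice Jαβ can also be obtained by central finite differences of (α, β) with respect to (x, y). A hedged sketch on an illustrative geometry (assumed values, loosely inspired by section 5); note that with the layouts written in (2) and (3), Jxy is the transposed inverse of Jαβ:

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative geometry (assumed values, cf. section 5).
T0, dT = np.array([-1000.0, 0.0, 2000.0]), np.array([1.0, 0.0, 0.0])
R0 = np.array([1000.0, 0.0, 1000.0])
dR = np.array([0.0, 250.0, -10.0]) / np.hypot(250.0, 10.0)
C0 = 0.9

def delta(P, u):
    return (np.linalg.norm(P - (T0 + u * dT))
            + np.linalg.norm(P - (R0 + u * dR)))

def alpha_beta(x, y):
    """(alpha, beta) of the ground point (x, y, 0), from equation (1)."""
    P = np.array([x, y, 0.0])
    g = lambda u: (delta(P, u + 1e-2) - delta(P, u - 1e-2)) / 2e-2 + C0
    b = brentq(g, 0.0, 4.0e4)
    return np.array([delta(P, b), b])

def J_alphabeta(x, y, h=1.0):
    """Equation (3) by central finite differences:
    [[d_x alpha, d_y alpha], [d_x beta, d_y beta]]."""
    cx = (alpha_beta(x + h, y) - alpha_beta(x - h, y)) / (2 * h)
    cy = (alpha_beta(x, y + h) - alpha_beta(x, y - h)) / (2 * h)
    return np.column_stack([cx, cy])

Jab = J_alphabeta(11000.0, 0.0)
# With the layouts of (2) and (3), J_xy is the transposed inverse of Jab.
Jxy = np.linalg.inv(Jab).T

# Chain rule check data: since alpha(P) = delta(P, beta(P)),
# d_x alpha = d_x delta - C0 * d_x beta, all evaluated at u = beta(P).
P = np.array([11000.0, 0.0, 0.0])
b = alpha_beta(11000.0, 0.0)[1]
ddx = (delta(P + [1.0, 0.0, 0.0], b) - delta(P - [1.0, 0.0, 0.0], b)) / 2.0
```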
3 Expression of the spectral signal

The aim of this section is to show how we can put the spectral signal into a form adapted to the algorithm, within the coordinate system defined in the previous section. The spectral signal can be evaluated by:

S̃(k, ku) = ∫_{P∈Σ} ∫_{u∈Li} σ(P) · G · exp(−i k φ(P, u, C)) du dP    (4)

where σ(P) is the reflectivity of the point P, k the range spatial frequency, ku the azimuth spatial frequency (Doppler), C = ku/k, φ(P, u, C) = δ(P, u) + C u, and Li the integration path. G is an amplitude term: we assume it constant and equal to 1. In all the following (especially for the stationary phase theorem), we assume that all amplitude terms are constant, i.e. that the antenna patterns and the propagation are already compensated for. The stationary phase theorem yields:

S̃(k, ku) = ∫_{P∈Σ} σ(P) exp(−i k φ(P, u∗(P, C), C)) dP    (5)

with u∗(P, C) defined by the equation:

∂u δ(P, u∗(P, C)) = −C    (6)
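A quick numerical illustration of the stationary point: u∗(P, C) from equation (6) is where φ(P, ·, C) is flat to first order in u. The geometry and the solver bounds below are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative geometry (assumed values, cf. section 5).
T0, dT = np.array([-1000.0, 0.0, 2000.0]), np.array([1.0, 0.0, 0.0])
R0 = np.array([1000.0, 0.0, 1000.0])
dR = np.array([0.0, 250.0, -10.0]) / np.hypot(250.0, 10.0)

def delta(P, u):
    return (np.linalg.norm(P - (T0 + u * dT))
            + np.linalg.norm(P - (R0 + u * dR)))

def phi(P, u, C):
    """phi(P, u, C) = delta(P, u) + C u (the phase, up to the factor k)."""
    return delta(P, u) + C * u

def u_star(P, C):
    """Equation (6): the stationary point of phi in u, by bracketing."""
    g = lambda u: (delta(P, u + 1e-2) - delta(P, u - 1e-2)) / 2e-2 + C
    return brentq(g, 0.0, 4.0e4)

P = np.array([11000.0, 0.0, 0.0])
us = u_star(P, 0.9)
# phi is stationary at u*: stepping away changes it only at second order.
```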
Before the inverse Fourier transform back to image space, we need to express the phase of S̃ as a linear function of the (α, β) coordinates:

S̃(k, ku) = ∫∫_{α,β} σ(P) exp(−i (kα(k, ku) α + kβ(k, ku) β + ψ0(k, ku))) dα dβ    (7)

or, by using the fact that the phase is proportional to k:

S̃(k, ku) = ∫∫_{α,β} σ(P) exp(−i k (Cα(C) α + Cβ(C) β + φ0(C))) dα dβ    (8)

with kα = k·Cα(C), kβ = k·Cβ(C) and ψ0(k, ku) = k·φ0(C). Hence, we can perform a mapping from the (k, ku) domain to the (kα, kβ) domain. To determine this linear mapping, we expand φ(P, u∗(P, C), C) as a Taylor series around P0, the point of focus (e.g. the centre of the swath). Using equation (6), the phase can be written:

φ(P, u∗(P, C), C) = δ(P0, u∗(P0, C)) + C u∗(P0, C)
    + ∂α δ(P0, u∗(P0, C)) · (α − α0)
    + ∂β δ(P0, u∗(P0, C)) · (β − β0)
    + εalg(P, C)    (9)

Thus, according to this Taylor expansion, the terms of equation (8) become:

Cα(C) = ∂α δ(P0, u∗(P0, C))
Cβ(C) = ∂β δ(P0, u∗(P0, C))
φ0(C) = φ(P0, u∗(P0, C), C) − Cα(C) α0 − Cβ(C) β0    (10)

At this point, Cα(C) and Cβ(C) can be evaluated thanks to the Jacobian formulas of the previous section (equations (2), (3)):

[Cα(C); Cβ(C)] = Jxy(P0) · [∂x δ(P0, u∗(P0, C)); ∂y δ(P0, u∗(P0, C))]    (11)
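Equation (11) can be checked numerically. Evaluating it by finite differences on an illustrative (assumed) geometry returns Cα(C0) ≈ 1 and Cβ(C0) ≈ C0, i.e. the value column of equation (12) below; note that, given the layouts written in (2) and (3), Jxy is implemented as the transposed inverse of Jαβ.

```python
import numpy as np
from scipy.optimize import brentq

# Illustrative geometry (assumed values, cf. section 5); P0 is the focus.
T0, dT = np.array([-1000.0, 0.0, 2000.0]), np.array([1.0, 0.0, 0.0])
R0 = np.array([1000.0, 0.0, 1000.0])
dR = np.array([0.0, 250.0, -10.0]) / np.hypot(250.0, 10.0)
C0, P0 = 0.9, np.array([11000.0, 0.0, 0.0])

def delta(P, u):
    return (np.linalg.norm(P - (T0 + u * dT))
            + np.linalg.norm(P - (R0 + u * dR)))

def u_star(P, C):
    """Equation (6), solved by bracketing (bounds are assumptions)."""
    g = lambda u: (delta(P, u + 1e-2) - delta(P, u - 1e-2)) / 2e-2 + C
    return brentq(g, 0.0, 4.0e4)

def alpha_beta(x, y):
    P = np.array([x, y, 0.0])
    b = u_star(P, C0)
    return np.array([delta(P, b), b])

def C_alpha_beta(C, h=1.0):
    """Equation (11): (C_alpha(C), C_beta(C)) = J_xy(P0) . grad_xy delta."""
    cx = (alpha_beta(P0[0] + h, P0[1]) - alpha_beta(P0[0] - h, P0[1])) / (2 * h)
    cy = (alpha_beta(P0[0], P0[1] + h) - alpha_beta(P0[0], P0[1] - h)) / (2 * h)
    M = np.column_stack([cx, cy])          # J_alphabeta, equation (3)
    u = u_star(P0, C)
    gx = (delta(P0 + [h, 0.0, 0.0], u) - delta(P0 - [h, 0.0, 0.0], u)) / (2 * h)
    gy = (delta(P0 + [0.0, h, 0.0], u) - delta(P0 - [0.0, h, 0.0], u)) / (2 * h)
    # Equation (2)'s J_xy is the transposed inverse of M with these layouts.
    return np.linalg.inv(M.T) @ np.array([gx, gy])

Ca, Cb = C_alpha_beta(C0)   # at C = C0, equation (12) predicts (1, C0)
```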
And in particular:

[Cα(C0), ∂C Cα(C0); Cβ(C0), ∂C Cβ(C0)] = [1, 0; C0, 1]    (12)
According to equation (12), kα ≈ k and kβ ≈ ku at first order in (C − C0). This makes the mapping from the (k, ku) domain to the (kα, kβ) domain easy to implement as two successive one dimensional resamplings: from the (k, ku) domain to the (k, kβ) domain, then from the (k, kβ) domain to the (kα, kβ) domain.
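The two-pass resampling can be sketched as follows. The inverse mapping coefficients here are stand-ins (assumptions, not the paper's functions) chosen to realise exactly the first-order behaviour of equation (12), so the mapping degenerates to the identity; only the row-then-column structure is the point.

```python
import numpy as np

# Stand-in inverse mapping coefficients (assumptions): they realise the
# first-order behaviour of eq. (12) exactly, C_alpha = 1 and C_beta = C.
C_beta_inv = lambda x: x          # inverse of C_beta(C) = C
C_alpha_inv = lambda x: x         # with C_alpha = 1, k = k_alpha

rng = np.random.default_rng(0)
k = np.linspace(200.0, 220.0, 48)       # range spatial frequency grid
ku = np.linspace(170.0, 200.0, 64)      # azimuth spatial frequency grid
S = rng.normal(size=(48, 64))           # toy spectrum S(k, ku)

# Pass 1: for each fixed k, resample over ku onto the target k_beta grid.
# The source abscissa of a target k_beta is ku = k * C_beta_inv(k_beta / k);
# evaluating this inverse per target sample is what eq. (12) makes cheap.
k_beta = ku.copy()
S1 = np.stack([np.interp(kk * C_beta_inv(k_beta / kk), ku, row)
               for kk, row in zip(k, S)])

# Pass 2: for each fixed k_beta, resample over k onto the target k_alpha
# grid; here the source abscissa is simply k = C_alpha_inv(k_alpha).
k_alpha = k.copy()
S2 = np.stack([np.interp(C_alpha_inv(k_alpha), k, S1[:, j])
               for j in range(S1.shape[1])], axis=1)
```

With the identity stand-ins the output equals the input, which makes the structure easy to verify before plugging in the real Cα, Cβ.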
Now we have an expression of the phase in the (α, β) coordinate system, obtained simply by finding u∗(P0, C). We can assess the validity of this coordinate system by evaluating the algorithmic range error εalg(P, C) using (12):

∀C: εalg(P0, C) = 0
∀P: εalg(P, C0) = 0,  ∂C εalg(P, C0) = 0    (13)

According to this result, εalg(P, C) has only second and higher order terms in C:

εalg(P, C) = a2(P) C² + a3(P) C³ + a4(P) C⁴ + o(C⁴)    (14)

If k·εalg(P, C) < π/4 for all (P, C), image degradation will be moderate. Moreover, having a closed form expression for εalg(P, C) allows for its correction during the motion compensation stage [4].

The resolutions Rα, Rβ and θβ (the angle between diffraction spikes, see figure 6) can be evaluated at P, to first approximation, by:

Rα = c / B
Rβ = c / (f0 · ΔC · sin(arctan(C0)))
tan(θβ) = −C0    (15)

where B is the bandwidth, c the speed of light, f0 the central frequency, and ΔC the variation of C along the integration path Li: ΔC = Li · ∂²u δ(P, β).

4 Algorithm

In this section, the algorithm is summarised (see figure 3):
- we choose a point P0 and the squint C0: P0 is the centre of the swath, and C0 = −∂u δ(P0, um) with um the middle of the acquisition,
- we solve equation (6), by numerical methods, at P0 to determine u∗(P0, C), thus deriving φ0(C), Cα(C) and Cβ(C).

Now we can sketch the SAR processor:
- after a pre-summation stage, a two dimensional Fourier transform is performed, yielding S̃(k, ku),
- the signal is multiplied by the original phase filter k·φ0(C),
- it is mapped from (k, ku) to (k, kβ) coordinates; note that for the sake of resampling accuracy, the origin coordinate (k, ku) must be evaluated for each target coordinate (k, kβ), therefore we should invert Cβ(C),
- next, the mapping from the (k, kβ) domain to the (kα, kβ) domain is performed; for the same reason, we should invert the function Cα(Cβ),
- at last, the two dimensional inverse Fourier transform yields the image in (α, β) coordinates.

Figure 3: bistatic ω-k algorithm. Flow: raw data s(d, u) → FFT2 → S̃(k, ku) → original phase shift → mapping (k, ku) → (k, kβ) → S̃(k, kβ) → mapping (k, kβ) → (kα, kβ) → S̃(kα, kβ) → IFFT2 → σ(α, β); the evaluation of φ0(C), Cα(C), Cβ(C) and the resolution of u∗(P0, C) feed the phase shift and mapping stages.
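The processor steps above can be sketched as a skeleton. The two spectral mappings are left as pass-through comments, so this shows the structure of figure 3, not a working focusing chain:

```python
import numpy as np

def bistatic_omega_k(raw, phase_filter=None):
    """Skeleton of the processor of figure 3 (mappings omitted).

    raw: pre-summed raw data s(d, u).
    phase_filter: the original phase shift exp(-i k phi0(C)) sampled on
                  the (k, ku) grid, or None to skip that stage.
    """
    S = np.fft.fft2(raw)                 # FFT2 -> S(k, ku)
    if phase_filter is not None:
        S = S * phase_filter             # original phase shift
    # mapping (k, ku) -> (k, k_beta): 1-D resampling per row (omitted)
    # mapping (k, k_beta) -> (k_alpha, k_beta): per column (omitted)
    return np.fft.ifft2(S)               # IFFT2 -> sigma(alpha, beta)

img = np.zeros((32, 32))
img[16, 16] = 1.0
out = bistatic_omega_k(img)              # identity stages: exact round trip
```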
5 Application

Figure 4: simulated non time invariant bistatic configuration. a: top view; b: side view; c: perspective view; d: detail of the test pattern.

This section presents an application of the algorithm. A test pattern of thirteen points was simulated. X-band is used (central frequency f0 = 10 GHz) with a bandwidth B = 200 MHz. The local georeference (East, North, Altitude) is chosen as the Cartesian coordinate system (x, y, z). The reference point is the projection on the ground
of the middle of the transmitter and receiver positions at the beginning of the acquisition (u = 0). The test pattern is in the receiver flight direction. The point of focus P0 is the test pattern centre. The other parameters of the simulation (figure 4) are: VT,x = 250 m/s (velocity of the transmitter); XT = −1000 m, YT = 0 m, ZT = 2000 m (starting position of the transmitter); VR,y = 250 m/s, VR,z = −10 m/s (velocity of the receiver); XR = 1000 m, YR = 0 m, ZR = 1000 m (starting position of the receiver); Xm = 11 km, Ym = 0 m, Zm = 0 m (test pattern centre); ΔX = 40 m, ΔY = 40 m (spacing between the point reflectors of the test pattern). The synthesised image medium squint and squint variation are C0 = 0.9 and ΔC = 0.04. The corresponding integration path length is about 500 m at the swath centre (we use a constant ΔC, hence a range dependent integration length). The approximate theoretical range and azimuth resolutions are 1.5 m and 1.1 m respectively, with tan(θβ) = −0.9.

Figure 5 shows the image of the test pattern in the (α, β) coordinate system. The positions of the point reflectors are correct (since the first order of the phase error is zero). The detail around a reflector at the tip of a test pattern branch (the red rectangle in figure 5) is shown in figure 6, together with its range (α) and azimuth (β) profiles (figures 7 and 8). The measured resolution is 1.3 m in range and 0.92 m in azimuth (with θβ = 42°). This is slightly better than the first order theoretical estimate (see [3] for a complete discussion of resolution estimation versus measurement).

Figure 5: image of the test pattern.

Figure 6: detail of a reflector in the test pattern (zoom on the centre of the test pattern), showing the angle θβ between the diffraction spikes.

Figure 7: range profile of the reflector. Resolution in range: 1.3 m; PSLR = −13.74 dB.

Figure 8: azimuth profile of the reflector. Resolution in azimuth: 0.92 m; PSLR = −14.48 dB.

6 Conclusion

We designed an efficient frequency domain algorithm for a bistatic SAR system in the non time invariant case. We have assumed rectilinear uniform trajectories because, in this case, it is possible to formally derive δ(P, u) and its derivatives in a Cartesian coordinate system, and to solve equation (6). If we can evaluate δ(P, u) and its derivatives in a more complex model, and solve equation (6), the reasoning is the same. Indeed, real acquisitions, especially airborne ones, have significant trajectory deviations from the straight line. These deviations induce an error in the propagation path length which adds up with εalg (equation (14)). In [4], a motion compensation method is introduced for correcting both errors simultaneously.

References

[1] J.H. Ender: A step to bistatic SAR processing, EUSAR 2004 procs, Ulm.
[2] V. Giroux, et al: Omega-k algorithm for SAR bistatic system, IGARSS 2005 procs, Seoul.
[3] H. Cantalloube, et al: Airborne X-band SAR imaging with 10 cm resolution - technical challenge and preliminary results, IEE RSN special EUSAR 2004 issue, 2005.
[4] V. Giroux, et al: Motion compensation for bistatic SAR frequency domain algorithm, EUSAR 2006 procs, Dresden.
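As a closing numerical check, the resolution formulas of equation (15) reproduce the theoretical values quoted in section 5. Note that the `sin(arctan(.))` form of the azimuth term is our reading of the garbled printed formula, adopted because it reproduces the quoted 1.1 m figure.

```python
import math

# Parameters quoted in section 5.
c, B, f0 = 3.0e8, 200.0e6, 10.0e9
C0, dC = 0.9, 0.04

R_alpha = c / B                                    # range resolution, eq. (15)
# Azimuth resolution: assumed reading of the garbled printed formula.
R_beta = c / (f0 * dC * math.sin(math.atan(C0)))
theta_beta = math.degrees(math.atan(C0))           # angle between spikes
```

This gives R_alpha = 1.5 m, R_beta ≈ 1.12 m and θβ ≈ 42°, matching the theoretical values of section 5.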