sensors Article

A Generalized Chirp-Scaling Algorithm for Geosynchronous Orbit SAR Staring Observations

Caipin Li 1,2 and Mingyi He 1,*

1 School of Electronics and Information, Northwestern Polytechnical University, N 1, Dongxiang Rd, Xi'an 710129, China; [email protected]
2 China Academy of Space Technology (Xi'an), Weiqu Street, Xi'an 710100, China
* Correspondence: [email protected]; Tel.: +86-29-8843-1250

Academic Editor: Assefa M. Melesse
Received: 19 March 2017; Accepted: 2 May 2017; Published: 6 May 2017

Abstract: Geosynchronous Orbit Synthetic Aperture Radar (GEO SAR) has recently received increasing attention due to its ability to perform staring observations of ground targets. However, GEO SAR staring observation involves an ultra-long integration time that conventional frequency domain algorithms cannot handle, because of the inaccuracy of the assumed slant range model and the presence of azimuth aliasing. To overcome this problem, this paper proposes an improved chirp-scaling algorithm that uses a fifth-order slant range model, which accounts for the impact of the "stop and go" assumption to overcome the inaccuracy of the conventional slant range model, and a two-step processing method to remove azimuth aliasing. Furthermore, the expression of the two-dimensional spectrum is deduced based on series reversion, leading to an improved chirp-scaling algorithm comprising high-order-phase coupling function compensation, range compression, and azimuth compression. The important innovations of this algorithm are the implementation of a fifth-order slant range model and the removal of azimuth aliasing for GEO SAR staring observations. A simulation of an L-band GEO SAR with 1800 s integration time is used to demonstrate the validity and accuracy of the algorithm.

Keywords: staring observation; geosynchronous SAR; fifth-order slant range model; improved chirp-scaling algorithm

Sensors 2017, 17, 1058; doi:10.3390/s17051058

1. Introduction

Owing to its all-weather, day-and-night Earth observation capability, synthetic aperture radar (SAR) has been widely applied in a variety of microwave remote sensing fields, such as disaster monitoring [1], topographic mapping [2], soil moisture estimation [3], oil spill observation [4], crop growth monitoring [5], forest cover measurement [6], wetland mapping [7], maritime surveillance [8–10], and so on. However, with increasing demands for a wider imaging swath and a shorter revisit period, it is difficult for low Earth orbit (LEO) SAR to satisfy the application requirements. To solve this problem, the Geosynchronous Orbit SAR (GEO SAR) concept has been put forward. Compared with LEO SAR, GEO SAR not only offers advantages such as a wider swath and a shorter revisit period [11–16], but it can also realize long-time observation of ground targets, even by staring, which is beneficial to its applications in microwave remote sensing.

Staring observation is the classical spotlight mode, with the beam steered to a rotation center within the imaged scene [17]. GEO SAR staring observation can be achieved using a beam forming strategy, implemented by adjusting the antenna beam direction or through satellite attitude steering [18]. The first challenge in GEO SAR staring observation image formation is the integration time, which is extended to thousands of seconds, in comparison with the hundreds of seconds of conventional GEO SAR strip-map observation. In this case, conventional GEO SAR frequency



domain algorithms, such as the modified chirp scaling (CS) algorithm [19,20], the improved frequency domain focusing method [21,22], the generalized Omega-K algorithm [23], and sub-aperture processing methods [24], cannot be directly used, because of the inaccuracy of the corresponding slant range models. In the above algorithms, the fourth-order slant range model fails to precisely describe the range history over the ultra-long integration time of a staring observation. Zhao et al. [25] have proposed an improved RD imaging algorithm based on a fifth-order Doppler range model that is mainly used for strip-map SAR; moreover, the Doppler range model in [25] does not consider the impact of the "stop and go" assumption. Yu et al. [26] have proposed a time-frequency scaling algorithm to correct the high-order azimuth variance in GEO SAR, but the imaging mode is still strip-map. Conventional time domain processing algorithms, such as the back projection algorithm [27], are not restricted by the slant range model; however, due to their prohibitively heavy computational load and inefficient processing, they are not practical to use.

Besides, over thousands of seconds of integration time, many factors affect the imaging focus, such as the atmospheric refractive index, orbit perturbation, etc. Several methods have been proposed to solve these problems. Ruiz et al. [28] proposed retrieving the atmospheric phase screen (APS) and compensating its effects on GEO SAR focusing. To improve GEO SAR focusing under perturbations, Dong et al. [29] recommended the use of accurate orbit measurements or signal processing methods. The autofocus technique for phase error correction has been proven to be robust and can compensate for the phase error caused by the atmosphere or orbit perturbations [30]. Although these effects on GEO SAR imaging deserve deeper study, this paper focuses on the imaging process.
The other challenge encountered in GEO SAR staring observation image formation is azimuth aliasing, as in spotlight SAR. An imaging algorithm for GEO SAR staring observation should therefore address both issues. This paper aims to solve the main problems posed by the above challenges. The main contributions of this paper are as follows: first, a fifth-order slant range model considering the impact of the "stop and go" assumption is developed for GEO SAR staring observations. Then, an accurate 2-D spectrum based on series reversion is deduced, and a two-step processing method is proposed to remove spectrum aliasing. Finally, an improved chirp scaling algorithm based on the fifth-order slant range model is obtained to perform high-order-phase coupling function compensation, range compression, and azimuth compression.

This paper is organized as follows: the signal model based on the fifth-order slant range for GEO SAR is investigated in Section 2. In Section 3, the two-dimensional spectrum of the echo is derived. In Section 4, the imaging algorithm is deduced in detail and the algorithm flow chart is given. Section 5 gives the simulation results that validate the improved imaging algorithm. Finally, conclusions are summarized in Section 6.

2. Signal Model for GEO SAR Staring Observation

The geometry of the satellite and ground targets for GEO SAR staring observation is shown in Figure 1.

Figure 1. The geometry of satellite and ground targets.



In the case of staring observations, the antenna (moving from position A to position B) is constantly steered to the point target, which results in a long illumination time. Assuming the position vectors of the satellite and the target in the Earth-fixed coordinate system to be RS(ta) and RT(ta), respectively, the slant range from the satellite to the target point can be expressed as R(ta) = RS(ta) − RT(ta), where ta is the azimuth time.

Taking a group of typical GEO SAR parameters, e.g., a semi-major axis of the orbit of 42,164 km, an orbit eccentricity of 0, an orbit inclination of 20°, and an azimuth time of 1800 s, Figure 2 shows that the maximal phase error caused by the conventional fourth-order slant range model is greater than π/4, whereas that of the fifth-order slant range model is far less than π/4. Hence, the fourth-order slant range model cannot meet the requirement of the imaging algorithm, and the fifth-order range model is highly desired. Under the rule that the phase error must be less than π/4, we can get the relationship between the slant range model order and the synthetic aperture time, which is shown in Figure 3.
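The trade-off behind Figures 2 and 3 can be reproduced in spirit with a short numerical experiment. The sketch below is our own construction, with an assumed circular-orbit geometry rather than the paper's exact ephemeris, and a least-squares polynomial fit standing in for the Taylor expansion; it converts the worst-case fit residual into a two-way phase error 4π∆R/λ, to be checked against the π/4 rule:

```python
import numpy as np

MU, W_E = 3.986004418e14, 7.2921159e-5   # gravitational constant, Earth rotation rate
A_ORB, INC = 42.164e6, np.deg2rad(20.0)  # orbit radius (m) and inclination
LAM = 0.24                               # assumed L-band wavelength (~1.25 GHz), m

def slant_range(t):
    """Range from a circular, inclined GEO orbit to a fixed point on the Earth."""
    n = np.sqrt(MU / A_ORB**3)
    u, th = n * t, W_E * t
    x_i, y_i, z_i = A_ORB*np.cos(u), A_ORB*np.sin(u)*np.cos(INC), A_ORB*np.sin(u)*np.sin(INC)
    sat = np.stack([np.cos(th)*x_i + np.sin(th)*y_i,
                    -np.sin(th)*x_i + np.cos(th)*y_i, z_i], axis=-1)
    return np.linalg.norm(sat - np.array([6.371e6, 0.0, 0.0]), axis=-1)

t_a = np.linspace(-900.0, 900.0, 2001)       # 1800 s aperture
r, ts = slant_range(t_a), t_a / 900.0        # scale time for a well-conditioned fit
errs = []
for order in (3, 4, 5, 6):
    resid = r - np.polyval(np.polyfit(ts, r, order), ts)   # model-order residual, m
    errs.append(4*np.pi*np.max(np.abs(resid))/LAM)         # two-way phase error, rad
    print(order, errs[-1], errs[-1] < np.pi/4)
```

The exact crossover times (680 s, 1580 s, 2830 s in the paper) depend on the true perturbed geometry, so this toy model only illustrates the trend of phase error falling with model order.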

Figure 2. Phase errors caused by the slant range model. (a) Phase error of the fourth-order slant range model as a function of time; (b) phase error of the fifth-order slant range model as a function of time.

Figure 3. Relationship between different-order slant range models and synthetic aperture time.

As can be seen from Figure 3, the third-order slant range model can be used when the synthetic aperture time does not exceed 680 s, the fourth-order model when it does not exceed 1580 s, and so on; when the synthetic aperture time exceeds 2830 s, the sixth-order slant range model must be used. For the 1800 s of synthetic aperture time considered here, the fifth-order model meets the requirements. Using a Taylor series expansion in azimuth time, the slant range is extended to a fifth-order model by the following equation:

‖R(ta)‖ = Rc + Vc ta + (1/2)Ac ta² + (1/6)Bc ta³ + (1/24)Cc ta⁴ + (1/120)Dc ta⁵ + · · ·   (1)

where the expressions of Rc, Vc, Ac, Bc, Cc can be obtained from [31]. The value of Dc can be further expressed as:

Dc = (15x1⁵/32 − 75x1³x2/4 + 45x3x1²/2 + 45x1x2²/2 − 30x4x1 − 30x3x2 + 60x5) RST   (2)


in which RST = ‖RST‖, and:

x1 = 2⟨RST, VST⟩/‖RST‖²
x2 = (⟨RST, AST⟩ + ⟨VST, VST⟩)/‖RST‖²
x3 = (⟨RST, BST⟩/3 + ⟨VST, AST⟩)/‖RST‖²
x4 = (⟨RST, CST⟩/12 + ⟨VST, BST⟩/3 + ⟨AST, AST⟩/4)/‖RST‖²
x5 = (⟨RST, DST⟩/60 + ⟨VST, CST⟩/12 + ⟨AST, BST⟩/6)/‖RST‖²

RST = RS − RT, VST = VS − VT, AST = AS − AT, BST = BS − BT, CST = CS − CT, DST = DS − DT

where R, V, A, B, C, D are all state vectors and represent the satellite and target position vectors, velocity vectors, acceleration vectors, and the first, second, and third derivatives of the acceleration vectors, respectively. The subscripts S and T denote the satellite and the target, respectively, and the operator ⟨ , ⟩ represents the inner product. The expressions of DS and DT can be obtained from the following equations:

DS = µ²VS/|RS|⁶ − 3µ²⟨RS, VS⟩RS/|RS|⁵ + 9µAS⟨RS, VS⟩/|RS|⁵ + 9µ⟨VS, VS⟩VS/|RS|⁵ + 9µ⟨RS, AS⟩VS/|RS|⁵ + 9µ⟨VS, AS⟩RS/|RS|⁵ − 45µ⟨RS, VS⟩²VS/|RS|⁷ − 45µ⟨RS, VS⟩⟨VS, RS⟩RS/|RS|⁷ − 45µ⟨AS, RS⟩⟨RS, VS⟩RS/|RS|⁷ − 3µ²⟨RS, VS⟩RS/|RS|⁸ + 9µ²⟨RS, VS⟩RS/|RS|⁸ + 105µ⟨RS, VS⟩³RS/|RS|⁹   (3)

DT = We ⊗ CT   (4)

where We is the angular velocity vector of the Earth's rotation, µ is the gravitational constant, and the operator ⊗ stands for the outer product.

In the slant range, because of the ultra-long integration time, the effect of perturbing orbital elements on the satellite and target kinematic parameters should be taken into account [29]. As indicated in [31], the satellite and target kinematic parameters can be obtained from the orbital elements, which include the following: a, the semi-major axis of the orbit; e, the eccentricity of the orbit; i, the inclination of the orbit; Ω, the longitude of the ascending node; f, the true anomaly; and w, the argument of perigee. Ignoring the slight stochastic perturbation errors, the osculating orbital elements can be expressed as:

a = am + (∆a)sec + (∆a)lp + (∆a)sp
e = em + (∆e)sec + (∆e)lp + (∆e)sp
i = im + (∆i)sec + (∆i)lp + (∆i)sp
Ω = Ωm + (∆Ω)sec + (∆Ω)lp + (∆Ω)sp
f = fm + (∆f)sec + (∆f)lp + (∆f)sp
w = wm + (∆w)sec + (∆w)lp + (∆w)sp   (5)

where am, em, im, Ωm, fm, wm are the mean orbital elements; ∆a, ∆e, ∆i, ∆Ω, ∆f, ∆w are the perturbing orbital elements; and the subscripts "sec", "lp", and "sp" denote the secular perturbation term, the long-period term, and the short-period term, respectively. Therefore, through computation of the orbit perturbation parameters and accurate measurement of the orbit, we can obtain the slant range accurately.

In the conventional SAR echo signal model, the transmission point and receiving point can be considered to be at the same position. However, for GEO SAR, the orbit height is about 36,000 km, so this assumption is invalid [22]. To facilitate the deduction and use of the algorithm, the impact


of the "stop-and-go" assumption must be compensated; subsequently, the imaging algorithm can be expressed in the form of monostatic SAR. The error caused by the "stop-and-go" assumption on the slant range can be expressed as follows [22]:

∆R(ta) = ∆Rc + ∆Vc ta + (1/2)∆Ac ta² + (1/6)∆Bc ta³ + · · ·   (6)

where ∆Rc = ⟨VST, RST⟩/c, ∆Ac = (⟨BST, RST⟩ + 3⟨AST, VST⟩)/(2c), ∆Bc = (⟨CST, RST⟩ + 4⟨BST, VST⟩)/(6c) + ⟨RST, RST⟩/(2c), and c is the speed of light. Using the same orbit parameters as above, the range error reached with the third-order approximation is 0.416 mm, leading to a phase error far less than π/4. Thus, the error distance caused by the "stop and go" assumption can be fully described by the third-order approximation. Taking the error caused by the "stop and go" assumption into account, the slant range in GEO SAR can be written as:

R′(ta) ≈ R′c + V′c ta + (1/2)A′c ta² + (1/6)B′c ta³ + (1/24)Cc ta⁴ + (1/120)Dc ta⁵   (7)

where R′c = Rc + ∆Rc, V′c = Vc + ∆Vc, A′c = Ac + ∆Ac, B′c = Bc + ∆Bc. According to the SAR principle, the echo signal after demodulation can be expressed as:

s0(tr, ta) = wr(tr − 2R′(ta)/c) wa(ta) exp[jπKr(tr − 2R′(ta)/c)²] exp[−j4πf0R′(ta)/c]   (8)

where tr is the range time, ta is the azimuth time, wr and wa stand for the range envelope and azimuth envelope, respectively, f0 denotes the carrier frequency, Kr is the frequency modulation rate, and R′(ta) is the slant range depending on azimuth time.

3. Two-Dimensional Spectrum Analysis

The two-dimensional spectrum plays an important role in frequency domain algorithms, and the improved imaging algorithm can be deduced from it. If the fast Fourier transform (FFT) is performed in the range domain, the echo signal can be written as:

s1(fr, ta) = Wr(fr) wa(ta) exp[−j4π(f0 + fr)R′(ta)/c] exp[−jπfr²/Kr]   (9)

where fr is the range frequency. Then we perform the azimuth FFT, using the principle of stationary phase and series reversion. The signal's two-dimensional frequency domain phase can be obtained as:

Θ(fr, fa) = −πfr²/Kr − (4π(fc + fr)/c){R′c − (A1/2)[−cfa/(2(fc + fr)) − V′c]² − (A2/3)[−cfa/(2(fc + fr)) − V′c]³ − (A3/4)[−cfa/(2(fc + fr)) − V′c]⁴ − (A4/5)[−cfa/(2(fc + fr)) − V′c]⁵}   (10)

where fa is the azimuth frequency, and:

A1 = 1/A′c
A2 = −B′c/(2A′c³)
A3 = (3B′c² − A′cCc)/(6A′c⁵)
A4 = −B′c³/(2A′c⁷) + 7B′cCc/(12A′c⁶) − Dc/A′c⁵


Because fr/fc is far less than 1, the series can be expanded in terms of (1 + fr/fc)⁻¹, (1 + fr/fc)⁻², and (1 + fr/fc)⁻³. With this substitution, the two-dimensional spectral phase can be rewritten as follows:

Θ(fr, fa) ≈ ϕ0(fa) + ϕ1(fa)fr + ϕ2(fa)fr² + ϕ3(fa)fr³   (11)

where ϕ0(fa) is the azimuth phase, which is irrelevant to range frequency, and can be expressed as:

ϕ0(fa) = −(4π/λ)[R′c + (λ²A1/8)(fa − fd)² − (λ³A2/24)(fa − fd)³ + (λ⁴A3/64)(fa − fd)⁴ − (λ⁵A4/160)(fa − fd)⁵]   (12)

where fd = −2V′c/λ. ϕ1(fa) is the first-order coupling phase describing the RCM, which is relevant to range frequency and can be expressed as:

ϕ1(fa) = −(4π/c){R′c + (λ²A1/8)[(fa − fd)² + 2fd(fa − fd)] − (λ³A2/24)[2(fa − fd)³ + 3fd(fa − fd)²] + (λ⁴A3/64)[3(fa − fd)⁴ + 4fd(fa − fd)³] − (λ⁵A4/160)[4(fa − fd)⁵ + 5fd(fa − fd)⁴]}   (13)

ϕ2(fa) is the second-order coupling phase, which contains the range modulation phase and the second-order cross-coupling, expressed as:

ϕ2(fa) = (πfa²/fc²){λA1/2 − (λ²A2/6)[3(fa − fd)] + (λ³A3/16)[6(fa − fd)²] − (λ⁴A4/40)[10(fa − fd)³]} − π/Kr   (14)

ϕ3(fa) is the third-order coupling phase, which is relevant to the third-order range frequency. It can be defined as:

ϕ3(fa) = −(πfa²/fc³){λA1/2 − (λ²A2/6)[4(fa − fd) + fd] + (λ³A3/16)[10(fa − fd)² + 4fd(fa − fd)] − (λ⁴A4/40)[20(fa − fd)³ + 10fd(fa − fd)²]}   (15)

4. Imaging Algorithm

In this section, we concentrate on the corresponding imaging algorithm. The proposed algorithm includes two parts: the first part, azimuth preprocessing, is used to remove the aliasing in the staring observation mode; the second is the improved CS algorithm.

4.1. Azimuth Preprocessing

In the staring observation mode, the azimuth Doppler bandwidth is composed of two parts. One part is the beam-coverage Doppler bandwidth of the radar, known as the instantaneous Doppler bandwidth, as in strip-map mode. The other part comes from the steering of the beam around the observation point, which causes a rotation of the Doppler center. It introduces an extra bandwidth, called the beam rotation bandwidth, which may be several times larger than the instantaneous Doppler bandwidth. In space-borne SAR, to obtain a larger imaging swath, the pulse repetition frequency is usually designed according to the instantaneous Doppler bandwidth, which causes azimuth spectrum aliasing in the staring imaging mode.

To remove the aliasing, two methods are usually adopted. One is the sub-aperture method [32,33], where the data is divided into small segments in azimuth that are imaged separately; however, the sub-aperture operation is inefficient and complex. The other is two-step azimuth preprocessing. The two-step processing method is illustrated in [34–36] and is usually used to remove the aliasing in spotlight SAR and sliding-spotlight SAR. We employ it to remove the azimuth aliasing in staring observation. Subsequently, the signal aliasing is removed, resulting in imaging processing without ambiguity.
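The spectral effect of the deramping at the heart of the two-step method can be illustrated with a toy azimuth signal. In the sketch below only the PRF comes from Table 1; the data length and rotation rate are assumptions, and multiplying by the conjugate steering chirp shows just the bandwidth-compression step of the full two-step method, which in practice is completed by a convolution:

```python
import numpy as np

# Beam steering sweeps the Doppler centroid linearly, adding a "rotation bandwidth"
# K_ROT * T that exceeds the PRF; deramping with the conjugate chirp removes it so
# the remaining (instantaneous) bandwidth fits inside the PRF without aliasing.
PRF = 90.0            # Hz, as in Table 1
T = 20.0              # s of azimuth data in this toy example (1800 s is impractical here)
K_ROT = 10.0          # Hz/s centroid rotation rate -> 200 Hz rotation bandwidth

t = np.arange(-T/2, T/2, 1.0/PRF)
sig = np.exp(1j*np.pi*K_ROT*t**2)        # steering-induced chirp, bandwidth ~ K_ROT*T > PRF

def bandwidth_99(x, fs):
    """Width of the band holding 99% of the spectral energy, in Hz."""
    p = np.sort(np.abs(np.fft.fft(x))**2)[::-1]
    n = np.searchsorted(np.cumsum(p), 0.99*np.sum(p)) + 1
    return n * fs / len(x)

deramped = sig * np.exp(-1j*np.pi*K_ROT*t**2)   # deramp: remove the rotation bandwidth
print(bandwidth_99(sig, PRF), bandwidth_99(deramped, PRF))
```

Before deramping the aliased spectrum fills essentially the whole PRF; afterwards the occupied bandwidth collapses to the (here zero) instantaneous bandwidth.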


4.2. Improved CS Algorithm

Due to the curved trajectory and the high-order slant range model, classical imaging algorithms cannot be directly used in the case of GEO SAR. Therefore, an improved CS algorithm based on the fifth-order slant range model is adapted for the imaging processing. After the azimuth preprocessing, the signal's third-order coupling phase can be compensated in the frequency domain. The reference function is described as:

H1(fr, fa, Rref) = exp[jϕ3(fa, Rref)fr³]   (16)

where Rref is the slant range at the beam center. Then, the range inverse fast Fourier transform (IFFT) of the signal is computed, and the following expression is obtained:

S(tr, fa; R′c) = Wr(tr + ϕ1(fa)/(2π)) Wa(fa − fd) exp[jϕ0(fa; R′c)] exp[−j(π²/ϕ2(fa))(tr − td(fa, R′c))²]   (17)

where td(fa, R′c) = −ϕ1(fa, R′c)/(2π). The next step is to determine the scaling factor of the improved CS algorithm. Due to the high-order slant range model, the linear scaling factor is not easy to obtain and difficult to describe with an explicit equation; however, it can be obtained by numerical polynomial fitting, as described in [37]. According to the numerical fitting, the linear scaling factor Cs(fa) is obtained, and the linear frequency modulation scaling function is described as:

H2(tr, fa, Rref) = exp[jπ²(Cs(fa) − 1)(tr − td(fa, Rref))²/ϕ2(fa, Rref)]   (18)

After multiplying by the linear scaling function and computing the range FFT, the expression of the echo signal can be written as follows:

S(fr, fa; R′c) = Wr(−πfr/(ϕ2(fa, R′c)Cs(fa))) Wa(fa − fd) exp[−jϕ2(fa, R′c)fr²/Cs(fa)] exp[−j(4πfr/c)(R′c + Rref(Cs(fa) − 1))] exp[−jϕ0(fa, R′c)] exp[−j(4π²/c²)Cs(fa)(Cs(fa) − 1)(1/ϕ2(fa; R′c))(R′c − Rref)²]   (19)

Then, the range cell migration correction phase and the secondary range compression phase are compensated in the 2-D frequency domain. The function can be expressed as:

H3(fr, fa, Rref) = exp[jϕ2(fa, Rref)fr²/Cs(fa)] exp[j4π(Cs(fa) − 1)Rref fr/c]   (20)

After applying the range IFFT, the azimuth compression and residual phase compensation functions are presented by a fifth-order polynomial in the frequency domain, which is given by:

H4(tr, fa, Rref) = exp[jϕ0(fa)] exp[j(4π²/c²)Cs(fa)(Cs(fa) − 1)(1/ϕ2(fa, Rref))(R′c − Rref)²]   (21)

Finally, the echo data are transformed into the 2-D time domain, and the image is obtained. The flowchart of the imaging algorithm is shown in Figure 4.
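The flow of Figure 4 can be summarized as a processing skeleton. The sketch below is structural only: the phase factors H1 through H4 are placeholders for Equations (16), (18), (20), and (21), whose actual values depend on the orbit-derived coefficients and the fitted scaling factor Cs(fa):

```python
import numpy as np

def improved_cs_focus(raw, H1, H2, H3, H4):
    """raw: 2-D array [azimuth, range]; H1..H4: matching 2-D phase factors."""
    d = np.fft.fft(raw, axis=0)       # azimuth FFT (after azimuth preprocessing)
    d = np.fft.fft(d, axis=1) * H1    # 2-D frequency domain: third-order coupling, Eq. (16)
    d = np.fft.ifft(d, axis=1) * H2   # range-Doppler domain: chirp scaling, Eq. (18)
    d = np.fft.fft(d, axis=1) * H3    # RCMC + secondary range compression, Eq. (20)
    d = np.fft.ifft(d, axis=1) * H4   # azimuth compression + residual phase, Eq. (21)
    return np.fft.ifft(d, axis=0)     # back to the 2-D time domain: focused image

# With all-ones phase factors the chain reduces to an identity, a quick sanity check.
raw = np.random.randn(64, 64) + 1j*np.random.randn(64, 64)
ones = np.ones((64, 64))
img = improved_cs_focus(raw, ones, ones, ones, ones)
print(np.max(np.abs(img - raw)))
```

This makes the FFT/IFFT bookkeeping of the flowchart explicit without committing to any particular orbit.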


[Flowchart: raw echo data → azimuth preprocessing → azimuth FFT → range FFT → third-order coupling phase function H1 → range IFFT → chirp scaling function H2 → range FFT → range migration correction and secondary range compression function H3 → range IFFT → azimuth compression and residual phase compensation function H4 → azimuth IFFT → SAR image.]

Figure 4. Flowchart of the imaging algorithm.

5. Simulation Results

In order to validate the algorithm, simulation experiments are performed and the imaging quality of the proposed algorithm is evaluated. The parameters used in our experiments are listed in Table 1.

Table 1. Simulation Parameters.

Parameters                              Value
Orbit semi-major axis                   42,164 km
Orbit eccentricity                      0
Orbit inclination                       20°
Argument of perigee                     95°
Right ascension of ascending node       97°
Radar frequency                         1.25 GHz
Pulse repetition frequency              90 Hz
Bandwidth                               80 MHz
Antenna beam width                      0.5°
Off-nadir angle                         3.5°
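A few quantities implied by Table 1 can be computed from standard relations as a quick orientation; these derived values are our own sanity check and are not stated explicitly in the paper:

```python
import numpy as np

c = 299_792_458.0                   # speed of light, m/s
f0, B, prf = 1.25e9, 80e6, 90.0     # radar frequency, bandwidth, PRF from Table 1

wavelength = c / f0                                             # L-band wavelength, m
range_res = c / (2 * B)                                         # slant-range resolution, m
orbit_period = 2*np.pi*np.sqrt(42.164e6**3 / 3.986004418e14)    # two-body period, s
n_az = int(prf * 1800)                                          # azimuth samples in 1800 s
print(wavelength, range_res, orbit_period, n_az)
```

The orbit period lands at one sidereal day, which is what makes the 42,164 km semi-major axis geosynchronous.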

The satellite orbit time is selected at perigee passage time 0, and the designed imaging scene, with a size of 80 km × 40 km, is shown in Figure 5. The integration time is 1800 s for the targets in the scene.

Figure 5. Target positions in the scene.

The target T2's scene bandwidth, including the extra bandwidth introduced by the steering of the antenna and the instantaneous bandwidth of the beam, is calculated to be about 260 Hz. The instantaneous bandwidth varies with the perigee passage time, as shown in Figure 6. It can be seen that the instantaneous bandwidth is about 20 Hz at perigee passage time 0 h, far less than the scene bandwidth.

Figure 6. The instantaneous bandwidth.

The pulse repetition frequency is selected as 90 Hz. Figure 7a shows that aliasing occurs in the echo spectrum. However, when the azimuth preprocessing operation is used, the aliasing in azimuth is removed, as shown in Figure 7b.

Figure 7. Signal spectrum without and with azimuth pre-processing. (a) Signal spectrum aliasing without azimuth pre-processing; (b) signal spectrum aliasing removed with azimuth pre-processing.

5.1. Simulation Results of the Conventional Algorithm

To provide a comparison for the imaging results, the conventional algorithm based on the fourth-order slant range model [19] is used. The imaging results are shown in Figure 8. It can be clearly seen that all the point targets in the scene are severely defocused.

Sensors 2017, 17, 1058

Figure 8. Imaging results using the conventional fourth-order algorithm. (a) Target T1; (b) Target T2; (c) Target T3; (d) Range profile of point target T1; (e) Range profile of point target T2; (f) Range profile of point target T3; (g) Azimuth profile of point target T1; (h) Azimuth profile of point target T2; (i) Azimuth profile of point target T3.

5.2. Simulation Results of the Improved Algorithm

The improved chirp-scaling imaging algorithm based on the fifth-order slant range model is used for the imaging processing. The imaging results are shown in Figure 9.
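To see why a fifth-order rather than fourth-order slant range model matters over an 1800 s aperture, one can fit polynomials to a toy GEO range history. The geometry below (a circular orbit and a fixed target radius, ignoring eccentricity, inclination, Earth rotation of the target and the "stop-and-go" correction) is purely illustrative; only the qualitative point carries over, namely that the worst-case model error drops sharply when going from fourth to fifth order.

```python
import numpy as np
from numpy.polynomial import Polynomial

# Toy geometry (illustrative assumptions, not the paper's orbit parameters).
Rs = 42164e3                   # satellite orbit radius, m
Rt = 6371e3                    # target radius (Earth surface), m
omega = 2 * np.pi / 86164.0    # sidereal rotation rate, rad/s
theta0 = 0.7                   # assumed initial satellite-target central angle, rad

t = np.linspace(-900.0, 900.0, 2001)   # slow time over an 1800 s aperture
# Law of cosines gives the slant range history R(t).
R = np.sqrt(Rs**2 + Rt**2 - 2 * Rs * Rt * np.cos(omega * t + theta0))

# Least-squares polynomial models of the range history.
fit4 = Polynomial.fit(t, R, 4)
fit5 = Polynomial.fit(t, R, 5)
err4 = np.max(np.abs(R - fit4(t)))     # worst-case model error, m
err5 = np.max(np.abs(R - fit5(t)))
print(err4, err5)   # the fifth-order model tracks R(t) far more tightly
```

In a two-way phase budget the range model error is typically required to stay below a small fraction of the wavelength (on the order of λ/16, about 1.5 cm at L-band), which is why low-order models break down as the synthetic aperture grows.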

Figure 9. Imaging results at perigee passage time 0 h. (a) 2-D contour of point target T1; (b) 2-D contour of point target T2; (c) 2-D contour of point target T3; (d) Range profile of point target T1; (e) Range profile of point target T2; (f) Range profile of point target T3; (g) Azimuth profile of point target T1; (h) Azimuth profile of point target T2; (i) Azimuth profile of point target T3.

The imaging performance, including the resolution (RZ), peak sidelobe ratio (PSLR) and integrated sidelobe ratio (ISLR) of the point targets, is evaluated without windowing treatment, as shown in Table 2. In order to compare the imaging results obtained by the proposed algorithm with those of the algorithm in [19], the simulation results of the algorithm in [19] are also listed in Table 2.

Table 2. Imaging results of point targets at perigee passage time 0 h.

Proposed Algorithm:
Target   Range: RZ (m)  PSLR (dB)  ISLR (dB)   Azimuth: RZ (m)  PSLR (dB)  ISLR (dB)
T1              3.80    −13.22     −9.83                2.11    −13.02     −9.35
T2              3.86    −13.27     −9.75                2.02    −13.18     −9.68
T3              3.92    −13.20     −9.91                2.13    −13.05     −9.57

Algorithm in [19]:
Target   Range: RZ (m)  PSLR (dB)  ISLR (dB)   Azimuth: RZ (m)  PSLR (dB)  ISLR (dB)
T1              4.75    −2.357     −7.35                46      −4.21      −2.52
T2              3.81    −2.865      8.86                32      −2.36       1.25
T3              4.63    −1.08      −1.08                48      −5.01      −1.63

Comparing the analytical results of T1, T2 and T3 obtained with the improved algorithm, the azimuth PSLR decreases slightly along the azimuth direction, which is mainly caused by the azimuth-variant compression function. Similarly, the range PSLR and ISLR also degrade slowly along the range direction, which is mainly caused by the differential RCMC in range. Nevertheless, the table clearly shows that the imaging performance is close to the theoretical values and meets the imaging requirements. In contrast, when the algorithm in [19] is used, defocusing occurs in both azimuth and range; consequently, its azimuth resolution and ISLR are seriously deteriorated. By comparison, the proposed algorithm achieves better performance.

In order to further demonstrate the advantage of the proposed algorithm, a different orbital position, at perigee passage time 2 h, is randomly selected for imaging simulation. In the case of satellite yaw angle control, the imaging results obtained are shown in Figure 10.
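The PSLR and ISLR entries in Table 2 can be measured from a compressed point-target profile with a routine like the sketch below (illustrative, not the paper's evaluation code). The mainlobe is taken as the region between the first nulls on either side of the peak; PSLR compares the strongest sidelobe with the peak, and ISLR compares the total sidelobe energy with the mainlobe energy.

```python
import numpy as np

def point_target_metrics(h):
    """Return (PSLR, ISLR) in dB for a 1-D impulse response h."""
    p = np.abs(np.asarray(h)) ** 2           # power profile
    peak = int(np.argmax(p))
    # Walk outwards from the peak to the first null on each side:
    left = peak
    while left > 0 and p[left - 1] < p[left]:
        left -= 1
    right = peak
    while right < p.size - 1 and p[right + 1] < p[right]:
        right += 1
    main = p[left:right + 1]                           # mainlobe, null to null
    side = np.concatenate((p[:left], p[right + 1:]))   # everything else
    pslr = 10 * np.log10(side.max() / p[peak])
    islr = 10 * np.log10(side.sum() / main.sum())
    return pslr, islr

# Sanity check on an unweighted sinc response (the ideal compressed pulse):
x = np.linspace(-20, 20, 8001)
pslr, islr = point_target_metrics(np.sinc(x))
print(round(pslr, 2), round(islr, 2))   # PSLR near -13.26 dB, ISLR near -10 dB
```

Since Table 2 is evaluated without windowing, the well-focused entries should sit near these unweighted-sinc values; applying a weighting window would trade resolution for lower sidelobes and change all three figures.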


Figure 10. Imaging results at perigee passage time 2 h. (a) 2-D contour of point target T1; (b) 2-D contour of point target T2; (c) 2-D contour of point target T3; (d) Range profile of point target T1; (e) Range profile of point target T2; (f) Range profile of point target T3; (g) Azimuth profile of point target T1; (h) Azimuth profile of point target T2; (i) Azimuth profile of point target T3.

The imaging performance at perigee passage time 2 h is evaluated without windowing treatment, as shown in Table 3. The resolution is slightly different from that in Table 2, which is caused by the orbital characteristics. The ideal PSLR should be −13.26 dB. The maximal loss of the range and azimuth PSLRs is less than 0.23 dB, indicating good focusing quality. The range and azimuth ISLRs over the whole swath are around the standard value, also indicating good focusing quality.

Table 3. Imaging results of point targets at perigee passage time 2 h.

Target   Range: RZ (m)  PSLR (dB)  ISLR (dB)   Azimuth: RZ (m)  PSLR (dB)  ISLR (dB)
T1              3.82    −13.08     −9.52                2.35    −13.20     −9.45
T2              3.93    −13.19     −9.69                2.27    −13.16     −9.71
T3              4.01    −13.06     −9.65                2.47    −13.03     −9.35

In order to further verify the improved algorithm, a real image (Huashan Mountain, Shaanxi, China, obtained from a sliding-rail vehicle experiment) is used as the input radar cross section information in the echo generation. The simulation parameters are the same as those listed in Table 1, and the synthetic aperture time is 1800 s. Both the conventional algorithm and the improved algorithm are used to focus the SAR raw data, and the imaging results of the Huashan Mountain area are depicted in Figure 11. It is obvious that well-focused results are obtained by the improved scaling imaging algorithm based on the fifth-order slant range model, as shown in Figure 11b, whereas the conventional algorithm results are defocused, as shown in Figure 11a. Both the point-target simulation results and the real-data simulation results prove the effectiveness of the improved algorithm.
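The RZ values in Table 3 (a few meters) follow from the standard −3 dB (half-power) mainlobe-width definition of resolution. The sketch below measures it on an ideal sinc-shaped compressed range pulse; the 40 MHz bandwidth is a hypothetical stand-in, not the value from the paper's Table 1.

```python
import numpy as np

c = 299792458.0     # speed of light, m/s
B = 40e6            # hypothetical chirp bandwidth, Hz (not the paper's Table 1 value)

tau = np.linspace(-2e-7, 2e-7, 400001)    # fast time around the compressed peak, s
h = np.sinc(B * tau)                       # ideal (rectangular-spectrum) compressed pulse
mask = np.abs(h) >= 1 / np.sqrt(2)         # samples above the half-power level
width = tau[mask].max() - tau[mask].min()  # two-sided -3 dB width, s
rz = c * width / 2                         # slant-range resolution, m
print(round(rz, 2))   # close to 0.886 * c / (2 * B), i.e. about 3.32 m here
```

The measured width reproduces the familiar rule of thumb RZ ≈ 0.886 c / (2B) for an unweighted spectrum; windowing would broaden the mainlobe and degrade RZ in exchange for lower sidelobes.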

Figure 11. Comparison of imaging results of Huashan Mountain area. (a) Conventional algorithm; (b) improved algorithm based on the fifth-order slant range model.

6. Conclusions

GEO SAR has recently received increasing attention because of its great potential for future spaceborne SAR missions. In this paper, an imaging algorithm for GEO SAR staring observation is proposed. Based on the fifth-order slant range model, the 2-D spectrum is deduced with consideration of the impact of the "stop-and-go" assumption, and a two-step processing method is then used to remove the signal spectrum aliasing. An improved chirp-scaling algorithm is presented for the imaging processing. Finally, based on the orbit and radar parameters, simulation experiments have demonstrated that the algorithm is effective for an L-band GEO SAR with 1800 s integration time. For longer synthetic apertures and wider swaths, the effects of spatial variation, atmospheric refractive index and orbit perturbation on image formation, as well as higher-order slant range models, should be studied in depth. These will be major topics of our future work.
Acknowledgments: Financial support for this study was provided by the National Defense Key Laboratory Foundation of China under Grant 9140C5305020902 and the National Natural Science Foundation of China (project number 61420106007).

Author Contributions: All authors have made great contributions to the work. Caipin Li and Mingyi He carried out the theoretical framework, conceived and designed the experiments; Caipin Li performed the experiments and wrote the manuscript; Mingyi He gave insightful suggestions for the work and revised the manuscript.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Martino, G.D.; Iodice, A.; Riccio, D.; Ruello, G. Disaster Monitoring by Extracting Geophysical Parameters from SAR Data. In Proceedings of the IGARSS, Barcelona, Spain, 23–28 July 2007; pp. 4949–4952.
2. Heygster, G.; Dannenberg, J.; Notholt, J. Topographic Mapping of the German Tidal Flats Analyzing SAR Images with the Waterline Method. IEEE Trans. Geosci. Remote Sens. 2010, 48, 1019–1030.
3. Srivastava, H.S.; Patel, P.; Sharma, Y.; Navalgund, R.R. Large-area soil moisture estimation using multi-incidence-angle RADARSAT-1 SAR data. IEEE Trans. Geosci. Remote Sens. 2009, 47, 2528–2535.
4. Brekke, C.; Solberg, A.H.S. Oil spill detection by satellite remote sensing. Remote Sens. Environ. 2005, 95, 1–13.
5. Kurosu, T.; Fujita, M.; Chiba, K. Monitoring of rice crop growth from space using the ERS-1 C-band SAR. IEEE Trans. Geosci. Remote Sens. 1995, 33, 1092–1096.
6. Le Toan, T.; Quegan, S.; Davidson, M.W.J.; Balzter, H.; Paillou, P.; Papathanassiou, K. The BIOMASS mission: Mapping global forest biomass to better understand the terrestrial carbon cycle. Remote Sens. Environ. 2011, 115, 2850–2860.
7. Henderson, F.M.; Lewis, A.J. Radar Detection of Wetland Ecosystems: A Review. Int. J. Remote Sens. 2008, 29, 5809–5835.
8. Sadjadi, F. New Comparative Experiments in Range Migration Mitigation Methods Using Polarimetric Inverse Synthetic Aperture Radar Signatures of Small Boats. In Proceedings of the IEEE Radar Conference, Cincinnati, OH, USA, 19–23 May 2014; pp. 0613–0616.

9. Sadjadi, F. Radar beam sharpening using an optimum FIR filter. Circuits Syst. Signal Process. 2000, 19, 121–129.
10. Sadjadi, F.A. New Experiments in Inverse Synthetic Aperture Radar Image Exploitation for Maritime Surveillance. In Proceedings of the Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series, Baltimore, MD, USA, 13 June 2014; p. 09.
11. Hobbs, S.; Mitchell, C.; Forte, B.; Holley, R. System design for geosynchronous synthetic aperture radar missions. IEEE Trans. Geosci. Remote Sens. 2014, 52, 7750–7763.
12. Tomiyasu, K.; Pacelli, J.L. Synthetic aperture radar imaging from an inclined geosynchronous orbit. IEEE Trans. Geosci. Remote Sens. 1983, 21, 324–329.
13. Bruno, D.; Hobbs, S.E.; Ottavianelli, G. Geosynchronous synthetic aperture radar: Concept design, properties and possible applications. Acta Astron. 2006, 59, 149–156.
14. Bruno, D.; Hobbs, S.E. Radar imaging from geosynchronous orbit: Temporal decorrelation aspects. IEEE Trans. Geosci. Remote Sens. 2010, 48, 2924–2929.
15. Sheng, W.; Hobbs, S.E. Research on compensation of motion, Earth curvature and tropospheric delay in GEO SAR. Acta Astron. 2011, 68, 2005–2011.
16. Ruiz-Rodon, J.; Broquetas, A.; Makhoul, E.; Monti Guarnieri, A. Nearly zero inclination geosynchronous SAR mission analysis with long integration time for earth observation. IEEE Trans. Geosci. Remote Sens. 2014, 52, 6379–6391.
17. Mittermayer, J.; Wollstadt, S.; Prats-Iraola, P.; Scheiber, R. The TerraSAR-X Staring Spotlight Mode Concept. IEEE Trans. Geosci. Remote Sens. 2014, 52, 3695–3706.
18. Li, C.P.; He, M.Y. A novel attitude steering strategy for GEO SAR staring imaging. In Proceedings of the IEEE China SIP, Chengdu, China, 12–15 July 2015; pp. 630–634.
19. Bao, M.; Xing, M. Chirp scaling algorithm for GEO SAR based on fourth-order range equation. Electron. Lett. 2012, 48, 56–57.
20. Hu, C.; Liu, Z.P.; Long, T. An improved CS algorithm based on the curved trajectory in geosynchronous SAR. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 795–808.
21. Liu, Z.P.; Hu, C.; Zeng, T. Improved secondary range compression focusing method in GEO SAR. In Proceedings of the IEEE ICASSP, Prague, Czech Republic, 22–27 May 2011; pp. 1373–1376.
22. Hu, C.; Long, T.; Liu, Z.P.; Zeng, T. An improved frequency domain focusing method in geosynchronous SAR. IEEE Trans. Geosci. Remote Sens. 2014, 52, 5514–5528.
23. Hu, B.; Jiang, Y.C.; Zhang, S.; Zhang, Y. Generalized Omega-K Algorithm for Geosynchronous SAR Image Formation. IEEE Geosci. Remote Sens. Lett. 2015, 12, 2286–2290.
24. Sun, G.C.; Xing, M.D.; Wang, Y.; Yang, J. A 2-D space-variant chirp scaling algorithm based on the RCM equalization and subband synthesis to process geosynchronous SAR data. IEEE Trans. Geosci. Remote Sens. 2014, 52, 4868–4880.
25. Zhao, B.J.; Han, Y.Z.; Gao, W.J.; Luo, Y.; Han, X. A New Imaging Algorithm for Geosynchronous SAR Based on the Fifth-Order Doppler Parameters. Prog. Electromagn. Res. B 2013, 55, 195–215.
26. Yu, Z.; Lin, P.; Xiao, P.; Kang, L.; Li, C. Correcting Spatial Variance of RCM for GEO SAR Imaging Based on Time-Frequency Scaling. Sensors 2016, 16, 1091.
27. Li, Z.; Li, C.S.; Yu, Z.; Zhou, J.; Chen, J. Back projection algorithm for high resolution GEO-SAR image formation. In Proceedings of the IGARSS, Vancouver, BC, Canada, 24–29 July 2011; pp. 336–339.
28. Ruiz Rodon, J.; Broquetas, A.; Monti Guarnieri, A.; Rocca, F. Geosynchronous SAR focusing with atmospheric phase screen retrieval and compensation. IEEE Trans. Geosci. Remote Sens. 2013, 51, 4397–4404.
29. Dong, X.; Hu, C.; Long, T.; Li, Y. Numerical Analysis of Orbital Perturbation Effects on Inclined Geosynchronous SAR. Sensors 2016, 16, 1420.
30. Hu, C.; Tian, Y.; Zeng, T.; Long, T. Adaptive Secondary Range Compression Algorithm in Geosynchronous SAR. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 9, 1397–1413.
31. Jiang, M.; Hu, W.; Ding, C.; Liu, G. The effects of orbital perturbation on geosynchronous synthetic aperture radar imaging. IEEE Geosci. Remote Sens. Lett. 2015, 12, 1106–1110.

32. Mittermayer, J.; Moreira, A. Spotlight SAR processing using the extended chirp scaling algorithm. In Proceedings of the IEEE IGARSS, Singapore, 3–8 August 1997; pp. 2021–2023.
33. Prats, P.; Scheiber, R.; Mittermayer, J.; Meta, A. Processing of sliding spotlight and TOPS SAR data using baseband azimuth scaling. IEEE Trans. Geosci. Remote Sens. 2010, 48, 770–780.
34. Lanari, R.; Tesauro, M.; Sansosti, E.; Fornaro, G. Spotlight SAR data focusing based on a two-step processing approach. IEEE Trans. Geosci. Remote Sens. 2001, 39, 1993–2004.
35. Liu, F.F.; Ding, Z.G.; Zeng, T.; Long, T. Performance analysis of two-step algorithm in sliding spotlight space-borne SAR. In Proceedings of the Radar Conference, Washington, DC, USA, 10–14 May 2011; pp. 965–968.
36. Sun, G.C.; Xing, M.D.; Wang, Y.; Wu, Y. Sliding spotlight and TOPS SAR data processing without subaperture. IEEE Geosci. Remote Sens. Lett. 2011, 8, 1036–1040.
37. Huang, L.J.; Qiu, X.L.; Hu, D.H.; Ding, C.B. Focusing of medium-earth-orbit SAR with advanced nonlinear chirp scaling algorithm. IEEE Trans. Geosci. Remote Sens. 2011, 49, 500–508.

© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).