Remote Sensing | Article

A Relative Radiometric Calibration Method Based on the Histogram of Side-Slither Data for High-Resolution Optical Satellite Imagery

Mi Wang, Chaochao Chen *, Jun Pan, Ying Zhu and Xueli Chang

State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China; [email protected] (M.W.); [email protected] (J.P.); [email protected] (Y.Z.); [email protected] (X.C.)
* Correspondence: [email protected]

Received: 4 January 2018; Accepted: 27 February 2018; Published: 1 March 2018

Abstract: Relative radiometric calibration, or flat fielding, is indispensable for obtaining high-quality optical satellite imagery from sensors that have more than one detector per band. High-resolution optical push-broom sensors with thousands of detectors per band are now common. Multiple techniques have been employed for relative radiometric calibration. One technique, often called side-slither, in which the sensor array is rotated 90° in yaw relative to normal acquisitions, has been gaining popularity, and has been applied to Landsat 8, QuickBird, RapidEye, and other satellites. Side-slither can be more time-efficient than some of the traditional methods, as only one acquisition may be required. In addition, side-slither does not require any onboard calibration hardware, only a satellite capability to yaw and maintain a stable yawed attitude. A relative radiometric calibration method based on histograms of side-slither data is developed. This method has three steps: pre-processing, extraction of key points, and calculation of coefficients. Histogram matching and Otsu's method are used to extract key points. Three datasets from the Chinese GaoFen-9 satellite were used: one to obtain the relative radiometric coefficients, and the others to verify the coefficients. Root-mean-square deviations of the corrected imagery were better than 0.1%. The maximum streaking metric was less than 1. This method produced significantly better relative radiometric calibration than the traditional method used for GaoFen-9.

Keywords: relative radiometric calibration; push-broom; high-resolution optical satellite; side-slither data; histogram matching; Otsu's method

1. Introduction

With the development of remote sensing technology, a new generation of high-resolution optical remote sensing satellites has been broadly applied to scientific experiments, land surveying, crop yield assessment, and environmental monitoring. The push-broom imaging mode is used by high-resolution satellites because of its high signal-to-noise ratio (SNR) [1]. Ideally, each detector in the push-broom imaging mode should have exactly the same output when the incident radiance at the entrance pupil of the camera is uniform. However, this ideal state is not realized, due to several factors, and the response of each detector is not identical. Thus, streaking noise appears on the image, causing target distortion and visual artifacts that impair target interpretation. To improve the visual quality of the image and the identifiability of targets, the image must be corrected. This process is called relative radiometric correction, and it is a key step in producing high-quality images. Relative radiometric calibration is an indispensable processing procedure for improving the quality of high-resolution optical satellite imagery [2,3].

Remote Sens. 2018, 10, 381; doi:10.3390/rs10030381


Throughout the development of remote sensing technology, many relative radiometric calibration methods have been proposed [4]. According to the different ways of obtaining the correction coefficients, relative radiometric calibration can be divided into three categories: calibration-based methods, scene-based methods, and comprehensive methods, as described by Duan [5]. However, based on their efficiency and applicability, laboratory relative radiometric calibration, ground-based relative radiometric calibration, and statistical relative radiometric calibration are the three most commonly used methods for relative radiometric calibration of high-resolution satellite data. The characteristics of the satellite must be measured prior to launch using the laboratory calibration procedures [6]. The responses of the sensors will change on-orbit, due to environmental changes, particulate contamination, and other factors. Ground-based relative radiometric calibration requires a large area of "flat fields", such as seas, deserts, or glaciers over the full field of view, which may be difficult to locate [7]. The statistical method can acquire high-accuracy radiometric coefficients given a large number of images and typical detector response stability, but it requires a long time to collect data, and thus does not meet the requirement of time efficiency [8].

In light of the drawbacks of the traditional methods discussed above, new calibration data, also called side-slither data, were obtained by using the normalization steered viewing mode [9]. Figure 1 illustrates the classic push-broom viewing mode and the new mode, called the normalization steered viewing mode. This new imaging mode has been applied to satellites that can rotate the detector array 90 degrees on the yaw axis, which means that every detector is ideally aligned to image the same target as it moves along the velocity vector. Numerous satellites incorporate this imaging mode, including Landsat 8 [10], RapidEye [11], Pleiades-HR [12], and QuickBird [13]. Because the CCDs (charge-coupled devices) or detectors are staggered, to ensure that the input of each detector is identical, the satellite needs to take images of a uniform field in the new mode, as used by Landsat 8 and QuickBird. While the arrangement of the Pleiades-HR CCD ensures that the detector passes over the same ground, the pixel response function is piecewise linear, and thus, the input area must be wide enough to contain a useful range of radiances.


Figure 1. (a) Classic push-broom viewing mode; (b) Normalization steered viewing mode, also called side-slither scan.

For this analysis, the sensor response was treated as linear throughout the instrument's range of response. Being linear, histograms of side-slither data could be used for the analyses. Figure 1b shows the process of the side-slither scan and the pixel arrangement of the raw data. Due to different along-track and across-track samplings [14,15], the same features do not lie on a single line in side-slither data; therefore, pre-processing is required. Instead of the enhanced horizontal correction proposed by [16], this paper introduces line detection to ensure the efficiency and accuracy of the calibration data. Then, Otsu's method [17] was used to extract key points that are used for obtaining coefficients. Otsu's method is mainly used for one-dimensional image segmentation, and can be effective for background and object segmentation. This paper adapted Otsu's method to precisely determine the key points of each detector. Within a specified range of histograms, Otsu's method can identify the histogram with the maximum key point of change for energy. This key point accurately reflects the reaction of each detector to the same input. Histogram matching has been used to determine the specified range of histograms. Hence, the accuracy of relative radiometric calibration was verified using histogram matching and Otsu's method. To validate the calibration coefficients, side-slither data have been used for verification of global relative radiometric accuracy. Various objects imaged in the classic push-broom viewing mode have also been used to verify the accuracy of the coefficients. The on-orbit coefficients, which were obtained using histogram matching on data collected from September to December 2015 [18], were used for comparison.

This paper is organized as follows. Section 2 introduces the relative radiometric correction model, which is the basis of the proposed method. In Section 3, details of the relative radiometric calibration method based on the histogram of side-slither data are provided; obtaining high-precision key points through histogram matching and Otsu's method is covered in this section. Section 4 shows the results of image correction using the coefficients, which are described and analyzed in detail. In Section 5, the analysis of the data pre-processing and the results are described in detail. Finally, the conclusions of this study are provided in Section 6.

2. The Model of Relative Radiometric Correction

Using the push-broom imaging system, streaking noise appears on the imagery due to the different responses of each detector. The method proposed herein is based on a linear response model. In this section, the formulas for relative radiometric correction are described, along with the derived relative radiometric coefficients. Before the launch of GaoFen-9 (GF-9), the relationship between input radiance and output DN (digital number) was tested and calibrated using a laboratory calibration system. These relationships for one detector are shown in Figure 2, and appear linear.

Figure 2. The relationship between input radiance and output DN (Digital Number).


Therefore, the response of the detectors can be described by Equation (1), which represents the conversion from incident radiance to digital counts:

DN_i = G_i × L + B_i,  i = 0, 1, ..., n − 1  (1)

where DN_i is the raw detector response in digital counts for the i-th detector, L is the incident radiance, G_i is the gain of the i-th detector, B_i is the bias measured in the absence of light for the i-th detector, and n is the number of detectors in the array. Ideally, the output DN values are the same if each detector has the same response to the incident radiance L. Under this assumption, the response in the ideal case can be described as follows:

DN = Ḡ × L + B̄  (2)

where DN is the ideal response in digital counts, Ḡ is the gain, and B̄ is the bias measured in the absence of light. Equation (3) can be obtained by combining Equations (1) and (2):

DN = g_i × DN_i + b_i,  i = 0, 1, ..., n − 1  (3)

where g_i = Ḡ/G_i and b_i = B̄ − (Ḡ/G_i) × B_i; (g_i, b_i) are the relative radiometric coefficients of the i-th detector. In the laboratory calibration system, the mean response of the detectors to the same incident radiance is taken as the standard output DN, and the least squares method is then used to obtain (g_i, b_i). In the relative radiometric calibration method based on side-slither data, the accuracy of the relative radiometric coefficients (g_i, b_i) depends on the accuracy of (DN_i, DN). We explain how to obtain highly accurate values of (DN_i, DN) in Section 3.
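To make the correction model concrete, the sketch below applies Equation (3) with NumPy. The gains, biases, and radiances are hypothetical values chosen for illustration; this is not the paper's code, only a minimal demonstration that the coefficients g_i = Ḡ/G_i and b_i = B̄ − (Ḡ/G_i) × B_i flatten every detector column to the ideal response.

```python
import numpy as np

# Hypothetical per-detector responses DN_i = G_i * L + B_i (Equation (1))
G = np.array([1.0, 2.0])    # gains G_i of two detectors
B = np.array([0.0, 10.0])   # biases B_i

# Ideal response uses the mean gain and bias (Equation (2))
G_bar, B_bar = G.mean(), B.mean()

# Relative radiometric coefficients (Equation (3))
g = G_bar / G               # g_i = G_bar / G_i
b = B_bar - g * B           # b_i = B_bar - (G_bar / G_i) * B_i

# Simulate both detectors viewing the same radiances L
L = np.array([[100.0], [200.0]])               # one radiance per row
raw = G[np.newaxis, :] * L + B[np.newaxis, :]  # raw DN, one column per detector

corrected = g[np.newaxis, :] * raw + b[np.newaxis, :]
ideal = G_bar * L + B_bar
print(np.allclose(corrected, ideal))           # prints: True
```

With these numbers, both detector columns correct to 155 and 305 DN, matching the ideal response exactly.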

3. Methods

Figure 3 shows the relative radiometric calibration method based on side-slither data. The method can be divided into three parts: pre-processing, extraction of key points, and determination of coefficients. Pre-processing ensures that each input line sees the same targets. Key point extraction is the primary technique in the process; histogram matching and Otsu's method were used to extract the key points. Finally, least squares was used to obtain the relative radiometric coefficients.


Figure 3. The relative radiometric calibration method based on side-slither data.

3.1. Pre-Processing

Figure 1 shows the normalization steered viewing mode, in which the same features were not on the same line in the raw side-slither data. To obtain an accurate calibration, data pre-processing is


essential [16], as it guarantees high-precision relative radiometric coefficients and ensures that each row has the same features.

Figure 4 describes the results of data pre-processing. Figure 4a shows the raw side-slither data, in which the red box indicates the valid image data. The upper left corner and the lower right corner of the raw side-slither data are invalid areas, which will be discarded before the basic adjustment. Figure 4b shows the primary corrected image obtained after the basic adjustment. Figure 4c is the resultant image after line detection was applied. Figure 4d shows the image that was precisely adjusted using the results of line detection. The adjustment was based on Equation (4), which is described as follows:

DN_ij = DN_kj,  k = i + n − j − 1,  i = 0, 1, ..., m − n − 1,  j = 0, 1, ..., n − 1  (4)

where DN_ij is the pixel value of the primary corrected image, DN_kj is the pixel value of the raw side-slither data, m is the total number of rows in the raw side-slither data, and n is the total number of columns in the raw side-slither data. Figure 5 shows the process of the basic adjustment using Equation (4).
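A minimal NumPy sketch of this basic adjustment, under the assumption that the shift index takes the form k = i + n − j − 1 given above; the toy frame staggers detector j by j lines, mimicking the side-slither geometry (the array values are illustrative only):

```python
import numpy as np

def basic_adjustment(raw):
    """Integer column-wise shift of Equation (4): DN_ij = DN_kj,
    k = i + n - j - 1, so each output row gathers the samples in which
    the staggered detectors saw the same ground line (no resampling)."""
    m, n = raw.shape
    out = np.empty((m - n, n), dtype=raw.dtype)   # i = 0 .. m - n - 1
    for j in range(n):
        out[:, j] = raw[n - j - 1 : m - j - 1, j]
    return out

# Toy side-slither frame: detector j records ground line (r + j) at raw row r
raw = np.add.outer(np.arange(8), np.arange(4)).astype(float)
aligned = basic_adjustment(raw)
print(np.ptp(aligned, axis=1))   # all zeros: rows are constant across detectors
```

Only integer shifts are applied, consistent with the no-resampling note above.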


Figure 4. Results of data pre-processing. (a) Raw side-slither data; (b) Primary image corrected using Equation (4); (c) Results of line detection applied to the primary image; (d) Standard image corrected using Equation (8) with the primary image.


Figure 5. The process of the basic adjustment using Equation (4).

In the ideal case, each line of the primary image must have the same features. Figure 4b shows that this ideal state was not realized, due to differences in the horizontal and vertical resolutions of the TDI-CCD (time delay integration charge-coupled device) detector. Therefore, further corrections of the primary image were needed. In this paper, the line detection algorithm was used for the further correction. First, a Gaussian filter was applied to the primary image to improve the SNR. Second, the Sobel operator was used to detect lines in the image. Finally, the lines whose lengths did not reach half of the image width were removed, and the slopes of the remaining lines were calculated. The slopes were used for the precise adjustment of the primary image.

The Gaussian filter is essentially a signal filter whose purpose is to smooth the signal, and it is used to obtain an image with a higher SNR. The filter is a mathematical model through which the image data are converted to exclude the low-energy data that contain noise. If an ideal filter is used, a ringing artifact occurs in the image. With Gaussian filters, the system functions are smooth and avoid ringing artifacts. We designed a Gaussian filter, calculated an appropriate mask, and then convolved the raw image with the mask. Here, the standard deviation σ of the Gaussian filtering function controls the degree of smoothing, where the Gaussian filtering function is described as follows:

G(x, y) = (1 / (2πσ²)) × exp(−(x² + y²) / (2σ²))  (5)

The Sobel operator [19], sometimes called the Sobel–Feldman operator or Sobel filter, is primarily used for edge detection. Technically, it is a discrete differentiation operator that approximates the gradient of the image intensity function. The Sobel operator is a typical edge detection operator based on the first derivative. Because it utilizes a local average, the operator has a smoothing effect, eliminating the impact of noise. The Sobel operator weights the position of the pixel more effectively than the Prewitt operator or the Roberts operator. The Sobel operator consists of two 3 × 3 matrices, the horizontal and vertical templates, which are convolved with the image to obtain approximate differences in luminance in the horizontal and vertical directions. In practical applications, the following two templates are commonly used to detect lines in images:

∇_x = [−1 0 1; −2 0 2; −1 0 1],  ∇_y = [1 2 1; 0 0 0; −1 −2 −1]  (6)

At each point in the image, the resulting gradient approximations can be combined to determine the gradient magnitude using:

∇ = √(∇_x² + ∇_y²)  (7)
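The smoothing and gradient steps of Equations (5)–(7) can be sketched in plain NumPy. The mask size, σ, and the step-edge test image are assumptions for illustration; a production pipeline would use an optimized convolution, but the arithmetic is the same:

```python
import numpy as np

def gaussian_kernel(size, sigma):
    """Discrete mask of Equation (5), normalized so the weights sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def filter2d(img, kernel):
    """Minimal 'valid' 2-D correlation, adequate for these small masks."""
    kh, kw = kernel.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((h, w))
    for r in range(kh):
        for c in range(kw):
            out += kernel[r, c] * img[r:r + h, c:c + w]
    return out

# Sobel templates of Equation (6)
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]], dtype=float)

def gradient_magnitude(img, sigma=1.0):
    """Gaussian smoothing, then the magnitude of Equation (7)."""
    smooth = filter2d(img, gaussian_kernel(5, sigma))
    gx, gy = filter2d(smooth, SOBEL_X), filter2d(smooth, SOBEL_Y)
    return np.sqrt(gx**2 + gy**2)

img = np.zeros((12, 12))
img[:, 6:] = 100.0                 # vertical step edge
mag = gradient_magnitude(img)      # gradient energy peaks along the edge
```

Thresholding `mag` and keeping only long connected lines would correspond to the binarization and short-line removal steps described next.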


Figure 4c shows the resulting image after the Gaussian filtering, Sobel filtering, and binarization procedures. To facilitate the subsequent calculation of the slope of the lines, the binarization procedure assigns the pixels of the detected lines to 1 and the other pixels to 0. The algorithm can detect more straight lines and calculate their actual slopes after the removal of short straight lines. Figure 4d shows an image after data pre-processing.

DN_ij = DN_kj,  k = i + INT(s × j)  (8)

where i = 0, 1, ..., m − INT(s × n) − 1, j = 0, 1, ..., n − 1, and s represents the slope of the lines in Figure 4c. Note that during pre-processing, only integer shifts are applied, to avoid resampling the data.

3.2. Extraction of Key Points

Obtaining accurate key points is essential for calculating high-accuracy coefficients of relative radiometric correction from Equation (3). We define DN_i (i = 1, ..., n) as key points. The key points are the optimum thresholds, which were obtained using Otsu's method; they are used to fit the model. Because of its small computational cost and low variance under translation, rotation, and scaling, the histogram technique is widely used in image processing. In this paper, histograms were used to obtain the key points, because this technique is not sensitive to the location of a single pixel. First, histograms were used for rough matching using a histogram-matching method. Then, Otsu's method was used to obtain the precise key point using a histogram within a specific grayscale region. Assume that r is in the range [0, L − 1], here L = 1024; the normalized histogram of the i-th detector, p_i(r), is

p_i(r) = n_ir / MN_i  (9)

where MN_i is the total number of pixels in the i-th detector, and n_ir is the number of pixels with intensity r in the i-th detector. Meanwhile, the normalized histogram of the whole image is p(r) = n_r / MN, r = 0, 1, ..., L − 1, where MN is the total number of pixels in the whole image, and n_r is the number of pixels with intensity r in the whole image.

3.2.1. Histogram Matching

Histogram matching [20] maps the cumulative distribution function (CDF) of each detector to a reference CDF. The CDF of the i-th detector, P_i(r), was:

P_i(r) = Σ_{t=0}^{r} p_i(t)  (10)

where P(r) = Σ_{t=0}^{r} p(t), r = 0, 1, ..., L − 1, is the CDF of the normalized histogram of the entire image. For the i-th detector, every value of r (i.e., r = 0, 1, ..., L − 1) was matched to the corresponding value in the specified histogram so that P_i(r) was as close as possible to P(r). A normalization lookup table was created for each detector to map every DN q to a reference DN q′.

3.2.2. Otsu's Method

The Otsu criterion [17] is optimum in the sense that it maximizes the between-class variance, a well-known measure used in statistical discriminant analysis. Assuming that r is in the range [S_0, S_1], we recalculated p_i(r) and P_i(r) at the same time. The cumulative mean of the i-th detector, m_i(h), up to level h was

$$m_i(h) = \sum_{t=S_0}^{h} (t - S_0)\, p_i(t), \quad h \in [S_0, S_1] \qquad (11)$$

Remote Sens. 2018, 10, 381


And the global mean was given by

$$m_{iG} = \sum_{t=S_0}^{S_1} (t - S_0)\, p_i(t) \qquad (12)$$

The between-class variance was given by

$$\sigma_{iB}^2(h) = \frac{\left[ m_{iG}\, P_i(h) - m_i(h) \right]^2}{P_i(h)\left[ 1 - P_i(h) \right]} \qquad (13)$$

Then, the optimum threshold is the value, h*, that maximizes σ²_iB(h) in the range [S_0, S_1]:

$$\sigma_{iB}^2(h^*) = \max_{S_0 \le h \le S_1} \sigma_{iB}^2(h) \qquad (14)$$

Next, the change in the optimum threshold after a linear transformation is determined as follows. Suppose that the linear transformation is y = a × x + b, the image prior to the transformation is called A, and the image after the transformation is called A′. For image A, if h is the dividing point, then ω_0 is the proportion of foreground points, ω_1 is the proportion of background points, µ_0 is the foreground grayscale mean, and µ_1 is the background grayscale mean. The optimum threshold h* is the dividing point that maximizes the objective function:

$$g = \omega_0 \times \omega_1 \times (\mu_0 - \mu_1)^2 \qquad (15)$$

Since the transformation is linear, the point h′ corresponding to h in image A can be found in image A′, and h′ = a × h + b is satisfied. In image A′, for h′, the proportion of foreground points is ω_0′ = ω_0, the proportion of background points is ω_1′ = ω_1, the foreground grayscale mean is µ_0′ = a × µ_0 + b, and the background grayscale mean is µ_1′ = a × µ_1 + b. Thus, the objective function g′ in image A′ can be written as:

$$g' = \omega_0' \times \omega_1' \times (\mu_0' - \mu_1')^2 = \omega_0 \times \omega_1 \times (a\mu_0 - a\mu_1)^2 = a^2 \times g \qquad (16)$$
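The scaling behavior stated in Equations (15) and (16) can be checked numerically. The sketch below is illustrative only (a brute-force search over the observed DN levels, with hypothetical helper names), not the implementation used for GaoFen-9:

```python
import numpy as np

def otsu_threshold(values):
    """Exhaustive Otsu: return (h*, g(h*)) maximizing the objective
    g = w0*w1*(mu0 - mu1)^2 of Eq. (15) over the observed DN levels
    (class 0: value <= h, class 1: value > h)."""
    values = np.asarray(values, dtype=float)
    levels = np.unique(values)
    best_h, best_g = None, -1.0
    for h in levels[:-1]:               # both classes must be non-empty
        fg, bg = values[values <= h], values[values > h]
        w0, w1 = fg.size / values.size, bg.size / values.size
        g = w0 * w1 * (fg.mean() - bg.mean()) ** 2
        if g > best_g:
            best_h, best_g = h, g
    return best_h, best_g

rng = np.random.default_rng(0)
# bimodal synthetic "image": two DN populations
x = np.concatenate([rng.integers(40, 60, 500), rng.integers(120, 140, 500)])
a, b = 2, 10                            # linear transform y = a*x + b
y = a * x + b

h_x, g_x = otsu_threshold(x)
h_y, g_y = otsu_threshold(y)
assert h_y == a * h_x + b               # threshold maps linearly
assert np.isclose(g_y, a ** 2 * g_x)    # objective scales by a^2 (Eq. (16))
```

Because the transform maps candidate thresholds one-to-one and multiplies every objective value by a², the position of the maximum is preserved under h → a × h + b.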

Therefore, for each (h, g) in image A there is a one-to-one corresponding (h′, g′) in image A′. If h* is the optimum threshold for image A, then h*′ is the optimum threshold for image A′, and the two satisfy h*′ = a × h* + b. From the above, it can be concluded that the optimum threshold before and after a linear transformation also obeys the linear transformation rule of each detector: for the same input, the values obtained after applying each detector's linear transformation are still the optimum thresholds of the output. Therefore, Otsu's method can be utilized to extract high-precision key points. The analysis above also shows that Otsu's method requires the input radiance range of each detector to be the same, and histogram matching serves this function.

3.3. Obtaining Coefficients

In Section 3.2, the key points were obtained. For the i-th detector, h*_ik was obtained using Otsu's method, and the average response to the same input could be calculated as follows:

$$Y_k = \frac{1}{N} \sum_{i=0}^{N-1} h_{ik}^{*}, \quad k = 1, 2, \ldots, m \qquad (17)$$


where m is the total number of key points obtained for the i-th detector, and N is the total number of detectors. The relative radiometric coefficients were computed using the least squares method:

$$LSC(i) = \sum_{k=1}^{m} \left( Y_k - g_i \times h_{ik}^{*} - b_i \right)^2 \qquad (18)$$

For the i-th detector, the relative radiometric coefficients (g_i, b_i) were computed to minimize LSC(i).

3.4. Processing of the Proposed Method

Here, the procedure for obtaining the coefficients with the proposed relative radiometric calibration method is described in detail:

1. Define n different DN values q_k, k = 0, 1, . . . , n − 1, in the range [0, L − 1] of the whole image.
2. For the i-th detector:
   • Create a normalization look-up table that maps every DN q to a reference DN q′ through histogram matching.
   • Obtain n different DN values q_ik, k = 0, 1, . . . , n − 1, by locating the corresponding values q_k in the look-up table.
   • Based on the n DN values q_ik, construct m = n − 1 ranges [q_i(k−1), q_ik], k = 1, 2, . . . , m.
   • For each range [q_i(k−1), q_ik], obtain the maximum between-class variance point h*_ik, k = 1, 2, . . . , m, using Otsu's method.
3. Step 2 yields m points for the i-th detector. Suppose the number of detectors is N; then, for index k, the average response is Y_k = (1/N) ∑_{i=0}^{N−1} h*_ik, k = 1, 2, . . . , m.
4. For the i-th detector, obtain the relative radiometric coefficients (g_i, b_i) by minimizing LSC(i) = ∑_{k=1}^{m} (Y_k − g_i × h*_ik − b_i)².

Following these steps, the relative radiometric coefficients of every detector can be obtained.
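The steps above can be sketched end to end. The following is a minimal sketch under stated assumptions: the quantile-based choice of the q_k values, the brute-force Otsu search, and NumPy's `polyfit` least-squares fit are illustrative choices of ours, not details taken from the paper:

```python
import numpy as np

def build_matching_lut(detector_col, reference_img, levels=1024):
    """Histogram matching (Section 3.2.1): map each DN q of one detector
    to the reference DN q' whose global CDF value is closest to the
    detector's CDF value at q."""
    cdf_det = np.cumsum(np.bincount(detector_col, minlength=levels)) / detector_col.size
    cdf_ref = np.cumsum(np.bincount(reference_img.ravel(), minlength=levels)) / reference_img.size
    return np.searchsorted(cdf_ref, cdf_det).clip(0, levels - 1)

def otsu_in_range(col, lo, hi):
    """Otsu's method restricted to DNs in [lo, hi] (Section 3.2.2):
    return the threshold h* maximizing the between-class variance."""
    vals = col[(col >= lo) & (col <= hi)].astype(float)
    best_h, best_g = lo, -1.0
    for h in range(int(lo), int(hi)):
        fg, bg = vals[vals <= h], vals[vals > h]
        if fg.size == 0 or bg.size == 0:
            continue
        w0, w1 = fg.size / vals.size, bg.size / vals.size
        g = w0 * w1 * (fg.mean() - bg.mean()) ** 2
        if g > best_g:
            best_h, best_g = h, g
    return best_h

def calibrate(image, n_points=8, levels=1024):
    """Steps 1-4 above: key points h*_ik per detector (columns of
    `image`), averaged responses Y_k, then a least-squares line fit
    of Y_k against h*_ik for every detector (Eqs. (17) and (18))."""
    n_det = image.shape[1]
    # step 1: n DN values q_k spread over the whole image's dynamic range
    qk = np.quantile(image, np.linspace(0.05, 0.95, n_points)).astype(int)
    H = np.empty((n_det, n_points - 1))
    for i in range(n_det):
        col = image[:, i]
        lut = build_matching_lut(col, image, levels)
        # step 2: q_ik = detector DN whose LUT value reaches q_k (inverse LUT),
        # then one Otsu key point inside each range [q_i(k-1), q_ik]
        qik = np.searchsorted(lut, qk)
        H[i] = [otsu_in_range(col, qik[k - 1], qik[k]) for k in range(1, n_points)]
    Yk = H.mean(axis=0)                      # step 3: average response Y_k
    coeffs = np.empty((n_det, 2))
    for i in range(n_det):                   # step 4: minimize Eq. (18)
        coeffs[i] = np.polyfit(H[i], Yk, 1)  # gain g_i, bias b_i
    return coeffs
```

Applying DN′ = g_i × DN + b_i to each detector's column with the returned coefficients then flattens the detector-to-detector response.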

4. Results

4.1. Experimental Data

GF-9 is an optical remote sensing satellite with sub-meter ground resolution, launched in September 2015 as part of the China High-Resolution Earth Observation System major science and technology project. The focal plane of GF-9 uses the optical butting system, which is similar to that of the multispectral sensors of ZY-3 [21]. The optical butting system ensures that all the detectors are aligned in a line on the focal plane [22]; therefore, each detector can image the same features during a side-slither scan. Owing to the satellite's strong agility and attitude-control capabilities, side-slither data could be acquired with this high-resolution remote sensing satellite in its steered viewing mode.

In this paper, three groups of remote sensing image data were used to calculate and verify the relative radiometric coefficients. To distinguish the coefficients in this text, we use the term on-orbit coefficients for the relative radiometric coefficients obtained using histogram matching on data collected from September to December 2015 [18], while the side-slither coefficients are the relative radiometric coefficients calculated using the proposed method. To verify the accuracy of the side-slither coefficients, they must be compared to the results of the on-orbit coefficients. Table 1 shows the details of the experimental data. Here, three groups of data were introduced: Group A was the calibration data, and Groups B and C were the verification data:


Table 1. Details of the experimental data.

| Group Name | Data Type | Imaging Time | TDI Level |
|---|---|---|---|
| Group A | Side-slither data (calibration data) | 16 October 2015 | 48 |
| Group B | Side-slither data (verification data) | 29 October 2015 | 48 |
| Group C | Water (verification data) | 02 October 2015 | 32 |
| Group C | City (verification data) | 02 October 2015 | 32 |
| Group C | Desert (verification data) | 10 October 2015 | 48 |

Group A: To obtain the relative radiometric coefficients with the proposed method, side-slither data were collected on 16 October 2015 and used to calculate the relative radiometric coefficients. This side-slither image had a total of 625,920 lines.

Group B: To verify the accuracy of the relative radiometric coefficients, side-slither data, which guarantee that the same features are recorded by each detector, were used; the standard data obtained after pre-processing were considered as a uniform field. In this paper, side-slither data collected on 29 October 2015 were used as verification data. This side-slither image had a total of 443,136 lines.

Group C: To verify the accuracy of the relative radiometric coefficients for various objects, i.e., the effects of the coefficients on different types of objects, images taken in the classic push-broom viewing mode were used. In this paper, three types of objects were used to verify the effects of the coefficients: water, city, and desert. The image of the water was taken on 2 October 2015, that of the city was taken on 2 October 2015, and the desert image was taken on 10 December 2015.

4.2. Accuracy Assessment

To evaluate the effects of the coefficients more objectively, different indices were applied to the different validation datasets. The side-slither data of Group B described in Section 4.1, which can be treated as a uniform field because side-slither data have the same input radiance for each detector, were evaluated with the relative radiometric accuracy indices "root-mean-square deviation of the mean line" (RA) and "generalized noise" (RE), as well as the streaking metrics [10]. The image data of Group C were quantitatively evaluated using the streaking metrics, the improvement factor (IF), and the structural similarity index (SSIM). The IF index was used to verify image quality before and after correction, whereas SSIM was used to evaluate the similarity of the two images.
4.2.1. Relative Radiometric Accuracy Indices

The indices RA and RE [23] are used to evaluate uniform-field images, and are also referred to as the relative radiometric accuracy. RA and RE are calculated as follows:

$$RA = \frac{\sqrt{\sum_{i=1}^{n} \left( Mean_i - Mean \right)^2 / n}}{Mean} \times 100\% \qquad (19)$$

$$RE = \frac{\left( \sum_{i=1}^{n} \left| Mean_i - Mean \right| \right) / n}{Mean} \times 100\% \qquad (20)$$

where Meani is the mean value of the i-th detector, Mean is the mean value of the image, and n is the total number of columns in the image. The smaller the values of RA and RE, the higher the accuracy.
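As a sketch, both indices can be computed directly from column statistics; `ra_re` is a hypothetical helper assuming a 2-D array with one image column per detector:

```python
import numpy as np

def ra_re(image):
    """RA and RE of a uniform-field image (Eqs. (19) and (20)):
    RMS and mean-absolute deviation of the per-detector (column)
    means from the whole-image mean, as percentages."""
    col_means = image.mean(axis=0)          # Mean_i for each detector
    mean = image.mean()                     # Mean of the whole image
    ra = np.sqrt(np.mean((col_means - mean) ** 2)) / mean * 100.0
    re = np.mean(np.abs(col_means - mean)) / mean * 100.0
    return ra, re
```

For a perfectly flat image both indices are zero; any detector-to-detector offset raises them.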


4.2.2. Streaking Metrics

The streaking metrics are sensitive to detector-to-detector non-uniformity (streaking) [10,24,25], and are therefore used for detector-to-detector comparison. They can be described as follows:

$$streaking_i = \frac{\left| Mean_i - \frac{1}{2}\left( Mean_{i-1} + Mean_{i+1} \right) \right|}{\frac{1}{2}\left( Mean_{i-1} + Mean_{i+1} \right)} \times 100 \qquad (21)$$

where Mean_i is the mean value of the i-th detector. The lower the streaking metrics, the more uniform the image.

4.2.3. Improvement Factor (IF)

To quantitatively evaluate the correction effects discussed above, the IF [26,27] was applied to the corrected image. It is defined as follows:

$$d_R[i] = \mu_{IR}[i] - \mu_I[i] \qquad (22)$$

$$d_E[i] = \mu_{IE}[i] - \mu_I[i] \qquad (23)$$

$$IF = 10 \log_{10} \left[ \frac{\sum_i d_R^2[i]}{\sum_i d_E^2[i]} \right] \qquad (24)$$

where µ_IR[i] and µ_IE[i] are the mean values of the i-th detector in the raw and corrected images, respectively, and µ_I[i] is the curve obtained by low-pass filtering of µ_IE[i]. The greater the IF, the higher the quality of the image.

4.2.4. Structural Similarity Index (SSIM)

Structural similarity theory holds that images are highly structured (i.e., there is a strong correlation between pixels), particularly between pixels close to each other in the image domain. Based on this theory, the SSIM [28] defines structural information from the perspective of image composition as luminance, contrast, and structure: the mean value is used to estimate luminance, the standard deviation approximates contrast, and the covariance measures the degree of structural similarity. The mathematical model is as follows:

$$SSIM(IR, IE) = \frac{\left( 2 \mu_{IR} \mu_{IE} + c_1 \right)\left( 2 \sigma_{IR\,IE} + c_2 \right)}{\left( \mu_{IR}^2 + \mu_{IE}^2 + c_1 \right)\left( \sigma_{IR}^2 + \sigma_{IE}^2 + c_2 \right)} \qquad (25)$$

where µ_IR and µ_IE are the mean values of the raw and corrected images, respectively; σ_IR and σ_IE are the standard deviations of the raw and corrected images, respectively; and σ_IRIE is the covariance of the raw and corrected images. c_1 = (k_1 L)² and c_2 = (k_2 L)² are two stabilizing variables, where L is the dynamic range of the image and k_1 = 0.01, k_2 = 0.03 are default values. The result of SSIM is a decimal number between −1 and 1; a value of 1 indicates that the two images being compared are identical.

4.3. Verification Results for Raw Image in Group B

It is difficult to verify the quality of remote sensing images using uniform-field data, because even the most uniform deserts have features that diverge from an absolutely uniform field. However, based on the characteristics of the side-slither data and the standard image obtained from data pre-processing, we used the most uniform feature in the linear direction, so high-quality test data were available to verify the global relative radiometric accuracy. Here, a portion of the Group B data was used to verify the relative radiometric coefficients.

Figure 6 shows the results of the relative radiometric correction. Figure 6a shows the standard data that resulted from pre-processing of the raw side-slither data. Figure 6b shows the result of


relative radiometric correction using the on-orbit coefficients with the standard data, and Figure 6c shows the result of relative radiometric correction using the side-slither coefficients with the standard data. Analysis of the whole image shows that the standard image has obvious streaking noise, while the non-uniformity disappears in Figure 6b,c.

Figure 7a–c shows detailed views of the red boxes marked in Figure 6a–c. Figure 7a shows the detail of the standard image obtained from pre-processing of the raw side-slither data; non-uniformity is apparent in this detailed image. Figure 7b shows the detail of the red box shown in Figure 6b. Compared to the standard data in Figure 7a, Figure 7b shows greater uniformity; however, non-uniformity is still easily observed. Figure 7c shows the detail of the red box shown in Figure 6c; the non-uniformity in this detailed image has disappeared. Nevertheless, detailed quantitative analysis is necessary. Therefore, column mean value analysis and streaking metrics analysis of the corrected images are required.

Figure 6. The results of relative radiometric correction. (a) Standard data that resulted from data pre-processing of the raw side-slither data; (b) Result of using on-orbit coefficients; (c) Result of using side-slither coefficients.

Figure 7. The detail of results. (a) Detailed image of the red box shown in Figure 6a; (b) Detailed image of the red box shown in Figure 6b; (c) Detailed image of the red box shown in Figure 6c.

Due to the characteristics of side-slither data, the column mean clearly reflected the effects of the coefficients. Figure 8 shows the column mean value analysis of the standard image and the corrected images. The abscissa represents the detector number, and the ordinate represents the DN value of the column mean of each detector. Figure 8a shows the column means of the standard image. It is apparent that the column mean of the standard image exhibited large variation, with a range of about 40 DN, which is consistent with the image shown in Figure 6a. Figure 8a also shows that there are seven detector groups. The responses between the detector groups were very different, so the changes of the mean were very noticeable, resulting in the significant non-uniformity between detector groups shown in Figure 6a. The responses of the detectors inside each detector group were similar, and the variations of the mean were not significant, so the non-uniformity inside each detector group is not obvious in Figure 6a. Figure 8b shows the column means of the images corrected using the on-orbit coefficients and the side-slither coefficients. The blue curve represents the correction results using the on-orbit coefficients, and the red curve represents the correction results using the side-slither coefficients. It is obvious that the range of variation was smaller, about 6 DN, and that the on-orbit corrected image exhibited the non-uniformity shown by the blue curve. On the other hand, the side-slither coefficients performed well; the range of variation was about 2 DN for the red curve. For further analysis, streaking metrics analyses were performed.


Figure 8. Column mean value analysis of the standard image and corrected images. (a) The column means of the standard image; (b) The column means of the images corrected using the on-orbit coefficients and the side-slither coefficients.

Figure 9 shows the streaking metrics analysis of the standard image and the corrected images. The abscissa represents the detector number, and the ordinate represents the value of the streaking metrics for each detector. Figure 9a shows the streaking metrics of the standard image. It shows the same reaction as Figure 8a, and the maximum streaking metrics reaches 2.5. Figure 9b shows the streaking metrics of the images corrected with the on-orbit coefficients or the side-slither coefficients. The blue dots represent the correction results from the on-orbit coefficients and the red dots represent those from the side-slither coefficients. It can be concluded that the red dots were lower than the blue dots, which indicates that the side-slither coefficients performed better than the on-orbit coefficients. This method performed well, as supported by the results of the column mean analysis and the streaking metric analysis. Next, the quantitative indices RA and RE and the mean of the streaking metrics for these three images were determined.

Figure 9. Streaking metrics analysis of the standard image and corrected images. (a) The streaking metrics of the standard image; (b) The streaking metrics of the images corrected using the on-orbit coefficients and side-slither coefficients.

Table 2 shows the quantitative values of these three images shown in Figure 6a–c. The quantitative values of mean, standard deviation, RA (root-mean-square deviation of the mean line), RE (generalized


noise), and the maximum streaking metrics were used to compare the effects of the on-orbit coefficients and the side-slither coefficients. For uniform fields, RA and RE reflect the uniformity of the entire image: the smaller the values of RA and RE, the higher the uniformity. In each line of the results, the target seen by each detector can be considered the same. The RA of the standard data is 22.4864%, which shows the non-uniformity of the standard data. The RA and RE values of the image corrected using the side-slither coefficients were better than 0.1%. It is obvious that the image corrected with the side-slither coefficients was better than that corrected with the on-orbit coefficients. Furthermore, the lower the streaking metrics, the more uniform the image; the maximum streaking metrics of the standard data, the on-orbit result, and the side-slither result were 2.5835, 0.3456 and 0.0145, respectively, so the maximum streaking metrics of the image corrected using the side-slither coefficients was the lowest. The non-uniformity of the standard image was obvious from the previous analyses, as well as from this table. Of course, it is better to keep the mean value of the corrected images the same as that of the standard data, and the mean values of these three images did not vary significantly. The quantitative values of the image corrected using the side-slither coefficients were better than those of the standard data and of the image corrected using the on-orbit coefficients. It can be concluded that the side-slither coefficients perform better than the on-orbit coefficients.

Table 2. Comparison of quantitative values of the images (coef: coefficients).

| Correction Method | Mean Value | Standard Deviation | RA (%) | RE (%) | Maximum Streaking Metrics |
|---|---|---|---|---|---|
| Standard data | 565.7841 | 146.0522 | 22.4864 | 1.6745 | 2.5835 |
| On-orbit coef | 564.7518 | 145.5909 | 0.2657 | 0.1751 | 0.3456 |
| Side-slither coef | 564.8308 | 145.6084 | 0.0082 | 0.0335 | 0.0145 |
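For reference, the streaking metric of Equation (21) used throughout these comparisons can be sketched as follows; `streaking` is a hypothetical helper assuming one image column per detector:

```python
import numpy as np

def streaking(image):
    """Streaking metric of Eq. (21) for the interior detectors
    i = 1 .. n-2: relative deviation of each column mean from the
    average of its two neighbours, scaled by 100."""
    m = image.mean(axis=0)                  # Mean_i per detector
    neigh = 0.5 * (m[:-2] + m[2:])          # (Mean_{i-1} + Mean_{i+1}) / 2
    return np.abs(m[1:-1] - neigh) / neigh * 100.0
```

The maximum of the returned array corresponds to the "maximum streaking metrics" reported in Table 2.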

4.4. Verification Results for Raw Images in Group C

To verify that the side-slither coefficients also perform well on images taken in the classic push-broom viewing mode, three types of image data were chosen as verification images: water, city, and desert. In this section, the visual effects and quantitative indices are analyzed for images corrected using the on-orbit and side-slither coefficients.

Figure 10 shows the raw and corrected images of the water. Figure 10a shows the raw image of the water, while Figure 10b,c show the images corrected using the on-orbit coefficients and side-slither coefficients, respectively. Figure 11 shows the raw and corrected images of the city. Figure 11a shows the raw city image, while Figure 11b,c show the images corrected using the on-orbit coefficients and side-slither coefficients, respectively. Figure 12 shows the raw and corrected images of the desert. Figure 12a shows the raw desert image, while Figure 12b,c show the images corrected using the on-orbit coefficients and side-slither coefficients, respectively.

Streaking can be clearly observed in Figures 10a and 12a, while it is hard to observe in Figure 11a. This occurred because the objects in the two images shown in Figures 10a and 12a were uniform fields (i.e., the dynamic range of these images was narrow); when the images are displayed stretched, a small difference between pixel DNs is magnified. In contrast, Figure 11a is the raw image of the city, which has a large dynamic range due to the various reflective objects in the city, so the streaking is not easily observed, and an analysis of the streaking metrics is required. From the visual results, Figure 10b clearly shows streaking, while the on-orbit coefficients seem to perform well in Figures 11b and 12b. From Figures 10c, 11c and 12c, the side-slither coefficients also appear to perform well. The streaking metrics are very important for verifying the performance of the coefficients in detail.
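The SSIM comparison defined in Equation (25) can be sketched from global statistics. This single-window variant is a simplification (SSIM is commonly computed over local sliding windows), and the 10-bit dynamic range L = 1023 is an assumption of ours:

```python
import numpy as np

def ssim_global(ir, ie, L=1023, k1=0.01, k2=0.03):
    """SSIM between a raw image IR and a corrected image IE per
    Eq. (25), evaluated once from global image statistics."""
    ir, ie = ir.astype(float), ie.astype(float)
    mu_r, mu_e = ir.mean(), ie.mean()
    var_r, var_e = ir.var(), ie.var()          # sigma^2 terms
    cov = ((ir - mu_r) * (ie - mu_e)).mean()   # sigma_IRIE
    c1, c2 = (k1 * L) ** 2, (k2 * L) ** 2      # stabilizing variables
    return ((2 * mu_r * mu_e + c1) * (2 * cov + c2)) / (
        (mu_r ** 2 + mu_e ** 2 + c1) * (var_r + var_e + c2))
```

Two identical images yield a value of 1; values near 1 indicate that the correction preserved the image structure.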
The streaking metrics are very important for verifying the performance of the coefficients in detail.


Figure 10. The raw and corrected images of the water. (a) Raw image; (b) Image corrected using the on-orbit coefficients; (c) Image corrected using the side-slither coefficients.


Figure 11. The raw and corrected images of the city. (a) Raw image; (b) Image corrected using the on-orbit coefficients; (c) Image corrected using the side-slither coefficients.

Figure 12. The raw and corrected images of the desert. (a) Raw image; (b) Image corrected using the on-orbit coefficients; (c) Image corrected using the side-slither coefficients.

Figures 13–15 show the streaking metrics of the water, city, and desert images, respectively. The abscissa represents the detector number, and the ordinate represents the value of the streaking metric for each detector. The streaking metrics of the raw data are shown in Figures 13a, 14a, and 15a. In Figures 13b, 14b, and 15b, the red dots represent the streaking metrics of the images corrected using the side-slither coefficients, while the blue dots represent those of the images corrected using the on-orbit coefficients. Figures 13a, 14a, and 15a show that the maximum streaking metrics occurred at the edges of the detector groups. Figure 13a shows the streaking metrics of the water image: the maximum value reached 4, while most of the values were below 0.3.


Figure 13b shows that the streaking metrics of the image corrected using the on-orbit coefficients were less than 1.5, while those of the image corrected using the side-slither coefficients were generally less than 0.5. Figures 14a and 15a show that the distributions of the streaking metrics were approximately the same as in Figure 13a, with maximum values reaching 2 and 2.5, respectively. Figure 14b shows that the streaking metrics of the image corrected using the on-orbit coefficients were generally less than 0.3, while those of the image corrected using the side-slither coefficients were generally less than 0.2. Figure 15b shows that the streaking metrics of the image corrected using the side-slither coefficients were below 0.2, obviously lower than those of the image corrected using the on-orbit coefficients. From the above analysis, there are no obvious rules relating the streaking metrics of different objects. However, for the same object, the streaking metrics of the image corrected using the side-slither coefficients were clearly lower than those of the image corrected using the on-orbit coefficients; the images corrected using the side-slither coefficients consistently had relatively small streaking metrics. Thus, the side-slither coefficients perform better than the on-orbit coefficients. For further analysis, the quantitative values of the results are listed below.
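The per-detector streaking metric used above is defined earlier in the paper; as a minimal sketch, assuming the common formulation that compares each detector's column mean with the average of its two neighbours (expressed in percent), it can be computed as follows:

```python
import numpy as np

def streaking_metrics(image):
    """Streaking metric per detector (column): the relative deviation, in
    percent, of a column's mean from the average of its two neighbours.
    Edge detectors, which have only one neighbour, are left at zero here."""
    col_means = image.mean(axis=0).astype(float)
    s = np.zeros_like(col_means)
    s[1:-1] = 100.0 * np.abs(
        col_means[1:-1] - 0.5 * (col_means[:-2] + col_means[2:])
    ) / col_means[1:-1]
    return s
```

For a well-equalized image the values stay well below 1, matching the behaviour of the side-slither-corrected images in Figures 13b, 14b, and 15b.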


Figure 13. Streaking metrics of water images: (a) Streaking metrics of the raw image; (b) Streaking metrics of the images corrected using on-orbit coefficients and side-slither coefficients, respectively.


Figure 14. Streaking metrics of city images. (a) Streaking metrics of the raw image; (b) Streaking metrics of the images corrected using on-orbit coefficients and side-slither coefficients, respectively.


Figure 15. Streaking metrics of desert images. (a) Streaking metrics of the raw image; (b) Streaking metrics of the images corrected using on-orbit coefficients and side-slither coefficients, respectively.


Tables 3–5 show the quantitative values of the water, city, and desert images, respectively. The mean value, standard deviation, improved factor (IF), SSIM, and maximum streaking metrics were used to compare the effects of the on-orbit and side-slither coefficients. For the IF, the greater the value, the higher the quality of the image. In these tables, the IFs of the images corrected using the side-slither coefficients are greater than those of the images corrected using the on-orbit coefficients; the side-slither coefficients therefore improved the images much more than the on-orbit coefficients did. The lower the streaking metrics, the more uniform the image. The maximum streaking metrics of the images corrected using the side-slither coefficients were lower than those of the other images, which shows that the side-slither coefficients were better at eliminating non-uniformity. The SSIM is a decimal number between −1 and 1; a value of 1 indicates that the two images being compared are identical. The SSIMs of the images corrected using the on-orbit coefficients were relatively high, but all of the SSIMs were above 0.99, showing that the structure of the raw images had hardly changed. Ideally, the mean value of a corrected image should remain the same as that of the raw data. For the water and city images, the means increased by about 10 percent after applying the correction proposed in this paper. Analysis of the experimental data shows that the TDI level of the desert image is 48, the same as groups A and B in Table 1, whereas the TDI level of the water and city images is 32, which differs from group A. Therefore, the change in the mean may be due to the different TDI levels of the verification data and the calibration data. Apart from the mean value, the proposed correction performed well on the water and city images, and the non-uniformity of the images was eliminated. According to Tables 3–5, the verification results for the raw images in Group C show that the side-slither coefficients performed better than the on-orbit coefficients.

Table 3. Comparison of quantitative values of the water images.

Correction Method     Mean Value   Standard Deviation   Improved Factor (IF)   SSIM     Maximum Streaking Metrics
Raw data              75.9497      9.6081               /                      /        4.1526
On-orbit coef.        75.9763      8.9793               17.2223                0.9983   1.4877
Side-slither coef.    83.5691      10.6944              22.6357                0.9923   0.8967
Table 4. Comparison of quantitative values of the city images.

Correction Method     Mean Value   Standard Deviation   Improved Factor (IF)   SSIM     Maximum Streaking Metrics
Raw data              289.1120     73.6728              /                      /        2.0004
On-orbit coef.        288.0641     73.1624              5.4147                 0.9996   0.7410
Side-slither coef.    313.7601     75.2528              15.0459                0.9938   0.5275


Table 5. Comparison of quantitative values of the desert images.

Correction Method     Mean Value   Standard Deviation   Improved Factor (IF)   SSIM     Maximum Streaking Metrics
Raw data              667.4973     62.28675             /                      /        2.5343
On-orbit coef.        668.104      60.1704              1.7096                 0.9975   1.4192
Side-slither coef.    666.4261     59.4314              10.4055                0.9995   0.3067
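The SSIM values reported in Tables 3–5 can be sketched in a single-window (global) form; this sketch assumes the standard SSIM stabilizing constants and a 10-bit dynamic range, not necessarily the exact windowed variant the paper used:

```python
import numpy as np

def global_ssim(x, y, dynamic_range=1023.0):
    """Single-window SSIM over whole images, with the usual stabilizing
    constants C1 = (0.01 L)^2 and C2 = (0.03 L)^2 for dynamic range L."""
    x = x.astype(float)
    y = y.astype(float)
    c1 = (0.01 * dynamic_range) ** 2
    c2 = (0.03 * dynamic_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

An image compared against itself yields exactly 1; a mild per-detector gain change keeps the SSIM close to 1, consistent with the values above 0.99 in the tables.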

5. Discussion

5.1. The Analysis of Pre-Processing

The purpose of pre-processing is to ensure that each detector has the same radiance input in the image; it involves two processes: primary correction and standard correction. The side-slither data used here were from Group A, described in Section 4.1. A portion of these side-slither data was selected to illustrate the results of pre-processing. Figure 16 shows the results of data pre-processing. Figure 16a shows the primary image, the result of applying primary correction to the raw data. Visually, it is apparent that, for a given input, the output in the primary image did not fall on the same image line. To make subsequent calibration more accurate, the standard image was obtained by applying standard correction to the primary image. Figure 16b shows the result of standard correction. In the standard image, each line responds to the same input, which is necessary for the quantitative analysis of changes in the image before and after standard correction.


Figure 16. The results of data pre-processing. (a) The primary image was obtained using Equation (4) with raw side-slither data; (b) The standard image is the result of standard correction of the primary image.

Figure 17 shows the changes in the column means of the primary and standard images. Figure 17a plots the column means of the primary image and the standard image as two curves. Figure 17b shows the absolute value of the difference between the column means of the primary image and the standard image. Note that the closer a detector is to the right side of the image, the larger the difference in the column mean. This phenomenon can be explained by analyzing Figure 16. The side-slither data were not collected from a uniform field, and thus there was large variation among the columns. The primary image shown in Figure 16 has brighter features in the upper right corner; as standard correction progressed, these brighter features were removed, leading to the phenomenon shown in Figure 17.
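The quantities behind Figure 17 are straightforward to compute; a minimal sketch follows (the array names are assumptions):

```python
import numpy as np

def column_mean_difference(primary, standard):
    """Column means of the primary and standard images, plus the absolute
    per-column difference between them (the quantities plotted in Figure 17)."""
    m_primary = primary.mean(axis=0).astype(float)
    m_standard = standard.mean(axis=0).astype(float)
    return m_primary, m_standard, np.abs(m_primary - m_standard)
```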


Figure 17. Changes in column means of the primary image and standard image. (a) Column means of the primary and standard images; (b) The absolute value of the difference between the column values of the primary and standard images.

5.2. The Analysis of Corrected Images

Two sets of experimental data were used to verify the effect of the coefficients, and four quantitative indicators were used to evaluate them. The on-orbit coefficients, which were obtained using histogram matching on data collected from September to December 2015 [18], were used for comparison. The quantitative results show that the relative radiometric correction index RA of the standard image is 22.4864%, while the RA of the image corrected using the on-orbit coefficients was 0.2657% and the RA of the image corrected using the side-slither coefficients was 0.00082%. The RA of the image corrected using the side-slither coefficients is thus far superior. The same holds for RE: the RE of the image corrected using the side-slither coefficients was 0.0355%, far better than that of the image corrected using the on-orbit coefficients. For the maximum streaking metrics, the image corrected using the side-slither coefficients reached 0.8967, which is below 1, whereas the maximum streaking metrics of the images corrected using the on-orbit coefficients were mostly larger than 1. The improved factors (IFs) of the images corrected using the side-slither coefficients were also far superior to those of the images corrected using the on-orbit coefficients. From these four groups of quantitative indices, it is evident that the side-slither coefficients were better than the on-orbit coefficients. Moreover, the on-orbit coefficients were obtained by processing large amounts of raw data covering different times (the on-orbit coefficients used in this paper required three months of raw data). Therefore, the side-slither coefficients offer both better performance and better time efficiency.

6. Conclusions

Owing to the development of remote sensing satellite hardware, satellites have become more agile, allowing implementation of the normalization steered viewing mode, in which the TDI-CCD arrangement is parallel to the direction of satellite flight.
In theory, this mode causes each detector of the TDI-CCD to capture the same object; the resulting data are called side-slither data. Given a linear response from each detector, this paper proposed a relative radiometric calibration method based on the histogram of side-slither data, which does not require imaging of uniform fields. Because of the difference between the horizontal and vertical resolutions of the satellite, data pre-processing is an important step to ensure that each line has the same input; standard data are obtained through pre-processing. Otsu's method can be used to calculate the change point of the histogram, and because the relative position of the change point in the histogram does not change under a linear transformation, the key points can be extracted via Otsu's method. First, histogram matching was used to determine the search ranges of Otsu's method for each detector.
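Otsu's method, used above to locate the change point within each detector's histogram range, can be sketched as follows (a NumPy-only version operating on a 1-D sample of DNs; the bin count is an assumption):

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Otsu's method on a 1-D sample: returns the bin centre that maximizes
    the between-class variance of the two classes it separates."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)             # cumulative class-0 weight per cut
    mu = np.cumsum(p * centers)   # cumulative class-0 weighted mean per cut
    mu_total = mu[-1]
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros_like(w0)
    between[valid] = (mu_total * w0[valid] - mu[valid]) ** 2 \
        / (w0[valid] * w1[valid])
    return centers[np.argmax(between)]
```

On a clearly bimodal sample the returned threshold falls between the two modes, which is the property the key-point extraction relies on.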


Then, Otsu's method was used to precisely obtain the key points of each detector in each range. Finally, the normalized relative radiometric coefficients were obtained by the least squares method. We compared the efficacy of the on-orbit coefficients and the side-slither coefficients for relative radiometric correction. According to the characteristics of side-slither data, the side-slither data were used to verify the effects of the coefficients on uniform fields. The relative radiometric accuracy indices RA and RE, as well as the streaking metrics, were employed to compare the two sets of coefficients. As observed in the experimental results, the "root-mean-square deviation of the mean line" (RA) and the "generalized noise" (RE) were better than 0.1%. The maximum streaking metrics, which are sensitive to detector-to-detector non-uniformity, were less than 1. The results clearly showed that coefficients calculated by the proposed method were superior to the on-orbit coefficients on uniform fields. To further validate the proposed method, three types of scenes acquired in the classic push-broom viewing mode, covering water, city, and desert, were used for validation. The quantitative indices IF, SSIM, and the streaking metrics were used to evaluate the corrected images. The experimental results showed that the images corrected using the side-slither coefficients had higher IF values, while the SSIMs of the two corrected images were similar. The side-slither coefficients also performed better than the on-orbit coefficients in terms of the streaking metrics. Compared to the on-orbit coefficients, the coefficients obtained by the proposed method had higher accuracy and were more effective for relative radiometric correction. Therefore, it can be concluded that coefficients calculated by the proposed method are superior to on-orbit coefficients.
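The final least-squares step can be sketched as follows; the array names and the use of a shared reference DN sequence are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def fit_detector_coefficients(key_points_dn, reference_dn):
    """Least-squares fit of a linear model DN_ref ~ gain * DN_d + offset
    for each detector d, from that detector's extracted key-point DNs."""
    n_detectors = key_points_dn.shape[0]
    coeffs = np.empty((n_detectors, 2))
    for d in range(n_detectors):
        # polyfit with degree 1 returns (gain, offset) for this detector
        coeffs[d] = np.polyfit(key_points_dn[d], reference_dn, 1)
    return coeffs
```

With exact linear data the fit recovers each detector's gain and offset, so applying `gain * DN + offset` per column equalizes the detectors.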
Acknowledgments: This work was supported by the National Natural Science Foundation of China under Grant 91438203, and the Open Fund of Twenty First Century Aerospace Technology under Grants 21AT-2016-09 and 21AT-2016-02.

Author Contributions: Mi Wang and Chaochao Chen contributed equally to this work in designing and performing the research. Jun Pan, Ying Zhu and Xueli Chang contributed to the validation and comparisons.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Schott, J.R. Remote Sensing: The Image Chain Approach; Oxford University Press on Demand: Oxford, UK, 2007.
2. Xiong, X.; Barnes, W. An overview of MODIS radiometric calibration and characterization. Adv. Atmos. Sci. 2006, 23, 69–79.
3. Li, Z.; Yu, F. A real time de-striping algorithm for Geostationary Operational Environmental Satellite (GOES) 15 sounder images. In Proceedings of the 2015 CALCON Technical Meeting, Logan, UT, USA, 26 August 2015.
4. Zenin, V.A.; Eremeev, V.V.; Kuznetcov, A.E. Algorithms for relative radiometric correction in Earth observing systems Resource-P and Canopus-V. In Proceedings of the ISPRS—International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Prague, Czech Republic, 12–19 July 2016.
5. Duan, Y.; Zhang, L.; Yan, L.; Wu, T.; Liu, Y.; Tong, Q. Relative radiometric correction methods for remote sensing images and their applicability analysis. Int. J. Remote Sens. 2014.
6. Krause, K.S. WorldView-1 pre and post-launch radiometric calibration and early on-orbit characterization. In Proceedings of the Optical Engineering + Applications, San Diego, CA, USA, 20 August 2008; International Society for Optics and Photonics: Bellingham, WA, USA, 2008; pp. 708111–708116.
7. Lyon, P.E. An automated de-striping algorithm for Ocean Colour Monitor imagery. Int. J. Remote Sens. 2009, 30, 1493–1502.
8. Gadallah, F.L.; Csillag, F.; Smith, E.J.M. Destriping multisensor imagery with moment matching. Int. J. Remote Sens. 2000, 21, 2505–2511.
9. Li, H.; Man, Y.-Y. Relative radiometric calibration method based on linear CCD imaging the same region of non-uniform scene. In Proceedings of the International Symposium on Optoelectronic Technology and Application 2014, Beijing, China, 18 November 2014; pp. 929906–929909.
10. Pesta, F.; Bhatta, S.; Helder, D.; Mishra, N. Radiometric non-uniformity characterization and correction of Landsat 8 OLI using Earth imagery-based techniques. Remote Sens. 2015, 7, 430–446.
11. Anderson, C.; Naughton, D.; Brunn, A.; Thiele, M. Radiometric correction of RapidEye imagery using the on-orbit side-slither method. In Proceedings of the SPIE Remote Sensing, Prague, Czech Republic, 28 October 2011; International Society for Optics and Photonics: Bellingham, WA, USA, 2011; pp. 818008–818015.
12. Kubik, P.; Pascal, W. AMETHIST: A method for equalization thanks to histograms. In Sensors, Systems, and Next-Generation Satellites VIII; Meynart, R., Neeck, S.P., Shimoda, H., Eds.; SPIE—International Society for Optical Engineering: Bellingham, WA, USA, 2004; Volume 5570, pp. 256–267.
13. Henderson, B.G.; Krause, K.S. Relative radiometric correction of QuickBird imagery using the side-slither technique on orbit. In Proceedings of the SPIE 49th Annual Meeting Optical Science and Technology, Denver, CO, USA, 26 October 2004; International Society for Optics and Photonics: Bellingham, WA, USA, 2004; pp. 426–436.
14. Zhang, G.; Litao, L.I. A study on relative radiometric calibration without calibration field for YG-25. Acta Geod. Cartogr. Sin. 2017, 46, 1009–1016.
15. Peng, Y.; Huang, H.-L.; Zhu, L.-M. The in-orbit resolution detection of TH-01 CCD cameras based on the radial target. Geomat. Spat. Inf. Technol. 2013, 7, 47.
16. Gerace, A.; Schott, J.; Gartley, M.; Montanaro, M. An analysis of the side slither on-orbit calibration technique using the DIRSIG model. Remote Sens. 2014, 6, 10523–10545.
17. Otsu, N. A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 1979, 9, 62–66.
18. Pan, Z.; Gu, X.; Liu, G.; Min, X. Relative radiometric correction of CBERS-01 CCD data based on detector histogram matching. Geomat. Inf. Sci. Wuhan Univ. 2005, 30, 925–927.
19. Spontón, H.; Cardelino, J. A review of classic edge detectors. Image Process. Line 2015, 5, 90–123.
20. Horn, B.K.; Woodham, R.J. Destriping LANDSAT MSS images by histogram modification. Comput. Graph. Image Process. 1979, 10, 69–83.
21. Tong, X.; Xu, Y.; Ye, Z.; Liu, S.; Tang, X.; Li, L.; Xie, H.; Xie, J. Attitude oscillation detection of the ZY-3 satellite by using multispectral parallax images. IEEE Trans. Geosci. Remote Sens. 2015, 53, 3522–3534.
22. Chen, S.; Yang, B.; Wang, H. Design and Experiment of the Space Camera; Aerospace Publishing Company: London, UK, 2003.
23. Yufeng, H.Y.Z. Analysis of relative radiometric calibration accuracy of space camera. Spacecr. Recovery Remote Sens. 2007, 4, 12.
24. Krause, K.S. Relative radiometric characterization and performance of the QuickBird high-resolution commercial imaging satellite. In Proceedings of the SPIE 49th Annual Meeting Optical Science and Technology, Denver, CO, USA, 26 October 2004; International Society for Optics and Photonics: Bellingham, WA, USA, 2004; pp. 35–44.
25. Krause, K.S. QuickBird relative radiometric performance and on-orbit long term trending. In Proceedings of the SPIE Optics + Photonics, San Diego, CA, USA, 7 September 2006; International Society for Optics and Photonics: Bellingham, WA, USA, 2006; pp. 62912P–62960P.
26. Chen, J.; Shao, Y.; Guo, H.; Wang, W.; Zhu, B. Destriping CMODIS data by power filtering. IEEE Trans. Geosci. Remote Sens. 2003, 41, 2119–2124.
27. Corsini, G.; Diani, M.; Walzel, T. Striping removal in MOS-B data. IEEE Trans. Geosci. Remote Sens. 2000, 38, 1439–1446.
28. Ndajah, P.; Kikuchi, H.; Yukawa, M.; Watanabe, H.; Muramatsu, S. SSIM image quality metric for denoised images. In Proceedings of the 3rd WSEAS International Conference on Visualization, Imaging and Simulation, Faro, Portugal, 3–5 November 2010; pp. 53–58.

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
