
An Effective Correction Method for Seriously Oblique Remote Sensing Images Based on Multi-View Simulation and a Piecewise Model

Chunyuan Wang 1,*, Xiang Liu 1,2, Xiaoli Zhao 1 and Yongqi Wang 1

1 School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Longteng Road No. 333, Shanghai 201620, China; [email protected] (X.L.); [email protected] (X.Z.); [email protected] (Y.W.)
2 School of Computer Science, Shanghai Key Laboratory of Intelligent Information Processing, Fudan University, Shanghai 200433, China
* Correspondence: [email protected]; Tel.: +86-180-0166-1195

Academic Editor: Felipe Gonzalez Toro
Received: 13 July 2016; Accepted: 29 September 2016; Published: 18 October 2016
Sensors 2016, 16, 1725; doi:10.3390/s16101725

Abstract: Conventional correction approaches are unsuitable for effectively correcting remote sensing images acquired under seriously oblique conditions, which suffer severe distortions and resolution disparity. Considering that the extraction of control points (CPs) and the parameter estimation of the correction model play important roles in correction accuracy, this paper introduces an effective correction method for large angle (LA) images. Firstly, a new CP extraction algorithm is proposed based on multi-view simulation (MVS) to ensure the effective matching of CP pairs between the reference image and the LA image. Then, a new piecewise correction algorithm is developed with the optimized CPs, where a concept of distribution measurement (DM) is introduced to quantify the CP distribution. The whole image is partitioned into contiguous subparts which are corrected by different correction formulae to guarantee the accuracy of each subpart. Extensive experimental results demonstrate that the proposed method significantly outperforms conventional approaches.

Keywords: sensor correction; feature points detection; multi-view simulation; visual difference compensation; piecewise correction

1. Introduction

Multi-angle remote sensing, providing multi-angle observations of the same scene, enhances our ability to monitor and identify the Earth's surface [1]. However, to successfully use multi-angle images, the distortions caused by imaging from multiple viewing angles must be removed [2]. Geometric correction, especially in the case of imaging with large viewing angles, is an indispensable preprocessing step for the processing and applications of remote sensing images [3]. There are two key factors that affect correction accuracy. One is how to effectively detect and accurately match the feature points between the reference image and the distorted image. The other is how to precisely estimate the model parameters using control points (CPs, i.e., matched feature points).

Previous studies of feature point extraction methods can be divided into three categories. The first category is based on image gray levels and uses the gray differences around a pixel of an image [4]. The second category is based on the boundary; the detected objects include invariant corners at the maximum curvature of the boundary chain code and intersection points after polygon approximation [5]. The third category is based on a parametric model that models each pixel of an image [6]. Since the same target in the distorted image often exhibits local distortions and gray differences compared to the reference image, the second category, which depends on the edge quality, and the third category, which is restricted to the parametric model, are unsuitable. The first category is comparatively simple and easy to combine with other methods. According to the evaluation of various feature point extraction algorithms in [7], the scale-invariant feature transform (SIFT) [8] provides high stability under image translation, rotation, and scaling. However, its modeling principle does not include complete affine invariance, so in practice it is only robust to a certain degree of angle change. Morel [9] later increased the number of extracted points by simulating the scale and the parameters related to the optical axis of the imaging sensor, and normalizing the parameters related to translation and rotation. However, the algorithm must perform the simulation and normalization for each image pair, which makes it complex and time consuming in practice. In this paper, a new CP extraction algorithm is proposed by compensating the perspective difference for large angle (LA) images. A reference feature point set is formed by simulating the multi-views of the reference image and performing feature point detection on the simulated images. The proposed algorithm ensures effective matching of feature points between the reference image and LA images.

Moreover, how to precisely estimate model parameters is another important aspect of correction accuracy [10,11]. Goncalves [12] experimentally proved the influence of the CP distribution on solving the model parameters. The correction accuracy achieved by a uniform distribution of CPs is better than that obtained with a non-uniform set [13]. In addition, traditional methods usually employ the same correction model for a whole image [14], while few have considered the serious local distortions and resolution changes caused by LA imaging. Also, the large dimensions of an image obviously magnify the errors of the correction model parameters; assuming that a 1000 × 1000 image is processed using the second-order polynomial model, the order of magnitude of the terms reaches 10^6. The piecewise linear functions [15–17], which divide the images into triangular elements by Delaunay's triangulation method and take the affine transformation as the mapping function, are suitable to reduce local geometric distortions. However, the correction accuracy possible using the affine transformation model is lower than that of a projection transformation model for LA images, and the projection transformation model additionally requires at least five CP pairs. Consequently, the proposed piecewise correction algorithm is indispensable for LA image correction. Firstly, the image is separated into grids according to the variability of spatial resolution; severely distorted areas have relatively dense grid distributions. Based on these grids, the overall CP distribution is controlled and the piecewise location is determined. Then, the CPs are optimized by using the CP distribution measurement (DM) to select the representative CPs in each grid. Finally, the subparts deduced from the piecewise location are corrected by different correction formulae using the optimized CPs.

The rest of this paper is organized as follows: In Section 2, CP extraction based on multi-view simulation (MVS) is introduced. The piecewise correction algorithm with the CPs optimized by DM is described in Section 3. In Section 4, experimental results are presented to quantify the effectiveness of the new method. The conclusions of this paper with a summary of our results are provided in the final section.
2. CPs Extraction Based on Multi-View Simulation

SIFT, having excellent performance against scaling, rotation, noise and illumination changes, has been applied widely in the feature detection field. However, its modeling process does not consider the imaging oblique angle, which results in the change of neighborhood pixels shown in Figure 1. For example, in the process of scale-space extremum detection, determining whether the current pixel is a maximum or minimum value requires the 26 pixels in its neighborhood as a reference. If the reference pixels are changed, for example by the resolution reduction and gray distortion caused by LA imaging, the result of the extremum detection changes. As another example, the key point description also uses the gradient and direction of neighboring pixels as a reference. If the reference pixels are changed, the description vector changes, which reduces the matching probability. In [18], the impact of LA imaging on spatial resolution is analyzed. For LA images, the spatial resolution along the seriously oblique direction decreases with increasing imaging angle, as shown in Figure 2. The off-nadir ground resolution RESθ is given by:

$$RES_\theta = \frac{\beta H}{\cos^2\theta} = \frac{RES_n}{\cos^2\theta} \tag{1}$$

where RESn is the nadir resolution and θ is the angle from nadir. The off-nadir resolution is reduced by a factor of the cosine squared of θ. Equation (1) demonstrates that the resolution compression is nonlinear when the imaging angle is larger than 45°. The performance of SIFT is stable within 50°, while it is obviously ineffective beyond 50°, as shown in the experiments of Section 4.1. The experimental results are consistent with the theoretical analysis.

Figure 1. The changes of neighborhood pixels: (a) the reference image; (b) the large angle image.

Figure 2. Spatial resolution of off-nadir and nadir.

Based on the above analysis, in order to compensate for viewing angle differences between the reference image and LA images, SIFT is extended to the multi-view space in this paper. The transformation model employed in MVS is described in Section 2.1. The detailed procedure of CP extraction based on MVS is introduced in Section 2.2.

2.1. Multi-View Transformation Model

The projection relation of ground scene I0 to digital image I can be described as:

$$I = ATI_0 \tag{2}$$

where A and T are projection and translation, respectively. According to the affine approximation theory [19], the above model can be expressed as:

$$I(x,y) \rightarrow I(ax + by + e_1, cx + dy + e_2) \tag{3}$$


If the determinant of A is strictly positive, then following the Singular Value Decomposition (SVD) principle, A has a unique decomposition:

$$A = \begin{bmatrix} a & b \\ c & d \end{bmatrix} = H_\lambda R_1(\psi)\, T_t\, R_2(\phi) = \lambda \begin{bmatrix} \cos\psi & -\sin\psi \\ \sin\psi & \cos\psi \end{bmatrix} \begin{bmatrix} t & 0 \\ 0 & 1 \end{bmatrix} \begin{bmatrix} \cos\phi & -\sin\phi \\ \sin\phi & \cos\phi \end{bmatrix} \tag{4}$$

where λ > 0 is a zoom parameter, R1 and R2 are two rotation matrices, and Tt expresses a tilt, namely a diagonal matrix with first eigenvalue t > 1 and the second eigenvalue equal to 1.

The geometric explanation for the matrix decomposition of the transformation is shown in Figure 3. The angle φ is called longitude, which is formed by the plane containing the normal and the optical axis with a fixed vertical plane, where φ ∈ [0, π). The angle θ = arccos(1/t) is called latitude, which is formed by the optical axis with the normal to the image plane I0, where θ ∈ [0, π/2). ψ is the rotation angle of the imaging sensor around the optical axis. As seen in Figure 3, the motion of the imaging sensor leads to a parameter change from the positive perspective (λ0 = 1, t0 = 1, φ0 = ψ0 = 0) to the oblique view (λ, t, φ, ψ), which represents the transformation I(x,y) → I(A(x,y)), where the longitude φ and the latitude θ determine the amount of perspective change. The MVS is mainly based on two factors: the longitude φ and the latitude θ. θ is represented by the tilt parameter t = 1/cos θ, which measures the oblique amount between the reference image and LA images. The oblique distortion causes a directional subsampling in the direction given by the longitude φ.

Figure 3. Geometric explanation for the matrix decomposition of transformation.
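As a hedged illustration of Equation (4), the following Python sketch recovers (λ, ψ, t, φ) from a 2 × 2 affine matrix via NumPy's SVD; the function and test values are ours, not from the paper:

```python
import numpy as np

def rot(a):
    """2x2 rotation matrix R(a)."""
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

def decompose_affine(A):
    """Decompose A (det > 0) as A = lam * R(psi) @ diag(t, 1) @ R(phi), Equation (4)."""
    U, S, Vt = np.linalg.svd(A)
    if np.linalg.det(U) < 0:        # force pure rotations; absorb the flip into Vt
        U[:, 1] *= -1
        Vt[1, :] *= -1
    lam = S[1]                       # zoom: the smaller singular value
    t = S[0] / S[1]                  # tilt: ratio of singular values, t >= 1
    psi = np.arctan2(U[1, 0], U[0, 0])    # rotation around the optical axis
    phi = np.arctan2(Vt[1, 0], Vt[0, 0])  # longitude
    return lam, psi, t, phi

A = 1.3 * rot(0.4) @ np.diag([2.0, 1.0]) @ rot(1.1)
print(decompose_affine(A))  # ~ (1.3, 0.4, 2.0, 1.1), up to a simultaneous pi shift
```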

The Affine-SIFT (ASIFT) approach [9] operates on each image to simulate all distortions caused by a variation of the camera optical axis direction, and then applies the SIFT method. ASIFT provides robust image matching between two images subject to a viewpoint change. However, ASIFT is time consuming. Assuming that the tilt and angle ranges are [tmin, tmax] = [1, 4√2] and [φmin, φmax] = [0°, 180°], and the sampling steps are Δt = √2 and Δφ = 72°/t, the complexity of the ASIFT approach is 1 + (|Γt| − 1)(180°/72°) = 1 + 5 × 2.5 = 13.5 times as much as that of a single SIFT routine, where |Γt| = |{1, √2, 2, 2√2, 4, 4√2}| = 6. Even if the two-resolution scheme is applied, the ASIFT complexity is still more than twice that of SIFT. On the other hand, the multiple matching of ASIFT results in matching point redundancy, which demands a huge calculating quantity and a long computing time during the CPs optimization process. Even so, ASIFT provides a good idea for extracting CPs between two images with viewing angle differences.
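The 13.5× figure can be reproduced with a short area-weighted count; this accounting is our reading of [9] (each t-tilted simulation covers 1/t of the original area and uses 180°/(72°/t) longitude samples):

```python
import math

tilts = [math.sqrt(2) ** k for k in range(1, 6)]   # sqrt(2), 2, 2*sqrt(2), 4, 4*sqrt(2)
cost = 1.0                                         # the untilted image itself
for t in tilts:
    n_phi = 180.0 / (72.0 / t)   # longitude samples over [0, 180) at step 72/t degrees
    cost += n_phi / t            # each tilted simulation has 1/t of the original area
print(cost)                      # 13.5, matching 1 + (|Gamma_t| - 1) * (180/72)
```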

Inspired by ASIFT, the new CP extraction algorithm is proposed by simulating multi-views of the reference image to compensate for the viewing angle difference of LA images. The new algorithm then applies SIFT to the simulated images to construct a reference feature point database. Finally, feature matching is conducted between the reference feature point database and the feature point set detected from the LA image. The overview of the proposed algorithm is shown in Figure 4.

Figure 4. The overview of the proposed algorithm based on MVS (the reference image is warped into the MVS images I1, ..., Im×n; their SIFT feature vectors AI, AI1, ..., AIm×n form the reference set A, which is matched in feature space, using ENDR and RANSAC, against the feature vector set B of the distorted image).

2.2. Procedure of CPs Extraction Based on Multi-View Simulation

The proposed algorithm proceeds by the following steps (a code sketch follows these steps):

1. The reference image I is transformed by simulating all possible affine distortions caused by changing the camera optical axis orientation. In this process, the oblique transform is simulated by the tilt parameter t. Assuming that the y direction is the oblique direction, the transform represents I(x,y) → I(x,ty), which is equivalent to performing a y-directional t-subsampling. Therefore, an antialiasing filter is applied beforehand, namely the convolution by a Gaussian with standard deviation c√(t² − 1). This ensures a very small aliasing error.

2. The simulated images of the reference image are calculated by the tilt parameter t and the longitude φ with certain sampling step lengths. When the latitude θ increases by a fixed amount, the image distortion grows, so the sampling density of θ should increase with increasing θ. As a result, the corresponding tilt parameter t follows a geometric series tk1 = 1, u, u², ..., u^m (k1 = 0, 1, ..., m), with u > 1. Meanwhile, the image distortion caused by a fixed longitude displacement Δφ is more drastic at a larger latitude θ = arccos(1/t). Hence, the longitudes φ form an arithmetic series for each tilt: φk2 = 0, v/t, ..., nv/t (k2 = 0, 1, ..., n), where the integer k2 satisfies the condition k2·v/t < 180°. The calculation of the simulated images can be described as:

$$I_k(t_{k_1}, \phi_{k_2}) = \begin{bmatrix} \cos\phi_{k_2} & -\sin\phi_{k_2} \\ \sin\phi_{k_2} & \cos\phi_{k_2} \end{bmatrix} \begin{bmatrix} t_{k_1} & 0 \\ 0 & 1 \end{bmatrix} I \quad (k = 1, \ldots, m \times n) \tag{5}$$

where Ik expresses the set of simulated images of the reference image I, namely I1, I2, ..., Im×n.

3. The feature point detection and description (SIFT) are applied to the reference image I and the simulated images I1, I2, ..., Im×n. The feature vector sets AI, AI1, AI2, ..., AIm×n are established, which are combined to form the reference feature vector set A:

$$A = A_I \cup A_{I_1} \cup A_{I_2} \cup \ldots \cup A_{I_{m \times n}} \tag{6}$$

4. The feature vector set B, obtained from the distorted LA image, is matched with the reference feature vector set A by employing the Euclidean nearest neighbor distance ratio criterion (ENDR) [20] and Random Sample Consensus (RANSAC) [21].

The two-resolution scheme [9] is applied to the proposed algorithm to reduce the computation complexity. With this, matched feature points, namely CPs, are extracted.


3. Piecewise Correction with Optimized CPs Based on Distribution Measurement

For different correction models, the required minimum number of CPs is different. However, a larger number of CPs does not achieve a better root mean square error (RMSE) when the CPs are concentrated in a local area. CPs which are uniformly distributed in the image can achieve accurate model parameter estimation. To quantify the uniformity of CPs, the concept of DM is introduced based on information entropy. Then the proposed piecewise correction algorithm with the optimized CPs is presented.

3.1. The Distribution Measurement of CPs

Zhu [22] indicated that the probability of correct matching and the information content enjoy a strong correlation. This means that the probability of correct matching increases with increasing information entropy. The information entropy of every CP can be obtained by defining the second-order differential invariants as the descriptor. The second-order differential invariants v are [23]:

$$v = \begin{bmatrix} L_x L_x + L_y L_y \\ L_{xx} L_x L_x + 2L_{xy} L_x L_y + L_{yy} L_y L_y \\ L_{xx} + L_{yy} \\ L_{xx} L_{xx} + 2L_{xy} L_{xy} + L_{yy} L_{yy} \end{bmatrix} \tag{7}$$

where Lx and Ly are the convolution by the one-dimensional Gauss kernel, and Lxy is the convolution by the two-dimensional Gauss kernel. In order to calculate the probability for the entropy, the descriptor v is partitioned into a multi-dimensional vector space. The partitioning depends on the distance between descriptors, which is measured by the Mahalanobis distance. The distance between descriptors v1 and v2 is:

$$D(v_1, v_2) = \left[ (v_1 - v_2)^T \Lambda^{-1} (v_1 - v_2) \right]^{1/2} \tag{8}$$

where Λ is the covariance matrix. Λ can be SVD-decomposed into Λ⁻¹ = GᵀPG, where P is a diagonal matrix and G is an orthogonal matrix. Then, v can be normalized by P, which represents vnorm = P^{1/2}Gv. After the normalization, D(v1, v2) can be rewritten as:

$$D(v_1, v_2) = \left\| P^{1/2} G (v_1 - v_2) \right\| \tag{9}$$

The descriptor v can be partitioned into many cells of equal size. Then the probability of each cell is used to calculate the entropy of v:

$$H(v) = -\sum_i p_i \log(p_i) \tag{10}$$

where pi = nv/n is the probability of v, and nv and n are the number of CPs in each cell and the total number of CPs, respectively.

The CP distribution can be regarded as the spatial dispersion of points, which can be measured by the normalized mean distance between the CPs and their distribution center. The distribution center of the CPs is:

$$(\bar{x}, \bar{y}) = \left( \frac{\sum_{i=1}^{n} w_i x_i}{\sum_{i=1}^{n} w_i}, \frac{\sum_{i=1}^{n} w_i y_i}{\sum_{i=1}^{n} w_i} \right) \tag{11}$$

and DM is computed as:

$$DM = \left[ \left( \sum_{i=1}^{n} \left( \frac{x_i - \bar{x}}{I_x} \right)^2 + \sum_{i=1}^{n} \left( \frac{y_i - \bar{y}}{I_y} \right)^2 \right) \Big/ n \right]^{1/2} \tag{12}$$

where Ix and Iy are the row and column extents of the region, and the parameter wi is the weight of CP i, obtained from H(v) in Equation (10). The bigger the DM is, the better the CP distribution is; the smaller the DM is, the more concentrated the CPs are.
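A small NumPy sketch of Equations (11) and (12), under our reading of the normalization in Equation (12), shows that spread-out CPs indeed score a higher DM than clustered ones:

```python
import numpy as np

def distribution_measure(points, weights, Ix, Iy):
    """DM of Equations (11)-(12): normalized dispersion of CPs around their
    entropy-weighted center. points is an (n, 2) array of (x, y) coordinates;
    weights are the entropy values H(v) of Equation (10)."""
    xy = np.asarray(points, dtype=float)
    w = np.asarray(weights, dtype=float)
    center = (w[:, None] * xy).sum(axis=0) / w.sum()         # Equation (11)
    dx = (xy[:, 0] - center[0]) / Ix
    dy = (xy[:, 1] - center[1]) / Iy
    return np.sqrt((np.sum(dx ** 2) + np.sum(dy ** 2)) / len(xy))  # Equation (12)

rng = np.random.default_rng(0)
w = np.ones(45)
print(distribution_measure(rng.uniform(0, 500, (45, 2)), w, 500, 500))    # larger DM
print(distribution_measure(rng.uniform(200, 260, (45, 2)), w, 500, 500))  # smaller DM
```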

3.2. Piecewise Correction Strategy with Optimized CPs

According to the previous analysis, piecewise correction is indispensable for LA images, and the distribution quality of the CPs is important to the RMSE. Consequently, the image is gridded to determine the piecewise location and simultaneously control the overall CP distribution. In each grid, DM is employed to constrain the CP quality. By using these procedures, the distribution uniformity of the CPs is guaranteed from the global to the local level.

Since the resolution loss of LA images along the oblique direction (assuming the y direction) is nonlinear, the gridded strategies of the x direction and the y direction are different. For the x direction, the image is divided evenly into M subparts. For the y direction, the resolution variability is used to measure the distortion. Our aim is to enable each grid to own an approximately equal changing rate of the resolution. As a result, the gridded strategy of the y direction is shown in Figure 5.

Figure 5. The gridded strategy in the y direction.

It can be seen that the larger the angle is, the smaller the division interval is. The gridded strategy for the y direction is:

$$\theta_n = \arccos\sqrt{\frac{N}{\dfrac{n}{\cos^2\theta_{top}} + \dfrac{N-n}{\cos^2\theta_{bottom}}}} \quad (0 \le n \le N) \tag{13}$$

where θ0 and θN are the boundaries of the viewing angles and N is the subpart amount in the y direction.

The detail of the proposed piecewise correction algorithm goes as follows:

1. Divide the whole image into N × M grids;
2. Set the maximum number of CPs needed to be selected in each grid. The maximum number is represented as m/(N × M), where m is the total number of CPs in the whole image. The following steps [(a)–(c)] should be executed in each grid:

   (a) Calculate the information entropy (IE) of every CP and put the CPs in descending order; reserve the top m/(N × M) CPs and go to (b). The remaining CPs form a spare set;
   (b) Calculate the DM of the reserved CPs. If DM > TQ, or all the reserved CPs have been processed, go to (c); if DM < TQ, delete the CP which is nearest to the distribution center, retrieve the CP whose IE is the maximum value in the spare set, and re-execute (b). Here, TQ is the threshold;
   (c) Process all the grids; the CP selection terminates.

3. Partition the image into P correction subparts by merging adjacent grids along the y direction. To ensure the correction consistency at the piecewise edges, adjacent subparts possess an overlap grid, as shown in Figure 7;
4. Estimate the model parameters by using the selected CPs for each subpart. All the corrected subparts are integrated to obtain the final result.

Figure 6 illustrates the CP selection process in one grid; a code sketch of this selection loop follows the figure.

Figure 6. The CPs selection process in one grid.
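The selection loop of Figure 6 can be sketched in a few lines of Python; this is our hedged rendering of steps (a)–(c), reusing distribution_measure from the sketch in Section 3.1:

```python
import numpy as np

def select_cps_in_grid(cps, max_cps, Ix, Iy, TQ=0.35):
    """Greedy per-grid CP selection: keep the top-IE CPs, then swap the CP nearest
    the weighted center for the best spare until DM exceeds the threshold TQ.
    cps is a list of (x, y, ie) tuples; ie is the information entropy weight."""
    cps = sorted(cps, key=lambda c: -c[2])         # (a) descending IE order
    reserved, spare = cps[:max_cps], cps[max_cps:]
    while spare:
        pts = np.array([(x, y) for x, y, _ in reserved])
        w = np.array([ie for _, _, ie in reserved])
        if distribution_measure(pts, w, Ix, Iy) > TQ:
            break                                   # (b) distribution uniform enough
        center = (w[:, None] * pts).sum(axis=0) / w.sum()
        nearest = int(np.argmin(((pts - center) ** 2).sum(axis=1)))
        reserved.pop(nearest)                       # drop the most central CP...
        reserved.append(spare.pop(0))               # ...retrieve the best spare IE
    return reserved                                 # (c) selection terminates
```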

Figure 7. The grid division and the piecewise strategy (e.g., P = 2, N = 5, M = 3).
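To connect Equation (13) with this partition, the sketch below (our reading of the formula: linear interpolation in 1/cos²θ, i.e., in ground resolution) computes the y-direction grid boundaries:

```python
import math

def y_grid_angles(theta_top, theta_bottom, N):
    """Grid boundary angles per Equation (13): step linearly in 1/cos^2(theta)
    so that every grid sees approximately the same resolution change."""
    top = 1.0 / math.cos(math.radians(theta_top)) ** 2
    bottom = 1.0 / math.cos(math.radians(theta_bottom)) ** 2
    return [math.degrees(math.acos(math.sqrt(
                N / (n * top + (N - n) * bottom))))
            for n in range(N + 1)]

print([round(a, 1) for a in y_grid_angles(70, 30, 5)])
# [30.0, 53.1, 60.9, 65.2, 68.0, 70.0] -- intervals shrink toward the large angles
```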

4. Experimental Results and Analysis

In order to comprehensively illustrate the performance of the proposed method, two groups of datasets are employed. One group is taken from our semi-physical platform by imaging nadir images for the reference image and off-nadir images for the distorted images. The viewing angles run from 30° to 70° at an interval of 10°, as shown in Figure 8.

Figure 8. Images taken from the semi-physical platform (the left is the reference image).


The other group consists of aerial remote sensing images, as shown in Figure 9, and their details are listed in Table 1. Our programs are implemented with MATLAB 2012a, and the experiments are conducted on a Windows 7 computer with a Xeon 2.66 GHz CPU and 8 GB of memory.

Figure 9. Two sets of aerial images: (a) Reference image 1; (b) Distorted image 1; (c) Reference image 2; (d) Distorted image 2.

Table 1. Introduction of the aerial remote sensing images.

Index     Size (pixel)    Resolution (m)    View Angle (°)    Rotation (°)    Location
image1    500 × 500       0.5               40                15              Liaoning, China
image2    800 × 700       0.1               60                15              Rhodes Island, USA

4.1. Performance of CPs Extraction Based on Multi-View Simulation

This experiment examines the adaptability of the CP extraction based on MVS for LA images. The MVS of the reference image takes into account, with enough accuracy, all distortions that are caused by the variation of the imaging sensor's optical axis. Therefore, the MVS is applied with very large parameter sampling ranges and small sampling steps. With the increase of the latitude θ, the distortions become increasingly large; especially near 90°, a little change of the latitude θ can give rise to severe distortion. As a result, the sampling density should grow with θ; setting u = √2 satisfies this requirement. Similarly, since the image distortion at high latitude is larger than that at low latitude, the sampling step of the longitude φ should be reduced with the increase of the latitude θ. We set Δφ = 72°/tk1. The corresponding sampling parameters for MVS are demonstrated in Table 2. In our experiment, we set m = 4 and n = 2.

Table 2. The corresponding sampling parameters for multi-view simulation.

Factor m    1      2      3      4      5
θ           45°    60°    69°    75°    80°
Δφ          48°    36°    24°    18°    12°
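The entries of Table 2 follow directly from u = √2; the short check below (ours) derives θ = arccos(1/t) and Δφ = 72°/t for each tilt level:

```python
import math

u = math.sqrt(2)
for k in range(1, 6):
    t = u ** k                                  # tilt series: u, u^2, ..., u^5
    theta = math.degrees(math.acos(1.0 / t))   # latitude for this tilt
    print(k, round(theta, 1), round(72.0 / t, 1))
# theta: 45.0, 60.0, 69.3, 75.5, 79.8; delta-phi: 50.9, 36.0, 25.5, 18.0, 12.7
# (Table 2 lists these as rounded values)
```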

Table 3 lists the correct matching numbers obtained by ASIFT, SIFT, Harris-Affine [24], Hessian-Affine [7] and the proposed algorithm, respectively. The computation costs of ASIFT, SIFT and the proposed algorithm are shown in Table 4.

Table 3. Correct Matches of different algorithms (Number).

Index             ASIFT    SIFT    HarAff    HesAff    Proposed Method
30°               1126     155     38        33        212
40°               1733     123     31        19        225
50°               918      63      18        11        181
60°               685      18      9         5         96
70°               120      0       0         0         52
Aerial image 1    1247     112     29        17        215
Aerial image 2    128      41      10        7         137

Table 4. Computation Cost (Second).

Index             ASIFT    SIFT    Proposed Method
30°               11.25    4.58    6.11
40°               10.66    4.75    6.36
50°               9.76     4.07    5.89
60°               9.56     3.01    5.76
70°               9.56     2.64    4.18
Aerial image 1    9.27     4.66    6.77
Aerial image 2    10.86    4.85    6.14

It can be inferred that the performance of all the algorithms degrades with the increase of the imaging angle. ASIFT and the proposed algorithm, which can obtain more CPs than the other algorithms, are more robust for LA images. Taking the distorted image at 70° as an example, SIFT gets 0 CPs, the proposed algorithm gets 52 CPs, and ASIFT gets 120 CPs. Compared with the SIFT algorithm, the proposed algorithm successfully achieves the LA image correction, so it is worth spending a little more time. However, ASIFT is time consuming compared with the proposed algorithm: the computation cost of ASIFT is nearly twice that of the proposed algorithm. The multiple matching of ASIFT results in matching point redundancy, which demands a huge calculating quantity and a long computing time in the following process. This indicates that the proposed algorithm is indispensable and more efficient than the other algorithms, especially for LA images. In addition, for the proposed algorithm, the situation whereby the result at 40° is higher than the 30° one is related to the sampling parameter u.

4.2. Performance of the Piecewise Correction with Optimized CPs

The total number of CPs in the whole image is adjustable, normally 30~100; we set the total number m = 45. Considering the CP number and the computational complexity, we set M = 3, N = 5 and P = 2, just as shown in Figure 7. If Ix equals Iy, the threshold TQ should be 0.25; considering that Ix and Iy are usually different, TQ is adjusted to 0.35.

For the CP optimization process, the computation costs on the two CP sets, obtained by ASIFT and by the proposed algorithm, are listed in Table 5. It is illustrated that ASIFT is too time consuming to apply in practice; this is another proof that the proposed algorithm is indispensable. The CPs before and after optimization are shown in Figure 10.

Table 5. Computation Cost of CPs Optimization (Second).

Index             CPs by ASIFT    CPs by the Proposed Method
30°               30.83           5.74
40°               38.34           5.26
50°               29.76           3.21
60°               20.92           2.99
70°               3.56            1.68
Aerial image 1    36.88           4.84
Aerial image 2    3.69            3.56


Figure 10. CPs optimization results: (a) CPs distribution of 40° before optimization; (b) CPs distribution of 40° after optimization; (c) CPs distribution of 60° before optimization; (d) CPs distribution of 60° after optimization; (e) CPs distribution of aerial 1 before optimization; (f) CPs distribution of aerial 1 after optimization; (g) CPs of aerial 2 before optimization; (h) CPs of aerial 2 after optimization.

It is obvious that the CPs gather around some objects before optimization; after the optimization, dense CPs are reduced. To examine the behavior of the proposed method, this part presents the correction results conducted by the total CPs from Section 2.2, the optimized CPs by using DM, the piecewise linear functions [15] using the optimized CPs, and the piecewise strategy from Section 3.2, respectively. Except for the piecewise linear functions, the projective transformation is employed as the correction model. The test points in all the experiments have the same quantity and quality to ensure the equality of the experiments; the number is set to 12. Here, RMSE is used to scale the correction accuracy. It can be given as:

$$RMSE = \sqrt{\sum_{i=1}^{n} \left( \Delta x_i^2 + \Delta y_i^2 \right) \Big/ n} \tag{14}$$

where n is the number of CPs, and Δxi and Δyi are the CP offsets between the reference coordinates and the corrected coordinates. We also compute the correction errors of the x and y directions based on $\sqrt{\sum_{i=1}^{n} \Delta x_i^2 / n}$ and $\sqrt{\sum_{i=1}^{n} \Delta y_i^2 / n}$. The comparison of correction errors for the x and y directions is listed in Table 6, and the comparison of RMSEs is listed in Table 7. The mosaic results by different correction methods for 60°, as an example, are shown in Figure 11. Figure 12 shows the mosaic results of the aerial images 1 and 2 by different correction methods.
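These error metrics are straightforward to compute; a minimal sketch (our helper, not from the paper) over the 12 test points:

```python
import numpy as np

def correction_errors(ref_pts, corrected_pts):
    """Per-axis errors (Table 6) and the RMSE of Equation (14).
    Both arguments are (n, 2) arrays of test point coordinates."""
    d = np.asarray(ref_pts, float) - np.asarray(corrected_pts, float)
    n = len(d)
    err_x = np.sqrt(np.sum(d[:, 0] ** 2) / n)
    err_y = np.sqrt(np.sum(d[:, 1] ** 2) / n)
    rmse = np.sqrt(np.sum(d ** 2) / n)   # equals sqrt(err_x**2 + err_y**2)
    return err_x, err_y, rmse
```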

Table 6. Comparison of correction errors for the x and y direction (pixel).

                  Corrected by the    Corrected by the    The Piecewise        The Piecewise
                  Total CPs           Optimized CPs       Linear Functions     Correction
Index             x        y          x        y          x        y           x        y
30°               0.73     1.11       0.69     0.93       0.87     1.19        0.67     0.93
40°               0.86     0.96       0.86     0.93       1.01     2.19        0.81     0.93
50°               1.05     2.46       0.89     1.76       1.51     3.12        0.83     1.57
60°               12.86    6.87       0.99     2.34       3.03     4.31        1.05     1.70
70°               19.65    9.04       1.59     3.74       5.27     7.10        1.51     2.92
Aerial 1          0.83     1.51       0.81     1.32       0.98     1.42        0.69     1.23
Aerial 2          6.98     14.38      1.96     3.12       4.66     6.34        1.84     2.66

Table 7. Comparison of RMSE (pixel).

Index             Corrected by the    Corrected by the    The Piecewise        The Piecewise
                  Total CPs           Optimized CPs       Linear Functions     Correction
30°               1.44                1.17                1.47                 1.16
40°               1.29                1.27                2.21                 1.24
50°               2.65                1.98                3.43                 1.79
60°               14.57               2.54                5.26                 2.07
70°               23.09               4.07                8.77                 3.32
Aerial 1          1.72                1.58                1.76                 1.42
Aerial 2          15.98               3.68                7.85                 3.23

Visual inspection suggests that Figure 11a and Figure 12b possess obvious double shadows, which means that correcting with the total CPs gives rise to a larger correction error. Similarly, Figure 11c and Figure 12f also possess double shadows; this demonstrates that the piecewise linear functions are not suitable for correcting LA images. The comparison of correction errors for the x and y directions is given in Table 6, and the comparison of RMSEs is listed in Table 7. It is observed that the optimized distribution of CPs is effective at improving the correction accuracy. After CP optimization, the reduction of the number of CPs does not lead to a reduction of the correction accuracy; instead, the correction accuracy in the x and y directions is advanced, and the final RMSE is also improved. For the LA image with 60°, the RMSE improves by 12.03 pixels.


Figure 11. The correction results (for 60° as an example): (a) Correction results by the total CPs; (b) Correction results by the optimized CPs; (c) Correction results by the piecewise linear functions; (d) Correction results by the proposed piecewise strategy.


Compared with the model estimation based on the whole image, the proposed piecewise strategy improves the correction accuracy in both the x and y directions, and the final RMSE improves accordingly. The improvement of the correction accuracy increases with the increase of the oblique angle, especially in the y direction. For the LA image with 70°, the RMSE of the proposed piecewise correction advances by 0.45 pixels compared to the whole-image correction. We can conclude that the proposed algorithm is effective and adaptable for correcting LA distorted images.


Figure 12. The correction results: (a) The correction results by the total CPs of Aerial image 1; (b) The correction results by the total CPs of Aerial image 2; (c) The correction results by the optimized CPs of Aerial image 1; (d) The correction results by the optimized CPs of Aerial image 2; (e) The correction results by the piecewise linear functions of Aerial image 1; (f) The correction results by the piecewise linear functions of Aerial image 2; (g) The correction results by the proposed piecewise strategy of Aerial image 1; (h) The correction results by the proposed piecewise strategy of Aerial image 2.

5. Conclusions

Traditional correction methods are unsuitable for effectively correcting remote sensing images acquired under seriously oblique conditions. On the basis of analyzing the characteristics of LA images, a new correction method is advanced. The proposed method simulates the multi-views of the reference image to ensure the effective matching of feature points, and uses the piecewise strategy based on optimized CPs to achieve highly accurate correction. The new method is analyzed theoretically and assessed experimentally by performing correction for LA images from our semi-physical platform and from aerial remote sensing. Compared with the traditional methods, the new method can effectively correct LA images, and the correction accuracy is significantly improved. This paper has important value for geometric correction, especially under LA imaging conditions.

Acknowledgments: This work was supported by the project of local colleges' and universities' capacity construction of Science and Technology Commission in Shanghai (15590501300), the National Natural Science Foundation of China (61201244), the Shanghai funded training program of young teachers (E1-8500-15-01046) and the Doctoral Program Initiation Fund of SUES (E1-0501-15-0209).

Author Contributions: The paper was a collaborative effort between the authors. Chunyuan Wang contributed to the theoretical modeling, simulation, and manuscript preparation. Xiang Liu, Xiaoli Zhao and Yongqi Wang were collectively responsible for result discussion and improved this manuscript's English language and style.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Ma, J.; Chan, J.C.-W.; Canters, F. Fully Automatic Subpixel Image Registration of Multiangle CHRIS/Proba Data. IEEE Trans. Geosci. Remote Sens. 2010, 48, 2829–2839.
2. Berenstein, R.; Hočevar, M.; Godeša, T.; Edan, Y.; Ben-Shahar, O. Distance-Dependent Multimodal Image Registration for Agriculture Tasks. Sensors 2015, 15, 20845–20862.
3. Kang, Y.-S.; Ho, Y.-S. An efficient image rectification method for parallel multi-camera arrangement. IEEE Trans. Consum. Electron. 2011, 57, 1041–1048.
4. Zheng, Z.; Wang, H.; Toeh, E.K. Analysis of gray level corner detection. Pattern Recognit. Lett. 1999, 20, 149–162.
5. Mokhtarian, F.; Mackworth, A. Scale-based description and recognition of planar curves and two-dimensional shapes. IEEE Trans. Pattern Anal. Mach. Intell. 1986, 8, 34–43.
6. Baker, S.; Nayar, S.; Murase, H. Parametric feature detection. Int. J. Comput. Vis. 1998, 27, 27–50.
7. Mikolajczyk, K.; Schmid, C. A performance evaluation of local descriptors. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 1615–1630.
8. Lowe, D. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
9. Morel, J.M.; Yu, G. ASIFT: A new framework for fully affine invariant image comparison. SIAM J. Imaging Sci. 2009, 2, 438–469.
10. Zitova, B.; Flusser, J. Image registration methods: A survey. Image Vis. Comput. 2003, 21, 977–1000.
11. Bhatti, H.A.; Rientjes, T.; Haile, A.T.; Habib, E.; Verhoef, W. Evaluation of Bias Correction Method for Satellite-Based Rainfall Data. Sensors 2016, 16, 884–901.
12. Goncalves, H.; Goncalves, J.A.; Corte-Real, L. Measures for an objective evaluation of the geometric correction process quality. IEEE Geosci. Remote Sens. Lett. 2009, 6, 292–296.
13. Sedaghat, A.; Mokhtarzade, M.; Ebadi, H. Uniform robust scale-invariant feature matching for optical remote sensing images. IEEE Trans. Geosci. Remote Sens. 2011, 49, 4516–4527.
14. Tait, R.J.; Schaefer, G.; Hopgood, A.A.; Zhu, S.Y. Efficient 3-D medical image registration using a distributed blackboard architecture. In Proceedings of the 28th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, New York, NY, USA, 30 August–3 September 2006; pp. 3045–3048.
15. Goshtasby, A. Piecewise linear mapping functions for image registration. Pattern Recognit. 1986, 19, 459–466.
16. González, V.A.J. An experimental evaluation of non-rigid registration techniques on Quickbird satellite imagery. Int. J. Remote Sens. 2008, 29, 513–527.
17. Han, Y.; Choi, J.; Byun, Y.; Kim, Y. Parameter Optimization for the Extraction of Matching Points between High-Resolution Multisensor Images in Urban Areas. IEEE Trans. Geosci. Remote Sens. 2014, 52, 5612–5621.
18. Wang, C.; Gu, Y.; Zhang, Y. A coarse-to-fine correction method for seriously oblique remote sensing image. ICIC Express Lett. 2011, 5, 4503–4509.
19. Yu, G.; Morel, J.M. A fully affine invariant image comparison method. In Proceedings of the IEEE International Conference on Acoustics, Taipei, Taiwan, 19–24 April 2009; pp. 1597–1600.
20. Beis, J.; Lowe, D. Shape indexing using approximate nearest-neighbor search in high-dimensional space. In Proceedings of the International Conference on Computer Vision and Pattern Recognition, San Juan, Puerto Rico, 17–19 June 1997; pp. 1000–1006.
21. Bennour, A.; Tighiouart, B. Automatic high resolution satellite image registration by combination of curvature properties and random sample consensus. In Proceedings of the 2012 6th International Conference on Sciences of Electronics, Technologies of Information and Telecommunications, Sousse, Tunisia, 21–24 March 2012; pp. 370–374.
22. Zhu, Q.; Wu, B.; Xu, Z. Seed point selection method for triangle constrained image matching propagation. IEEE Geosci. Remote Sens. Lett. 2006, 3, 207–211.
23. Schmid, C.; Mohr, R.; Bauckhage, C. Evaluation of interest point detectors. Int. J. Comput. Vis. 2000, 37, 151–172.
24. Mikolajczyk, K.; Schmid, C. An Affine Invariant Interest Point Detector. In Computer Vision—ECCV 2002; Heyden, A., Sparr, G., Nielsen, M., Johansen, P., Eds.; Springer: Berlin/Heidelberg, Germany, 2002.

© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).