Remote Sensing | Article

Accuracy Improvements in the Orientation of ALOS PRISM Images Using IOP Estimation and UCL Kepler Platform Model

Tiago L. Rodrigues 1,*, Edson Mitishita 1, Luiz Ferreira 1 and Antonio M. G. Tommaselli 2

1 Department of Geomatics, Federal University of Paraná (UFPR), Curitiba 81531-990, Brazil; [email protected] (E.M.); [email protected] (L.F.)
2 Department of Cartography, São Paulo State University (UNESP), Presidente Prudente 19060-900, Brazil; [email protected]
* Correspondence: [email protected]; Tel.: +55-41-3631-3456
Academic Editors: Jose Moreno and Prasad S. Thenkabail Received: 10 March 2017; Accepted: 15 June 2017; Published: 1 July 2017
Abstract: This paper presents a study conducted to determine the orientation of ALOS (Advanced Land Observing Satellite) PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) triplet images, considering the estimation of the interior orientation parameters (IOP) of the cameras and using the collinearity equations with the UCL (University College London) Kepler platform model, which was adapted to use coordinates referenced to the Terrestrial Reference System ITRF97. The results of the experiments showed that the accuracies of the 3D coordinates calculated by 3D photogrammetric intersection improved when the IOP were also estimated. The vertical accuracy was significantly better than the horizontal accuracy. The usability of the estimated IOP was tested by performing the bundle block adjustment of a neighbouring PRISM image triplet. The results in terms of 3D photogrammetric intersection were satisfactory and close to those obtained in the IOP estimation experiment.

Keywords: PRISM images; estimation of IOP; UCL Kepler model
1. Introduction

Images from sensors installed on orbital platforms have become important sources of spatial data. In this context, the images of the PRISM (Panchromatic Remote-sensing Instrument for Stereo Mapping) sensor, onboard the Japanese ALOS (Advanced Land Observing Satellite) satellite, contribute to mapping, earth observation coverage, disaster monitoring and the surveying of natural resources [1]. An important feature of this sensor is that it was composed of three pushbroom cameras (backward, nadir and forward) designed to generate stereoscopic models. An important requirement for extracting geo-spatial information from ALOS images is the knowledge of a set of parameters to connect the image space with the object space. Using a rigorous model, this set of parameters is composed of the exterior orientation parameters (EOP) and the interior orientation parameters (IOP). The EOP can be directly estimated by on-board GNSS receivers, star trackers and gyros. In contrast, the IOP are estimated from laboratory calibration processes before a satellite is launched. However, after the launch and while the satellite is orbiting the Earth, there is a great possibility of changes in the values of the IOP. According to [2,3], three problems can contribute to changes in the original IOP values: accelerations; drastic environmental changes imposed during the launch of the satellite; and the thermal influence of the sun when the satellite is in orbit. Consequently, on-orbit geometric calibration has become an important procedure for extracting reliable geoinformation from orbital images. Examples of the on-orbit geometric calibration of the PRISM sensor can be seen in [4–7].

Remote Sens. 2017, 9, 634; doi:10.3390/rs9070634
www.mdpi.com/journal/remotesensing
Another issue in the use of rigorous models is the platform model, which is responsible for modelling the changes in the EOP during the generation of two-dimensional images. Over the years, several studies have proposed different platform models associated with the principle of collinearity [8–10]. Based on the platform model using second-order polynomials, Michalis [11] associated the linear terms with the satellite velocity components and the quadratic terms with the acceleration components. The platform model developed was called the UCL (University College London) Kepler model, in which the accelerations are estimated from the two-body problem. An example of the UCL Kepler model used in the orientation of PRISM images, with the subsequent generation of a Digital Surface Model (DSM), can be seen in [12]. In this paper, we present a study that was conducted to perform the orientation of an ALOS PRISM image triplet, considering the IOP estimation and using the collinearity model combined with the UCL platform model. Additionally, a methodology for the use of coordinates referenced to a Terrestrial Reference System (TRS) was developed, since the UCL Kepler platform model was originally formulated with coordinates referenced to the Geocentric Celestial Reference System (GCRS). The technique proposed in this work uses a different group of IOP and a different platform model from those used in [4–7]. The use of the UCL Kepler model, instead of models that use polynomials in the platform model, aims to reduce the number of unknowns and enables the use of position and velocity data extracted from the positioning sensors. In addition, significance and correlation tests between the IOP were performed, and an IOP usability test was performed with a bundle adjustment of another neighbouring triplet. It should be noted that, even though this satellite is out of operation, the acquired images remain in an archive for continued use.
In addition, the methodologies investigated in this article can be applied to many other linear array pushbroom sensors with similar characteristics. The following seven sections contain information about the space reference systems of the images used; a brief description of the PRISM sensor; the mathematical modelling used for the estimation of the IOP; the platform model used; and the test field and experiments. Finally, in the last two sections, the results obtained from the performed experiments are shown and discussed.

2. Image Space Reference Systems

The first reference system related to the images is the Image Reference System (IRS). This system is associated with the two-dimensional array of image pixels in a column (C) and line (L) coordinate system. The system's origin is at the centre of the top left image pixel, as shown in Figure 1a. Another system, related to the linear array CCD chip, is the Scanline Reference System (SRS), a two-dimensional coordinate system. Its origin is located at the geometric centre of the linear array CCD chip. The xs axis is parallel to the L axis and is closest to the flight direction. The ys axis is perpendicular to the xs axis, as shown in Figure 1b. The transformation between IRS coordinates and SRS coordinates is presented in Equations (1) and (2):

xs = [L − int(L)] · PS, (1)

ys = [C − (nC − 1)/2] · PS, (2)

where PS is the pixel size of the CCD chip in mm and nC is the number of columns of the image.

The Camera Reference System (CRS) is a three-dimensional system. Its origin is at the perspective centre (PC) of each camera lens. The xc and yc axes are parallel to the xs and ys axes. The zc axis points upward and completes a right-handed coordinate system [11]. The projection of the PC onto the focal plane defines the principal point (PP), and the focal length f is the distance between the PC and the PP.
An illustration of the CRS and SRS in a focal plane with three CCD chips is shown in Figure 2.
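The IRS-to-SRS conversion of Equations (1) and (2) can be sketched in a few lines of code. This is a minimal illustration; the function and variable names, and the 7 µm pixel size used in the example, are illustrative rather than taken from the paper.

```python
import math

# Minimal sketch of Eqs. (1)-(2): converting Image Reference System (IRS)
# coordinates (L, C) to Scanline Reference System (SRS) coordinates (xs, ys).
# Function and variable names are illustrative, not from the paper.

def irs_to_srs(L, C, PS, nC):
    """L, C: line/column in pixels; PS: pixel size in mm; nC: number of columns."""
    xs = (L - math.floor(L)) * PS       # Eq. (1): sub-pixel part of the line
    ys = (C - (nC - 1) / 2.0) * PS      # Eq. (2): offset from the array centre
    return xs, ys

# A column at the geometric centre of a 4992-column array maps to ys = 0:
xs, ys = irs_to_srs(100.0, (4992 - 1) / 2.0, 0.007, 4992)
```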
Figure 1. Image Reference System (a) and Scanline Reference System (b).
Figure 2. Scanline and camera reference systems.
The transformation between SRS coordinates and CRS coordinates is presented in Equation (3):
(xc, yc, zc) = (xs, ys, 0) + (dx, dy, −f), (3)

where dx and dy are translations from the centre of a CCD chip to the PP in the focal plane.
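In code, Equation (3) is a simple translation; a minimal sketch follows (the function name and example offsets are illustrative):

```python
def srs_to_crs(xs, ys, dx, dy, f):
    # Eq. (3): translate by the chip-centre-to-PP offsets (dx, dy) and place
    # the scanline in the focal plane at zc = -f. Names are illustrative.
    return xs + dx, ys + dy, -f

# e.g. offsets dx = 0.5 mm, dy = -0.5 mm, focal length f = 2000 mm
xc, yc, zc = srs_to_crs(1.0, 2.0, 0.5, -0.5, 2000.0)  # -> (1.5, 1.5, -2000.0)
```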
3. PRISM Sensor

The PRISM sensor provides images with a ground sample distance (GSD) of 2.5 m and a radiometric depth of 8 bits in a spectral range from 0.52 to 0.77 µm. The sensor is composed of three independent cameras in the forward, nadir and backward along-track directions, as shown in Figure 3a. The simultaneous imaging by the three cameras is called the triplet mode and covers a range of 35 × 35 km on the ground, as shown in Figure 3b. The forward and backward viewings were oriented to ±23.8 degrees with respect to the nadir viewing to obtain a base/height ratio equal to one [1].
Figure 3. PRISM sensor cameras (a) and PRISM triplet observation mode (b) [1].
In the focal plane of the backward and forward cameras, eight CCD chips, each with 4928 columns, were placed. In the focal plane of the nadir camera, six CCD chips, each with 4992 columns, were placed. The CCD chips overlap by 32 columns. In Figure 4, the arrangements of the CCD chips in the focal planes of the forward, nadir and backward cameras are presented.
Figure 4. Focal plane arrangements of the ALOS/PRISM forward camera (a), nadir camera (b) and backward camera (c) [6].
For the arrangement of the final image distributed to users, with 14,496 columns and 16,000 lines, a set of four CCD chips was selected. Further pixels were not used at the left and right sides. In Figure 5, an example of the nadir image is shown. An on-orbit geometric calibration, performed by JAXA in June 2007, computed the following parameters: the translation values of each CCD chip centre with respect to the PP, the focal lengths, the sizes of the pixels in the CCD chips, and the PP coordinates x0 and y0. These parameters were provided to some researchers, as can be seen in [6,7]. This dataset is currently embedded in BARISTA software, developed by project 2.1 of the Cooperative Research Centre for Spatial Information [7]. The calibrated values of the focal lengths of the backward, nadir and forward cameras are shown in Table 1.
Figure 5. Example of image formation for the PRISM nadir camera [6].

Table 1. Focal lengths of the backward, nadir and forward cameras.
Camera            Focal Length
Backward camera   1999.8762715 mm
Nadir camera      1999.8630195 mm
Forward camera    2000.0645632 mm

An example of the input step of the PRISM sub-image files and the extraction of the mentioned dataset are shown in Figures 6 and 7, respectively.

ALOS PRISM images are made available to users at four different processing levels. The processing levels and the characteristics of each level are shown in Table 2 [1]. For the application of rigorous models, processing levels 1A or 1B1 must be used.

Table 2. ALOS PRISM image processing levels and characteristics.
Level 1A: PRISM raw data extracted from the Level 0 data, expanded and with generated lines. Ancillary information, such as radiometric information, required for processing.
Level 1B1: Data with radiometric correction performed and the absolute calibration coefficient added.
Level 1B2G: Data with geometric correction applied to the Level 1B1 data. Geocoded images oriented to north.
Level 1B2R: Data with geometric correction applied to the Level 1B1 data. Georeferenced images using orbital data.
Figure 6. Example of image input for the PRISM backward camera.
Figure 7. Example of the dataset of CCD chip 3 in the backward and nadir cameras, containing the translation values from the CCD chip centre to the PP, the cameras' focal lengths, the sizes of the pixels in the CCD chips, and the PP coordinates.

4. Mathematical Modelling
The accuracy of orientation using the rigorous model is directly related to the accuracy of the interior orientation. The IOP can be estimated by calibration in the laboratory before the satellite launches. However, the physical conditions in this case are not the same as those found when the satellite is in orbit. According to [2], during the launch of a satellite, the environmental conditions change rapidly and drastically, causing changes to the internal sensor geometry. According to the same authors, the environmental conditions after orbit stabilization are also harsh and may cause internal
geometry changes; however, this is not as crucial as the changes that occur during launch. According to [3], the large acceleration during a launch may change the exact position of the CCD lines in the camera and the relation between the CCD lines. Consequently, at least their geometric linearity must be verified after launch. In addition, according to the same author, the systematic lens distortion can be calibrated before launch but may be influenced by the launch. Furthermore, the thermal influence of the sun can cause changes in the internal sensor geometry. Consequently, to exploit the full geometric potential of a sensor, the IOP should be re-estimated after a satellite launches, preferably periodically. For the PRISM ALOS cameras, JAXA conducted a re-estimation of the parameters multiple times after launch, such as the June 2007 calibration mentioned above.
Considering the IOP of an orbital CCD linear array sensor, in addition to the focal length (f) and the sizes of the pixels in the CCD chip (PS), Poli [13] proposed two parameter sets. The first set of parameters is the IOP related to the optical system. These parameters are the PP coordinates x0 and y0, the change in the focal length ∆f, the coefficients K1 and K2 of the symmetric radial lens distortion, and the scale variations sx and sy in the xs and ys directions, respectively. The scale variation effect is only considered in the ys direction when the CCD array is equal to a linear array. Examples of the effects of the IOP related to the optical system are shown in Figure 8.
Figure 8. Effects of the IOP related to the optical system: (a) change in the focal length, (b) scale variation and (c) symmetric radial lens distortion.
The corrective terms dxf and dyf for the effect of the change in the focal length in the xs and ys directions, and the symmetric radial distortion terms dxr and dyr, are given by:

dxf = −[(xc − x0)/f] · ∆f, (4)

dyf = −[(yc − y0)/f] · ∆f, (5)

dxr = K1 r² · (xc − x0) + K2 r⁴ · (xc − x0), (6)

dyr = K1 r² · (yc − y0) + K2 r⁴ · (yc − y0), (7)

with:

r = √[(xc − x0)² + (yc − y0)²]. (8)
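The optical-system corrections of Equations (4)–(8) can be sketched as a single function. The parameter names mirror the text; the function itself and its layout are illustrative, not from the paper.

```python
import math

# Sketch of the optical-system corrections of Eqs. (4)-(8): change in focal
# length (dxf, dyf) and symmetric radial lens distortion (dxr, dyr).

def optical_corrections(xc, yc, x0, y0, f, df, K1, K2):
    r = math.hypot(xc - x0, yc - y0)            # Eq. (8)
    dxf = -(xc - x0) / f * df                   # Eq. (4)
    dyf = -(yc - y0) / f * df                   # Eq. (5)
    dxr = (K1 * r**2 + K2 * r**4) * (xc - x0)   # Eq. (6)
    dyr = (K1 * r**2 + K2 * r**4) * (yc - y0)   # Eq. (7)
    return dxf, dyf, dxr, dyr
```

With ∆f = K1 = K2 = 0, every correction vanishes, which is a convenient sanity check when wiring the terms into the collinearity model.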
The correction to the scale variation in the ys direction is:
dys = sy ys. (9)

The second set of IOP is related to the CCD chip. These parameters are the change of pixel dimensions in the xs and ys directions, dpx and dpy; the two-dimensional shifts a0 and b0; the rotation θ of the CCD chip in the focal plane with respect to its nominal position; and the central angle of the effect of line bending in the focal plane, considering that the straight CCD line is deformed into an arc. As in the case of the scale variation effect, for a linear array sensor, the parameter dpx can be disregarded. As seen in [6], the CCD chip displacements dx and dy, with respect to the PP, can also be considered IOP related to the CCD chip. Examples of the effects of the IOP related to the CCD chips are shown in Figure 9.
Figure 9. Effects of the IOP related to the CCD chips in the focal plane: (a) change of pixel dimension in the xs and ys directions; (b) CCD chip rotation; (c) CCD chip shift; and (d) CCD chip bending.
The two-dimensional shift parameters a0, b0 are added to the dx and dy quantities:

(xc, yc) = (xs, ys) + (dx + a0, dy + b0). (10)

The effects of the change of pixel dimension in the ys direction (dpy), the CCD chip rotation in the xs and ys directions (dxθ, dyθ) and the line bending in the xs direction (dxδ) are given by:

dpy = (∆py / PS) · ys, (11)

dxθ = ys · sin θ, (12)
dyθ = ys (1 − cos θ), (13)

dxδ = ys rs² δ, (14)

rs² = xs² + ys². (15)
In this research, all of the mentioned IOP were initially considered, except the dpy parameter, because its effect is similar to the effect caused by the scale variation in the ys direction (Figures 8 and 9). The sy parameters were considered different for each CCD chip to get closer to the physical reality and to avoid strong correlations with the change in the focal length, unlike the approach in [6]. The central angle δ of the bending effect was also considered unique for each CCD chip. Since the corrective terms dyθ and dys of rotation and scale variation in the ys direction are functions of their own coordinate ys, they were grouped into a single correction term (with coefficients a1 and b1):

dyθ + dys = ys (1 − cos θ) − ys sy = ys (1 − cos θ − sy), (16)

dyθ + dys = a1 ys, (17)

where

a1 = 1 − cos θ − sy. (18)

The term of rotation in the xs direction was considered as:

dxθ = ys · sin θ = b1 ys. (19)
The IOP dx, dy, x0 and y0 were fixed at their nominal values since the residual errors are absorbed by the parameters a0 and b0. The nominal values of the focal lengths used in BARISTA software were also fixed since their uncertainties were estimated through the ∆f parameters. The IOP a0, b0 and a1, b1 were estimated using weighted constraints with standard errors of 1.5 pixels and 300 ppm, respectively, the latter as adopted in BARISTA software [7]. To define the reference system and avoid singularities, as adopted in [7], for each of the three PRISM cameras one CCD chip was selected as the master chip. In this research, the second CCD chip was considered the master CCD chip. As a result, the parameters a0, b0 and a1, b1 for this CCD chip were fixed, and its systematic errors were absorbed by the EOP. The mathematical transformation of SRS coordinates to CRS coordinates, considering the set of IOP, is given by the following equations:

xc = xs + dx + a0 − x0 + a1 ys + dxr + dxf + dxδ, (20)

yc = ys + dy + b0 − y0 + b1 ys + dyr + dyf. (21)
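The full SRS-to-CRS transformation of Equations (20) and (21) can be sketched as follows. The dictionary layout and the use of precomputed correction terms (dxr, dyr, dxf, dyf, dxdelta) are an illustrative way to organise the parameters, not the paper's implementation.

```python
# Sketch of Eqs. (20)-(21): SRS -> CRS transformation with the full IOP set.

def srs_to_crs_with_iop(xs, ys, p):
    xc = (xs + p["dx"] + p["a0"] - p["x0"] + p["a1"] * ys
          + p["dxr"] + p["dxf"] + p["dxdelta"])          # Eq. (20)
    yc = (ys + p["dy"] + p["b0"] - p["y0"] + p["b1"] * ys
          + p["dyr"] + p["dyf"])                         # Eq. (21)
    return xc, yc

# With all parameters zero the transformation reduces to the identity:
zero = dict.fromkeys(
    ["dx", "dy", "a0", "b0", "x0", "y0", "a1", "b1",
     "dxr", "dyr", "dxf", "dyf", "dxdelta"], 0.0)
xc, yc = srs_to_crs_with_iop(1.0, 2.0, zero)             # -> (1.0, 2.0)
```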
The collinearity equations with the IOP are:

xs = −f (∆X/∆Z) − dx − a0 + x0 − a1 ys − dxr − dxf − dxδ, (22)

ys = −f (∆Y/∆Z) − dy − b0 + y0 − b1 ys − dyr − dyf, (23)

with:
∆X = r11(t)[X − XS(t)] + r12(t)[Y − YS(t)] + r13(t)[Z − ZS(t)], (24)

∆Y = r21(t)[X − XS(t)] + r22(t)[Y − YS(t)] + r23(t)[Z − ZS(t)], (25)

∆Z = r31(t)[X − XS(t)] + r32(t)[Y − YS(t)] + r33(t)[Z − ZS(t)], (26)
where X, Y, Z and XS (t), YS (t), ZS (t) are, respectively, the object space coordinates of a point and the PC sensor at an instant t; and r11 (t), ..., r33 (t) are the elements of the rotation matrix R (t), which is
responsible for aligning the CRS with the TRS at a given instant t. Since the satellite attitude data were not available for the images used in this research, the rigorous model considered in this study was of the Position-Rotation type, as described in [14]. Thus, the rotation matrix R(t) was defined as a function of the nonphysical attitude angles ω, φ and κ, which vary in time:

R(t) = RZ(κ(t)) RY(φ(t)) RX(ω(t)). (27)
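The projection described by Equations (22)–(27) can be sketched as below. The IOP correction sums of Equations (22) and (23) are bundled into `corr_x` and `corr_y` for brevity; all names are illustrative, not from the paper.

```python
import math

# Sketch of Eqs. (22)-(27): projecting an object-space point into scanline
# coordinates with a Position-Rotation model.

def rotation_matrix(omega, phi, kappa):
    # R(t) = Rz(kappa) * Ry(phi) * Rx(omega), Eq. (27)
    co, so = math.cos(omega), math.sin(omega)
    cp, sp = math.cos(phi), math.sin(phi)
    ck, sk = math.cos(kappa), math.sin(kappa)
    Rx = [[1, 0, 0], [0, co, -so], [0, so, co]]
    Ry = [[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]]
    Rz = [[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]]
    mul = lambda A, B: [[sum(A[i][k] * B[k][j] for k in range(3))
                         for j in range(3)] for i in range(3)]
    return mul(Rz, mul(Ry, Rx))

def collinearity(X, Y, Z, XS, YS, ZS, R, f, corr_x=0.0, corr_y=0.0):
    v = (X - XS, Y - YS, Z - ZS)
    dX = sum(R[0][j] * v[j] for j in range(3))   # Eq. (24)
    dY = sum(R[1][j] * v[j] for j in range(3))   # Eq. (25)
    dZ = sum(R[2][j] * v[j] for j in range(3))   # Eq. (26)
    return -f * dX / dZ - corr_x, -f * dY / dZ - corr_y  # Eqs. (22)-(23)
```

With zero rotation angles and the PC directly above the point, both scanline coordinates come out as zero, which is a quick consistency check.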
After considering all of the mentioned IOP, significance and correlation analyses were performed between the parameters to find an optimal group of IOP that could be used. The IOP significance test was based on a comparison of the estimated parameter value with its standard deviation, obtained from the variance-covariance matrix. When the IOP value was lower than the value of its standard deviation, it was considered non-significant and, consequently, without importance in the final mathematical model. This test was performed iteratively, analysing one IOP at a time. The correlation analysis between the IOP aimed to identify dependencies and the loss of physical meaning in the value of a parameter. In this research, the correlation was considered strong when the value of the correlation coefficient was higher than 0.75 (75%).

5. Platform Model

Based on the second-order polynomial platform model, Michalis [11] indicated that the first-order coefficient represents the velocity of the satellite on the reference axis and, similarly, the second-order coefficient represents the acceleration on the same axis. This platform model is called the UCL (University College London) Kepler model. The components of the acceleration are calculated from the components of the position, using the mathematical formulation of the two-body problem [15]. The components of the sensor PC position on the satellite are calculated using the theory of uniformly accelerated motion, as shown in Equations (28)–(30):

Xs(t) = X0 + ux t − [GM · X0 · t²] / [2 · (X0² + Y0² + Z0²)^(3/2)], (28)

Ys(t) = Y0 + uy t − [GM · Y0 · t²] / [2 · (X0² + Y0² + Z0²)^(3/2)], (29)

Zs(t) = Z0 + uz t − [GM · Z0 · t²] / [2 · (X0² + Y0² + Z0²)^(3/2)], (30)
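The propagation of Equations (28)–(30) can be sketched as follows. The conventional WGS84 value of the standard gravitational parameter GM is assumed here, and the function and variable names are illustrative.

```python
# Sketch of Eqs. (28)-(30): the UCL Kepler platform model, propagating the
# sensor perspective-centre position with the two-body acceleration.

GM = 3.986004418e14  # m^3 s^-2 (conventional WGS84 value; assumption)

def ucl_kepler(X0, Y0, Z0, ux, uy, uz, t):
    r3 = (X0**2 + Y0**2 + Z0**2) ** 1.5
    Xs = X0 + ux * t - GM * X0 * t**2 / (2.0 * r3)   # Eq. (28)
    Ys = Y0 + uy * t - GM * Y0 * t**2 / (2.0 * r3)   # Eq. (29)
    Zs = Z0 + uz * t - GM * Z0 * t**2 / (2.0 * r3)   # Eq. (30)
    return Xs, Ys, Zs

# At t = 0 the model returns the state at the first image line:
state0 = ucl_kepler(7.0e6, 0.0, 0.0, 0.0, 7.5e3, 0.0, 0.0)  # -> (7.0e6, 0.0, 0.0)
```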
where X0, Y0, Z0 and ux, uy, uz are, respectively, the position and velocity components of the sensor PC at the time at which the first line of the image was obtained; t is the acquisition time of an image line; and GM is the standard gravitational parameter.

The rotation angles of the sensor may be considered constant during the image acquisition time [16] or can be propagated by polynomials [8–10,12], depending on the satellite's motion characteristics. In this research, the rotation angles ω and φ were considered invariant, since the image acquisition occurred in approximately 6 s and the attitude control of the ALOS has sufficient accuracy [17]. However, the κ angle was considered variable, being modelled by a second-order polynomial, as shown in Equations (31)–(33). This is due to the yaw angle steering operation, which continuously modifies and corrects the satellite yaw attitude according to the orbit latitude argument to compensate for the effects of the Earth's rotation on the sensor image data [17] (crab movement):

ω = ω0, (31)

φ = φ0, (32)

κ = κ0 + d1 · t + d2 · t², (33)
where ω0, φ0, and κ0 are the orientation angles in the first line of each image; and d1, d2 are the polynomial coefficients of the κ variation. An important issue in the use of this platform model is that the coordinates in the object space of control or check points must be referenced to the GCRS, due to the mathematical formulation of the two-body problem in the calculation of acceleration. Normally, the coordinates of the points collected in the object space are referred to a TRS. To avoid the transformation of coordinates from TRS to GCRS, the platform model composed of Equations (28)–(30) was adapted. In this transformation, the movements of precession and nutation, as well as polar motion, must be taken into consideration in addition to the Earth's rotation [15]. However, according to [18], in a short period of orbit propagation, the effects of precession, nutation and polar motion can be disregarded. Since the PRISM images with 16,000 lines are formed in approximately 6 s, only the influence of the Earth's rotational movement was added to the equations of the two-body problem. Thus, the UCL platform model adapted for the use of coordinates referenced to a TRS is defined as follows:

Xs(t) = X0 + ux·t + (1/2)·(−GM·X0/r³ + ωt²·X0 + 2·ωt·uy)·t², (34)

Ys(t) = Y0 + uy·t + (1/2)·(−GM·Y0/r³ + ωt²·Y0 − 2·ωt·ux)·t², (35)

Zs(t) = Z0 + uz·t − (GM·Z0)/(2·r³)·t², (36)

with:

r = √(X0² + Y0² + Z0²), (37)
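As a numerical sketch of the propagation in Equations (31)–(37), assuming SI units and the GM and Earth-rotation-rate values quoted in the text, the position and yaw-angle models can be written as below; the function and variable names are illustrative, not from the paper:

```python
import math

# Constants quoted in the text (assumed here in SI units)
GM = 3.986004415e14       # standard gravitational parameter (m^3 s^-2)
OMEGA_T = 7.292115e-5     # Earth rotation rate, 7,292,115 x 10^-11 rad/s

def ucl_adapted_position(X0, Y0, Z0, ux, uy, uz, t):
    """Sensor PC position at time t after the first image line (Eqs. 34-36),
    in a TRS, with centrifugal and Coriolis terms added to the two-body model."""
    r = math.sqrt(X0**2 + Y0**2 + Z0**2)                                  # Eq. (37)
    Xs = X0 + ux*t + 0.5*(-GM*X0/r**3 + OMEGA_T**2*X0 + 2*OMEGA_T*uy)*t**2
    Ys = Y0 + uy*t + 0.5*(-GM*Y0/r**3 + OMEGA_T**2*Y0 - 2*OMEGA_T*ux)*t**2
    Zs = Z0 + uz*t - 0.5*(GM*Z0/r**3)*t**2
    return Xs, Ys, Zs

def kappa(t, kappa0, d1, d2):
    """Second-order polynomial propagation of the yaw angle (Eq. 33)."""
    return kappa0 + d1*t + d2*t**2
```

At t = 0 the model reduces to the first-line state vector, and over the roughly 6 s acquisition span the quadratic terms stay small relative to the linear velocity terms.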
where ωt is the magnitude of the Earth's rotational angular velocity. The advantage of using the UCL Kepler model instead of models that use polynomials is the reduction in the number of EOP unknowns in each image triplet. For the ALOS satellite, the position and velocity components from GPS, XGPS, YGPS, ZGPS, uxGPS, uyGPS, and uzGPS, are available in the .SUP files (ancillary 8) with a sampling interval of 60 s for the day on which the image was obtained [1]. The data are provided by the GPS receiver that is on board the satellite. To estimate the state vectors referring to the instants of acquisition of the first lines of each image, a spline interpolation was used. In this process, the minimum accuracy of the spline interpolation, omitting a central point and using 34 surrounding points, was 6 mm for positions and 7 µm/s for velocities. These interpolation results are in agreement with those obtained by [19,20], who used the Hermite interpolator. However, the position and velocity components from GPS have uncertainties XT, YT, ZT, uxT, uyT, and uzT when accurate orbit determination is not applied by JAXA. Thus, the components X0, Y0, Z0, ux, uy and uz were obtained by:

X0 = XGPS + XT, (38)

Y0 = YGPS + YT, (39)

Z0 = ZGPS + ZT, (40)

ux = uxGPS + uxT, (41)

uy = uyGPS + uyT, (42)

uz = uzGPS + uzT. (43)
The values of XGPS, YGPS, ZGPS, uxGPS, uyGPS, and uzGPS are fixed in the estimation process by least squares adjustment. To avoid strong correlations with the parameters related to the systematic changes of CCD chip positions in the focal planes, the parameters XT, YT, ZT, uxT, uyT, and uzT received weighted constraints in the least squares adjustment. As mentioned above, the satellite attitude data were not available for the images used in this research. Therefore, the EOP ω0, φ0, κ0, d1 and d2 did not receive weighted constraints. To identify the strong dependencies between EOP and IOP and avoid inaccurate results or the loss of the physical meaning of parameters, correlation analyses were performed using values obtained from the covariance matrix. As before, the correlation was considered strong when the value of the correlation coefficient was higher than 0.75, or 75%.

Since the state vectors are referenced to ITRF97, the value of ωt was 7,292,115 × 10⁻¹¹ rad/s. The value of GM used was the one indicated in the .SUP file, which was 3.986004415000000 × 10¹⁴ m³ s⁻².

6. Test Field and Experiments

In this research, two neighbouring PRISM triplets at processing level 1B1 were used, obtained on the same path, both on 20 November 2008. The test field covered by the images included the city of Presidente Prudente and adjacent regions in the state of São Paulo, Brazil. Figure 10a shows the location and the areas covered by the triplets used in this study, which were called triplets n.1 and n.2. In Figure 10b,c, the location of the triplets within the state of São Paulo and the location of the state of São Paulo in Brazil are shown, respectively.
Figure 10. Location of the state of São Paulo in Brazil (a), location of the test field in the state of São Paulo (b), and the triplets n.1 and n.2 (c).
In each triplet, forty tie points were collected and distributed homogeneously. An example is shown in Figure 11, with the distribution of these points in triplet n.1. All points were manually collected in the three images of each triplet; i.e., all points have three image measurements.
Figure 11. Distribution of tie points in triplet n.1.
The coordinates of the ground control and check points were extracted from orthophotos with 1 m GSD and from Digital Terrain Models (DTM) with 5 m GSD generated from aerial images. The positional accuracy of the orthophotos and the altimetric information extracted from the DTM are, respectively, compatible with the 1:2000 and 1:5000 scales and with class A of the Brazilian Planimetric Cartographic Accuracy Standard. This means that 90% of the points present errors less than 1 m and 2.5 m, respectively. In the area covered by triplet n.1, 22 ground control points and 20 check points were collected. In the area covered by triplet n.2, 21 ground control points and 24 check points were collected. In Figure 12a,b, the distributions of the control and check points in the areas covered by triplets n.1 and n.2 are shown, respectively.
Figure 12. Distribution of ground control points and check points in triplets n.1 (a) and n.2 (b).
In Table 3, the numbers of ground control points and check points on each CCD chip for the three cameras of triplets n.1 and n.2 are shown.

Table 3. Quantities of ground control points and check points on each CCD chip.
To facilitate the measurement of ground control and check points on the images and orthophotos, the determination of centroids of geometric entities was utilized, as presented in [21]. The geometric entities used were buildings, soccer fields, courts, or anthropic structures of other types. As an example, in Figure 13a, the coordinates of point 16 in the image space, that is, in the PRISM images, were obtained from the coordinates of points 16_1, 16_2, 16_3 and 16_4. In Figure 13b, point 16 obtained in the same way in the object space, that is, in the orthophotos, is shown. After estimating the planimetric coordinates of the centroid points, the altimetric coordinates were extracted from the DTM of the region.
Figure 13. Examples of a centroid in a PRISM image (a) and in an orthophoto (b).
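The centroid computation for a measured entity can be sketched as below; the corner coordinates are hypothetical values standing in for the image measurements of points 16_1 to 16_4:

```python
# Hypothetical corner measurements (column, row in pixels) of the four points
# 16_1 ... 16_4 that define centroid point 16; the values are illustrative only
corners_image = [(1204.3, 887.1), (1210.8, 887.4), (1210.5, 893.9), (1204.1, 893.6)]

def centroid(points):
    """Centroid of a geometric entity measured by its corner points."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

col_16, row_16 = centroid(corners_image)
print(f"point 16 (image space): column {col_16:.3f}, row {row_16:.3f}")
```

The same function applies unchanged to the orthophoto measurements in the object space; only the planimetric coordinates come from the centroid, while the height is read from the DTM.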
In this research, four experiments were carried out using triplets n.1 and n.2. In experiments 1 and 2, the bundle adjustment trials of triplet n.1 without and with the estimation of IOP were performed. To analyse the quality of the IOP obtained from experiment 2, in experiment 4 they were used to perform the bundle adjustment of triplet n.2. In experiment 3, the bundle adjustment of triplet n.2 was performed without the use of the IOP estimated in experiment 2. The results obtained from experiments 3 and 4 are compared and discussed in the next section. In Table 4, the configuration of each experiment is shown.

Table 4. Main characteristics of the four experiments.

Experiment 1: Bundle adjustment of triplet n.1 images without estimation of IOP.
Experiment 2: Bundle adjustment of triplet n.1 images with estimation of IOP.
Experiment 3: Bundle adjustment of triplet n.2 images without the use of IOP estimated in experiment 2.
Experiment 4: Bundle adjustment of triplet n.2 images using the IOP estimated in experiment 2.
In experiment 2, in addition to the IOP correlation analysis, the correlations between IOP and EOP were estimated. The procedures for these analyses were previously mentioned in Sections 4 and 5. To better demonstrate the steps of the proposed methodology in experiment 2, a workflow is shown in Figure 14.
Figure 14. Workflow of the proposed methodology of experiment 2.
7. Results and Discussion

7.1. Obtained Results from the Experiments of Triplet n.1

As mentioned, two experiments were performed with triplet n.1. The first one was the conventional bundle adjustment of the triplet. In this experiment, the images' EOP and the 3D coordinates of control, tie and check points were computed. In the second one, IOP were also estimated. As mentioned in Section 4 and indicated in Figure 14, the analysis of IOP significance was performed iteratively, analysing one IOP each run. After fourteen iterations, a reduced group of significant IOP was defined. The values of the estimated significant IOP and their respective standard deviations are shown in Table 5.
Table 5. Estimated significant IOP values of the backward, nadir and forward cameras and their standard deviations from experiment 2.

Backward camera:
a0_chip1, σ (mm): −0.0386, 0.0167
b0_chip1, σ (mm): 0.0688, 0.0160
b0_chip3, σ (mm): −0.0120, 0.0058
a0_chip4, σ (mm): 0.0145, 0.0097
∆f, σ (mm): 2.1501, 1.7158

Nadir camera:
a0_chip1, σ (mm): 0.0281, 0.0077
b0_chip3, σ (mm): 0.0247, 0.0129
b0_chip4, σ (mm): 0.0637, 0.0289
δ_chip2, σ (rad): 3.96 × 10⁻⁶, 1.64 × 10⁻⁶
∆f, σ (mm): 2.4863, 1.3128

Forward camera:
a0_chip3, σ (mm): 0.0506, 0.0087
b0_chip3, σ (mm): −0.0301, 0.0127
b0_chip5, σ (mm): 0.0327, 0.0142
b0_chip6, σ (mm): 0.0642, 0.0272
∆f, σ (mm): 1.7388, 1.2220
As can be seen in Table 5, for the three cameras, the parameters K1 and K2 and the parameters a1 and b1 of all CCD chips were not significant, because the values of their standard deviations were greater than the values of the parameters themselves. Consequently, they were not considered in the bundle adjustment, and their removal caused no change in planimetric and altimetric accuracy. With the exception of the IOP δ_chip2 in the nadir camera, all of the δ parameters of the CCD bending effect were insignificant. To investigate the reduction of the functional mathematical models, the correlations between the IOP were analysed. In Table 6, the values of the correlation coefficients are shown.

Table 6. Correlation between the IOP in experiment 2.

Backward camera:
            a0_chip1   b0_chip1   b0_chip3   a0_chip4   ∆f
a0_chip1     1.00
b0_chip1     0.16       1.00
b0_chip3    −0.19       0.22       1.00
a0_chip4    −0.95      −0.16       0.17       1.00
∆f          −0.35       0.42       0.37       0.34      1.00

Nadir camera:
            a0_chip1   b0_chip3   b0_chip4   δ_chip2    ∆f
a0_chip1     1.00
b0_chip3     0.41       1.00
b0_chip4     0.54       0.81       1.00
δ_chip2      0.80       0.28       0.37       1.00
∆f           0.00       0.14       0.18      −0.02      1.00

Forward camera:
            a0_chip3   b0_chip3   b0_chip5   b0_chip6   ∆f
a0_chip3     1.00
b0_chip3    −0.88       1.00
b0_chip5    −0.76       0.89       1.00
b0_chip6    −0.81       0.87       0.89       1.00
∆f          −0.56       0.62       0.55       0.58      1.00
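The correlation screening used here, deriving correlation coefficients from the covariance matrix of the adjustment and flagging pairs above the 0.75 threshold, can be sketched as follows; the covariance values are illustrative, not the paper's actual matrix:

```python
import numpy as np

def correlation_matrix(cov):
    """Correlation coefficients derived from a covariance matrix."""
    sd = np.sqrt(np.diag(cov))
    return cov / np.outer(sd, sd)

def strong_pairs(corr, names, threshold=0.75):
    """Parameter pairs whose |correlation| exceeds the threshold."""
    n = len(names)
    return [(names[i], names[j], corr[i, j])
            for i in range(n) for j in range(i)
            if abs(corr[i, j]) > threshold]

# Illustrative 3-parameter covariance (assumption: not from the paper)
cov = np.array([[4.0, -3.8, 0.2],
                [-3.8, 4.0, 0.1],
                [0.2, 0.1, 1.0]])
names = ["a0_chip1", "a0_chip4", "df"]
print(strong_pairs(correlation_matrix(cov), names))
```

With these toy values the single flagged pair has a coefficient of −0.95, mirroring the backward-camera case in Table 6.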
As can be verified in Table 6, in the backward camera there was one case of strong correlation, between the IOP a0_chip1 and a0_chip4. In the nadir camera, there were two cases of strong correlation: between b0_chip3 and b0_chip4, and between δ_chip2 and a0_chip1. In the forward camera, all IOP related to systematic changes in CCD chip positions in the focal plane were strongly correlated with each other. After analysing the reduced model, it was concluded that only the IOP δ_chip2 could be ignored without significant impact on the planialtimetric accuracy of the bundle adjustment. After this simplification, the values of the IOP and their precisions, as well as the correlation values, were not changed. The strong correlations between some IOP, as previously mentioned, can indicate a possible loss of physical meaning of the parameters when estimating them by this technique with bundle adjustment. Check points were used to verify the obtained accuracies in the performed experiments.
The 3D coordinates of the check points, as extracted from the orthophotos and DTM, were compared to the 3D coordinates estimated with bundle adjustment. The values of the mean and root mean square errors (RMSE) of the discrepancies were calculated. All discrepancies were calculated in the Local Geodetic System (LGS). In Table 7, these values from experiments 1 and 2 are shown.

Table 7. Mean and root mean square errors of the discrepancies obtained in experiments 1 and 2.
Experiment 1:
           ∆XL       ∆YL       ∆ZL
mean (m)  −0.0776   −0.0028   −3.3045
RMSE (m)   1.4808    1.8279    5.1204

Experiment 2:
           ∆XL       ∆YL       ∆ZL
mean (m)  −0.2712   −0.4271   −1.5886
RMSE (m)   1.2376    1.1683    3.7642
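The accuracy figures above follow from simple per-component statistics of the check-point discrepancies. A minimal sketch, which also combines the Table 7 RMSE values into the planimetric RMSE used in the comparisons (about 2.35 m for experiment 1 versus about 1.70 m for experiment 2):

```python
import numpy as np

def mean_and_rmse(disc):
    """Mean and RMSE of one component of the check-point discrepancies."""
    disc = np.asarray(disc, dtype=float)
    return disc.mean(), np.sqrt(np.mean(disc**2))

def planimetric_rmse(rmse_x, rmse_y):
    """Combined planimetric RMSE from the X_L and Y_L components."""
    return np.hypot(rmse_x, rmse_y)

# Reproducing the planimetric comparison from the Table 7 RMSE values
print(planimetric_rmse(1.4808, 1.8279))   # experiment 1, ~2.35 m
print(planimetric_rmse(1.2376, 1.1683))   # experiment 2, ~1.70 m
```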
To analyse the planimetric and altimetric discrepancies in experiments 1 and 2, they were plotted and are presented in Figures 15 and 16. The expected planimetric accuracy of 1 GSD was adopted as a reference, and this value is indicated with a circle in the graphs.
Figure 15. Planimetric discrepancies in check points from experiments 1 (a) and 2 (b).
Figure 16. Altimetric discrepancies in the check points from experiments 1 (a) and 2 (b).
As can be seen in Figure 15a,b, the planimetric accuracies obtained from experiment 2 were significantly improved. The number of planimetric discrepancies less than 2.5 m was three times greater than that obtained from experiment 1, although a small tendency was found in the Y discrepancies. Comparing the altimetric discrepancies in Figure 16a,b, the discrepancies from experiment 2 had a better distribution than those from experiment 1, although with a small tendency towards negative values. To complement the graphical analyses and to verify the normality of the discrepancy samples in the components, the Shapiro-Wilk test was performed in the two experiments, considering the expected planimetric accuracy of one GSD. In experiment 1, the null hypothesis of normality in the discrepancy samples was not rejected in the XL and ZL components at the confidence level of 95%. However, in experiment 2, the null hypothesis of normality was not rejected in any of the three components at the same confidence level.

Considering the RMSE values of the 3D discrepancies from experiments 1 and 2, it can be seen in Table 7 that the estimation of IOP improved the accuracies by 0.24 m, 0.66 m, and 1.36 m in the XL, YL and ZL components, respectively. The improvement in planimetric accuracy was 0.65 m, as can be seen in the graph in Figure 17.
Figure 17. Graphical representation of the planimetric and altimetric RMSE in experiments 1 and 2.
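The Shapiro-Wilk normality screening described above can be sketched with scipy; the sample below is synthetic (drawn from a normal distribution with moments loosely resembling the ∆ZL discrepancies), since the individual check-point discrepancies are not reproduced in the paper:

```python
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(42)
# Synthetic stand-in for the 20 check-point discrepancies of one component
dz = rng.normal(loc=-1.6, scale=3.8, size=20)

stat, p_value = shapiro(dz)
# At the 95% confidence level, the null hypothesis of normality is
# rejected only when p_value < 0.05
normal = p_value >= 0.05
print(f"W = {stat:.3f}, p = {p_value:.3f}, normality not rejected: {normal}")
```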
The strong dependencies between IOP and EOP were also analysed. As shown in Table 8, only in the backward camera were the IOP a0_chip1 and a0_chip4 strongly correlated with the EOP κ0.
Table 8. High correlations between the IOP and EOP in experiment 2.

Backward camera:
            κ0
a0_chip1    0.92
a0_chip4   −0.95

These strong correlations between IOP and EOP show a probable dependency and, thus, a lack of physical meaning for the a0_chip1 and a0_chip4 parameters. The results showed that the IOP computed in experiment 2 are feasible to be used only in triplet n.1. Despite this result, this set of IOP was used to perform the bundle adjustment of triplet n.2.
7.2. Obtained Results from the Experiments of Triplet n.2

Using triplet n.2, two experiments were performed. In the first one, that is, in experiment 3, the bundle adjustment of triplet n.2 was performed without using the IOP estimated in experiment 2. In the second, that is, in experiment 4, the IOP estimated in experiment 2 were used to perform the bundle adjustment of the triplet. The objective of this experiment was to investigate the usability of the IOP estimated with one triplet in the bundle adjustment of another triplet. As performed in the previous experiments, the 3D discrepancies of check points were computed. The mean and RMSE values of the 3D discrepancies were also calculated. In Table 9, the obtained values for experiments 3 and 4 are shown.

Table 9. Mean and root mean square errors of the discrepancies obtained in experiments 3 and 4.
Experiment 3:
           ∆XL       ∆YL       ∆ZL
mean (m)  −0.2936   −1.5743    1.9034
RMSE (m)   2.8500    2.1704    5.4814

Experiment 4:
           ∆XL       ∆YL       ∆ZL
mean (m)  −0.3160    0.5742    0.2137
RMSE (m)   1.4102    2.1995    3.7334
As shown in Figure 18a,b, the planimetric discrepancies of experiment 4 have fewer trends than do those of experiment 3. In experiment 3, most of the discrepancies are in the South quadrant. Additionally, in experiment 4, the number of planimetric discrepancies less than 2.5 m is three times more than that in experiment 3, showing better planimetric accuracy when the IOP from experiment 2 were used, despite the IOP a0_chip1 and a0_chip4 of the backward camera being correlated with the EOP κ0.
Figure 18. Planimetric discrepancies in check points from experiments 3 (a) and 4 (b).
As 19a,b, the the altimetric discrepancies from experiment 4 are distributed without As shown shownininFigure Figure 19a,b, altimetric discrepancies from experiment 4 are distributed trends. In contrast, the altimetric discrepancies from experiment 3 have a strong trend towards positive As shown in Figure 19a,b, the altimetric discrepancies from experiment 4 are distributed without trends. In contrast, the altimetric discrepancies from experiment 3 have a strong trend values. planimetric accuracy, the altimetric accuracy wasexperiment also improved when the IOP trend from withoutSimilar trends.toIn contrast, thetoaltimetric discrepancies from 3 have aalso strong towards positive values. Similar planimetric accuracy, the altimetric accuracy was improved experiment 2 were used. towards positive values. Similar to planimetric accuracy, the altimetric accuracy was also improved when the IOP from experiment 2 were used. when the IOP from experiment 2 were used.
Figure 19. Altimetric discrepancies in check points obtained in experiments 3 (a) and 4 (b).
Using the Shapiro-Wilk test to check the normality of the 3D discrepancy samples, the null hypothesis of normality was not rejected for any of the components XL, YL and ZL at the 95% confidence level. As shown in Table 7, the bundle adjustment using the IOP estimated in experiment 2 showed improvements in accuracy. The improvements were approximately 1.44 m and 1.75 m in the XL and ZL components, respectively. In the YL component, the accuracies were nearly equal, with a difference of 3 cm. In Figure 20, it can be observed that the planimetric accuracy improved when the estimated IOP from experiment 2 were used in the bundle adjustment procedure.
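The normality check described above can be reproduced with an off-the-shelf Shapiro-Wilk implementation. The sketch below uses synthetic discrepancy samples (the actual check-point discrepancies are not reproduced here) and scipy.stats.shapiro:

```python
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(42)
# Synthetic stand-ins for the check-point discrepancies (m) in the
# X_L, Y_L and Z_L components; 30 check points assumed for illustration.
disc = {axis: rng.normal(0.0, 2.0, size=30) for axis in ("XL", "YL", "ZL")}

alpha = 0.05  # 95% confidence level
for axis, sample in disc.items():
    stat, p = shapiro(sample)
    # p > alpha: fail to reject H0, i.e. the sample is compatible with normality
    print(f"{axis}: W={stat:.3f}, p={p:.3f}, normal={p > alpha}")
```

With real data, each component's discrepancy vector from the check points would replace the synthetic samples.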
Figure 20. Graphical representation of planimetric and altimetric RMSE from experiments 3 and 4.
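The planimetric and altimetric RMSE values compared throughout these experiments follow the usual definitions: the planimetric RMSE combines the horizontal discrepancy components, while the altimetric RMSE uses the height component alone. A minimal sketch with invented discrepancy values:

```python
import numpy as np

# Hypothetical 3D discrepancies (m) at five check points
# (reference minus adjusted coordinates); values are invented.
dX = np.array([0.8, -1.2, 0.5, 1.9, -0.3])
dY = np.array([-0.6, 1.1, -1.4, 0.2, 0.9])
dZ = np.array([2.1, -0.7, 1.3, -1.8, 0.4])

# Planimetric RMSE: root mean square of the horizontal discrepancy vectors
rmse_xy = np.sqrt(np.mean(dX**2 + dY**2))
# Altimetric RMSE: root mean square of the height discrepancies
rmse_z = np.sqrt(np.mean(dZ**2))
print(f"planimetric RMSE = {rmse_xy:.2f} m, altimetric RMSE = {rmse_z:.2f} m")
```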
8. Discussion

In light of the results obtained in the experiments, a synthesis can be given. Regarding accuracy, the bundle adjustment with IOP estimation showed improvements over the bundle adjustment without IOP estimation, mainly in the altimetric component. While the resulting planimetric accuracy improved by 0.65 m, the altimetric accuracy improved by 1.36 m. However, it is important to highlight that the measurements in the image space of the control and check points used in the bundle adjustment were refined by using the centroid methodology, presenting a quality better than 1 pixel.

It is also worth mentioning that the magnitudes of the planimetric accuracies found in this research are close to those found in [6,7], which used different platform models and different IOP sets from those used here. In contrast, the magnitudes of the altimetric RMSE values found in this research were larger, that is, less accurate than those found in the cited studies.
Two questions that were not addressed in the works carried out by [6,7] are the analysis of the significance of the IOP and of the correlation between them. This was done with the objective of assessing the importance of each parameter, a possible loss of physical meaning and a possible simplification of the functional mathematical model. In the significance analysis, it was observed that the effects of the symmetric radial distortions of the lens systems, of the rotations of the CCD chips in the focal plane and of the scale variation in the ys direction were not significant. Consequently, these parameters could be ignored. For the effect of bending of the CCD chip, the parameter was only significant in chip 2 of the nadir camera. The effect of systematic change in focal length was significant in all cameras. The parameters related to systematic changes in CCD chip positions in the focal planes did not present a homogeneous behaviour. After analysing the correlation between the significant IOP, it was verified that only the IOP δ_chip2 could be ignored without a significant change in planimetric and altimetric accuracies.

An issue that was not analysed in [6,7] was the application of the IOP estimated with one triplet in the bundle adjustment of a different triplet. After the correlation analyses between the IOP and EOP, strong correlations were detected between the IOP a0_chip1 and a0_chip4 of the backward camera and the EOP κ0, indicating a possible loss of physical significance of the IOP. Even so, when these IOP were applied to the bundle adjustment of triplet n.2, there were planimetric and altimetric improvements compared to the bundle adjustment without the use of IOP.

It should be noted that even though this satellite is out of operation, the images obtained by it remain in archives and are still being used in several applications.
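The IOP-EOP correlation analysis mentioned above reduces to normalising the a-posteriori covariance matrix of the adjusted parameters. The sketch below uses a hypothetical 3x3 covariance block (in practice the matrix comes from the inverse of the bundle adjustment normal equations); all numeric values are invented:

```python
import numpy as np

# Hypothetical a-posteriori covariance block for three adjusted parameters,
# e.g. (a0_chip1, a0_chip4, kappa0) of the backward camera; values invented.
cov = np.array([
    [4.0e-6, 3.5e-6, 1.0e-7],
    [3.5e-6, 4.1e-6, 9.0e-8],
    [1.0e-7, 9.0e-8, 2.5e-7],
])

# Correlation coefficient: rho_ij = cov_ij / (sigma_i * sigma_j)
sigma = np.sqrt(np.diag(cov))
corr = cov / np.outer(sigma, sigma)

# |rho| close to 1 between an IOP and an EOP flags a possible loss of
# physical meaning of the IOP, as observed for a0_chip1, a0_chip4 and kappa0.
print(np.round(corr, 3))
```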
In addition, the methodologies investigated in this article can be applied to other linear array pushbroom sensors with similar characteristics, such as the images obtained by the Chinese ZY-3 satellite.

9. Conclusions and Recommendations

This study investigated the importance of estimating IOP to compute 3D coordinates by photogrammetric intersection with bundle adjustment using a PRISM image triplet. Additionally, the study proposed some improvements in the platform model developed by Michalis [11] to use coordinates referenced to the TRS ITRF97. The bundle adjustment with IOP estimation improved the quality of the 3D photogrammetric intersection. Considering the RMSE of the 3D discrepancies in check points, the planimetric and altimetric accuracies increased by 0.65 and 1.36 m, respectively. After the bundle adjustment with IOP estimation, a trend in the southern area of the experimental set was observed. However, the sample of discrepancies in the Y component was considered normal by the Shapiro-Wilk test, considering the expected planimetric accuracy of one GSD.

Ground control point and check point coordinates were obtained from the determination of centroids of geometric entities, such as buildings, soccer fields and courts. This procedure improves the precision of point measurements, both in the image space and in the object space. This can be confirmed since, in all experiments, more than 97% of the residual values of the observations after bundle adjustment were lower than 1 pixel.

In the bundle adjustment with IOP estimation, only part of the IOP related to systematic changes in CCD chip positions in the focal planes and the IOP related to changes in focal lengths were considered significant. The effects of the rotation and bending of CCD chips, the symmetric radial distortion of the lens and the scale variation in the ys direction could be discarded without affecting the accuracy results.
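The centroid procedure used for the control and check points can be illustrated as follows; the corner coordinates are hypothetical and merely stand in for vertices digitised on a rectangular building roof, in the spirit of [21]:

```python
import numpy as np

# Hypothetical planimetric corner coordinates (m, e.g. UTM) of a rectangular
# building roof; the control point is defined as the centroid of its vertices.
corners = np.array([
    [672010.0, 7183520.0],
    [672034.0, 7183522.0],
    [672032.0, 7183498.0],
    [672008.0, 7183496.0],
])

centroid = corners.mean(axis=0)  # averaging reduces per-vertex measurement noise
print(centroid)
```

The same averaging is applied to the corresponding vertices measured in the image space, which is why the resulting point measurements can reach sub-pixel quality.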
The practical analysis of the usability of IOP estimated from an independent triplet in other applications was performed. The IOP estimated from triplet n.1 were applied to the bundle adjustment of triplet n.2. Despite the strong correlations between the IOP a0_chip1 and a0_chip4 and the EOP κ0 in the backward camera, the result of using these IOP was satisfactory. The improvements in the planimetric and altimetric accuracies were 0.40 m and 1.75 m, respectively.

For future studies, it is recommended to verify the estimation of the IOP of the PRISM sensor by using the polynomial model presented in [7] in conjunction with the adapted UCL platform model; to estimate the IOP of other sensors using the methodology of this study; and
to analyse the use of the rigorous collinearity and coplanarity models with ground control points and straight lines.

Acknowledgments: The authors thank the company Topocart for providing the orthophotos and DTM used for the extraction of control and check points, and the Department of Cartography, Unesp, for the use of the PRISM images.

Author Contributions: Tiago Rodrigues and Edson Mitishita conceived and designed the study; Tiago Rodrigues performed the data processing and analysis and wrote the manuscript. Edson Mitishita, Luiz Ferreira and Antonio Tommaselli contributed to the interpretation of the results and to the manuscript writing.

Conflicts of Interest: The authors declare no conflict of interest.
List of Mathematical Symbols Used for the Interior Orientation Parameters

The following mathematical symbols are used for the interior orientation parameters in this manuscript:

nc: Number of columns in a CCD chip
PS: Pixel dimension in the CCD chip
f: Focal length
x0, y0: PP coordinates
K1, K2, K3: Coefficients of the symmetric radial lens distortion
∆f: Change in the focal length
sx, sy: Optical system scale variations in the xs and ys directions
dpx, dpy: Change of pixel dimensions in the xs and ys directions
dx, dy: Two-dimensional CCD chip displacements with respect to the PP
θ: Rotation of the CCD chip in the focal plane with respect to its nominal position
a0_chipn, b0_chipn: Change of the CCD chip positions, for a chip n, in the xs and ys directions
a1_chipn: Parameter resulting from the grouping of the CCD chip rotation and scale variation effects, for a chip n, in the ys direction
b1_chipn: Parameter resulting from the correction of the CCD chip rotation effect, for a chip n, in the xs direction
δ_chipn: Central angle of the line bending effect in the focal plane, for a chip n
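As an illustration of how a few of these symbols interact, the sketch below applies a symmetric radial distortion correction (K1, K2), a chip displacement (dx, dy) and a focal-length change (∆f) to a focal-plane point. The functional form is only indicative, not the paper's full calibration model, and all numeric values are invented:

```python
def correct_point(xs, ys, f, df, K1, K2, dx, dy):
    """Apply illustrative IOP corrections to focal-plane coordinates (m)."""
    r2 = xs ** 2 + ys ** 2
    # symmetric radial lens distortion: relative displacement K1*r^2 + K2*r^4
    drad = K1 * r2 + K2 * r2 ** 2
    xs_c = xs * (1.0 + drad) + dx  # distortion-corrected x plus chip shift
    ys_c = ys * (1.0 + drad) + dy
    # a change df in the focal length rescales the image coordinates
    scale = f / (f + df)
    return xs_c * scale, ys_c * scale

# invented values: a point 10 mm off-axis, PRISM-like focal length of ~1.94 m
x_c, y_c = correct_point(0.010, -0.005, f=1.939, df=1e-4,
                         K1=-2e-3, K2=1e-6, dx=1e-6, dy=-2e-6)
print(x_c, y_c)
```

The corrections are tiny relative to the coordinates themselves, which is consistent with several of these parameters turning out to be statistically insignificant in the adjustment.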
References

1. ALOS User Handbook. Available online: http://www.eorc.jaxa.jp/ALOS/en/doc/alos_userhb_en.pdf (accessed on 15 December 2012).
2. Baltsavias, E.P.; Zhang, L.; Eisenbeiss, H. DSM generation and interior orientation determination of IKONOS images using a Testfield in Switzerland. Photogramm. Fernerkund. Geoinf. 2006, 1, 41–54.
3. Jacobsen, K. Geometry of satellite images—Calibration and mathematical models. In Proceedings of the Korean Society of Remote Sensing—ISPRS International Conference, Jeju, Korea, 12 October 2005; pp. 182–185.
4. Tadono, T.; Shimada, M.; Iwata, T.; Takaku, J. Results of Calibration and Validation of ALOS Optical Sensors, and Their Accuracy Assessments. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Barcelona, Spain, 23 July 2007; pp. 3602–3605.
5. Tadono, T.; Iwata, T.; Shimada, M.; Takaku, J. Updated results of calibration and validation of PRISM onboard ALOS. In Proceedings of the IEEE International Geoscience and Remote Sensing Symposium, Honolulu, HI, USA, 25 July 2010; pp. 36–38.
6. Kocaman, S.A. Sensor Modeling and Validation for Linear Array Aerial and Satellite Imagery. Ph.D. Thesis, Institute of Geodesy and Photogrammetry, Eidgenössische Technische Hochschule Zürich, Zürich, Switzerland, 2008.
7. Weser, T.; Rottensteiner, F.; Willneff, J.; Poon, J.; Fraser, C.S. Development and testing of a generic sensor model for pushbroom satellite imagery. Photogramm. Rec. 2008, 23, 255–274. [CrossRef]
8. Rodrigues, T.L.; Ferreira, L.D.D. Aplicação do movimento kepleriano na orientação de imagens HRC—CBERS 2B. Bol. Ciênc. Geod. 2013, 19, 114–134. [CrossRef]
9. Tommaselli, A.M.; Marcato Junior, J. Bundle block adjustment of CBERS-2B HRC imagery combining control points and lines. Photogramm. Fernerkund. Geoinf. 2012, 2012, 129–139. [CrossRef]
10. Marcato Junior, J.; Tommaselli, A.M.G. Exterior orientation of CBERS-2B imagery using multi-feature control and orbital data. ISPRS J. Photogramm. Remote Sens. 2013, 79, 219–225. [CrossRef]
11. Michalis, P. Generic Rigorous Model for along Track Stereo Satellite Sensors. Ph.D. Thesis, University College London, London, UK, 2005.
12. Dowman, I.; Michalis, P.; Li, L. Analysis of Urban Landscape Using Multi Sensor Data. In Proceedings of the 4th ALOS PI Symposium, Tokyo, Japan, 11 November 2010.
13. Poli, D. Modelling of Spaceborne Linear Array Sensors. Ph.D. Thesis, Institute of Geodesy and Photogrammetry, Eidgenössische Technische Hochschule Zürich, Zürich, Switzerland, 2005.
14. Kim, T.; Dowman, I. Comparison of two physical sensor models for satellite images: Position-Rotation model and Orbit-Attitude model. Photogramm. Rec. 2006, 21, 110–123. [CrossRef]
15. Seeber, G. Satellite Geodesy, 2nd ed.; Walter de Gruyter: Berlin, Germany, 2003.
16. Michalis, P.; Dowman, I. A Generic Model for along Track Stereo Sensors Using Rigorous Orbit Mechanics. Photogramm. Eng. Remote Sens. 2008, 74, 303–309. [CrossRef]
17. Satoru, W.; Akihiro, H. Development of the Earth Observation Satellite "DAICHI" (ALOS). NEC Tech. J. 2011, 6, 62–66.
18. Leick, A. Satellite Surveying, 3rd ed.; John Wiley: Hoboken, NJ, USA, 2004.
19. Sandwell, D.T.; Myer, D.; Mellors, R.; Shimada, M.; Brooks, B.; Foster, J. Accuracy and Resolution of ALOS Interferometry: Vector Deformation Maps of the Father's Day Intrusion at Kilauea. IEEE Trans. Geosci. Remote Sens. 2008, 46, 3524–3534. [CrossRef]
20. Schneider, M.; Lehner, M.; Müller, R.; Reinartz, P. Stereo Evaluation of ALOS/PRISM Data on ESA-AO Test Sites—First DLR Results. In Proceedings of the XXI ISPRS Congress—Technical Commission I, Beijing, China, 3–11 July 2008; pp. 739–744.
21. Mitishita, E.A.; Habib, A.; Centeno, J.; Machado, A.; Lay, J.; Wong, C. Photogrammetric and lidar data integration using the centroid of a rectangular building roof as a control point. Photogramm. Rec. 2008, 23, 19–35. [CrossRef]

© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).