Orientation of linear array imagery by adjustment in image space

José A. Gonçalves
University of Porto – Science Faculty, Porto, Portugal
[email protected]

Abstract— Images acquired by linear array sensors on board satellites, such as SPOT, are oriented for mapping purposes by rigorous sensor models. These models use a set of orbital and attitude parameters that represent the exterior orientation of an image. Rigorous values of the exterior orientation parameters are determined with ground control points (GCPs). This article presents an alternative methodology for linear array image orientation. The sensor model is set up with the approximate parameters provided with the image data, and the orientation is then improved by applying corrections in image space. The paper justifies the conditions under which the method is applicable. Several experiments were carried out with images of a mountainous region in Portugal. The initial orientation parameters provided with the images had an accuracy of the order of 500 m. The method improved the orientation to sub-pixel accuracy, similar to the results achieved by a commercial software package that implements a rigorous sensor model.

Keywords – SPOT, Exterior orientation, Accuracy, GCP, ICP
I. INTRODUCTION
Topographic mapping from satellite images, as well as other data integration applications, requires high positional accuracy. Images must be geo-referenced and rectified with a positional accuracy of the order of the image spatial resolution. This requires a precise sensor model that represents the image formation process by establishing mathematical relations between object coordinates (3D) and image coordinates (2D). Once the image orientation is known, images can also be ortho-rectified using a digital elevation model (DEM) in order to correct distortions due to relief. Quite often, optical sensors are pointed far from the nadir direction, which increases the relief displacement effects and justifies the need for precise sensor models. Once images are ortho-rectified, pixel-by-pixel image integration can be done safely. In the case of optical sensors, images are formed by an optical system that can be mathematically modelled as a central projection, using the collinearity equations. Linear array images are formed dynamically, requiring the sensor trajectory to be modelled along the orbit, as well as the attitude variations. Many of these models are described in the literature (see for example [1] and [2]) and have been implemented in software packages to ortho-rectify, extract 3D coordinates from stereo pairs and generate DEMs.
0-7803-9050-4/05/$20.00 ©2005 IEEE.
5365
These sensor models require GCPs with an accuracy better than the image resolution. The regions of the world where topographic mapping from satellite images is most important are remote regions, where GCP collection is most difficult. Hence it is important to minimize the requirement for GCPs. A possible solution is the use of other satellite images with a more rigorous geo-location, such as SAR [3]. Another alternative is the use of satellite navigation data obtained by on-board equipment, applying simplified methods that exploit the good accuracy of the exterior orientation provided by that equipment. That is the case of the orientation models used with very high resolution sensors, such as Ikonos or Quickbird [4]. In this article we describe the application of an alternative orientation model for SPOT images, based on adjustments in image space, instead of the adjustment of the exterior orientation parameters usually done with rigorous orbital models. The proposed model is very simple to apply and is justified by several factors, such as the small field of view and the relatively small terrain height variations compared with the satellite altitude. These factors are analysed in detail in section III of this article. In this study five SPOT panchromatic images (10 m pixel size) of a mountainous region in Portugal were used. They were acquired by satellites 1, 2 and 4, as described in table I, which also indicates other image characteristics, such as the incidence angle (to the left or to the right of the trajectory) and the processing level (1A or 1B). Incidence angles are in general large, which causes large relief displacements. Figure 1 shows the location of the 5 images in North Portugal.
It was possible to verify that the proposed alternative orientation model was applicable to the images, keeping an accuracy similar to the one achieved with a rigorous orbital sensor model found in a commercial software package (PCI Orthoengine). GCPs were obtained from topographic maps of scale 1:25,000 or, in the case of image 1, surveyed in the field using GPS.

TABLE I. CHARACTERISTICS OF THE IMAGES USED

#   SPOT   Level   Pixels/Lines   Date       Inc. Ang. (º)
1   1      1A      6000/6000      05-08-90   L25.8
2   1      1B      7380/6011      27-10-86   L24.1
3   2      1A      6000/6000      29-06-94   R27.6
4   2      1A      6000/6000      27-06-94   L28.9
5   4      1B      6302/6004      14-10-98   R01.6
Figure 1. Location of the images used

II. SPOT SENSOR MODEL

SPOT satellites are equipped, since SPOT-4, with the DORIS (Doppler Orbitography and Radiopositioning Integrated by Satellite) positioning system, which allows the sensor position to be determined with an accuracy better than 1 meter [5]. The positioning systems of SPOT 1 and 2 achieve a lower accuracy, of the order of hundreds of meters. The attitude is measured by star-tracking systems which, even in the case of SPOT-4, only assure a geo-location accuracy of about one hundred meters. The navigation data provide a set of exterior orientation parameters that are used in a sensor model to establish the transformation between object space (WGS84 coordinates: longitude, latitude and height, λ, ϕ, h) and image space (column and row pixel position, x, y). The sensor model used is described in full detail in the documentation provided by Spotimage [6].

A. Image-to-object projection

Given the position of a point on the image and its height on the ground (height above the WGS84 ellipsoid), this problem consists of calculating the corresponding geographic coordinates. The equations can be written in generic terms as:

(λ, ϕ) = F(x, y, h)    (1)

The formulation established by Spotimage consists of calculating, for an image position (x, y) of processing level 1A, the equations of the straight line that represents the light ray that formed the pixel. Two orientation parameters are the time of the first image line (t0) and the time interval between consecutive lines (∆t). Coordinate y is converted into the time of that image line (equation 2). From the trajectory, the instantaneous position vector of the satellite (rS) is calculated in a geocentric Cartesian system (equation 3):

t = t0 + y·∆t    (2)

rS = (XS(t), YS(t), ZS(t))    (3)

The sensor pointing direction is obtained from the look angles applied to the sensor (ψX, ψY), which can be calculated for position (x, y) from the data provided [6]. The attitude variation must also be considered: it is measured on a sample of 72 image lines and, after integration, instantaneous roll, pitch and yaw can be calculated for any image line. A unit vector (u1, u2, u3) of the instantaneous viewing direction is obtained. Together with the sensor position, the equation of the line that originated pixel (x, y) is written as:

(X, Y, Z) = (XS(t), YS(t), ZS(t)) + k·(u1, u2, u3)    (4)

where k is the sensor-object distance. This line is intersected with a surface of constant height, h, yielding the geographic coordinates (λ, ϕ). The uncertainty in these coordinates results from the inaccuracy of the trajectory (very small in the cases of SPOT-4 and SPOT-5) and of the absolute attitude. For images of processing level 1B, image coordinates must first be converted back to level 1A; the corresponding mathematical formulation is clearly described in the Spotimage documentation [6].

B. Object-to-image projection

The inverse problem consists of, given a point by its geodetic coordinates (WGS84), determining the corresponding position on the image. It is solved from the equations of the previous problem, in an iterative manner; details of the algorithm can be found in the documentation provided by Spotimage [6]. In an equivalent manner this projection can be written as:

(x, y) = G(λ, ϕ, h)    (5)
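The intersection of the viewing ray of eq. (4) with a constant-height surface can be sketched in a few lines. The sketch below is a simplified illustration that assumes a spherical Earth of radius R, whereas the real model intersects a surface at height h above the WGS84 ellipsoid:

```python
import math

def ray_to_ground(sat_pos, u, h, R=6371000.0):
    """Intersect the viewing ray P + k*u with the sphere of radius R + h.

    sat_pos : satellite position (geocentric Cartesian, metres)
    u       : unit viewing vector, pointing towards the Earth
    h       : terrain height above the reference surface (metres)
    Returns (longitude, latitude) in degrees on the spherical approximation.
    """
    px, py, pz = sat_pos
    ux, uy, uz = u
    r = R + h
    # Solve |P + k*u|^2 = r^2, i.e. k^2 + 2*(P.u)*k + (|P|^2 - r^2) = 0
    b = px * ux + py * uy + pz * uz
    c = px * px + py * py + pz * pz - r * r
    disc = b * b - c
    if disc < 0:
        raise ValueError("ray misses the surface")
    k = -b - math.sqrt(disc)      # nearest intersection (on the sensor side)
    x, y, z = px + k * ux, py + k * uy, pz + k * uz
    lon = math.degrees(math.atan2(y, x))
    lat = math.degrees(math.asin(z / r))
    return lon, lat

# Nadir-looking example: satellite 830 km above the equator at longitude 0
lon, lat = ray_to_ground((6371000.0 + 830000.0, 0.0, 0.0), (-1.0, 0.0, 0.0), 0.0)
```

For a nadir view the intersection falls directly below the satellite, at (0º, 0º) in this example.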
These equations are normally used in the image orientation process. Function G involves the exterior orientation parameters that describe the orbit and the initial attitude angles. Taking a set of GCPs, a system of equations is established, having the orientation parameters as unknowns. The number of parameters is usually 7 [1], although some authors prefer to model the attitude variation, increasing the number to 10 or more. Orientation models implemented in commercial software packages for photogrammetric processing of satellite images are of this kind; that is the case of the PCI software [2]. Usually these programs provide few details on the models, besides basic rules on choosing GCPs and interpreting residuals.

C. Accuracy of the exterior orientation provided with images

A set of GCPs was identified and measured on topographic maps of scale 1:25,000, in order to determine the image orientation, as described in the following section. Those points were first used in a global determination of the geo-location error of the images: they were projected from image to object space and the planimetric errors were assessed. Table II
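The iterative solution of the inverse problem can be sketched generically. The scheme below, a Newton iteration on (x, y) with a numerical Jacobian, is an illustrative assumption (the Spotimage algorithm differs in detail), and the forward function used here is a toy stand-in for F of eq. (1):

```python
def invert_projection(forward, lon, lat, h, x0=0.0, y0=0.0, tol=1e-6, max_iter=20):
    """Find image coordinates (x, y) such that forward(x, y, h) == (lon, lat).

    'forward' plays the role of F in eq. (1). A Newton iteration with a
    numerical Jacobian is used; this is a generic sketch, not the exact
    Spotimage algorithm.
    """
    x, y = x0, y0
    eps = 1e-4
    for _ in range(max_iter):
        f0 = forward(x, y, h)
        rx, ry = lon - f0[0], lat - f0[1]
        if abs(rx) < tol and abs(ry) < tol:
            break
        # Numerical Jacobian of the forward projection with respect to (x, y)
        fx = forward(x + eps, y, h)
        fy = forward(x, y + eps, h)
        a11, a21 = (fx[0] - f0[0]) / eps, (fx[1] - f0[1]) / eps
        a12, a22 = (fy[0] - f0[0]) / eps, (fy[1] - f0[1]) / eps
        det = a11 * a22 - a12 * a21
        # Solve the 2x2 linear system for the update
        x += ( a22 * rx - a12 * ry) / det
        y += (-a21 * rx + a11 * ry) / det
    return x, y

# Toy forward model (purely illustrative): a slightly skewed linear mapping
def toy_forward(x, y, h):
    return (-8.0 + 1e-4 * x + 1e-6 * y, 41.5 - 1e-4 * y + 1e-6 * x)

target = toy_forward(1234.0, 5678.0, 0.0)
x, y = invert_projection(toy_forward, target[0], target[1], 0.0)
```

Since the toy mapping is linear, the iteration recovers the original pixel position (1234, 5678) essentially exactly; for the real, mildly non-linear F a few iterations suffice.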
shows the mean errors in longitude and latitude (in arc seconds) and as ground distances.

TABLE II. MEAN ERRORS IN IMAGE-TO-OBJECT PROJECTION

Image   No. of GCPs   ∆λ (″)   ∆ϕ (″)   Distance (m)
1       22            15.2     -19.1    688
2       13             5.0     -13.1    422
3       13             9.3      -5.9    282
4        9            11.0     -21.9    722
5       18             1.9      -0.8     52
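As a sanity check, the angular errors of table II can be converted to ground distances with a spherical-Earth approximation. The latitude of about 41.5ºN (North Portugal) is an assumption made here for the illustration:

```python
import math

def arcsec_error_to_metres(dlon_sec, dlat_sec, lat_deg, R=6371000.0):
    """Convert mean errors (∆λ, ∆ϕ) in arc seconds to a ground distance in
    metres, on a spherical Earth of radius R at latitude lat_deg.
    Illustrative approximation only."""
    sec = math.pi / (180.0 * 3600.0)   # one arc second in radians
    de = R * dlon_sec * sec * math.cos(math.radians(lat_deg))  # east component
    dn = R * dlat_sec * sec                                    # north component
    return math.hypot(de, dn)

# Image 1 of table II, at the approximate latitude of North Portugal
d = arcsec_error_to_metres(15.2, -19.1, 41.5)
```

The result is about 687 m, in agreement with the 688 m listed in table II.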
Figure 2. Effect of an attitude error in image-to-object projection
As expected for SPOT-1 and 2, the errors are of several hundreds of meters [6]. In the case of image 5, acquired by SPOT-4, the error is, as expected, much smaller.

III. ADJUSTMENT IN IMAGE SPACE

The alternative orientation model of SPOT images is based on the object-to-image equations, set up with the parameters provided in the ancillary data, and introduces corrections in the image coordinates instead of in the exterior orientation parameters. It results from the fact that the shifts found in the object-to-image projection of GCPs, within an image, are approximately constant. This model can be written as:

x = Gx(λ, ϕ, h) + ∆x
y = Gy(λ, ϕ, h) + ∆y    (6)

where Gx and Gy are the components of function G in eq. (5). The only parameters to determine would be (∆x, ∆y), which could be estimated with a single GCP. This method is applicable if certain conditions are verified; they are analysed below.

A. Narrow field of view of SPOT

Linear sensors on board satellites normally have small fields of view. SPOT, for example, has an angle of 4º (coverage of 60 km at a distance of 830 km). As a result, an error in the exterior orientation parameters produces an error in the projection to image space that is practically constant along the image. This is very different from what happens with conventional aerial photography (wide-angle lenses with a field of view of nearly 90º). In the case of attitude errors, the projection error in image coordinates may not be constant, but it has a predictable variation along the image. Let us consider an image with a large inclination angle, θ (25º at the image centre, i.e. 23º and 27º at the borders), and an error ∆θ of 0.025 degrees in the vertical plane in which the sensor is tilted, as shown in figure 2. The projection error will be about 427 m (H·tan(θ+∆θ) − H·tan(θ), where H is the orbit height). On the other border the error is larger by 29 m, i.e. approximately 3 pixels more. Since the field of view is small, the variation is approximately linear with the image position.

In such a situation the simplified model (the first approach considered a constant shift over the whole image) can be extended to an affine transformation (parameters A1,…,A6). First the approximate coordinates (x0, y0) are calculated using eq. (5), and the corrections (∆x, ∆y) are expressed by:

∆x = A1·x0 + A2·y0 + A3
∆y = A4·x0 + A5·y0 + A6    (7)
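The shift and affine corrections of eqs. (6) and (7) can be estimated from GCPs by ordinary least squares. The sketch below, using numpy with synthetic data, is illustrative and not the author's implementation:

```python
import numpy as np

def fit_affine_correction(xy_pred, xy_obs):
    """Fit eq. (7): dx = A1*x0 + A2*y0 + A3, dy = A4*x0 + A5*y0 + A6.

    xy_pred : (n, 2) image coordinates predicted by G (approximate orientation)
    xy_obs  : (n, 2) image coordinates measured on the image (GCPs)
    Returns the 2x3 coefficient matrix [[A1, A2, A3], [A4, A5, A6]].
    Requires at least 3 well-distributed GCPs.
    """
    xy_pred = np.asarray(xy_pred, float)
    d = np.asarray(xy_obs, float) - xy_pred        # observed shifts (dx, dy)
    A = np.column_stack([xy_pred, np.ones(len(xy_pred))])
    coef, *_ = np.linalg.lstsq(A, d, rcond=None)   # (3, 2) solution
    return coef.T

def apply_correction(coef, xy):
    """Apply the fitted affine correction to predicted image coordinates."""
    xy = np.atleast_2d(np.asarray(xy, float))
    A = np.column_stack([xy, np.ones(len(xy))])
    return xy + A @ coef.T

# Synthetic example: 4 GCPs with a known affine distortion applied
true = np.array([[1e-3, 2e-5, 5.0], [-1e-5, 2e-3, -3.0]])
gcps = np.array([[100., 200.], [5000., 300.], [400., 5500.], [5800., 5900.]])
obs = gcps + np.column_stack([gcps, np.ones(4)]) @ true.T
coef = fit_affine_correction(gcps, obs)
```

With the synthetic data the least-squares fit recovers the known coefficients; the shift-only model of eq. (6) corresponds to taking just the mean of the observed (dx, dy).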
Naturally, for a smaller sensor inclination and smaller attitude errors (e.g. 100 m on the ground) the differences between positions along the image may be sub-pixel. In that case the shift model may be enough.

B. Effect of terrain height

Another issue that must be analysed in this approximate orientation model is how the correction in image space depends on the terrain height. Let us consider an attitude error, ∆θ, that creates, at height 0, an error on the ground, ∆S, as shown in figure 3.

Figure 3. Projection errors on the ground (∆S, ∆S′) at different heights (0 and h)

For a different height, h, the effect will be ∆S′, which can be expressed as:

∆S′ = ∆S·(H − h)/H    (8)

For example, with h = 2000 m (the height range in an image) and H = 830 km (the satellite height), ∆S′ will be smaller than ∆S by only 0.24%. For ∆S = 500 m, the difference will be only about 1 m, i.e. 1/10 of a pixel. Only in very extreme conditions (large height variations and very poor initial orientation) will this effect be non-negligible. For this reason the proposed model does not need to take the terrain height into account.
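The magnitudes quoted in sections III.A and III.B can be verified numerically, assuming H = 830 km as in the text:

```python
import math

H = 830e3                        # orbit height (m), as assumed in the text
dtheta = math.radians(0.025)     # attitude error considered in section III.A

def ground_error(theta_deg):
    """Ground displacement H*(tan(theta + dtheta) - tan(theta))."""
    t = math.radians(theta_deg)
    return H * (math.tan(t + dtheta) - math.tan(t))

# Variation of the projection error across the swath (borders at 25º ± 2º)
swath_variation = ground_error(27.0) - ground_error(23.0)   # roughly 29 m, ~3 pixels

# Terrain-height effect of eq. (8): dS = 500 m at height 0, h = 2000 m
h, dS = 2000.0, 500.0
dS_prime = dS * (H - h) / H
difference = dS - dS_prime       # roughly 1.2 m, about 1/10 of a pixel
```

The border-to-border variation comes out near 29 m (about 3 pixels of 10 m), and the height effect near 1 m, matching the values used in the discussion above.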
Two simplified orientation models were then considered: one based on a constant shift (minimum of 1 GCP) and the other based on an affine transformation (minimum of 3 GCPs). With larger numbers of GCPs, residuals are obtained, which can be compared with the residuals given by a rigorous sensor model based on orbital parameters.

IV. RESULTS OF THE ORIENTATION MODEL

The orientation models described were applied to the 5 available images. Figure 4 represents the residual vectors (scale factor 10) for image 1, which show the systematic shift effect. The same GCPs were used to orient the images in PCI Orthoengine. The root mean square (RMS) of the residuals was calculated for the 3 cases and is shown in table III. A graphical representation of the residual norm √(RMSx² + RMSy²) is given in figure 5.

RMS residuals were also calculated for GCPs and ICPs in an independent check experiment with image 3; they can be found in table IV.

TABLE IV. RMS OF RESIDUALS FOUND ON GCPS AND ICPS (IMAGE 3)

                  Shift           Affine          Orbital model
Type of points    RMSx    RMSy    RMSx    RMSy    RMSx    RMSy
GCP               0.21    0.14    0.53    0.62    0.39    0.38
ICP               0.60    1.19    0.40    0.70    0.51    0.78

The affine model shows, for the ICPs, a result as good as the orbital model in this case. The results of the shift model are not as good, but are still at the level of 1 pixel.

V. CONCLUSIONS
An alternative model for SPOT image orientation was developed. Its performance was nearly as good as that of a rigorous orbital model. The alternative model requires approximate exterior orientation, which is provided with adequate accuracy even for the older SPOT images. In its simplest mode (shift) the GCP requirement is very small. However, better results are obtained with the affine model, since it can model the variation of the looking angles along the image. Its implementation is very simple and the ground control requirements are small. This model performed nearly as well as the rigorous orbital model, and better in the case of level 1B images. An orientation model similar to the one proposed has been used to orient high resolution satellite images, such as Ikonos [4]. These images are resampled to a mode called GEO and treated by a sensor model based on rational functions. If SPOT images are projected in a similar manner, replacing the sensor model by a rational function, a common orientation procedure can be used for any kind of linear array imagery. This would facilitate the use of different types of satellite images with common photogrammetric software.
Figure 4. Error vectors in image space, scale factor 10, for image 1
TABLE III. RMS OF THE RESIDUALS FOUND FOR THE 5 IMAGES USING THE 3 ORIENTATION MODELS

                     Shift           Affine          Orbital model
Img.   No. of GCPs   RMSx    RMSy    RMSx    RMSy    RMSx    RMSy
1      22            0.42    0.58    0.40    0.51    0.37    0.45
2      13            0.51    1.29    0.50    0.48    0.65    1.06
3      13            0.74    1.15    0.66    0.79    0.45    0.57
4       9            0.88    0.67    0.84    0.65    0.35    0.67
5      18            0.81    0.76    0.80    0.71    0.86    0.70
Figure 5. Graphical representation of the residuals of the 3 models in all cases

All models show, in general, RMSE smaller than 1 pixel. The orbital model shows slightly better results than the other two, especially with images of level 1A. With level 1B images the affine model performs better than the rigorous model. An experiment of independent checking was done with image 3: it was oriented with 5 of the available GCPs (distributed uniformly over the image), and the remaining 22 points were used as independent check points (ICPs). The RMS residuals obtained for GCPs and ICPs can be found in table IV.

REFERENCES

[1] Westin, T., "Precision Rectification of SPOT Imagery", Photogrammetric Engineering and Remote Sensing, Vol. 56, pp. 247-253, 1990.
[2] Toutin, Th., "Multi-source Data Integration with an Integrated and Unified Geometric Modelling", EARSeL Journal in Advances in Remote Sensing, Vol. 4, no. 2, pp. 118-129, 1995.
[3] Gonçalves, J. and I. Dowman, "Precise orientation of SPOT panchromatic images with tie points to a SAR image", Int. Arch. of Photogrammetry and Remote Sensing, Vol. 34 (3A), 2002 (CD-ROM).
[4] Dial, G. and J. Grodecki, "Block adjustment with rational polynomial camera models", ACSM-ASPRS 2002 Annual Conference Proceedings, 2002.
[5] CNES, SPOT4 web page: http://spot4.cnes.fr/spot4_gb/index.htm, 2000.
[6] Spotimage, SPOT satellite geometry handbook, Document S-NT-73-12-SI, Edition 1, Revision 0, 82 pages, January 2002.
ACKNOWLEDGEMENTS

To Spotimage, for providing some of the images under the ISIS program. The PCI Orthoengine software was used under research and teaching conditions.