Sensors (Article)

FPGA-Based On-Board Geometric Calibration for Linear CCD Array Sensors

Guoqing Zhou 1,2,3,*, Linjun Jiang 4, Jingjin Huang 2, Rongting Zhang 2, Dequan Liu 5, Xiang Zhou 1,5 and Oktay Baysal 6

1 Guangxi Key Laboratory for Spatial Information and Geomatics, Guilin University of Technology, Guilin 541004, China; [email protected]
2 School of Precision Instrument & Opto-Electronics Engineering, Tianjin University, Tianjin 300072, China; [email protected] (J.H.); [email protected] (R.Z.)
3 The Center for Remote Sensing, Tianjin University, Tianjin 300072, China
4 Guangxi Institute of Geoinformation, Surveying and Mapping, Liuzhou 545006, China; [email protected]
5 School of Microelectronics, Tianjin University, Tianjin 300072, China; [email protected]
6 Mechanical & Aerospace Engineering, Old Dominion University, Norfolk, VA 23529, USA; [email protected]
* Correspondence: [email protected]; Tel.: +86-773-589-6073

Received: 24 April 2018; Accepted: 29 May 2018; Published: 2 June 2018
Abstract: With increasing demands for real-time or near real-time remotely sensed imagery in applications such as military deployments, quick response to terrorist attacks and disaster rescue, the on-board geometric calibration problem has attracted the attention of many scientists in recent years. This paper presents an on-board geometric calibration method for linear CCD sensor arrays using FPGA chips. The proposed method mainly consists of four modules—Input Data, Coefficient Calculation, Adjustment Computation and Comparison—in which the parallel computations for building the observation equations and the least squares adjustment are implemented using FPGA chips, for which a decomposed matrix inversion method is presented. A Xilinx Virtex-7 FPGA VC707 chip is selected, and the MOMS-2P data used for inflight geometric calibration from DLR (Köln, Germany) are employed for validation and analysis. The experimental results demonstrate that: (1) When floating-point data widths from 44-bit to 64-bit are adopted, the FPGA resources, including the utilizations of FF, LUT, memory LUT, I/O and DSP48, are consumed at a fast increasing rate; thus, a 50-bit data width is recommended for FPGA-based geometric calibration. (2) Increasing the number of ground control points (GCPs) does not significantly consume the FPGA resources; six GCPs are therefore recommended for geometric calibration. (3) The FPGA-based geometric calibration can reach approximately 24 times the speed of the PC-based one. (4) The accuracy of the proposed FPGA-based method is almost the same as that of the inflight calibration if the calibration model and the number of GCPs are the same.

Keywords: FPGA; on-board; geometric calibration; parallel computing; spaceborne sensor
1. Introduction

Geometric calibration is one of the most important steps for quality control of high-resolution optical satellite imagery [1–3] and is a prerequisite for high-accuracy direct georeferencing of remotely sensed images [4,5]. Previous researchers have made efforts in inflight geometric calibration on the basis of PC computers over the past decades. With increasing demands for real-time or near real-time remotely sensed imagery in applications such as military deployments, quick response to terrorist attacks and disaster rescue (e.g., flooding monitoring), the on-board implementation of geometric calibration has been attracting many scientists' interest worldwide in recent years.
Sensors 2018, 18, 1794; doi:10.3390/s18061794
www.mdpi.com/journal/sensors
The parameters of geometric calibration consist of the exterior orientation elements (EOEs) and the interior orientation elements (IOEs). Usually, the IOEs are calibrated in the laboratory, while the EOEs often change due to micro-gravity, solar pressure, etc., when the satellite runs in space for a certain while. Hence it is necessary to develop a calibration method and algorithm to carry out on-board geometric calibration.

The concept of "on-board geometric correction" was first presented by Zhou et al. [6], but the authors did not present any details of its on-board implementation. This paper presents a Field Programmable Gate Array (FPGA)-based implementation of geometric calibration. The FPGA can offer a highly flexible design, a scalable circuit, and high efficiency in data processing, because of its pipeline structure and fine-grained parallelism. Moreover, the DSP units embedded in the FPGA are suitable for floating-point arithmetic. The proposed FPGA-based architecture of geometric calibration is depicted in Figure 1, which consists of five modules: Template Image Selection, Image Matching (Huang and Zhou [7]), Initial Value EOEs, Bundle Adjustment, and Timing Control.
Figure 1. The FPGA architecture of geometric calibration.
All of the functional modules in Figure 1 are managed by the Timing Control module. The Template Images Selection module is used for selection of target template images from a template image library that is created from many template images, each of which has a unique ID number. The principle of this module is carried out through matching between the geodetic coordinates centralized at an imaged area and the coordinates of a georeferenced template image. The Image Matching module accurately determines the coordinates of the spaceborne image and the geodetic coordinates of template images by matching their sub-image windows. The Initial external orientation elements (EOEs) module computes the initial values of the EOEs of the spaceborne sensor. The Bundle Adjustment module accurately computes the six external orientation elements (EOEs). Due to the resource limitation of an FPGA chip, external memory is usually applied to store multiple template images and ground control points (GCPs) data. Random Access Memory (RAM) is used to store the data stream of a template image and/or an imaged scene temporarily. The line buffers of image data are generated using multiple RAMs for further image matching.

This paper presents the details of the FPGA-based implementation of on-board geometric calibration. The input parameters (i.e., GCPs and initial EOEs) are assumed to be directly provided by other modules (for the details, please refer to Huang and Zhou [7]). The paper is organized as follows: Section 2 overviews previous relevant efforts; Section 3 gives the detailed FPGA-based implementation of on-board geometric calibration; Section 4 describes the validation and the experimental results. The conclusions are drawn in Section 5.
2. Relevant Efforts

Traditional geometric calibration methods have been investigated for several decades and a number of papers have been published in the computer vision [8–10], image processing [11,12], robotic vision [13,14] and photogrammetry communities. However, an on-board spaceborne implementation of geometric calibration has not yet been reported worldwide, although in-lab and/or inflight (also called on-orbit) geometric calibrations for various satellites have overwhelmingly been reported in the past two decades [15–17], such as SPOT1-5 [18], IKONOS [19], ALOS [20], Orbview-3 [21], IRS-1C [22], GeoEye-1 [23], MOMS-2P [24,25], CBERS-02B [26], TH-1 [27] and ZY-3 [28,29]. Jacobsen [30,31] implemented geometric calibration of the IRS-1C satellite using self-calibration bundle block adjustment with additional parameters. Crespi et al. [23] used the block adjustment method to investigate the inflight calibration of the GeoEye-1 satellite, with which the accuracy reached 3.0 m without GCPs. Lei [28] presented a self-calibration bundle block adjustment on the basis of the known interior orientation parameters of a linear array CCD. Li [29] and Li et al. [32] proposed a step-by-step geometric calibration method on the basis of the imaging model of the ZY-3 satellite. The experimental results indicated that the calibration accuracy of ZY-3 can meet the accuracy requirement of 1:50,000 mapping. Yang et al. [33] also presented an on-orbit geometric calibration model for the ZY-1 02C panchromatic camera. The experimental results demonstrated that the accuracy with or without GCPs is better than 0.3 pixels. Zhang et al. [34] proposed an on-orbit geometric calibration and validation of ZY-3 linear array sensors. Wang et al. [3] constructed an on-orbit rigorous geometric calibration model through selecting optimal parameters.
The experimental results showed that the geometric accuracy without GCPs is significantly improved after on-orbit geometric calibration. Cao et al. [35] proposed a so-called "look-angle calibration method" for on-orbit geometric calibration of ZY-3 satellite imaging sensors. The experimental analysis showed that the accuracy of the nadir-looking images is higher than ±2.7 m when five GCPs are utilized and the laboratory calibration parameters are provided as initial values. Cao et al. [36] proposed a simple and feasible orientation method for calibration of the CCD-detector's look angles in the three-line array cameras of ZY-3. The experimental results showed that the accuracies in planimetry and height are 3.4 m and 1.6 m with four GCPs, respectively. Wang et al. [37] proposed an on-orbit geometric calibration method for the TH-1 three-line-array camera based on a rigorous geometric model. The experimental results demonstrated that the accuracies in planimetry and height are approximately 6.93 m and 3.96 m, respectively.

Although research in on-board geometric calibration is relatively rare so far, FPGA-based on-board data processing for spaceborne images has been reported. For instance, the German small satellite BIRD applied an on-board data processing system consisting of DSP, FPGA and network co-processor for on-board implementation of radiation correction, rectification of partially systematic geometric distortion, and image classification using a neural network algorithm [38]. Surrey Satellite Technology Limited in the UK applied an FPGA as an on-board data processing chip in a small satellite [39]. The French used an FPGA chip to realize on-board data processing for Pleiades imagery [40]. Stuttgart University in Germany designed an on-board computing system using an FPGA for the small satellite Flying Laptop to realize real-time attitude control, housekeeping, data compression and image processing [41]. The U.S. Jet Propulsion Laboratory employed a Xilinx FPGA to realize on-board hyperspectral image classification based on Support Vector Machine (SVM) [42]. González et al. [43] conducted an investigation on an FPGA-based implementation of the N-FINDR algorithm for hyperspectral image analysis. Williams et al. [44] also investigated an FPGA-based implementation of real-time cloud detection for spaceborne images. Hihara et al. [45] analyzed an on-board image processing system for hyperspectral imagery. All of the efforts mentioned above have shown promise for the FPGA-based implementation of geometric calibration, which is presented in this paper.
3. On-Board Geometric Calibration Using FPGA Chip

3.1. Brief Overview of Geometric Calibration Model

Different linear imaging systems may have their own imaging modes, resulting in slight differences in the geometric calibration algorithm. This paper takes MOMS-2P as an example to describe the calibration model (for details readers should refer to [46,47]).

The geometric calibration of the MOMS imaging system was performed at the laboratories of the German Aerospace company DASA (Stuttgart, Germany), where the MOMS was developed and manufactured. A rigorous model describing the geometric calibration is composed of five parameters (also see Table 1):

• Principal point coordinates of each sensor (x0, y0),
• Rotation parameter κ of the CCD array in the image plane,
• Deviation of the focal length df, and
• Distortion parameter k of the sensor curvature. The sensor curvature is modeled by a second-order polynomial equation. The parameter k here indicates the along-track deviation at the edges of the CCD array, at 3000 pixel distance from the array center, caused by the sensor curvature.

Table 1. Lab-calibrated camera parameters for MOMS-2P [46,48].

            HR5A     HR5B     ST6      ST7
f (mm)      660.256  660.224  237.241  237.246
x0 (pixel)  0.1      0.2      -7.2     -0.5
y0 (pixel)  -0.4     0.1      8.0      19.2
Kc (pixel)  -0.3     -0.4     -1.1     1.7
k (mdeg)    -2.9     5.4      -1.5     -1.4
3.1.1. Interior Orientation
Interior orientation is to transform sensor coordinates (i and j in Figure 2) into image coordinates (x and y in Figure 2) and correct the lens distortion (symmetric and tangential) and CCD array curvature distortion. Thus the following tasks should be performed:

• Define the sensor (called "screen") coordinate system and image coordinate systems,
• Apply the principal point offset that should be estimated from an in-lab or in-flight calibration, and
• Correct various distortions.
Figure 2. Sensor/screen and image coordinate system.
3.1.2. Transformation from Image to Reference Coordinate System

Each of the fore-, nadir- and aft-looking arrays has its own image coordinate system (xc, yc and zc, right-handed). An image reference coordinate system (xR, yR and zR, right-handed) is defined to unify the image coordinates from all three arrays (Figure 3). The transformation from an image coordinate system to the reference coordinate system involves a translation (dx, dy and dz) and three rotations (ω, φ and κ). We define a counterclockwise rotation angle as positive. The transformation equation is:
\begin{bmatrix} x_R \\ y_R \\ z_R \end{bmatrix} = R_C^R \begin{bmatrix} x_c \\ y_c \\ -f \end{bmatrix} + \begin{bmatrix} d_x \\ d_y \\ d_z \end{bmatrix}    (1)

where:
R_C^R = \begin{bmatrix}
\cos\varphi\cos\kappa & \cos\omega\sin\kappa + \sin\omega\sin\varphi\cos\kappa & \sin\omega\sin\kappa - \cos\omega\sin\varphi\cos\kappa \\
-\cos\varphi\sin\kappa & \cos\omega\cos\kappa - \sin\omega\sin\varphi\sin\kappa & \sin\omega\cos\kappa + \cos\omega\sin\varphi\sin\kappa \\
\sin\varphi & -\sin\omega\cos\varphi & \cos\omega\cos\varphi
\end{bmatrix}    (2)
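As an illustrative software sketch (not the authors' FPGA implementation), Equations (1) and (2) can be written in Python/NumPy; the function names and argument conventions here are ours:

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """Rotation matrix R_C^R of Equation (2); angles in radians,
    counterclockwise rotations taken as positive."""
    so, co = np.sin(omega), np.cos(omega)
    sp, cp = np.sin(phi), np.cos(phi)
    sk, ck = np.sin(kappa), np.cos(kappa)
    return np.array([
        [ cp * ck,  co * sk + so * sp * ck,  so * sk - co * sp * ck],
        [-cp * sk,  co * ck - so * sp * sk,  so * ck + co * sp * sk],
        [ sp,      -so * cp,                 co * cp               ],
    ])

def image_to_reference(xc, yc, f, d, omega, phi, kappa):
    """Equation (1): rotate the image vector (xc, yc, -f) by R_C^R and
    add the translation d = (dx, dy, dz) of the reference system."""
    R = rotation_matrix(omega, phi, kappa)
    return R @ np.array([xc, yc, -f]) + np.asarray(d, dtype=float)
```

Since R_C^R is a product of elementary rotations, it should stay orthonormal with determinant 1, which is a convenient sanity check on the matrix entries.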
Figure 3. From image coordinate to image reference coordinate system.

3.1.3. Geometric Calibration Model
For any image point within a CCD array, its image reference coordinates are (xR, yR and zR). The coordinates of the exposure center of the array in the ground coordinate system at the imaging epoch t are (XC(t), YC(t), ZC(t)). The corresponding ground point coordinates are (XG, YG, ZG). The collinearity condition states that all of the three points must lie on the same straight line:

x_R = z_R \frac{r_{11}(X_G - X_C(t)) + r_{12}(Y_G - Y_C(t)) + r_{13}(Z_G - Z_C(t))}{r_{31}(X_G - X_C(t)) + r_{32}(Y_G - Y_C(t)) + r_{33}(Z_G - Z_C(t))}
y_R = z_R \frac{r_{21}(X_G - X_C(t)) + r_{22}(Y_G - Y_C(t)) + r_{23}(Z_G - Z_C(t))}{r_{31}(X_G - X_C(t)) + r_{32}(Y_G - Y_C(t)) + r_{33}(Z_G - Z_C(t))}    (3)

where rij (i, j = 1, 2, 3) are the elements of the rotation matrix R_G^C = R(φ(t), ω(t), κ(t)); φ(t), ω(t) and κ(t) are defined for each CCD array at the epoch t.

A separate image coordinate system is defined for each CCD array, which is related to the image reference coordinate system by a 3D transformation. The transformation parameters are a focal length f, a principal point offset (x0, y0), and a curvature parameter Kc. The sensor coordinates (i, j) are transformed to image coordinates in the following steps (the details can be referenced to [46,47]):
• Step 1: Transformation from sensor/screen coordinates to image coordinates by x′ = 0; y′ = (j − column/2) × 10e−3 (mm);
• Step 2: CCD curvature correction by cx = Kc × y′ × y′;
• Step 3: Lens distortion correction is unavailable;
• Step 4: Final image coordinates computed by x = cx − x0, y = y′ − y0.
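The four steps above can be sketched as a single helper function. This is a hypothetical illustration: the assumptions that `columns` is the CCD array length in pixels, that the pixel pitch is 10e−3 mm, and that x0, y0 and Kc are supplied in units consistent with y′ are ours, not stated explicitly in the source:

```python
def sensor_to_image(j, columns, x0, y0, Kc, pixel_mm=10e-3):
    """Steps 1-4 of the interior orientation (a sketch).
    j: sensor column index; columns: assumed CCD array length in pixels;
    x0, y0, Kc: principal point offset and curvature parameter, assumed
    to be expressed consistently with y' (mm)."""
    # Step 1: sensor/screen coordinates -> preliminary image coordinates
    x_p = 0.0
    y_p = (j - columns / 2) * pixel_mm
    # Step 2: CCD curvature correction (second-order in y')
    cx = Kc * y_p * y_p
    # Step 3: lens distortion correction is unavailable for MOMS-2P
    # Step 4: final image coordinates
    x = cx - x0
    y = y_p - y0
    return x, y
```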
To reduce the calculation budget and save the FPGA resources, a first-order polynomial equation is adopted to compute the six EOEs, i.e.:
\begin{aligned}
X_C(t) &= X_C^0 + a_0 \qquad & \varphi_C(t) &= \varphi_C^0 + d_0 + d_1 t \\
Y_C(t) &= Y_C^0 + b_0 \qquad & \omega_C(t) &= \omega_C^0 + e_0 + e_1 t \\
Z_C(t) &= Z_C^0 + c_0 \qquad & \kappa_C(t) &= \kappa_C^0 + f_0 + f_1 t
\end{aligned}    (4)

where X_C^0, Y_C^0, Z_C^0, φ_C^0, ω_C^0, κ_C^0 are the initial values of the EOEs, which are provided by DLR, Germany. Substituting Equation (4) into Equation (3) and linearizing it by Taylor series at one epoch t yields:

\begin{aligned}
v_x &= a_{11}\Delta a_0 + a_{12}\Delta b_0 + a_{13}\Delta c_0 + a_{14}\Delta d_0 + a_{15}\Delta e_0 + a_{16}\Delta f_0 + a_{17}\Delta d_1 + a_{18}\Delta e_1 + a_{19}\Delta f_1 - l_x \\
v_y &= a_{21}\Delta a_0 + a_{22}\Delta b_0 + a_{23}\Delta c_0 + a_{24}\Delta d_0 + a_{25}\Delta e_0 + a_{26}\Delta f_0 + a_{27}\Delta d_1 + a_{28}\Delta e_1 + a_{29}\Delta f_1 - l_y
\end{aligned}    (5)

The vector form of Equation (5) is described as:

V_t = A_t X_t - l_t    (6)

where V_t = [v_x v_y]^T; X_t = [Δa_0 Δb_0 Δc_0 Δd_0 Δe_0 Δf_0 Δd_1 Δe_1 Δf_1]^T, which are the unknowns; l_t = [l_x l_y]^T; A_t is the coefficient matrix, which is expressed by:

A_t = \begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} & a_{15} & a_{16} & a_{17} & a_{18} & a_{19} \\ a_{21} & a_{22} & a_{23} & a_{24} & a_{25} & a_{26} & a_{27} & a_{28} & a_{29} \end{bmatrix}    (7)

The detailed derivation of A_t can be found in [46,47]. When the number of GCPs is greater than 5, the observation equation is constructed as follows:

V = AX - L    (8)

where V = [V_1 V_2 ⋯ V_n]^T, X = [X_1 X_2 ⋯ X_n]^T, A = [A_1 A_2 ⋯ A_n]^T, L = [l_1 l_2 ⋯ l_n]^T, and n represents the number of GCPs. Equation (8) is solved by the least squares algorithm, and the solution is:

X = (A^T A)^{-1} A^T L    (9)

The solution of Equation (9) is obtained by an iterative process. A flowchart of the entire algorithm is presented in Figure 4.
Figure 4. Flowchart of the whole algorithm.
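The iterative adjustment of Equations (8) and (9) can be sketched in software as follows. Here `build_A_and_L` is a hypothetical placeholder for the coefficient computation (it must return the design matrix A and constant vector L linearized at the current estimate), and the threshold value is illustrative:

```python
import numpy as np

def adjust(build_A_and_L, X0, threshold=1e-8, max_iter=20):
    """Iterative least-squares solution of Equations (8)-(9) (a sketch).
    build_A_and_L(X) stands in for the Coefficient Calculation module."""
    X = np.asarray(X0, dtype=float)
    for _ in range(max_iter):
        A, L = build_A_and_L(X)
        # Equation (9): increments from the normal equations (A^T A) dX = A^T L
        N = A.T @ A
        dX = np.linalg.solve(N, A.T @ L)
        X = X + dX
        # Stop when every increment is below the given threshold
        if np.max(np.abs(dX)) < threshold:
            break
    return X
```

For a purely linear problem the loop converges in one iteration; for the linearized collinearity model it repeats until the increments fall below the threshold, as in Figure 4.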
As seen from Figure 4, the initial data are first used to compute the rotation matrix (R_G^R), which is used for computation of the coefficient matrix, A, and the constant matrix, L. The inversion of (A^T A) is computed by an LDL^T algorithm. The solution, X, is obtained by the iteration process and is compared to a given threshold. If the increments of X are less than the given threshold, the computation ends, and X is considered as the final solution. Otherwise, the above computation is repeated until the increments are less than the given threshold.

3.2. FPGA-Based Computation of On-Board Geometric Calibration

3.2.1. Design for On-Board Geometric Calibration

An FPGA-based implementation for on-board geometric calibration is proposed in Figure 5, which consists of four modules: Input Data, Coefficient Calculation, Adjustment Computation, and Comparison. The details of the four modules are described as follows:

(1) The initial data and the updating data are stored in RAM of the Input Data module. When receiving an enable signal, the data are sent to the Coefficient Calculation module at the same clock cycle.
(2) The elements of matrixes A and L (in Equation (8)) are calculated by the Coefficient Calculation module, and the computed results are sent to the Adjustment Computation module at the same clock cycle.
(3) The solution X in Equation (9) is calculated from matrixes A and L in the Adjustment Computation module.
(4) If the increments of solution X meet the requirement of a given threshold, the iteration computation is terminated, and the solutions a0, b0, c0, d0, e0, f0, d1, e1 and f1 are outputted. Otherwise, X is updated and the iteration is recomputed until the increments of the solutions meet the requirement of the given threshold.
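The LDL^T factorization mentioned above can be modeled in software as follows. This is a generic LDL^T solver for the symmetric positive-definite normal matrix A^T A, given here as a sketch of the numerical method rather than the authors' hardware design:

```python
import numpy as np

def ldlt_solve(N, b):
    """Solve N x = b for symmetric positive-definite N via LDL^T
    decomposition: N = L D L^T with unit lower-triangular L."""
    n = N.shape[0]
    L = np.eye(n)
    D = np.zeros(n)
    for j in range(n):
        # Diagonal entry of D
        D[j] = N[j, j] - np.sum(L[j, :j] ** 2 * D[:j])
        for i in range(j + 1, n):
            L[i, j] = (N[i, j] - np.sum(L[i, :j] * L[j, :j] * D[:j])) / D[j]
    # Forward substitution L y = b
    y = np.zeros(n)
    for i in range(n):
        y[i] = b[i] - L[i, :i] @ y[:i]
    # Diagonal scaling D z = y
    z = y / D
    # Back substitution L^T x = z
    x = np.zeros(n)
    for i in reversed(range(n)):
        x[i] = z[i] - L[i + 1:, i] @ x[i + 1:]
    return x
```

Unlike a Cholesky factorization, LDL^T avoids square roots, which is one reason it is attractive for hardware implementation.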
Figure 5. The proposed implementation of on-board geometric calibration.

3.2.2. FPGA-Based Parallel Computation for Matrixes R_G^R, lt and At

Because the elements (r11 through r33) of the rotation matrix R_G^R involve sine and cosine functions of three rotational angles, φ, ω and κ, which is time/resource-consuming, a parallel computation method is presented in Figure 6a, where the implementations of the sine and cosine functions are carried out by a CORDIC IP core. To ensure that all of the intermediate results are outputted at the same clock cycle, delay units are adopted for some intermediate results. This module includes 12 multipliers, 2 adders and 2 subtractors.

For computation of lt in Equation (6), the initial data stored in the Input Data module and the rotation matrix are used to compute lx and ly, which is presented in Figure 6b. The delay units are used to ensure that the final results are outputted at the same clock cycle.
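The rotation-mode CORDIC algorithm behind such a sine/cosine IP core can be modeled in a few lines. This is a floating-point software stand-in for the fixed-point shift-and-add hardware, and the iteration count is a free parameter:

```python
import math

def cordic_sin_cos(theta, n_iter=32):
    """Software model of CORDIC rotation mode: returns (sin, cos) of
    theta (radians, |theta| <= pi/2) via shift-and-add iterations."""
    # Precomputed micro-rotation angles atan(2^-i) and the gain compensation K
    angles = [math.atan(2.0 ** -i) for i in range(n_iter)]
    K = 1.0
    for i in range(n_iter):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, theta
    for i in range(n_iter):
        d = 1.0 if z >= 0 else -1.0      # rotate toward zero residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return y * K, x * K                  # (sin(theta), cos(theta))
```

In hardware the multiplications by 2^-i are pure bit shifts, which is why CORDIC needs no multiplier at all for the trigonometric evaluation.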
Six elements in matrix At, i.e., a11i, a12i, a13i, a21i, a22i, a23i, are computed in parallel, where 18 multipliers, 1 divider and 6 subtractors are employed (see Figure 6c). The rest of the elements in matrix At, i.e., a14i, a15i, a16i, a17i, a18i, a19i, a24i, a25i, a26i, a27i, a28i, a29i, are also computed in parallel, which includes 39 multipliers, 10 subtractors, 3 adders, and 1 divider (see Figure 6d).
Figure 6. FPGA-based coefficients computation. (a) is for R_G^R computation, r11 to r33, in which the negate unit means the negation of a value; (b) is for l computation, lx and ly; (c) is for a11 thru a13 and a21 thru a23 computation, and (d) is for parallel computation of a14 thru a19 and a24 thru a29.
3.2.3. FPGA-Based Parallel Computations for A^T A and A^T L
Due to the limitation of FPGA resources, a parallel computation method for A^T A in Equation (9) is presented through modifying A^T A, i.e.:
$$B_{9\times 9}=A_{9\times 2n}^{T}A_{2n\times 9}=\begin{bmatrix} A_{1}A_{1} & A_{1}A_{2} & \cdots & A_{1}A_{9}\\ A_{2}A_{1} & A_{2}A_{2} & \cdots & A_{2}A_{9}\\ \vdots & \vdots & \ddots & \vdots\\ A_{9}A_{1} & A_{9}A_{2} & \cdots & A_{9}A_{9} \end{bmatrix} \qquad (10)$$

Considering the symmetry of A^T A, an FPGA-based parallel computation for the upper triangular matrix of A^T A is presented and depicted in Figure 7. As seen from Figure 7a, a number of processing element (PE) units are employed to reduce the complexity of the computation [49]. All of the PE units have the same structure, i.e., "a1b1 + a2b2", which is enlarged in Figure 7b. Similarly, the method for computation of A^T L in Equation (8) is the same as that of A^T A.
Figure 7. Parallel computation method for A^T A. (a) is the matrix multiplication unit, (b) is the multiplicative unit.
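The PE-based accumulation can be sketched in software as follows. Since A has 2n rows (two observation rows per ground control point), each PE update has exactly the "a1b1 + a2b2" form: one GCP's pair of rows contributes one fused pair of products per clock. This sketch assumes a plain list-of-lists matrix and only exploits the symmetry noted above.

```python
def ata_upper(A):
    """Compute B = A^T A for a 2n-by-9 design matrix A, exploiting symmetry.

    Only the upper triangle is accumulated; the lower triangle is mirrored.
    The inner loop mirrors one PE: s += a1*b1 + a2*b2 per control point.
    """
    rows, cols = len(A), len(A[0])
    assert rows % 2 == 0, "two observation rows per GCP"
    B = [[0.0] * cols for _ in range(cols)]
    for i in range(cols):
        for j in range(i, cols):              # upper triangle only
            s = 0.0
            for p in range(rows // 2):        # one PE update per GCP
                s += A[2 * p][i] * A[2 * p][j] + A[2 * p + 1][i] * A[2 * p + 1][j]
            B[i][j] = s
            B[j][i] = s                       # mirror by symmetry
    return B
```

In hardware the 45 upper-triangular entries of the 9-by-9 result are computed by an array of such PEs in parallel rather than by nested loops; the A^T L product follows the same pattern with L in place of the second copy of A.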
3.2.4. FPGA-Based Parallel Computation for B^-1
Also due to the limitation of FPGA resources, the matrix inversion B^-1 is implemented through the Cholesky decomposition method [50], i.e.:

$$B = LDL^{T} \qquad (11)$$

where L is a lower triangular matrix, D is a diagonal matrix, and L^T is L's transpose. The solutions of L and D are expressed by:

$$d_{11}=b_{11},\qquad d_{ii}=b_{ii}-\sum_{k=1}^{i-1} l_{ik}^{2}\,d_{kk}\quad (2\le i\le n)$$
$$l_{i1}=\frac{b_{i1}}{d_{11}}\ \ (2\le i\le n),\qquad l_{ij}=\Big(b_{ij}-\sum_{k=1}^{j-1} l_{ik}\,d_{kk}\,l_{jk}\Big)\Big/ d_{jj}\quad (1<j<i\le n) \qquad (12)$$
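The recurrences of Equation (12) can be sketched in floating point as below. This is a software illustration only, assuming a symmetric positive-definite B (which holds for the normal matrix A^T A when the GCPs are well distributed); the FPGA implementation evaluates the same recurrences with its fixed data width.

```python
def ldlt(B):
    """LDL^T decomposition of a symmetric positive-definite matrix B.

    Returns (L, d) with B = L * diag(d) * L^T, where L is unit lower
    triangular, following the recurrences of Equation (12).
    """
    n = len(B)
    L = [[0.0] * n for _ in range(n)]
    d = [0.0] * n
    for i in range(n):
        L[i][i] = 1.0
        # d_ii = b_ii - sum_{k<i} l_ik^2 d_kk   (for i = 1 this is just b_11)
        d[i] = B[i][i] - sum(L[i][k] ** 2 * d[k] for k in range(i))
        for r in range(i + 1, n):
            # l_ri = (b_ri - sum_{k<i} l_rk d_kk l_ik) / d_ii
            L[r][i] = (B[r][i] - sum(L[r][k] * d[k] * L[i][k] for k in range(i))) / d[i]
    return L, d
```

Once L and d are available, B^-1 reduces to triangular and diagonal inversions, which avoids a direct general matrix inversion on the FPGA.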