An Automated Real-time Image Georeferencing System Supannee Tanathong and Impyeong Lee Laboratory for Sensor and Modeling, The University of Seoul, Dongdaemun, Seoul, Korea
Abstract - To provide rapid analysis of the ongoing status of an emergency, the captured aerial images must be georeferenced to the same coordinate system as the existing spatial data in real-time. This study presents the development of an automated real-time image georeferencing system that involves a novel image matching method and simultaneous AT. We implemented a reliable, fast, and automated image matching method based on the KLT tracker to extract tie points that are well distributed over images. We accelerated the KLT tracker by supplying initial guessed tie points computed from GPS/INS data and by determining, in advance, a promising depth level for multi-resolution tracking. The experimental results show that the proposed image matching is 12% faster than the original KLT and achieves up to 98% accuracy on a real sequential aerial image set. Due to its lightweight processing, the improved KLT tracker has high potential to be incorporated with simultaneous AT to build a real-time image georeferencing system. Keywords: KLT algorithm, image matching, simultaneous AT, error propagation, exterior orientation
1
Introduction
Disasters are generally classified into two broad categories: natural and manmade. Whichever type occurs, it can take a heavy toll on human life, significantly devastate infrastructure and the natural environment, and further suspend economic growth. In the past decades, the number of worldwide catastrophes has increased exponentially [1]. Since disasters usually happen unpredictably, in a variety of forms and without advance warning, it is hard to avoid their occurrence or their adverse consequences. However, the degree of hazardous impact can be lessened, and mitigation and rehabilitation activities can be established and delivered immediately, by real-time monitoring systems that are capable of tracking and capturing the ongoing emergency with a great deal of information and accuracy. In recent decades, we have witnessed the advancement of sensor technology and how it is utilized in monitoring systems as a primary element of surveillance applications. To immediately capture the ongoing status of an emergency, most such systems attach airborne real-time sensors to UAV platforms. The mounted sensors typically include a digital camera, a laser scanner, a Global Positioning System (GPS) and an Inertial Navigation System (INS). With all these components equipped together, the
system can acquire aerial images, ranging data, and the positions and attitudes of the platform tagged with GPS time [2]. According to the trajectory plan of the platform, the sequential images are usually taken at different locations and attitudes with some partial overlap. The foremost process prior to any disaster management activity is to register the real-time captured data with the existing spatial data. Since these two or more sources of data are apparently captured at different times and are not always aligned with the same coordinate system, to effectively exploit their complementarity, the newly captured images must be rectified to the same coordinate system as the existing data, typically the absolute ground coordinate system. This rectification process is referred to as image georeferencing. The technique requires the accurate position and attitude of the camera at the time of exposure of each acquired image, technically called the exterior orientation parameters or extrinsic parameters, aka EO [3]. A potential method to adjust the exterior orientation parameters of all images simultaneously is aerial triangulation with the bundle block model, since all images' parameters are adjusted correctively together [3], [4]. As presented in Fig. 1, the process requires knowledge of the intrinsic camera parameters and tie points between adjacent images that share an overlapped area, in conjunction with controlled information about the ground features, plus some initial approximation of the extrinsic camera parameters and ground points. The controlled information can be divided into two categories: indirect and direct measurements of the exterior orientation parameters. The indirect measurement involves the acquisition of ground control points (GCP), which requires enormous labor. This technique has the advantage of producing high-accuracy output, while its disadvantage is that it prevents automation and rules out real-time processing.
The direct measurement acquires the exterior orientation parameters directly from the GPS/INS sensors with reasonable accuracy. Therefore, it is better suited for incorporation into real-time processing than the indirect approach.
Fig. 1. Aerial triangulation with bundle block adjustment.
As phrased in our recent article [2], in order to implement a real-time image georeferencing system, all inputs must be readily available in advance or at least be acquired in real-time. The intrinsic camera parameters can be obtained a priori from the camera calibration process before flight [5]. The controlled information from the direct measurement is more promising for the real-time requirement since the exterior orientation parameters can be obtained from the GPS/INS sensors in real-time. The initial approximation for the EOs is simply the knowledge acquired from the GPS/INS sensors, and the initial approximation for the ground points is determined based on the initial EOs and tie points. Real-time georeferencing cannot be accomplished if the tie point acquisition process does not function in real-time or produces results that do not support the georeferencing process. For a system to meet the real-time requirement, the computational time of any process in the system should be less than the image acquisition interval. For each image newly acquired from the airborne sensor, the computational time to obtain conjugate points with its adjacent prior-acquired image must be small. Therefore, the tie point acquisition process is required to complete before a new image is captured. In this paper, the design and implementation of an image georeferencing system are discussed. The system operates in a straight-through manner, combining fully automated operation with real-time processing. The whole process involves an aerial section to acquire sequential images and their corresponding EOs according to the flight plan, a tie point extraction to compute a set of tie points whenever a new image is captured, and a sophisticated image georeferencing. The focus of this paper is to implement an automated tie point extraction that is well incorporated into the image georeferencing system.
2
Background
Aerial triangulation (AT) is a promising technique for image georeferencing that adjusts together the exterior orientation parameters of each image by employing knowledge of the intrinsic camera parameters, tie points between adjacent images, and controlled information about the ground features, plus some initial approximation of the extrinsic camera parameters and ground points. Tie points are used to establish the relationships among images that share certain similarities. In analytical photogrammetry, tie points may be measured manually through a photogrammetric plotter. This technique is, however, computationally inefficient and prevents real-time processing. To develop a real-time system, tie points must be computed within a short period of time. Therefore, many researchers employ image matching techniques from computer vision to determine tie points [6]. In computer vision, there are a number of powerful techniques for image matching. The most widely used methods are KLT (Kanade-Lucas-Tomasi) and SIFT (Scale-Invariant Feature Transform). KLT was developed by Shi and
Tomasi [7] based on the original work presented by Lucas and Kanade [8]. The algorithm locates significant features to track by examining the minimum eigenvalues of the autocorrelation matrix defined on the derivative image. SIFT was first published by Lowe [9]. This technique extracts distinctive invariant features from images with high reliability. The excellent characteristics of SIFT features, their invariance to image scale and rotation, make the algorithm computationally expensive. SIFT features are considered more robust than KLT features, but KLT is significantly less expensive computationally. Owing to its lightweight feature selection (features being candidate tie points, in this case), which is more appropriate for real-time processing, plus its sufficient robustness, the proposed real-time image matching in this research is developed based on KLT. As mentioned in [10], [11], the KLT tracker offers two functions: significant feature selection and tracking. Based on the local information derived from a local window surrounding each point, a point is considered a good feature when the minimum eigenvalue of its autocorrelation matrix is larger than a predefined threshold. Once the good features in the first image are determined, their corresponding points, or equivalently tie points, in the consecutive frame may be found if there exists an optical flow vector that minimizes the residual function, defined over a local window, between the first image and the subsequent image. To maintain the spatial coherence among neighboring points, a small local window is preferred. As a consequence, the KLT algorithm is only applicable when the displacements are small [12]. The disadvantage of using small windows becomes visible with large motions, in which points may be located outside the local window, making it impossible for the algorithm to track.
This limitation led to the pyramidal implementation of the KLT tracker which enables the original KLT algorithm to track under large displacement [10].
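The two KLT operations just described, minimum-eigenvalue feature scoring and the optical flow solve, can be sketched in a few lines of NumPy. This is an illustrative single-level, single-step version on a synthetic image, not the authors' implementation; in practice the pyramidal routines of OpenCV [11] (`cv2.goodFeaturesToTrack`, `cv2.calcOpticalFlowPyrLK`) would be used.

```python
import numpy as np

def gaussian(cx, cy, sigma, size):
    # synthetic test image: a smooth Gaussian blob centered at (cx, cy)
    y, x = np.mgrid[0:size, 0:size].astype(float)
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2.0 * sigma ** 2))

def min_eigenvalue(Ix, Iy):
    # feature quality: the smaller eigenvalue of the 2x2 autocorrelation
    # matrix summed over the window (Shi-Tomasi criterion [7])
    gxx, gxy, gyy = (Ix * Ix).sum(), (Ix * Iy).sum(), (Iy * Iy).sum()
    tr, det = gxx + gyy, gxx * gyy - gxy ** 2
    return tr / 2.0 - np.sqrt(max(tr * tr / 4.0 - det, 0.0))

def lk_step(I, J):
    # one Lucas-Kanade step: linearize J(x) ~ I(x) - grad(I) . d and
    # solve the 2x2 normal equations G d = b for the translation d
    Iy, Ix = np.gradient(I)
    G = np.array([[(Ix * Ix).sum(), (Ix * Iy).sum()],
                  [(Ix * Iy).sum(), (Iy * Iy).sum()]])
    e = I - J                              # temporal difference
    b = np.array([(Ix * e).sum(), (Iy * e).sum()])
    return np.linalg.solve(G, b)           # estimated (dx, dy)

# demo: the same blob shifted by (0.3, 0.2) pixels is recovered closely
I = gaussian(16.0, 16.0, 5.0, 32)
J = gaussian(16.3, 16.2, 5.0, 32)
dx, dy = lk_step(I, J)                     # approximately (0.3, 0.2)
```

A single step only recovers small displacements, which is exactly the limitation the pyramidal scheme of [10] addresses: the same step is repeated from the coarsest pyramid level down to the original resolution.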
3
Proposed approach
Due to the designed camera exposure interval, in conjunction with the planned trajectory plus the effect of wind, the displacement between two image frames may be considerably large. KLT is thus supposed to start tracking at a very coarse resolution. Equivalently, the number of pyramidal depth levels for multi-resolution tracking may need to be high to ensure the distance between corresponding points in a pair of images is entirely tracked, which in turn increases the computational time. In order to build a real-time image georeferencing system, this study aims to develop a novel image matching method, to be integrated into the real-time georeferencing system, which possesses at least the following properties: (1) capable of producing evenly distributed tie points, (2) fast processing, (3) automated processing, and (4) highly accurate. The solution for each requirement is discussed thoroughly in this section.
3.1
Evenly distributed tie points
To obtain a promising result, AT requires tie points to be evenly distributed over images. Prior to image matching, each image is divided into blocks based on the overlap configuration, as presented in Fig. 2. One advantage of the KLT point extraction is that its results are always ordered by their significance. Therefore, we simply fill each block with the extracted points according to their pixel coordinates. In practice, one tie point per block is sufficient.
Fig. 2. A 3×3 pattern of the overlap configuration where tie points are well distributed over images.
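The block-filling step above can be sketched as a hypothetical helper that bins score-ordered features into the overlap grid, assuming `points` arrive best-first, as the KLT extraction delivers them:

```python
def distribute(points, img_w, img_h, nx=3, ny=3, per_block=1):
    """Bin score-ordered KLT features into an nx-by-ny overlap grid and
    keep at most `per_block` points per block, so the surviving tie-point
    candidates stay evenly distributed over the image."""
    blocks = {}
    for (x, y) in points:
        # map the pixel coordinate to its grid block, clamped to the edge
        bx = min(int(x * nx / img_w), nx - 1)
        by = min(int(y * ny / img_h), ny - 1)
        cell = blocks.setdefault((bx, by), [])
        if len(cell) < per_block:          # points are best-first, so the
            cell.append((x, y))            # first arrivals are the strongest
    return [p for cell in blocks.values() for p in cell]
```

Because the input is already ordered by significance, each block simply keeps the first (strongest) points that fall into it; no per-block re-sorting is needed.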
3.2
Initial guessed positions for tie-points
According to various conditions of the image acquisition system, the displacements of tie points in adjacent images are not always small. Large displacements drive the KLT tracker to perform in a multi-resolution scheme with a large number of pyramidal depth levels and, importantly, require a lot of computational time. To shorten this distance, we calculate the initial guessed positions of the tie points through the collinearity equation by employing the GPS/INS data. As illustrated in Fig. 3, with the knowledge of the perspective center (PC) and the orientation of the camera $(\omega, \varphi, \kappa)$ bundled with the images, the corresponding ground points of the tie points in the first image $(x_1, y_1)$ are determined using the collinearity equation, which is rearranged to yield the ground coordinates $(X_P, Y_P)$, with the average terrain elevation $Z_a$ used for simplicity:

$$X_P = X_1^c + (Z_a - Z_1^c)\,\frac{r_{11}x_1 + r_{21}y_1 + r_{31}(-c)}{r_{13}x_1 + r_{23}y_1 + r_{33}(-c)}, \qquad Y_P = Y_1^c + (Z_a - Z_1^c)\,\frac{r_{12}x_1 + r_{22}y_1 + r_{32}(-c)}{r_{13}x_1 + r_{23}y_1 + r_{33}(-c)} \quad (1)$$
where $(x_p, y_p)$ and $c$ are the IO parameters corresponding to the image principal point and the focal length, and $(X_c, Y_c, Z_c)$ and $r_{ij}$, referred to as EO parameters, denote the perspective center and the elements of the rotation matrix of $(\omega, \varphi, \kappa)$. Those ground points are then projected to their conjugate points in the subsequent image using the familiar collinearity equations below:

$$x = x_p - c\,\frac{(X_P - X_c)r_{11} + (Y_P - Y_c)r_{12} + (Z_a - Z_c)r_{13}}{(X_P - X_c)r_{31} + (Y_P - Y_c)r_{32} + (Z_a - Z_c)r_{33}}, \qquad y = y_p - c\,\frac{(X_P - X_c)r_{21} + (Y_P - Y_c)r_{22} + (Z_a - Z_c)r_{23}}{(X_P - X_c)r_{31} + (Y_P - Y_c)r_{32} + (Z_a - Z_c)r_{33}} \quad (2)$$
Fig. 3. An approach to estimate the initial guessed tie-points.

The resulting coordinates $(x, y)$ in (2) cannot be used as tie points yet, since they inevitably contain some errors. Rather, they are used as the initial guesses for the optical flow vector in the pyramidal KLT tracker. With the guessed positions, the distances between corresponding points in image pairs are significantly decreased, which in effect reduces the processing time.
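The two-step use of (1) and (2) can be sketched numerically as below. The rotation-matrix convention (the order of the $\omega, \varphi, \kappa$ rotations) is an assumption for illustration; any consistent photogrammetric convention works, since (1) and (2) use the same matrix, and the principal point is taken as the origin as in (1).

```python
import numpy as np

def rot(omega, phi, kappa):
    # ground-to-image rotation matrix M built from omega, phi, kappa
    # (sequential axis rotations; the exact order is an illustrative choice)
    co, so = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, co, so], [0, -so, co]])
    Ry = np.array([[cp, 0, -sp], [0, 1, 0], [sp, 0, cp]])
    Rz = np.array([[ck, sk, 0], [-sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def image_to_ground(x1, y1, eo, c, Za):
    # eq (1): intersect the ray of image point (x1, y1) with the plane Z = Za
    Xc, Yc, Zc, om, ph, ka = eo
    r = rot(om, ph, ka)
    num_x = r[0, 0] * x1 + r[1, 0] * y1 + r[2, 0] * (-c)
    num_y = r[0, 1] * x1 + r[1, 1] * y1 + r[2, 1] * (-c)
    den = r[0, 2] * x1 + r[1, 2] * y1 + r[2, 2] * (-c)
    return Xc + (Za - Zc) * num_x / den, Yc + (Za - Zc) * num_y / den

def ground_to_image(Xp, Yp, Za, eo, c):
    # eq (2): standard collinearity projection of a ground point
    Xc, Yc, Zc, om, ph, ka = eo
    u = rot(om, ph, ka) @ np.array([Xp - Xc, Yp - Yc, Za - Zc])
    return -c * u[0] / u[2], -c * u[1] / u[2]

# demo: ground point seen in image 1, then re-projected into image 2
eo1 = (10.0, 20.0, 70.0, 0.02, -0.01, 0.30)   # illustrative EO of image 1
eo2 = (12.0, 20.5, 70.2, 0.01, 0.00, 0.31)    # illustrative EO of image 2
c, Za = 0.05, 0.0
x1, y1 = ground_to_image(15.0, 25.0, Za, eo1, c)
X, Y = image_to_ground(x1, y1, eo1, c, Za)     # eq (1): back to the ground
x2, y2 = ground_to_image(X, Y, Za, eo2, c)     # eq (2): initial guess in image 2
```

The round trip through (1) is exact when the true terrain height equals $Z_a$; the residual error discussed in Section 3.3 comes from the uncertainties of the EOs and of $Z_a$ itself, not from the geometry.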
3.3
Promising pyramidal depth levels
The most effective way to determine the number of pyramidal depth levels is to derive it from the displacement between tie points. Although this displacement is shortened, as discussed in the previous section, its magnitude is still unknown, since it is the remaining distance between the initial guessed position and the exact tie point position. This distance is, in fact, the error introduced by the uncertainties of all parameters in the collinearity equations. The primary sources of error are: (a) the measurement error from the feature selection process, (b) the instrumental error from the GPS/INS sensors, and (c) the error in estimating the average terrain elevation. Generally, unknown values are often determined indirectly by making direct measurements of other quantities which are functionally related to the desired unknown [13]. The obscure distance between the unknown correct positions of the tie points in the second image and the positions of the corresponding points from the collinearity equation may be referred to as the error of the initial guessed positions of the tie points. This error can be approximated through the general law of variance propagation, given that the errors of the three related measurements above are known. Every measurement has some uncertainty associated with it. The GPS/INS data used for the exterior orientation parameters in this calculation are also subject to some uncertainty. The uncertainty from the GPS corresponds to the error at the perspective center position: $\sigma^2_{X_C}$, $\sigma^2_{Y_C}$ and $\sigma^2_{Z_C}$. Similarly, the uncertainty from the INS corresponds to the error in the orientation of the camera: $\sigma^2_{\omega}$, $\sigma^2_{\varphi}$ and $\sigma^2_{\kappa}$. The average terrain elevation $Z_a$ is also associated with an uncertainty from its estimation.
As mentioned before, the calculation of the initial guessed positions of the tie points primarily involves the collinearity equations, (1) followed by (2), as presented in Fig. 4. Since the collinearity equation is nonlinear, the Taylor series approximation is applied to linearize it by taking the partial derivatives of (1) with respect to the unknowns $X_1^c, Y_1^c, Z_1^c, \omega_1, \varphi_1, \kappa_1$ and $Z_a$. The design matrix $A$ is given as

$$A = \begin{bmatrix} \frac{\partial X}{\partial X_1^c} & \frac{\partial X}{\partial Y_1^c} & \frac{\partial X}{\partial Z_1^c} & \frac{\partial X}{\partial \omega_1} & \frac{\partial X}{\partial \varphi_1} & \frac{\partial X}{\partial \kappa_1} & \frac{\partial X}{\partial Z_a} \\ \frac{\partial Y}{\partial X_1^c} & \frac{\partial Y}{\partial Y_1^c} & \frac{\partial Y}{\partial Z_1^c} & \frac{\partial Y}{\partial \omega_1} & \frac{\partial Y}{\partial \varphi_1} & \frac{\partial Y}{\partial \kappa_1} & \frac{\partial Y}{\partial Z_a} \end{bmatrix} \quad (3)$$

The dispersion matrix $D\{X\}$ of the parameters $X_1^c, Y_1^c, Z_1^c, \omega_1, \varphi_1, \kappa_1$ and $Z_a$, used to construct the propagation equation $Y = AX$, is formed as a $7 \times 7$ matrix containing the variances of the uncertainties of all parameters. Applying the general law of variance propagation [13], the uncertainty of the ground coordinates $(X, Y)$ is determined as

$$D\{Y\} = A\,D\{X\}\,A^{T} \quad (4)$$

This uncertainty is then used to derive the uncertainty of the tie point in the second image, as presented in Fig. 4. In this step, the unknowns are defined as above plus two additional unknowns for the ground coordinates $(X, Y)$, which are the outcome of the former derivation (4). The design matrix $B$ is defined from the partial derivatives of (2) with respect to the unknowns $X_2^c, Y_2^c, Z_2^c, \omega_2, \varphi_2, \kappa_2$, the ground coordinates $X$ and $Y$, and $Z_a$:

$$B = \begin{bmatrix} \frac{\partial x_2}{\partial X_2^c} & \frac{\partial x_2}{\partial Y_2^c} & \frac{\partial x_2}{\partial Z_2^c} & \frac{\partial x_2}{\partial \omega_2} & \frac{\partial x_2}{\partial \varphi_2} & \frac{\partial x_2}{\partial \kappa_2} & \frac{\partial x_2}{\partial X} & \frac{\partial x_2}{\partial Y} & \frac{\partial x_2}{\partial Z_a} \\ \frac{\partial y_2}{\partial X_2^c} & \frac{\partial y_2}{\partial Y_2^c} & \frac{\partial y_2}{\partial Z_2^c} & \frac{\partial y_2}{\partial \omega_2} & \frac{\partial y_2}{\partial \varphi_2} & \frac{\partial y_2}{\partial \kappa_2} & \frac{\partial y_2}{\partial X} & \frac{\partial y_2}{\partial Y} & \frac{\partial y_2}{\partial Z_a} \end{bmatrix} \quad (5)$$

The dispersion matrix $D\{Y\}$ used to construct the propagation equation $Z = BY$ is formed as a $9 \times 9$ matrix containing the variances of the uncertainties of all parameters. Using the general law of variance propagation, the uncertainty of the tie point in the second image, $(\sigma_x, \sigma_y)$, is determined as

$$D\{Z\} = B\,D\{Y\}\,B^{T} \quad (6)$$

Finally, the candidates for tie points residing in the second image are theoretically confined to the range below, in which $(\sigma_x, \sigma_y)$ is the uncertainty of the tie point resulting from (6):

$$x_2 - \sigma_x \le x \le x_2 + \sigma_x, \qquad y_2 - \sigma_y \le y \le y_2 + \sigma_y \quad (7)$$

Fig. 4. The diagram for calculating the initial guessed positions and for deriving the error propagation.

With the assumption that the window sizes $w$ in the x and y directions are equal, the number of pyramidal depth levels $L$ can be approximated from the propagated error. Using the framework from [10], an ordinary point $u$ is defined as $u^L = u/2^L$ on the pyramidal image $I^L$. The feasible number of pyramidal depth levels $L$ is determined by the inequality (8). Based on this inequality, plus the requirement that the number of resolutions be an integer, the quantity $L$ must be rounded up to its nearest larger integer.

$$L \ge \log_2\!\left(\frac{\max\{\sigma_x, \sigma_y\}}{w}\right) \quad (8)$$

Fig. 5. The workflow of the improved KLT image matching with initial guessed tie-points and number of depth levels. (KLT feature selection on the first image yields good features; the EOs of the image pair feed the collinearity-based approximation of the initial guesses and the number of depth levels; the pyramidal KLT tracker then produces the tie points in the second image.)
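The propagation chain of Section 3.3 can be sketched generically: build each design matrix as a numerical Jacobian of the projection function and apply the general law of variance propagation, $D\{Y\} = A\,D\{X\}\,A^T$, at each step. The helper names and the toy nadir-looking projection below are illustrative, not the paper's closed-form partial derivatives.

```python
import numpy as np

def jacobian(f, x, eps=1e-6):
    # forward-difference design matrix (Jacobian) of f at x
    x = np.asarray(x, float)
    f0 = np.asarray(f(x), float)
    J = np.zeros((f0.size, x.size))
    for j in range(x.size):
        xs = x.copy()
        xs[j] += eps
        J[:, j] = (np.asarray(f(xs), float) - f0) / eps
    return J

def propagate(f, x, Dx):
    # general law of variance propagation: D{Y} = A D{X} A^T
    A = jacobian(f, x)
    return A @ Dx @ A.T

# toy nadir-looking projection standing in for eq. (1): scale the image
# point (x1, y1) by the ratio of flying height to focal length c = 0.05
def ground(p):
    Xc, Yc, Zc, x1, y1, Za = p
    s = (Za - Zc) / (-0.05)
    return np.array([Xc + s * x1, Yc + s * y1])

p0 = np.array([10.0, 20.0, 70.0, 0.004, -0.003, 0.0])
Dp = np.diag([0.3**2, 0.3**2, 0.3**2, 1e-6, 1e-6, 10.0**2])  # per-parameter variances
Dg = propagate(ground, p0, Dp)           # 2x2 covariance of the ground point
sigma_X, sigma_Y = np.sqrt(np.diag(Dg))  # propagated ground uncertainties
```

Applying `propagate` a second time, with a ground-to-image function in place of `ground`, yields $D\{Z\}$ of (6), whose diagonal gives $(\sigma_x^2, \sigma_y^2)$ for the search range (7) and the depth-level bound (8).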
3.4
Outlier removal
With all the previous solutions, described in Sections 3.1-3.3, implemented together, an automated real-time image matching based on the KLT tracker can be developed as illustrated in Fig. 5. Since one of the key requirements for image georeferencing is to produce highly accurate tie points, we conducted a preliminary experiment to measure the accuracy of the tie points obtained from a system based on the above three combined solutions. A sequence of thirty-seven images is used in this measurement. The detailed description of the experimental data is given in Section 5. The experiment was conducted on two configurations: 1) a 3×3 overlap configuration pattern with a maximum of 3 tie points per block, and 2) a 4×4 pattern with 2 tie points per block. The tracker achieved 95.14% accuracy for the first configuration and 92.12% for the second. Although the false tracking rates of 4.86% and 7.88% are not significantly large, to increase accuracy, the normalized correlation coefficient for each matching point is computed. Tie points whose degree of similarity falls below a predefined threshold are discarded. Finally, the entire image matching system is implemented as shown in Fig. 6.
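The outlier rejection step can be sketched as below; the window size and the similarity threshold are illustrative values, not the paper's exact settings.

```python
import numpy as np

def ncc(a, b):
    # normalized correlation coefficient of two equally sized patches
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def filter_matches(img1, img2, matches, win=5, threshold=0.8):
    """Discard matched pairs whose local-window similarity falls below
    `threshold`. `matches` holds ((x1, y1), (x2, y2)) integer pixel pairs."""
    kept = []
    for (x1, y1), (x2, y2) in matches:
        p1 = img1[y1 - win:y1 + win + 1, x1 - win:x1 + win + 1]
        p2 = img2[y2 - win:y2 + win + 1, x2 - win:x2 + win + 1]
        # drop pairs too close to the border to extract a full window
        if p1.shape == p2.shape == (2 * win + 1, 2 * win + 1) and \
                ncc(p1, p2) >= threshold:
            kept.append(((x1, y1), (x2, y2)))
    return kept
```

A correct match compares nearly identical windows and scores close to 1.0, while a false track lands on unrelated texture and scores near 0, so a fixed threshold cleanly separates the two in most cases.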
Fig. 7. The architecture of the image georeferencing system.
5
Experimental results and analysis
5.1
Experimental data
The experimental data used in this study are a sequence of thirty-seven aerial images acquired from a UAV equipped with GPS/INS sensors in a trial phase. The platform was flown approximately 70 meters above the ground. The image dimensions are 4288×2848 pixels. The acquisition produced a number of flaws in the captured images. Along the trajectory, three abrupt changes of orientation and eight extreme illumination differences are observed over the image sequence. These input images are, however, not pre-processed prior to the experiments.

Fig. 6. Schematic procedure of the proposed automated real-time image matching based on the KLT tracker.
4
Implementation
As illustrated in Fig. 7, the entire real-time automated georeferencing system consists of three primary components. The main focus is on the implementation of the proposed image matching and the image georeferencing. In the present study, the georeferencing task is performed by the simultaneous AT, which adjusts the exterior orientations of all images simultaneously based on the bundle block adjustment model [3], [14]. An experiment conducted in [15], measuring the efficacy of the simultaneous AT on a set of sensory data acquired from low-cost GPS/INS sensors, demonstrates that the RMSE of the resulting EOs and GPs decreases by 90% in comparison with the results obtained merely from direct georeferencing. Owing to its straightforward implementation and its acceptable reliability, the simultaneous AT is utilized in this work. Tie points from the proposed real-time image matching, in combination with the GPS/INS data from the direct measurement, are the key elements to establish the real-time georeferencing process.
5.2
Effectiveness of initial guessed positions
An experiment was conducted to evaluate and compare the performance of (a) the ordinary KLT tracker and (b) the proposed image matching system, which supplies the initial guessed positions of tie points. For evaluation, the experiment was performed on a single pair of aerial images rather than the whole sequence. Prior to the measurement, we employed the KLT feature extraction to obtain a set of significant points from the first image under the 3×3 tie point pattern with a maximum of 10 points per block, then manually recorded the positions of their corresponding points in the second image to be used as a reference.

TABLE I
DISPLACEMENTS MEASURED BETWEEN TIE POINTS

Condition               Parameter   X       Y       Euclidean{X,Y}
(a) Original (pixels)   Avg.        527.4   105.6   537.1
                        Max.        605.1   171.7   617.4
(b) Shortened (pixels)  Avg.        39.2    141.1   146.1
                        Max.        70.7    197.7   210.0
The displacement of tie points between the image pair for case (a) is measured as the distance between the point in the first image and the manually extracted corresponding point, while for case (b) it is the distance between the initial guessed position in the second image and the manually extracted position. The measured displacements are presented in Table I. Originally, the distance for case (a) is quite large, with a maximum Euclidean distance of up to 617 pixels. With the initial guessed positions in case (b), the displacement is dramatically decreased, to 210 pixels. This result supports our claim that the initial guessed positions of tie points can reduce the tracking distance, as discussed in Section 3.2. Fig. 8 compares the tracking accuracy and processing time. Under the assumption that the depth level is unknown, we performed the tracking from 0 (at the original image size) to 10 depth levels with the window size set to 10 pixels in both the x and y directions. The original KLT tracker (without the initial guesses) achieved its maximum success tracking rate of 84.21% at depth level 7 and consumed 125 ms of processing time. With the initial guessed positions of tie points, the tracker reached the same tracking rate at depth level 5 with 109 ms, a 12.8% speed improvement. This strengthens our proposal to employ the guessed positions to speed up the KLT tracker, as discussed in Section 3.2. The processing speed, however, is not dramatically improved. This is due to the low accuracy of the GPS/INS sensors we employed, which leads to initial guessed positions of insufficient quality.
5.3
Evaluation of the suggested number of pyramidal depth levels
Referring to the general specifications of the GPS/INS sensors, the uncertainties involved in deriving the number of depth levels are summarized as follows: (a) the uncertainty of the GPS sensor is within 30 cm, (b) the uncertainty of the INS sensor is within 0.5 degrees, and (c) the average elevation estimation error is within 10 m. We selected 5 random points on the first image and followed the mathematical derivation described in Section 3.3. With the local window size set to 10 pixels in both the x and y axes, using (8), the computed results are 4.58, 4.81, 4.57, 4.20 and 4.82. Rounding up to the closest larger integer, the promising number of pyramidal depth levels is suggested to be 5. This figure corresponds exactly to the result discussed in Section 5.2, where the KLT tracker with initial guessed positions achieved its maximum tracking rate of 84.21% at its smallest depth level, 5, as presented in Fig. 8.
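As a worked instance of inequality (8) and the rounding rule: a hypothetical propagated uncertainty of 240 pixels with a 10-pixel window gives log2(240/10) ≈ 4.58, and each of the five values computed above rounds up to the same suggestion of 5 levels.

```python
import math

def depth_levels(sigma_x, sigma_y, w):
    # inequality (8): L >= log2(max{sigma_x, sigma_y} / w), rounded up
    # to the nearest larger integer as required for a resolution count
    return math.ceil(math.log2(max(sigma_x, sigma_y) / w))

# the five values of log2(max{sigma_x, sigma_y}/w) computed in this
# section (w = 10 pixels); the sigmas themselves are not reproduced here
raw = [4.58, 4.81, 4.57, 4.20, 4.82]
rounded = [math.ceil(v) for v in raw]   # all round up to 5
suggested = max(rounded)
```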
5.4
Performance of the proposed image matching
This experiment was conducted to measure the performance of the proposed image matching, as summarized in Fig. 6, in terms of tracking accuracy, computation time and the distribution of tie points. The testing was performed on the whole image sequence based on the 3×3 configuration pattern with a maximum of 3 points per block and a 10-pixel window size. The experimental results are presented in Table II.
TABLE II
PERFORMANCE OF THE KLT IMAGE MATCHING

Measurement                Original KLT   Proposed Work
No. pyramidal levels       7              5
Accuracy (%)               82.12          98.23
Total points / matched     274 / 225      226 / 222
Avg. points per image      11.52          9.25
Avg. time per image pair   0.5413 sec     0.4751 sec

Fig. 8. Comparison of the KLT tracking measurements with and without initial guesses: (a) success tracking rate (%) and (b) computational time (milliseconds), each plotted against the number of pyramidal depth levels (0 to 10).
The proposed image matching obtains tie points at a very high accuracy rate, 98.23%, exceeding the original KLT by 16.11 percentage points. Moreover, the proposed tracker is faster than the original KLT by 12.23%. With the additional ability to automatically determine the appropriate number of pyramidal levels and to distribute tie points evenly over the image, the proposed image matching is particularly appropriate for integration into the real-time image georeferencing system. The limitations of our proposed work, inherited from the ordinary KLT, are that the tracker does not track well when the subsequent image has significant differences in orientation or illumination.
5.5
Discussion of the real-time image georeferencing
Although the entire real-time image georeferencing system has been implemented, its performance measurement is put on hold. Lacking GCPs, we are unable to evaluate the system accuracy and, therefore, the computational time cannot be measured appropriately. To support the promise of the georeferencing system, we refer to an experiment conducted in [15] that measures the time taken by the simultaneous AT. The experiment was conducted on a set of simulated data with reasonable assumptions about the flight plan, sensor parameters and terrain model. The processing time of the simultaneous AT increases steadily as the number of images used in the processing increases. We note that if the number of images in the processing is kept under 225, the computation remains below 0.5 seconds. Therefore, it is feasible to integrate the simultaneous AT and the proposed image matching into a real-time georeferencing system in which a new image is acquired every second.
6
Conclusions
In this paper, we designed and implemented an automated real-time image matching method that particularly corresponds to the image georeferencing requirements. This study utilizes the GPS/INS data to obtain promising initial guessed positions of tie points through the collinearity equation, and to reveal the appropriate number of multi-resolution depth levels via the variance propagation technique, in order to reduce the tracking time of the KLT tracker. The proposed image matching can obtain evenly distributed tie points with up to 98% accuracy and runs faster than the ordinary KLT by 12%, which is promising for real-time image georeferencing. In the near future, we plan to measure the performance of the entire image georeferencing process, which involves the proposed matching technique and the simultaneous AT, over both simulated and real data.
7
Acknowledgment
This research was supported by a grant (06KLSGB01) from the Cutting-edge Urban Development - Korean Land Spatialization Research Project funded by the Ministry of Land, Transport and Maritime Affairs. We would like to thank Dr. Armin Gruen and Dr. Henri Eisenbeiss of ETH Zurich for permission to use their aerial images for testing.
8
References
[1] J. Rodriguez, F. Vos, R. Below, and D. Guha-Sapir. "Annual Disaster Statistical Review 2008: The numbers and trends"; Center for Research on the Epidemiology of Disasters, 2009. [Online]. Available: http://www.cred.be/sites/default/files/ADSR_2008.pdf.
[2] S. Tanathong and I. Lee. "Speeding up the KLT Tracker for Real-time Image Georeferencing using GPS/INS Data"; Korean J. Remote Sensing, vol. 26, no. 6, pp. 629-644, Dec 2010.
[3] T. Schenk. Digital Photogrammetry; TerraScience, pp. 225-255, 381-405, 1999.
[4] P. R. Wolf and B. Dewitt. Elements of Photogrammetry with Applications in GIS; McGraw-Hill, pp. 366-403, 1999.
[5] T. Hassan, C. Ellum, and N. El-Sheimy. "Bridging land-based mobile mapping using photogrammetric adjustments"; ISPRS Commission I Symposium, France, 2006.
[6] A. W. Gruen. "Adaptive least squares correlation: a powerful image matching technique"; South Africa J. Photogrammetry, Remote Sensing and Cartography, pp. 175-187, 1985.
[7] J. Shi and C. Tomasi. "Good features to track"; Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pp. 593-600, 1994.
[8] B. Lucas and T. Kanade. "An iterative image registration technique with an application to stereo vision"; Proc. International Joint Conf. on Artificial Intelligence, pp. 674-679, 1981.
[9] D. Lowe. "Distinctive image features from scale-invariant keypoints"; International J. Computer Vision, vol. 60, no. 2, pp. 91-110, 2004.
[10] J. Y. Bouguet. "Pyramidal implementation of the Lucas-Kanade feature tracker: description of the algorithm"; Technical Report, Intel Corporation, Microsoft Research Labs, 2000.
[11] G. Bradski and A. Kaehler. Learning OpenCV: Computer Vision with the OpenCV Library; O'Reilly, pp. 316-367, 2008.
[12] Z. Zivkovic. "Improving the selection of feature points for tracking"; Pattern Analysis and Applications, vol. 7, no. 2, pp. 144-150, 2004.
[13] C. D. Ghilani and P. R. Wolf. Adjustment Computations: Spatial Data Analysis; John Wiley & Sons, pp. 84-95, 2006.
[14] C. McGlone. Manual of Photogrammetry; ASPRS, pp. 847-870, 2004.
[15] K. Choi and I. Lee. "Image georeferencing using AT without GCPs for a UAV-based low-cost multisensor system"; Korean J. Geomatics, pp. 249-260, 2009.