
Detection of Positional Errors in Systems Utilizing Small-Format Digital Aerial Imagery and Navigation Sensors Using Area-Based Matching Techniques

Amr Abd-Elrahman, Leonard Pearlstine, Bon A. Dewitt, and Scot E. Smith

Abstract
Integration of small-format aerial photographs with navigation systems has been widely used in many remote sensing applications. Low-cost systems that employ onboard small-format digital cameras, GPS receivers, and attitude and heading measuring devices can be efficiently utilized as a point sampling technique. These systems are, however, subject to many potential sources of positional error. In this research, a method that uses area-based image matching techniques was developed to detect positional errors in the image center point locations. The aerial images were matched with lower resolution georeferenced images. An Indian Remote Sensing (IRS) image and a Digital Orthophoto Quadrangle (DOQ) were used as reference images. The matching process succeeded in 70 percent and 50 percent of the tested aerial images when using the IRS and the DOQ as reference images, respectively. Limited success, however, was achieved where tree coverage was a prominent feature in the image. Positional errors in the system were detected by applying this technique on images within the actual flight line and/or over a test area before and after taking the main flight line.

Introduction
Utilization of small-format aerial imagery is a common means of collecting useful information in a land-use/land-cover study. Applications include wildlife (Norton-Griffiths, 1988), forestry (Hall and Aldred, 1992), environmental assessment (Warnes et al., 1993), and photogrammetric surveys (Warner et al., 1996). Small-format camera systems are becoming popular due to their low cost, reasonable accuracy, and simple implementation. Recently, other navigation sensors such as the Global Positioning System (GPS) and inertial navigation systems have been successfully integrated with mobile mapping systems to provide positional and attitude information (Li, 1997). The positional accuracy of such systems depends not only on the accuracy of the individual components, e.g., GPS and inertial navigation sensors, but also on variables such as the synchronization between different sensors, mounting platform stability, lens distortion, and weather conditions.

A. Abd-Elrahman, B.A. Dewitt, and S.E. Smith are with the Civil Engineering Department, University of Florida, Gainesville, FL 32603 ([email protected]; [email protected]; [email protected]). L. Pearlstine is with Wildlife Ecology and Conservation, University of Florida, Gainesville, FL 32603 (PearlstineL@wec.ufl.edu).

A customized system composed of a Kodak DCS 420 CIR digital camera, GPS receiver, and gyros to measure aircraft attitude angles and heading was designed, built, and tested jointly by the Florida Fish and Wildlife Cooperative Unit and the University of Florida Departments of Civil Engineering and Aerospace Engineering. The camera was used to provide ground-truth data to a project whose objective was to produce a map of potential wildlife habitat for the state of Florida. The camera provided data needed to label and perform accuracy assessment for the project, which used Landsat TM imagery as its primary source of data. While the digital camera images proved to be excellent in terms of image quality, they suffered from relatively low positional accuracy. Because the position of the images was critical to their usefulness, it was essential that a system be developed that could automatically check the positional accuracy of sample images within each flight line. Aerial photos were taken in a series of pre-planned flight lines. The position of the center point of each photo was determined by recording the camera position and the aircraft attitude at the time of each exposure. According to the specified accuracy of the GPS and attitude measuring devices, this system should have achieved a positional accuracy of better than one-third of the pixel size of the TM image (approximately 10 m). The actual overall positional uncertainty of the photo centers, however, was significantly greater than 10 m, and so ground and flight tests were conducted to investigate possible sources of errors. The values of the photo center coordinates were then manually compared with the corresponding locations obtained from geo-referenced images (Abd-Elrahman et al., 2000). These tests led to several important outcomes.
First, the main error in the photo center location, which occurred mainly in the flying direction, was found to be due to the lack of synchronization between the instant of taking the image and the recording of the GPS location and the aircraft attitude. This lack of synchronization was more noticeable during higher altitude flights and especially in cold weather. Another source of error was found to be a weak electromagnetic field created by a metal bar inside the aircraft (a Cessna 172) door in close proximity to the

Photogrammetric Engineering & Remote Sensing, Vol. 67, No. 7, July 2001, pp. 825-831.
0099-1112/01/6707-825$3.00/0 © 2001 American Society for Photogrammetry and Remote Sensing

attitude and heading measuring device. This electromagnetic field caused an error in the recorded aircraft attitude. The main problem with these types of errors is that their magnitudes varied significantly from one flight line to another and, sometimes, within the same flight line. Clearly, the first step in handling these errors was to identify their existence and discover their sources. Some of these errors were difficult to detect unless the overall positions of the images' center points were compared with their actual locations acquired from an independent reference of higher accuracy.

One way to geo-reference individual aerial images is to take direct ground observations of identified control points or distinctive features in the image and geo-reference the images by applying photogrammetric techniques using available digital elevation models (DEM) to account for relief displacement within the image (Novak, 1992). This direct method provides highly accurate geo-referencing, but involves additional effort for collecting ground observations. The time and effort required to collect ground observations, together with the problems of finding ground control points and/or landmarks, preclude this approach from being practical in our case. In addition, this direct method can be applied when taking individual images or even a series of images that cover a small ground area, but is not practical in large areas where hundreds or even thousands of small-format aerial images are taken for point sampling applications. Another method for checking the positional accuracy of images, especially in the case of applications that only need the coordinates of the image center, is to match the aerial images with other geo-referenced images. USGS Digital Orthophoto Quadrangles (DOQ) or medium resolution satellite imagery (e.g., 5-m panchromatic Indian Remote Sensing (IRS) or 10-m panchromatic SPOT images) are good candidates for such reference images.
This solution can theoretically give the position of the center of the photos to approximately the same accuracy as that of the reference images. Manual matching by comparing the two images (aerial images and reference images) is a tedious and time-consuming process. It is also limited in its ability to precisely locate the center point of the aerial image on the reference image. In this research, a method was developed for checking the positional accuracy of small-format digital imagery whose center point position is acquired by an onboard GPS receiver and attitude and heading measuring devices. This method was based on automatic matching of the aerial photo with geo-referenced images. The method provides an economical means and an effective tool for detecting deficiencies in aerial imaging systems utilizing small-format cameras, GPS receivers, and/or attitude and heading measuring devices. In the following two subsections, basic image matching techniques and the role image re-sampling plays in these techniques are briefly reviewed. The subsequent sections illustrate the methodology adopted in this research, followed by the results, discussion, and conclusions.
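Once a match against a geo-referenced reference image is found, the matched pixel position must still be converted to map coordinates. For a north-up reference image this is a simple affine relation; the sketch below illustrates the idea (the function name, the sample origin values, and the north-up assumption are ours for illustration, not from the system described here):

```python
def pixel_to_map(row, col, origin_x, origin_y, pixel_size):
    """Convert a (row, col) position in a north-up geo-referenced image to
    map coordinates, given the map position of the upper-left corner and
    the ground pixel size. Hypothetical helper for illustration only."""
    x = origin_x + col * pixel_size   # easting increases with column
    y = origin_y - row * pixel_size   # northing decreases with row
    return x, y

# A matched center point at pixel (row=100, col=200) of a 5-m reference image:
x, y = pixel_to_map(100, 200, 360000.0, 3290000.0, 5.0)
```

With a 5-m reference pixel, a one-pixel matching error therefore corresponds to 5 m on the ground, which is the basis for the 10- to 15-m figures quoted later in this paper.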

Image Matching Techniques
Many techniques have been developed to match images taken from different sensors. These methods can broadly be divided into two major classes (Lemmens, 1988): signal- or area-based matching and feature-based matching. Feature-based methods for image matching involve performing two different tasks. Features from the two images are first extracted and then matched. Candidate features commonly used in digital imagery include edges, regions, region centers, line intersections, curvature discontinuities, etc. Feature matching algorithms make use of attributes such as shape (perimeter, invariant moments, ellipticity, etc.), color, and texture. Each feature in one image is compared with potential corresponding features in the other image. A pair of features with similar attributes is accepted as a match. In area-based image matching methods, a candidate window of the reference image is statistically compared with windows of the same size in the other image (Brown, 1992). The window moves over the reference image and the similarity measure is computed at every new position of the candidate window. Different similarity measures are used to evaluate the matching process. The normalized cross-correlation (Rosenfeld and Kak, 1982) and least-squares matching (Gruen, 1985; Ackermann and Schade, 1993) are well-known examples of area-based similarity measures.

The main reason for selecting an area-based rather than a feature-based matching technique was the characteristics of the data used in this research. The re-sampled digital aerial images had a relatively small number of pixels and details. This was especially the case when matched with a much lower resolution reference image (e.g., the 5-m IRS image). This made extraction of meaningful features a difficult process. In other words, the quantity and quality of features extracted from the aerial digital images after re-sampling are generally not sufficient to achieve adequate feature-based matching results. Also, the small size of the images after re-sampling made the time needed for area-based matching practical. In addition, the least-squares matching technique adopted in this research was found to be more accurate than the feature-matching methods. A traditional area-based matching method, utilizing a cross-correlation matching technique to give approximate matching followed by the more accurate least-squares technique, has been used in this research. Each one of the small-format aerial digital images was used as a template and matched with a geo-referenced image to get its center point location. In the following sections, a brief description of the normalized cross-correlation and least-squares matching techniques used in this research is introduced.

Normalized Cross-Correlation Matching
Cross correlation is the basic statistical approach for image matching. It gives a measure of the degree of similarity between a reference image and a template, where the template size is small compared to the reference image. This method is generally useful for matching images which are misaligned by only slight rotation and scale differences. In this method, the cross correlation at an image pixel, as a similarity measure between the template window and the image window centered at that pixel, is computed. This process is repeated for a moving window over the image. In the case of perfect matching, the result will be another image that has its peak at the pixel where the template matches the image. For a template T and image I, the two-dimensional cross-correlation function for each translation (u, v) is (Svedlow et al., 1976; Rosenfeld and Kak, 1982)

C(u, v) = [Σ_x Σ_y (T(x, y) − μ_T)(I(x − u, y − v) − μ_I)] / √{[Σ_x Σ_y (I(x − u, y − v) − μ_I)²][Σ_x Σ_y (T(x, y) − μ_T)²]}

where x and y are the row and column numbers of a pixel in the template image; u and v are the row and column in the reference image at which the correlation coefficient between the template image and the reference image is computed; μ_T is the mean of the template image; and μ_I is the mean of the reference image over the window region. In this equation, the cross-correlation coefficient is normalized because local image intensity would otherwise influence the value. Normalization of the cross correlation leads to values between −1 and 1, which can easily be evaluated by searching for the peak at which the matching occurs. In order to avoid false matching, the computed normalized cross-correlation coefficient at the peak is compared with a selected threshold. If the coefficient is less than this threshold, the matching procedure fails.
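As a concrete illustration, the exhaustive correlation search described above can be sketched as follows. This is a minimal NumPy version with a brute-force search; the 0.4 default threshold mirrors the value used later in this paper, but the code itself is our illustration, not the authors' implementation:

```python
import numpy as np

def normalized_cross_correlation(template, image, u, v):
    """Correlation coefficient between the template and the image window
    whose upper-left corner is at (u, v)."""
    h, w = template.shape
    window = image[u:u + h, v:v + w]
    t = template - template.mean()
    win = window - window.mean()
    denom = np.sqrt((t ** 2).sum() * (win ** 2).sum())
    if denom == 0.0:
        return 0.0  # flat template or window: correlation undefined
    return float((t * win).sum() / denom)

def correlation_match(template, image, threshold=0.4):
    """Brute-force search over every window position; returns the best
    (u, v) or None when the peak falls below the threshold."""
    h, w = template.shape
    best, best_uv = -2.0, None
    for u in range(image.shape[0] - h + 1):
        for v in range(image.shape[1] - w + 1):
            c = normalized_cross_correlation(template, image, u, v)
            if c > best:
                best, best_uv = c, (u, v)
    return best_uv if best >= threshold else None
```

In practice the search would be restricted to a window around the approximate center point (the 150 by 150 m search area used in this study) rather than the whole reference image.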

Least-Squares Matching
The least-squares matching technique is a widely accepted technique for achieving sub-pixel matching accuracy. In this technique, a geometric relationship between the template and reference images is assumed. Then, a nonlinear optimization algorithm is utilized to minimize the square of the residuals between the template and reference images. This technique has been widely used and investigated since it emerged in the 1980s (Gruen, 1985; Helava, 1988; Calitz and Ruther, 1996; Thevenaz et al., 1998). A brief description of the least-squares matching technique follows.

Let T(x_t, y_t) and R(x_r, y_r) represent the template and reference images, respectively. The mathematical model relating the template and reference images, assuming an affine geometrical relationship between the two data sets, is

T(x_t, y_t) = h_0 + h_1 R(x_r, y_r)
x_r = a_0 + a_1 x_t + a_2 y_t
y_r = b_0 + b_1 x_t + b_2 y_t

where x_t, y_t and x_r, y_r are the respective coordinates in the template and reference images; a_i, b_i; i = 0, 1, 2 are the affine transformation parameters; and h_0, h_1 are the calibration factors for the radiometric shift and scale. The previous mathematical model is nonlinear. This model can be linearized by means of a first-order Taylor series expansion and reformulated in terms of the differentials of the affine transformation coefficients. The linearized form can be expressed as

T(x_t, y_t) − h_0 − h_1 R(x_r, y_r) = dh_0 + R dh_1 + h_1 R_x (da_0 + x_t da_1 + y_t da_2) + h_1 R_y (db_0 + x_t db_1 + y_t db_2) + v

where R_x and R_y are the partial derivatives (gradients) of the reference image and v is the residual.

The linearized form is solved iteratively. The affine parameters and radiometric shift and scale parameters are assigned initial values and are updated at each iteration. The initial values are obtained from the results of the normalized cross correlation explained in the previous section. Convergence of the iterative solution can be assumed when all corrections to the affine parameters and the radiometric corrections are less than predefined values. For each iteration, a normal equation system is formed. The parameter adjustment vector Δ = (dh_0, dh_1, da_0, da_1, da_2, db_0, db_1, db_2)ᵀ is given as

Δ = (AᵀA)⁻¹ AᵀL

where A is an m by 8 matrix with a row for each pixel in the template image and a column for each unknown parameter, and L is an m by 1 residual vector. One row of the A and L matrices can be expressed as follows:

A_i = [1, R_i, h_1 R_xi, h_1 x_ti R_xi, h_1 y_ti R_xi, h_1 R_yi, h_1 x_ti R_yi, h_1 y_ti R_yi]
L_i = T_i − (h_0 + h_1 R_i)

where x_ti, y_ti are the coordinates of pixel i in the template image and R_xi, R_yi are the reference image gradients at the corresponding location. It should be noted that, in order to perform a new iteration, the x_r, y_r coordinates are computed using the obtained transformation parameters. These values are floating point numbers and do not represent particular pixels in the reference image. A re-sampling algorithm must be used to interpolate the digital numbers of the reference image at those computed coordinates.

Image Re-Sampling
The re-sampling problem arises whenever pixels are required in an output grid at locations different from where input pixels are located. In this study, the image re-sampling problem was encountered when the image resolution needed to be changed and within the correlation and least-squares matching processes. The small-format aerial digital images had relatively high resolution (each pixel represented approximately 23 cm on the ground), while the DOQ image had a 1-m pixel size and the IRS image had a 5-m pixel size. In addition, there were rotational differences between the aerial images and both the DOQ and IRS images. In order to successfully match these two sets of images, scale and rotation differences must be minimized. Scale and orientation variations are accounted for in a pre-processing step before the correlation matching step. Images are also approximately scaled before the least-squares matching. Accurate orientation and scale parameters are computed through the matching process. In each iteration, a new set of transformation parameters between the two matched images is computed. Then, the re-sampling process is applied for each pixel in the image using this new set of parameters.

Three methods of re-sampling are widely used: nearest neighbor, bilinear interpolation, and cubic convolution (Billingsley et al., 1983). In this research, the bilinear interpolation re-sampling method was adopted because it yielded "smoother" results than the nearest neighbor method and required less processing time than the cubic convolution method (Smith et al., 1995). In bilinear interpolation, the adjacent neighboring pixels are used in a first-order interpolation to provide the gray level for an unknown pixel.

Methods

A transect of ten images was flown over Gainesville, Florida. The images were taken with a 20-mm Nikkor AI lens on a Kodak DCS 420 digital color infrared camera. The flying height was approximately 440 meters. Imagery from the camera was downloaded to a computer on board the aircraft. The camera format consists of 1524 by 1012 pixels. A Garmin 12-channel GPS receiver measuring C/A code was used to provide real-time differentially corrected coordinates for the camera location at each exposure. The real-time differential correction was broadcast from ground control stations and received through the receiver. This system can provide coordinates of the camera location with errors from 1 to 3 m. The aircraft attitude was determined using a Watson Industries Attitude and Heading Reference System (AHRS-BA303). This device utilizes three solid-state attitude gyros to provide two attitude angles and a heading angle. An accuracy of 0.5 degrees in attitude angle determination can be achieved using this instrument (Watson Industries, AHRS-BA303 owner's manual, 1995). The ground coordinates of the point representing the image center can be calculated using the GPS location of the camera and the attitude and heading of the aircraft recorded at image exposure. The camera-GPS antenna offset was approximately 25 cm. This offset was found to be insignificant in this application.


Figure 1. Sample small-format aerial images (images 1 and 7, from left to right).

Expected errors were believed to be on the order of meters. Figure 1 shows two of the ten aerial images used. The first image covers part of a suburban area where tree canopy represents a major portion of the image, while the second image shows part of a high-density residential area. The figure shows the lack of distinguishable feature signatures in the first image and the strong feature signatures in the second image. The recorded GPS locations and aircraft attitude and heading angles for the ten images are listed in Table 1. A 1-m-resolution USGS Digital Orthophoto Quadrangle (DOQ), re-sampled to a 5-m resolution, and a 5-m Indian Remote Sensing (IRS) image were used in this study as reference images. Figure 2 shows the approximate locations of the center points of the digital aerial images plotted on a sub-scene from the reference IRS image. Each one of the test digital aerial images was matched with the reference image. The coordinates computed from the GPS, attitude, and heading data were compared with those resulting from the matching process.

As previously mentioned, the matching process was conducted in two phases. First, correlation matching was used to get approximate matching parameters. These parameters were then used in a second phase utilizing the least-squares matching technique to get sub-pixel matching accuracy. The digital aerial images were re-sampled to approximately 5-m resolution to match the resolution of the reference images. A search area of 150 by 150 m was specified for the correlation matching process. Before applying the correlation matching, the aerial digital images were rotated using the aircraft heading values recorded by the attitude and heading measuring devices. Gross errors in the recorded heading values could easily be
detected by comparing these measured heading values with the corresponding heading determined by the GPS receiver. The correlation coefficient was calculated for each window position in the search area. The location having the maximum correlation coefficient that exceeded a pre-specified threshold was taken as the approximate initial location for the least-squares matching process. Again, accounting for the differences in scale and orientation between the aerial digital images and the reference images was accomplished by re-sampling and rotating the digital aerial image to the same resolution and orientation as the reference image. This process was essential to achieve good correlation matching results. Although this process was crucial for the correlation matching process to eliminate most of the scale and orientation variations, some changes in scale and orientation still existed. These changes were caused by errors in the scale and orientation corrections due to errors in the measured aircraft altitude and heading angle. In addition, distortions in both images, especially in the aerial images due to relief displacement, perspective projection, and lens distortions, are expected to represent a major component of this error.

Figure 2. IRS image: Center points of the small-format aerial images.

A correlation value of 0.4 was used as a threshold for the correlation matching process. Scale-orientation discrepancies and differences in sun angle, view angles, and acquisition time between the two matched images made the matching process more difficult and forced the use of a low threshold for the correlation coefficient. Aerial images that gave a correlation coefficient below 0.4 were excluded from the subsequent least-squares matching process. False correlation matching with a maximum correlation coefficient that exceeded the 0.4 threshold could still be detected during the least-squares matching process. Diverging least-squares matching, or continuous very slow false convergence, indicates failure in the matching process, and such a match must be rejected. Two tests were conducted with changes in reference images. The first test used a 5-m resolution IRS image, while a DOQ image, re-sampled from its original 1-m resolution to 5-m resolution pixels, was used in the second test.
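The iterate/converge/diverge logic of the least-squares refinement can be sketched with a deliberately simplified, translation-only variant: two shift parameters instead of the paper's eight affine and radiometric parameters, and nearest-pixel windows instead of bilinear re-sampling. This is our own illustration of the idea, not the authors' implementation:

```python
import numpy as np

def lsm_translation(template, reference, u0, v0, max_iter=50, tol=0.01):
    """Least-squares refinement of a purely translational match, returning
    the sub-pixel (u, v) or None when the solution diverges or fails to
    converge within max_iter iterations."""
    h, w = template.shape
    u, v = float(u0), float(v0)
    for _ in range(max_iter):
        iu, iv = int(round(u)), int(round(v))
        # Divergence check: the window wandered outside the reference image.
        if iu < 1 or iv < 1 or iu + h + 1 > reference.shape[0] or iv + w + 1 > reference.shape[1]:
            return None
        win = reference[iu:iu + h, iv:iv + w]
        # Central-difference gradients act as the design-matrix columns.
        gu = (reference[iu + 1:iu + h + 1, iv:iv + w]
              - reference[iu - 1:iu + h - 1, iv:iv + w]) / 2.0
        gv = (reference[iu:iu + h, iv + 1:iv + w + 1]
              - reference[iu:iu + h, iv - 1:iv + w - 1]) / 2.0
        A = np.column_stack([gu.ravel(), gv.ravel()])
        L = (template - win).ravel()
        delta, *_ = np.linalg.lstsq(A, L, rcond=None)
        u, v = u + delta[0], v + delta[1]
        if abs(delta[0]) < tol and abs(delta[1]) < tol:
            return u, v  # converged
    return None  # no convergence within max_iter: reject the match
```

A diverging or non-converging solution returns None, mirroring the rejection rule described above; the full eight-parameter model adds the affine and radiometric columns to the design matrix in the same way.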

Results and Discussion
Results from the conducted tests were analyzed from two different aspects. First, the matching process was evaluated. This involved evaluating the matching of the digital aerial images with the two different kinds of reference images (IRS and DOQ). Then, coordinates of the aerial images obtained through the matching process were compared with those computed from the recorded GPS, attitude, and heading information. When using the 5-m resolution IRS image as a reference image, successful matching was achieved for seven out of the ten images used. Two of the unmatched images were excluded from the least-squares matching process after achieving a correlation coefficient in the correlation matching step less than the pre-defined threshold. These images lacked a strong feature signature due to extensive tree cover. The last image failed to achieve convergence in the least-squares matching process.

Results obtained when using the re-sampled 5-m resolution digital DOQ as a reference image showed that only five of the ten tested images were successfully matched. Images number 3 and 4 failed in the correlation matching step. Although image 3 has a distinctive feature signature and was successfully matched with the IRS reference image, it failed partially due to changes in the landscape that occurred in the time period between taking the aerial image (late 1999) and the DOQ image (acquired in 1995). On the other hand, the IRS image was taken one year before the digital aerial image and has a feature signature close to the one existing in aerial image 3. Although image 10 achieved a maximum correlation coefficient that exceeded the pre-defined threshold, it failed to achieve convergence in the least-squares matching due to the fact that a water body covered more than two thirds of the image.

Tables 2 and 3 list the results for matching both the IRS image and the 5-m resolution DOQ, respectively, with each of the ten aerial images. This includes the matching row and column pixel numbers achieved through both the developed automatic matching algorithm and manual matching of the images. Aerial images that failed the automatic matching process were marked "No Matching." Comparing the matching row and column pixels resulting from the automatic matching process and those extracted through manual matching, we could distinguish a difference of 1.7 pixels and 2.5 pixels in the x direction when using the IRS and the DOQ reference images, respectively. Most of the successfully matched images were within one pixel of the manual matching positions. In this application, where the images are used to assist land-cover classification and perform accuracy assessment of 30-m pixel size Landsat TM images, an error of 2 to 3 pixels (10 to 15 m) in the matching process is acceptable and will not lead to rejection of these images or to a review of the entire flight line.

Table 4 lists the number of iterations required to achieve convergence for each one of the matched images. In addition, the total time needed for both the correlation and least-squares adjustment is also listed for each image. From these results it can be concluded that, in general, the number of iterations needed to achieve convergence when using the IRS image as a reference image is less than the corresponding number of iterations when using the re-sampled DOQ image. From a practical point of view, this was an acceptable processing time duration given that the correlation matching was searching an area of 150 by 150 m. The majority of this time (41.7 sec) was spent in the correlation matching step, while the time needed for each least-squares iteration was found to be approximately 0.21 sec. The matching process was conducted on a 200-MHz Pentium II machine with 64-MB RAM and 256-KB cache memory.

Finally, center-point coordinates of the small-format aerial images resulting from the matching process were compared with those computed from the GPS and the aircraft attitude and heading angles recorded at the time of each image exposure. Comparing these two sets of coordinates can evaluate the positional accuracy of the obtained coordinates for the image center points. Table 5 lists these coordinates computed from the GPS and aircraft attitude and heading angles and those resulting from matching the small-format aerial images with the IRS image. This table shows that differences between actual coordinates obtained through the matching process and the coordinates computed from the GPS and aircraft attitude and heading observations were less than 10 m for five of the tested images. Two other images had differences of less than 15 m.
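The coordinates "computed from the GPS and aircraft attitude and heading angles" can be illustrated with a simplified flat-terrain model: tilt the optical axis by the recorded attitude angles, rotate by the heading, and intersect it with the ground. This is our own small-angle sketch of the idea, not the paper's photogrammetric formulation:

```python
import math

def image_center_ground(cam_x, cam_y, flying_height, roll, pitch, heading):
    """Project the tilted camera axis to flat ground to estimate the image
    center point. Angles are in degrees; heading is measured clockwise
    from north. Illustrative flat-terrain sketch only."""
    r, p, h = map(math.radians, (roll, pitch, heading))
    forward = flying_height * math.tan(p)  # along-track ground offset
    right = flying_height * math.tan(r)    # across-track ground offset
    # Rotate the aircraft-frame offsets into map (east, north) axes.
    east = forward * math.sin(h) + right * math.cos(h)
    north = forward * math.cos(h) - right * math.sin(h)
    return cam_x + east, cam_y + north
```

With zero attitude angles the center point coincides with the camera's ground position; at the 440-m flying height used here, a 0.5-degree attitude error alone displaces the center point by roughly 440 tan(0.5°), about 3.8 m, which illustrates why meter-level center-point errors were expected.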

TABLE 2. MANUAL AND AUTOMATIC MATCHING USING 5-m RESOLUTION IRS AS REFERENCE IMAGE

Aerial Image   Manual Matching Results    Automatic Matching Results   Difference
Number         x (pixels)   y (pixels)    x (pixels)   y (pixels)      dx (pixels)   dy (pixels)
1              No Matching
2              3349         3526          3349.14      3527.22         -0.14         -1.22
3              3497         3534          3496.25      3534.36          0.75         -0.36
4              3648         3538          3646.64      3540.21          1.36         -2.21
5              3797         3535          3796.22      3534.25          0.78          0.75
6              3939         3523          3939.03      3523.06         -0.03         -0.06
7              4086         3520          4086.00      3518.34          0.00          1.66
8              4232         3516          4232.84      3516.64         -0.84         -0.64
9              No Matching
10             No Matching

TABLE 3. MANUAL AND AUTOMATIC MATCHING USING 5-m RESOLUTION DOQ AS REFERENCE IMAGE

Aerial Image   Manual Matching Results    Automatic Matching Results   Difference
Number         x (pixels)   y (pixels)    x (pixels)   y (pixels)      dx (pixels)   dy (pixels)
1              No Matching
2              625          234           625.54       234.09          -0.54         -0.09
3              No Matching
4              No Matching
5              635          682           634.13       681.42           0.87          0.58
6              623          825           622.38       824.96           0.62          0.04
7              620          973           617.48       972.10           2.52          0.90
8              617          1120          615.59       1119.23          1.41          0.77
9              No Matching
10             No Matching

TABLE 4. NUMBER OF ITERATIONS AND PROCESSING TIME FOR THE MATCHED IMAGES

               IRS as Reference Image        5-m DOQ as Reference Image
Aerial Image   No. of         Time           No. of         Time
Number         Iterations     (seconds)      Iterations     (seconds)
1              No Matching                   No Matching
2              8              43.37          27             47.35
3              6              42.96          No Matching
4              7              43.16          No Matching
5              16             45.05          14             44.63
6              10             43.79          18             45.47
7              5              42.75          6              42.96
8              6              42.96          8              43.37
9              No Matching                   No Matching
10             No Matching                   No Matching

Conclusions

The positional accuracy of small-format digital aerial images can be effectively tested by matching them with other geo-referenced images. Different types of geo-referenced images can be used in the matching process as reference images. An IRS image and a DOQ image re-sampled to a 5-m resolution were successfully utilized in this research. The process depends on the success of the matching process which, in turn, is affected by the degree of similarity of the two features to be matched and their matchability under some similarity measure. The matching process succeeded in 70 percent and 50 percent of the tested aerial images when using the IRS and the DOQ as reference images, respectively. This implies that enough images could be collected to reach a 90 to 95 percent statistical confidence in the positional accuracy of the images in a planned flight line. Temporal changes must be carefully considered because they contributed to the lower percentage of success in matching the aerial images when using the re-sampled DOQ as a reference image.

In this research, matching was conducted for images in urban, suburban, and forest areas. As expected, the matching technique was most successful in the urban and suburban areas. The matching process failed where the majority of the image consisted of dense tree canopy. Road intersections, small subdivisions, and/or distinctive bare soil types provided a successful outcome. Comparing the results obtained from the least-squares matching and the manual matching, we can conclude that automatic matching positions were within one pixel of those determined manually for most of the successfully matched images. Even for the rest of the successfully matched images, the difference between the positions obtained through automatic matching and manual matching was less than 3 pixels, which was still sufficient to detect gross positional errors in the mapping system. On the other hand, the processing time was found to be approximately 42 seconds per image, which is practical except for an extremely large number of images. In areas where most of the flight lines are over dense forest, planned test images over areas having feature diversity are recommended, at least at the beginning and the end of the flight mission. Checking the positional accuracy of these images, in addition to the ones that may achieve successful matching in the main flight line, can provide a good evaluation of the errors that may have been introduced during the mission.

Many types of geo-referenced images can be used to test the positional accuracy of small-format aerial images. In this research, IRS and re-sampled DOQ images were tested. Re-sampling the DOQ image to 5 m achieved a reasonable and practical size for the template image, which is important in area-based image matching techniques. Further tests are recommended using other commercial satellite images (e.g., SPOT, Landsat, and IKONOS images). Investigation may also extend to individual band matching when multispectral reference images are used.

TABLE 5. IMAGE CENTER POINT COORDINATES COMPUTED FROM NAVIGATION SENSORS DATA AND FROM IMAGE MATCHING RESULTS
(Columns: Aerial Image, Computed-x (m), Computed-y (m), IRS-matched-x (m), IRS-matched-y (m), dx (m), dy (m), distance (m); data rows illegible in this copy.)



(Received 10 May 2000; accepted 04 August 2000; revised 14 October 2000)
