Photogrammetric Record, 16(91): 5–18 (April 1998)

AUTOMATING IMAGE REGISTRATION AND ABSOLUTE ORIENTATION: SOLUTIONS AND PROBLEMS

By I. J. DOWMAN

University College London

(Paper read at a Technical Meeting of the Photogrammetric Society on 27th January, 1997)

Abstract

The basic concepts and tools of automatic image registration are presented and the problems which exist are discussed. The paper describes work carried out at University College London on the registration of two images and on the registration of images with maps. It is shown that strategies have been developed which are applicable to different types of image and reference data and that, with the tuning of well tried algorithms, it is possible to achieve the required matching. The PAIRS technique for automatic registration of optical images from satellites has been implemented and tested and is now being developed for use with optical and microwave images. The ARCHANGEL system for registration of images with maps or vector data is also being implemented and initial results show considerable promise.

KEY WORDS: image registration, stereomatching, absolute orientation

INTRODUCTION

It is now widely accepted that photogrammetry has reached the digital age and that many processes can be carried out as efficiently using digital data as with hard copy images. It is by no means yet accepted that digital methods can offer savings in time and cost across the board, and it is likely that the development of more robust automatic techniques will be needed before this can happen. At present, automation is only fully accepted in certain areas and even then the advantages of speed are offset by the possibility of blunders which must still be corrected manually. There are, however, many areas where research into automatic techniques is intense. It is the primary objective of this paper to examine the true state of affairs in terms of the practical reality of how much automation really exists in the procedures for handling images for measurement and of what the future realistically holds.
In order to do this, recent developments in digital photogrammetry and image processing will be reviewed and in particular those techniques which can be used to automate the mapping processes. It is accepted that the generation of digital elevation models (DEMs) using automatic techniques can be successful with many types of data, terrain and land cover. The use of DEM generation with transformation software
TABLE I. Status of automation in photogrammetry.

Operation             Comment                                                  Examples
Inner orientation     Fully automatic and operational                          Zeiss PHODIS, Leica Helava, Vision International SoftPlotter
Relative orientation  Fully automatic and operational                          Zeiss PHODIS
Absolute orientation  Unsolved at present                                      Schickler (1992); Morgado and Dowman (1997)
DEM generation        Fully automated generation but manual editing required
Feature extraction    Intensive work at present but no robust solution         Gruen et al. (1995); Gruen et al. (1997)

to produce digital ortho-images is also now widespread. Automated relative orientation and image registration are also production features. More recently, automated aerial triangulation has been successfully introduced. All of these processes are based on a well defined set of low level algorithms, incorporated into a strategy to solve the particular problem in hand. These algorithms will be introduced and discussed and their role in the various strategies will be investigated. The paper will also discuss any unrealized potential applications of these algorithms. Following this discussion, some examples of automated registration will be examined and the results of current research will be described. These examples will include relative orientation, orientation of different types of image and, finally, registration of images with reference data such as maps.

Several papers on the current status of automation in photogrammetry have been published in recent years. Of particular note are those by Förstner (1993) and Heipke (1997). Two workshops held at the University of Bonn (Bonn, 1995) have given very comprehensive coverage of the subject. Workshops on automated extraction of man-made features from images were held at Ascona in Switzerland in 1995 and 1997 and these have also provided valuable information (Gruen et al., 1995; Gruen et al., 1997).

The current situation in the automation of photogrammetric processes is summarized in Table I. The fundamental operation of image matching is present in most of these processes and, where matching is the prime operation and does not involve any reasoning, the process is successful. Thus inner orientation involves matching a defined template to a feature in a known position on an image (that is, the fiducial mark). DEM generation involves matching patches on the image until a good correlation is found; the output is the disparity, which can be converted into an elevation.
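The disparity-to-elevation conversion just described can be illustrated for the idealized normal case of two vertical photographs, using the standard parallax relation Z = Bf/p. The function name and all numbers below are illustrative choices, not taken from the paper.

```python
# Sketch: converting a matched x-parallax into an elevation for the
# idealized normal case (two vertical photographs, air base B, principal
# distance f). All numbers are illustrative, not from the paper.

def elevation_from_parallax(base, focal, flying_height, parallax):
    """The camera-to-ground distance is Z = B*f/p, so the ground
    elevation above datum is the flying height minus Z."""
    return flying_height - base * focal / parallax

# B = 600 m, f = 0.15 m, flying height 1500 m above datum:
# a parallax of 0.075 m corresponds to an elevation near 300 m.
print(elevation_from_parallax(600, 0.15, 1500, 0.075))
```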
Relative orientation requires more reasoning because erroneous matches must be automatically eliminated and this is done by the introduction of geometrical constraints. Absolute orientation and registration of different images are more difficult because it is essential that the same feature is identified from both images, or from the image and the reference data. These problems will be discussed in more detail.

THE PROBLEMS OF IMAGE REGISTRATION AND ABSOLUTE ORIENTATION

It is useful to divide the problems into two types, those arising from the registration of images of the same type and those from the registration of data of different types. The first type includes relative orientation, registration of two satellite images and, at a different level, DEM generation. The second type can be subdivided into the registration of different types of satellite images, such as SPOT and Landsat
for example, and SPOT and SAR, and the registration of images with maps or vector data. With the first type, aerial photography is the most straightforward example. Photographs are normally taken with the same camera under very similar conditions within a very short space of time. Accordingly, the images are of the same scale and the illumination conditions and land cover will be unchanged. This makes image matching relatively straightforward. The main problem to deal with on aerial photography is the relief and this is conventionally handled by relative orientation which uses a rigorous model of the cameras and works in three dimensions. If satellite images are to be registered, such as two SPOT images for example, although the sensor and the scale will be the same and relief may not be a problem, the possibility of different tilts or a large time difference can cause significant difficulties. Tilt causes scale changes and both illumination conditions and land cover can have changed over a period of time. In registering images of the same type the presence of very similar features and standard procedures can be exploited as, for example, the standard points used for relative orientation. Automation allows the provision of very high redundancy and this compensates for changes due to temporal differences. Images of different types present problems because the same feature in the real world may not appear in the same way on different images. Obvious examples are the difference between optical images and microwave images, and between images and maps. There may also be differences of scale and the relief problem will be accentuated when there are different distortion characteristics as, for example, between optical and SAR data. 
Point features are no longer appropriate for this type of registration and current work is based on object recognition as with the relational matching of Schickler (1995) or on patches or polygons, which is the basis of the work in progress at UCL (Dowman et al., 1996). Before proceeding to look at the solutions to these problems, some basic algorithms will be described and discussed. Bonn (1995) has been used in the compilation of these sections.

BASIC CONCEPTS AND TECHNIQUES

Level of Automation

All automated processes depend on a number of basic algorithms and these can be grouped by their level of complexity:

(a) low level processes: associated with properties of pixels (colour, gradient, texture);
(b) mid level processes: processes with no semantic knowledge attached (for example segmentation, extraction of points, lines and regions or structural features such as polygons); and
(c) high level processes: these involve interpretation when semantic knowledge is involved.

The algorithms needed for automated registration mainly fall into the first two classes. It is necessary to extract features which may be complex, but it is not generally necessary to know what the features are, although this information may emerge from the processes.


FIG. 1. Examples of points.

Extraction of Points

Points can be defined in a number of ways. They may be circular symmetric points, end points, corners or junctions (Fig. 1). Points (which in this context will comprise a small number of pixels) can be detected by interest operators, which determine whether a feature can be designated as an interest point based on the grey level gradient compared to surrounding pixels, on size and on symmetry. The best known interest operators used in photogrammetry are the Förstner operator and the Moravec operator. Points of a known shape or pattern can be determined with the use of a point template.

Extraction of Regions

A region is an area on an image which has similar characteristics. There are two basic approaches: thresholding techniques, which define classes and then join groups of pixels in a class to form a region, and region growing techniques, which work outwards from a seed point to find pixels with similar characteristics. Region extraction may lead to incorrect boundaries and other techniques such as split and merge may be needed to extract meaningful areas.

Edge Extraction

An edge is a boundary where some property (brightness, colour or texture) is changing rapidly, perpendicular to the edge. We assume that on each side of the edge the adjacent regions are homogeneous in this property. A line may also be regarded as an edge, that is a narrow region which has a different property from the areas on either side. An edge is determined by a gradient but there are other factors such as the length of the edge. A very short segment will not usually be meaningful. It may be possible that a series of small edges can be built into longer lines which may be straight, curved or part of a structure, such as a polygon. There are many techniques for extracting edges, for example using templates, gradients and parametric edge models. Operators such as the Canny operator and the Hough transform are used widely.
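As a minimal concrete illustration of gradient based edge extraction (a sketch only, not the Canny operator or any method from the paper), an edge pixel can be flagged wherever the grey level gradient magnitude exceeds a threshold; the toy image and threshold below are illustrative.

```python
# Sketch: gradient-magnitude edge detection on a toy grey-level image.
# Pure Python; the image and threshold are illustrative choices.

def gradient_edges(img, threshold):
    """Mark pixels whose central-difference gradient magnitude
    exceeds the threshold as edge pixels (border pixels left unmarked)."""
    rows, cols = len(img), len(img[0])
    edges = [[0] * cols for _ in range(rows)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            gx = (img[r][c + 1] - img[r][c - 1]) / 2.0
            gy = (img[r + 1][c] - img[r - 1][c]) / 2.0
            if (gx * gx + gy * gy) ** 0.5 >= threshold:
                edges[r][c] = 1
    return edges

# A dark region (0) next to a bright region (100): the edge runs
# down the middle columns.
image = [[0, 0, 100, 100]] * 4
print(gradient_edges(image, 25))
```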
Edge extraction needs to be well tuned: an uncontrolled edge extraction will produce too many or too few edges to be useful. Extracted edges must be filtered or merged to produce edges which define meaningful features.

Matching Algorithms

A fundamental operation of photogrammetry is the establishment of correspondence between the same point in two or more images (conjugate points for relative


TABLE II. Requirements and solutions for automatic registration of similar images.

Requirement                                              Solution
Good initial values                                      Use image pyramid
Very reliable conjugate points                           Use high initial redundancy and apply robust checks
Image co-ordinates to sub-pixel accuracy                 Final correlation using least squares matching
Points well distributed over the whole of the overlap    High redundancy covering whole overlap
Determination of parameters of relative orientation      Use of rigorous 3D model

orientation) or between a point on an image and its corresponding representation in, for example, a camera calibration certificate (fiducial marks) or a list of control point co-ordinates. This gives rise to image matching, a task which a human being with normal vision performs remarkably well. In computing terms this breaks down into area based matching and feature based matching. Area based matching considers only the intensity of the pixels. The simplest area based technique is cross correlation, which compares only the intensity of patches from the two images. The method depends on geometric similarity between the patches and on the radiometric differences between the two images being simple (for example a uniform change in brightness or contrast). A more general technique is least squares matching, which uses the least squares principle to minimize the differences in geometry and radiometry. Feature based matching uses symbolic descriptions of the images for establishing correspondence. The best known example of this is DEM generation from matched points. The points are extracted by use of an interest operator and then points without a match are eliminated. Another example is relational matching, which establishes topological relationships between objects. In all matching methods, establishing an initial estimate of the match is essential and this is most frequently done by using an image pyramid.
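The cross correlation technique described above can be sketched as follows; this toy implementation (with illustrative arrays, not from the paper) slides a template over a search image and returns the position with the best normalized cross correlation score.

```python
# Sketch: area based matching by normalized cross correlation of a small
# template against every position in a search image. Pure Python; the
# arrays and window size are illustrative.

def ncc(patch_a, patch_b):
    """Normalized cross correlation of two equal-sized flattened patches."""
    n = len(patch_a)
    ma = sum(patch_a) / n
    mb = sum(patch_b) / n
    num = sum((a - ma) * (b - mb) for a, b in zip(patch_a, patch_b))
    da = sum((a - ma) ** 2 for a in patch_a) ** 0.5
    db = sum((b - mb) ** 2 for b in patch_b) ** 0.5
    return num / (da * db) if da and db else 0.0

def best_match(image, template):
    """Slide the template over the image; return (row, col) of the peak NCC."""
    th, tw = len(template), len(template[0])
    flat_t = [v for row in template for v in row]
    best, best_rc = -2.0, None
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            flat_p = [image[r + i][c + j] for i in range(th) for j in range(tw)]
            score = ncc(flat_p, flat_t)
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc

image = [
    [1, 1, 1, 1, 1],
    [1, 9, 8, 1, 1],
    [1, 7, 9, 1, 1],
    [1, 1, 1, 1, 1],
]
template = [[9, 8], [7, 9]]
print(best_match(image, template))  # the bright 2x2 block at (1, 1)
```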

REGISTRATION OF TWO SIMILAR IMAGES

The best known example of image registration is relative orientation of a pair of aerial photographs. The minimum requirement is that five conjugate points, well distributed in the overlap, are found in order to be able to calculate the five parameters of orientation. Usually the photographs will be similar, having the same scale, 60 per cent overlap, very little tilt and similar radiometric properties. Because flying is well planned and controlled, very few areas are unusable (for example, due to water or snow). Well established procedures exist for relative orientation and the analogue method has been converted to analytical with very little change in the procedure. The same method can be applied to registering two satellite images, although the images may have more differences than two aerial photographs. In order to design an automatic system the requirements should be specified and possible solutions examined (Table II). Relative orientation is designed to produce discrete three dimensional (3D) co-ordinates in an arbitrary model co-ordinate system. If an ortho-image is required, the full process of absolute orientation, DEM generation and ortho-image generation must be followed. Heipke (1997) has given a full description of automated relative orientation with aerial photographs and he provides the following steps for a generic solution for autonomous relative orientation.


(1) Compute image pyramids for both images separately.
(2) Approximately determine overlap and possible rotation and scale differences between the images on the highest level.
(3) Extract features using an interest operator.
(4) Match features using a cross correlation function and employing epipolar geometry to constrain matching position.
(5) Determine coarse orientation parameters.
(6) Proceed with extraction, matching and parameter determination through the pyramid from coarse to fine in order to increase the accuracy of the results.

Heipke has identified seven implementations described in the literature and he reports accuracies of the parallax of 4·6 mm, which is 2·3 times worse than the manual method, but an increase in the accuracy of the parameters by a factor of 2·6 due to the high redundancy and better point distribution. In this example, 15 points were used for the manual method and 132 for the automatic method.

If images from satellites are of the same type, then they can be treated in a similar way to aerial photography and, if necessary, relative orientation can be carried out and 3D co-ordinates generated. It may also be necessary to carry out automatic registration of satellite images which do not necessarily form a stereoscopic pair but which might have different distortion due to relief. The main requirement is often to compare images taken at different times to determine change. In this case 3D co-ordinates may not be required and, if relief distortion is low, or if the images are taken from the same position, a two dimensional warping may be sufficient. Dowman et al. (1996) have reported on the Prototype Automated Image Registration System (PAIRS) which has been developed by Earth Observation Sciences Ltd., University College London, University of Stuttgart and the University of Oporto, under contract for the Western European Union Satellite Centre (WEU). The main features of PAIRS are shown in Fig. 2.
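Step (1) of the generic solution above, the image pyramid, can be sketched as a simple 2 x 2 block averaging scheme; the toy image is illustrative, and real implementations typically smooth before subsampling.

```python
# Sketch of building an image pyramid by 2x2 block averaging.
# Pure Python; the toy image is illustrative.

def reduce_level(img):
    """Halve the resolution by averaging non-overlapping 2x2 blocks."""
    rows, cols = len(img) // 2, len(img[0]) // 2
    return [[(img[2*r][2*c] + img[2*r][2*c+1] +
              img[2*r+1][2*c] + img[2*r+1][2*c+1]) / 4.0
             for c in range(cols)] for r in range(rows)]

def build_pyramid(img, levels):
    """Level 0 is the full image; each further level halves the size."""
    pyramid = [img]
    for _ in range(levels - 1):
        img = reduce_level(img)
        pyramid.append(img)
    return pyramid

base = [[10, 10, 50, 50],
        [10, 10, 50, 50],
        [90, 90, 30, 30],
        [90, 90, 30, 30]]
pyr = build_pyramid(base, 3)
print(pyr[1])  # [[10.0, 50.0], [90.0, 30.0]]
print(pyr[2])  # [[45.0]]
```

Matching begins on the coarsest level, where the search range is small, and the result is refined level by level.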
A system for registration of two images of similar type adopts the strategy of selecting a large number of interest points in two images. From these two sets of points, conjugate points are selected. The pairs of conjugate points are used to calculate a transformation from one image, the slave, to the other, the master. Within this transformation, procedures are built in to detect any erroneous matches which have escaped the checking procedure in the previous stage. In flat areas a plane transformation can be used but, for hilly areas, a three dimensional transformation is used to produce an ortho-image, for which a digital elevation model (DEM) is needed. The individual algorithms used in the system appear in many image understanding problems. The Förstner operator (Förstner and Gülch, 1987) is well known for selecting interest points and this or similar algorithms have been used in a number of applications, such as Genseed, described by Allison et al. (1991). In photogrammetry, systems for relative orientation have been described by Hellwich et al. (1994) and Haala and Vosselman (1992). The PAIRS system is based on published algorithms but draws particularly on work done at the University of Stuttgart for the strategy for determining conjugate points (Haala and Vosselman, 1992) and on the use of robust estimation for removing erroneous points.
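The robust estimation step mentioned above can be illustrated with a simplified sketch: fit a two dimensional similarity transformation to candidate conjugate points by least squares, then refit after discarding the worst residual until all remaining matches agree. This is an illustrative stand-in, not the PAIRS estimator itself, and all point data are invented.

```python
# Sketch: least squares fit of a 2D similarity transformation
# (u, v) = (a*x - b*y + tx, b*x + a*y + ty) to candidate conjugate
# points, with simple robust rejection of the worst match each round.
# Illustrative only -- not the robust estimator used in PAIRS.

def fit_similarity(src, dst):
    """Closed-form least squares similarity fit on centred co-ordinates."""
    n = len(src)
    mx = sum(p[0] for p in src) / n
    my = sum(p[1] for p in src) / n
    ux = sum(q[0] for q in dst) / n
    uy = sum(q[1] for q in dst) / n
    a = b = d = 0.0
    for (x, y), (u, v) in zip(src, dst):
        x, y, u, v = x - mx, y - my, u - ux, v - uy
        a += x * u + y * v          # scale*cos component
        b += x * v - y * u          # scale*sin component
        d += x * x + y * y
    a, b = a / d, b / d
    return a, b, ux - (a * mx - b * my), uy - (b * mx + a * my)

def transform(params, p):
    a, b, tx, ty = params
    return (a * p[0] - b * p[1] + tx, b * p[0] + a * p[1] + ty)

def robust_fit(src, dst, tol=1.0):
    """Refit repeatedly, dropping the worst-fitting match, until every
    remaining residual is within tol (or too few points remain)."""
    pairs = list(zip(src, dst))
    params = None
    while len(pairs) >= 3:
        params = fit_similarity([p for p, _ in pairs], [q for _, q in pairs])
        resid = []
        for p, q in pairs:
            u, v = transform(params, p)
            resid.append((((u - q[0]) ** 2 + (v - q[1]) ** 2) ** 0.5, (p, q)))
        worst = max(r for r, _ in resid)
        if worst <= tol:
            break
        pairs = [pq for r, pq in resid if r < worst]
    return params, pairs

# Four points shifted by (+5, +3) plus one gross outlier.
src = [(0, 0), (10, 0), (0, 10), (10, 10), (5, 5)]
dst = [(5, 3), (15, 3), (5, 13), (15, 13), (40, 40)]
params, kept = robust_fit(src, dst)
print(params, len(kept))  # recovers a=1, b=0, t=(5, 3) with 4 inliers
```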

REGISTRATION OF DIFFERENT IMAGES

Once images from different sensors are considered, then a large number of differences could occur. The most important of these are the pixel size and the spectral band. A difference in pixel size is equivalent to a difference in scale and this can be handled quite easily by reducing the scale of the image with the largest scale. This has the disadvantage of losing information but is a straightforward way of


FIG. 2. Diagram of the Prototype Automated Image Registration System (PAIRS).

ensuring that automated matching can take place. Differences in spectral band are more difficult to manage if one band is in the visible, near infra-red part of the spectrum and the other is in the microwave part (that is, one band is optical and one is radar). A synthetic aperture radar (SAR) image is formed by recording the backscatter of a signal transmitted from the instrument. The backscatter value of a particular pixel is computed and stored with the time of imaging and the range from the sensor to the imaged area. An uncorrected image is therefore very distorted in the range direction. A coarse correction can be made from slant range to ground range by taking into account the look angle from the sensor to the ground. The geometry is then similar to that of an optical image, although the characteristics of the distortion due to relief are different. Another major problem when trying to match SAR with optical data is the phenomenon known as speckle, which gives the image a speckled, noisy appearance. This can be reduced but not removed altogether. Other radiometric differences which can occur are in brightness, contrast and land cover, due to seasonal change. Because of these differences, techniques for matching and registration are necessary which are not dependent on the characteristics of any particular sensor. The use of points is not appropriate for a number of


reasons. Scale differences will mean that different features will be detected by the interest operator, which is essentially looking for features of the size of one pixel. In SAR, the presence of speckle will also cause problems, as speckle points will be chosen rather than features which might appear on both images. It is therefore sensible to look for line and area features. Line features tend not to be distinctive, especially when detected by automatic edge detection. In some images it may be possible to extract junctions, which can be characterized by their position as well as their shape. Polygonal features do, however, tend to have distinctive shapes and these are therefore the best features to use for matching purposes. Polygons are determined by segmentation.

A number of methods have been used to extract polygons. Dowman et al. (1996) describe work done with aerial photographs and with KVR-1000 satellite data. The methods involve smoothing, segmentation, edge enhancement, edge thinning and removal of small polygons. Techniques for segmentation are being improved (Ruskoné and Dowman, 1997). Segmentation involves processing at two levels: region creation, which takes place in image space, and merging, which is done in region space. It is a classical region growing technique based on the clustering of neighbouring pixels with similar properties. All the agglomerated pixels form a region (or segment). As a region grows, its properties become better defined, which allows an accurate delineation of its geometry. The agglomeration of pixels into a segment permits the computation of characteristics, such as information about the neighbours, or measurements from which statistical indicators can be computed in the following steps. Once segmentation has taken place, polygons may be formed from the boundaries of the segments.
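A minimal region growing step of the kind described above might look as follows. This is a sketch only: the merging stage in region space and the statistical indicators are omitted, and the tolerance test against the running region mean is an illustrative choice.

```python
# Sketch: minimal region growing segmentation -- cluster neighbouring
# pixels whose grey values stay within a tolerance of the region mean.
# Illustrative only; the merging stage in region space is omitted.

def grow_region(img, seed, tol):
    """Return the set of pixels reachable from seed whose value is
    within tol of the running region mean (4-connectivity)."""
    rows, cols = len(img), len(img[0])
    region = {seed}
    total = img[seed[0]][seed[1]]
    frontier = [seed]
    while frontier:
        r, c = frontier.pop()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in region:
                mean = total / len(region)
                if abs(img[nr][nc] - mean) <= tol:
                    region.add((nr, nc))
                    total += img[nr][nc]
                    frontier.append((nr, nc))
    return region

field = [[10, 11, 50],
         [12, 10, 52],
         [11, 12, 51]]
segment = grow_region(field, (0, 0), tol=5)
print(sorted(segment))  # the six low-valued pixels on the left
```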
Because the segmentation is dependent on pixel values, the segment boundaries may not represent the true feature boundary, and changes in the feature between imaging dates may also cause differences in the boundaries to be matched. The matching algorithms must therefore be able to take into account differences between polygons, even though they represent the same feature. The basic technique for matching polygons is adapted from Abbasi-Dezfouli and Freeman (1994). Polygons are characterized by a number of parameters such as shape and area. Shape is defined by a bounding rectangle, parallel to defined axes, and also by the chain code method described by Abbasi-Dezfouli and Freeman. The initial translation and azimuth must be fixed by first defining a few polygons which have good matches based on a first pass through the selected points. An iterative approach then allows corresponding polygons to be identified. A large number of polygons is not necessary but it is important that the polygons are distributed in a suitable pattern over the image. Once established, the corresponding polygons must then be exactly matched in order to extract conjugate points. A method of dynamic programming developed at UCL is one way of doing this (Newton et al., 1994). The perimeter of the feature is followed and a best fit obtained. Costs are determined by a number of measures relating the predicted edge pixel position projected into the map and the edge pixel under consideration. The difference in gradient direction between the map boundary pixel and the edge pixel under consideration is also used as a cost. The method allows the detection of changes between the two polygons which may represent true change or an error in detection. In either case, such points will not be selected as conjugate points. The technique makes allowance for the fact that the image may be distorted due to terrain effects or geometric effects from the camera or sensor.
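A much simplified sketch of polygon characterization and matching in this spirit follows, using only area and the axis-parallel bounding rectangle; the chain code description and the dynamic programming stage are omitted, and all polygons and tolerances are illustrative.

```python
# Sketch: characterize polygons by area and axis-parallel bounding
# rectangle, then greedily pair polygons whose descriptors agree within
# a relative tolerance. A simplified stand-in for the Abbasi-Dezfouli
# and Freeman description; their chain code step is omitted.

def descriptor(poly):
    """(area, width, height) of a polygon given as (x, y) vertices."""
    n = len(poly)
    area2 = sum(poly[i][0] * poly[(i + 1) % n][1] -
                poly[(i + 1) % n][0] * poly[i][1] for i in range(n))
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    return abs(area2) / 2.0, max(xs) - min(xs), max(ys) - min(ys)

def match_polygons(set_a, set_b, tol=0.2):
    """Greedily pair polygons whose descriptors differ by < tol (relative)."""
    matches = []
    for i, pa in enumerate(set_a):
        da = descriptor(pa)
        for j, pb in enumerate(set_b):
            db = descriptor(pb)
            if all(abs(x - y) <= tol * max(x, y, 1e-9)
                   for x, y in zip(da, db)):
                matches.append((i, j))
                break
    return matches

# A small field and a large square in image A; shifted copies in image B.
a = [[(0, 0), (4, 0), (4, 3), (0, 3)],        # 4x3 field, area 12
     [(0, 0), (10, 0), (10, 10), (0, 10)]]    # large square, area 100
b = [[(7, 2), (17, 2), (17, 12), (7, 12)],    # the large square, shifted
     [(5, 5), (9, 5), (9, 8), (5, 8)]]        # the 4x3 field, shifted
print(match_polygons(a, b))  # [(0, 1), (1, 0)]
```

Because the descriptors are invariant to translation, the pairing survives the shift between the two images; a rotation-invariant descriptor (such as the chain code) would be needed for rotated data.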
The polygon matching technique has been implemented and tested in PAIRS using two SPOT images. Fig. 3 shows part of the two images in which the extracted polygons can be seen together with the conjugate points which have been extracted.



FIG. 3. Polygons extracted from corresponding SPOT images. The crosses indicate common points matched on the polygon boundaries.

REGISTRATION OF IMAGES AND GROUND REFERENCE DATA

Absolute Orientation of Aerial Photography

This is a more difficult problem because we are matching a 2D image with 3D features in the real world. We may simplify the problem by representing the real world by two dimensional data to which a height can be attached or determined for certain points. But even in this case, the reference data will not be in the same form as the image. The problem has been tackled in several ways. Heipke (1997) has identified a number of research approaches to the problem but concludes that a general solution is not yet available. Using aerial photography, the main approach has been to use signalized points (Gülch, 1994) or well defined man-made features such as manholes (Drewniok and Rohr, 1996). Such methods are clearly too limited for general use. Feature based matching has been proposed using man-made objects such as building roofs. A wire frame model can be created from the known building characteristics and this can be matched with lines extracted from the image (Schickler, 1992). A method proposed by Vosselman and Haala (1992) uses relational matching, in which linear topographical features such as roads, rivers and parcel boundaries are related to corresponding features extracted from the image using a relational description. These methods have only been demonstrated for limited examples. Morgado and Dowman (1997) have shown that a method developed from the polygon approach discussed earlier for image registration can also work for the absolute orientation problem. In this case a pair of 1:11 000 scale aerial photographs was automatically oriented to a 1:10 000 scale Ordnance Survey map. The relative orientation was carried out to give the orientation parameters of the model. Polygon extraction and matching were then used to find corresponding polygons on one image and in the reference data, a set of field boundaries extracted from the 1:10 000 scale map.
A set of common points was then found on the polygon boundaries using dynamic programming. The relative orientation parameters were used to find the conjugate points on the second image and three dimensional model co-ordinates were determined for these points. Since these were common to the map,


TABLE III. Summary of the differences between 10 ground co-ordinates obtained from Kern DSR observations and from the model set up with the automatic procedures.

              Eastings (m)    Northings (m)    Height (m)
Maximum           14·085           2·474          1·879
Minimum           -6·838         -14·812         -1·572
Mean               4·275          -3·600         -0·153
s                  5·923           4·821          1·262
r.m.s.             7·017           6·010          1·207

an absolute orientation could then be computed. The results are shown in Table III. These indicate that an absolute orientation can be carried out but that, in this case, not all of the points are determined with sufficient accuracy. An alternative method of finding the conjugate points and the model co-ordinates is to create epipolar images. The features extracted in one image can be found automatically in the matching stereo-image and hence three dimensional co-ordinates can be determined in the model system. The absolute orientation can then be carried out. This operation will not be possible if a single image is being registered with a map. The method proposed assumes that a DEM is available in the same reference system as the map and hence points matched in the map system can be assigned height values. The collinearity equations can then be used to determine the orientation parameters of the image. This method will fail if points selected do not fall on the same surface as the DEM as, for example, if building roof lines are extracted (Vohra and Dowman, 1996).

Orientation of Satellite Data to Maps

Methods have been developed for registering satellite images to maps. For example, Lee et al. (1993) describe the REGGIE method which carries out a two dimensional registration. Guindon (1995) has described a method in which a simulated SAR image is created and then registered to a real image using area based matching. A DEM is needed for this approach to be successful. Laserscan used a similar approach with map data for SPOT orientation (Morris et al., 1988; Stevens et al., 1988). Holm et al. (1995) have used lakes and islands in Finland as the features to employ for matching and the method has been demonstrated with Landsat and SPOT data.
The work in the PAIRS project is being extended, with all of the partners in the WEU consortium together with the Royal Institute of Technology (KTH), Stockholm and the Swedish Space Corporation, to develop the automatic registration of images to maps. This is the ARCHANGEL (Automatic Registration and Change Location) project, funded under the European Union Fourth Framework research programme. The ARCHANGEL project aims to provide a generic, robust system for registering images from a variety of sensors to map data, which may be originally in vector form or as a paper map which can be rasterized. The method is designed for smaller scale aerial photographs and satellite images from which polygons can be extracted. The features to be used as control can come from a data base in which features are given attributes so that those with similar characteristics can be matched. For example, a lake will have irregular boundaries and fields will have linear boundaries. It is not necessary to know whether the polygon is a field boundary or a lake or a building. All that is necessary is to extract a polygon from the image which will have a


FIG. 4. A portion of the segmented image and the corresponding map.

corresponding shape to an object in the reference data. The extraction of these polygons should be possible from any type of imagery, including SAR. An added attraction of this method is that any existing reference data can be used to assist in the detection of the features in the image. The following topics are being addressed:

(1) automatic detection of cloud, snow and ice;
(2) segmentation of images and extraction of line and area features;
(3) structuring of map data bases to extract line and polygon features;
(4) matching line and polygon features on the map and on the image to generate tie points for transformation;
(5) transformation of images into the map reference system and correction of distortions due to relief; and
(6) change detection.

The basic flow diagram is similar to that for PAIRS (Fig. 2) but the second image is replaced by the map and additional algorithms are needed to structure the map data. The image segmentation methods are those discussed earlier and in Ruskoné and Dowman (1997). The method and progress have been described by Dowman and Ruskoné (1997). Fig. 4 shows a segmented image and the corresponding area of a map with corresponding features which can be used for registration. The boundaries of the river and the main highway show up particularly well but corresponding field boundaries can also be seen.

CONCLUSIONS

It can be seen that many of the tools necessary for automatic image registration and absolute orientation are available but that more work is still needed to refine these tools for particular applications and particular images. Recent work published by Tseng et al. (1997) uses similar strategies to those discussed in this paper but makes use of Fourier descriptors and neural networks. It is possible that a fully automatic method will not be achieved in the near future but that systems will become available which include operator interaction to select algorithms or parameters which are suitable, on the basis of operator experience, for a particular set of imagery. It does seem likely, however, that semi-automatic systems will be available before much longer.


ACKNOWLEDGEMENTS

The work described in this paper has been carried out at University College London over a number of years, contributing to the content of PhD theses and as part of research contracts, and this is noted in the text. The author particularly acknowledges the contribution of Dr. R. Ruskoné in the preparation of this paper.

REFERENCES

ABBASI-DEZFOULI, M. and FREEMAN, T. G., 1994. Patch matching in stereo images based on shape. International Archives of Photogrammetry and Remote Sensing, 30(3/1): 1–8.
ALLISON, D., ZEMERLY, M. J. A. and MULLER, J.-P., 1991. Automatic seed point generation for stereo matching and multi-image registration. Remote sensing: global monitoring for earth management. 11th Annual International Geoscience and Remote Sensing Symposium, Espoo, Finland. Institute of Electrical and Electronics Engineers. Pages 2417–2421.
BONN, 1995. Second course in digital photogrammetry. Landesvermessungsamt Nordrhein-Westfalen and Institut für Photogrammetrie, Universität Bonn. Unpaginated.
DOWMAN, I. J. and RUSKONÉ, R., 1997. Extraction of polygonal features from satellite images for automatic registration: the ARCHANGEL project. Automatic extraction of man-made objects from aerial and space images (II) (Eds. A. Gruen, E. P. Baltsavias and O. Henricsson). Birkhäuser, Basel. 393 pages: 343–354.
DOWMAN, I. J., MORGADO, A. and VOHRA, V., 1996. Automatic registration of images with maps using polygonal features. International Archives of Photogrammetry and Remote Sensing, 31(B3): 139–145.
DREWNIOK, C. and ROHR, K., 1996. Automatic exterior orientation of aerial images in urban environments. Ibid.: 146–152.
FÖRSTNER, W., 1993. Feature extraction for digital photogrammetry. Photogrammetric Record, 14(82): 585–611.
FÖRSTNER, W. and GÜLCH, E., 1987. A fast operator for detection and precise location of distinct points, corners and centres of circular features. Proceedings of ISPRS Intercommission Conference on Fast Processing of Photogrammetric Data, Interlaken. 437 pages: 281–305.
GRUEN, A., KUEBLER, O. and AGOURIS, P. (Eds.), 1995. Automatic extraction of man-made objects from aerial and space images. Birkhäuser, Basel. 321 pages.
GRUEN, A., BALTSAVIAS, E. P. and HENRICSSON, O. (Eds.), 1997. Automatic extraction of man-made objects from aerial and space images (II). Birkhäuser, Basel. 393 pages.
GUINDON, B., 1995. Performance evaluation of real-simulated image matching techniques in the acquisition of ground control for ERS-1 image geocoding. ISPRS Journal of Photogrammetry & Remote Sensing, 50(1): 2–11.
GÜLCH, E., 1994. Using feature extraction to prepare the automated measurement of control points in digital aerial triangulation. International Archives of Photogrammetry and Remote Sensing, 30(3/1): 333–340.
HAALA, N. and VOSSELMAN, G., 1992. Recognition of road and river patterns by relational matching. Ibid., 29(B3): 969–975.
HEIPKE, C., 1997. Automation of interior, relative, and absolute orientation. ISPRS Journal of Photogrammetry & Remote Sensing, 52(1): 1–19.
HELLWICH, O., HEIPKE, C., TANG, L., EBNER, H. and MAYR, W., 1994. Experiences with automatic relative orientation. International Archives of Photogrammetry and Remote Sensing, 30(3/1): 370–378.
HOLM, M., PARMES, E., ANDERSSON, K. and VUORELA, A., 1995. A nationwide automatic satellite image registration system. Integrating photogrammetric techniques with scene analysis and machine vision II. SPIE 2486: 156–167.
LEE, A. J., CARENDER, N. H., KNOWLTON, D. J., BELL, D. M. and BRYAN, J. K., 1993. Fast autonomous registration of Landsat, SPOT, and digital map imagery. Integrating photogrammetric techniques with scene analysis and machine vision. SPIE 1944: 68–79.
MORGADO, A. and DOWMAN, I., 1997. A procedure for automatic absolute orientation using aerial photographs and a map. ISPRS Journal of Photogrammetry & Remote Sensing, 52(4): 169–182.
MORRIS, A. C., STEVENS, A. and MULLER, J.-P. A. L., 1988. Ground control determination for registration of satellite imagery using digital map data. Photogrammetric Record, 12(72): 809–822.
NEWTON, W., GURNEY, C., SLOGGETT, D. and DOWMAN, I., 1994. An approach to the automated identification of forests and forest change in remotely sensed images. International Archives of Photogrammetry and Remote Sensing, 30(3/2): 607–614.
RUSKONÉ, R. and DOWMAN, I., 1997. Segmentation design for an automatic multisource registration. Integrating photogrammetric techniques with scene analysis and machine vision III. SPIE 3072: 307–317.
SCHICKLER, W., 1992. Feature matching for outer orientation of single images using 3-D wireframe controlpoints. International Archives of Photogrammetry and Remote Sensing, 29(B3): 591–598.
SCHICKLER, W., 1995. Automation of orientation procedures. Second course in digital photogrammetry. Landesvermessungsamt Nordrhein-Westfalen and Institut für Photogrammetrie, Universität Bonn. Unpaginated.


STEVENS, A., MORRIS, A. C., IBBS, T. J., JACKSON, M. J. and MULLER, J.-P., 1988. Automatic generation of image ground control features from a digital map database. International Archives of Photogrammetry and Remote Sensing, 27(B2): 402–413.
TSENG, Y.-H., TZEN, J.-J., TANG, K.-P. and LIN, S.-H., 1997. Image to image registration by matching area features using Fourier descriptor and neural networks. Photogrammetric Engineering & Remote Sensing, 63(8): 975–983.
VOHRA, V. K. and DOWMAN, I. J., 1996. Automatic extraction of large buildings from high resolution satellite images for registration with a map. International Archives of Photogrammetry and Remote Sensing, 31(B3): 903–908.
VOSSELMAN, G. and HAALA, N., 1992. Erkennung topographischer Passpunkte durch relationale Zuordnung. Zeitschrift für Photogrammetrie und Fernerkundung, 60(6): 170–176.

Résumé

This article presents the basic concepts and tools of automatic image registration and discusses the associated problems. It describes the work carried out at University College London on the registration of two images and on the registration of images with maps. It is shown that the strategies developed there apply to different categories of images and reference data and that, with the assistance of algorithms validated by experience, the required matching can be obtained. The PAIRS method, designed and tested for the automatic registration of optical satellite images, is currently being developed for use with both optical and microwave images. The ARCHANGEL system for registering images to maps or to vector data is also being implemented and the first results are extremely promising.

Zusammenfassung

The basic concepts and tools of automatic image registration are presented and the existing problems discussed. The paper describes the work carried out at University College London on the registration of two images and on the registration of images with maps. It is shown that strategies have been developed for different image types and reference data and that, by tuning a well tried algorithm, the required image matching becomes possible. The PAIRS procedure for the automatic registration of optical satellite images has been implemented and tested and is now being developed further for use with optical and microwave images. The ARCHANGEL system for the registration of images with maps or vector data has likewise been implemented and the first results are promising.

DISCUSSION

Dr. Chandler: You stated that you have experienced problems due to perspective and relief displacement.
I know that this is perhaps putting the chicken before the egg in this type of problem, but have you tried to carry out the matching process using an ortho-image derived after some kind of initial relative orientation?

Professor Dowman: There are two aspects to the question of dealing with the displacements due to relief. The first is how to correct for it over large areas and the second is to deal with the effects on small detail. We want to develop a system which will deal with many different types of imagery and which will need a minimum of human intervention. To deal with different imagery, we work in the image space and the algorithms which we use for matching the polygons will deal with most of the distortions caused by terrain relief and hence we do not need an approximate ortho-image. The detailed effects of relief displacement caused by buildings and trees, for example, do cause a problem with the detail matching, but even this does not matter if there are sufficient points around the edge of the polygon which are not affected. We are working on this problem at present.

Chairman (Dr. M. J. Smith): May I say that there is work being undertaken in this field at the University of Nottingham, matching ortho-images and maps (vector data)?

Mr. Varshosaz: You have explained a number of techniques for registration of images to maps, most of which use points to register the two systems. What do you think about using other entities such as lines or planes?

Professor Dowman: As I tried to explain, there is a problem in using lines which do not form a closed feature. If we could define the lines in three dimensions, then there would be some scope for using line photogrammetry to match these. We are starting with two dimensions, so that all we can do is define lines in the image space and map space. The edge detection process tends to produce lines which are very short and fragmented and it is very difficult to show that one line segment in the image corresponds to a specific line segment in the map. It is for this reason that we examined polygons, because polygons do have a unique shape which we can exploit.

Mr. Newby: I noticed in your Farnborough images that there seemed to be a very strong preference for lines which were either parallel to one of the axes (presumably north/south), or at 45° to the axes. Do you find that there is a problem of simply getting artefacts out of the pixel structure in that way, or was that just coincidental and unfortunate in that example?
Professor Dowman: I don’t think that it is related to the pixel structure. I think that it is related to the perspective and the sun. In this particular scene, there were some quite strong shadows which affected the image and these tend to enhance edges. Similarly the perspective tends to accentuate certain edges. I think that the trends which you noted were more to do with orientation of the image with regard to the sun than to artefacts.

Mr. Newby: So in principle there is no reason why the edges should appear at any particular angle?

Professor Dowman: No.

Professor Harley: You did not explain the coding of the polygons. Could you tell us what the numbers mean, please?

Professor Dowman: We are looking for changes in direction and the code defines which octant that line is in. But then, in order to reduce the amount of data, we can actually look at the differences of the absolute directions. This is a technique which was developed by Freeman in Australia and it is a fairly widely used technique in this kind of process.

Chairman: You mention that relative orientation can be undertaken automatically. What sort of limitation is there on tilts? Is it limited to the same extent as for human observers or are there more constraints in the matching process?

Professor Dowman: There are several ways of dealing with large tilts. Most automated relative orientation algorithms make use of the epipolar constraint to search for corresponding points or to align the patches for matching. If rigorous geometry is used, any angle of tilt should be accommodated. If epipolar geometry is not assumed, the correlation algorithm should have a parameter to define rotation. In theory, there should be no problem but, in the end, it depends how the algorithms were written.

Chairman: As there are no more questions, it leaves me to thank Professor Dowman for his presentation which, I hope you agree, was certainly worthwhile.
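The octant coding and difference coding described in the discussion above correspond to Freeman chain coding of a polygon boundary. A minimal sketch (illustrative only, not the project's implementation): each step between successive 8-connected boundary pixels is coded by its octant, and taking first differences of the codes modulo 8 yields a description that is invariant to the polygon's orientation in multiples of 45°, which is what makes the reduced codes comparable between image and map.

```python
# Freeman chain coding of a closed pixel boundary, plus the difference code
# that reduces the data and removes the dependence on absolute direction.
# Illustrative sketch only; successive pixels must be 8-neighbours.

DIRECTIONS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
              (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(pixels):
    """Absolute octant codes for a closed boundary (0 = east, 2 = north...)."""
    codes = []
    for p, q in zip(pixels, pixels[1:] + pixels[:1]):
        step = (q[0] - p[0], q[1] - p[1])
        codes.append(DIRECTIONS[step])
    return codes

def difference_code(codes):
    """First differences mod 8: invariant to rotations in 45-degree steps."""
    return [(b - a) % 8 for a, b in zip(codes, codes[1:] + codes[:1])]
```

For example, a unit square traced anticlockwise gives the absolute codes [0, 2, 4, 6], while the same square started from a different corner gives [2, 4, 6, 0]; both reduce to the difference code [2, 2, 2, 2].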
