
IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING


Eigenmethod for feature matching of pre- and post-event images exploiting adjacency

Marco Manfredi, Massimiliano Aldrighi, Fabio Dell'Acqua*, Senior Member, IEEE

Abstract—With the continuing increase in the number of images collected every day from different sensors, automated registration of multisensor/multispectral images has become a very important issue. This is especially true when pre- and post-event image comparison is concerned: for this particular application, the requirement of obtaining the earliest possible post-event image imposes the use of data potentially possessing significantly different characteristics with respect to the pre-event image. Strongly inhomogeneous image pairs require robust automatic registration techniques, preferably based on resolution-independent, feature-based registration. In a previous paper we proposed a mode-based feature matching scheme, borrowed from the computer vision domain and adapted to pre- and post-event feature matching. Some of the weak points highlighted in that first version are addressed in this paper, where a new version of the method is proposed which exploits a new piece of information, i.e. the adjacency between feature points, generally preserved across the disaster event. Extensive generation of synthetic cases provided significant feedback that was used to tune the algorithm. Three real cases of pre-/post-event feature matching on high-resolution satellite images are shown and discussed.

Index Terms—change detection, mode-based methods, remote sensing, image registration

I. INTRODUCTION

Marco Manfredi and Massimiliano Aldrighi are with the Department of Electronics, University of Pavia, Pavia, Italy. Fabio Dell'Acqua is with the European Centre for Training and Research in Earthquake Engineering (EUCENTRE), Pavia, Italy, and with the Department of Electronics, University of Pavia, Italy. *Corresponding author: UNIPV and EUCENTRE, via Ferrata 1, I-27100 Pavia, Italy. E-mail: [email protected]. Tel.: +39 0382 516990. Fax: +39 0382 529131.

THE inclusion of a Disasters task in the Group on Earth Observation (GEO) [1] Work Programme testifies to the unprecedented attention that the monitoring of natural and man-made disasters is receiving in the Earth Observation (EO) scenario. In the immediate aftermath of a disaster event, satellites can help by providing a quick mapping of the damage caused,


useful for emergency relief actions. Mapping is routinely performed by institutions like UNOSAT [2], SERTIT [3] and the International Charter on Space and Major Disasters [5]. Damage mapping is generally a change detection problem, involving a comparison between a pre- and a post-event image to detect differences attributable to the damage caused by the event. For the comparison to make sense, the images need to be registered first, i.e. a spatial relation must be established between the two images, allowing comparison of exactly matching portions of the Earth surface. A survey of techniques for coregistration of remotely sensed images, not so recent anymore but still largely valid in its outlined classification, is contained in [6], where manual registration, area-based (or correlation-based, as in [7]) and feature-based registration are outlined. A more recent one, oriented to computer vision but still applicable here, is found in [8], basically confirming the classification of methods outlined in [6]. In the context of emergency mapping, time is a critical issue, and the earliest available post-event image may show different characteristics with respect to the pre-event image because it has, e.g., been acquired with a different sensor [9][10]. Manual techniques are slow and correlation-based techniques are naturally unsuitable for inhomogeneous data; feature-based techniques, with feature points established and then matched [11], are the best option, as features are expected to be image-independent, while computer vision techniques are available that allow matching the outlined feature points [12]. In this paper the issue of detecting and locating feature points will not be discussed, as the work focuses on matching extracted feature points. We will thus assume that a set of sensible feature points has already been extracted by a standard technique such as a crossroads detector [13] or a building corner detector [14], and we will concentrate on automatically matching such feature points between the two images. Some experiments aimed at automatically generating a consistent set of feature points have already been performed by some of the authors [18]. In particular, we have considered a mode-based method for matching points from two sets based on their relative distances [16]. The method is translation-, rotation- and also potentially scale-independent, which is desirable for registration purposes. In a former paper [17] we proposed to use a modified version of the method to match feature points (e.g. building corners) in urban areas. The method performed well in a number of cases including real and synthetic data, but some important issues remained open. Among these is the constraint on the number of feature points, which is required to be the same in the two images (pre- and post-event). This is clearly a particularly critical limitation in a pre- to post-event comparison, because some of the feature points may have disappeared. Another issue pinpointed was the failure to exploit other types of information, such as adjacency between feature points (e.g. two corners from the same wall). In this paper such issues are explicitly addressed, together with others not listed here, and solutions are proposed and tested on synthetic and real data. In the following section II, a brief description of the formerly proposed method will be provided, while in section III the


synthetic and real-case data used for testing the proposed algorithm will be presented and discussed. In section IV the weak points of the previous method are pointed out through experiments, and improvements are proposed and illustrated. Section V describes some registration experiments driven by the feature point matching, while section VI proposes a statistical selection method intended to filter out the incorrect matches, together with some preliminary experiments. Finally, section VII closes the paper with some conclusions and lines for potential future developments.

II. THE PREVIOUS METHOD

The modal matching method starts from an idea by Shapiro and Brady [16], who conceived a modal representation of point sets in a 2-D space. In this algorithm, a square proximity matrix is first computed for each of the point sets (pre- and post-event), whose elements are defined through a Gaussian metric as follows:

$$H_{ij} = e^{-r_{ij}^2 / 2\sigma^2}$$

where $r_{ij}^2 = \|\vec{x}_i - \vec{x}_j\|^2$ is the squared Euclidean distance between point $i$ and point $j$, and $\sigma$ is a parameter controlling the degree of interaction between points. Then the classical eigenproblem $H E_i = \lambda_i E_i$ is solved through the computation of the proximity matrix eigenvalues and eigenvectors. Having sorted the positive eigenvalues in decreasing order into the non-zero elements of a diagonal matrix $D$, and organized the column eigenvectors into the orthogonal modal matrix $V$ in corresponding positions, the following relation holds:

$$H = V D V^T$$

To match the two sets of image points, in [16] a comparison is proposed between the rows of the modal matrix. Correspondence is assigned based on the minimum Euclidean distance in the modal space:

$$i \leftrightarrow j = \arg\min_{j'} \sum_{l=1}^{O} \|V_D(i, l) - V_M(j', l)\|^2$$

where $V_D$, $V_M$ are the modal matrices, $i$, $j$ are the points associated with the pre-event and post-event image point sets, and the sum runs over the $O$ considered modes. Afterwards, starting from [16], Carcassoni and Hancock [15] proposed a new method that established the matching of the points after partitioning them into clusters. In [17], the authors highlighted some issues not discussed in [15] and attempted to address them by introducing some modifications into the algorithm, starting from the very beginning, i.e., the clustering criterion. Point set partitioning into clusters is made by selecting the maximum value among the feature vector modal matrix


components of each point and clustering together the points that present their maximum value in the same column of the modal matrix; thus, the total number of clusters equals the number of columns of the modal matrix in which a maximum is found. The next steps consisted in the computation of the position vector of each cluster center,

$$c^D_{\omega_d} = \frac{\sum_{i=1}^{|D|} |\Phi_D(i, \omega_d)| \, \vec{w}_i}{\sum_{i=1}^{|D|} |\Phi_D(i, \omega_d)|}$$

the computation of the cluster center proximity matrix $G_D$ using a hyperbolic tangent weighting function (taking into consideration the total number of clusters), and then the solution of the eigenproblem based on the cluster center proximity matrix, $\det(G_D - \Lambda_D I) = 0$, obtaining the cluster center modal matrix $\Psi_D$, and analogously $\Psi_M$ for the post-event set. Finally, the point and cluster matching is calculated by comparing the elements of $\Psi_D$, $\Psi_M$ and $\Theta^D_{\omega_d}$, $\Theta^M_{\omega_m}$ on a row-by-row basis [19]:

$$\frac{\sum_{l=1}^{O_{\omega_d,\omega_m}} \exp\left(-k_w \|\Theta^D_{\omega_d}(\delta_{i,\omega_d}, l) - \Theta^M_{\omega_m}(\delta_{j,\omega_m}, l)\|^2\right)}{\sum_{j' \in M} \sum_{l=1}^{O_{\omega_d,\omega_m}} \exp\left(-k_w \|\Theta^D_{\omega_d}(\delta_{i,\omega_d}, l) - \Theta^M_{\omega_m}(\delta_{j',\omega_m}, l)\|^2\right)} \cdot \frac{\sum_{L=1}^{S} \exp\left(-k_b \left\| |\Psi_D(\omega_d, L)| - |\Psi_M(\omega_m, L)| \right\|^2\right)}{\sum_{\omega_m=1}^{S} \sum_{L=1}^{S} \exp\left(-k_b \left\| |\Psi_D(\omega_d, L)| - |\Psi_M(\omega_m, L)| \right\|^2\right)}$$

where $O_{\omega_d,\omega_m} = \min[|C_{\omega_d}|, |C_{\omega_m}|]$, $\Theta^D_{\omega_d}$, $\Theta^M_{\omega_m}$ are the proximity matrices relative to the points belonging to clusters $\omega_d$ and $\omega_m$ ($d$ and $m$ stand for the data and model sets), and $k_w$ and $k_b$ are set to 1000 as a result of a previous optimisation work [17].
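As an illustration, the basic modal matching of [16] (before clustering is introduced) can be sketched in a few lines of NumPy. This is a minimal reconstruction of ours, not the authors' code: the eigenvector sign-alignment step and the equal-cardinality assumption are our own choices, and σ is set to 1.1 × d_max as discussed later in section IV-C.

```python
import numpy as np

def modal_matrix(points, sigma):
    """Gaussian proximity matrix H_ij = exp(-r_ij^2 / (2 sigma^2)) and its
    modal matrix: column eigenvectors sorted by decreasing eigenvalue."""
    r2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    H = np.exp(-r2 / (2.0 * sigma ** 2))
    lam, V = np.linalg.eigh(H)            # H symmetric: real eigenpairs
    return V[:, np.argsort(lam)[::-1]]    # principal modes first

def match_points(pre, post, sigma):
    """Assign to each pre-event point the post-event point whose modal-matrix
    row is closest in Euclidean distance (equal-size sets assumed)."""
    VD, VM = modal_matrix(pre, sigma), modal_matrix(post, sigma)
    # eigenvector signs are arbitrary: align each mode of VM with VD
    VM = VM * np.sign((VD.T @ VM).diagonal())
    cost = ((VD[:, None, :] - VM[None, :, :]) ** 2).sum(-1)
    return cost.argmin(axis=1)            # index of matching post-event point
```

Because the proximity matrix depends only on inter-point distances, the resulting matching is invariant to translation and rotation of the point set, which is the key property exploited in the paper.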

III. TEST SETS

In order to test the algorithm on a widely diverse test set, and thus expose its weak points, both real and simulated data were considered. Before describing them, the reader should be made aware of the underlying assumptions:

• feature points are building corners, extracted with a reliable method (in our case, simulated by manual extraction);
• adjacency information is included, i.e. connections between feature points are explicitly declared (in our case, points belonging to the same building side are considered adjacent).

The real images cover three different events:

• a pre-/post-event image pair over Bam, Iran, struck by an MW 6.5 earthquake on 26th December 2003;
• a pre-/post-event image pair over Boumerdes, Algeria, struck by an MW 6.8 earthquake on 21st May 2003;

• a pre-/post-event image pair over a shanty town on the outskirts of Harare, Zimbabwe, demolished during Operation Murambatsvina in May 2005 [4].

In all cases the pairs consisted of QuickBird images: the second is panchromatic, while the first and the third are pan-sharpened, true-colour images. Three point set pairs were extracted from the above image pairs:

• 1 set pair for Bam, 48 points;
• 1 set pair for Boumerdes, 60 points;
• 1 set pair for Harare, 60 points.

The above set pairs were collectively termed "test set A" and represent the "real-world" datasets. Such test sets are however not sufficient to provide a solid statistical base for testing, yet enlarging the dataset size is a labour-intensive operation. As a compromise, we decided to automatically create a number of random datasets. A generator script was written, capable of generating random sets of feature points associated with fictitious buildings. It includes three processing steps:

• in the first step, a random coordinate generator spreads building gravity centres over a portion of a 2-D space;
• in the second step, for every gravity centre a pair of side sizes and a rotation angle are generated to characterize the size and orientation of the building associated with the gravity centre at hand;
• in the third step, feature points are generated, compliant with the assigned size and orientation, and adjacency flags between neighbouring points are set.

A typical result is shown in Figure 1. The size of the 2-D plane was limited to around 1000 × 1000 pixels. Although the scale-invariance of the method should make the magnitude of the point coordinates irrelevant, it was chosen to keep measures homogeneous with those of the real images, expressed in pixels, to prepare the ground for possible future versions including pixelwise processing steps.

Figure 1: A typical output of the random feature point generator. Feature points are marked by red circles, adjacency relationships are symbolized by blue segments.

It was decided not to filter out partly overlapping buildings, because similar building patterns are commonly found in old, masonry or stone-based urban clusters on steep-sloped areas (e.g. Greek island villages, or the Sassi di Matera in Italy). Using the generator, 20 pre-event sets containing 30 buildings each were generated. The corresponding post-event sets were created by adding Gaussian noise to the pixel coordinates, simulating the effect of random disturbance factors. Doing so with a standard deviation of 1 pixel generated "test set B10", with a standard deviation of 2 pixels "test set B20", and so on with different values. We thus obtained a "real-case" test set A and several "simulated" test sets Bxx. Six post-event sets were finally derived: B05 at σ = 0.5, B10, B20, B30, B40 and finally B50 at σ = 5. We wish to recall that at the typical VHR pixel posting of 1 m, a standard deviation of 1 pixel means that, along each axis, the location of a point is displaced by less than 2 metres in around 95% of cases. This is to be compared with an average size of the considered buildings on the order of metres, and is deemed suitable for representing small displacements due to e.g. different vantage points between the two acquisitions.
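The three generation steps above can be sketched as follows. This is a hypothetical re-implementation, not the authors' script: the function name, the size and angle ranges, and the rectangular-footprint assumption are our own.

```python
import numpy as np

def random_building_corners(n_buildings=30, extent=1000.0, sigma=1.0, seed=0):
    """Generate a pre-event corner set plus a noisy post-event copy,
    with adjacency flags between corners sharing a wall."""
    rng = np.random.default_rng(seed)
    corners, adjacency = [], []
    for b in range(n_buildings):
        cx, cy = rng.uniform(0, extent, 2)            # step 1: gravity centre
        w, h = rng.uniform(5, 30, 2)                  # step 2: side sizes...
        theta = rng.uniform(0, np.pi)                 # ...and rotation angle
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        local = np.array([[-w, -h], [w, -h], [w, h], [-w, h]]) / 2.0
        pts = local @ R.T + [cx, cy]                  # step 3: corner points
        base = 4 * b
        corners.extend(pts)
        # adjacency flags: consecutive corners belong to the same wall
        adjacency += [(base + i, base + (i + 1) % 4) for i in range(4)]
    pre = np.array(corners)
    post = pre + rng.normal(0.0, sigma, pre.shape)    # noisy post-event copy
    return pre, post, adjacency
```

With `sigma=0.5` this produces a B05-style pair; varying `sigma` reproduces the other Bxx sets.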

IV. IMPROVEMENTS

This section discusses the drawbacks of the original method and illustrates the proposed improvements.

A. Mode number reduction

As introduced in section II, the original matching method relied on a comparison of the feature points as projected into the mode space. From [16] we know that shape information tends to concentrate in the principal modes; thus, some of the columns (modes) of the modal matrix may be ignored when the suppressed information mostly consists of small-scale differences between the feature point sets, which are potentially even harmful. The issue of irrelevant subspaces in the mode space was investigated through a series of experiments. In the first experiments, all the B05 subsets were repeatedly matched, increasing one by one the number of neglected modes, starting from the least relevant. Fig. 2 shows the percentage of correct matchings vs. the number of ignored modes. The graphs feature a clear trend, common to the three sets, with a negligible effect until around 15 neglected modes, followed by a steep descent. In order to validate the results, the same experiment was also performed on the C05 set, with similar results (Fig. 3). It is however interesting to note that experiments on the C30 set showed a different trend, departing much earlier from 100% correctness, as in Fig. 4. This phenomenon has two implications: the first is that, in the absence of a noise level estimate, it is preferable to keep all of the modes active in the feature point comparison, which is the case in the following experiments; the second is that a decrease in accuracy is expected when considering uneven sets of points, where a number of modes in the more numerous set do not possess a matching mode in the other set. This is actually the case, as will be shown in IV-E.


Figure 2: matching accuracy as a function of the number of least-significant, neglected columns; real-case data

Figure 3: same as fig. 2, with synthetic data, σ = 0.5

B. Clusters

As already mentioned in section II, the doubts about the definition of clusters for this particular application were confirmed. Experiments were performed on the C05 set, forcing the number of clusters to 1, 2, 3, ... but letting the feature points be assigned to the clusters according to the rules defined in II. The corresponding matching accuracies, shown in Fig. 5, clearly display a dramatic drop in the resulting percentages. Clustering was therefore abandoned, resulting in a significant simplification of the matching formula:

$$\sum_{l=1}^{O_{\omega_d,\omega_m}} \exp\left(-k_w \left\| \Theta^D_{\omega_d}(\delta_{i,\omega_d}, l) - \Theta^M_{\omega_m}(\delta_{j,\omega_m}, l) \right\|^2\right)$$

This formula, although no longer supported by a Bayesian framework, still provides higher values for more likely correspondences, and has proven to actually allow matching feature points in practical use, both on real and simulated data.

Figure 4: same as fig. 3, σ = 3


Figure 5: matching accuracy vs. number of clusters

C. The parameter σ

In section II it was stated that setting the σ parameter to 1.1 × d_max, where d_max is the largest distance between any two points in the feature set at hand, appeared to be a good criterion for mode computation. When comparing potentially uneven sets of points, however, this criterion may lead to significantly smaller values of σ in the post-event set, as the farthest points may have been suppressed as a consequence of the disaster event. The experiments showed that this is an undesirable condition: in a first set of matching attempts, sets from B05 to B50 were compared, and the results are displayed in Fig. 6 as the red solid line. Here, although no points were suppressed, a mismatch between the σ values is introduced by the feature point displacement caused by the noise added to the coordinates. The green solid line in Fig. 6 shows the same accuracy results under the hypothesis of forcing the value of σ for the post-event image to the same value computed for the pre-event image. The higher accuracy values testify that it is worthwhile to force the two σ values to be the same; unfortunately, this is applicable only when the two images feature the same pixel posting, which is however not an uncommon case.

Figure 6: accuracy vs. noise variance for independently computed (solid red line) and forced-equal (solid green line) values of the scale parameter σ

D. Matching strategy

Another improvement was introduced: matching the feature points associated with the row and column of the absolute maximum element in the matrix ΨD, instead of matching the maximum of each row with the corresponding column; the latter strategy may indeed lead to one-to-many matchings when several row maxima fall on the same column. Due to the definition of


correspondence in our framework, such a case is not acceptable at all. Matching only the absolute maxima results in higher accuracy, as confirmed by Table I.

                    Row Maxima    Absolute Maxima
    Synthetic Data    96.51 %         98.86 %
    Harare Image      90.00 %         96.66 %

Table I: comparison between the accuracies of the two possible matching strategies
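The absolute-maximum strategy can be sketched as a greedy one-to-one assignment over a generic score matrix. This is a hypothetical re-implementation of ours; `score` stands for whatever row-by-row comparison values are used, with higher values meaning more likely correspondences.

```python
import numpy as np

def match_by_absolute_maxima(score):
    """Greedy one-to-one matching: repeatedly take the global maximum of the
    score matrix and delete its row and column, so that no post-event point
    can be assigned twice (unlike per-row maxima)."""
    S = score.astype(float).copy()
    matches = {}
    for _ in range(min(S.shape)):
        i, j = np.unravel_index(np.argmax(S), S.shape)
        matches[i] = j
        S[i, :] = -np.inf      # row consumed
        S[:, j] = -np.inf      # column consumed
    return matches
```

On a score matrix where two rows share the same best column, per-row maxima would produce a one-to-many matching, while this greedy scheme forces distinct assignments.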

E. The issue of uneven point sets

After a disaster event, a decrease in the number of extracted feature points is to be expected, due to the possible partial or total destruction of some buildings. An uneven pair of point sets, as mentioned in IV-A, leads to unsatisfactory results due to the consequent mismatch in the mode space. It was then considered to re-integrate the post-event point set by adding dummy feature points, instead of trying to correct the mismatch in the feature space. Since the modal space is very sensitive to the overall shape [16] of the point set, oversampling the existing shape was thought of as a natural choice to re-integrate the number of feature points in the post-event set. Oversampling of the building contour is made possible by the adjacency information, under the reasonable assumption that building walls are linearly shaped. The concept is illustrated in Fig. 7: the pre-event set (a), and the corresponding post-event set (b), where two points went missing in the bottom right area. Fig. 8 represents three different strategies for re-integrating the missing feature points through oversampling of the surviving shapes: (a) on the surviving part of the collapsed building; (b) concentrating oversampling on a single wall; (c) distributing feature points over different walls.
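A minimal sketch of the oversampling idea follows. This is our own hypothetical version, not the authors' implementation: here dummy points are simply placed at wall midpoints, and which walls are oversampled (strategies a-c in the text) is governed by the order of the adjacency list passed in. The function also returns the index of the first dummy point, so that later processing can ignore the dummy pairs.

```python
import numpy as np

def oversample_walls(points, adjacency, n_missing):
    """Re-integrate an uneven post-event set by adding dummy points at the
    midpoints of declared walls (adjacent corner pairs), so that the pre-
    and post-event sets regain the same cardinality."""
    extra = []
    for (i, j) in adjacency[:n_missing]:
        extra.append((points[i] + points[j]) / 2.0)   # wall midpoint
    dummies = np.array(extra).reshape(-1, 2)
    # second return value: index of the first dummy point
    return np.vstack([points, dummies]), len(points)
```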

Figure 7: Original pre- (a) and post-event (b) synthetic test sets

       A          B          C
    77.67 %    55.00 %    57.33 %

Table II: comparison of the oversampling strategies


Figure 8: Oversampled post-event sets: (a) on the same building, (b) on a different building, (c) distributed

A comparison of the results is found in Table II, where the three resulting accuracy values are reported. Apparently, oversampling the surviving section of the building is the best strategy, but the statistical sample is too small to make any definite statement. A similar experiment was performed on the real images, whose post-event sets were reduced in accordance with the observed building collapses:

• Bam: 48-point pre-event set, 45-point post-event set;
• Boumerdes: 60-point pre-event set, 55-point post-event set;
• Harare: 60-point pre-event set, 55-point post-event set.

    Image        Without Oversampling    With Oversampling    Difference
    Bam               6.81 %                 33.00 %            26.19 %
    Boumerdes        21.81 %                 51.70 %            29.89 %
    Harare           23.63 %                 54.54 %            30.91 %

Table III: oversampling vs. no oversampling

Results are summarized in Table III and, although the accuracy levels are not outstanding, it is clearly apparent that re-introducing feature points (even based on ancillary information) is better than leaving them out completely. In a framework of potentially different pre- and post-event feature point sets, this makes quite a difference.

V. REGISTRATION

As mentioned in IV, a registration step is applied after the point matching is completed. In the following, it is explained how this step was implemented.


A. Registration technique

A linear conformal transformation is assumed to be sufficient, defined by the classical RST (rotation-scale-translation) formula:

$$[X_B \;\; Y_B] = [X_A \;\; Y_A \;\; 1] \begin{bmatrix} h_{11} & h_{12} \\ h_{21} & h_{22} \\ h_{31} & h_{32} \end{bmatrix} \qquad (1)$$

where $h_{11} = h_{22} = s\cos(\theta)$, $h_{21} = -h_{12} = s\sin(\theta)$, $h_{31} = t_x$, $h_{32} = t_y$, with $s$ the scale factor, $\theta$ the rotation angle, and $t_x$, $t_y$ the translation components. Such components are estimated through minimization of the squared distance between each projected feature point and its matching one. Only the actual feature points are considered, i.e. distances between pairs of feature points including one resulting from the shape oversampling operation are neglected; such pairs are indeed necessary to make the matching algorithm work at its best by making the mode spaces more homogeneous, but they do not convey any useful spatial information in the image domain; on the contrary, they are obviously misleading. It is finally to be noted that a "perfect" matching in the image domain is in any case made impossible by the Gaussian noise added to the coordinates at the beginning. To evaluate the registration results, the distances between each projected feature point and its actual corresponding point on the post-event image are considered as a series of error values; the performance of the registration algorithm is defined based on the average and mean square deviation of this series, assumed to be representative of the systematic and residual error, respectively, of the defined transformation. Once the preliminary registration based on matched feature points has been performed, a refinement step can be added under the assumption of small residual error. In this latter case, indeed, each feature point should have been projected close enough to its corresponding feature point on the post-event image for a pure distance-based matching to work satisfactorily. Based on the new point matching results, a new transformation is defined, expected to bring the feature points even closer to their corresponding ones.
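The least-squares estimation of the RST parameters becomes linear once the transform is rewritten in terms of a = s·cos(θ) and b = s·sin(θ). A minimal sketch of ours (not the authors' implementation) follows; it returns the 3×2 matrix of Eq. (1).

```python
import numpy as np

def fit_rst(src, dst):
    """Least-squares fit of the rotation-scale-translation transform,
    linear in (a, b, tx, ty) with a = s*cos(theta), b = s*sin(theta)."""
    x, y = src[:, 0], src[:, 1]
    o, z = np.ones_like(x), np.zeros_like(x)
    # each point contributes two equations:
    #   x' = a*x + b*y + tx      y' = -b*x + a*y + ty
    A = np.zeros((2 * len(src), 4))
    A[0::2] = np.column_stack([x, y, o, z])
    A[1::2] = np.column_stack([y, -x, z, o])
    rhs = dst.reshape(-1)                      # interleaved x', y'
    a, b, tx, ty = np.linalg.lstsq(A, rhs, rcond=None)[0]
    return np.array([[a, -b], [b, a], [tx, ty]])   # 3x2 matrix as in Eq. (1)

def apply_rst(src, H):
    """Project points with the 3x2 transform: [x y 1] @ H."""
    return np.hstack([src, np.ones((len(src), 1))]) @ H
```

Minimizing the squared projection residual over all (non-dummy) matched pairs, as the text prescribes, is exactly what the `lstsq` call does.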

B. Registration results

The experiments carried out on the three real-world images confirmed the correctness of the procedure illustrated in the previous subsection. Table IV shows the results from the modal-based registration of the image pairs. The distances, expressed in pixels, are to be compared with maximum inter-feature-point distances of 1143.35, 649.73 and 1787.15 pixels for the three images, respectively. The first registration results in non-negligible residual errors, but it is to be noted that most


    Image        Measure    1st Transformation    2nd Transformation
    Bam          Mean            101.28                 36.11
                 St.Dev.          90.12                 16.47
    Boumerdes    Mean             25.77                 16.54
                 St.Dev.           3.61                  3.71
    Harare       Mean            300.64                 96.81
                 St.Dev.          17.91                 13.80

Table IV: residual misregistration levels

of the average error is due to the few wrong associations, whose corresponding squared distances impact heavily on the overall figures. Actually, the feature points get sufficiently close to their actual corresponding ones for the subsequent, distance-based matching technique to work properly. The error indeed shows a clear decreasing trend, both in the average distance and in the mean squared deviation. The final residual error is still somewhat too large, and a further refinement of the technique is in order, which may consist in the introduction of a robust statistical estimation technique, as briefly described in VI.

VI. STATISTICAL SELECTION OF MATCHING SAMPLES

Correspondence between feature points may easily include wrong associations, which result in outliers, i.e. pairs which tend to pull the transformation parameters away from the average of the other associations. This lays the ground for the possible use of a robust regression method such as RANSAC (RANdom SAmple Consensus) [20], which attempts to outline a small subset of correct data instead of using the largest possible dataset to estimate the transformation parameters. In the case at hand, the application of RANSAC translates into picking random subsets of matched feature points, defining a transformation based on each subset as described in V-A, and comparing the defined transformations. A subset of the defined transformations should emerge which is sufficiently homogeneous to be recognised as the "correct" one, and thus define a "correct" transformation, clean of the bias from mismatched points. As a preliminary test, we defined six selected subsets of feature points from the Boumerdes real image case: two contained correct point matchings only (18 points, 9 correct point matchings), another two contained correct and incorrect matchings (18 points, 4 correct point matchings), and the last two contained only wrong associations. Transformation parameters were computed as defined in V-A, and resulted in the following matrices:

$$H_1 = \begin{bmatrix} 0.9903 & -0.0050 \\ 0.0050 & 0.9903 \\ -271.9655 & -89.0267 \end{bmatrix}, \;
H_2 = \begin{bmatrix} 0.9645 & 0.0023 \\ -0.0023 & 0.9645 \\ -250.4210 & -83.1329 \end{bmatrix}, \;
H_3 = \begin{bmatrix} 0.9443 & -0.0162 \\ 0.0162 & 0.9443 \\ -241.1614 & -43.0921 \end{bmatrix} \qquad (2)$$

$$H_4 = \begin{bmatrix} 0.8340 & -0.2441 \\ 0.2441 & 0.8340 \\ -238.9818 & 323.1771 \end{bmatrix}, \;
H_5 = \begin{bmatrix} 0.9079 & -0.0648 \\ 0.0648 & 0.9079 \\ -229.2627 & 76.8677 \end{bmatrix}, \;
H_6 = \begin{bmatrix} 0.8372 & 0.0768 \\ -0.0768 & 0.8372 \\ -23.2849 & -113.4855 \end{bmatrix} \qquad (3)$$

Two facts can be noted, as also visible in Fig. 9:

• the parameters computed for the two sets containing "correct" matchings only are not extremely similar to each other; this is probably due to the small, unavoidable errors in defining the feature point locations, realistic if one thinks of automatic extraction;
• still, when comparing either "correct" set with any of the other four sets, at least one parameter appears significantly different and flags the presence of at least one "incorrect" set in the two sets examined.

This probably means that the application of RANSAC is a sensible way to improve the results, although it may not be straightforward to tune the selection parameters where the detection of correct subsets is concerned.
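A minimal sketch of such a RANSAC scheme follows. This is our own hypothetical implementation, not the authors': the inlier tolerance, iteration count and minimal sample size are illustrative assumptions, and the embedded least-squares RST fitter is a compact stand-in for the estimation procedure of section V-A.

```python
import numpy as np

def _fit_rst(src, dst):
    """Compact least-squares similarity fit (a = s*cos, b = s*sin, tx, ty)."""
    x, y = src[:, 0], src[:, 1]
    o, z = np.ones_like(x), np.zeros_like(x)
    A = np.concatenate([np.column_stack([x, y, o, z]),    # x' equations
                        np.column_stack([y, -x, z, o])])  # y' equations
    rhs = np.concatenate([dst[:, 0], dst[:, 1]])
    a, b, tx, ty = np.linalg.lstsq(A, rhs, rcond=None)[0]
    return np.array([[a, -b], [b, a], [tx, ty]])          # 3x2, as in Eq. (1)

def ransac_rst(src, dst, n_iter=500, min_pts=3, tol=5.0, seed=0):
    """RANSAC over matched pairs: fit the RST transform on random minimal
    subsets, count inliers within `tol` pixels, keep the largest consensus
    set and refit the transform on it."""
    rng = np.random.default_rng(seed)
    ones = np.ones((len(src), 1))
    best_inliers = np.zeros(len(src), dtype=bool)
    for _ in range(n_iter):
        idx = rng.choice(len(src), size=min_pts, replace=False)
        H = _fit_rst(src[idx], dst[idx])
        err = np.linalg.norm(np.hstack([src, ones]) @ H - dst, axis=1)
        inliers = err < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return _fit_rst(src[best_inliers], dst[best_inliers]), best_inliers
```

The returned inlier mask plays the role of the "correct" subset discussed above: mismatched pairs fall outside the consensus set and no longer bias the transformation.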

Figure 9: visual comparison between the transformation parameters defined by the different subsets. Left: comparison between translation parameters, on a Cartesian plane; right: comparison between rotation and scale parameters, in polar coordinates. Labels indicate subset numbers.

VII. CONCLUSIONS

In the framework of feature point matching between pre- and post-event images, the proposed modal method may represent a valuable tool. Its performance, once bound by a number of limitations, has been improved by developing and applying suitable measures, including a reduction in the number of considered modes and the exploitation of adjacency information to oversample the post-event shapes, in order to even out the pre- and post-event feature point sets. Experiments have reported enhanced


point matching capabilities, which resulted in a better registration of the images. There is however still room for improvement, and it is the authors' feeling that efforts should be concentrated on post-matching selection with methods like the one discussed in section VI, rather than on further tuning of the matching method itself. The set of wrong matchings is generally quite small with respect to the global set of matchings, and is thus expected to be easily separable, while further improvements of the matching method would probably require complex modifications with uncertain outcome. Future work will be directed towards:

• finding an efficient method to sort out a subset of correct matchings;
• introducing a third, fine-tuning registration step in the final phase; at that stage it would probably make sense to apply a correlation-based method to the rescaled, resampled post-event image.

Experiments are currently in progress, and preliminary results appear encouraging.

VIII. ACKNOWLEDGEMENTS

This work was partly funded by the Italian Space Agency through the sponsorship of a PhD grant. The authors wish to thank Prof. Uwe Soergel of the University of Hannover for the useful discussion concerning RANSAC at the Urban 2009 event in Shanghai.

REFERENCES

[1] GEO web site: http://www.earthobservations.org/, last accessed on 07/08/09, 13:15.
[2] UN Institute for Training and Research Operational Satellite Applications Programme (UNOSAT). Web site: http://unosat.web.cern.ch/unosat/, last accessed on 07/08/09, 13:16.
[3] Service Régional de Traitement d'Image et de Télédétection (SERTIT). Web site: http://sertit.u-strasbg.fr, last accessed on 07/08/09, 16:42.
[4] Image concerning destroyed houses in Mbare Township, Harare, Zimbabwe. Pre-event QuickBird image acquired on 16th April 2005, post-event IKONOS image acquired on 27th June 2005. Available online at: http://unosat.web.cern.ch/unosat/freeproducts/zimbabwe/harare_map01-22July2005_high.jpeg, last accessed on 07/08/09, 13:17. Map production and image analysis: UNOSAT. Data: UNOSAT, UNEP, NGA, IRIN, BBC, GLC. Satellite images: IKONOS copyright INTA Space Turk 2005; QuickBird copyright DigitalGlobe 2005.
[5] The Charter web site: http://unosat.web.cern.ch/unosat/, last accessed on 07/08/09, 13:18.
[6] Leila M. G. Fonseca and B. S. Manjunath: "Registration Techniques for Multisensor Remotely Sensed Imagery", Photogrammetric Engineering & Remote Sensing, Vol. 62, No. 9, September 1996, pp. 1049-1056.
[7] Jacqueline Le Moigne, William J. Campbell, and Robert F. Cromp: "An Automated Parallel Image Registration Technique Based on the Correlation of Wavelet Features", IEEE Transactions on Geoscience and Remote Sensing, Vol. 40, No. 8, August 2002.
[8] Barbara Zitová and Jan Flusser: "Image registration methods: a survey", Image and Vision Computing, Vol. 21, No. 11, October 2003, pp. 977-1000.
[9] Gabrielle Lehureau, Florence Tupin, Céline Tison, Guillaume Oller, David Petit: "Registration of metric resolution SAR and optical images in urban areas", Proceedings of the 7th European Conference on Synthetic Aperture Radar, 2-5 June 2008, Friedrichshafen, Germany. On CD-ROM.
[10] Jordi Inglada, Alain Giros: "On the Possibility of Automatic Multisensor Image Registration", IEEE Transactions on Geoscience and Remote Sensing, Vol. 42, No. 10, 2004.


[11] David M. Mount, Nathan S. Netanyahu, Jacqueline Le Moigne: "Efficient algorithms for robust feature matching", Pattern Recognition, Vol. 32, pp. 17-38, 1999.
[12] Lisa Gottesfeld Brown: "A survey of image registration techniques", ACM Computing Surveys, Vol. 24, No. 4, pp. 326-376, December 1992.
[13] Fabio Dell'Acqua, Paolo Gamba, Gianni Lisini: "Improvements to urban area characterization using multitemporal and multiangle SAR images", IEEE Transactions on Geoscience and Remote Sensing, Vol. 41, No. 9, Part 1, September 2003, pp. 1996-2004.
[14] S. M. Smith and J. M. Brady: "SUSAN - A New Approach to Low Level Image Processing", International Journal of Computer Vision, Vol. 23, No. 1, pp. 45-78, 1997.
[15] M. Carcassoni and E. R. Hancock: "Correspondence Matching with Modal Clusters", IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 25, No. 12, December 2003.
[16] L. S. Shapiro and J. M. Brady: "Feature-Based Correspondence: An Eigenvector Approach", Image and Vision Computing, Vol. 10, pp. 283-288, 1992.
[17] Massimiliano Aldrighi, Fabio Dell'Acqua: "Mode-Based Method for Matching of Pre- and Postevent Remotely Sensed Images", IEEE Geoscience and Remote Sensing Letters, Vol. 6, No. 2, pp. 317-321, April 2009.
[18] Fabio Dell'Acqua, Alessandro Sacchetti: "Steps towards a new technique for automated registration of pre- and postevent images", Proceedings of the 2009 Joint Urban Remote Sensing Event, Shanghai, China, 20-22 May 2009.
[19] M. Carcassoni and E. R. Hancock: "Spectral Correspondence for Point Pattern Matching", Pattern Recognition, Vol. 36, pp. 193-204, 2003.
[20] M. A. Fischler, R. C. Bolles: "Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography", Communications of the ACM, pp. 381-395, 1981.
