A Segment-based Registration Technique for Visual-IR Images Enrique Coiras, Javier Santamaría and Carlos Miravet SENER, Ingeniería y Sistemas. Severo Ochoa, 4. Tres Cantos, PTM 28760, Madrid, Spain. e-mail:
[email protected]
Abstract- A new general registration method for images of different nature is presented in this paper. As grey-levels or textures cannot be used for the registration of images from separate spectral bands, an edge-based method has been developed. Edge images are processed to extract straight linear segments, which are then grouped to form triangles. A set of candidate transformations is determined by matching triangles from the source and destination images. The transformations are then evaluated by matching the transformed set of source segments to the set of destination segments. As the coincidence of vertices or edge overlapping cannot be assumed in the registration of images of different nature, a new function for evaluating the matching quality between source and destination segments which does not rely on overlapping measures is proposed. Results and subjective evaluation of the registration of visual and thermal infrared images are presented.
Keywords: Image registration, edge-based, segment-based, segment matching, visual-IR registration.
1. Introduction
Image registration is a required operation when dealing with images captured by different sensors, or when comparing information of the same scene captured at different moments. Although manual registration may be adequate for an occasional image processing task, automatic registration is required for higher workloads or for real-time registration of video images. Our automatic registration process can be split into three basic steps: (1) extraction of features from the images, (2) matching of the extracted features, and (3) determination of a warping function from the resulting matches.
If the images to register are similar, as is the case for images formed by similar sensors, techniques that rely on the correspondence of grey-levels or textures may be used for registration
[1-12]. But in images taken with sensors operating in different spectral bands (for instance, visual and thermal IR images), the textures and grey-levels are unlikely to match, and methods using other features are required. In some special cases, as in the registration of medical images from disparate sensors, contextual considerations give additional information about the images to register, but these cannot be generalized to the registration of remotely sensed data or real-world scenes.
The edges of the objects present in the images are a significant common feature that might be preserved in images of the same scene captured by sensors of different nature. Subsets of elements of the edge image, such as corners and linear segments, might then also be preserved, and can be used as features for the matching process. The use of corners or, more generally, feature points, has already been proposed for registration [12-16]. However, corner (and feature point) detectors are very sensitive to scale and skew, and some of them also to rotation, noise level, texture and grey-level, making the registration process equally sensitive to these factors. Besides, point-based matching schemes are, in general, not very robust under severe geometric distortions or incomplete matchings (which usually appear in the registration of images from different sensors) unless some a priori information is provided.
Segment-based registration schemes represent a more robust approach than point-based methods, as segments fitted to edges are less sensitive to those factors (hence preserving the information present in the edge image) and are easily manageable as vectors. Some authors have already used them for registration of stereo pairs or 3D reconstruction [17-22], but for the registration of images of different nature, some problems must be addressed first.
The first problem is that edge segments frequently appear fragmented, incomplete, or may not appear at the same position in one of the images (see figure 1). Thus overlapping, segment size and coincidence of segment vertices (although used for registration of similar images [18-27]) cannot be guaranteed, and should not be used for registration of images from separate spectral bands. The registration method presented in this paper is based on the assumption that the only segment-related invariant and robust characteristic between images is the straight line defined by the segment.
Figure 1. The behavior of edge and segment extractors in images from different bands may be altered because of differences in scale, grey-levels, texture and noise levels. (a) A synthetic IR-visual pair, (b) extracted edges, and (c) detected linear segments. Segments A and A’, and B and B’, do match, but will not overlap when the registration transformation is applied. The two source segments labeled C match the same destination segment C’, as the edge or segment extractor has not been able to detect the whole C segment.
Another problem is that a source segment can potentially match any segment in the destination image, so the number of possible matching combinations is very high. This is not very critical in stereo pair registration methods, where the segment disposition in both images is very similar and the search space is smaller. However, scale differences and differing information content make the application of these methods difficult for generic image registration.
A more powerful approach consists of grouping segments in pairs, so that they define a “virtual corner”. The matching of segment-pairs fixes both position and orientation. Scale and skew could also be fixed if the segment extraction process preserved segment length and position, but this is not the case when dealing with images of different spectral bands. Therefore, techniques making use of these features, such as those presented in reference [20] for stereo matching, cannot be applied here.
Some authors have already used triangles or segment triples for matching or registration [20, 27-29]. Reference [28] uses accumulation of possible registration transformations associated with the matching of several features, but only rotation and translation are restored. In reference [29] triangle matching is used for planar shape pose estimation, but the method is only applicable to a single isolated shape, and, again, only rotation and translation are restored. References [20] and [27] assume the preservation of line segments and their vertex points, which is not the case for the registration of images from separate spectral bands. In this paper a new method for image registration, based on the matching of groups of three segments, is presented.
The main advantage of grouping segments to form triangles is that the matching of a source triangle and a destination triangle directly defines a complete affine transformation (scale, rotation, skew and translation) from the source to the destination image, which constitutes a possible global registration transformation for the images. Note that these triangles could be called “virtual triangles”, in the sense that they may not correspond to real triangular features present on the images.
Another advantage is that two triangles can always be put in correspondence, even if the segments that define them do not overlap when the images are registered, as shown in figure 2. Note, for instance, that the segments A and A’ (marked in figure 1.c), will not overlap, but belong to the same physical edge of the real scene.
The paper is organized as follows. First, a brief outline of the proposed registration method is given in section 2. In section 3 the method for triangle-based extraction of possible transformations is presented. Section 4 discusses the segment matching criterion. The method for the determination of the probability of a certain transformation, based on the segment matching quality function, is described in section 5. Section 6 describes the second-order correction of the obtained affine registration transformation. Finally, results of visual-IR registration and conclusions are presented.
Figure 2. Corresponding triangles (dark grey) in segment images of the registration image pair of figure 1. The segments defining the triangles have been highlighted for clarity. Note that the transformation that registers the triangles can also register the whole images, even though some of the highlighted source segments, once transformed, will not overlap with their corresponding destination segments.
2. Outline of the registration method
In our registration method, source and destination triangles are extracted from the segment images, and possible matchings between the triangles are determined. Every triangle match has an associated affine transformation whose parameters are checked against a set of previously specified limits. Then the resulting set of transformations is analyzed for the determination of the most likely registration transformation. A higher order correction for the most probable affine transformation by local segment matching can then be obtained.
The main steps of the registration procedure are the following:
- Edges are extracted from both images. In our case several edge extractors have been used, such as Canny’s edge extractor [30], the diffuse-edge extractor [31] or the Sobel operator [32]. Registration results showed that the selection of the edge extractor is not critical, although some of them require a previous noise filtering step.
- Segments are extracted from the edge images. First, edge thinning is applied to obtain edges that are one pixel thick. Then, connected pixel chains are extracted and approximated by straight linear segments using Ramer’s algorithm [33] (a sketch of this approximation step is given after this list). The number of resulting linear segments depends on the accuracy of this approximation process. For the determination of the global affine transformation coarse approximations are sufficient, and they decrease computing time. Furthermore, in order to reduce the effect of edge noise on the extracted segments, it is not advisable to set a very tight tolerance level. As a reference, for the IR-visual images presented in this paper, the allowed deviation of the approximated segment from the pixel chain was set to 1.5 pixels.
- All possible triangles are formed by grouping the segments in both images. Triangles with very small areas or with very acute angles are discarded, because of their high sensitivity to segment noise. For the IR-visual images shown in this paper, a minimum area of 16 pixels and a minimum angle of 0.018 radians have been used.
- For each triangle of the source image, the transformations yielding a match with every triangle of the destination image are computed and stored. A set of limits for the transformation parameters (scale, rotation, skew and translation) is established, and all transformations out of these ranges are discarded. It must be pointed out that these limits are not a requirement of the registration method, and are only used to reduce the computation time.
- All obtained transformations are applied to the source segment image, and the quality of the matching of the transformed source segments with the destination segments is computed by means of a quality function described in section 4. The transformation with the best value is selected as the most probable affine transformation for the registration of the source and destination images.
- The resulting affine transformation is refined by re-extracting the image segments with smaller deviations allowed in the chain approximation, and then applying a minimization method, such as the simplex method [34].
- A higher order refinement of the affine transformation may then be obtained if needed, as is usually the case when distortion or other non-linear effects are present in the images. The refinement process is based on the local matching of the transformed source segments with the destination segments, and is described in section 6.
3. Extraction of possible transformations
Triangle matchings are used to determine possible transformations. Every three non-collinear segments that do not intersect at the same point form a triangle. It must be noted that valid triangles can also be formed when the segments are not contained within the triangle sides, as shown in figure 3.
Figure 3. (a) A group of three segments {a, b, c} defines a triangle. The segments do not have to be contained within the triangle sides. (b) Once the intersection points have been determined, they are labeled clockwise.
When the points of intersection between the three segments have been determined, they are labeled clockwise, starting from one of them. As image flipping is not considered in our registration procedure, this clockwise order must be preserved by the triangle matching process.
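As an illustration of this vertex-extraction step, the following Python/NumPy sketch intersects the supporting lines of three segments pairwise and labels the resulting points clockwise. The helper names are ours, segments are assumed to be endpoint pairs, and the clockwise test assumes image coordinates with the y axis pointing down.

```python
import numpy as np

def line_of(seg):
    """Homogeneous line (a, b, c), with a*x + b*y + c = 0, through the segment endpoints."""
    (x1, y1), (x2, y2) = seg
    return np.array([y1 - y2, x2 - x1, x1 * y2 - x2 * y1])

def intersect(l1, l2):
    """Intersection point of two homogeneous lines (None if they are parallel)."""
    p = np.cross(l1, l2)
    return None if abs(p[2]) < 1e-9 else p[:2] / p[2]

def triangle_vertices(sa, sb, sc):
    """Pairwise intersections of the three supporting lines, labeled clockwise.

    Returns None for degenerate groups (parallel lines); very small or very
    acute triangles are assumed to be discarded afterwards, as in section 2."""
    la, lb, lc = line_of(sa), line_of(sb), line_of(sc)
    pts = [intersect(la, lb), intersect(lb, lc), intersect(lc, la)]
    if any(p is None for p in pts):
        return None
    p0, p1, p2 = pts
    # Signed area test; with y pointing down, a positive value means the points
    # already run clockwise on screen, otherwise two of them are swapped.
    area = (p1[0] - p0[0]) * (p2[1] - p0[1]) - (p1[1] - p0[1]) * (p2[0] - p0[0])
    return (p0, p1, p2) if area > 0 else (p0, p2, p1)
```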
Figure 4. Matching between a source and a destination triangle. Depending on the order of vertex assignment, three different registration transformations can be obtained.
Thus (figure 4), if a source triangle, $T = \{p_0, p_1, p_2\}$, is matched to a destination triangle, $T' = \{p'_0, p'_1, p'_2\}$, there are three possible vertex assignments:

$$\left.\begin{aligned} \{p_0, p_1, p_2\} &\rightarrow \{p'_0, p'_1, p'_2\} \\ \{p_0, p_1, p_2\} &\rightarrow \{p'_1, p'_2, p'_0\} \\ \{p_0, p_1, p_2\} &\rightarrow \{p'_2, p'_0, p'_1\} \end{aligned}\right\} \;\Longleftrightarrow\; \{p_0, p_1, p_2\} \rightarrow \{p'_k,\, p'_{(k+1)\bmod 3},\, p'_{(k+2)\bmod 3}\}; \quad k = 0, 1, 2 \qquad (1)$$
Each of them defines a transformation from the source to the destination image. The transformation can be determined from a system of equations for the vertices of the triangles. For the assignment $\{p_0, p_1, p_2\} \rightarrow \{p'_0, p'_1, p'_2\}$ these equations are:

$$p'_i = T_L \cdot p_i; \quad i = 0, 1, 2 \qquad (2)$$
where $p_i = (x_i, y_i)$ and $p'_i = (x'_i, y'_i)$ are the image coordinates of the source and destination vertices, respectively, and $T_L$ is the affine transformation, characterized by the following matrix:

$$T_L = \begin{pmatrix} m_0 & m_1 & m_2 \\ m_3 & m_4 & m_5 \\ 0 & 0 & 1 \end{pmatrix} \qquad (3)$$
Solving (2) for each of the three vertex assignments in (1), three transformations, T0, T1 and T2, are obtained.
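In code, one vertex assignment leads to two small linear systems, one per row of $T_L$ in (3). A sketch under our own assumptions (vertices given as (x, y) pairs, function names ours):

```python
import numpy as np

def affine_from_triangles(src, dst):
    """Affine matrix T_L (eq. 3) mapping the source vertices p_i to the destination
    vertices p'_i, for one fixed vertex assignment of eq. (1).

    src, dst : sequences of three (x, y) vertices.
    """
    A = np.array([[x, y, 1.0] for (x, y) in src])    # source vertices in homogeneous form
    bx = np.array([x for (x, _) in dst])             # destination x' coordinates
    by = np.array([y for (_, y) in dst])             # destination y' coordinates
    m0, m1, m2 = np.linalg.solve(A, bx)              # first row of T_L
    m3, m4, m5 = np.linalg.solve(A, by)              # second row of T_L
    return np.array([[m0, m1, m2], [m3, m4, m5], [0.0, 0.0, 1.0]])

def candidate_transforms(src_tri, dst_tri):
    """The three transformations T_0, T_1, T_2 given by the cyclic assignments of eq. (1)."""
    dst = np.asarray(dst_tri, dtype=float)
    return [affine_from_triangles(src_tri, np.roll(dst, -k, axis=0)) for k in range(3)]
```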
In principle, all resulting transformations can be considered as candidates for the registration of the input images. However, in most practical cases, limiting values for the registration parameters can be established and used to discard many candidate transformations before their complete evaluation, thus greatly reducing the computing time.
For the calculation of the parameters of a transformation accounting for translation, rotation, skew and scale, the following structure has been considered:
$$T_L = \begin{pmatrix} 1 & 0 & t_x \\ 0 & 1 & t_y \\ 0 & 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} \cos\alpha & -\sin\alpha & 0 \\ \sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} 1 & s_k & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \cdot \begin{pmatrix} s_x & 0 & 0 \\ 0 & s_y & 0 \\ 0 & 0 & 1 \end{pmatrix} = \begin{pmatrix} s_x\cos\alpha & s_y(s_k\cos\alpha - \sin\alpha) & t_x \\ s_x\sin\alpha & s_y(s_k\sin\alpha + \cos\alpha) & t_y \\ 0 & 0 & 1 \end{pmatrix} \qquad (4)$$
where $t_x$ and $t_y$ denote translation, $s_x$ and $s_y$ scale, $s_k$ skew, and $\alpha$ the rotation angle.
From expressions (3) and (4), the following relations are determined:
$$t_x = m_2; \quad t_y = m_5; \quad s_x = \sqrt{m_0^2 + m_3^2}; \quad s_y = \frac{m_0 m_4 - m_1 m_3}{\sqrt{m_0^2 + m_3^2}}; \quad \tan\alpha = \frac{m_3}{m_0}; \quad s_k = \frac{m_0 m_1 + m_3 m_4}{m_0 m_4 - m_1 m_3} \qquad (5)$$
The limiting values for these parameters used in the registration of the cases presented in this paper are:
$$0.7 < s_x, s_y < 1.3; \quad -0.2\ \mathrm{rad} < \alpha < 0.2\ \mathrm{rad}; \quad -0.15 < s_k < 0.15; \quad -64.0\ \mathrm{pixels} < t_x, t_y < 64.0\ \mathrm{pixels} \qquad (6)$$
If no reflections are allowed, which is the usual case in image registration, the values of sx and sy will always be positive. This is equivalent to the preservation of the clockwise order in the triangle matching process.
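The recovery of the registration parameters via (5) and the rejection test against the limits (6) might be implemented as in the following sketch (parameter names follow the text; degenerate, zero-scale matrices are assumed to have been discarded earlier):

```python
import numpy as np

def decompose_affine(T):
    """Translation, scale, rotation and skew of an affine matrix T_L, following eq. (5)."""
    m0, m1, m2 = T[0]
    m3, m4, m5 = T[1]
    det = m0 * m4 - m1 * m3
    sx = np.hypot(m0, m3)
    return {
        "tx": m2, "ty": m5,
        "sx": sx,
        "sy": det / sx,
        "alpha": np.arctan2(m3, m0),        # tan(alpha) = m3 / m0
        "sk": (m0 * m1 + m3 * m4) / det,
    }

def within_limits(p):
    """Parameter limits of eq. (6); transformations outside these ranges are discarded."""
    return (0.7 < p["sx"] < 1.3 and 0.7 < p["sy"] < 1.3
            and -0.2 < p["alpha"] < 0.2
            and -0.15 < p["sk"] < 0.15
            and -64.0 < p["tx"] < 64.0 and -64.0 < p["ty"] < 64.0)
```

Note that a reflection (negative determinant $m_0 m_4 - m_1 m_3$) yields a negative $s_y$ and is therefore rejected by the scale limits, consistent with the preservation of the clockwise vertex order.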
Additional techniques are used to reduce the complexity of the calculations. For example, any two collinear segments define the same straight line and will therefore generate the same triangle when grouped with any other two (non-collinear) segments, as shown in figure 5. The initial source and destination segment sets can thus be processed to eliminate these line redundancies.
Figure 5. Redundancies in triangle generation. Segments s1 and s2 form the same triangle when grouped with segments s3 and s4. Frequently, collinear segments, such as s1 and s2 in the figure, will correspond to the same fragmented edge of the scene. The elimination of one of these collinear segments prevents the unnecessary evaluation of redundant triangles.
4. Evaluation of a registration transformation
The triangle matching process yields a set of candidate registration transformations. The next step is to identify the best transformation for the registration of the input images. Two main approaches can be envisaged. One is to apply accumulation techniques to the set of candidates in parameter space, on the assumption that the most frequent transformation should be the best one (as proposed in reference [28]). However, the memory and computation needs for the management of the required six-dimensional space make this approach impractical.
The other approach, which is the one we have followed, consists of evaluating every individual candidate transformation and selecting the one that maximizes a certain quality criterion. The problem in this case is the definition of an adequate criterion.
Methods based on grey-level or texture information, segment overlapping or coincidence of segment vertices cannot be used for the registration of images acquired with different sensors. Therefore, a specific matching quality function based on the correspondence of the individual straight lines containing the segments has been developed. This function, $Q(s_i, s_j)$, represents the matching quality of the transformed source segment $s_i$ with the destination segment $s_j$, and is described in detail in the next section.
Under these conditions, the global matching quality of the source and destination segment sets is defined as the sum of the matching qualities of every transformed source segment $s_i$ with the destination segment set:

$$Q(S_1, S_2) = \sum_{s_i \in S_1} Q(s_i, S_2) \qquad (7)$$
where $S_1$ is the transformed source segment set, $S_2$ is the destination segment set, and $Q(s_i, S_2)$ is defined as:

$$Q(s_i, S_2) = \max\left(\left\{\, Q(s_i, s_j) \mid s_j \in S_2 \,\right\}\right) \qquad (8)$$
The quality of a match between a transformed source segment $s_i$ and a destination segment $s_j$, $Q(s_i, s_j)$, is defined as a function of the distance from $s_i$ to $s_j$, and is described in the following section.
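A sketch of the evaluation of one candidate transformation according to (7) and (8), under the assumptions that segments are endpoint pairs and that `segment_match_quality` implements the $Q(s_i, s_j)$ of section 5 (both names are ours):

```python
import numpy as np

def transform_segment(T, seg):
    """Apply an affine transformation T (3x3 matrix) to both segment endpoints."""
    return [tuple((T @ np.array([x, y, 1.0]))[:2]) for (x, y) in seg]

def global_quality(T, src_segments, dst_segments, segment_match_quality):
    """Q(S1, S2) of eqs. (7)-(8): every transformed source segment contributes the best
    quality it achieves against any destination segment."""
    total = 0.0
    for s in src_segments:
        s_t = transform_segment(T, s)
        total += max((segment_match_quality(s_t, d) for d in dst_segments), default=0.0)
    return total
```

The candidate transformation with the highest $Q(S_1, S_2)$ over all surviving candidates is kept as the affine registration transformation.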
5. Segment matching quality function
Our distance function relies on the angular difference between the segments and on the distance from the center of one of them to the straight line defined by the other (see figure 6), and is defined as:
$$D_{\rho\theta}(s_i, s_j) = \left[ \frac{1}{2} \left( \left( \frac{d_\rho(s_i, s_j)}{d_{\rho\max}} \right)^2 + \left( \frac{d_\theta(s_i, s_j)}{d_{\theta\max}} \right)^2 \right) \right]^{1/2} \qquad (9)$$
where $d_\rho(s_i, s_j)$ is the distance from the center of $s_i$ to the line defined by $s_j$, $d_\theta(s_i, s_j)$ is the angular distance (the angular difference between $s_i$ and $s_j$), and $d_{\rho\max}$ and $d_{\theta\max}$ are the maximum allowed values for $d_\rho$ and $d_\theta$, respectively, used for normalization purposes. The resulting normalized distance will be 0 if the transformed source segment is completely contained within the line defined by the destination segment. If the distance is greater than 1, it is assumed that the segments do not match.
For the images of this paper, the following values have been used for distance normalization:
$$d_{\rho\max} = 5\ \mathrm{pixels}; \quad d_{\theta\max} = 0.2\ \mathrm{rad} \qquad (10)$$
From this distance function, the matching quality factor $Q(s_i, s_j)$ is defined as:

$$Q(s_i, s_j) = \begin{cases} 1 - D_{\rho\theta}(s_i, s_j) & \text{if } D_{\rho\theta}(s_i, s_j) \leq 1 \\ 0 & \text{if } D_{\rho\theta}(s_i, s_j) > 1 \end{cases} \qquad (11)$$
This quality function is also used for the detection of collinear segments within the same set. As noted in section 3, this is important for the elimination of redundant triangles. In this case, two segments $s_1$ and $s_2$ are considered to be collinear if $Q(s_1, s_2)$ and $Q(s_2, s_1)$ are both above 0. For the evaluation of these quantities the same limiting values as in (10) have been used.
Figure 6. The distance of the transformed source segment $s_i$ to the destination segment $s_j$ depends on the angular difference between the segments, $d_\theta(s_i, s_j)$, and on the distance from the center of $s_i$ to the straight line defined by $s_j$, $d_\rho(s_i, s_j)$.
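A direct transcription of (9)-(11) in Python/NumPy, with the normalization values of (10) as defaults. The endpoint-pair segment representation and the folding of the angular difference to the range $[0, \pi/2]$ (segments treated as undirected lines) are our assumptions:

```python
import numpy as np

def segment_match_quality(si, sj, d_rho_max=5.0, d_theta_max=0.2):
    """Matching quality Q(s_i, s_j) of eq. (11), based on the normalized distance (9)."""
    (ax, ay), (bx, by) = si
    (cx, cy), (dx, dy) = sj
    # Angular distance d_theta: difference between the segment orientations,
    # folded so that directions 180 degrees apart are considered equivalent.
    theta_i = np.arctan2(by - ay, bx - ax)
    theta_j = np.arctan2(dy - cy, dx - cx)
    d_theta = abs(theta_i - theta_j) % np.pi
    d_theta = min(d_theta, np.pi - d_theta)
    # d_rho: distance from the center of s_i to the infinite line supporting s_j.
    mx, my = (ax + bx) / 2.0, (ay + by) / 2.0
    nx, ny = cy - dy, dx - cx                    # normal of the line through s_j
    norm = np.hypot(nx, ny)
    d_rho = abs(nx * (mx - cx) + ny * (my - cy)) / norm if norm else float("inf")
    # Normalized distance (9) and matching quality (11).
    D = np.sqrt(0.5 * ((d_rho / d_rho_max) ** 2 + (d_theta / d_theta_max) ** 2))
    return max(0.0, 1.0 - D)
```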
6. Higher order transformations
Once the best global affine transformation has been determined through maximization of (7), a higher order correction may be required to account for optical or scanning distortions and other non-linear effects that may be present in the images. If the sensor or camera configuration parameters are known or if some automatic correction techniques (such as in [36]) are applied, the images may be corrected before their registration and an affine transformation will be sufficient. However, in the general case, no additional information apart from the raw images should be expected.
Our approach takes the affine transformation as the starting point of an iterative refinement process that produces a second order transformation. At every iteration $k$, the second order function $F_k$ transforms every source point $(x, y)$ into a destination point $(x', y')$ in the following way:
$$x' = a_0^k + a_1^k x + a_2^k y + a_3^k xy + a_4^k x^2 + a_5^k y^2$$
$$y' = b_0^k + b_1^k x + b_2^k y + b_3^k xy + b_4^k x^2 + b_5^k y^2 \qquad (12)$$
From the expressions (2) and (3) for the affine transformation the following relations result:
$$x' = m_0 x + m_1 y + m_2$$
$$y' = m_3 x + m_4 y + m_5 \qquad (13)$$
Then, comparing (12) and (13) the following starting values for the second order coefficients are obtained:
$$a_0^0 = m_2, \quad a_1^0 = m_0, \quad a_2^0 = m_1, \quad a_3^0 = a_4^0 = a_5^0 = 0$$
$$b_0^0 = m_5, \quad b_1^0 = m_3, \quad b_2^0 = m_4, \quad b_3^0 = b_4^0 = b_5^0 = 0 \qquad (14)$$
At every iteration $k$, the second order transformation $F_k$ is applied to the set of source segments. Thus, for every source segment, $s = \{(x_1, y_1), (x_2, y_2)\}$, a new transformed segment, $s' = \{(x'_1, y'_1), (x'_2, y'_2)\}$, results. If this transformed segment matches a destination segment $t$, the straight lines containing $s'$ and $t$ should be very close.

In order to calculate $F_{k+1}$, we force $s'$ to lie within the line defined by $t$, by projecting $s'$ onto $t$. The coordinates of the resulting segment, $s'' = \{(x''_1, y''_1), (x''_2, y''_2)\}$, are substituted in (12) to obtain:
$$x''_1 = a_0^{k+1} + a_1^{k+1} x_1 + a_2^{k+1} y_1 + a_3^{k+1} x_1 y_1 + a_4^{k+1} x_1^2 + a_5^{k+1} y_1^2$$
$$x''_2 = a_0^{k+1} + a_1^{k+1} x_2 + a_2^{k+1} y_2 + a_3^{k+1} x_2 y_2 + a_4^{k+1} x_2^2 + a_5^{k+1} y_2^2 \qquad (15)$$

and

$$y''_1 = b_0^{k+1} + b_1^{k+1} x_1 + b_2^{k+1} y_1 + b_3^{k+1} x_1 y_1 + b_4^{k+1} x_1^2 + b_5^{k+1} y_1^2$$
$$y''_2 = b_0^{k+1} + b_1^{k+1} x_2 + b_2^{k+1} y_2 + b_3^{k+1} x_2 y_2 + b_4^{k+1} x_2^2 + b_5^{k+1} y_2^2 \qquad (16)$$
Repeating this for all the matched segments, two sets of equations result (one for the $a_i^{k+1}$ and the other for the $b_i^{k+1}$), which are solved by singular value decomposition [34] to obtain the set of coefficients $C_{k+1}$. This solution could be taken directly as $F_{k+1}$. However, to avoid abrupt changes in the evolution of $F$, we blend the new transformation with the previous one using:

$$F_{k+1} = \chi \cdot F_k + (1 - \chi) \cdot C_{k+1} \qquad (17)$$

where $\chi$ is a number between 0 and 1 (we use $\chi = 0.9$).
This new transformation $F_{k+1}$ is then evaluated by the method described in section 4. If the matching value $Q(S_1, S_2)$ associated with the new transformation $F_{k+1}$ is greater than that of the previous one, $F_k$, the new transformation is selected as the best second order approximation, and a new iteration is performed. The iterative process continues until the change in the matching value is smaller than a specified threshold. At every step, a maximization method, such as the simplex method [34], can be applied to accelerate the convergence of the process.
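One iteration of this refinement might be sketched as follows, under our own assumptions: segments are endpoint pairs, the matched (source segment, destination segment) pairs are already available, and the two systems of equations are solved with NumPy's SVD-based least-squares routine; all names are ours.

```python
import numpy as np

def poly2_terms(x, y):
    """Monomial vector (1, x, y, xy, x^2, y^2) used by the second order model (12)."""
    return np.array([1.0, x, y, x * y, x * x, y * y])

def apply_poly2(F, x, y):
    """Evaluate the second order transformation F = (a, b) at a source point."""
    a, b = F
    t = poly2_terms(x, y)
    return a @ t, b @ t

def project_onto_line(p, q1, q2):
    """Orthogonal projection of point p onto the line through q1 and q2."""
    p, q1, q2 = map(np.asarray, (p, q1, q2))
    d = q2 - q1
    return q1 + d * ((p - q1) @ d) / (d @ d)

def refine_step(F, match_pairs, chi=0.9):
    """One iteration of the second order refinement, eqs. (15)-(17)."""
    rows, tx, ty = [], [], []
    for s, t in match_pairs:                     # s, t: matched source / destination segments
        for (x, y) in s:                         # both endpoints of the source segment
            xp, yp = apply_poly2(F, x, y)        # endpoint transformed by the current F_k
            xpp, ypp = project_onto_line((xp, yp), t[0], t[1])   # forced onto the line of t
            rows.append(poly2_terms(x, y))
            tx.append(xpp)
            ty.append(ypp)
    A = np.array(rows)
    a_new = np.linalg.lstsq(A, np.array(tx), rcond=None)[0]     # a_i^{k+1} of eq. (15)
    b_new = np.linalg.lstsq(A, np.array(ty), rcond=None)[0]     # b_i^{k+1} of eq. (16)
    a, b = F
    # Blending of eq. (17): F_{k+1} = chi * F_k + (1 - chi) * C_{k+1}.
    return chi * a + (1 - chi) * a_new, chi * b + (1 - chi) * b_new
```

The initial transformation $F_0$ would be built from the affine coefficients as in (14), i.e. $a = (m_2, m_0, m_1, 0, 0, 0)$ and $b = (m_5, m_3, m_4, 0, 0, 0)$.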
7. Results
The registration method described in the paper has been used mainly for IR-visual registration, although, in principle, images from any other spectral bands can be used if the edge information content is sufficiently dense in both images. Figure 7 shows the results of the application of the registration procedure to the synthetic image pair of figure 1. Figures 8 and 9 show the corresponding registration results for two visual-IR image pairs. These images have been acquired with a standard video camera and a thermal IR camera working in the 3 to 5 µm spectral band. The size of the synthetic images is 256×256 pixels, while the real visual and IR images are 320×240 pixels. The registered images have been superimposed using a fusion method based on Gabor wavelet decomposition [35].
The lack of physical correspondence between the features of the visual and IR images makes the evaluation of the registration results for natural images difficult. For this reason, a fusion procedure has been used to visually compare the registered source image with the destination image. Image fusion is well suited to visual evaluation, due to its sensitivity to misregistration effects. The obtained results do not show any artifacts, meaning that the input images have been correctly registered.
The registration computing time depends mainly on the number of segments, which determines the number of candidate transformations. On the other hand, for a precise registration, a high number of homogeneously distributed segments is required. Thus, a compromise between computing time and registration accuracy has to be found. In our case, for the images of figures 8 and 9, about 70 segments were extracted in every image, generating approximately 15000 transformations and requiring 5 minutes of computing time on a Silicon Graphics Indigo2 workstation with an R10000 processor and 128 MB of memory. It must be pointed out that the registration of larger images does not necessarily require a higher computational effort, as the number of extracted segments can be limited at the beginning of the process. In any case, the homogeneity of the distribution of the segments is more important than their absolute number, in order to avoid convergence to non-optimal global transformations. The registration and fusion of the synthetic image pair took just a few seconds.
Figure 7. Registration results for the synthetic image pair of figure 1. (a) synthetic IR image, (b) synthetic visual image, (c) registered IR image, and (d) fusion of the visual and registered IR images.
Figure 8. Registration of an IR-visual image pair. (a) IR image, (b) visual image, (c) and (d) detected segments. Only the highlighted segments have been used for triangle extraction. (e) IR image after affine registration, and (f) its fusion with the visual image. (g) Second order registration of the IR image, and (h) fusion result. The images show some buildings of the SENER facility in Madrid.
Figure 9. Registration of an IR-visual image pair. (a) IR image, (b) visual image, (c) registered IR image, and (d) fusion of the registered IR and visual images.
8. Conclusions
A new general registration method for images of different spectral bands has been presented in this paper. As grey-levels or textures cannot be used for the registration of images from separate spectral bands, a segment-based registration method has been developed. The method first determines a set of possible registration transformations, which is then evaluated to obtain the best transformation for the registration of the input images.
The determination of possible registration transformations is done by matching triangles formed by grouping segments in the source and destination images. For the evaluation of these candidate transformations, the source and destination sets of segments are matched using a quality function that does not rely on overlapping measures, as coincidence of edges in images of different spectral bands cannot be assumed.
The proposed registration method does not need a priori information about the images to register, and it can restore scale, rotation, skew and translation variations of arbitrary size. However, to reduce computation time, limits for these registration parameters can be easily included.
The core of the registration process determines the best global affine transformation for the registration of the input images, which can then be refined to obtain higher order corrections. For the registration of the IR-visual images presented in the paper, second order polynomials have been used, although, in principle, any transformation model can be applied.
The absence of strict spatial coincidence in the features present on images from different spectral bands makes the evaluation of the precision of the registration method a difficult task, although, in principle, the precision of the registration method is only limited by that of the edge extractor. In this paper, a fusion process has been used for subjective evaluation of the registration procedure, due to the highly evident artifacts that appear in a fused image as a consequence of misregistration errors.
The method described in this paper is also suitable for the registration of SAR and visual-infrared imagery. However, additional specific techniques are needed to deal with the poor quality of the edges extracted from SAR images, and these are outside the scope of the general method described here.
Our current work includes the development of a quantitative criterion for the evaluation of the registration method. The criterion is based on the same Gabor wavelet decomposition technique used for the image fusion process. This method will provide an adequate measure of the quality of our registration procedure.
Acknowledgments
This work was partially supported by the Ministerio de Educación y Ciencia of Spain (grant IN92-D50837758).
The authors would like to thank the reviewers for their interesting and constructive comments on the first version of the paper.
References
1.
L. G. Brown, “A Survey of Image Registration Techniques,” ACM Computing Surveys 24(4), 325-376 (1992).
2.
G.-Q. Wei, W. Brauer, G. Hirzinger, “Intensity- and Gradient-based Stereo Matching Using Hierarchical Gaussian Basis Functions,” IEEE PAMI 20(11), 1143-1160 (1998).
3.
J. Sato, R. Cipolla, “Image Registration using Multi-Scale Texture Moments,” Image and Vision Computing 13(5), 341-353 (1995).
4.
J. Flusser, T. Suk, “Degraded Image Analysis: An Invariant Approach,” IEEE PAMI 20(6), 590-603 (1998).
5.
H. Saji, H. Nakatani, “Energy-Minimization-Based Approach to Image Modification for Assembling Subpictures,” Opt. Eng. 37(3), 984-988 (1998).
6.
C. Miravet, J. Santamaría, E. Coiras, J. Ureña, “Generación Automática de Mosaicos. Aplicación de Técnicas de Fusión de Imágenes,” Revista de Teledetección, December (1998).
7.
M. S. Alam, J. G. Bognar, S. Cain, B. J. Yasuda, “Fast Registration and Reconstruction of Aliased Low-Resolution Frames by Use of a Modified Maximum-Likelihood Approach,” Applied Optics 37(8), 1319-1328 (1998).
8.
J. D. Dunlop, G. H. Holder, E. F. LeDrew, “Image Matching using Spatial Frequency Signatures,” Proceedings of the IGARSS’89 and Canadian Symposium on Remote Sensing, Vol. 3, 1273-1276 (1989).
9.
R. Venkateswarlu, B. N. Chatterjee, “Analysis of Image Registration Algorithms for Infrared Images,” SPIE Vol. 1699, 442-451 (1992).
10.
J.-P. Djamdji, A. Bijaoui, R. Maniere, “Geometrical Registration of Images: The Multiresolution Approach,” Photogrammetric Engineering & Remote Sensing 59(5), 645-653 (1993).
11.
B. S. Reddy, B. N. Chatterji, “An FFT-Based Technique for Translation, Rotation and Scale-Invariant Image Registration,” IEEE Transactions on Image Processing 5(8), 1266-1271 (1996).
12.
J.-W. Hsieh, H.-Y. M. Liao, K.-C. Fan, M.-T. Ko, Y.-P. Hung, “Image Registration Using a New Edge-Based Approach,” Computer Vision and Image Understanding 67(2), 112-130 (1997).
13.
H. H. Li, Y.-T. Zhou, “Automatic visual/IR image registration,” Opt. Eng. 35(2), 391-400 (1996).
14.
N. Sang, T. Zhang, “Rotation and Scale Change Invariant Point Pattern Relaxation Matching by the Hopfield Neural Network,” Opt. Eng. 36(12), 3378-3385 (1997).
15.
Q. Zheng, R. Chellapa, “A Computational Vision Approach to Image Registration,” IEEE Transactions on Image Processing 2(3), 311-326 (1993).
16.
D. Allison, M. J. A. Zemerly, J.-P. Muller, “Automatic Seed Point Generation for Stereo Matching and Multi-Image Registration,” IGARSS’91 Proceedings of the 11th Annual International Geoscience and Remote Sensing Symposium, Vol. 4, 2417-2421 (1991).
17.
W.-H. Wang, Y.-C. Chen, “Image Registration by Control Points Pairing Using the Invariant Properties of Line Segments,” Pattern Recognition Letters 18, 269-281 (1997).
18.
G. Medioni, R. Nevatia, “Segment-based Stereo Matching,” Comp. Vis. Graph. Img. Proc. 31, 2-18 (1985).
19.
Z. Zhang, “Estimating Motion and Structure from Correspondences of Line Segments between Two Perspective Images,” IEEE PAMI 17(12), 1129-1139 (1995).
20.
P. Gros, O. Bournez, E. Boyer, “Using Local Planar Geometric Invariants to Match and Model Images of Line Segments,” Comp. Vis. Img. Und. 69(2), 135-155 (1998).
21.
S. Sull, N. Ahuja, “Integrated Matching and Segmentation of Multiple Features in Two Views,” Comp. Vis. Img. Und. 62(3), 279-297 (1995).
22.
C. J. Taylor, D. J. Kriegman, “Structure and Motion from Line Segments in Multiple Images,” IEEE PAMI 17(11), 1021-1032 (1995).
23.
R. Horaud, T. Skordas, “Stereo Correspondence Through Feature Grouping and Maximal Cliques,” IEEE PAMI 11(11), 1168-1180 (1989).
24.
D. P. Huttenlocher, S. Ullman, “Recognizing Solid Objects by Alignment with an Image,” International Journal of Computer Vision 5(2), 195-212 (1990).
25.
Y. Zhang, J. J. Gerbrands, “Method for Matching General Stereo Planar Curves,” Image and Vision Computing 13(8), 645-655 (1995).
26.
Behzad Kamgar-Parsi, Behrooz Kamgar-Parsi, “Matching Sets of 3D Line Segments with Application to Polygonal Arc Matching,” IEEE PAMI 19(10), 1090-1099 (1997).
27.
S. O. Mason, K. W. Wong, “Image Alignment by Line Triples,” Photogrammetric Engineering and Remote Sensing 58(9), 1329-1334 (1992).
28.
L. N. Kanal, B. A. Lambird, D. Lavine, G. C. Stockman, “Digital Registration of Images from Similar and Dissimilar Sensors,” Proceedings of the International Conference on Cybernetics and Society, 347-351 (1981).
29.
I. Fermin, A. Imiya, “Planar Motion Detection by Randomized Triangle Matching,” Pattern Recognition Letters 18, 741-749 (1997).
30.
J. F. Canny, “A Computational Approach to Edge Detection,” IEEE PAMI 8(6), 679-698 (1986).
31.
T.-H. Yu, S. K. Mitra, “Efficient Approach for the Detection of Diffuse Edges,” Opt. Eng. 35(12), 3522-3530 (1996).
32.
W. K. Pratt, Digital Image Processing, John Wiley & Sons, (1991).
33.
R. M. Haralick, L. G. Shapiro, “Segmentation of Arcs into Simple Segments,” Chap. 11 in Computer and Robot Vision Volume I, Addison-Wesley, pp. 563-565 (1972).
34.
W. H. Press, S. A. Teukolsky, W. T. Vetterling, B. P. Flannery, Numerical Recipes in C. Second Edition, Cambridge University Press (1994).
35.
J. Santamaría, M. T. Gomez, “Visible-IR Image Fusion Based on Gabor Wavelets Decomposition,” EOS Annual Meetings Digest Vol. 3, pp. 97-98 (1993).
36.
B. Prescott, G. F. McLean, “Line-Based Correction of Radial Lens Distortion,” Graphical Models and Image Processing 59(1), 39-47 (1997).
Enrique Coiras received his MS in Physics from the Complutense University of Madrid in 1993. Since 1994 he has been pursuing a PhD degree in Optics in the Electro-optics section of SENER Ingeniería y Sistemas, under a grant from the Ministerio de Educación y Ciencia of Spain. He has published several papers on digital image processing. His current research interests are image registration and image morphology.
Javier Santamaría received his MS and PhD degrees in Physics from the University of Zaragoza, Spain, in 1969 and 1973, respectively. In 1972 he joined the Instituto de Optica, Madrid, where he worked as a research scientist in the fields of image evaluation, image processing and vision. He was head of the Imaging and Vision Department until 1988, when he joined the SENER Aerospace Division, where he is responsible for the Electro-optics group. He regularly publishes scientific papers and presents communications at international meetings. He was president of the Spanish Optical Society (1990-92) and a member of the Advisory Committee of the European Optical Society (1993-96). His current research interests include electro-optical system performance, automatic target recognition, and image enhancement and restoration.
Carlos Miravet received his MS in Physics from Madrid Complutense University in 1986. Since then, he has held posts in national and international industrial and research and development institutions related to the fields of electro-optics and image processing. Currently he is a senior engineer in the Electro-optics section of the Aerospace Division of SENER Ingeniería y Sistemas S.A.