Eighth Mexican International Conference on Current Trends in Computer Science
An Accurate Image Registration Method Using a Projective Transformation Model

Felix Calderon and Leonardo Romero
Universidad Michoacana de San Nicolas de Hidalgo
Santiago Tapia 403, Col. Centro, Morelia, Michoacan 58000, Mexico
[email protected], [email protected]
Abstract

Given a model structure, an input image and a reference image, the parametric registration task is to find a parameter vector (of the model) that transforms the input image into the reference image. This paper reviews the general projective model and develops a new method of computing the set of image derivatives needed. It shows that the classical method reported in the literature is accurate only under translation, and that it fails when other transformations are involved. Analytical derivations and experimental tests show the superior accuracy of the new method over the classical method.

1 Introduction

Image registration is the process of overlaying two or more images of the same scene taken at different times, from different viewpoints, or by different sensors [16]. If the transformation model that overlays the input image onto another image, called the reference image, has a small parameter vector, the task is called parametric image registration [4] (e.g., a single parameter vector for the whole image). The literature offers plenty of parametric registration techniques. Some of them are based on spatiotemporal energy [1] [2] [7], other methods are based on correlation [8], [9], others are based on the minimization of the Sum of Squared Differences (SSD) [10], [14] (also named radial basis function in [16]), and others are based on optical flow [3]. This paper presents a new method to compute the set of image derivatives needed by an SSD technique to solve the registration problem, considering a general projective transformation model. The new method is derived analytically and is more general than the classical method reported in the literature. It is also more accurate and complete than the classical method, because the classical method is strictly valid only under translations, and it gives wrong estimates when the right model involves rotations, affine or general projective transformations.

The rest of this paper is organized as follows. Section 2 describes the registration problem as an optimization problem and Section 3 introduces bilinear interpolation to compute accurate transformations of images. Section 4 introduces the general projective transformation. Section 5 shows two methods to compute the set of image derivatives needed, the new method and the classical method reported in the literature. Section 6.1 presents the well known Levenberg-Marquardt non-linear optimization method [12], commonly used in many computer vision problems. Experimental results are shown in Section 7 using the two methods of computing derivatives. Results confirm the accuracy of the new method of computing derivatives. Finally, some conclusions are given in Section 8.
0-7695-2899-6/07 $25.00 © 2007 IEEE DOI 10.1109/ENC.2007.27
2 The Parametric Registration Problem

Let I(i, j) denote a gray-level image (typically an integer value from 0 to 255) for integer coordinates i, j. I(i, j) gives the intensity value of the pixel associated with position i, j (see Figure 1), and Ir(i, j) denotes the reference image. If the parameter vector is denoted by Θ, the parametric registration problem is to find a vector Θ that minimizes the Sum of Squared Differences E between the transformed input image and the reference image. E can be expressed in the following way:

E(I(Θ), Ir) = Σ_{(i,j)∈Ir} (I(x(Θ, i, j), y(Θ, i, j)) − Ir(i, j))²    (1)

For instance, given a position x = i + 1 and y = j, the pixel Ir(i, j) is going to be compared with pixel I(i + 1, j).
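As an illustration, the SSD error of eq. (1) can be sketched as follows. This is a minimal sketch, not the paper's implementation: the warp function and the crude nearest-neighbor sampling are placeholder assumptions (the paper replaces the latter with bilinear interpolation in Section 3).

```python
import numpy as np

def ssd_error(I, Ir, warp):
    """Sum of Squared Differences between a warped input image I and a
    reference image Ir (eq. (1)).  warp(i, j) returns the position (x, y)
    in I that corresponds to pixel (i, j) of Ir."""
    E = 0.0
    rows, cols = Ir.shape
    for i in range(rows):
        for j in range(cols):
            x, y = warp(i, j)
            xi, yi = int(round(x)), int(round(y))   # crude nearest-neighbor sampling
            # positions that fall outside I are treated as black (value 0)
            v = I[xi, yi] if (0 <= xi < I.shape[0] and 0 <= yi < I.shape[1]) else 0.0
            E += (v - Ir[i, j]) ** 2
    return E

# the pure translation x = i + 1, y = j from the example above
I = np.arange(16, dtype=float).reshape(4, 4)
Ir = np.zeros_like(I)
Ir[:3, :] = I[1:, :]          # Ir is I shifted one row upwards
print(ssd_error(I, Ir, lambda i, j: (i + 1, j)))   # -> 0.0
```

With the correct warp the error is exactly zero; with the identity warp it is not, which is what the minimization in the following sections exploits.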
Figure 1. Computing transformed image It from I
This situation is equivalent to having a transformed input image, It, where all pixels of the input image have moved to the next position upwards (see Figure 1). The error E between It and Ir compares each pixel of both images at the same position (i, j). With the right Θ, images It and Ir should be very similar and E should have a minimum value. The new image It(i, j) can be computed by

It(i, j) = I(x(Θ, i, j), y(Θ, i, j))    (2)

If x(Θ, i, j) and y(Θ, i, j) are outside of the image I, a common strategy is to assign a zero value, which represents a black pixel. But what happens when x(Θ, i, j) and y(Θ, i, j) have real values instead of integer values? Remember that the image I(x, y) has valid values only when x and y are integers. An inaccurate method to solve this problem is to use their nearest integer values. The next section presents a much better method.

3 Bilinear Interpolation

Figure 2. Using Bilinear Interpolation

If xi and xf are the integer and fractional parts of x, respectively, and yi and yf the integer and fractional parts of y, Figure 2 illustrates the bilinear interpolation method [5] to find I(xi + xf, yi + yf), given the four nearest pixels to position (xi + xf, yi + yf): I(xi, yi), I(xi + 1, yi), I(xi, yi + 1), I(xi + 1, yi + 1) (image values at particular positions are represented by vertical bars in Figure 2). First, two linear interpolations are used to compute two new values, I(xi, yi + yf) and I(xi + 1, yi + yf), and then another linear interpolation is used to compute the desired value I(xi + xf, yi + yf) from the new computed values:

I(xi, yi + yf)      = (1 − yf) I(xi, yi)      + yf I(xi, yi + 1)
I(xi + 1, yi + yf)  = (1 − yf) I(xi + 1, yi)  + yf I(xi + 1, yi + 1)
I(xi + xf, yi + yf) = (1 − xf) I(xi, yi + yf) + xf I(xi + 1, yi + yf)    (3)

Using bilinear interpolation, a smooth transformed image is computed. The next section introduces the general projective transformation, which maps lines in the input image to lines in the transformed image [6].

Figure 3. Linear Transformations in Homogeneous Coordinates: (a) Original, (b) Projective
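A minimal sketch of eq. (3) follows; the clamping at the image border is our assumption, not part of the original formulation.

```python
import numpy as np

def bilinear(I, x, y):
    """Bilinear interpolation of image I at a real-valued position (x, y), eq. (3)."""
    # integer parts, clamped so the four neighbors stay inside the image (an assumption)
    xi = min(max(int(np.floor(x)), 0), I.shape[0] - 2)
    yi = min(max(int(np.floor(y)), 0), I.shape[1] - 2)
    xf, yf = x - xi, y - yi                                  # fractional parts
    a = (1 - yf) * I[xi, yi] + yf * I[xi, yi + 1]            # I(xi,   yi + yf)
    b = (1 - yf) * I[xi + 1, yi] + yf * I[xi + 1, yi + 1]    # I(xi+1, yi + yf)
    return (1 - xf) * a + xf * b                             # I(xi+xf, yi + yf)

I = np.array([[0.0, 10.0],
              [20.0, 30.0]])
print(bilinear(I, 0.5, 0.5))   # midpoint of the four pixels -> 15.0
```

At integer positions the interpolation reproduces the pixel values exactly, so the transformed image agrees with eq. (2) wherever the warp lands on the integer grid.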
4 Projective Transformation

In this section we briefly review the projective transformation, which is illustrated in Figure 3. In order to have a uniform frame of reference for this transformation, homogeneous coordinates are used [6]. A point x, y in a plane is represented in homogeneous coordinates (HC) by a vector of 3 coordinates, [xh, yh, wh]^T, and the two representations are related by x = xh/wh and y = yh/wh. In HC, a vector v and κv (κ ∈ R) represent the same point. An important advantage of using homogeneous coordinates is that original and transformed positions, as well as compositions of transformations, are related by matrix multiplications [6]. The general projective transformation is the most general transformation that maps lines into lines, and it generalizes the affine transformation. Transformed points [xh, yh, wh]^T and original points [i, j, 1]^T are related by eq. (4) below.
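A small numerical sketch of the homogeneous-coordinate machinery, using the transformation Θ1 = [1, 0.3, 0, 0, 1, 0, 0.001, 0.001] from Experiment 1 (Section 7) written as a 3×3 matrix H with θ8 = 1; the sample point (i, j) = (2, 3) is our own choice for illustration:

```python
import numpy as np

# Theta1 from Experiment 1, arranged as the matrix H (theta8 = 1)
H = np.array([[1.0,   0.3,   0.0],
              [0.0,   1.0,   0.0],
              [0.001, 0.001, 1.0]])

p = np.array([2.0, 3.0, 1.0])      # the point (i, j) = (2, 3) in homogeneous coordinates
xh, yh, wh = H @ p                 # [xh, yh, wh]^T = H [i, j, 1]^T
x, y = xh / wh, yh / wh            # back to Cartesian coordinates

# a scaled vector kappa * [xh, yh, wh]^T represents the same point
xh2, yh2, wh2 = (2.0 * H) @ p
assert abs(xh2 / wh2 - x) < 1e-12 and abs(yh2 / wh2 - y) < 1e-12

print(round(x, 4), round(y, 4))    # -> 2.8856 2.9851
```

The assertion demonstrates the scale invariance mentioned above: H and κH induce the same point mapping.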
[xh, yh, wh]^T = H [i, j, 1]^T,   with   H = [ θ0  θ1  θ2
                                               θ3  θ4  θ5
                                               θ6  θ7  θ8 ]    (4)

The matrix H for this transformation has nine elements (actually only eight parameters, because in HC proportional vectors represent the same point). An example of this transformation, where parallelism is not preserved, is shown in Figure 3 (b). In most interesting cases the element θ8 = 1, so from now on we consider this case. Given two images, the input image and the reference image, the goal of the registration problem is to find the matrix H that transforms the input image into another image similar to the reference image. The next section deals with this objective.

5 Finding the parameter vector

In order to compute a parameter vector Θ (the eight parameters of H), eq. (1) can be rewritten as follows

E(I(Θ), Ir) = Σ_{(i,j)∈Ir} eij²,   where   eij = I(x(Θ, i, j), y(Θ, i, j)) − Ir(i, j)    (5)

Given a vector Θ = [θ0, θ1, ..., θp] of p + 1 parameters, E will have a minimum value when ∂E/∂θk = 0 (k = 0, ..., p):

∂E/∂θk = 2 Σ_{(i,j)∈Ir} eij (∂eij/∂θk) = 0    (6)

For a given parameter θk, and using the chain rule from differential calculus, the desired derivative can be computed as follows,

∂eij/∂θk = ∂I(x(Θ, i, j), y(Θ, i, j))/∂θk    (7)

∂eij/∂θk = (∂I(x(Θ,i,j), y(Θ,i,j))/∂x(Θ,i,j)) (∂x(Θ,i,j)/∂θk) + (∂I(x(Θ,i,j), y(Θ,i,j))/∂y(Θ,i,j)) (∂y(Θ,i,j)/∂θk)    (8)

Considering a projective transformation (eq. (4)) we have

x(Θ, i, j) = xh/wh = (θ0 i + θ1 j + θ2) / (θ6 i + θ7 j + 1)
y(Θ, i, j) = yh/wh = (θ3 i + θ4 j + θ5) / (θ6 i + θ7 j + 1)    (9)

The matrix of derivatives of x and y with respect to the θk is

M(Θ, i, j) = (1/wh) [  i    0
                       j    0
                       1    0
                       0    i
                       0    j
                       0    1
                      −ix  −iy
                      −jx  −jy ]    (10)

with wh = θ6 i + θ7 j + 1, so for equation (6) the term ∂eij/∂θk can be rewritten as

Jij(Θ) = [∂eij/∂θ0, ∂eij/∂θ1, ..., ∂eij/∂θp]^T = M(Θ, i, j) ∇I(x(Θ, i, j), y(Θ, i, j))    (11)

where ∇I(x, y) is the gradient image given by

∇I(x(Θ, i, j), y(Θ, i, j)) = [∂I/∂x, ∂I/∂y]^T    (12)

Finally, equation (6) for k = 0, 1, ..., 7 can be presented as

F = [∂E/∂θ0, ∂E/∂θ1, ..., ∂E/∂θ7]^T = 2 Σ_{(i,j)∈Ir} Jij(Θ) eij    (13)

Subsection 5.1 describes some methods to compute image derivatives and subsection 5.2 presents two methods of computing the term not yet described in equation (8): ∇I(x(Θ, i, j), y(Θ, i, j)), the gradient image.

5.1 Computing derivatives of images

Derivatives of images can be approximated by the following central-difference approximations [15]:

∂I(i,j)/∂i = (I(i+1, j) − I(i−1, j)) / 2
∂I(i,j)/∂j = (I(i, j+1) − I(i, j−1)) / 2    (14)
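The central-difference approximation of eq. (14) can be sketched as follows (interior pixels only; border handling is left out of this illustration):

```python
import numpy as np

def central_diff(I):
    """Central-difference derivatives of image I, eq. (14), for interior pixels."""
    di = np.zeros_like(I)
    dj = np.zeros_like(I)
    di[1:-1, :] = (I[2:, :] - I[:-2, :]) / 2.0   # dI/di along rows
    dj[:, 1:-1] = (I[:, 2:] - I[:, :-2]) / 2.0   # dI/dj along columns
    return di, dj

# a ramp image I(i, j) = 3 i + 5 j has constant derivatives 3 and 5
i, j = np.mgrid[0:5, 0:5]
I = 3.0 * i + 5.0 * j
di, dj = central_diff(I)
print(di[2, 2], dj[2, 2])   # -> 3.0 5.0
```

On a linear ramp the approximation is exact, which makes it a convenient sanity check for any derivative filter.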
More accurate approximations consider more pixels in the neighborhood [15]:

∂I(i,j)/∂i = (−I(i+2, j) + 8 I(i+1, j) − 8 I(i−1, j) + I(i−2, j)) / 12
∂I(i,j)/∂j = (−I(i, j+2) + 8 I(i, j+1) − 8 I(i, j−1) + I(i, j−2)) / 12    (15)

An even better method is called the derivative of Gaussian filter [11] [13], and it produces much smaller noise responses than the previous ones. The Gaussian function and its derivative are given, respectively, by

g(t) = (1 / (√(2π) σ)) e^(−t²/(2σ²))
g'(t) = −(t / (√(2π) σ³)) e^(−t²/(2σ²))    (16)

The computation of image derivatives is accomplished as a pair of 1-D convolutions with filters obtained by sampling the continuous Gaussian function and its derivative,

∂I(i,j)/∂i = I(i,j) ∗ g'(i) ∗ g(j) = Σ_{k=−w/2}^{w/2} Σ_{l=−w/2}^{w/2} I(k, l) g'(i − k) g(j − l)
∂I(i,j)/∂j = I(i,j) ∗ g(i) ∗ g'(j) = Σ_{k=−w/2}^{w/2} Σ_{l=−w/2}^{w/2} I(k, l) g(i − k) g'(j − l)    (17)

where σ (in pixel units) controls the Gaussian form, and usually w = 3σ. If the window defined by w is bigger, then more pixels in the neighborhood are considered.

5.2 Computing the gradient image

Here we present two methods to compute the terms not previously described in eq. (8):

∂I(x(Θ,i,j), y(Θ,i,j)) / ∂x(Θ,i,j)    (18)
∂I(x(Θ,i,j), y(Θ,i,j)) / ∂y(Θ,i,j)    (19)

5.2.1 Classical Method. Using approximate derivatives of the transformed image

From eq. (2), an approximation is given by

∂I(x(Θ,i,j), y(Θ,i,j)) / ∂x(Θ,i,j) ≈ ∂It(i,j)/∂i
∂I(x(Θ,i,j), y(Θ,i,j)) / ∂y(Θ,i,j) ≈ ∂It(i,j)/∂j

Because this approximation is reported in many papers, we name it the classical method. ∂It(i,j)/∂i and ∂It(i,j)/∂j are the derivatives of the transformed image.

5.2.2 A New Method. Using derivatives of the transformed image

The classical method considers

It(i, j) = I(x(Θ, i, j), y(Θ, i, j))    (20)

∂I(x(Θ, i, j), y(Θ, i, j)) / ∂x(Θ, i, j) = ∂It(i, j)/∂i    (21)

and also the analogous identity for y and j. The first one (20) is correct, but not the second one (21), because increments in x do not necessarily correspond to the same increments in i (and the same argument holds for y and j). The right derivatives can be computed using the chain rule, as follows:

∂I(x,y)/∂x = (∂I/∂i)(∂i/∂x) + (∂I/∂j)(∂j/∂x) = (∂It(i,j)/∂i)(∂i/∂x) + (∂It(i,j)/∂j)(∂j/∂x)
∂I(x,y)/∂y = (∂I/∂i)(∂i/∂y) + (∂I/∂j)(∂j/∂y) = (∂It(i,j)/∂i)(∂i/∂y) + (∂It(i,j)/∂j)(∂j/∂y)    (22)

In matrix notation we can rewrite the last equation as follows,

[ ∂I(x,y)/∂x ]   [ ∂i/∂x  ∂j/∂x ] [ ∂It(i,j)/∂i ]
[ ∂I(x,y)/∂y ] = [ ∂i/∂y  ∂j/∂y ] [ ∂It(i,j)/∂j ]    (23)

∇I(x, y) = N(Θ, i, j) ∇It(i, j)    (24)

To compute the derivatives of i and j with respect to x and y (the elements of matrix N), we can rewrite the projective transformation (eq. (9)) in the following form:

(x θ6 − θ0) i + (x θ7 − θ1) j = θ2 − x
(y θ6 − θ3) i + (y θ7 − θ4) j = θ5 − y    (25)

This system of equations is solved using the well-known Cramer's rule from linear algebra, and its solutions for i and j are given as

i = ((θ4 − θ5 θ7) x + (θ2 θ7 − θ1) y + (θ1 θ5 − θ2 θ4)) / ((θ3 θ7 − θ4 θ6) x + (θ1 θ6 − θ0 θ7) y + (θ0 θ4 − θ1 θ3))    (26)

j = ((θ5 θ6 − θ3) x + (θ0 − θ2 θ6) y + (θ2 θ3 − θ0 θ5)) / ((θ3 θ7 − θ4 θ6) x + (θ1 θ6 − θ0 θ7) y + (θ0 θ4 − θ1 θ3))    (27)

From these results, the elements of matrix N are given by

N(Θ, i, j) = φ [ −(θ7 y − θ4)    (θ6 y − θ3)
                  (θ7 x − θ1)   −(θ6 x − θ0) ]    (28)

where

φ = (θ0 (θ4 − θ5 θ7) − θ1 (θ3 − θ5 θ6) + θ2 (θ3 θ7 − θ4 θ6)) / ((θ3 θ7 − θ4 θ6) x + (θ1 θ6 − θ0 θ7) y + (θ0 θ4 − θ1 θ3))²    (29)

When the projective transformation is an affine transformation (θ6 = 0, θ7 = 0), the matrix N(Θ, i, j) reduces to

N(Θ, i, j) = (1 / (θ0 θ4 − θ1 θ3)) [  θ4  −θ3
                                      −θ1   θ0 ]    (30)

When the transformation is only a translation, Θ = [1, 0, Tx, 0, 1, Ty, 0, 0], matrix N is given by

N(Θ, i, j) = [ 1  0
               0  1 ]    (31)

In this case the new method and the classical method are the same. In other words, the classical method is only valid under translations.
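As a sanity check (our illustration, not from the paper), the analytic matrix N of eqs. (28)–(29) can be compared against numerical derivatives of the inverse of the forward mapping of eq. (9). Here we use the Θ2 transformation of Experiment 2 (Section 7) and an arbitrarily chosen pixel (i, j) = (5, 7):

```python
import numpy as np

t = [0.9659, 0.2588, 0.0, -0.2588, 0.9659, 0.0, 0.0005, 0.0]  # theta0..theta7 (Theta2)

def forward(i, j):
    """Projective transformation, eq. (9)."""
    wh = t[6] * i + t[7] * j + 1.0
    return (t[0] * i + t[1] * j + t[2]) / wh, (t[3] * i + t[4] * j + t[5]) / wh

def N_analytic(x, y):
    """Matrix N of eqs. (28)-(29): [[di/dx, dj/dx], [di/dy, dj/dy]]."""
    den = (t[3] * t[7] - t[4] * t[6]) * x + (t[1] * t[6] - t[0] * t[7]) * y \
          + (t[0] * t[4] - t[1] * t[3])
    phi = (t[0] * (t[4] - t[5] * t[7]) - t[1] * (t[3] - t[5] * t[6])
           + t[2] * (t[3] * t[7] - t[4] * t[6])) / den ** 2
    return phi * np.array([[-(t[7] * y - t[4]), (t[6] * y - t[3])],
                           [(t[7] * x - t[1]), -(t[6] * x - t[0])]])

# central-difference Jacobian of the forward map: [[dx/di, dx/dj], [dy/di, dy/dj]]
i0, j0, h = 5.0, 7.0, 1e-6
x0, y0 = forward(i0, j0)
J = np.array([[(forward(i0 + h, j0)[0] - forward(i0 - h, j0)[0]) / (2 * h),
               (forward(i0, j0 + h)[0] - forward(i0, j0 - h)[0]) / (2 * h)],
              [(forward(i0 + h, j0)[1] - forward(i0 - h, j0)[1]) / (2 * h),
               (forward(i0, j0 + h)[1] - forward(i0, j0 - h)[1]) / (2 * h)]])

# by eq. (23), N is the transposed inverse of the forward Jacobian
print(np.allclose(N_analytic(x0, y0), np.linalg.inv(J).T, atol=1e-5))   # -> True
```

The same check with θ6 = θ7 = 0 reproduces the affine case of eq. (30).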
6 Minimization Procedure

Let Fk(Θⁿ) = ∂E/∂θk (k = 0, ..., p), let Θⁿ be an initial vector of parameter values, and Θⁿ⁺¹ an improved vector of parameter values, where θkⁿ⁺¹ = θkⁿ + δθkⁿ. Now the problem is to find a vector of increments δθkⁿ (k = 0, ..., p) such that Fk(Θⁿ⁺¹) = 0 (k = 0, ..., p). To compute the vector of increments, the functions Fk can be approximated using the Taylor expansion with first-order derivatives:

Fk(Θⁿ⁺¹) = Fk(Θⁿ) + (∂Fk(Θⁿ)/∂θ1) δθ1ⁿ + (∂Fk(Θⁿ)/∂θ2) δθ2ⁿ + ... + (∂Fk(Θⁿ)/∂θp) δθpⁿ    (32)

Next we can compute the vector of increments by setting Fk(Θⁿ⁺¹) = 0 and solving the system of p equations of the form:

(∂Fk(Θⁿ)/∂θ1) δθ1ⁿ + (∂Fk(Θⁿ)/∂θ2) δθ2ⁿ + ... + (∂Fk(Θⁿ)/∂θp) δθpⁿ = −Fk(Θⁿ)    (33)

In matrix form we have,

A ∆Θ = B    (34)

where

A = [ ∂F1(Θⁿ)/∂θ1  ∂F1(Θⁿ)/∂θ2  ...  ∂F1(Θⁿ)/∂θp
      ∂F2(Θⁿ)/∂θ1  ∂F2(Θⁿ)/∂θ2  ...  ∂F2(Θⁿ)/∂θp
      ...           ...           ...  ...
      ∂Fp(Θⁿ)/∂θ1  ∂Fp(Θⁿ)/∂θ2  ...  ∂Fp(Θⁿ)/∂θp ]    (35)

∆Θ = [δθ1ⁿ, δθ2ⁿ, ..., δθpⁿ]^T    (36)

B = [−F1(Θⁿ), −F2(Θⁿ), ..., −Fp(Θⁿ)]^T    (37)

Once this system of equations is solved, a new parameter vector Θⁿ⁺¹ can be computed, and using the same procedure another parameter vector Θⁿ⁺² is estimated, and so on. The iterative process ends when all increments are very small. If full derivatives Arc(Θⁿ) = ∂Fr(Θⁿ)/∂θc are computed from eq. (6), we have Newton's method [12]. But if we discard second-order derivatives, the method is called Gauss-Newton [12]; it is commonly used due to its simple form and because it guarantees a positive semidefinite Hessian matrix:

Arc(Θⁿ) = ∂Fr(Θⁿ)/∂θc = 2 Σ_{(i,j)∈Ir} (∂eij/∂θr)(∂eij/∂θc)    (38)

According to the definition given by eq. (11), the Hessian for Gauss-Newton is given by

A(Θⁿ) = 2 Σ_{(i,j)∈Ir} [Jij(Θⁿ)] [Jij(Θⁿ)]^T    (39)

Unfortunately, Newton or Gauss-Newton does not always reach a minimum value for the error E, because we use only first-order derivatives in the Taylor expansion. More robust methods, like the one presented in section 6.1, extend Newton or Gauss-Newton to avoid the case when E(Θⁿ⁺¹) > E(Θⁿ) and to get better results.

6.1 The Levenberg-Marquardt Method

The Levenberg-Marquardt method (LM) [12] is a non-linear iterative technique specifically designed for minimizing functions which have the form of a sum of squared functions, like E. At each iteration, the increment of parameters, ∆Θ, is computed by solving the following linear matrix equation:

(A + Λ) ∆Θ = B    (40)

where matrices A and B are defined by eq. (34), Λ is a diagonal matrix, Λ = diag(λ, λ, ..., λ), and λ is a parameter which is allowed to vary at each iteration. The process starts with the input image, I, the reference image, Ir, and initial values for the parameters θk (k = 0, ..., p). The LM algorithm is as follows:

1. Pick a modest value for λ, say λ = 0.001, an initial value Θ⁰, and set n = 0.

2. Compute the transformed image Itⁿ (eq. (2)), applying bilinear interpolation to improve the quality of the image.

3. Compute the total error, E(I(Θⁿ), Ir) (eq. (1)).

4. Compute:

   • M(Θⁿ, i, j) and N(Θⁿ, i, j) using equations (10) and (28)
   • the gradient image ∇Itⁿ(i, j), applying a derivative of Gaussian filter (eq. (17))
   • the matrix Jij(Θⁿ) = M(Θⁿ, i, j) N(Θⁿ, i, j) ∇Itⁿ(i, j)
   • the gradient vector B(Θⁿ) and the Hessian matrix A(Θⁿ), applying equations (13) and (39)

5. Solve the linear system of equations (eq. (40)).

6. Calculate E(I(Θⁿ + ∆Θⁿ), Ir). Here Θⁿ + ∆Θⁿ represents the new vector of parameters considering the computed increments.

7. If E(I(Θⁿ + ∆Θⁿ), Ir) > E(I(Θⁿ), Ir), increase λ by a factor of 10, and go to step 5. If λ grows very large, it means that there is no way to improve the solution Θⁿ, and the algorithm ends.

8. If E(I(Θⁿ + ∆Θⁿ), Ir) < E(I(Θⁿ), Ir), decrease λ by a factor of 10, set Θⁿ⁺¹ ← Θⁿ + ∆Θⁿ and n ← n + 1, and go to step 2.

Note that when λ = 0, the LM method is a Gauss-Newton (or Newton) method, and when λ tends to infinity, ∆Θ turns to the so-called steepest-descent direction and the size of the increments in ∆Θ tends to zero.
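The λ-adaptation loop of steps 1–8 can be sketched generically as follows. This is our illustration, applied to a toy sum-of-squares problem rather than to images, with a numerical Jacobian standing in for the analytic Jij of eq. (11); the convergence thresholds are assumptions:

```python
import numpy as np

def lm_minimize(residuals, theta0, max_iter=100):
    """Generic Levenberg-Marquardt loop following steps 1-8 above."""
    lam, theta = 0.001, np.asarray(theta0, dtype=float)        # step 1
    for _ in range(max_iter):
        e = residuals(theta)                                   # steps 2-3: E = sum(e^2)
        J = np.empty((e.size, theta.size))                     # step 4: numerical Jacobian
        h = 1e-7
        for c in range(theta.size):
            d = np.zeros_like(theta); d[c] = h
            J[:, c] = (residuals(theta + d) - residuals(theta - d)) / (2 * h)
        A, B = 2 * J.T @ J, -2 * J.T @ e                       # eqs. (39) and (13)
        while True:
            delta = np.linalg.solve(A + lam * np.eye(theta.size), B)    # step 5, eq. (40)
            if np.sum(residuals(theta + delta) ** 2) > np.sum(e ** 2):  # step 7
                lam *= 10
                if lam > 1e12:          # lambda grew very large: give up
                    return theta
            else:                                              # step 8
                lam /= 10
                theta = theta + delta
                break
        if np.linalg.norm(delta) < 1e-10:   # all increments are very small
            return theta
    return theta

# toy problem: fit y = a*x + b to noiseless data; the exact answer is a = 2, b = -1
x = np.linspace(0, 1, 20)
y = 2 * x - 1
theta = lm_minimize(lambda t: t[0] * x + t[1] - y, [0.0, 0.0])
print(np.round(theta, 6))
```

In the paper's setting the residuals are the eij over all pixels of Ir and the Jacobian rows are the Jij of eq. (11); only the outer λ logic is shown faithfully here.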
7 Experimental results

To test the methods previously described, a computer program was built under the Ubuntu Linux operating system, using the C language. All the experiments were run on a PC Pentium 4 at 2.26 GHz, and we used standard routines from the GNU Scientific Library (GSL) to solve the linear system of equations. The PGM image format was selected because of its simplicity to load and save gray-level images with 256 gray levels, from 0 to 255. The PGM format is as follows,

P5{nl}
# CREATOR: The GIMP's PNM Filter Version 1.0{nl}
640 480{nl}
255{nl}
... ...

where P5 means gray-level images (P6 means color images), # starts a comment, 640 and 480 are the width and height of the image respectively, and {nl} is the new-line character. Each I(i, j) is a single byte (an unsigned char in C), and the bytes are ordered from left to right along the first row of pixels, then the second row of pixels, and so on. The next two experiments show the advantage of using the new method to compute the right derivatives.
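A minimal reader for the binary PGM (P5) layout described above can be sketched as follows (our illustration; it handles maxval 255 only and comments inside the header):

```python
import os
import tempfile
import numpy as np

def read_pgm_p5(path):
    """Read a binary PGM (P5) file with maxval 255 into a numpy array."""
    with open(path, "rb") as f:
        data = f.read()
    # collect the 4 header tokens (magic, width, height, maxval), skipping comments
    tokens, pos = [], 0
    while len(tokens) < 4:
        while data[pos:pos + 1].isspace():          # skip whitespace
            pos += 1
        if data[pos:pos + 1] == b"#":               # a comment runs to end of line
            pos = data.index(b"\n", pos) + 1
            continue
        end = pos
        while not data[end:end + 1].isspace():
            end += 1
        tokens.append(data[pos:end])
        pos = end
    assert tokens[0] == b"P5" and int(tokens[3]) == 255
    width, height = int(tokens[1]), int(tokens[2])
    pos += 1                                        # single whitespace before the raster
    raster = np.frombuffer(data[pos:pos + width * height], dtype=np.uint8)
    return raster.reshape(height, width)            # rows of pixels, top to bottom

# round-trip demo with a tiny 3x2 example file
buf = b"P5\n# CREATOR: demo\n3 2\n255\n" + bytes([0, 1, 2, 3, 4, 5])
with tempfile.NamedTemporaryFile(delete=False, suffix=".pgm") as tmp:
    tmp.write(buf)
img = read_pgm_p5(tmp.name)
os.unlink(tmp.name)
print(img.shape, img[1, 2])
```

The raster follows the row-major ordering described above, so `img[i, j]` corresponds to I(i, j).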
7.1 Experiment 1 with Synthetic Transformations

The first experiment considers a transformation given by Θ1 = [1, 0.3, 0, 0, 1, 0, 0.001, 0.001]^T. Θ1 is applied to Figure 4(a) and the result is shown in Figure 4(b). Table 1 shows the number of iterations and the Euclidean distance between Θ1 and the vector Θ computed by the classical method and by the new method (with complete derivatives). The numerical results for the Euclidean distance show that the new method is able to find a much more accurate transformation.

Figure 4. Experiment 1: (a) Origin image, (b) Target image

Table 1. Comparative results using different derivative methods for experiment 1

Method          | Iterations | |Θ1 − Θ|
Classical       | 68         | 0.0157
The new method  | 144        | 1.7711e-006

7.2 Experiment 2 with Synthetic Transformations

The second experiment considers a transformation (a rotation of 15 degrees and a small perspective transformation) given by Θ2 = [0.9659, 0.2588, 0, −0.2588, 0.9659, 0, 0.0005, 0]^T. The origin image and the transformed image are shown in Figure 5(a) and (b), respectively. Table 2 shows the results for both methods. In this case, the new method is able to find the right transformation while the classical method fails. In Figure 5(c) and (d) we can see the difference between the target image and the transformed image obtained by the classical and the new method, respectively. Figure 5(d) shows how good the transformation found by the new method is. A perfect transformation would give a constant gray-level difference image, because the target image and the transformed image would have no differences.

Table 2. Comparative results using different derivative methods for experiment 2

Method          | Iterations | |Θ2 − Θ|
Classical       | 75         | 23.5202
The new method  | 178        | 1.3891e-006

Figure 5. Experiment 2: (a) Origin image, (b) Target image, (c) Difference between the target image and the image given by the classical method, (d) Difference between the target image and the image given by the new method

8 Conclusions

The new method to compute image derivatives is an improved version of the classical method reported in the literature, because it takes into account the exact analytical derivatives needed, considering a general projective transformation. The classical method coincides with the new method under translations; therefore, the classical method is only accurate under translations. Experiments confirm the poor estimation computed by the classical method when rotations, scaling, or general projective transformations are involved.

References

[1] E. H. Adelson and J. R. Bergen. The extraction of spatiotemporal energy in human and machine vision. In IEEE Workshop on Visual Motion, pages 151-156, Charleston, USA, 1986.
[2] H. Barman, L. Haglund, H. Knutsson, and S. Beauchemin. Estimation of velocity, acceleration and disparity in time sequences. In IEEE Workshop on Visual Motion, pages 151-156, Princeton, USA, 1986.
[3] J. L. Barron, D. J. Fleet, and S. S. Beauchemin. Performance of optical flow techniques. International Journal of Computer Vision, 12(1):43-77, 1994.
[4] F. Calderon and J. L. Marroquin. A new algorithm for computing optical flow and his application to image registration. Computacion y Sistemas, 6(3):213-226, 2003.
[5] O. Faugeras. Three-Dimensional Computer Vision. The MIT Press, 1993.
[6] R. Hartley and A. Zisserman. Multiple View Geometry in Computer Vision. Cambridge University Press, second edition, 2000.
[7] D. J. Heeger. Model for the extraction of image flow. Journal of the Optical Society of America A, 4:1455-1471, 1987.
[8] S. Kaneko, I. Murase, and S. Igarashi. Robust image registration by increment sign correlation. Pattern Recognition, 35:2223-2234, 2002.
[9] S. Kaneko, Y. Satoh, and S. Igarashi. Using selective correlation coefficient for robust image registration. Pattern Recognition, 36:1165-1173, 2003.
[10] S. H. Lai and B. C. Vemuri. Reliable and efficient computation of optical flow. International Journal of Computer Vision, 29(2):87-105, 1998.
[11] Y. Ma, S. Soatto, J. Kosecka, and S. S. Sastry. An Invitation to 3-D Vision: From Images to Geometric Models. Springer, 2004.
[12] J. Nocedal and S. J. Wright. Numerical Optimization. Springer, 1999.
[13] B. M. ter Haar Romeny, editor. Geometry-Driven Diffusion in Computer Vision. Kluwer Academic Publishers, 1994.
[14] R. Szeliski and J. Coughlan. Spline-based image registration. Technical Report 1, Harvard University, Department of Physics, Cambridge, MA 02138, 1994.
[15] E. Trucco and A. Verri. Introductory Techniques for 3-D Computer Vision. Prentice Hall, 1998.
[16] B. Zitova and J. Flusser. Image registration methods: a survey. Image and Vision Computing, 21(11):977-1000, 2003.