Least-squares transformations between point-sets

Kalle Rutanen¹, Germán Gómez-Herrero², Sirkka-Liisa Eriksson¹, and Karen Egiazarian³

¹ Department of Mathematics, Tampere University of Technology
² Netherlands Institute for Neuroscience
³ Department of Signal Processing, Tampere University of Technology

[email protected], [email protected], [email protected], [email protected]

Abstract. This paper derives formulas for least-squares transformations between point-sets in $\mathbb{R}^d$. We consider affine transformations, with optional constraints for linearity, scaling, and orientation. We base the derivations hierarchically on reductions, and use trace manipulation to achieve short derivations. For the unconstrained problems, we provide a new formula which maximizes the orthogonality of the transform matrix.

Keywords: least-squares, transformation, point-set, rank-deficient
1 Background
Let $P = [p_1, \dots, p_m] \in \mathbb{R}^{d \times m}$ and $R = [r_1, \dots, r_n] \in \mathbb{R}^{d \times n}$. We shall consider affine functions of the form $f : \mathbb{R}^d \to \mathbb{R}^d$:

$$f(x) = QSx + b, \qquad (1)$$

where $Q \in \mathbb{R}^{d \times d}$, $Q^T Q = I$, $S \in \mathbb{R}^{d \times d}$, $S^T = S$, and $b \in \mathbb{R}^d$. Given $P$ and $R$, and possible additional constraints for $Q$, $S$, and $b$, the problem is to find $Q$, $S$, and $b$ such that the following least-squares error $E$ is minimized:

$$E(Q, S, b) = \sum_{i=1}^{m} \sum_{j=1}^{n} w_{ij} \, \|QSp_i + b - r_j\|^2, \qquad (2)$$

where $W \in \mathbb{R}^{m \times n}$, $W_{ij} = w_{ij} \ge 0$, $W \ne 0$, and the norm is induced by the standard inner product $\langle x, y \rangle$. We shall call this problem unpaired. Later we will define a simpler paired problem, to which the unpaired problem can be reduced. Since multiplying $E$ by a positive constant does not change the solution of the minimization problem, we may assume without loss of generality that $\sum_{i=1}^{m} \sum_{j=1}^{n} w_{ij} = 1$. However, we shall derive the results without using this assumption.

Let $A \in \mathbb{R}^{d \times d}$ be an arbitrary matrix, and $A = UDV^T$ its singular value decomposition. Let $Q = UV^T$ and $S = VDV^T$. Then $Q^T Q = I_d$, $S^T = S$, and $A = QS$. Therefore every matrix can be decomposed as required in the problem description. In the rest of the paper we will abbreviate $\langle x \rangle = \langle x, x \rangle$, use the Frobenius norm for matrices, and denote $1_n = [1, \dots, 1]^T \in \mathbb{R}^{n \times 1}$.
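The decomposition above is the polar decomposition, computed through the SVD. As a concrete illustration, here is a minimal NumPy sketch (the function name is our choice, not from the paper):

```python
import numpy as np

def orthogonal_symmetric_split(A):
    """Decompose A = Q S with Q orthogonal and S symmetric,
    via the SVD A = U D V^T: Q = U V^T, S = V D V^T."""
    U, d, Vt = np.linalg.svd(A)
    Q = U @ Vt
    S = Vt.T @ (d[:, None] * Vt)   # V diag(d) V^T
    return Q, S

# Sanity check on a random matrix.
A = np.random.randn(3, 3)
Q, S = orthogonal_symmetric_split(A)
assert np.allclose(Q @ S, A)
assert np.allclose(Q.T @ Q, np.eye(3)) and np.allclose(S, S.T)
```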
1.1 Constraints and reductions
Let $D \in \mathbb{R}^{d \times d}$ be diagonal, and $s \in \mathbb{R}$. Several variants of the original problem can be devised depending on the chosen constraints:

– $Q$: orthogonal ($Q^T Q = I_d$), or identity ($Q = I_d$).
– $S$: scaling ($S^T = S$), conformal scaling ($S = sI_d$), or rigid ($S = I_d$).
– $b$: affine ($b$ arbitrary), or linear ($b = 0$).
– $\det(QS)$: non-oriented ($\det(QS)$ arbitrary), or oriented ($\det(QS) \ge 0$ or $\det(QS) \le 0$).

Different combinations of these constraints give 24 variants of the problem. Fortunately, many of the variants can be reduced to each other. In particular,

– an affine problem can be reduced to a linear problem,
– an unpaired problem can be reduced to a paired problem,
– a conformal problem can be reduced to a rigid problem, and
– the solution to an oriented problem can be obtained quickly from the solution to the corresponding non-oriented problem.

1.2 Special cases
The following problems are special cases of this problem:

– Let $n = m$ and $W = I_n$. Then the problem is to minimize

$$\sum_{i=1}^{n} \|QSp_i + b - r_i\|^2, \qquad (3)$$

i.e. to find a least-squares transformation between paired point-sets. We shall call this problem paired.

– Let $n = m$ and $W$ be diagonal. Then the problem is to minimize

$$\sum_{i=1}^{n} w_{ii} \|QSp_i + b - r_i\|^2, \qquad (4)$$

i.e. to find a weighted least-squares transformation between paired point-sets.

– Let $p : \mathbb{R}^d \to \mathbb{R}$ be defined by $p(x) = \frac{1}{m} \sum_{i=1}^{m} N(x; p_i, \sigma^2)$, where $N$ is the probability density function of a multivariate normal distribution with mean $p_i$ and covariance $\sigma^2 I_d$ (this is called a Gaussian mixture model). Let $w_{ij} = P(i \mid r_j)$ be the probability of the point $r_j$ being generated by the $i$:th component of the mixture model. Then minimizing $E$ results in a transformation which maximizes the likelihood of the points in $R$ being generated by the mixture model. In particular, this is used as a component in the Coherent Point Drift algorithm; a sketch of computing such weights follows this list.
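For illustration, these posterior weights can be computed directly from the mixture densities; since the components share the weight $1/m$ and the covariance $\sigma^2 I_d$, all constant factors cancel in the normalization. A minimal sketch (the function name and vectorization are ours, assuming $P \in \mathbb{R}^{d \times m}$ and $R \in \mathbb{R}^{d \times n}$):

```python
import numpy as np

def gmm_weights(P, R, sigma):
    """Posterior probabilities w_ij = P(i | r_j) under the equal-weight
    Gaussian mixture with means p_i and covariance sigma^2 I."""
    # m x n matrix of squared distances ||p_i - r_j||^2.
    d2 = ((P.T[:, None, :] - R.T[None, :, :]) ** 2).sum(-1)
    G = np.exp(-d2 / (2.0 * sigma ** 2))
    return G / G.sum(axis=0, keepdims=True)   # normalize over components i
```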
1.3 Applications
Many algorithms use a least-squares transformation between point-sets as a subalgorithm. In point-pattern matching, see e.g. [17], [16], one is given two unordered point-sets, not necessarily of the same cardinality, and is asked to find a transformation which best matches the point-sets, for some criterion of matching and some class of transformations. In point-set registration, the problem is the same as in point-pattern matching, with the added information that a relatively accurate initial transformation is known, so that local optimization suffices to find a global optimum. In both of these fields of study, least-squares transformations are used either iteratively to get closer to a local optimum, or to improve the transformation for a given candidate matching between the point-sets. The Iterative Closest Point algorithms form a popular family of point-set registration algorithms which use least-squares transformations iteratively [4], [3], [5], [6]. The Coherent Point Drift registration algorithm [10] uses the most general form of least-squares transformation in this paper in its expectation-maximization algorithm. Point-pattern matching and point-set registration, in turn, are applied in stereo image matching, content-based image retrieval, image registration, and shape recognition, and to reconstruct a representation of an object from multiple overlapping measurements, as when scanning statues from multiple viewpoints with range scanners [7].

1.4 Previous work
This section reviews previous work on this problem. Green [2] gives a solution to the paired non-oriented rigid problem in $\mathbb{R}^d$ based on Lagrange multipliers and the inverse square-root of the eigenvalue decomposition of $RP^TPR^T$. The solution requires $RP^TPR^T$ to be invertible, which is an unnecessary assumption. Arun et al. [13] give a solution to the paired non-oriented rigid problem in $\mathbb{R}^3$ based on the singular value decomposition (SVD). While their interest is actually in the oriented problem, they do not show how to proceed if the resulting transformation happens to be a reflection. Instead, they accept that the algorithm might sometimes fail, and give arguments why that should not happen often. Their result generalizes readily to $\mathbb{R}^d$, but they do not do so. Horn [14] gives a solution to the paired oriented rigid problem in $\mathbb{R}^3$ based on quaternions. Because of the use of quaternions, this solution does not generalize to other dimensions. In another paper, Horn et al. [1] give a solution to the non-oriented conformal problem in $\mathbb{R}^3$ using an inverse square-root of an eigenvalue decomposition, which is very similar to Green's approach, but specific to $\mathbb{R}^3$. Walker et al. [9] give a solution to the paired oriented rigid problem in $\mathbb{R}^3$ based on dual quaternions. This algorithm is also specific to $\mathbb{R}^3$. Schönemann [11] gives a solution to the paired non-oriented rigid problem in $\mathbb{R}^d$ based on the eigenvalue decomposition. While mathematically correct, the algorithm in effect computes the SVD in an inefficient and inaccurate manner: it diagonalizes $S^TS$ and $SS^T$ to get the right and left singular vectors of $S$, respectively. Umeyama [15] gives a solution to the paired oriented conformal problem in $\mathbb{R}^d$ using Lagrange multipliers and the SVD. In this case the results correspond to ours, but the derivations are considerably longer. Eggert et al. [12] compare four algorithms for solving the paired non-oriented rigid problem in $\mathbb{R}^3$: those of Arun et al., Horn, Walker et al., and Umeyama. They found that these algorithms behaved very similarly on all measured aspects. The point-set registration algorithm of Myronenko et al. [10] motivates the generalization of the least-squares criterion to the unpaired case. Their results correspond to ours, with a specific matrix of probabilities for $W$.

Along with gathering multiple results together and deriving them in a uniform manner, our original contributions, to the best of our knowledge, are to give a new formula for the linear problem which maximizes the orthogonality of the transform matrix without assuming a full-rank $P$, and to derive the solution to the linear scaling problem.
2 Reduction from affine to linear
We will now derive the reduction from an affine problem to a linear problem. Let $A = QS$. The directional derivative of $E$ with respect to $b$ in direction $\Delta b \in \mathbb{R}^d$ is given by

$$D_b E(A, b) = 2 \left\langle \sum_{i=1}^{m} \sum_{j=1}^{n} w_{ij} (Ap_i + b - r_j), \, \Delta b \right\rangle. \qquad (5)$$

Setting this to zero, and substituting all standard basis vectors one by one for $\Delta b$, implies that at an extremum point it necessarily holds that

$$\sum_{i=1}^{m} \sum_{j=1}^{n} w_{ij} (Ap_i + b - r_j) = 0, \qquad (6)$$

which in turn is equivalent to

$$A\bar{p} + b = \bar{r}, \qquad (7)$$

where

$$\bar{p} = \frac{P W 1_n}{1_m^T W 1_n}, \quad \text{and} \quad \bar{r} = \frac{R W^T 1_m}{1_m^T W 1_n}. \qquad (8)$$

That is, if $b$ is not restricted, then at an extremum point the centroid of $P$ is mapped to the centroid of $R$. Since it can be shown that $E(A, b)$ is convex in $b$, the extremum point is a minimum point. This gives us a way to reduce an affine problem to a linear problem. At a minimum point the error is given by

$$E(A) = \sum_{i=1}^{m} \sum_{j=1}^{n} w_{ij} \left\langle A(p_i - \bar{p}) - (r_j - \bar{r}) \right\rangle. \qquad (9)$$

Thus to solve an affine problem, it is enough to solve a linear problem for the centered point-sets, after which we can compute the optimal $b$ from $b = \bar{r} - A\bar{p}$.
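A minimal NumPy sketch of this reduction (the function name is ours; $P$, $R$, and $W$ are as defined in Section 1):

```python
import numpy as np

def affine_to_linear(P, R, W):
    """Weighted centroids of eq. (8) and the centered point-sets.
    After solving the linear problem for the centered sets,
    recover b = r_bar - A p_bar."""
    total = W.sum()                        # 1_m^T W 1_n
    p_bar = (P @ W.sum(axis=1)) / total    # P W 1_n / (1_m^T W 1_n)
    r_bar = (R @ W.sum(axis=0)) / total    # R W^T 1_m / (1_m^T W 1_n)
    return P - p_bar[:, None], R - r_bar[:, None], p_bar, r_bar
```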
3 Reduction from unpaired to paired
We will now derive the reduction from an unpaired problem to a paired problem. Suppose the problem has already been reduced from affine to linear. Let $A = QS$, and $K \in \mathbb{R}^{m \times n}$, where $k_{ij} = \sqrt{w_{ij}}$, and let $K_{i:}$ denote the $i$:th row of $K$. Writing $\operatorname{diag}(x)$ for the diagonal matrix with the vector $x$ on its diagonal, the reduction is as follows:

$$\begin{aligned}
E(A) &= \sum_{i=1}^{m} \sum_{j=1}^{n} w_{ij} \|Ap_i - r_j\|^2 \\
&= \sum_{i=1}^{m} \sum_{j=1}^{n} \|A k_{ij} p_i - k_{ij} r_j\|^2 \\
&= \sum_{i=1}^{m} \|[A k_{i1} p_i - k_{i1} r_1, \cdots, A k_{in} p_i - k_{in} r_n]\|^2 \\
&= \sum_{i=1}^{m} \|A [k_{i1} p_i, \cdots, k_{in} p_i] - [k_{i1} r_1, \cdots, k_{in} r_n]\|^2 \\
&= \sum_{i=1}^{m} \|A p_i K_{i:} - R \operatorname{diag}(K_{i:})\|^2 \\
&= \sum_{i=1}^{m} \|A P e_i K_{i:} - R \operatorname{diag}(K_{i:})\|^2 \\
&= \sum_{i=1}^{mn} \|A \hat{p}_i - \hat{r}_i\|^2, \qquad (10)
\end{aligned}$$

where

$$\hat{P} = P \, [e_1 K_{1:}, \dots, e_m K_{m:}] \in \mathbb{R}^{d \times (mn)}, \quad \hat{R} = R \, [\operatorname{diag}(K_{1:}), \dots, \operatorname{diag}(K_{m:})] \in \mathbb{R}^{d \times (mn)}, \qquad (11)$$

and $[e_1, \dots, e_m] = I_m$. The problem with this reduction is that these matrices are too big to be practically useful. However, this problem can be avoided by noticing that the matrices that are of actual interest are $\hat{R}\hat{P}^T$ and $\hat{P}\hat{P}^T$. They are given by

$$\hat{P}\hat{P}^T = P \operatorname{diag}(c) P^T, \quad \text{and} \quad \hat{R}\hat{P}^T = R W^T P^T, \qquad (12)$$

where

$$c = W 1_n. \qquad (13)$$

Guided by this fact, we can even find point-sets $\tilde{P}, \tilde{R} \in \mathbb{R}^{d \times m}$ such that $\hat{P}\hat{P}^T = \tilde{P}\tilde{P}^T$ and $\hat{R}\hat{P}^T = \tilde{R}\tilde{P}^T$. They are given by

$$\tilde{P} = P \operatorname{diag}(\sqrt{c}), \quad \text{and} \quad \tilde{R} = R W^T \operatorname{diag}(\sqrt{c})^{-1}. \qquad (14)$$
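A sketch of eq. (14) in NumPy (the function name is ours; the formula assumes every $c_i > 0$, i.e. that each $p_i$ carries some weight):

```python
import numpy as np

def unpaired_to_paired(P, R, W):
    """Equivalent paired point-sets of eq. (14):
    P~ = P diag(sqrt(c)),  R~ = R W^T diag(sqrt(c))^{-1},  c = W 1_n."""
    c = W.sum(axis=1)          # c = W 1_n; assumed strictly positive here
    s = np.sqrt(c)
    return P * s, (R @ W.T) / s
```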
4 Linear problem
We will now derive the solution to the unconstrained linear problem. Let $A = QS$. Suppose the problem has already been reduced from affine to linear, and from unpaired to paired. Then the error is given by

$$\|AP - R\|^2 = \operatorname{tr}\!\left(A^T A P P^T\right) - 2\operatorname{tr}\!\left(A^T R P^T\right) + \operatorname{tr}\!\left(R R^T\right). \qquad (15)$$

The directional derivative of $\|AP - R\|^2$ with respect to $A$ in direction $\Delta A$ is given by

$$D_A \|AP - R\|^2 = 2\operatorname{tr}\!\left(\Delta A^T (A P P^T - R P^T)\right). \qquad (16)$$

Substituting $\Delta A = e_i e_j^T$ for all $i, j \in [1, d]$ shows that at an extremum point it necessarily holds that

$$A P P^T = R P^T. \qquad (17)$$

Since it can be shown that $\|AP - R\|$ is convex with respect to $A$, this is a necessary and sufficient condition for a minimum point. Let

$$P = [U_1, U_2] \begin{bmatrix} D_1 & 0 \\ 0 & 0 \end{bmatrix} [V_1, V_2]^T$$

be the singular value decomposition of $P$, where $U_1 \in \mathbb{R}^{d \times r}$, $U_2 \in \mathbb{R}^{d \times (d-r)}$, $D_1 \in \mathbb{R}^{r \times r}$, $V_1 \in \mathbb{R}^{n \times r}$, $V_2 \in \mathbb{R}^{n \times (n-r)}$, and $r = \operatorname{rank}(P)$. Here we have assumed that the positive singular values come before the zero singular values, so that $D_1$ is invertible. Since the resulting matrix $P$ remains the same, this permutation is without loss of generality. Note that we allow either $D_1$ or the lower right $0$ to have zero width, depending on $r$. Now $P = U_1 D_1 V_1^T$, and

$$\begin{aligned}
A P P^T = R P^T &\Leftrightarrow A U_1 D_1 V_1^T V_1 D_1 U_1^T = R V_1 D_1 U_1^T \\
&\Leftrightarrow A U_1 D_1^2 U_1^T = R V_1 D_1 U_1^T \\
&\Leftrightarrow A U_1 = R V_1 D_1^{-1} \\
&\Leftrightarrow A [U_1, U_2] = [R V_1 D_1^{-1}, M] \\
&\Leftrightarrow A = [R V_1 D_1^{-1}, M] [U_1, U_2]^T, \qquad (18)
\end{aligned}$$
where $M \in \mathbb{R}^{d \times (d-r)}$ is arbitrary. The error at a minimum point is then given by

$$\|AP - R\|^2 = \left\|R(V_1 V_1^T - I_n)\right\|^2, \qquad (19)$$

which is independent of $M$, as expected. We can use this freedom to select additional properties for the optimal $A$. For example, setting $M = 0$ minimizes $\|A\|$ (then $A = RP^+$, where $+$ denotes the Moore-Penrose pseudo-inverse). Since this would lead to a singular $A$, we propose instead to minimize the orthogonality error $\|A^T A - I_d\|$, that is, to maximize the orthogonality of $A$. Let $B = R V_1 D_1^{-1}$. Now

$$\|A^T A - I_d\|^2 = \operatorname{tr}\!\left(M M^T M M^T\right) + 2\operatorname{tr}\!\left(M M^T (B B^T - I_d)\right) + \|B B^T - I_d\|^2. \qquad (20)$$

The directional derivative of $\|A^T A - I_d\|^2$ with respect to $M$ in direction $\Delta M$ is given by

$$D_M \|A^T A - I_d\|^2 = 4\operatorname{tr}\!\left(\Delta M M^T (B B^T - I_d + M M^T)\right). \qquad (21)$$

Substituting $\Delta M = e_i e_j^T$ for all $i \in [1, d]$, $j \in [1, d-r]$ shows that at an extremum point it necessarily holds that

$$(B B^T - I_d + M M^T) M = 0. \qquad (22)$$

Let $M = U_M D_M V_M^T$ be the singular value decomposition of $M$, where $U_M \in \mathbb{R}^{d \times d}$, $D_M \in \mathbb{R}^{d \times (d-r)}$, and $V_M \in \mathbb{R}^{(d-r) \times (d-r)}$. Then

$$(B B^T - I_d + M M^T) M = 0 \;\Leftrightarrow\; \forall i \in [1, d-r] : (D_M)_{ii} = 0 \;\text{ or }\; B B^T (U_M)_{:i} = (U_M)_{:i} (1 - (D_M)_{ii}^2). \qquad (23)$$

That is, either $(U_M)_{:i}$ and $1 - (D_M)_{ii}^2$ are a left-singular vector and its squared singular value, respectively, of $B$, or $(D_M)_{ii} = 0$. Let $B = U_B D_B V_B^T$ be a singular value decomposition of $B$, where $U_B \in \mathbb{R}^{d \times d}$, $D_B \in \mathbb{R}^{d \times r}$, and $V_B \in \mathbb{R}^{r \times r}$. If $(D_M)_{ii} = 0$, then the choice of $(U_M)_{:i}$ does not affect the matrix $M$, and we may as well choose $U_M = U_B$. Then all the solutions to the necessary condition are given by

$$M = U_B D_M V_M^T, \quad \text{where } D_M = \sqrt{\max(I_d - D_B D_B^T, 0)} \, J, \qquad (24)$$

$V_M$ is arbitrary orthogonal, and $J \in \mathbb{R}^{d \times (d-r)}$ is diagonal with zeros or ones, selecting which of the singular vectors should be accepted. Since $D_M$ is thin, $r$ singular vectors are always discarded, and thus the ordering of the singular values in $D_B$ matters. Now $M M^T = U_B D_M D_M^T U_B^T$, and the orthogonality error becomes

$$\begin{aligned}
\|A^T A - I_d\|^2 &= \left\|M M^T + (B B^T - I_d)\right\|^2 \\
&= \left\|D_M D_M^T + (D_B D_B^T - I_d)\right\|^2 \\
&= \sum_{i=1}^{d-r} \left( J_{ii} \max(1 - (D_B)_{ii}^2, 0) + (J_{ii} + (1 - J_{ii}))((D_B)_{ii}^2 - 1) \right)^2 + \sum_{i=d-r+1}^{d} \left((D_B)_{ii}^2 - 1\right)^2 \\
&= \sum_{i=1}^{d-r} \left( J_{ii} \max(0, (D_B)_{ii}^2 - 1) + (1 - J_{ii})((D_B)_{ii}^2 - 1) \right)^2 + \sum_{i=d-r+1}^{d} \left((D_B)_{ii}^2 - 1\right)^2, \qquad (25)
\end{aligned}$$

which is independent of $V_M$; we choose $V_M = I_{d-r}$. From this we see that to minimize the orthogonality error we must choose $J = I_{d \times (d-r)}$. Then

$$\|A^T A - I_d\|^2 = \sum_{i=1}^{d-r} \max(0, (D_B)_{ii}^2 - 1)^2 + \sum_{i=d-r+1}^{d} \left((D_B)_{ii}^2 - 1\right)^2. \qquad (26)$$

For the non-negative $(D_B)_{ii}^2 - 1$, the error contribution is always $((D_B)_{ii}^2 - 1)^2$. For the negative $(D_B)_{ii}^2 - 1$, we would like to discard the error contributions of maximal $((D_B)_{ii}^2 - 1)^2$. Therefore, to minimize the orthogonality error, the singular value decomposition of $B$ should be reordered such that $D_B D_B^T$ ascends from top to bottom. Finally, the optimal matrix $A$ is given by

$$A = [B, M][U_1, U_2]^T = [U_B D_B V_B^T, U_B D_M][U_1, U_2]^T = U_B [D_B V_B^T, D_M][U_1, U_2]^T. \qquad (27)$$
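The construction above translates directly into code. The following NumPy sketch is our reading of eqs. (18), (24), and (27): it orders the diagonal of $D_B D_B^T$ (including its structural zeros) ascending, completes the $d - r$ most deficient directions, and assembles $A = [B, M][U_1, U_2]^T$. The function name and the numerical rank tolerance are our choices, not from the paper:

```python
import numpy as np

def linear_lsq_max_orthogonal(P, R, tol=1e-9):
    """Among the minimizers A of ||A P - R||_F, return the one that
    minimizes the orthogonality error ||A^T A - I||_F (eq. (27)).
    Expects paired, centered P and R of shape (d, n)."""
    d = P.shape[0]
    U, s, Vt = np.linalg.svd(P)                 # P = U diag(s) V^T
    r = int(np.sum(s > s[0] * tol))             # numerical rank of P
    U1, U2 = U[:, :r], U[:, r:]
    B = (R @ Vt[:r].T) / s[:r]                  # B = R V1 D1^{-1}, d x r
    Ub, sb, _ = np.linalg.svd(B)                # left singular vectors of B
    # Diagonal of D_B D_B^T, zero-padded and sorted ascending, so that
    # the most deficient directions are completed first (eqs. (24)-(26)).
    sb_full = np.concatenate([sb, np.zeros(d - sb.size)])
    order = np.argsort(sb_full)
    Ub, sb_full = Ub[:, order], sb_full[order]
    # M = U_B D_M with D_M = sqrt(max(I - D_B D_B^T, 0)) J and J = I.
    dm = np.sqrt(np.maximum(1.0 - sb_full[: d - r] ** 2, 0.0))
    M = Ub[:, : d - r] * dm
    return B @ U1.T + M @ U2.T                  # A = [B, M][U1, U2]^T
```

When $P$ has full rank $r = d$, the matrix $M$ is empty and the result coincides with $A = RP^+$.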
4.1 Orientation
We now consider the oriented linear problem. Let $\hat{A}$ be a solution of the non-oriented linear problem, and $A$ a solution of the oriented linear problem. Assume, without loss of generality, that the constraint is $\det(A) \ge 0$. Since the orientation constraint restricts the search space, $E(\hat{A}) \le E(A)$. If $\det(\hat{A}) \ge 0$, we are done. Assume this is not the case, and that $E(\hat{A}) < E(A)$. We claim that the oriented solution $A$ then has $\det(A) = 0$. The proof is by contradiction. Let $\det(A) > 0$. By the continuity of the determinant, there is a $t \in (0, 1) \subset \mathbb{R}$ such that $\det((1-t)\hat{A} + tA) = 0$. But now, by the convexity of $E$,

$$E((1-t)\hat{A} + tA) \le (1-t)E(\hat{A}) + tE(A) < (1-t)E(A) + tE(A) = E(A), \qquad (28)$$

which contradicts the minimality of $A$. Therefore $\det(A) = 0$. We leave this problem open.
5 Linear rigid problem
We will now derive the solution to the linear rigid problem. Suppose the problem has already been reduced from affine to linear, and from unpaired to paired. Let $Q \in \mathbb{R}^{d \times d}$ be an orthogonal matrix and $M \in \mathbb{R}^{d \times d}$ be an arbitrary matrix. Since

$$\|Q - M\|^2 = \operatorname{tr}(I_d) - 2\operatorname{tr}(Q^T M) + \operatorname{tr}(M^T M), \qquad (29)$$

minimizing $\|Q - M\|$ is equivalent to maximizing $\operatorname{tr}(Q^T M)$. Let $M = UDV^T$ be the singular value decomposition of $M$, where $U, V \in \mathbb{R}^{d \times d}$ are orthogonal and $D \in \mathbb{R}^{d \times d}$ is diagonal. Now

$$\operatorname{tr}(Q^T M) = \operatorname{tr}(Q^T U D V^T) = \operatorname{tr}(V^T Q^T U D) = \operatorname{tr}(XD), \qquad (30)$$

where $X = V^T Q^T U$ is an arbitrary orthogonal matrix. Furthermore,

$$\operatorname{tr}(XD) = \sum_{i=1}^{d} X_{ii} D_{ii}, \qquad (31)$$

since $D$ is diagonal. Now $|X_{ii}|^2 \le \sum_{k=1}^{d} |X_{ki}|^2 \le 1$, since the columns of $X$ are unit vectors. Over arbitrary matrices $X$ with the constraint $|X_{ii}| \le 1$ on the diagonal, but without the orthogonality constraint, $\forall i : X_{ii} = 1$ maximizes $\operatorname{tr}(XD)$, since the diagonal elements of $D$ are non-negative. However, the identity matrix $I_d$ is an orthogonal matrix with this property. Thus $X = V^T Q^T U = I_d$, from which it follows that

$$Q = UV^T. \qquad (32)$$

Similarly, $Q = -UV^T$ maximizes the error. Now

$$\|QP - R\|^2 = \operatorname{tr}\!\left(P P^T\right) - 2\operatorname{tr}\!\left(Q^T R P^T\right) + \operatorname{tr}\!\left(R R^T\right), \qquad (33)$$

which is minimized (maximized) when $\operatorname{tr}(Q^T R P^T)$ is maximized (minimized). Therefore, the optimal $Q$ is given by the previous result, with $M = RP^T$. That is, if $UDV^T$ is the singular value decomposition of $RP^T$, then the optimal $Q$ is given by $Q = UV^T$.
5.1 Orientation
The corresponding oriented solution is obtained from the non-oriented solution as follows. If $Q$ is required to have $\det(Q) = \pm 1$, then define $D = \operatorname{diag}(1, \dots, 1, \pm\det(UV^T))$, with the same choice of sign, and compute

$$Q = UDV^T. \qquad (34)$$

Here we have assumed that in the singular value decomposition the singular values are sorted from greatest to smallest, top to bottom. Thus replacing the smallest singular value incurs the smallest error.
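A NumPy sketch of eqs. (32) and (34) (the function name and orientation flag are ours; expects paired, centered point-sets):

```python
import numpy as np

def rigid_lsq(P, R, orientation=0):
    """Optimal orthogonal Q minimizing ||Q P - R||_F (eq. (32)).
    With orientation = +1 or -1, enforce the sign of det(Q) via eq. (34)."""
    U, _, Vt = np.linalg.svd(R @ P.T)      # singular values sorted descending
    Q = U @ Vt
    if orientation != 0 and np.sign(np.linalg.det(Q)) != orientation:
        D = np.ones(U.shape[0])
        D[-1] = -1.0                       # flip the smallest singular direction
        Q = (U * D) @ Vt                   # Q = U diag(D) V^T
    return Q
```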
6 Varying S
Suppose the problem has already been reduced from affine to linear, and from unpaired to paired. Then the error $E$ can be written as

$$E(Q, S) = \operatorname{tr}\!\left(S^2 P P^T\right) - 2\operatorname{tr}\!\left(S Q^T R P^T\right) + \operatorname{tr}\!\left(R R^T\right). \qquad (35)$$

If we are to vary $E$ with respect to $S$, then the variations $\Delta S$ must satisfy

$$\Delta S^T = \Delta S. \qquad (36)$$

The directional derivative of $E$ with respect to $S$ in direction $\Delta S \in \mathbb{R}^{d \times d}$ is given by

$$D_S E(Q, S) = \operatorname{tr}\!\left(\Delta S (P P^T S + S P P^T - 2 Q^T R P^T)\right). \qquad (37)$$

Therefore, at an extremum point it necessarily holds that

$$\operatorname{tr}\!\left(\Delta S (P P^T S + S P P^T - 2 Q^T R P^T)\right) = 0, \qquad (38)$$

for all valid variations $\Delta S$.
7 Reduction from conformal to rigid

We will now derive the reduction from a conformal problem to a rigid problem. For the conformal problem, $S = sI_d$. Then the variation $\Delta S$ must also satisfy

$$\Delta S = \Delta s \, I_d. \qquad (39)$$

Substituting this into Equation 38, the necessary condition for an extremum is then given by

$$\operatorname{tr}\!\left(\Delta s (2 s P P^T - 2 Q^T R P^T)\right) = 0. \qquad (40)$$

Since this holds for all $\Delta s$,

$$s = \frac{\operatorname{tr}(Q^T R P^T)}{\operatorname{tr}(P P^T)}. \qquad (41)$$

If $Q$ is constrained to $I_d$,

$$s = \frac{\operatorname{tr}(R P^T)}{\operatorname{tr}(P P^T)}. \qquad (42)$$
Assume $Q$ is not constrained to $I_d$. Substituting $s$ back into Equation 35, we get

$$E(Q) = -\frac{\operatorname{tr}(Q^T R P^T)^2}{\operatorname{tr}(P P^T)} + \operatorname{tr}(R R^T).$$

We are thus interested in either minimizing or maximizing $\operatorname{tr}(Q^T R P^T)$, whichever gives the greater absolute value. Substituting $S = I_d$ in Equation 35 shows that maximizing (minimizing) $\operatorname{tr}(Q^T R P^T)$ is equivalent to minimizing (maximizing) $\|QP - R\|^2$. Therefore, the minimizer $Q$ for the conformal problem is either the maximizer or the minimizer $Q$ for the rigid problem. In the solution for the linear rigid problem we showed that if $UDV^T$ is the singular value decomposition of $RP^T$, then $\operatorname{tr}(Q^T R P^T)$ is maximized by $UV^T$, and minimized by $-UV^T$. Since both of these solutions give the same $\operatorname{tr}(Q^T R P^T)^2$, either one can be used. We will use this freedom to choose $Q = gUV^T$, where $g \in \{-1, +1\}$, such that $s \ge 0$. Substituting this solution into the expression for $s$, we get

$$s \operatorname{tr}(P P^T) = g \operatorname{tr}(D). \qquad (43)$$

Since we want $s \ge 0$, and $D$ is non-negative, it follows that $g = 1$. Therefore

$$s = \frac{\operatorname{tr}(D)}{\operatorname{tr}(P P^T)} = \frac{\operatorname{tr}(D)}{\|P\|^2}. \qquad (44)$$
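A NumPy sketch combining eqs. (32) and (44) (the function name is ours; expects paired, centered point-sets):

```python
import numpy as np

def conformal_lsq(P, R):
    """Optimal Q = U V^T and scale s = tr(D) / ||P||_F^2 (eqs. (32), (44))."""
    U, sv, Vt = np.linalg.svd(R @ P.T)
    return U @ Vt, sv.sum() / (P ** 2).sum()
```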
7.1 Orientation
The determinant of $QS$ is given by $\det(QS) = \det(Q)\det(S)$. If $Q$ is constrained to $I_d$, and $\det(QS) = \det(S)$ is of the wrong sign, then the previous derivation can be repeated with $s$ in place of $A$ to show that the optimal oriented transformation has $s = 0$, and therefore $S = 0$. Assume $Q$ is not constrained to $I_d$. Since we chose $s \ge 0$, $\det(QS) \ge 0$ is equivalent to $\det(Q) \ge 0$. Therefore the optimal $Q$ can be obtained by solving the oriented linear rigid problem.
8 Linear scaling problem

We will now derive the solution to the linear scaling problem. Suppose the problem has already been reduced from affine to linear, and from unpaired to paired. Now $Q = I_d$, and $S$ is a symmetric matrix. We have already shown that at an extremum point it must hold that

$$\operatorname{tr}\!\left(\Delta S (P P^T S + S P P^T - 2 R P^T)\right) = 0, \qquad (45)$$

for all symmetric variations $\Delta S$. By substituting $\Delta S = e_i e_j^T + e_j e_i^T$ for all $i, j \in [1, d]$, it can be seen that this is equivalent to $P P^T S + S P P^T - 2 R P^T$ being skew-symmetric. Thus

$$(P P^T S + S P P^T - 2 R P^T)^T = -(P P^T S + S P P^T - 2 R P^T) \;\Leftrightarrow\; P P^T S + S P P^T = R P^T + P R^T. \qquad (46)$$

Equations of this type are known as continuous Lyapunov equations, or more generally as Sylvester equations. It can be shown that this equation has a unique solution $S$ if and only if $P P^T$ is invertible. A classic algorithm for solving Sylvester equations is given in [8]. In Matlab, continuous Lyapunov equations can be solved with the lyap function.
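A sketch of eq. (46) using SciPy's Lyapunov solver, which handles equations of the form $aX + Xa^H = q$ (the use of SciPy and the function name are our choices):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def scaling_lsq(P, R):
    """Symmetric S solving P P^T S + S P P^T = R P^T + P R^T (eq. (46)).
    A unique solution exists when P P^T is invertible."""
    return solve_continuous_lyapunov(P @ P.T, R @ P.T + P @ R.T)
```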
8.1 Orientation
Assume the non-oriented solution to the linear scaling problem is given by $\hat{S}$, with $\det(\hat{S}) \le 0$. If an oriented solution $S$ with $\det(S) \ge 0$ is desired instead, then, since a convex combination of symmetric matrices is again symmetric, it can be shown that if $E(\hat{S}) < E(S)$, then $\det(S) = 0$. The proof is exactly the same as for the linear problem. We leave this problem open.
References

1. Horn, B. K. P., Hilden, H. M., Negahdaripour, S.: Closed-Form Solution of Absolute Orientation using Orthonormal Matrices. Journal of the Optical Society of America, Volume 5, Number 7, 1127–1135 (1988)
2. Green, B.: The orthogonal approximation of an oblique structure in factor analysis. Psychometrika, Volume 17, Number 4, 429–440 (1952)
3. Chen, Y., Medioni, G.: Object modelling by registration of multiple range images. Image Vision Comput., Volume 10, Number 3, 145–155 (1992)
4. Besl, P. J., McKay, N. D.: A Method for Registration of 3-D Shapes. IEEE Trans. Pattern Anal. Mach. Intell., Volume 14, Number 2, 239–256 (1992)
5. Rusinkiewicz, S., Levoy, M.: Efficient Variants of the ICP Algorithm. Proceedings of the Third Intl. Conf. on 3D Digital Imaging and Modeling, 145–152 (2001)
6. Zhang, L., Choi, S., Park, S.: Robust ICP Registration Using Biunique Correspondence. International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission, 3DIMPVT 2011, Hangzhou, China, 16–19 May 2011, 80–85 (2011)
7. Levoy, M., Pulli, K., Curless, B., Rusinkiewicz, S., Koller, D., Pereira, L., Ginzton, M., Anderson, S., Davis, J., Ginsberg, J., Shade, J., Fulk, D.: The digital Michelangelo project: 3D scanning of large statues. Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, SIGGRAPH '00, 131–144 (2000)
8. Bartels, R. H., Stewart, G. W.: Solution of the matrix equation AX + XB = C. Commun. ACM, Volume 15, Number 9, 820–826 (1972)
9. Walker, M. W., Shao, L., Volz, R. A.: Estimating 3-D location parameters using dual number quaternions. CVGIP: Image Underst., Volume 54, Number 3, 358–367 (1991)
10. Myronenko, A., Song, X.: Point Set Registration: Coherent Point Drift. IEEE Trans. Pattern Anal. Mach. Intell., Volume 32, Number 12, 2262–2275 (2010)
11. Schönemann, P.: A generalized solution of the orthogonal Procrustes problem. Psychometrika, Volume 31, Number 1, 1–10 (1966)
12. Eggert, D. W., Lorusso, A., Fisher, R. B.: Estimating 3-D rigid body transformations: a comparison of four major algorithms. Mach. Vision Appl., Volume 9, Number 5–6, 272–290 (1997)
13. Arun, K. S., Huang, T. S., Blostein, S. D.: Least-Squares Fitting of Two 3-D Point Sets. IEEE Trans. Pattern Anal. Mach. Intell., Volume 9, Number 5, 698–700 (1987)
14. Horn, B. K. P.: Closed-form solution of absolute orientation using unit quaternions. Journal of the Optical Society of America, Volume 4, 629–642 (1987)
15. Umeyama, S.: Least-Squares Estimation of Transformation Parameters Between Two Point Patterns. IEEE Trans. Pattern Anal. Mach. Intell., Volume 13, Number 4, 376–380 (1991)
16. Goodrich, M. T., Mitchell, J. S. B., Orletsky, M. W.: Approximate Geometric Pattern Matching Under Rigid Motions. IEEE Trans. Pattern Anal. Mach. Intell., Volume 21, Number 4, 371–379 (1999)
17. van Wamelen, P. B., Li, Z., Iyengar, S. S.: A fast expected time algorithm for the 2-D point pattern matching problem. Pattern Recognition, Volume 37, Number 8, 1699–1711 (2004)