Author’s Post-print: Journal of Geodesy (2010) 84(12) 751-762, DOI: 10.1007/s00190-010-0408-0
Generalization of Total Least-Squares (TLS) on example of unweighted and weighted 2D similarity transformation

Frank Neitzel
University of Applied Sciences Mainz, Germany, and School of Earth Sciences, The Ohio State University, Columbus, Ohio, USA
e-mail: [email protected]
Abstract  In this contribution it is shown that the so-called "Total Least-Squares Estimate" (TLS) within an EIV-Model can be identified as a special case of the Method of Least-Squares within the nonlinear Gauss-Helmert Model (GH-Model). In contrast to the EIV-Model, the nonlinear GH-Model does not impose any restrictions on the form of the functional relationship between the quantities involved in the model. Even more complex EIV-Models, which require specific approaches such as "Generalized Total Least-Squares" (GTLS) or "Structured Total Least-Squares" (STLS), can be treated as nonlinear GH-Models without any serious problems. The example of a similarity transformation of planar coordinates shows that the "Total Least-Squares Solution" can be obtained easily from a rigorous evaluation of the Gauss-Helmert Model. In contrast to weighted TLS, weights can then be introduced without further limitations. Using two numerical examples taken from the literature, these solutions are compared with those obtained from certain specialized TLS approaches.

Keywords  Method of Least-Squares, Total Least-Squares (TLS), Structured Total Least-Squares (STLS), Weighted Total Least-Squares (WTLS), Gauss-Helmert Model, Gauss-Markov Model, coordinate transformation (2D)
1 Introduction

A considerable part of the literature on least-squares estimation distinguishes between standard "Least Squares" (LS) and "Total Least Squares" (TLS); cf., e.g., Golub and van Loan (1980), or van Huffel and Vandewalle (1991, p. 27 ff.). First, an over-determined linear model

$$y \approx A\xi \qquad (1)$$
is considered, in which the functional matrix A links the m × 1 vector of unknowns ξ with the n × 1 vector of observations y. Due to inevitable measurement errors, this equation system can be fulfilled only approximately. Under the assumptions that the measurement errors have merely a stochastic character without bias and that only the components of the observation vector y are affected by these errors, it is appropriate to introduce the n × 1 vector e of random errors that leads to the observation equations

$$y = A\xi + e \qquad (2)$$
where, under the principle of Ordinary Least-Squares Estimation (OLSE), the objective function n
eT e = ∑ ei2
(3)
i =1
is to be minimized, where the number of observations is denoted by n. In the case that different variances are associated with the observations y_i, which, moreover, could possibly be correlated, a weight matrix P can be introduced following Aitken (1935). If P is chosen proportional to the inverse variance-covariance matrix, this sort of adjustment is called "Weighted Least Squares" (WLS) and is associated with the linear Gauss-Markov Model (GM-Model); cf. Niemeier (2008, p. 137 ff.). Obviously, the notion of "Method of Least Squares" has comprised, since its very beginning about two centuries ago, certain nonlinear problems as well; cf. Gauss (1809, p. 215).
In the case that, after defining the functional matrix A, a decision is made that it is necessary to regard also the elements of this matrix as observations, the inconsistency of the equation system (1) has to be repaired in a way different from (2). Consequently, this can be done in a meaningful way only by introducing random errors both for the vector y and for the elements aij of the functional matrix A. This results in a consistent equation system
$$y = A^*\xi + e \qquad (4)$$
with

$$A^* = \begin{bmatrix} a_{11}-e_{11} & a_{12}-e_{12} & \cdots & a_{1m}-e_{1m} \\ a_{21}-e_{21} & a_{22}-e_{22} & \cdots & a_{2m}-e_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1}-e_{n1} & a_{n2}-e_{n2} & \cdots & a_{nm}-e_{nm} \end{bmatrix} = A - E_A. \qquad (5)$$
Here, m denotes the number of unknowns. In the absence of weights, the objective function to be minimized obtains the form

$$e^T e + e_A^T e_A = \sum_{i=1}^{n} e_i^2 + \sum_{i=1}^{n}\sum_{j=1}^{m} e_{ij}^2, \qquad e_A = \mathrm{vec}\, E_A. \qquad (6)$$
Here, the "vec" operator stacks the columns of E_A, one underneath the other, into a vector. Corresponding weight matrices could also be taken into account if necessary. This adjustment has been named "Total Least-Squares" (TLS) by Golub and van Loan (1980), although it is known to be the standard LS-method as applied to the Errors-In-Variables (EIV) model; see Schaffrin and Snow (2010), for instance. It should be noted that, in some cases, only some of the columns of the functional matrix are subject to random errors, depending on the problem definition, a case that can be handled by zero blocks in the respective weight matrix. In order to solve TLS problems, one of the most elegant algorithms ever proposed is based on Singular Value Decomposition (SVD); cf. Golub and van Loan (1980), or van Huffel and Vandewalle (1991, p. 29 ff.), among others. Whether it can be extended to weighted TLS problems, however, is unclear.

A look at the recent geodetic literature shows that the application of TLS enjoys increasing popularity there as well. This popularity is occasionally justified by the claim that the results of a TLS adjustment are "better" compared to the results of a standard LS adjustment; "better" in the sense that, in general, a TLS adjustment can be expected to provide "satisfactory" or "more realistic" estimates for the unknown parameters due to increased model flexibility; cf., e.g., Schaffrin et al. (2006), who base this judgment on their interpretation of TLS as "standard LS in a more suitable model". Starting from this state of the discussion and taking, as an example, the similarity transformation in the plane into consideration, the present study aims at the following objectives:

1. The view of Schaffrin et al. (2006) should be confirmed that TLS does not represent a new adjustment method, but merely uses another adjustment model in the frame of the Method of Least-Squares.
2. It should be checked under which conditions the statement is justified that TLS indeed can provide "better" results in comparison with standard LS, considering that the GM-Model is a special case of the EIV-Model.
3. Furthermore, it should be shown that the solution of the so-called TLS problem can be obtained by a rigorous evaluation in a nonlinear Gauss-Helmert Model (GH-Model).

The example of a planar coordinate transformation in 2D was chosen because it is one of the most frequent applications of adjustment in the fields of Geodesy, Engineering Surveying, Photogrammetry, Computer Vision and Geographical Information Science (GIS). Specific TLS solutions for coordinate transformations have been presented before, e.g., in Felus and Schaffrin (2005), Akyilmaz (2007), as well as in Schaffrin and Felus (2008).
2 Total Least-Squares as a special case of the Method of Least-Squares

Considered is the functional model of the similarity transformation (4-parameter transformation) in the plane. Taking as transformation parameters ξ_0, η_0 ... translation of the coordinate origin, α ... rotation angle, µ ... scale factor, the well-known transformation law follows the approximation
$$\begin{bmatrix} X_i \\ Y_i \end{bmatrix} \approx \begin{bmatrix} \cos\alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha \end{bmatrix} \begin{bmatrix} \mu & 0 \\ 0 & \mu \end{bmatrix} \begin{bmatrix} x_i \\ y_i \end{bmatrix} + \begin{bmatrix} \xi_0 \\ \eta_0 \end{bmatrix}. \qquad (7)$$
By multiplying out the corresponding expressions, (7) obtains the form
$$X_i = (\mu\cos\alpha)\, x_i - (\mu\sin\alpha)\, y_i + \xi_0, \qquad Y_i = (\mu\sin\alpha)\, x_i + (\mu\cos\alpha)\, y_i + \eta_0. \qquad (8)$$
Substituting

$$\xi_2 = \mu\cos\alpha, \qquad \xi_3 = \mu\sin\alpha \qquad (9)$$

results in an approximate linear equation system
$$X_i \approx \xi_2 x_i - \xi_3 y_i + \xi_0, \qquad Y_i \approx \xi_3 x_i + \xi_2 y_i + \eta_0 \qquad (10)$$
with i = 1, …, k, where k denotes the number of homologous points. If there are k > 2 homologous points, the unknowns can be determined through an adjustment process. This can essentially result in three different problem formulations.

Problem 1: The transformation parameters are to be determined under the assumption that the coordinates X_i, Y_i are observations and hence subject to random errors, whereas the coordinates x_i, y_i represent error-free quantities. Obviously, the variances and covariances of the observations are to be taken into account. In this problem formulation it is necessary to introduce random errors e_{X_i}, e_{Y_i}, which results in the linear observation equations
$$X_i - e_{X_i} = \xi_2 x_i - \xi_3 y_i + \xi_0, \qquad Y_i - e_{Y_i} = \xi_3 x_i + \xi_2 y_i + \eta_0. \qquad (11)$$
These can be written in matrix notation (2) by denoting ξ1 := η0 and
$$y = \begin{bmatrix} \vdots \\ X_i \\ Y_i \\ \vdots \end{bmatrix}, \quad e = \begin{bmatrix} \vdots \\ e_{X_i} \\ e_{Y_i} \\ \vdots \end{bmatrix}, \quad \xi = \begin{bmatrix} \xi_0 \\ \xi_1 \\ \xi_2 \\ \xi_3 \end{bmatrix}, \quad A = \begin{bmatrix} \cdots & \cdots & \cdots & \cdots \\ 1 & 0 & x_i & -y_i \\ 0 & 1 & y_i & x_i \\ \cdots & \cdots & \cdots & \cdots \end{bmatrix}. \qquad (12)$$
Taking into account the weight matrix P for the observations Xi, Yi the objective function to be minimized obtains the form
$$e^T P e = \min_{e,\,\xi} \quad \text{subject to (11)–(12)}. \qquad (13)$$
The estimation of the parameters can be accomplished by weighted Least-Squares within a linear Gauss-Markov Model (GM-Model); a minimal numerical sketch of this estimate is given below.
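The following sketch illustrates the Problem-1 estimate; the function and variable names are illustrative only and do not stem from the paper, and coordinate arrays of shape (k, 2) are assumed.

```python
import numpy as np

def similarity_ls_target_only(XY, xy, P=None):
    """Weighted LS in the linear GM-Model of Problem 1, cf. (11)-(13):
    only X_i, Y_i are treated as observations.  XY and xy are (k, 2)
    arrays; P is an optional 2k x 2k weight matrix (identity if omitted)."""
    k = XY.shape[0]
    A = np.zeros((2 * k, 4))                 # functional matrix per (12)
    A[0::2, 0] = 1.0                         # column of xi_0
    A[1::2, 1] = 1.0                         # column of xi_1 = eta_0
    A[0::2, 2], A[0::2, 3] = xy[:, 0], -xy[:, 1]
    A[1::2, 2], A[1::2, 3] = xy[:, 1],  xy[:, 0]
    y = XY.reshape(-1)                       # [X_1, Y_1, ..., X_k, Y_k]
    P = np.eye(2 * k) if P is None else P
    xi_hat = np.linalg.solve(A.T @ P @ A, A.T @ P @ y)   # normal equations
    e_tilde = y - A @ xi_hat                 # residuals of X_i, Y_i only
    return xi_hat, e_tilde
```

In the case studies of Sections 4 and 5, exactly this Problem-1 estimate also serves as a convenient source of approximate values for the iterative solutions.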
Problem 2: The transformation parameters are to be determined under the assumption that the coordinates x_i, y_i are observations and hence subject to random errors, whereas the coordinates X_i, Y_i represent error-free quantities. Again, variances and covariances of the observations have to be taken into account. In this case it is necessary to introduce random errors e_{x_i}, e_{y_i}, which results in the identities

$$\xi_2 (x_i - e_{x_i}) - \xi_3 (y_i - e_{y_i}) = X_i - \xi_0, \qquad \xi_3 (x_i - e_{x_i}) + \xi_2 (y_i - e_{y_i}) = Y_i - \eta_0. \qquad (14)$$
By multiplying the first equation by ξ_2, the second by ξ_3, and adding the resulting expressions, it follows that
$$x_i - e_{x_i} = \frac{\xi_2}{\mu^2}\,(X_i - \xi_0) + \frac{\xi_3}{\mu^2}\,(Y_i - \eta_0), \qquad (15)$$
and in analogy to this
$$y_i - e_{y_i} = -\frac{\xi_3}{\mu^2}\,(X_i - \xi_0) + \frac{\xi_2}{\mu^2}\,(Y_i - \eta_0). \qquad (16)$$
Substituting
$$\bar{\xi}_2 = \frac{\xi_2}{\mu^2}, \qquad \bar{\xi}_3 = \frac{\xi_3}{\mu^2}, \qquad (17)$$
and
$$\bar{\xi}_0 = -\bar{\xi}_2 \xi_0 - \bar{\xi}_3 \eta_0, \qquad \bar{\xi}_1 = \bar{\xi}_3 \xi_0 - \bar{\xi}_2 \eta_0 \qquad (18)$$
yields linear observation equations in new parameters
$$x_i - e_{x_i} = \bar{\xi}_2 X_i + \bar{\xi}_3 Y_i + \bar{\xi}_0, \qquad y_i - e_{y_i} = -\bar{\xi}_3 X_i + \bar{\xi}_2 Y_i + \bar{\xi}_1. \qquad (19)$$

These can be brought into the matrix notation (2) by denoting
$$y = \begin{bmatrix} \vdots \\ x_i \\ y_i \\ \vdots \end{bmatrix}, \quad e = \begin{bmatrix} \vdots \\ e_{x_i} \\ e_{y_i} \\ \vdots \end{bmatrix}, \quad \bar{\xi} = \begin{bmatrix} \bar{\xi}_0 \\ \bar{\xi}_1 \\ \bar{\xi}_2 \\ \bar{\xi}_3 \end{bmatrix}, \quad A = \begin{bmatrix} \cdots & \cdots & \cdots & \cdots \\ 1 & 0 & X_i & Y_i \\ 0 & 1 & Y_i & -X_i \\ \cdots & \cdots & \cdots & \cdots \end{bmatrix}. \qquad (20)$$
Taking into account the weight matrix P for the observations xi, yi results in the objective function
$$e^T P e = \min_{e,\,\bar{\xi}} \quad \text{subject to (19)–(20)}. \qquad (21)$$
The parameter estimation of type Least-Squares can be performed in the frame of a linear GM-Model. The original unknowns ξ_2 and ξ_3 can be obtained from the nonlinear relationships
$$\xi_2 = \frac{\bar{\xi}_2}{\bar{\xi}_2^2 + \bar{\xi}_3^2}, \qquad \xi_3 = \frac{\bar{\xi}_3}{\bar{\xi}_2^2 + \bar{\xi}_3^2}, \qquad (22)$$
and the values of ξ_0, η_0 from the solution of the equation system
$$\begin{bmatrix} -\bar{\xi}_2 & -\bar{\xi}_3 \\ \bar{\xi}_3 & -\bar{\xi}_2 \end{bmatrix} \begin{bmatrix} \xi_0 \\ \eta_0 \end{bmatrix} = \begin{bmatrix} \bar{\xi}_0 \\ \bar{\xi}_1 \end{bmatrix}. \qquad (23)$$
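A small sketch of this back-substitution, using purely hypothetical values for the estimates of the reparametrized model; the recovery of µ and α via (9) is appended for completeness.

```python
import numpy as np

# Hypothetical estimates from the adjustment in the model (19)-(21)
xi2_bar, xi3_bar = 0.99, 0.04
xi0_bar, xi1_bar = -140.0, -145.0

den = xi2_bar**2 + xi3_bar**2
xi2, xi3 = xi2_bar / den, xi3_bar / den            # Eq. (22)

# Eq. (23): solve the 2x2 system for the translations xi_0, eta_0
M = np.array([[-xi2_bar, -xi3_bar],
              [ xi3_bar, -xi2_bar]])
xi0, eta0 = np.linalg.solve(M, np.array([xi0_bar, xi1_bar]))

mu    = np.hypot(xi2, xi3)                         # scale factor, cf. (9)
alpha = np.arctan2(xi3, xi2)                       # rotation angle, cf. (9)
```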
The corresponding estimates can no longer be claimed as Least-Squares estimates, due to the nonlinear nature of the identities (22) and (23).

Problem 3: The transformation parameters are to be determined under the assumption that both the coordinates X_i, Y_i and the coordinates x_i, y_i represent observed quantities, thus containing random errors. As always, variances and covariances of the observations have to be taken into account. In this problem, both the random errors e_{X_i}, e_{Y_i} and e_{x_i}, e_{y_i} have to be introduced, which results in the identities
$$X_i - e_{X_i} = \xi_2 (x_i - e_{x_i}) - \xi_3 (y_i - e_{y_i}) + \xi_0, \qquad Y_i - e_{Y_i} = \xi_3 (x_i - e_{x_i}) + \xi_2 (y_i - e_{y_i}) + \eta_0. \qquad (24)$$
Putting all corrections into the vector

$$e_{\mathrm{ext}} := \begin{bmatrix} \cdots & e_{X_i} & e_{Y_i} & \cdots & \cdots & e_{x_i} & e_{y_i} & \cdots \end{bmatrix}^T, \qquad (25)$$

and the accuracy relations into a corresponding weight matrix P, results again in the objective function
$$e_{\mathrm{ext}}^T P\, e_{\mathrm{ext}} = \min_{e_{\mathrm{ext}},\,\xi} \quad \text{subject to (24)}. \qquad (26)$$
This adjustment problem cannot be solved directly in the frame of the linear GM-Model, since the functional model cannot be given the form (2). Therefore, there is no standard LS solution of this problem. Rearranging (24), however, an implicit form of the functional relationship follows:
$$X_i - e_{X_i} - \xi_2 (x_i - e_{x_i}) + \xi_3 (y_i - e_{y_i}) - \xi_0 = 0, \qquad Y_i - e_{Y_i} - \xi_3 (x_i - e_{x_i}) - \xi_2 (y_i - e_{y_i}) - \eta_0 = 0. \qquad (27)$$
This form is an example of a functional relationship that leads to an adjustment of (nonlinear) condition equations with unknowns (Helmert 1924, p. 285 ff.); see also Schaffrin and Snow (2010) for a different application of this rearrangement. In this context it is of little importance that the functional relationship is nonlinear. The solution of this adjustment by the Method of Least-Squares can be achieved through an evaluation within the GH-Model, as will be shown in Section 3. However, with the exception of Schaffrin and Snow (2010), this possibility for solving the problem was rather neglected in the literature discussed previously.

After realizing that the model underlying Problem 3 is of the Errors-In-Variables (EIV) type, the Total Least-Squares (TLS) approach is introduced. The starting point for the TLS adjustment is the definition of a quasi-linear model. While in model (11) only the quantities X_i, Y_i are regarded as observations, resulting in a functional matrix
$$A = \begin{bmatrix} \cdots & \cdots & \cdots & \cdots \\ 1 & 0 & x_i & -y_i \\ 0 & 1 & y_i & x_i \\ \cdots & \cdots & \cdots & \cdots \end{bmatrix}, \qquad (28)$$
the situation changes rather fundamentally in Problem 3. Besides the coordinates X_i, Y_i, also the quantities x_i, y_i are regarded as observations, and hence are subject to errors. Thus, it is necessary to associate the elements of the third and fourth column of the functional matrix with random errors. Consequently, a new functional model results as described in (4), namely
y = A∗ξ + e ,
(29)
where the respective quantities are defined as
$$A^* = \begin{bmatrix} \cdots & \cdots & \cdots & \cdots \\ 1 & 0 & (x_i - e_{x_i}) & -(y_i - e_{y_i}) \\ 0 & 1 & (y_i - e_{y_i}) & (x_i - e_{x_i}) \\ \cdots & \cdots & \cdots & \cdots \end{bmatrix}, \quad y = \begin{bmatrix} \vdots \\ X_i \\ Y_i \\ \vdots \end{bmatrix}, \quad e = \begin{bmatrix} \vdots \\ e_{X_i} \\ e_{Y_i} \\ \vdots \end{bmatrix}, \quad \xi = \begin{bmatrix} \xi_0 \\ \xi_1 \\ \xi_2 \\ \xi_3 \end{bmatrix}. \qquad (30)$$
The objective function to be minimized is defined in (26), with the vector of random errors defined by (25). Since the product of A* with ξ now includes inherently nonlinear terms, the Method of Least-Squares will necessarily lead to nonlinear normal equations which may be solved by linear iteration. One possible algorithm may follow these steps:

- First, the least-squares solution for the coordinate transformation within model (11) is computed as approximation.
- The fact that the coordinates x_i, y_i are now treated as observations as well, and hence are subject to random errors, is taken into account afterwards by formally comparing the vector ξ in (29) with its approximation.
Finally, the TLS principle applied to Problem 3 again results in an adjustment problem with nonlinear normal equations. The least-squares solution of this problem should be identical to the result of the evaluation within the original GH-Model as long as in both cases the identical objective function (26) of the random error vector (25) is minimized subject to an identical functional relationship. Note that the structure of the matrix A*, where some observations appear twice, see (30), has to be considered within an appropriate TLS approach. It follows that TLS adjustment does not represent a novel type of adjustment method per se, but merely an additional adjustment model (EIV-Model) in the general frame of the Method of Least-Squares. In the considered example this model can be regarded as a special case of the nonlinear GH-Model.
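To make this structure explicit, the following sketch (illustrative names, not from the paper) assembles A* according to (30); each reduced start-system coordinate appears in two different rows, which is exactly the structure that a plain, unstructured TLS algorithm would ignore.

```python
import numpy as np

def build_A_star(xy, e_xy):
    """Structured functional matrix A* of Eq. (30): xy and e_xy are (k, 2)
    arrays of start-system coordinates and their (approximate) random errors."""
    k = xy.shape[0]
    x_red = xy[:, 0] - e_xy[:, 0]            # x_i - e_xi
    y_red = xy[:, 1] - e_xy[:, 1]            # y_i - e_yi
    A_star = np.zeros((2 * k, 4))
    A_star[0::2, 0] = 1.0
    A_star[1::2, 1] = 1.0
    A_star[0::2, 2], A_star[0::2, 3] = x_red, -y_red   # X-equation rows
    A_star[1::2, 2], A_star[1::2, 3] = y_red,  x_red   # Y-equation rows
    return A_star
```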
It should be noted that, in the frame of the TLS approach, methods have been developed which find the solution without iteration, thereby paying special attention to stability, favorable numerical features, and efficiency; cf., e.g., Golub and van Loan (1980), or van Huffel and Vandewalle (1991, p. 29 ff.). These aspects are very important, and the respective contribution of the quoted sources can hardly be overestimated. However, which algorithm is used for solving a nonlinear equation system does not depend on the chosen adjustment model. In spite of the fact that, in the frame of adjustment calculus, the Gauss-Newton iteration is established as one of the preferred solution methods, it was never the exclusive solution method; cf., e.g., Schwarz et al. (1968, p. 78 ff.), or Lawson and Hanson (1974).

Prospects of the minimization of the objective function (26) by an evaluation in the EIV-Model are investigated in this study by considering the example of the planar coordinate transformation. This investigation is based on numerical examples from Felus and Schaffrin (2005) and Akyilmaz (2007). Finally, the opinion from the literature that the results of a TLS adjustment in model (24) tend to be "better" than the results of a LS adjustment in model (11) or (19), in the sense that this method supplies "more realistic" estimates for the unknown parameters, is discussed.

Looking at the LS adjustment resulting from Problem 1 and at the TLS adjustment resulting from Problem 3, it is directly obvious that LS and TLS are not two different methods, but applications of the same method (the Method of Least-Squares) to two different problems. Thus, any discussion of which of the two approaches is better is unnecessary. It is only necessary always to model the given problem, and not something completely different from it. A statement to this extent can already be found in Petrovic (2003, p. 56).
3 TLS solution generated by Least-Squares adjustment within the nonlinear Gauss-Helmert Model

In this section the solution of Problem 3 from Section 2 will be based on "classical" procedures, specifically on an iterative evaluation of the nonlinear normal equations as derived for the nonlinear GH-Model by least-squares adjustment. A very popular approach replaces the original GH-Model by a sequence of linearized GH-Models, which obviously requires a correct linearization of the condition equations. This means that the linearization has to be done both at the approximate values ξ^0 for the unknowns and at approximate values e^0 for the random errors; alternatively, the linearization can be performed at (y − e^0). Such iterative solution procedures have long been regarded as "rigorous" evaluations of the nonlinear normal equations, although no formal proof seems to exist. An extensive presentation of an evaluation of this kind can already be found in Böck (1961) and Pope (1972). Lenzmann and Lenzmann (2004) pick up this problem once more and present another rigorous treatment of a nonlinear GH-Model as well. In doing so, they show very clearly which terms, when neglected, will produce merely approximate formulas. Unfortunately, these approximate formulas, which can yield an unusable solution, are found in all too many popular textbooks, among them Mikhail and Gracie (1981), Wolf and Ghilani (1997), Benning (2007), and Niemeier (2008). Therefore, they are widely spread in practical applications. A comparison between the approximate and the rigorous solutions when fitting a straight line can be found, e.g., in Neitzel and Petrovic (2008).

The rigorous treatment of the iteratively linearized GH-Model as presented in the following is based on Lenzmann and Lenzmann (2004). Here, general formulas are given as far as they are necessary for the considered problem. Next, the formulas for the determination of the transformation parameters of a planar coordinate transformation are given for the case when both the coordinates X_i, Y_i and x_i, y_i are regarded as observations which are subject to random errors.

The vector of observations is denoted by y. The random error vector e and the vector of unknowns ξ are connected by r nonlinear differentiable condition equations of the form
$$\psi_i(e, \xi) = h_i(y - e, \xi) = 0 \qquad (31)$$
with i = 1, …, r. Introducing appropriate approximate values e^0 and ξ^0, the linearized condition equations can be written as
$$f(e, \xi) \approx B_0 (e - e^0) + A_0 (\xi - \xi^0) + \psi(e^0, \xi^0) = 0, \qquad (32)$$

involving the matrices of partial derivatives
$$B_0(e, \xi) = \left.\frac{\partial \psi(e, \xi)}{\partial e^T}\right|_{e^0,\,\xi^0} \qquad (33)$$
and
$$A_0(e, \xi) = \left.\frac{\partial \psi(e, \xi)}{\partial \xi^T}\right|_{e^0,\,\xi^0}. \qquad (34)$$
These derivatives have to be formed at the approximate values e^0 and ξ^0. It should be noted that the symbol A_0 is now used in a different way than in the preceding sections. With the vector of misclosures
$$w_0 = -B_0 e^0 + \psi(e^0, \xi^0), \qquad (35)$$
the solution ξ̂^1 for the unknowns in the first iteration step is obtained from the equation system

$$\begin{bmatrix} B_0 Q B_0^T & A_0 \\ A_0^T & 0 \end{bmatrix} \begin{bmatrix} \hat{\lambda}^1 \\ \hat{\xi}^1 - \xi^0 \end{bmatrix} + \begin{bmatrix} w_0 \\ 0 \end{bmatrix} = 0, \qquad (36)$$
where Q denotes the cofactor matrix of the observations, and λ is a vector of auxiliary “Lagrange multipliers”; hats indicate estimates. The first residual vector can now be obtained from
$$\tilde{e}^1 = Q B_0^T \hat{\lambda}^1. \qquad (37)$$
The solutions ẽ^1, ξ̂^1, after stripping them of their random character, are to be substituted as new approximate values, as long as necessary, until a sensibly chosen break-off condition is met. For the choice of break-off conditions, refer, e.g., to Böck (1961) and Lenzmann and Lenzmann (2004). Experience shows that, after convergence, the final solution fulfills the nonlinear least-squares normal equations. Note that oftentimes the update in (35) is being mishandled, in which case convergence may occur, but not to the nonlinear least-squares solution.

Considering the problem that, using the coordinates X_i, Y_i and x_i, y_i as observations, the parameters of a planar coordinate transformation are to be determined, then according to (31) the conditions
$$\psi(e, \xi) = \begin{bmatrix} \cdots \\ X_i - e_{X_i} - \xi_2 (x_i - e_{x_i}) + \xi_3 (y_i - e_{y_i}) - \xi_0 \\ Y_i - e_{Y_i} - \xi_3 (x_i - e_{x_i}) - \xi_2 (y_i - e_{y_i}) - \xi_1 \\ \cdots \end{bmatrix} = 0 \qquad (38)$$
are to be satisfied. Taking appropriate approximate values e^0_{X_i}, e^0_{Y_i} and e^0_{x_i}, e^0_{y_i}, as well as ξ_0^0, ξ_1^0, ξ_2^0, ξ_3^0, it is possible to build the matrices
$$B_1 = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix}, \qquad (39)$$

$$B_2 = \begin{bmatrix} -\xi_2^0 & \xi_3^0 & 0 & 0 & \cdots & 0 & 0 \\ -\xi_3^0 & -\xi_2^0 & 0 & 0 & \cdots & 0 & 0 \\ 0 & 0 & -\xi_2^0 & \xi_3^0 & \cdots & 0 & 0 \\ 0 & 0 & -\xi_3^0 & -\xi_2^0 & \cdots & 0 & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\ 0 & 0 & 0 & 0 & \cdots & -\xi_2^0 & \xi_3^0 \\ 0 & 0 & 0 & 0 & \cdots & -\xi_3^0 & -\xi_2^0 \end{bmatrix}, \qquad (40)$$

$$A = \begin{bmatrix} \cdots & \cdots & \cdots & \cdots \\ -1 & 0 & -(x_i - e^0_{x_i}) & (y_i - e^0_{y_i}) \\ 0 & -1 & -(y_i - e^0_{y_i}) & -(x_i - e^0_{x_i}) \\ \cdots & \cdots & \cdots & \cdots \end{bmatrix}, \qquad (41)$$

and the vector of misclosures

$$w_0 = \begin{bmatrix} \cdots \\ e^0_{X_i} - \xi_2^0 e^0_{x_i} + \xi_3^0 e^0_{y_i} + X_i - e^0_{X_i} - \xi_2^0 (x_i - e^0_{x_i}) + \xi_3^0 (y_i - e^0_{y_i}) - \xi_0^0 \\ e^0_{Y_i} - \xi_3^0 e^0_{x_i} - \xi_2^0 e^0_{y_i} + Y_i - e^0_{Y_i} - \xi_3^0 (x_i - e^0_{x_i}) - \xi_2^0 (y_i - e^0_{y_i}) - \xi_1^0 \\ \cdots \end{bmatrix} = \begin{bmatrix} \cdots \\ X_i - \xi_2^0 x_i + \xi_3^0 y_i - \xi_0^0 \\ Y_i - \xi_3^0 x_i - \xi_2^0 y_i - \xi_1^0 \\ \cdots \end{bmatrix}. \qquad (42)$$

Here, B_1 and B_2 denote the matrices of partial derivatives according to (33). Applying the cofactor matrices
Q_XY and Q_xy of the coordinates in the target and start systems, and assuming no correlation between them, it is possible to obtain the estimates for the unknowns from the solution of the linear equation system
$$\begin{bmatrix} Q_{XY} + B_2^0 Q_{xy} B_2^{0\,T} & A_0 \\ A_0^T & 0 \end{bmatrix} \begin{bmatrix} \hat{\lambda}^1 \\ \hat{\xi}^1 - \xi^0 \end{bmatrix} + \begin{bmatrix} w_0 \\ 0 \end{bmatrix} = 0. \qquad (43)$$
The first residual vector follows from
$$\tilde{e}^1 = \begin{bmatrix} Q_{XY} \\ Q_{xy} B_2^{0\,T} \end{bmatrix} \hat{\lambda}^1. \qquad (44)$$
After stripping the solution ẽ^1, ξ̂^1 of its random character, it is then used in the next iteration step as the approximation e^1, ξ^1. It is important to note that the second (simplified) identity in (42) is only valid for the initial step; in all later iteration steps the first identity in (42) must be used. For the considered example, the description of the adjustment problem in the frame of the nonlinear GH-Model is equivalent to the corresponding formulation using the TLS approach within an EIV-Model. This follows from the fact that, at all places in the respective matrices where observations appear, the corresponding approximate values for the random errors are to be substituted; cf. (41) and (42). The fact that, in the second identity of (42), the vector of misclosures does not contain any approximate values for the random errors is a special case for the initial step only. It cannot be generalized to later iteration steps.
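To make the iteration concrete, the following sketch implements the scheme (35)-(37) for the condition equations (38) in the four-parameter case. All function and variable names are illustrative, not taken from the paper; the matrices B and A are formed directly as the partial derivatives (33)-(34) of (38), which fixes one particular sign convention.

```python
import numpy as np

def gh_similarity_2d(XY, xy, Q_XY=None, Q_xy=None, max_iter=50, tol=1e-12):
    """Iteratively linearized GH adjustment of the planar 4-parameter
    similarity transformation, cf. (31)-(44).  XY, xy: (k, 2) arrays of
    target- and start-system coordinates; Q_XY, Q_xy: cofactor matrices
    (identity if omitted).  Returns (xi_hat, e_XY, e_xy, sigma0_sq)."""
    k = XY.shape[0]
    Q_XY = np.eye(2 * k) if Q_XY is None else Q_XY
    Q_xy = np.eye(2 * k) if Q_xy is None else Q_xy
    Q = np.block([[Q_XY, np.zeros((2 * k, 2 * k))],
                  [np.zeros((2 * k, 2 * k)), Q_xy]])

    # approximate values from Problem 1 (start coordinates treated as error-free)
    A1 = np.zeros((2 * k, 4))
    A1[0::2, 0], A1[1::2, 1] = 1.0, 1.0
    A1[0::2, 2], A1[0::2, 3] = xy[:, 0], -xy[:, 1]
    A1[1::2, 2], A1[1::2, 3] = xy[:, 1],  xy[:, 0]
    xi = np.linalg.lstsq(A1, XY.reshape(-1), rcond=None)[0]
    e = np.zeros(4 * k)            # [e_X1, e_Y1, ..., e_x1, e_y1, ...]

    def psi(e, xi):                # nonlinear condition equations, cf. (38)
        eX, eY = e[:2 * k:2], e[1:2 * k:2]
        ex, ey = e[2 * k::2], e[2 * k + 1::2]
        xr, yr = xy[:, 0] - ex, xy[:, 1] - ey
        out = np.empty(2 * k)
        out[0::2] = XY[:, 0] - eX - xi[2] * xr + xi[3] * yr - xi[0]
        out[1::2] = XY[:, 1] - eY - xi[3] * xr - xi[2] * yr - xi[1]
        return out

    for _ in range(max_iter):
        ex, ey = e[2 * k::2], e[2 * k + 1::2]
        xr, yr = xy[:, 0] - ex, xy[:, 1] - ey
        B = np.zeros((2 * k, 4 * k))       # B = d(psi)/d(e^T), cf. (33)
        A = np.zeros((2 * k, 4))           # A = d(psi)/d(xi^T), cf. (34)
        for i in range(k):
            B[2 * i, 2 * i] = -1.0
            B[2 * i + 1, 2 * i + 1] = -1.0
            B[2 * i, 2 * k + 2 * i], B[2 * i, 2 * k + 2 * i + 1] = xi[2], -xi[3]
            B[2 * i + 1, 2 * k + 2 * i], B[2 * i + 1, 2 * k + 2 * i + 1] = xi[3], xi[2]
            A[2 * i] = [-1.0, 0.0, -xr[i], yr[i]]
            A[2 * i + 1] = [0.0, -1.0, -yr[i], -xr[i]]

        w = psi(e, xi) - B @ e             # full misclosure vector, cf. (35)
        N = np.block([[B @ Q @ B.T, A], [A.T, np.zeros((4, 4))]])
        sol = np.linalg.solve(N, -np.concatenate([w, np.zeros(4)]))  # cf. (36)
        lam, dxi = sol[:2 * k], sol[2 * k:]
        e = Q @ B.T @ lam                  # residual vector, cf. (37)
        xi = xi + dxi
        if np.linalg.norm(dxi) < tol:      # simple break-off condition
            break

    P = np.linalg.inv(Q)
    sigma0_sq = (e @ P @ e) / (2 * k - 4)  # redundancy 2k - m, here m = 4
    return xi, e[:2 * k], e[2 * k:], sigma0_sq
```

A simple norm check on the parameter update serves as break-off condition here; cf. Böck (1961) and Lenzmann and Lenzmann (2004) for a discussion of suitable break-off criteria.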
4 Case study I: Transformation of equally weighted coordinates

4.1 Transformation with two parameters

The first numerical example with the coordinates listed in Tab. 1 comes from Felus and Schaffrin (2005). In this example the coordinates in the target system X_i, Y_i and the coordinates in the start system x_i, y_i are regarded as equally weighted uncorrelated observations. In Felus and Schaffrin (2005) the solutions for the rotation angle α and the scale factor µ are to be determined taking into account random errors for both the coordinates in the target ("Calibrated Coordinates") and start ("Measured Coordinates") systems.

Tab. 1: Numerical example from Felus and Schaffrin (2005)

Point No.   Calibrated Coordinates         Measured Coordinates
            Xi [mm]      Yi [mm]           xi [mm]      yi [mm]
1           -117.478        0               17.856      144.794
2            117.472        0              252.637      154.448
3              0.015     -117.41           140.089       32.326
4             -0.014      117.451          130.40       267.027
The transformation formula applied in Felus and Schaffrin (2005) reads
$$\begin{bmatrix} X_i \\ Y_i \end{bmatrix} \approx \begin{bmatrix} \cos\alpha & \sin\alpha \\ -\sin\alpha & \cos\alpha \end{bmatrix} \begin{bmatrix} \mu & 0 \\ 0 & \mu \end{bmatrix} \begin{bmatrix} x_i \\ y_i \end{bmatrix}. \qquad (45)$$
It should be noted that the rotation direction in the rotation matrix is reversed compared with (7), and no translation parameters are taken into account. With the substitutions (9) the linear functional model receives the form
$$X_i \approx \xi_2 x_i + \xi_3 y_i, \qquad Y_i \approx -\xi_3 x_i + \xi_2 y_i. \qquad (46)$$

By introducing the random errors e_{X_i}, e_{Y_i} and e_{x_i}, e_{y_i} it follows
$$X_i - e_{X_i} = \xi_2 (x_i - e_{x_i}) + \xi_3 (y_i - e_{y_i}), \qquad Y_i - e_{Y_i} = -\xi_3 (x_i - e_{x_i}) + \xi_2 (y_i - e_{y_i}). \qquad (47)$$
The choice of this functional model is somewhat extraordinary because, from the numerical values in Tab. 1, it is directly visible that there must be a large translation between both coordinate systems. This is illustrated in Fig. 1, where these two coordinate systems, XY and xy, are superimposed.
Fig. 1: Coordinate systems XY and xy plotted one on top of the other

A suitable consideration with additional translations is presented later, in Section 4.3. First, the solution based on (47) is considered, independently of the question whether this functional model is appropriate. From Felus and Schaffrin (2005) it follows that a solution is sought which satisfies the equation system (47) while minimizing the function (26) for the random errors in (25). Due to the structure of the matrix A*, where some observations appear twice, see (30), Felus and Schaffrin (2005) proposed a new technique and called it "Structured TLS procedure" (STLS). It is, however, noted that this procedure is generated from the standard TLS solution by imposing the appropriate structure on it, without claiming that it is the TLS solution among all other structured solutions. The numerical results for the auxiliary quantities ξ_2 and ξ_3, as well as the resulting outcome for the scale factor µ and the rotation angle α, are listed in Tab. 2.

Tab. 2: STLS solution from Felus and Schaffrin (2005)

Obtained parameters          STLS solution
Parameter ξ̂_2                0.30579145769903
Parameter ξ̂_3                0.01254378090726
Scale factor µ̂               0.3060486
Rotation angle α̂             2°20'56.39''
Variance component σ̂_0²      6656.6
The corresponding residuals are listed in Tab. 3.

Tab. 3: Residuals from Felus and Schaffrin (2005)

Point No.   Calibrated Coordinates         Measured Coordinates
            ẽ_Xi [mm]    ẽ_Yi [mm]         ẽ_xi [mm]    ẽ_yi [mm]
1           -119.155     -42.075            18.014        7.204
2             36.562     -42.082            -5.873        6.225
3            -41.288    -119.903             5.579       18.654
4            -41.298      35.752             6.560       -5.224
It needs to be pointed out, however, that the algorithm used by Felus and Schaffrin (2005) to generate these results does not exactly follow the analytical formulas provided in their very paper. After correcting the algorithm, both the estimated parameters as well as the residuals do change, with a surprising answer to be discussed below.

Now let us solve the so-called STLS problem by iteratively linearizing the nonlinear Gauss-Helmert Model, subject to the same requirements. This means that the same solution is sought that fulfills the equation system (47) while minimizing the function (26) of random errors in (25). The required approximate values for the unknowns can be computed in the first step from a transformation with error-free values x_i, y_i (Problem 1 in Section 2). These are ξ_2^0 = 0.25 and ξ_3^0 = 0.01. As approximate values for the random errors, e^0_{x_i} = e^0_{y_i} = 0 can be chosen. Applying the formulas introduced in Section 3 (but with the translation parameters set to zero) yields, after several iterations, the solution as presented in Tab. 4.

Tab. 4: Solution within an iteratively linearized GH-Model

Obtained parameters          GH solution
Parameter ξ̂_2                0.30686619800718
Parameter ξ̂_3                0.01258786751144
Scale factor µ̂               0.3071243
Rotation angle α̂             2°20'56.39''
Variance component σ̂_0²      6371.5
The objective function (26) at its minimum obtains the value ẽ^T P ẽ = 38229.2. The corresponding residuals are listed in Tab. 5.

Tab. 5: Residuals within the iteratively linearized GH-Model

Point No.   Calibrated Coordinates         Measured Coordinates
            ẽ_Xi [mm]    ẽ_Yi [mm]         ẽ_xi [mm]    ẽ_yi [mm]
1           -114.025     -40.397            34.482       13.832
2             34.726     -40.404           -11.165       11.961
3            -39.641    -114.743            10.720       35.710
4            -39.651      33.949            12.595       -9.919
4.2 Discussion of the results

Comparing the results for the transformation parameters in Tab. 2 and Tab. 4, considerable differences are detected. Merely the values for the rotation angle agree. Furthermore, it should be noted that the outcomes for the variance factor disagree, which is actually not an unexpected result. Comparing the size of the residuals in Tab. 3 and Tab. 5, quite large differences between the results attract attention.
Now, a natural question arises: What is the cause for the results deviating so much from one another? Let us first take a look at the value of the estimated variance component for the STLS solution in Tab. 2, which is σ̂_0² = 6656.6. Computing this factor in the well-known way, which follows from (26) by applying

$$\hat{\sigma}_0^2 = \frac{\sum_{i=1}^{k} \left( \tilde{e}_{X_i}^2 + \tilde{e}_{Y_i}^2 \right) + \sum_{i=1}^{k} \left( \tilde{e}_{x_i}^2 + \tilde{e}_{y_i}^2 \right)}{2k - m} \qquad (48)$$
with k = 4 and m = 2 to the residuals from Tab. 3, yields a value of σ̂_0² = 6506.7, which is still too high. The inconsistency may be explained as follows: for the computation of the variance component, Felus and Schaffrin (2005) use a matrix representation of the residuals
$$\tilde{e} = \begin{bmatrix} \tilde{e}_{X_1} \\ \tilde{e}_{Y_1} \\ \tilde{e}_{X_2} \\ \tilde{e}_{Y_2} \\ \vdots \\ \tilde{e}_{X_k} \\ \tilde{e}_{Y_k} \end{bmatrix}, \qquad \tilde{E}_A = \begin{bmatrix} \tilde{e}_{x_1} & \tilde{e}_{y_1} \\ \tilde{e}_{y_1} & -\tilde{e}_{x_1} \\ \tilde{e}_{x_2} & \tilde{e}_{y_2} \\ \tilde{e}_{y_2} & -\tilde{e}_{x_2} \\ \vdots & \vdots \\ \tilde{e}_{x_k} & \tilde{e}_{y_k} \\ \tilde{e}_{y_k} & -\tilde{e}_{x_k} \end{bmatrix}. \qquad (49)$$
For the computation of the variance component, the formula
$$\hat{\sigma}_0^2 = \frac{\tilde{e}^T \tilde{e} + \mathrm{vec}(\tilde{E}_A)^T\, \mathrm{vec}(\tilde{E}_A)}{2k - m} \qquad (50)$$
may have been applied which is standard for unstructured TLS problems. In order to understand this formula it should be noted that the operator vec stacks the columns of a matrix one beneath the other taking them from the matrix from left to right; hence,
$$\mathrm{vec}(\tilde{E}_A) = \begin{bmatrix} \tilde{e}_{x_1} \\ \tilde{e}_{y_1} \\ \vdots \\ \tilde{e}_{x_k} \\ \tilde{e}_{y_k} \\ \tilde{e}_{y_1} \\ -\tilde{e}_{x_1} \\ \vdots \\ \tilde{e}_{y_k} \\ -\tilde{e}_{x_k} \end{bmatrix}. \qquad (51)$$

Building the product $\mathrm{vec}(\tilde{E}_A)^T \cdot \mathrm{vec}(\tilde{E}_A)$ yields

$$\mathrm{vec}(\tilde{E}_A)^T\, \mathrm{vec}(\tilde{E}_A) = 2 \cdot \sum_{i=1}^{k} \left( \tilde{e}_{x_i}^2 + \tilde{e}_{y_i}^2 \right). \qquad (52)$$

In classical notation (50) can be written as

$$\hat{\sigma}_0^2 = \frac{\sum_{i=1}^{k} \left( \tilde{e}_{X_i}^2 + \tilde{e}_{Y_i}^2 \right) + 2 \cdot \sum_{i=1}^{k} \left( \tilde{e}_{x_i}^2 + \tilde{e}_{y_i}^2 \right)}{2k - m}, \qquad (53)$$
which obviously uses half of the residuals twice if compared with (48). This means that the expression (50) used by Felus and Schaffrin (2005) should be replaced by
$$\hat{\sigma}_0^2 = \frac{\tilde{e}^T \tilde{e} + 0.5 \cdot \mathrm{vec}(\tilde{E}_A)^T\, \mathrm{vec}(\tilde{E}_A)}{2k - m}, \qquad (54)$$
due to the very structure of the matrix Ẽ_A. Using this expression yields a smaller variance component of σ̂_0² = 6506.7. For the value of the objective function (26), it follows ẽ^T P ẽ = 39040.1. Comparing this amount with the value of the objective function ẽ^T P ẽ = 38229.2 that follows from the solution of the same adjustment problem by iterated linearization, it is visible at once that the STLS way of solution as presented in Felus and Schaffrin (2005) does not minimize the objective function (26) as intended. Consequently, the proposed STLS procedure may not generate the TLS solution among all structured solutions. In all fairness, Felus and Schaffrin (2005) had never claimed this.

In summary, the solution strategy developed in Section 3 makes an appropriate solution of the considered problem possible, though only by iteration. This problem consists in the determination of the transformation parameters, rotation angle and scale factor, when taking into account the random errors for the coordinates in both the target and the start system. The STLS way of solving the problem, originally proposed by Felus and Schaffrin (2005), has meanwhile been modified accordingly and will be published by Schaffrin and Neitzel (2011) soon.
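As a concrete check of the double counting described above, the following sketch evaluates (48) and (53) with the rounded residuals of Tab. 3 (k = 4, m = 2); up to the rounding of the tabulated residuals, (48) gives about 6506.7 and (53) about 6656.5, close to the STLS value 6656.6 of Tab. 2.

```python
import numpy as np

# Residuals of Tab. 3, ordered per point as (X, Y) resp. (x, y)
e_XY = np.array([-119.155, -42.075, 36.562, -42.082,
                 -41.288, -119.903, -41.298, 35.752])
e_xy = np.array([18.014, 7.204, -5.873, 6.225,
                 5.579, 18.654, 6.560, -5.224])
k, m = 4, 2
ssq_XY, ssq_xy = np.sum(e_XY**2), np.sum(e_xy**2)

s0_eq48 = (ssq_XY + ssq_xy) / (2 * k - m)        # Eq. (48): about 6506.7
s0_eq53 = (ssq_XY + 2 * ssq_xy) / (2 * k - m)    # Eq. (53): about 6656.5
print(s0_eq48, s0_eq53)
```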
4.3 Transformation with four parameters

From the coordinates in Tab. 1 and the graphical presentation of the points to be transformed in Fig. 1 it is directly visible that there is a large translation between the two coordinate systems. An application of the functional model (46), which neglects the translation parameters, thus leads inevitably to unrealistic model parameter estimates, especially for the scale factor. Therefore, for an appropriate solution let us now take the functional model (10) as the basis. Hence, the goal is to find a solution for the rotation angle α, the scale factor µ, and the translation parameters ξ_0, ξ_1, considering both the coordinates in the target system X_i, Y_i and the coordinates in the start system x_i, y_i as equally weighted uncorrelated observations. The solution has to fulfill the equation system (24) and minimize the function (26) of the random errors (25). As approximate values for the unknowns, ξ_2^0 = 0.25, ξ_3^0 = 0.01, as well as ξ_0^0 = ξ_1^0 = 0 are chosen, and as approximate values for the random errors e^0_{x_i} = e^0_{y_i} = 0. Using the formulas developed in Section 3, the solution listed in Tab. 6 is obtained after several iterations.

Tab. 6: Solution within an iteratively linearized GH-Model

Obtained parameters          GH solution
Parameter ξ̂_2                0.99900748077781
Parameter ξ̂_3               -0.04109806319405
Scale factor µ̂               0.99985248784424
Rotation angle α̂            -2°21'20.72''
Shifting ξ̂_0                -141.2628 mm
Shifting ξ̂_1                -143.9316 mm
Variance component σ̂_0²      0.00016081
The value of the objective function (26) amounts to ẽ^T P ẽ = 0.00064325 (more than 10^7 times less than for the two-parameter solution from Section 4.1); the corresponding residuals are listed in Tab. 7.

Tab. 7: Residuals obtained from the iteratively linearized GH-Model

Point No.   Calibrated Coordinates         Measured Coordinates
            ẽ_Xi [mm]    ẽ_Yi [mm]         ẽ_xi [mm]    ẽ_yi [mm]
1           -0.0021       0.0076            0.0024      -0.0075
2            0.0005       0.0099           -0.0001      -0.0099
3           -0.0004      -0.0074           -0.0000       0.0075
4            0.0020      -0.0101           -0.0024       0.0100
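For illustration, a hypothetical call of the gh_similarity_2d sketch given after Section 3 with the data of Tab. 1; per Tab. 6 one expects approximately ξ̂_2 = 0.9990, ξ̂_3 = -0.0411, ξ̂_0 = -141.26 mm, ξ̂_1 = -143.93 mm.

```python
import numpy as np

# gh_similarity_2d is the illustrative sketch defined after Section 3
XY = np.array([[-117.478,    0.000], [ 117.472,    0.000],
               [   0.015, -117.410], [  -0.014,  117.451]])
xy = np.array([[  17.856,  144.794], [ 252.637,  154.448],
               [ 140.089,   32.326], [ 130.400,  267.027]])

xi_hat, e_XY, e_xy, s0_sq = gh_similarity_2d(XY, xy)   # equal weights
mu_hat    = np.hypot(xi_hat[2], xi_hat[3])             # scale factor
alpha_hat = np.arctan2(xi_hat[3], xi_hat[2])           # rotation angle [rad]
```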
4.4 Discussion of the results

Using the solution strategy developed in Section 3, it is straightforward to treat the coordinate transformation with the translation parameters ξ_0, ξ_1 included as well. In contrast to the STLS approach, the circumstance that the corresponding columns of the functional matrix do not contain any random errors ("fixed" or "frozen" columns in TLS terminology) does not cause any problems and can be treated in an appropriate way. The choice of a more suitable functional model, which takes into account the translation parameters as well, yields, in the considered example, drastically reduced residuals. This is obvious from a comparison of Tab. 5 and Tab. 7. The residuals of the 4-parameter transformation are at least 1000 times smaller than the respective residuals from the 2-parameter transformation. Furthermore, the estimated model parameters are now much more realistic than the parameters estimated in Section 4.1.
5 Case study II: Transformation of weighted coordinates

5.1 Transformation with four parameters

The second numerical example is based on the coordinates listed in Tab. 8 and originates from Akyilmaz (2007). In this example, the coordinates in both the target system X_i, Y_i and the start system x_i, y_i are regarded as observations, associated with the weight matrices

$$P_{XY} = \mathrm{Diag}\,[10 \ \ 14.2857 \ \ 0.8929 \ \ 1.4286 \ \ 7.1429 \ \ 10 \ \ 2.2222 \ \ 3.2259 \ \ 7.6923 \ \ 11.1111],$$
$$P_{xy} = \mathrm{Diag}\,[5.8824 \ \ 12.5 \ \ 0.9009 \ \ 1.7241 \ \ 7.6923 \ \ 16.6667 \ \ 4.1667 \ \ 6.6667 \ \ 8.3333 \ \ 16.6667]. \qquad (55)$$
The goal is to determine the parameters α, µ, ξ_0, ξ_1 by an adjustment taking into account the random errors for the coordinates in both the target system and the start system.

Tab. 8: Numerical example from Akyilmaz (2007)

Point No.   Target system                          Start system
            Xi [m]           Yi [m]                xi [m]           yi [m]
3           4540134.2780     382379.8964           4540124.0940     382385.9980
185         4539937.3890     382629.7872           4539927.2250     382635.8691
2796        4539979.7390     381951.4785           4539969.5670     381957.5705
2996        4540326.4610     381895.0089           4540316.2940     381901.0932
5005        4539216.3870     382184.4352           4539206.2110     382190.5278
From Akyilmaz (2007) it follows that a solution is sought that satisfies the equation system (24) while minimizing the function (26) of the random errors (25). In order to solve this adjustment problem, Akyilmaz (2007) elaborates on a so-called "Generalized TLS procedure" (GTLS), which is known to not necessarily furnish the weighted TLS solution according to Schaffrin and Wieser (2008). The GTLS solution for the transformation parameters is given in Tab. 9.

Tab. 9: GTLS solution from Akyilmaz (2007)

Obtained parameters          GTLS solution
Parameter ξ̂_2                0.9999974364
Parameter ξ̂_3               -0.0000086397
Shifting ξ̂_0                 18.5145 m
Shifting ξ̂_1                 34.1062 m
Scale factor µ̂               0.9999974362809
Rotation angle α̂            -0.0005500 gon
The result for the variance component σ̂_0² was not given in Akyilmaz (2007). The residuals, which were computed only for the coordinates of the target system, are listed in Tab. 10.
Tab. 10: Residuals for the target system from Akyilmaz (2007)

Point No.   Target system
            ẽ_Xi [m]     ẽ_Yi [m]
3           -0.0048       0.0023
185          0.0179      -0.0163
2796         0.0039      -0.0049
2996         0.0075      -0.0154
5005         0.0039       0.0017
Now we solve the weighted TLS problem within an iteratively linearized Gauss-Helmert Model, subject to the same requirements. This means that again the solution is sought that fulfills the equation system (24) while minimizing the function (26) of the random errors (25), taking into account the weights (55). First of all, the necessary approximate values for the unknowns are computed based on a transformation for error-free values x_i, y_i (Problem 1 of Section 2) without taking into account the weights. This results in ξ_2^0 = 1, ξ_3^0 = 0 and ξ_0^0 = 19.9, ξ_1^0 = -11.7. As approximate values for the random errors we choose e^0_{x_i} = e^0_{y_i} = 0. Applying the formulas as developed in Section 3, several iterations lead to the results as presented in Tab. 11.

Tab. 11: Solution from an iteratively linearized GH-Model

Obtained parameters          GH solution
Parameter ξ̂_2                0.9999953579
Parameter ξ̂_3               -0.0000042049
Shifting ξ̂_0                 29.6432 m
Shifting ξ̂_1                 14.7696 m
Scale factor µ̂               0.9999953578895
Rotation angle α̂            -0.0002677 gon
Variance component σ̂_0²      0.000179
The value of the objective function (26) is ẽ^T P ẽ = 0.001073; the corresponding residuals can be found in Tab. 12.

Tab. 12: Residuals obtained from an iteratively linearized GH-Model

Point No.   Target system                  Start system
            ẽ_Xi [m]     ẽ_Yi [m]          ẽ_xi [m]     ẽ_yi [m]
3            0.0032      -0.0055           -0.0025       0.0029
185         -0.0066       0.0066            0.0080      -0.0066
2796        -0.0011       0.0010            0.0010      -0.0006
2996        -0.0035       0.0018            0.0070      -0.0034
5005        -0.0014       0.0013           -0.0007       0.0005
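Analogously, a hypothetical weighted call of the gh_similarity_2d sketch given after Section 3 with the data of Tab. 8 and the weights (55). The ordering of the weight entries per point as (X_i, Y_i) resp. (x_i, y_i) is an assumption here; per Tab. 11 one expects approximately ξ̂_0 = 29.64 m, ξ̂_1 = 14.77 m and µ̂ = 0.9999954.

```python
import numpy as np

# gh_similarity_2d is the illustrative sketch defined after Section 3.
# Weights (55); assumed ordering per point: (X_i, Y_i) resp. (x_i, y_i)
p_XY = np.array([10, 14.2857, 0.8929, 1.4286, 7.1429,
                 10, 2.2222, 3.2259, 7.6923, 11.1111])
p_xy = np.array([5.8824, 12.5, 0.9009, 1.7241, 7.6923,
                 16.6667, 4.1667, 6.6667, 8.3333, 16.6667])

XY = np.array([[4540134.2780, 382379.8964], [4539937.3890, 382629.7872],
               [4539979.7390, 381951.4785], [4540326.4610, 381895.0089],
               [4539216.3870, 382184.4352]])
xy = np.array([[4540124.0940, 382385.9980], [4539927.2250, 382635.8691],
               [4539969.5670, 381957.5705], [4540316.2940, 381901.0932],
               [4539206.2110, 382190.5278]])

xi_hat, e_XY, e_xy, s0_sq = gh_similarity_2d(
    XY, xy, Q_XY=np.diag(1.0 / p_XY), Q_xy=np.diag(1.0 / p_xy))
```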
5.2 Discussion of the results

Comparing the results for the transformation parameters from Tab. 9 and Tab. 11 and the residuals in the target system from Tab. 10 and Tab. 12, an obvious difference in the results can be detected. In order to make a deeper comparison of the results, the residuals for the coordinates in the start system as resulting from Akyilmaz (2007) are reconstructed using (24); cf. Tab. 13.
Tab. 13: Residuals in the start system for the GTLS solution

Point No.   Start system
            ẽ_xi [m]     ẽ_yi [m]
3            0.0001       0.0001
185          0.0001       0.0001
2796         0.0001       0.0001
2996         0.0000       0.0001
5005         0.0001       0.0001
Since the value of the objective function (26) for the GTLS solution is not given in Akyilmaz (2007), this value is computed from the residuals in Tab. 10 and Tab. 13 while taking into account the weight matrices P_XY and P_xy. This yields the value ẽ^T P ẽ = 0.002360, and the estimated variance component becomes σ̂_0² = 0.000393. A comparison with the value ẽ^T P ẽ = 0.001073 of the objective function, as obtained from the solution of the same adjustment problem in the iteratively linearized GH-Model, shows directly that the GTLS way of solution from Akyilmaz (2007) does not minimize the objective function (26).

Special attention should be paid to the residuals in Tab. 13. Obviously, the GTLS results in Akyilmaz (2007) represent a solution which assigns very small random errors to the coordinates in the start system. The fact that in Tab. 13 some small nonzero values appear in the fourth decimal place is probably due merely to some rounding effects, e.g. when multiplying the parameters ξ̂_2 resp. ξ̂_3, given to ten decimal places, with seven-place coordinate values. Furthermore, it should be noted that, in spite of this, the solution as given in Akyilmaz (2007) is not identical with the solution of Problem 1 from Section 2 ("LS solution") either, since it does not minimize the objective function (13) of the random errors (12).

In summary, the solution strategy developed in Section 3 makes it possible to also consider, without any problems, weight matrices for the coordinates in the start system as well as in the target system, similar to Schaffrin and Wieser (2009) for the affine transformation. Beside the diagonal matrices considered in this example, it is quite possible to introduce fully populated matrices as well. The GTLS solution as applied by Akyilmaz (2007) was never designed to minimize the sum of weighted squared residuals. Additionally, the coordinates in the start system do not obtain any corrections. Thus, the deficiencies of the GTLS solution procedure by Akyilmaz (2007), which had been indicated by Schaffrin (2008) already, can be regarded as confirmed.
6 Conclusions and outlook

After a short introduction into the TLS terminology in the context of Errors-In-Variables (EIV) Models, the example of a planar similarity transformation is discussed. Depending on whether the coordinates in the target system, the coordinates in the start system, or the coordinates in both systems are regarded as observations subject to random errors, three different problems result. All three can be solved appropriately by an adjustment according to the Method of Least-Squares, either in a standard GM-Model (Problems 1 and 2 of Section 2) or in a nonlinear GH-Model (Problem 3 of Section 2). Various solutions have been considered in the literature on mathematical statistics and, in recent times, also in the geodetic literature. The case in which the coordinates in both systems are observations subject to random errors, if treated as a TLS problem, may lead to an alternative algorithm. However, since in a TLS adjustment within an EIV-Model the same objective function is minimized as in an adjustment by the Method of Least-Squares within a nonlinear GH-Model, TLS adjustment may not be regarded as a new adjustment method per se, but rather as an additional possibility to formulate a new algorithm in the frame of the general Method of Least-Squares.

The discussion whether TLS yields "better" results than a LS adjustment is always meant to refer to the EIV-Model in case of TLS, and to the standard GM-Model in case of LS where the random error matrix is set to zero right away. Consequently, essentially two different models are compared, not two different adjustment methods. This can already be learned from Schaffrin and Snow (2010), and is in complete analogy to the way in which LS-collocation is just the old LS-adjustment when applied to a model with prior information.

The solution of the so-called TLS problem for the planar similarity transformation is demonstrated by means of an evaluation within an iteratively linearized GH-Model. In doing so, special attention should be paid to an appropriate linearization and iteration. Special caution is necessary here, since the treatment of the linearized GH-Model in many textbooks presents merely an approximate solution. In contrast, the elegance of the TLS algorithm consists in the fact that no iteration is needed.
Using two examples, it is shown that the STLS procedure by Felus and Schaffrin (2005) and the GTLS approach proposed by Akyilmaz (2007) for a planar similarity transformation do not minimize the chosen objective function that corresponds to the problem at hand. Based on a comparison of the numerical results as provided in the literature with the results from the iteratively linearized GH-Model, neither the STLS approach from Felus and Schaffrin (2005) nor the GTLS approach from Akyilmaz (2007) will produce the optimal answer right away. The GTLS technique favored by Akyilmaz (2007) must be dismissed, however, since it neither handles the weights properly nor maintains the very structure; the Cadzow step in the algorithm by Felus and Schaffrin (2005), in contrast, can be modified in such a way that it generates the optimal TLS solution; for more details see Schaffrin and Neitzel (2011).

Regarding the problem formulation treated in this contribution, it can be concluded that the Method of Least-Squares covers the case of TLS as well, something that the experts obviously knew all along. They only distinguish between different models to which the Method of Least-Squares is applied, or between different algorithms. Thus, by using an iteratively linearized GH-Model as presented here, the correct solution of the adjustment problem can be achieved in a reasonable way, although alternative algorithms are of interest, too. A treatment of the affine transformation in 2D, for which a very elegant "Multivariate Total Least-Squares" approach (MTLS) was developed in Schaffrin and Felus (2008), is possible with the solution strategy presented here as well. The basic decision has to be made as to whether the EIV-Model ought to be treated as such or within a nonlinear GH-Model. The user has the choice.

Acknowledgements: The author would like to acknowledge the support of a Feodor Lynen research fellowship from the Alexander von Humboldt Foundation (Germany), and the School of Earth Sciences at The Ohio State University (USA), with Prof. Schaffrin as his host.
References

Aitken AC (1935) On least squares and linear combinations of observations. Proc. Roy. Soc. Edinburgh 55: 42-48
Akyilmaz O (2007) Total Least Squares Solution of Coordinate Transformation. Surv Rev 39(303): 68-80
Benning W (2007) Statistics in Geodesy, Geoinformation and Civil Engineering (in German). 2nd edition, Herbert Wichmann Verlag, Heidelberg
Böck R (1961) Most General Formulation of Least-Squares Adjustment Computations (in German). Z. für Vermessungswesen 86: 43-45, 98-106
Felus Y, Schaffrin B (2005) Performing Similarity Transformations Using the Errors-in-Variables-Model. In: Proc. of the ASPRS Meeting, Washington, D.C., May 2005, on CD
Gauss CF (1809) Theoria motus corporum coelestium in sectionibus conicis solem ambientium. Hamburg, F. Perthes und I.H. Besser
Golub GH, van Loan C (1980) An Analysis of the Total Least-Squares Problem. SIAM Journal on Numerical Analysis 17(6): 883-893
Helmert FR (1924) Adjustment Computation with the Least-Squares Method (in German). 3rd edition, Teubner-Verlag, Leipzig, Berlin
Huffel S van, Vandewalle J (1991) The Total Least-Squares Problem, Computational Aspects and Analysis. SIAM, Philadelphia
Lawson CL, Hanson RJ (1974) Solving Least-Squares Problems. Prentice Hall, Englewood Cliffs, New Jersey
Lenzmann L, Lenzmann E (2004) Rigorous Adjustment of the Nonlinear Gauss-Helmert Model (in German). Allgem. Verm.-Nachr. 111: 68-73
Mikhail EM, Gracie G (1981) Analysis and Adjustment of Survey Measurements. Van Nostrand Reinhold Company, New York
Neitzel F, Petrovic S (2008) Total Least-Squares (TLS) in the Context of Least-Squares Adjustment on the Example of Straight-Line Fitting (in German). Z. für Vermessungswesen 133: 141-148
Niemeier W (2008) Adjustment Computations (in German). 2nd edition, Walter de Gruyter, Berlin, New York
Petrovic S (2003) Parameter Estimation for Incomplete Functional Models in Geodesy (in German). German Geodetic Comm., Publ. No. C-563, Munich
Pope AJ (1972) Some Pitfalls to be Avoided in the Iterative Adjustment of Nonlinear Problems. Proceedings of the 38th Annual Meeting of the American Society of Photogrammetry, Washington, D.C., pp. 449-477
Schaffrin B, Lee I, Felus Y, Choi Y (2006) Total Least-Squares (TLS) for Geodetic Straight-line and Plane Adjustment. Boll Geod Sci Affini 65(3): 141-168
Schaffrin B (2008) Correspondence, Coordinate Transformation. Surv Rev 40(307): 102
Schaffrin B, Felus Y (2008) On the Multivariate Total Least-Squares Approach to Empirical Coordinate Transformations. Three Algorithms. J Geod 82(6): 373-383
Schaffrin B, Wieser A (2008) On weighted total least-squares adjustment for linear regression. J Geod 82(7): 415-421
Schaffrin B, Neitzel F (2011) Modifying Cadzow's algorithm to generate the TLS solution for Structured EIV-Models, submitted
Schaffrin B, Snow K (2010) Total least-squares regularization of Tykhonov type and an ancient racetrack in Corinth. Linear Alg. & Appls. 432(8): 2061-2076
Schaffrin B, Wieser A (2009) Empirical Affine Reference Frame Transformations by Weighted Multivariate TLS Adjustment. In: Drewes H (Ed.) International Association of Geodesy Symposia Volume 134, Geodetic Reference Frames, IAG Symposium Munich, Germany, 9-14 October 2006: 213-218
Schwarz HR, Rutishauser H, Stiefel E (1968) Numerics of Symmetric Matrices (in German). B. G. Teubner, Stuttgart
Wolf PR, Ghilani CD (1997) Adjustment Computations: Statistics and Least Squares in Surveying and GIS. Wiley, New York