Journal of Geodesy (2003) 76: 605–616 DOI 10.1007/s00190-002-0287-0
Explicit solution of the overdetermined three-dimensional resection problem

J. L. Awange, E. W. Grafarend
Department of Geodesy and GeoInformatics, University of Stuttgart, Geschwister-Scholl-Str. 24D, 70174 Stuttgart, Germany
e-mail: [email protected], [email protected]; Tel.: +49-711-1213474; Fax: +49-711-1213285
Received: 29 August 2001 / Accepted: 19 July 2002
Abstract. Several procedures for solving the three-dimensional resection problem in a closed form have already been presented. In the present contribution, the overdetermined three-dimensional resection problem is solved in a closed form in two steps. In step one, a combinatorial minimal subset of observations is constructed and rigorously converted into station coordinates by means of the Groebner basis algorithm or the multipolynomial resultant algorithm. The combinatorial solution points in a polyhedron are then reduced to their barycentre in step two by means of their weighted mean. Such a weighted mean of the polyhedron points in R^3 is generated via the error propagation law/variance-covariance propagation. The fast nonlinear adjustment algorithm was proposed by C. F. Gauss, whose work was published posthumously, and C. G. I. Jacobi. The algorithm, here referred to as the Gauss–Jacobi combinatorial algorithm, solves the overdetermined three-dimensional resection problem in a closed form without reverting to iterative or linearization procedures. Compared to the actual values, the obtained results are more accurate than those obtained from the closed-form solution based on the minimum of three known stations.

Keywords: Groebner basis – Multipolynomial resultants – Gauss–Jacobi combinatorial algorithm – Overdetermined three-dimensional resection
1 Introduction

The search for the solution of the three-dimensional (3-D) resection problem originates with the work of the German mathematician Grunert, whose publication appeared in the year 1841. Grunert (1841) solved the
3-D resection problem – what was then known as the 'Pothenot' problem – in a closed form by solving an algebraic equation of degree four. The problem had hitherto been solved by iterative means, mainly in photogrammetry and computer vision. Procedures developed later for solving the 3-D resection problem revolved around improvements of the approach of Grunert (1841), with the aim of searching for the optimal means of distance determination. Whereas Grunert (1841) solves the problem by a substitution approach in three steps, the more recent desire has been to solve the distance equations in fewer steps, as exemplified in the works of Finsterwalder and Scheufele (1937), Merritt (1949), Fischler and Bolles (1981), Linnainmaa et al. (1988) and Grafarend et al. (1989). Other research done on the subject of resection includes the works of Müller (1925), Grafarend and Kunz (1965), Horaud et al. (1989), Lohse (1990), and Grafarend and Shan (1997a, b). Extensive reviews of some of the above procedures are presented by Müller (1925) and Haralick et al. (1991, 1994). For the planar resection problem, solutions have been proposed by e.g. Werner (1913), Brandstätter (1974) and Van Mierlo (1988). Whereas closed-form procedures for solving the minimal 3-D resection problem already exist, the case is not the same for overdetermined 3-D resection. The overdetermined planar resection problem has been treated graphically by Hammer (1896), Runge (1900) and Werkmeister (1916, 1920). Gotthardt (1974) dealt with the overdetermined 2-D resection where more than four points were considered, with the aim of studying the critical configuration that would yield a solution. This work was later extended by Killian (1990). A special case of an overdetermined 2-D resection has also been considered by Bähr (1991), who uses six known stations and proposes the measurement of three horizontal angles which are related to the two unknown coordinates by nonlinear equations. By adopting approximate coordinates of the unknown point, an iterative adjustment procedure is performed to obtain the improved 2-D coordinates of the unknown point based on the coordinate system of the six known stations. A contribution to the work on the overdetermined 2-D resection problem has been presented by Rinner (1962).
We present in this contribution the Gauss–Jacobi combinatorial algorithm, which solves the overdetermined 3-D resection problem in a closed form. The Gauss–Jacobi combinatorial procedure uses the algebraic procedures of multipolynomial resultants or Groebner bases to solve the equations contained in the minimal combinatorial sets and to give closed-form solutions (see e.g. Awange and Grafarend submitted a, b). Once the algebraic techniques of multipolynomial resultants or Groebner bases have been applied to solve explicitly the nonlinear 3-D resection equations (Grunert's equations and the Bogenschnitt equations, as in Awange and Grafarend submitted a, b) within a minimal combinatorial subset, the combinatorial solution points in a polyhedron are reduced to their barycentre in step two by means of their weighted mean. The advantage is that these algebraic algorithms have already been implemented in algebraic software such as MATHEMATICA and MAPLE.

We organize the present study as follows: in Sect. 2 we present the Gauss–Jacobi combinatorial algorithm, while Sect. 3 considers its application to the solution of the test network Stuttgart Central.

2 Gauss–Jacobi combinatorial algorithm

In this section we present the Gauss–Jacobi combinatorial algorithm, which is neither iterative nor requires linearization of the nonlinear observation equations for the solution of the nonlinear Gauss–Markov model. Linearization is permitted only for the nonlinear error propagation/variance-covariance propagation in order to generate the dispersion matrix (i.e. the second central moments). We start by stating the Gauss–Jacobi combinatorial lemma in Lemma 2.1 and refer to Wellisch (1910, pp. 46–47) and Hornoch (1950) for the proof that the results of the lemma coincide with those of least-squares (LS) adjustment for linear cases. Theorem 2.1, which we present thereafter, allows the application of the Gauss–Jacobi combinatorial algorithm to the solution of nonlinear geodetic observation equations once they have been converted into algebraic (polynomial) equations. We state the Gauss (Awange 2002, Appendix A-4) and Jacobi (1841) combinatorial lemma as follows.

Lemma 2.1 (Gauss–Jacobi combinatorial). For n algebraic observation equations in m unknowns, e.g.

a_1 x + b_1 y - y_1 = 0
a_2 x + b_2 y - y_2 = 0
a_3 x + b_3 y - y_3 = 0     (1)

for the determination of the coordinates x and y of the unknown point P, y_i | i ∈ {1, 2, ..., n} being the observables and a_i, b_i | i ∈ {1, 2, ..., n} being the elements of the design matrix A ∈ R^{n×m}, there exists no solution set {x, y} from any combinatorial pair of the equations above that satisfies the entire system of equations. This is because the solution obtained from each combinatorial pair of equations differs from the others owing to the unavoidable random measuring errors. If the solutions from the pairs of combinatorial equations are designated x_{1,2}, x_{2,3}, ... and y_{1,2}, y_{2,3}, ..., with the subscripts indicating the combinatorials, then the combined solution is the weighted mean

x = (p_{1,2} x_{1,2} + p_{2,3} x_{2,3} + ...) / (p_{1,2} + p_{2,3} + ...)
y = (p_{1,2} y_{1,2} + p_{2,3} y_{2,3} + ...) / (p_{1,2} + p_{2,3} + ...)     (2)

with p_{1,2}, p_{2,3}, ... being the weights of the combinatorial solutions, given by the squares of the determinants as

p_{1,2} = (a_1 b_2 - a_2 b_1)^2
p_{2,3} = (a_2 b_3 - a_3 b_2)^2
...     (3)
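To make the lemma concrete, the following Python sketch (an illustration, not the authors' code) forms all combinatorial pairs of three hypothetical linear observation equations, solves each pair, and combines the pairwise solutions with the squared-determinant weights of Eq. (3); the coefficients and observations are invented for the example.

```python
import itertools
import numpy as np

# Hypothetical observation equations a_i*x + b_i*y - y_i = 0, cf. Eq. (1)
a = np.array([1.0, 2.0, 1.0])
b = np.array([1.0, -1.0, 3.0])
yobs = np.array([3.02, 0.98, 7.01])   # noisy observations of x + y, 2x - y, x + 3y

solutions, weights = [], []
for i, j in itertools.combinations(range(3), 2):
    A_pair = np.array([[a[i], b[i]],
                       [a[j], b[j]]])
    det = a[i] * b[j] - a[j] * b[i]
    solutions.append(np.linalg.solve(A_pair, yobs[[i, j]]))  # pairwise solution (x_ij, y_ij)
    weights.append(det ** 2)                                 # weight p_ij of Eq. (3)

solutions, weights = np.array(solutions), np.array(weights)
x_bar, y_bar = weights @ solutions / weights.sum()           # weighted mean of Eq. (2)

# For comparison: ordinary least-squares solution of the full system;
# it should coincide with the combinatorial result for the equal-weight linear case.
A_full = np.column_stack([a, b])
x_ls, y_ls = np.linalg.lstsq(A_full, yobs, rcond=None)[0]
print(x_bar, y_bar)
print(x_ls, y_ls)
```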
Next, we state Theorem 2.1, which allows the solution of nonlinear equations in geodesy.

Theorem 2.1. Given algebraic (polynomial) observational equations (n observations, where n is the dimension of the observation space Y) of order l in m variables (unknowns) (m is the dimension of the parameter space X), the application of the least-squares solution (LESS) to the algebraic observation equations gives (2l - 1) as the order of the resulting set of nonlinear algebraic normal equations. There exist m normal equations of polynomial order (2l - 1) to be solved.

Proof. Given nonlinear algebraic equations f_i ∈ k{ξ_1, ..., ξ_m} (where k is a polynomial ring) expressed as

f_1 ∈ k{ξ_1, ..., ξ_m}
f_2 ∈ k{ξ_1, ..., ξ_m}
...
f_n ∈ k{ξ_1, ..., ξ_m}     (4)

and the order considered as l, we write the objective function to be minimized as

||f||^2 = f_1^2 + ... + f_n^2,   ∀ f_i ∈ k{ξ_1, ..., ξ_m}     (5)

and obtain the partial derivatives (first derivatives) of Eq. (5) with respect to the unknown variables {ξ_1, ..., ξ_m}. The order of Eq. (5), which is 2l, then reduces to (2l - 1) upon differentiating the objective function with respect to the variables ξ_1, ..., ξ_m, thus resulting in m normal equations of polynomial order (2l - 1). □

2.1 Example 1 (ranging)

For distance equations converted into algebraic form by squaring (cf. Grafarend and Schaffrin 1989), the order of the polynomials in the algebraic observational equations is l = 2. If we take the 'distances squared', a necessary procedure in order to make the observational equations 'algebraic' or 'polynomial', and implement LESS, the objective function, which is of order 2l = 4, reduces by one to order 3 upon differentiating once. The normal equations are thus of order 2l - 1 = 3, as expected.
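A quick symbolic check of Example 1 (an illustrative sketch, not part of the original paper): for squared-distance observation equations the objective function has total degree 4 and each normal equation has degree 3. The station coordinates and distances below are arbitrary placeholders.

```python
import sympy as sp

X, Y, Z = sp.symbols('X Y Z')            # unknown point; observation equations of order l = 2
stations = [(0, 0, 0), (10, 0, 0), (0, 10, 0), (3, 4, 12)]   # placeholder coordinates
dists = [15.0, 12.0, 11.0, 9.0]                               # placeholder observed distances

# Algebraic (squared) distance equations f_i of order l = 2
f = [(X - x)**2 + (Y - y)**2 + (Z - z)**2 - s**2 for (x, y, z), s in zip(stations, dists)]

objective = sum(fi**2 for fi in f)                         # Eq. (5), order 2l = 4
normal_eqs = [sp.diff(objective, v) for v in (X, Y, Z)]    # m = 3 normal equations

print(sp.Poly(sp.expand(objective), X, Y, Z).total_degree())              # -> 4
print([sp.Poly(sp.expand(ne), X, Y, Z).total_degree() for ne in normal_eqs])  # -> [3, 3, 3]
```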
The significance of Theorem 2.1 above is that, by using the Gauss–Jacobi combinatorial approach to solve the nonlinear Gauss–Markov model, all observation equations of geodetic interest can be converted into 'algebraic' or 'polynomial' equations. With Theorem 2.1 allowing us to convert nonlinear geodetic equations into algebraic (polynomial) ones, we proceed to execute the Gauss–Jacobi combinatorial algorithm along the lines of Lemma 2.1 in two steps as follows.

Step 1. Combinatorial minimal subsets of observations are constructed and rigorously solved by means of the multipolynomial resultant or Groebner basis (Awange and Grafarend submitted a, b).

Step 2. The combinatorial solution points of Step 1 are reduced to their final adjusted values by means of an adjustment procedure in which the Best Linear Uniformly Unbiased Estimator (BLUUE) is used to estimate the vector of fixed parameters within the linear Gauss–Markov model, with the dispersion matrix of the real-valued random vector of pseudo-observations from Step 1 generated via the nonlinear error propagation law, also known in this case as the nonlinear variance-covariance propagation.

2.1.1 Construction of the minimal combinatorial subsets (nCm)

Since n > m, we construct the minimal combinatorial subsets comprising m equations (i.e. nCm combinations) solvable in closed form using the multipolynomial resultant or Groebner basis (Awange and Grafarend submitted a, b).

2.1.2 Adjustment of the combinatorial minimal subset solutions

Once the combinatorial minimal subsets have been solved using the multipolynomial resultant or Groebner basis approach, the resulting sets of solutions are considered as pseudo-observations. For each combinatorial, the obtained minimal subset solutions, considered as pseudo-observations, are used as the approximate values to generate the dispersion matrix via the nonlinear error propagation law/variance-covariance propagation (see e.g. Grafarend and Schaffrin 1993, pp. 469–471) as follows. From the nonlinear geodetic observation equations that have been converted into algebraic (polynomial) form, the combinatorial minimal subsets will consist of polynomials f_1, ..., f_m ∈ k[x_1, ..., x_m], with {x_1, ..., x_m} being the unknown variables to be determined and {y_1, ..., y_n} the known variables comprising the observations or pseudo-observations. We write the polynomials as

f_1 := g_1(x_1, ..., x_m; y_1, ..., y_n) = 0
f_2 := g_2(x_1, ..., x_m; y_1, ..., y_n) = 0
...
f_m := g_m(x_1, ..., x_m; y_1, ..., y_n) = 0     (6)
which is expressed in vector form as

f := g(x, y) = 0     (7)

where the unknown variables {x_1, ..., x_m} are placed in a vector x and the known variables {y_1, ..., y_n} are placed in a vector y. Next, we implement the error propagation from the observations (pseudo-observations) {y_1, ..., y_n} to the parameters {x_1, ..., x_m} that are to be explicitly determined, characterized by the first moments, the expectations E{x} = μ_x and E{y} = μ_y, as well as the second moments, the variance-covariance/dispersion matrices D{x} = Σ_x and D{y} = Σ_y. From Grafarend and Schaffrin (1993, pp. 470–471) we have, up to nonlinear terms,

D{x} = Σ_x = J_x^{-1} J_y Σ_y J_y' (J_x^{-1})'     (8)

with J_x, J_y being the partial derivatives of Eq. (7) with respect to x and y, respectively, at the Taylor points (μ_x, μ_y). The approximate values of the unknown parameters {x_1, ..., x_m} ∈ x appearing in the Jacobi matrices J_x, J_y are obtained from the multipolynomial resultant or Groebner basis solution of the nonlinear system of Eqs. (6). Given J_i = J_{x_i}^{-1} J_{y_i} from the ith combination and J_j = J_{x_j}^{-1} J_{y_j} from the jth combination, the correlation between the ith and jth combinations is given by

Σ_ij = J_j Σ_{y_j y_i} J_i'     (9)
The variance-covariance submatrices of the individual combinatorials Σ_1, Σ_2, Σ_3, ..., Σ_k (where k is the number of combinations) obtained via Eq. (8), together with the correlations between combinatorials obtained from Eq. (9), form the variance-covariance/dispersion matrix

    | Σ_1   Σ_12  ...  Σ_1k |
    | Σ_21  Σ_2   ...  Σ_2k |
Σ = | ...   ...   Σ_3  ...  |     (10)
    | ...   ...   ...  ...  |
    | Σ_k1  ...   ...  Σ_k  |

for the entire set of k combinations. The obtained dispersion matrix Σ is then used in the linear Gauss–Markov model [Eq. (15)] to obtain the estimates ξ̂ of the unknown parameters ξ, with the combinatorial solution points in a polyhedron considered as pseudo-observations in the vector y of observations, while the design matrix A consists of nCm {m × m} identity submatrices as the coefficients of the unknowns [e.g. Eq. (13)]. In order to understand the adjustment process, we consider the following example.
2.2 Example 2

Consider a nonlinear problem with four observations in three unknowns. Let the observations be given by {y_1, y_2, y_3, y_4}, leading to four combinations giving the solutions z_I(y_1, y_2, y_3), z_II(y_2, y_3, y_4), z_III(y_1, y_3, y_4) and z_IV(y_1, y_2, y_4). If the solutions are placed in a vector z_J = [z_I  z_II  z_III  z_IV]', the adjustment model is then defined as

E{z_J} = I_{12×3} ξ_{3×1},   D{z_J} from variance-covariance propagation     (11)

Let

ξ̂ = L z_J   subject to   z_J := [z_I  z_II  z_III  z_IV]' ∈ R^{12×1}     (12)

such that the postulations tr D{ξ̂} = min, i.e. 'best', and E{ξ̂} = ξ for all ξ ∈ R^m, i.e. 'uniformly unbiased', hold. We then have from Eqs. (10), (11) and (12) the result

ξ̂ = (I'_{3×12} Σ_{z_J}^{-1} I_{12×3})^{-1} I'_{3×12} Σ_{z_J}^{-1} z_J     (13)
where the estimator matrix satisfies

L̂ = arg{ tr D{ξ̂} = tr(L Σ_{z_J} L') = min | UUE }

The dispersion matrix D{ξ̂} of the estimates ξ̂ is obtained via Eq. (16) below. In Appendix A we present the error propagation based on the nonlinear random effect model (multivariate), where we illustrate the effect of the bias term. The shift from the arithmetic weighted mean to the use of a linear Gauss–Markov model is necessitated because we do not readily have the weights of the minimal combinatorial subsets but only their dispersion, which we obtain via error propagation/variance-covariance propagation. If we employ the equivalence theorem of Grafarend and Schaffrin (1993, pp. 339–341), an adjustment using a linear Gauss–Markov model instead of the weighted arithmetic mean in Lemma 2.1 is permissible. We define the linear Gauss–Markov model as follows.

Definition 2.1 (special linear Gauss–Markov model). Given a real n × 1 random vector y ∈ R^n of observations, a real m × 1 vector ξ ∈ R^m of unknown fixed parameters over a real n × m coefficient matrix A ∈ R^{n×m}, and a real n × n positive-definite dispersion matrix Σ, the functional model

A ξ = E{y},   E{y} ∈ R(A),   rk A = m,   Σ = D{y},   rk Σ = n     (14)

is called a special linear Gauss–Markov model with full rank. The unknown vector ξ of fixed parameters in the special linear Gauss–Markov model is usually estimated by BLUUE as

ξ̂ = (A' Σ^{-1} A)^{-1} A' Σ^{-1} y     (15)

with its regular dispersion matrix

D{ξ̂} = (A' Σ^{-1} A)^{-1}     (16)
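The following Python sketch (illustrative only; the pseudo-observations and their dispersion are invented) assembles the Example 2 adjustment: four combinatorial solution vectors z_I, ..., z_IV are stacked, the design matrix is the stacked identity I_{12×3}, and ξ̂ and D{ξ̂} follow from Eqs. (15) and (16).

```python
import numpy as np

m, k = 3, 4                              # three unknowns, four combinatorial solutions
rng = np.random.default_rng(0)

# Invented combinatorial solution points z_I ... z_IV (pseudo-observations)
z = np.array([[10.02, 20.01, 29.98],
              [10.05, 19.97, 30.03],
              [ 9.99, 20.04, 30.01],
              [10.01, 20.00, 29.97]])
z_J = z.reshape(-1)                      # 12 x 1 vector of Eq. (12)

# Invented block-diagonal dispersion matrix; in the paper it comes from the
# nonlinear variance-covariance propagation of Eqs. (8)-(10).
blocks = [np.diag(rng.uniform(1e-4, 5e-4, m)) for _ in range(k)]
Sigma = np.zeros((m * k, m * k))
for i, B in enumerate(blocks):
    Sigma[i*m:(i+1)*m, i*m:(i+1)*m] = B

A = np.vstack([np.eye(m)] * k)           # stacked-identity design matrix I_{12x3}, Eq. (11)

W = np.linalg.inv(Sigma)
N = A.T @ W @ A
xi_hat = np.linalg.solve(N, A.T @ W @ z_J)   # BLUUE, Eq. (15) / Eq. (13)
D_xi = np.linalg.inv(N)                      # dispersion of the estimate, Eq. (16)
print(xi_hat)
print(np.sqrt(np.diag(D_xi)))
```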
The dispersion matrix (variance-covariance matrix) Σ is unknown and is obtained by means of estimators of the type MINQUE, BIQUUE or BIQE, as in Rao (1967, 1971, 1973, 1978), Rao and Kleffe (1979), Schaffrin
(1983) and Grafarend (1985). From Eq. (10), the obtained dispersion matrix Σ is then used in the linear Gauss–Markov model of Eq. (15) to obtain the estimates ξ̂ of the unknown parameters ξ, with the minimal combinatorial solutions considered as pseudo-observations in the vector y of observations, while the design matrix A consists of nCm m × m identity submatrices as the coefficients of the unknowns, as in Eq. (13). The dispersion matrix D{ξ̂} of the estimates ξ̂ is obtained via Eq. (16). In the event that A' Σ^{-1} A is not regular (i.e. A has a rank deficiency), the rank deficiency can be overcome by procedures such as those presented by, among others, Mittermayer (1972), Grafarend and Schaffrin (1974, 1993, pp. 107–165), Brunner (1979), Perelmuter (1979), Meissl (1982), Grafarend and Sanso (1985) and Koch (1999, pp. 181–197). The algorithm described above is here referred to as the Gauss–Jacobi combinatorial algorithm. In Appendix A we present, for completeness, the error propagation based on the nonlinear random effect model (multivariate), where we illustrate the effect of the bias term.

3 Test network 'Stuttgart Central'

We consider in this section the closed-form solution of the overdetermined 3-D resection problem using the Gauss–Jacobi combinatorial algorithm introduced in Sect. 2. The test network 'Stuttgart Central' shown in Fig. 1 is selected for the study. First, we consider the observations of the test network 'Stuttgart Central' of the types GPS coordinates, horizontal directions T_i and vertical directions B_i that will be used in the experiment.

3.1 Observations

The following experiment was performed in the centre of Stuttgart on one of the pillars of the University buildings along Keplerstrasse 11, as depicted in Fig. 1. The test network 'Stuttgart Central' consisted of eight GPS points, listed in Table 1. A theodolite was stationed at pillar K1, whose astronomical longitude Λ_Γ and astronomical latitude Φ_Γ were known from previous astrogeodetic observations made by the Department of Geodesy and GeoInformatics, Stuttgart University. Since theodolite observations of the types horizontal directions T_i and vertical directions B_i from the pillar K1 to the target points i, i = 1, 2, ..., 7, were only partially available, we decided to simulate the horizontal and vertical directions from the given values of {Λ_Γ, Φ_Γ} as well as the Cartesian coordinates of the station point (X, Y, Z) and the target points (X_i, Y_i, Z_i). In detail, the directional parameters {Λ_Γ, Φ_Γ} of the local gravity vector were adopted from the astrogeodetic observations reported by Kurz (1996, p. 46) with a root-mean-square (RMS) error σ_Λ = σ_Φ = 10″. Table 1 contains the (X, Y, Z) coordinates obtained from a GPS survey of the test network 'Stuttgart Central', in particular with RMS errors (σ_X, σ_Y, σ_Z), neglecting the covariances (σ_XY, σ_YZ, σ_ZX).
The spherical coordinates of the relative position vector, namely of the coordinate differences (x_i - x, y_i - y, z_i - z), are called horizontal directions T_i, vertical directions B_i and distances S_i and are given in Table 2. The standard deviations/RMS errors were fixed to σ_T = 6″, σ_B = 6″. Such RMS errors can be obtained on the basis of a proper refraction model. Since the horizontal and vertical directions of Table 2 are simulated data with zero noise level, we used the random generator randn in MATLAB version 5.3 (see e.g. Hanselman and Littlefield 1997, pp. 84, 144) to produce additional observational data sets within the framework of the given RMS errors. For each observable of type T_i and B_i, 30 randomly simulated values were generated and the mean taken. Eleven sets of observations were generated in this way. We present here only the observations of the first data set. Let us refer to the observational data set {T_i, B_i}, i = 1, 2, ..., 7, of Table 3, which is accompanied by the RMS errors of the individual randomly generated observations as well as by the differences ΔT_i := T_i - T_i(generated), ΔB_i := B_i - B_i(generated). Such differences (ΔT_i, ΔB_i) indicate the difference between the ideal values of Table 2 and those randomly generated.

Fig. 1. The test network 'Stuttgart Central'

Table 2. Ideal spherical coordinates of the relative position vector in the local horizontal reference frame F: spatial distance, horizontal direction, vertical direction

Station observed from K1 | Distances (m) | Horizontal directions (gon) | Vertical directions (gon)
Schlossplatz             | 566.8635      | 52.320062                   | -6.705164
Haussmanstr.             | 1324.2380     | 107.160333                  | 0.271038
Eduardpfeiffer           | 542.2609      | 224.582723                  | 4.036011
Lindenmuseum             | 364.9797      | 293.965493                  | -8.398004
Liederhalle              | 430.5286      | 336.851237                  | -6.941728
Dach LVM                 | 400.5837      | 347.702846                  | -1.921509
Dach FH                  | 269.2309      | 370.832476                  | -6.686951
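The noisy direction sets described above can be reproduced in outline as follows (a Python/numpy sketch standing in for the MATLAB randn procedure described in the text; the conversion factor assumes the gon convention, 400 gon = 360°, and the seed is arbitrary):

```python
import numpy as np

GON_PER_ARCSEC = 1.0 / 3240.0          # 1'' = (400/360)/3600 gon = 1/3240 gon
sigma_gon = 6.0 * GON_PER_ARCSEC       # sigma_T = sigma_B = 6'' expressed in gon

# Ideal horizontal/vertical directions of Table 2 (gon), Schlossplatz ... Dach FH
T_ideal = np.array([52.320062, 107.160333, 224.582723, 293.965493,
                    336.851237, 347.702846, 370.832476])
B_ideal = np.array([-6.705164, 0.271038, 4.036011, -8.398004,
                    -6.941728, -1.921509, -6.686951])

rng = np.random.default_rng(1)         # illustrative seed
n_samples = 30                         # 30 simulated readings per observable, then averaged

T_noisy = T_ideal + sigma_gon * rng.standard_normal((n_samples, T_ideal.size))
B_noisy = B_ideal + sigma_gon * rng.standard_normal((n_samples, B_ideal.size))

T_set = T_noisy.mean(axis=0)           # one generated observation set (cf. Table 3)
B_set = B_noisy.mean(axis=0)
dT = T_ideal - T_set                   # differences Delta T_i as reported in Table 3
dB = B_ideal - B_set
print(np.round(dT, 6), np.round(dB, 6))
```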
The observations are thus designed such that, by observing the other seven GPS stations, the orientation of the local level reference frame F, whose origin is station K1, with respect to the global reference frame is obtained. The direction to Schlossplatz is chosen as the zero direction of the theodolite, and this leads to the determination of the third component Σ_Γ of the 3-D orientation parameters. Observations of the types horizontal directions T_i and vertical directions B_i are measured to each of the GPS target points i. The spatial distances S_i(X, X_i) = ||X_i - X||_2 are readily obtained from the observed horizontal directions T_i and vertical directions B_i using the algebraic computational techniques of Groebner bases or multipolynomial resultants discussed in Awange and Grafarend (submitted a, b). The following symbols have been used: σ_X, σ_Y, σ_Z are the standard errors of the GPS Cartesian coordinates; the covariances σ_XY, σ_YZ, σ_ZX are neglected; σ_T, σ_B are the standard deviations of the horizontal and vertical directions, respectively, after an adjustment; and ΔT, ΔB are the magnitudes of the noise on the horizontal and vertical directions, respectively.

3.2 Experiment

In Awange and Grafarend (submitted a, b) we presented the multipolynomial resultant and Groebner basis approaches that can be used to solve the 3-D resection problem for position in a closed form, and referred to Awange (1999) and Grafarend and Awange (2000) for the solution of the unknown 3-D orientation parameters of the unknown point K1.
Table 1. GPS coordinates in the global reference frame F: (X, Y, Z), (X_i, Y_i, Z_i), i = 1, 2, ..., 7

Station name   | X (m)          | Y (m)        | Z (m)          | σ_X (m) | σ_Y (m) | σ_Z (m)
Dach K1        | 4 157 066.1116 | 671 429.6655 | 4 774 879.3704 | 0.00107 | 0.00106 | 0.00109
Schlossplatz   | 4 157 246.5346 | 671 877.0281 | 4 774 581.6314 | 0.00076 | 0.00076 | 0.00076
Haussmanstr.   | 4 156 749.5977 | 672 711.4554 | 4 774 981.5459 | 0.00177 | 0.00159 | 0.00161
Eduardpfeiffer | 4 156 748.6829 | 671 171.9385 | 4 775 235.5483 | 0.00193 | 0.00184 | 0.00187
Lindenmuseum   | 4 157 066.8851 | 671 064.9381 | 4 774 865.8238 | 0.00138 | 0.00129 | 0.00138
Liederhalle    | 4 157 266.6181 | 671 099.1577 | 4 774 689.8536 | 0.00129 | 0.00128 | 0.00134
Dach LVM       | 4 157 307.5147 | 671 171.7006 | 4 774 690.5691 | 0.00020 | 0.00010 | 0.00030
Dach FH        | 4 157 244.9515 | 671 338.5915 | 4 774 699.9070 | 0.00280 | 0.00150 | 0.00310
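As a consistency check (an illustrative Python sketch, not part of the original paper), the spatial distances of Table 2 can be recovered, up to the rounding of the printed coordinates, from the GPS positions in Table 1 as ||X_i - X_K1||:

```python
import numpy as np

# GPS coordinates transcribed from Table 1 (metres)
K1 = np.array([4157066.1116, 671429.6655, 4774879.3704])
targets = {
    "Schlossplatz":   [4157246.5346, 671877.0281, 4774581.6314],
    "Haussmanstr.":   [4156749.5977, 672711.4554, 4774981.5459],
    "Eduardpfeiffer": [4156748.6829, 671171.9385, 4775235.5483],
    "Lindenmuseum":   [4157066.8851, 671064.9381, 4774865.8238],
    "Liederhalle":    [4157266.6181, 671099.1577, 4774689.8536],
    "Dach LVM":       [4157307.5147, 671171.7006, 4774690.5691],
    "Dach FH":        [4157244.9515, 671338.5915, 4774699.9070],
}

for name, xyz in targets.items():
    dist = np.linalg.norm(np.array(xyz) - K1)   # should agree with the distance column of Table 2
    print(f"{name:15s} {dist:10.4f} m")
```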
Table 3. Randomly generated spherical coordinates of the relative position vector: horizontal direction T_i and vertical direction B_i, i = 1, 2, ..., 7; RMS errors of the individual observations; differences ΔT_i := T_i - T_i(generated), ΔB_i := B_i - B_i(generated) with respect to the ideal data (T_i, B_i) of Table 2; first data set, set 1

Station observed from K1 | Horizontal directions (gon) | Vertical directions (gon) | σ_T (gon)  | σ_B (gon)  | ΔT (gon)   | ΔB (gon)
Schlossplatz             | 0.000000   | -6.705138 | 0.0025794 | 0.0024898 | -0.000228 | -0.000039
Haussmanstr.             | 54.840342  | 0.271005  | 0.0028756 | 0.0027171 | -0.000298 | 0.000033
Eduardpfeiffer           | 172.262141 | 4.035491  | 0.0023303 | 0.0022050 | 0.000293  | 0.000520
Lindenmuseum             | 241.644854 | -8.398175 | 0.0025255 | 0.0024874 | 0.000350  | 0.000171
Liederhalle              | 284.531189 | -6.942558 | 0.0020781 | 0.0022399 | -0.000024 | 0.000830
Dach LVM                 | 295.382909 | -1.921008 | 0.0029555 | 0.0024234 | 0.000278  | -0.000275
Dach FH                  | 318.512158 | -6.687226 | 0.0026747 | 0.0024193 | -0.000352 | 0.000500
If superfluous observations are available, made possible by the availability of several known points as in the case of the test network 'Stuttgart Central', the closed-form 3-D resection procedure for the minimal case gives way to the overdetermined 3-D resection case. In this case, therefore, all the known GPS network stations (Haussmanstr., Eduardpfeiffer, Lindenmuseum, Liederhalle, Dach LVM, Dach FH and Schlossplatz) of the test network 'Stuttgart Central' in Fig. 1 are used. For the 11 observational data sets (only the first data set is shown, in Table 3), we proceed in six steps as follows.

Step 1 (construction of minimal combinatorial subsets for the determination of distances): from the seven stations of the test network 'Stuttgart Central', 35 minimal combinatorials are formed, and for each minimal combinatorial simplex the distances are computed from the polynomials derived from either the Groebner basis or the multipolynomial resultant approach presented in Awange and Grafarend (submitted a, b). Each combinatorial minimal subset results in three distances, thus giving rise to a total of (3 × 35) = 105 distances, which we consider in the subsequent steps as pseudo-observations. The computed distances S_i link the known points P_i | i = 1, ..., 7 to the unknown point P (K1) in Fig. 1.
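The bookkeeping of Step 1 can be sketched in a few lines of Python (illustrative only; the closed-form per-subset distance computation itself is not reproduced here):

```python
from itertools import combinations
from math import comb

stations = ["Schlossplatz", "Haussmanstr.", "Eduardpfeiffer", "Lindenmuseum",
            "Liederhalle", "Dach LVM", "Dach FH"]

# 7C3 = 35 minimal combinatorial subsets; each yields 3 station-to-K1 distances,
# i.e. 3 * 35 = 105 pseudo-observed distances for the adjustment of Step 3.
subsets = list(combinations(stations, 3))
print(len(subsets), 3 * len(subsets))    # -> 35 105

# each known station appears in 6C2 = 15 of the 35 subsets, so every distance S_i
# is estimated 15 times before the adjustment (7 * 15 = 105)
print(comb(6, 2))                        # -> 15
```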
Step 2 (nonlinear error propagation to determine the dispersion matrix Σ): in this step the dispersion matrix Σ is sought. This is achieved via the error propagation law/variance-covariance propagation for each of the combinatorial sets j = 1, ..., 35 above. The closed-form observational equations for the first combinatorial subset j = 1 are written algebraically (Awange and Grafarend submitted a, b) as

f_1 := S_1^2 + S_2^2 - 2 S_1 S_2 (cos B_1 cos B_2 cos(T_2 - T_1) + sin B_1 sin B_2) - S_12^2
f_2 := S_2^2 + S_3^2 - 2 S_2 S_3 (cos B_2 cos B_3 cos(T_3 - T_2) + sin B_2 sin B_3) - S_23^2     (17)
f_3 := S_1^2 + S_3^2 - 2 S_1 S_3 (cos B_1 cos B_3 cos(T_1 - T_3) + sin B_1 sin B_3) - S_31^2

where S_ij | i, j ∈ {1, 2, 3}, i ≠ j, are the distances between the known GPS stations of the test network 'Stuttgart Central', S_k | k ∈ {1, 2, 3} are the unknown distances from the unknown GPS point P ∈ E^3 to the known GPS stations P_i ∈ E^3 | i ∈ {1, 2, 3}, and T_i, B_i | i ∈ {1, 2, 3} are the LPS observables of the types horizontal and vertical directions from the unknown point P ∈ E^3 to the known GPS stations P_i ∈ E^3 | i ∈ {1, 2, 3}, respectively.
From Eq. (17) and with Eq. (8) we have the Jacobi matrices

      | ∂f_1/∂S_1  ∂f_1/∂S_2  ∂f_1/∂S_3 |
J_x = | ∂f_2/∂S_1  ∂f_2/∂S_2  ∂f_2/∂S_3 |     (18)
      | ∂f_3/∂S_1  ∂f_3/∂S_2  ∂f_3/∂S_3 |

and

J_y = [ ∂f_i/∂y_k ],   i = 1, 2, 3;   y = (S_12, S_23, S_31, B_1, B_2, B_3, T_1, T_2, T_3)     (19)

i.e. the 3 × 9 matrix of partial derivatives of f_1, f_2, f_3 with respect to the observables S_12, S_23, S_31, B_1, B_2, B_3, T_1, T_2, T_3.
The values {S_1, S_2, S_3} needed for the evaluation of the Jacobi matrices J_x, J_y are obtained from the closed-form solution of the first combinatorial set in Step 1. From the dispersion matrix Σ_y of the vector of observations y, and with Eqs. (18) and (19) forming J = J_x^{-1} J_y, the variance-covariance submatrices of the individual combinatorials Σ_1, Σ_2, Σ_3, ..., Σ_k (where k is the number of combinations) obtained via Eq. (8) and the correlations between combinatorials obtained from Eq. (9) form the variance-covariance/dispersion matrix Σ of Eq. (10).

Step 3 (rigorous adjustment of the combinatorial solution points in a polyhedron): once the 105 combinatorial solution points in a polyhedron have been obtained in Step 1, they are finally adjusted using the linear Gauss–Markov model of Eq. (14), with the dispersion matrix Σ obtained via the error propagation law/variance-covariance propagation in Step 2. Expressing each of the 105 pseudo-observed distances as S_i^j = S_i + e_i^j, i ∈ {1, 2, ..., 7}, j ∈ {1, 2, ..., 35}, and placing the pseudo-observed distances S_i^j in the vector of observations y, with the coefficients of the seven unknown distances S_i of the test network 'Stuttgart Central' forming the coefficient matrix A and ξ comprising the vector of unknowns S_i, the adjusted solution is obtained via Eq. (15) and the dispersion of the estimated parameters through Eq. (16).
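Step 2 can be sketched numerically as follows (an illustrative Python/sympy fragment for a single combinatorial subset; the direction values, distances and standard deviations are placeholders, and only an uncorrelated, single-subset version of Eq. (8) is shown rather than the full 35-subset matrix of Eq. (10)):

```python
import numpy as np
import sympy as sp

# Symbols: unknown distances S1..S3, inter-station distances S12, S23, S31,
# vertical directions B1..B3 and horizontal directions T1..T3 (radians here for simplicity)
S1, S2, S3, S12, S23, S31, B1, B2, B3, T1, T2, T3 = sp.symbols(
    'S1 S2 S3 S12 S23 S31 B1 B2 B3 T1 T2 T3')

f = sp.Matrix([
    S1**2 + S2**2 - 2*S1*S2*(sp.cos(B1)*sp.cos(B2)*sp.cos(T2 - T1) + sp.sin(B1)*sp.sin(B2)) - S12**2,
    S2**2 + S3**2 - 2*S2*S3*(sp.cos(B2)*sp.cos(B3)*sp.cos(T3 - T2) + sp.sin(B2)*sp.sin(B3)) - S23**2,
    S1**2 + S3**2 - 2*S1*S3*(sp.cos(B1)*sp.cos(B3)*sp.cos(T1 - T3) + sp.sin(B1)*sp.sin(B3)) - S31**2,
])                                                    # Eq. (17)

x = [S1, S2, S3]                                      # unknowns
y = [S12, S23, S31, B1, B2, B3, T1, T2, T3]           # observations / pseudo-observations
Jx_sym, Jy_sym = f.jacobian(x), f.jacobian(y)         # Eqs. (18) and (19)

# Placeholder evaluation point: approximate S1..S3 from the subset's closed-form solution
# plus observed values of the subset (all numbers below are invented for illustration)
point = {S1: 1324.2, S2: 542.3, S3: 365.0, S12: 1777.2, S23: 698.0, S31: 1589.1,
         B1: 0.004, B2: 0.063, B3: -0.132, T1: 0.861, T2: 2.706, T3: 3.796}
Jx = np.array(Jx_sym.subs(point), dtype=float)
Jy = np.array(Jy_sym.subs(point), dtype=float)

# Dispersion of the observations: 3 distance variances and 6 direction variances (rad^2)
sigma_dist, sigma_dir = 0.002, np.deg2rad(6.0 / 3600.0)
Sigma_y = np.diag([sigma_dist**2] * 3 + [sigma_dir**2] * 6)

Jxi = np.linalg.inv(Jx)
Sigma_x = Jxi @ Jy @ Sigma_y @ Jy.T @ Jxi.T           # nonlinear propagation, Eq. (8)
print(np.sqrt(np.diag(Sigma_x)))                      # std devs of the subset's S1, S2, S3
```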
Step 4 (construction of minimal combinatorial subsets for position determination): once the adjusted distances and their dispersion matrix have been estimated using Eqs. (15) and (16), respectively, in Step 3, the position of the unknown point is determined using equations derived from either the Groebner basis or the multipolynomial resultant approach presented in Awange and Grafarend (submitted a, b). As for the distances, we have 35 combinatorial subsets giving 35 different positions (X, Y, Z)|_P of the same point P. In total we have 105 (35 × 3) values of X, Y and Z, which will be treated as pseudo-observations.

Step 5 (nonlinear error propagation to determine the dispersion matrix Σ): the variance-covariance matrices are computed for each of the combinatorial sets j = 1, ..., 35 using error propagation. The closed-form observational equations for the first combinatorial subset j = 1 are written algebraically as

f_1 := (X_1 - X)^2 + (Y_1 - Y)^2 + (Z_1 - Z)^2 - S_1^2
f_2 := (X_2 - X)^2 + (Y_2 - Y)^2 + (Z_2 - Z)^2 - S_2^2     (20)
f_3 := (X_3 - X)^2 + (Y_3 - Y)^2 + (Z_3 - Z)^2 - S_3^2

where S_i | i ∈ {1, 2, 3} are the distances between the known GPS stations P_i ∈ E^3 | i ∈ {1, 2, 3} of the test network 'Stuttgart Central' and the unknown GPS point P ∈ E^3 for the first combination set j = 1. From Eq. (20) and with Eq. (8) we have the Jacobi matrices

      | ∂f_1/∂X  ∂f_1/∂Y  ∂f_1/∂Z |
J_x = | ∂f_2/∂X  ∂f_2/∂Y  ∂f_2/∂Z |     (21)
      | ∂f_3/∂X  ∂f_3/∂Y  ∂f_3/∂Z |

and

J_y = [ ∂f_i/∂y_k ],   i = 1, 2, 3;   y = (S_1, S_2, S_3, X_1, Y_1, Z_1, X_2, Y_2, Z_2, X_3, Y_3, Z_3)     (22)

i.e. the 3 × 12 matrix of partial derivatives of f_1, f_2, f_3 with respect to the observed distances and the GPS coordinates of the three known stations. The values {X, Y, Z} needed for the evaluation of the Jacobi matrices J_x, J_y are obtained from the closed-form solution of the first combinatorial set in Step 4. From the dispersion matrix Σ_y of the vector of observations y, and with Eqs. (21) and (22) forming J = J_x^{-1} J_y, the variance-covariance submatrices of the individual combinatorials Σ_1, Σ_2, Σ_3, ..., Σ_k (where k is the number of combinations) obtained via Eq. (8) and the correlations between combinatorials obtained from Eq. (9) form the variance-covariance/dispersion matrix Σ of Eq. (10).

Step 6 (rigorous adjustment of the combinatorial solution points in a polyhedron): for each of the 35 computed coordinate sets of point K1 in Fig. 1 from Step 4, we write the observation equations as

X^j = X + e_X^j,   j ∈ {1, 2, ..., 35}
Y^j = Y + e_Y^j,   j ∈ {1, 2, ..., 35}     (23)
Z^j = Z + e_Z^j,   j ∈ {1, 2, ..., 35}
with the values {X^j, Y^j, Z^j} treated as pseudo-observations and placed in the vector of observations y, the coefficients of the unknown position {X, Y, Z} placed in the coefficient matrix A, and ξ comprising the vector of unknowns {X, Y, Z}. The solution is obtained via Eq. (15) and the dispersion of the estimated parameters through Eq. (16).
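A compact Python sketch of Steps 4–6 for a single combinatorial subset follows (illustrative only: sympy's polynomial solver is used here as a stand-in for the Groebner-basis/multipolynomial-resultant solution of Eq. (20), and the station coordinates and distances are placeholders rather than the Stuttgart Central values):

```python
import numpy as np
import sympy as sp

X, Y, Z = sp.symbols('X Y Z', real=True)

# Placeholder subset: three known stations and squared distances to the unknown point P
stations = [(0, 0, 0), (100, 0, 0), (0, 100, 0)]
dist_sq = [5000, 9000, 7000]

# Eq. (20): three sphere equations for one combinatorial subset
eqs = [(xi - X)**2 + (yi - Y)**2 + (zi - Z)**2 - s2
       for (xi, yi, zi), s2 in zip(stations, dist_sq)]

# Algebraic solution (sympy resorts to Groebner-basis techniques internally);
# two mirror candidates (30, 40, +/-50) are obtained, and auxiliary information selects one.
candidates = sp.solve(eqs, [X, Y, Z])
print(candidates)
P_subset = max(candidates, key=lambda c: c[2])   # pick one candidate for the sketch

# Step 6 (sketched): the 35 subset positions would be stacked as pseudo-observations and
# adjusted with a stacked-identity design matrix and the Step-5 dispersion, Eqs. (15)-(16).
positions = np.array([P_subset], dtype=float)    # stand-in for the 35 subset solutions
A = np.vstack([np.eye(3)] * len(positions))
Sigma = 1e-4 * np.eye(3 * len(positions))        # placeholder dispersion matrix
W = np.linalg.inv(Sigma)
xyz_hat = np.linalg.solve(A.T @ W @ A, A.T @ W @ positions.reshape(-1))
print(xyz_hat)                                   # -> [30. 40. 50.]
```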
4 Results

Tables 4, 5 and 6 present the adjusted distances, their RMS errors and the deviations of the distances computed using the Gauss–Jacobi combinatorial algorithm. The RMS errors are computed from Eq. (16). The deviations of the distances are obtained by subtracting the computed distance S_i from its ideal value in Table 2. The adjusted distances in Table 4 are: K1–Haussmanstr. (S_1), K1–Eduardpfeiffer (S_2), K1–Lindenmuseum (S_3), K1–Liederhalle (S_4), K1–Dach LVM (S_5), K1–Dach FH (S_6) and K1–Schlossplatz (S_7). The obtained positions of station K1 from the 11 sets under study are presented in Table 7. Set 0* indicates the results of the theoretical set. Since the position of K1 is known (see Table 1), the deviations {ΔX, ΔY, ΔZ} of the computed positions from their real values are computed for both the closed-form 3-D resection (Awange and Grafarend submitted a, b) and the overdetermined 3-D resection for each observational data set, and are plotted in Fig. 2; the positional deviations for the overdetermined case are also listed in Table 8. Figure 2 indicates that the results of the overdetermined 3-D resection for the test network 'Stuttgart Central' computed from the Gauss–Jacobi algorithm are more accurate (i.e. have smaller deviations from the true values) than those computed from the closed-form procedures in Awange and Grafarend (submitted a, b). Figure 3 illustrates the 3-D positional scatter of the 35 minimal combinatorial subset solutions (indicated by dots) around the adjusted position (indicated by a star) for the observational data set 1 of Table 3. The results demonstrate that the overdetermined 3-D resection problem is solvable without iteration or linearization once the Gauss–Jacobi combinatorial algorithm is invoked.

Fig. 2. Deviations of the computed position of K1 from its real value for the 11 observational data sets (closed-form versus overdetermined 3-D resection)

Fig. 3. Scatter of the 35 combinatorial solution points around the adjusted position of K1 (observational data set 1)

Table 4. Distances computed by the Gauss–Jacobi combinatorial algorithm

Observational set no. | S_1 (m)   | S_2 (m)  | S_3 (m)  | S_4 (m)  | S_5 (m)  | S_6 (m)  | S_7 (m)
1  | 1324.2394 | 542.2598 | 364.9782 | 430.5281 | 400.5834 | 269.2303 | 566.8641
2  | 1324.2387 | 542.2606 | 364.9801 | 430.5274 | 400.5818 | 269.2292 | 566.8635
3  | 1324.2381 | 542.2604 | 364.9791 | 430.5267 | 400.5847 | 269.2296 | 566.8632
4  | 1324.2363 | 542.2545 | 364.9782 | 430.5355 | 400.5931 | 269.2385 | 566.8664
5  | 1324.2396 | 542.2611 | 364.9779 | 430.5259 | 400.5834 | 269.2306 | 566.8658
6  | 1324.2378 | 542.2584 | 364.9791 | 430.5300 | 400.5868 | 269.2320 | 566.8637
7  | 1324.2368 | 542.2558 | 364.9790 | 430.5328 | 400.5857 | 269.2345 | 566.8644
8  | 1324.2388 | 542.2575 | 364.9779 | 430.5324 | 400.5845 | 269.2342 | 566.8664
9  | 1324.2393 | 542.2646 | 364.9794 | 430.5232 | 400.5770 | 269.2265 | 566.8623
10 | 1324.2337 | 542.2598 | 364.9832 | 430.5350 | 400.5904 | 269.2346 | 566.8608
11 | 1324.2375 | 542.2573 | 364.9787 | 430.5326 | 400.5884 | 269.2344 | 566.8650

Table 5. RMS errors of the distances shown in Table 4

Observational set no. | S_1    | S_2    | S_3    | S_4    | S_5    | S_6    | S_7
1  | 0.0004 | 0.0004 | 0.0004 | 0.0005 | 0.0004 | 0.0006 | 0.0003
2  | 0.0011 | 0.0012 | 0.0013 | 0.0015 | 0.0014 | 0.0019 | 0.0009
3  | 0.0008 | 0.0008 | 0.0009 | 0.0010 | 0.0009 | 0.0013 | 0.0006
4  | 0.0007 | 0.0007 | 0.0008 | 0.0010 | 0.0009 | 0.0013 | 0.0006
5  | 0.0008 | 0.0008 | 0.0009 | 0.0010 | 0.0010 | 0.0013 | 0.0006
6  | 0.0008 | 0.0008 | 0.0009 | 0.0010 | 0.0010 | 0.0013 | 0.0006
7  | 0.0012 | 0.0012 | 0.0013 | 0.0016 | 0.0014 | 0.0020 | 0.0010
8  | 0.0010 | 0.0010 | 0.0011 | 0.0013 | 0.0012 | 0.0016 | 0.0008
9  | 0.0009 | 0.0009 | 0.0010 | 0.0012 | 0.0011 | 0.0015 | 0.0007
10 | 0.0006 | 0.0006 | 0.0006 | 0.0008 | 0.0007 | 0.0010 | 0.0005
11 | 0.0005 | 0.0005 | 0.0005 | 0.0006 | 0.0006 | 0.0008 | 0.0004

Table 6. Deviations of the distances shown in Table 4 from the real values in Table 2

Observational set no. | ΔS_1 (m) | ΔS_2 (m) | ΔS_3 (m) | ΔS_4 (m) | ΔS_5 (m) | ΔS_6 (m) | ΔS_7 (m)
1  | -0.0014 | 0.0011  | 0.0015  | 0.0005  | 0.0002  | 0.0006  | -0.0006
2  | -0.0007 | 0.0003  | -0.0004 | 0.0012  | 0.0019  | 0.0017  | 0.0000
3  | -0.0002 | 0.0005  | 0.0006  | 0.0019  | -0.0010 | 0.0013  | 0.0004
4  | 0.0017  | 0.0064  | 0.0015  | -0.0069 | -0.0094 | -0.0075 | -0.0028
5  | -0.0016 | -0.0002 | 0.0018  | 0.0027  | 0.0003  | 0.0003  | -0.0023
6  | 0.0002  | 0.0025  | 0.0006  | -0.0014 | -0.0031 | -0.0011 | -0.0002
7  | 0.0012  | 0.0051  | 0.0007  | -0.0042 | -0.0020 | -0.0035 | -0.0009
8  | -0.0009 | 0.0034  | 0.0018  | -0.0037 | -0.0008 | -0.0033 | -0.0028
9  | -0.0013 | -0.0037 | 0.0003  | 0.0054  | 0.0067  | 0.0044  | 0.0013
10 | 0.0042  | 0.0011  | -0.0035 | -0.0063 | -0.0067 | -0.0037 | 0.0027
11 | 0.0004  | 0.0036  | 0.0010  | -0.0040 | -0.0047 | -0.0034 | -0.0015

Table 7. Position of station K1 computed by the Gauss–Jacobi combinatorial algorithm

Set no. | X (m)          | Y (m)        | Z (m)          | σ_X (m) | σ_Y (m) | σ_Z (m)
1  | 4 157 066.1142 | 671 429.6642 | 4 774 879.3705 | 0.00007 | 0.00002 | 0.00007
2  | 4 157 066.1150 | 671 429.6656 | 4 774 879.3695 | 0.00009 | 0.00001 | 0.00008
3  | 4 157 066.1100 | 671 429.6650 | 4 774 879.3676 | 0.00010 | 0.00002 | 0.00010
4  | 4 157 066.1040 | 671 429.6648 | 4 774 879.3688 | 0.00008 | 0.00002 | 0.00008
5  | 4 157 066.1089 | 671 429.6635 | 4 774 879.3699 | 0.00016 | 0.00003 | 0.00015
6  | 4 157 066.1127 | 671 429.6651 | 4 774 879.3684 | 0.00017 | 0.00003 | 0.00016
7  | 4 157 066.1089 | 671 429.6655 | 4 774 879.3699 | 0.00009 | 0.00002 | 0.00009
8  | 4 157 066.1102 | 671 429.6643 | 4 774 879.3720 | 0.00009 | 0.00002 | 0.00008
9  | 4 157 066.1106 | 671 429.6649 | 4 774 879.3699 | 0.00004 | 0.00001 | 0.00003
10 | 4 157 066.1121 | 671 429.6694 | 4 774 879.3697 | 0.00005 | 0.00001 | 0.00005
11 | 4 157 066.1100 | 671 429.6654 | 4 774 879.3705 | 0.00005 | 0.00001 | 0.00005

Table 8. Deviation of the computed position of K1 in Table 7 from the real value in Table 1

Set no. | ΔX (m)  | ΔY (m)  | ΔZ (m)
1  | -0.0026 | 0.0013  | -0.0001
2  | -0.0034 | -0.0001 | 0.0009
3  | 0.0016  | 0.0005  | 0.0028
4  | 0.0076  | 0.0007  | 0.0016
5  | 0.0027  | 0.0020  | 0.0005
6  | -0.0011 | 0.0004  | 0.0020
7  | 0.0027  | -0.0000 | 0.0005
8  | 0.0014  | 0.0012  | -0.0016
9  | 0.0010  | 0.0006  | 0.0005
10 | -0.0005 | -0.0039 | 0.0007
11 | 0.0016  | 0.0001  | -0.0001

Appendix A
Error propagation (nonlinear random effect model – univariate)

Consider a function y = f(z), where y is a scalar-valued observation and z a random effect. Three cases can be specified as follows.

Case 1 (μ_z assumed to be known): by Taylor series expansion we have

f(z) = f(μ_z) + (1/1!) f'(μ_z)(z - μ_z) + (1/2!) f''(μ_z)(z - μ_z)^2 + O(3)

E{y} = E{f(z)} = f(μ_z) + (1/2!) f''(μ_z) E{(z - μ_z)^2} + O(3)

leading to (cf. Grafarend and Schaffrin 1993, p. 470)

E{y} = f(μ_z) + (1/2!) f''(μ_z) σ_z^2 + O(3)

E{(y - E{y})^2} = E{[ f'(μ_z)(z - μ_z) + (1/2!) f''(μ_z)(z - μ_z)^2 + O(3) - (1/2!) f''(μ_z) σ_z^2 - O(3) ]^2}

Hence E{[y - E{y}][y - E{y}]} is given by

σ_y^2 = f'^2(μ_z) σ_z^2 - (1/4) f''^2(μ_z) σ_z^4 + f' f''(μ_z) E{(z - μ_z)^3} + (1/4) f''^2(μ_z) E{(z - μ_z)^4} + O(3)

Finally, if z is quasi-normally distributed, we have from Grafarend and Schaffrin (1993, p. 468) that μ_3 = E{(z - μ_z)^3} = 0 and μ_4 = E{(z - μ_z)^4} = 3 μ_2^2 = 3 σ_z^4, leading to

σ_y^2 = f'^2(μ_z) σ_z^2 + (1/2) f''^2(μ_z) σ_z^4 + O(3)

Case 2 (μ_z unknown, but ξ_0 known as a fixed effect approximation; this implies, in the sense of Grafarend and Schaffrin (1993, p. 470), that ξ_0 ≠ μ_z): by Taylor series expansion we have

f(z) = f(ξ_0) + (1/1!) f'(ξ_0)(z - ξ_0) + (1/2!) f''(ξ_0)(z - ξ_0)^2 + O(3)

Using ξ_0 = μ_z + (ξ_0 - μ_z), i.e. z - ξ_0 = (z - μ_z) + (μ_z - ξ_0), we have

f(z) = f(ξ_0) + (1/1!) f'(ξ_0)(z - μ_z) + (1/1!) f'(ξ_0)(μ_z - ξ_0) + (1/2!) f''(ξ_0)(z - μ_z)^2 + (1/2!) f''(ξ_0)(μ_z - ξ_0)^2 + f''(ξ_0)(z - μ_z)(μ_z - ξ_0) + O(3)

and

E{y} = E{f(z)} = f(ξ_0) + f'(ξ_0)(μ_z - ξ_0) + (1/2) f''(ξ_0) σ_z^2 + (1/2) f''(ξ_0)(μ_z - ξ_0)^2 + O(3)

leading to E{[y - E{y}][y - E{y}]} as

σ_y^2 = f'^2(ξ_0) σ_z^2 + f' f''(ξ_0) E{(z - μ_z)^3} + 2 f' f''(ξ_0) σ_z^2 (μ_z - ξ_0) + (1/4) f''^2(ξ_0) E{(z - μ_z)^4} + f''^2(ξ_0) E{(z - μ_z)^3}(μ_z - ξ_0) - (1/4) f''^2(ξ_0) σ_z^4 + f''^2(ξ_0) σ_z^2 (μ_z - ξ_0)^2 + O(3)

and, with z being quasi-normally distributed, thus μ_3 = E{(z - μ_z)^3} = 0 and μ_4 = E{(z - μ_z)^4} = 3 μ_2^2 = 3 σ_z^4, we have

σ_y^2 = f'^2(ξ_0) σ_z^2 + 2 f' f''(ξ_0) σ_z^2 (μ_z - ξ_0) + (1/2) f''^2(ξ_0) σ_z^4 + f''^2(ξ_0) σ_z^2 (μ_z - ξ_0)^2 + O(3)

with the first and third terms (on the right-hand side) being the right-hand-side terms of Case 1 (cf. Grafarend and Schaffrin 1993, p. 470).

Case 3 (μ_z unknown, but z_0 known as a random effect approximation): by Taylor series expansion we have

f(z) = f(μ_z) + (1/1!) f'(μ_z)(z - μ_z) + (1/2!) f''(μ_z)(z - μ_z)^2 + (1/3!) f'''(μ_z)(z - μ_z)^3 + O(4)

Writing z - μ_z = z_0 - E{z_0} - (μ_z - E{z_0}) and denoting the initial bias -(μ_z - E{z_0}) = E{z_0} - μ_z =: β_0 leads to

z - μ_z = z_0 - E{z_0} + β_0

If we also expand f(μ_z) = f(z_0) + f'(z_0)(μ_z - z_0) + O(2), f'(μ_z) = f'(z_0) + f''(z_0)(μ_z - z_0) + O(3), etc., the derivatives become random effects and can no longer be separated as in E{f'(μ_z)(z - μ_z)} = f'(μ_z) E{(z - μ_z)}, etc. Considering

(z - μ_z)^2 = (z_0 - E{z_0})^2 + β_0^2 + 2 (z_0 - E{z_0}) β_0

we have

f(z) = f(μ_z) + (1/1!) f'(μ_z)(z_0 - E{z_0}) + (1/1!) f'(μ_z) β_0 + (1/2!) f''(μ_z)(z_0 - E{z_0})^2 + (1/2!) f''(μ_z) β_0^2 + f''(μ_z)(z_0 - E{z_0}) β_0 + O(3)

E{y} = f(μ_z) + f'(μ_z) β_0 + (1/2) f''(μ_z) σ_{z_0}^2 + (1/2) f''(μ_z) β_0^2 + O(3)

leading to E{[y - E{y}][y - E{y}]} as

σ_y^2 = f'^2(μ_z) σ_{z_0}^2 + f' f''(μ_z) E{(z_0 - E{z_0})^3} + 2 f' f''(μ_z) σ_{z_0}^2 β_0 + (1/4) f''^2(μ_z) E{(z_0 - E{z_0})^4} + f''^2(μ_z) E{(z_0 - E{z_0})^3} β_0 + f''^2(μ_z) σ_{z_0}^2 β_0^2 + (1/4) f''^2(μ_z) σ_{z_0}^4 - (1/2) f''^2(μ_z) E{(z_0 - E{z_0})^2} σ_{z_0}^2 + O(3)

and, with z_0 being quasi-normally distributed, thus μ_3 = E{(z_0 - E{z_0})^3} = 0 and μ_4 = E{(z_0 - E{z_0})^4} = 3 μ_2^2 = 3 σ_{z_0}^4, we have

σ_y^2 = f'^2(μ_z) σ_{z_0}^2 + 2 f' f''(μ_z) σ_{z_0}^2 β_0 + (1/2) f''^2(μ_z) σ_{z_0}^4 + f''^2(μ_z) σ_{z_0}^2 β_0^2 + O(3)

with the first and third terms (on the right-hand side) being the right-hand-side terms of Case 1 (cf. Grafarend and Schaffrin 1993, p. 470).
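The Case 1 formulas can be checked numerically (an illustrative Python sketch, not part of the original appendix): for a quasi-normally distributed z and, say, f(z) = z^2, the second-order expressions for E{y} and σ_y^2 should agree closely with Monte Carlo estimates.

```python
import numpy as np

f = lambda z: z**2          # test function; f' = 2z, f'' = 2
mu_z, sigma_z = 3.0, 0.2

rng = np.random.default_rng(42)
z = rng.normal(mu_z, sigma_z, 2_000_000)
y = f(z)

# Second-order (Case 1) approximations, exact for a quadratic f under normality
E_y_approx = f(mu_z) + 0.5 * 2.0 * sigma_z**2                            # f(mu) + 1/2 f'' s^2
var_y_approx = (2.0 * mu_z)**2 * sigma_z**2 + 0.5 * 2.0**2 * sigma_z**4  # f'^2 s^2 + 1/2 f''^2 s^4

print(y.mean(), E_y_approx)       # both close to 9.04
print(y.var(), var_y_approx)      # both close to 1.4432
```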
References

Awange LJ (1999) Partial Procrustes solution of the three-dimensional orientation problem from GPS/LPS observations. In: Krumm F, Schwarze VS (eds) Quo vadis geodesia...? Festschrift to EW Grafarend on the occasion of his 60th birthday. Rep no. 1999.6-1, Department of Geodesy, Stuttgart University
Awange LJ (2002) Gröbner basis, multipolynomial resultant and the Gauss–Jacobi combinatorial algorithms – adjustment of nonlinear GPS/LPS observations. Dissertation. Tech rep no. 2002(1), Department of Geodesy, Stuttgart University
Awange LJ, Grafarend E (submitted a) Groebner basis solution of the three-dimensional resection problem (P4P). J Geod
Awange LJ, Grafarend E (submitted b) Multipolynomial resultant solution of the three-dimensional resection problem (P4P). J Geod
Bähr HG (1991) Einfach überbestimmtes ebenes Einschneiden, differentialgeometrisch analysiert. Z Vermess 116: 545–552
Brandstätter G (1974) Notiz zur analytischen Lösung des ebenen Rückwärtsschnittes. Öst Z Vermess 61: 134–136
Brunner FK (1979) On the analysis of geodetic networks for the determination of the incremental strain tensor. Surv Rev 25: 146–162
Finsterwalder S, Scheufele W (1937) Das Rückwärtseinschneiden im Raum. Sebastian Finsterwalder zum 75. Geburtstage. H. Wichmann, Berlin, pp 86–100
Fischler MA, Bolles RC (1981) Random sample consensus: a paradigm for model fitting with application to image analysis and automated cartography. Commun ACM 24: 381–395
Gotthardt E (1974) Ein neuer gefährlicher Ort zum räumlichen Rückwärtseinschneiden. Bildm Luftbildw 6–8
Grafarend E (1985) Variance–covariance component estimation: theoretical results and geodetic applications. Statist Decis Suppl 2: 407–441
Grafarend E, Awange JL (2000) Determination of vertical deflection by GPS/LPS measurements. Z Vermess 125: 279–288
Grafarend E, Kunz J (1965) Der Rückwärtseinschnitt mit dem Vermessungskreisel. Bergbauwissenschaften 12: 285–297
Grafarend E, Sanso F (1985) Optimization and design of geodetic networks. Springer, Berlin Heidelberg New York
Grafarend E, Schaffrin B (1974) Unbiased free net adjustment. Surv Rev 22: 200–218
Grafarend E, Schaffrin B (1989) The geometry of nonlinear adjustment – the planar trisection problem. In: Kejlso E, Poder K, Tscherning C (eds) Festschrift to T. Krarup. Department of Geodesy, Denmark, pp 149–172
Grafarend E, Schaffrin B (1993) Ausgleichungsrechnung in linearen Modellen. BI Wissenschaftsverlag, Mannheim
Grafarend E, Shan J (1997a) Closed-form solution of P4P or the three-dimensional resection problem in terms of Möbius barycentric coordinates. J Geod 71: 217–231
Grafarend E, Shan J (1997b) Closed form solution of the twin P4P or the combined three-dimensional resection–intersection problem in terms of Möbius barycentric coordinates. J Geod 71: 232–239
Grafarend E, Lohse P, Schaffrin B (1989) Dreidimensionaler Rückwärtsschnitt. Z Vermess 114: 61–67, 127–137, 172–175, 225–234, 278–287
Grunert JA (1841) Das Pothenotische Problem in erweiterter Gestalt; nebst Bemerkungen über seine Anwendungen in der Geodäsie. Grunerts Arch Math Phys 1: 238–241
Hammer E (1896) Zur graphischen Ausgleichung beim trigonometrischen Einschneiden von Punkten. Optim Meth Softw 6: 247–269
Hanselman D, Littlefield B (1997) The student edition of MATLAB. Prentice-Hall, Englewood Cliffs, NJ
Haralick RM, Lee C, Ottenberg K, Nölle M (1991) Analysis and solution of the three point perspective pose estimation problem. Proc IEEE Conf on Computer Vision and Pattern Recognition, pp 592–598
Haralick RM, Lee C, Ottenberg K, Nölle M (1994) Review and analysis of solutions of the three point perspective pose estimation problem. Int J Comput Vis 13: 331–356
Horaud R, Conio B, Leboulleux O (1989) An analytical solution for the perspective 4-point problem. Comput Vis Graph Image Process 47: 33–44
Hornoch T (1950) Über die Zurückführung der Methode der kleinsten Quadrate auf das Prinzip des arithmetischen Mittels. Öst Z Vermess 38: 13–18
Jacobi CGI (1841) De formatione et proprietatibus determinantium. Crelle's Journal für die reine und angewandte Mathematik, Bd 22
Killian K (1990) Der gefährliche Ort des überbestimmten räumlichen Rückwärtseinschneidens. Öst Z Vermess Photogram 78: 1–12
Koch KR (1999) Parameter estimation and hypothesis testing in linear models, 2nd edn. Springer, Berlin Heidelberg New York
Kurz S (1996) Positionierung mittels Rückwärtsschnitt in drei Dimensionen. Studienarbeit, Geodätisches Institut, University of Stuttgart
Linnainmaa S, Harwood D, Davis LS (1988) Pose determination of a three-dimensional object using triangle pairs. IEEE Trans Patt Anal Mach Intell 10(5): 634–647
Lohse P (1990) Dreidimensionaler Rückwärtsschnitt. Ein Algorithmus zur Streckenberechnung ohne Hauptachsentransformation. Z Vermess 115: 162–167
Meissl P (1982) Least squares adjustment. A modern approach. Mitteilungen des Geodätischen Instituts der Technischen Universität Graz, Folge 43
Merritt EL (1949) Explicit three-point resection in space. Phot Engng 15: 649–665
Mittermayer E (1972) A generalization of least squares adjustment of free networks. Bull Geod 104: 139–155
Müller FJ (1925) Direkte (exakte) Lösungen des einfachen Rückwärtseinschneidens im Raum. 1 Teil. Z Vermess 37: 249–255, 265–272, 349–353, 365–370, 569–580
Perelmuter A (1979) Adjustment of free networks. Bull Geod 53: 291–295
Rao CR (1967) Least squares theory using an estimated dispersion matrix and its application to measurement of signals. Proc Fifth Berkeley Symposium, Berkeley
Rao CR (1971) Estimation of variance and covariance components – MINQUE theory. J Multivar Anal 1: 257–275
Rao CR (1973) Representation of the best linear unbiased estimators in the Gauss–Markov model with singular dispersion matrix. J Multivar Anal 3: 276–292
Rao CR (1978) Choice of the best linear estimators in the Gauss–Markov model with singular dispersion matrix. Commun Statist Theor Meth A7(13): 1199–1208
Rao CR, Kleffe J (1979) Variance and covariance components estimation and applications. Tech rep 181, Department of Statistics, The Ohio State University, Columbus
Rinner K (1962) Über die Genauigkeit des räumlichen Bogenschnittes. Z Vermess 87: 361–374
Runge C (1900) Graphische Ausgleichung beim Rückwärtseinschneiden. Z Vermess 29: 581–588
Schaffrin B (1983) Varianz–Kovarianz–Komponenten–Schätzung bei der Ausgleichung heterogener Wiederholungsmessungen. DGK, series C, no. 282
Van Mierlo J (1988) Rückwärtsschnitt mit Streckenverhältnissen. Allgemein Vermess Nachr 95: 310–314
Wellisch S (1910) Theorie und Praxis der Ausgleichsrechnung, Bd II. Wien–Leipzig
Werkmeister P (1916) Trigonometrische Punktbestimmung durch einfaches Einschneiden mit Hilfe von Vertikalwinkeln. Z Vermess 45: 248–251
Werkmeister P (1920) Über die Genauigkeit trigonometrischer Punktbestimmungen. Z Vermess 49: 401–412, 433–456
Werner D (1913) Punktbestimmung durch Vertikalwinkelmessung. Z Vermess 42: 241–253