Recursive Algorithm for Fast GNSS Orbit Fitting

XUE Shuqiang1,2, YANG Yuanxi3
(1. Chinese Academy of Surveying and Mapping, Beijing 100830, China; 2. School of Geological and Surveying Engineering, Chang'an University, Yanta Road, Xi'an 710054, China; 3. National Key Laboratory for Geo-information Engineering, Xi'an Research Institute of Surveying and Mapping, Xi'an 710054, China)
Tel: 0086-13717998948; Fax: +861063880706; Email: [email protected]

Abstract: Gaussian elimination is an efficient and numerically stable algorithm for estimating parameters and their precision. Before estimating the parameters, however, it is often prudent to perform statistical tests to select the best fitting model. We use Gaussian elimination to select the best fitting model among candidate models. A succinct relation between the new weighted sum of squared residuals and the previous one is revealed by a volume formula. For quick parameter estimation and determination of the weighted sum of squared residuals, a recursive elimination algorithm is proposed in the context of Gaussian elimination. To improve the efficiency of model selection, the parameter estimation and the determination of the weighted sum of squared residuals are carried out in parallel by the proposed recursive elimination algorithm, in which the improvement at each recursive stage is judged by the Bayesian information criterion. Finally, the computational complexity and numerical stability of the proposed recursive elimination are briefly discussed, and a GNSS orbit interpolation example is used to verify the results. The example shows that the proposed recursive elimination algorithm inherits the numerical stability of Gaussian elimination and can be used to examine the gain from each newly introduced parameter, dynamically assess the fitting model and identify the optimal model efficiently. The optimal fitting model with the lowest information criterion value is very close to the actual situation as verified by check points.
Key words: least squares; variance estimation; Gaussian elimination; recursive elimination; model selection; statistical test

Introduction

GNSS data processing often involves interpolating a GNSS ephemeris. Two common interpolation strategies, based on polynomials and on trigonometric functions, have been compared for representing GNSS orbits (Feng and Zheng 2005; Schenewerk 2003; Yousif and El-Rabbany 2007). The effect of the Runge phenomenon on the interpolation has been analyzed for different polynomial orders, fit intervals, and validity intervals (Horemuž and Andersson 2006). GNSS orbit fitting is thus a typical regression analysis problem, and a key step towards the best interpolation is model selection, i.e., determining the optimal order of the fitting model. When optimizing a functional or stochastic model with a large number of parameters, a great number of candidate models must be evaluated, and the efficiency of the algorithm is of primary concern (Amiri-Simkooei et al. 2012; Yang and Gao 2006). Because the equation system becomes ill-conditioned when the design matrix is approximately multicollinear as the number of candidate parameters grows, the numerical stability of the model selection procedure deserves extra attention (Björck 1996). Gauss's algorithms, written in his notation, survived into the twentieth century in geodesy, and Gaussian elimination was the first of many reductions of quadratic and bilinear forms that later became our familiar matrix decompositions, including among others the LU decomposition, the LDL decomposition, the Cholesky decomposition, the Jordan canonical form and the singular value decomposition (Stewart 1995; Xu 2011). Gaussian elimination, together with the numerical methods based on these matrix decompositions, can solve the normal equations efficiently and stably while avoiding direct inversion. The weighted sum of squared residuals is the squared distance from the observation vector to the range space spanned by the columns of the design matrix, and thus measures the degree of fit in least squares estimation (Teunissen 2000). From a probabilistic point of view, the weighted sum of squared residuals is a fundamental statistic following the Chi-square distribution, and it can be used to estimate the precision of observations and to perform various hypothesis tests (Leick 2004; Seemkooei 1998; Huber 1981). However, without an a priori variance of unit weight, model selection may fail when only the weighted sum of squared residuals or its change is observed. Model selection is therefore usually carried out with information criteria, such as the Akaike and Bayesian criteria (Akaike 1974; Efroymson 1960; Schwarz 1978). These information criteria balance model complexity against the degree of data fit by minimizing the information entropy, so it is of practical significance to obtain the gains from newly introduced parameters when evaluating the fitting model. Besides performing the parameter estimation efficiently and stably, Gaussian elimination can produce the weighted sum of squared residuals without estimating the parameters, and this advantage can be exploited to optimize the model selection procedure (Stewart 1995). To efficiently obtain the best fitting model among candidate models by evaluating linear regressions with a fixed number of observations, we propose a recursive elimination algorithm. We first briefly introduce Gaussian elimination for quickly estimating the parameters and the weighted sum of squared residuals, and then use an information criterion to perform the model selection. Next, we describe how the elimination history produced in solving the kth candidate model is reused to improve the computational efficiency in evaluating the (k+1)th candidate model. Moreover, a succinct relation between the new weighted sum of squared residuals and the previous one is revealed by a volume formula, and a recursive algorithm for quickly computing the weighted sum of squared residuals is given. The computational complexity of the proposed elimination is then briefly discussed. Finally, we apply the proposed algorithm to polynomial fitting of GNSS orbits as an example.


Least squares fitting and model selection

Typical GNSS data processing requires interpolating a GNSS ephemeris, and the question is how to select a suitable model from the candidate regression models (Schenewerk 2003). We express the candidate models as

$$\mathbf{A}_k \mathbf{x}_k = \mathbf{b} + \mathbf{e}, \qquad k = 1, 2, \ldots, K \qquad (1)$$

where $\mathbf{b} \in \mathbb{R}^n$ is the observation vector, $n$ is the fixed number of observations, $\mathbf{x}_k \in \mathbb{R}^k$ is the parameter vector, $k$ indexes the kth candidate model containing $k$ parameters, $\mathbf{e}$ is the error vector, $K \le n$ is the number of candidate models, and

$$\mathbf{A}_k = \begin{bmatrix} \mathbf{a}_1 & \mathbf{a}_2 & \cdots & \mathbf{a}_k \end{bmatrix} \qquad (2)$$

is the design matrix, in which $\mathbf{a}_i = \begin{bmatrix} f_i(t_1) & f_i(t_2) & \cdots & f_i(t_n) \end{bmatrix}^T$ and $f_i(t)$ ($i = 1, 2, \ldots, k$) are the base functions of the fitting. The commonly used base functions are the polynomial functions $1, t, t^2, \ldots$ and the trigonometric functions $1, \cos\omega t, \sin\omega t, \cos 2\omega t, \sin 2\omega t, \ldots$ (Feng and Zheng 2005; Mathews and Fink 2004; Schenewerk 2003). For the kth candidate model in (1), the weighted sum of squared residuals (WSSR) is defined by the least squares (LS) criterion as

$$\rho_k = \min_{\mathbf{x}_k \in \mathbb{R}^k} \left[ (\mathbf{b} - \mathbf{A}_k\mathbf{x}_k)^T \mathbf{P} (\mathbf{b} - \mathbf{A}_k\mathbf{x}_k) \right] \qquad (3)$$

where $\mathbf{P}$ is the weight matrix, which yields the least squares solution (LSS)

$$\hat{\mathbf{x}}_k = (\mathbf{A}_k^T \mathbf{P} \mathbf{A}_k)^{-1} \mathbf{A}_k^T \mathbf{P} \mathbf{b} \qquad (4)$$

To quickly solve the LSS from the normal equation system $(\mathbf{A}_k^T\mathbf{P}\mathbf{A}_k)\hat{\mathbf{x}}_k = \mathbf{A}_k^T\mathbf{P}\mathbf{b}$, one can factor the augmented cross-product matrix into a lower triangular matrix, a diagonal matrix and the transpose of the lower triangular matrix (Stewart 1995):

$$\mathbf{G}_k^0 := \begin{bmatrix} \mathbf{N}_k & \mathbf{u}_k \\ \mathbf{u}_k^T & \mathbf{b}^T\mathbf{P}\mathbf{b} \end{bmatrix} = \begin{bmatrix} \mathbf{R}_k^T & \mathbf{0} \\ \mathbf{s}_k^T & \rho_k \end{bmatrix} \begin{bmatrix} \operatorname{diag}^{-1}(\mathbf{R}_k) & \mathbf{0} \\ \mathbf{0} & \rho_k^{-1} \end{bmatrix} \begin{bmatrix} \mathbf{R}_k & \mathbf{s}_k \\ \mathbf{0} & \rho_k \end{bmatrix} \qquad (5)$$

where $\mathbf{N}_k := \mathbf{A}_k^T\mathbf{P}\mathbf{A}_k$ and $\mathbf{u}_k := \mathbf{A}_k^T\mathbf{P}\mathbf{b}$. Merging the first two factors of (5), we obtain the LU decomposition $\mathbf{G}_k^0 = \mathbf{L}\mathbf{U}$. In particular, for the symmetric positive definite $\mathbf{G}_k^0$, the Cholesky decomposition $\mathbf{G}_k^0 = \mathbf{S}\mathbf{S}^T$ can be obtained from (5) by applying the square root to $\operatorname{diag}^{-1}(\mathbf{R}_k)$ and $\rho_k^{-1}$. Including the Cholesky decomposition and the LU decomposition, the decomposition (5) is essentially Gaussian elimination in matrix form:

$$\mathbf{G}_k^k := \begin{bmatrix} \mathbf{R}_k & \mathbf{s}_k \\ \mathbf{0} & \rho_k \end{bmatrix} = \mathbf{L}_k \cdots \mathbf{L}_2 \mathbf{L}_1 \mathbf{G}_k^0 \qquad (6)$$

where $\mathbf{L}_i$ ($i = 1, 2, \ldots, k$) is the elementary transformation $\mathbf{r}_j = \mathbf{r}_j - \mathbf{r}_i p_{i,j-i}$ ($j = i+1, i+2, \ldots, k$) that replaces the jth row $\mathbf{r}_j$ of the matrix $\mathbf{G}_k^{i-1} = \mathbf{L}_{i-1} \cdots \mathbf{L}_2 \mathbf{L}_1 \mathbf{G}_k^0$ by the combination $\mathbf{r}_j - \mathbf{r}_i p_{i,j-i}$, with $p_{i,j-i} = g_{j,i}^{i-1} / g_{i,i}^{i-1}$, to generate the eliminated matrix $\mathbf{G}_k^i$. As to the LU decomposition $\mathbf{G}_k^0 = \mathbf{L}\mathbf{U}$, it takes $\mathbf{L} = (\mathbf{L}_k \cdots \mathbf{L}_2 \mathbf{L}_1)^{-1}$ and $\mathbf{U} = \mathbf{G}_k^k$.

Because the WSSR gradually decreases to zero as the degrees of freedom of the adjustment tend to zero, model selection may fail without an a priori variance of unit weight if one only observes the WSSR or its rate of descent. In statistics, the Akaike or Bayesian criterion can be used to perform the model selection by balancing the degree of fit, measured by $\rho_k$, against the model complexity, measured by $k$ (Akaike 1974; Cavanaugh 1997; Kadane and Lazar 2004; Schwarz 1978). These criteria depend on the number of parameters and the WSSR, for example

$$\mathrm{BIC}_k = n \ln(\rho_k / n) + k \ln(n) + n(1 + \ln 2\pi) \qquad (7)$$

where the first term represents the degree of fitting (slightly different from the unbiased variance estimate used in surveying adjustment, $\rho_k/n$ is employed here as the variance estimate) and generally decreases as $k$ increases, the second term represents the complexity of the fitting, the third term is a constant that does not affect the minimization of (7), and the natural logarithm $\ln(\cdot)$ associates the quantities with an information measure. The model with the lowest information,

$$k^\ast = \arg\min_{k \le K,\; k \in \mathbb{N}} \mathrm{BIC}_k \qquad (8)$$

is the best fitting model. Employing the criterion (8), one can then optimize the fitting model (1). If a gross error is treated as a kind of functional model error, gross error detection becomes selecting the best functional model by the criterion (8); in the following discussion we suppose that the observations are clean, without gross errors. It is notable that Gaussian elimination yields $\hat{\mathbf{x}}_k$ and $\rho_k$ in parallel and provides an efficient procedure for performing the LU or Cholesky decomposition. Since the WSSR can be obtained before the parameter estimation, Gaussian elimination can be used to perform the model selection, and the back substitution that yields the LSS can be postponed until the best fitting model has been found. However, when a large number of parameters are involved, separately solving the WSSRs of all candidate models one by one is still costly. In the following, we present an algorithm, named recursive elimination, to perform the model selection and parameter estimation efficiently.
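As an illustration of this baseline procedure, the following is a minimal Python/NumPy sketch (not the authors' code; the function names, the unit weight matrix and the toy data are assumptions made for the example) that reads the WSSR off the Gaussian elimination of the augmented cross-product matrix (5) and selects the model by the BIC of (7) and (8).

```python
import numpy as np

def wssr_by_elimination(A, b, P):
    """WSSR rho_k read off the Gaussian elimination of the augmented
    cross-product matrix G = [[A'PA, A'Pb], [b'PA, b'Pb]] of Eq. (5)."""
    G = np.block([[A.T @ P @ A, (A.T @ P @ b)[:, None]],
                  [(b @ P @ A)[None, :], b @ P @ b]])
    m = G.shape[0]
    for i in range(m - 1):                        # forward elimination, Eq. (6)
        G[i + 1:, i:] -= np.outer(G[i + 1:, i] / G[i, i], G[i, i:])
    return G[-1, -1]                              # the last pivot is rho_k, Eq. (3)

def bic(rho, n, k):
    """Bayesian information criterion of Eq. (7)."""
    return n * np.log(rho / n) + np.log(n) * k + n * (1 + np.log(2 * np.pi))

# toy data (hypothetical): a cubic signal sampled at 20 epochs, unit weights
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 20)
b = 1.0 + 2.0 * t - 3.0 * t**3 + 1e-4 * rng.standard_normal(t.size)
P = np.eye(t.size)
bics = [bic(wssr_by_elimination(np.vander(t, k, increasing=True), b, P), t.size, k)
        for k in range(1, 9)]                     # candidate models with k = 1..8 terms
print("selected k:", int(np.argmin(bics)) + 1)    # Eq. (8)
```

The elimination zeroes the sub-diagonal entries column by column; the bottom-right pivot that remains is exactly the Schur complement $\mathbf{b}^T\mathbf{P}\mathbf{b} - \mathbf{u}_k^T\mathbf{N}_k^{-1}\mathbf{u}_k$, i.e., the WSSR, so no back substitution is needed for the model comparison.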

Recursive elimination for parameter estimation

When $k$ is fixed and $n$ increases, Kalman filtering is a general technique for updating the LSS for newly introduced observations. In this case, the information matrix $\mathbf{N}_{k,n+s}$ can be decomposed into the history information $\mathbf{N}_{k,n}$ from $n$ observations and the new information $\mathbf{N}_{k,s}$ from $s$ observations, and the rank-one update is a well-known procedure to efficiently update the Cholesky decomposition of $\mathbf{N}_{k,n+s} = \mathbf{N}_{k,n} + \mathbf{N}_{k,s}$ by making full use of the history decomposition of $\mathbf{N}_{k,n}$. However, when $n$ is fixed and $k$ increases, updating the LSS and the WSSR requires making full use of the elimination history in the lower triangular form (6), while the dimension of the information matrix grows with $k$. Next, we discuss the update for a newly introduced parameter in the context of Gaussian elimination, with $\rho_k$, $\mathbf{R}_k$ and $\mathbf{s}_k$ in (6) as the history information. For the (k+1)th candidate model in (1), which is simply the kth model with the next polynomial or trigonometric term in the series added, the augmented cross-product matrix reads

$$\mathbf{G}_{k+1}^0 = \begin{bmatrix} \mathbf{N}_k & \mathbf{A}_k^T\mathbf{P}\mathbf{a}_{k+1} & \mathbf{u}_k \\ \mathbf{a}_{k+1}^T\mathbf{P}\mathbf{A}_k & \mathbf{a}_{k+1}^T\mathbf{P}\mathbf{a}_{k+1} & \mathbf{a}_{k+1}^T\mathbf{P}\mathbf{b} \\ \mathbf{u}_k^T & \mathbf{b}^T\mathbf{P}\mathbf{a}_{k+1} & \mathbf{b}^T\mathbf{P}\mathbf{b} \end{bmatrix} \qquad (9)$$

where the sub-matrices $\mathbf{N}_k$, $\mathbf{u}_k$ and $\mathbf{b}^T\mathbf{P}\mathbf{b}$ are the same as those in the first matrix of (5), and the other entries are the new information due to the parameter $x_{k+1}$ newly introduced into the kth candidate model. To use the elimination history (6), we first structure an augmented matrix of the form

$$\mathbf{M}_{k+1} := \begin{bmatrix} \mathbf{R}_k & \mathbf{A}_k^T\mathbf{P}\mathbf{a}_{k+1} & \mathbf{s}_k \\ \mathbf{a}_{k+1}^T\mathbf{P}\mathbf{A}_k & \mathbf{a}_{k+1}^T\mathbf{P}\mathbf{a}_{k+1} & \mathbf{a}_{k+1}^T\mathbf{P}\mathbf{b} \\ \mathbf{0} & \mathbf{b}^T\mathbf{P}\mathbf{a}_{k+1} & \rho_k \end{bmatrix} \qquad (10)$$

Comparing the matrix (10) with the final form of Gaussian elimination, we find that elimination operations are missing on the (k+1)th column of (10), and that

$$\det(\mathbf{G}_{k+1}^0) \neq \det(\mathbf{M}_{k+1}) \qquad (11)$$

$$\det(\mathbf{N}_{k+1}) \neq \det \begin{bmatrix} \mathbf{R}_k & \mathbf{A}_k^T\mathbf{P}\mathbf{a}_{k+1} \\ \mathbf{a}_{k+1}^T\mathbf{P}\mathbf{A}_k & \mathbf{a}_{k+1}^T\mathbf{P}\mathbf{a}_{k+1} \end{bmatrix} \qquad (12)$$

In short, by directly starting the elimination with the augmented system (10) we cannot obtain the LSS rigorously. For this, let

$$\mathbf{h}_{k+1}^0 := \begin{bmatrix} h_1^0 & h_2^0 & h_3^0 & \cdots & h_k^0 & h_{k+2}^0 \end{bmatrix}^T = \begin{bmatrix} \mathbf{a}_{k+1}^T\mathbf{P}\mathbf{A}_k & \mathbf{a}_{k+1}^T\mathbf{P}\mathbf{b} \end{bmatrix}^T \qquad (13)$$

Then we can recover the elimination applied to (13), that is

$$\mathbf{h}_{k+1}^k := \begin{bmatrix} h_1^0 & h_2^1 & h_3^2 & \cdots & h_k^{k-1} & h_{k+2}^{k-1} \end{bmatrix}^T = \mathbf{L}_k \cdots \mathbf{L}_2 \mathbf{L}_1 \mathbf{h}_{k+1}^0 \qquad (14)$$

where $\mathbf{L}_k \cdots \mathbf{L}_2 \mathbf{L}_1$ are the elementary transformations of the Gaussian elimination (6). Once the (k+1)th column in (10) is replaced by (14), the extended matrix becomes

$$\mathbf{M}'_{k+1} := \begin{bmatrix}
g_{1,1}^0 & g_{1,2}^0 & \cdots & g_{1,k}^0 & h_1^0 & g_{1,k+1}^0 \\
0 & g_{2,2}^1 & \cdots & g_{2,k}^1 & h_2^1 & g_{2,k+1}^1 \\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\
0 & 0 & \cdots & g_{k,k}^{k-1} & h_k^{k-1} & g_{k,k+1}^{k-1} \\
h_1^0 & h_2^0 & \cdots & h_k^0 & h_{k+1}^0 & h_{k+2}^0 \\
0 & 0 & \cdots & 0 & h_{k+2}^{k-1} & \rho_k
\end{bmatrix} \qquad (15)$$

By creating and using the matrix $\mathbf{M}'_{k+1}$, the inequalities in (11) and (12) become equalities, and $\mathbf{M}'_{k+1}$ becomes a quick start for continuing the Gaussian elimination. Eliminating the (k+1)th row in (15), we have

$$\mathbf{M}'^{\,k+1}_{k+1} := \begin{bmatrix}
g_{1,1}^0 & g_{1,2}^0 & \cdots & g_{1,k}^0 & h_1^0 & g_{1,k+1}^0 \\
0 & g_{2,2}^1 & \cdots & g_{2,k}^1 & h_2^1 & g_{2,k+1}^1 \\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\
0 & 0 & \cdots & g_{k,k}^{k-1} & h_k^{k-1} & g_{k,k+1}^{k-1} \\
0 & 0 & \cdots & 0 & h_{k+1}^k & h_{k+2}^k \\
0 & 0 & \cdots & 0 & h_{k+2}^{k-1} & \rho_k
\end{bmatrix} = \mathbf{L}_{k+1}\mathbf{M}'_{k+1} \qquad (16)$$

where $\mathbf{L}_{k+1}$ is the elementary transformation $\mathbf{r}_{k+1} = \mathbf{r}_{k+1} - \mathbf{r}_i p_{i,k-i+1}$ ($i = 1, 2, \ldots, k$) that replaces the (k+1)th row $\mathbf{r}_{k+1}$ of (15) by the combination $\mathbf{r}_{k+1} - \mathbf{r}_i p_{i,k-i+1}$, with $p_{i,k-i+1} = h_i^{i-1} / g_{i,i}^{i-1}$. From (16) we obtain the (k+1)th parameter estimate

$$\hat{x}_{k+1} = h_{k+2}^k / h_{k+1}^k \qquad (17)$$

One can then continue the back substitution to calculate the remaining parameters.
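To make the bookkeeping concrete, here is a minimal Python/NumPy sketch of the recursion (again not the authors' code: the class and variable names are illustrative, the stored quantities are the pivots, multipliers and reduced right-hand side of (6), and the update is written in the mathematically equivalent bordered LDLᵀ form of Eqs. (13)-(17) together with the WSSR recursion derived in the next section).

```python
import numpy as np

class RecursiveElimination:
    """Sketch of the recursive elimination: the model grows one base function
    (one design-matrix column) at a time, reusing the elimination history
    (pivots, multipliers, reduced right-hand side, WSSR) instead of
    re-eliminating the whole normal equation system."""

    def __init__(self, b, P):
        self.P, self.Pb = P, P @ b
        self.rho = float(b @ self.Pb)        # rho_0 = b'Pb (model with no parameters)
        self.cols, self.L_rows, self.d, self.s = [], [], [], []

    def add_column(self, a):
        """Introduce the (k+1)th parameter; return (xhat_{k+1}, rho_{k+1})."""
        Pa = self.P @ a
        v = np.array([c @ Pa for c in self.cols])             # A_k' P a_{k+1}
        alpha, beta = float(a @ Pa), float(a @ self.Pb)        # a'Pa and a'Pb
        w = np.zeros(len(self.cols))
        for i, Li in enumerate(self.L_rows):                   # recover the elimination
            w[i] = v[i] - Li @ w[:i]                           # on the new column (Eq. 14)
        d = np.array(self.d)
        piv = alpha - float(w @ (w / d)) if w.size else alpha
        rhs = beta - float(w @ (np.array(self.s) / d)) if w.size else beta
        x_new = rhs / piv                                      # Eq. (17)
        self.rho -= rhs * x_new                                # WSSR recursion (Eq. 25)
        self.cols.append(a)                                    # extend the history
        self.L_rows.append(w / d if w.size else w)
        self.d.append(piv)
        self.s.append(rhs)
        return x_new, self.rho

    def solve(self):
        """Back substitution for the full parameter vector (Eq. 4)."""
        k = len(self.d)
        x = np.array(self.s) / np.array(self.d)
        for i in range(k - 1, -1, -1):
            for j in range(i + 1, k):
                x[i] -= self.L_rows[j][i] * x[j]
        return x

# quick check against the direct least squares solution on hypothetical toy data
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 20)
b = 0.5 - 1.5 * t + 2.0 * t**3 + 1e-4 * rng.standard_normal(t.size)
re = RecursiveElimination(b, np.eye(t.size))
for k in range(5):
    re.add_column(t**k)                                        # base functions 1, t, t^2, ...
A = np.vander(t, 5, increasing=True)
assert np.allclose(re.solve(), np.linalg.lstsq(A, b, rcond=None)[0])
```

Each `add_column` call costs on the order of $k^2$ operations, consistent with the complexity analysis below, and `self.rho` tracks $\rho_k$ so that the BIC of (7) can be evaluated after every newly introduced parameter.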

Recursive estimator of the WSSR

From the relation (5), we get

$$\det(\mathbf{G}_k) = \rho_k \det(\mathbf{R}_k) \qquad (18)$$

where $\det(\cdot)$ is the determinant. Using the relation $\det(\mathbf{R}_k) = \det(\mathbf{N}_k)$, from (18) we immediately obtain

$$\rho_k = \det(\mathbf{G}_k) / \det(\mathbf{N}_k) \qquad (19)$$

We next establish a formula connecting $\rho_{k+1}$ with $\rho_k$. Laplace's formula reduces the determinant of an n-dimensional square matrix to a sum of n determinants of (n-1)-dimensional square matrices (Meyer 2000). Applying the Laplace expansion to the last row of (15), the determinant reads

$$\det(\mathbf{M}'_{k+1}) = (-1)^{2k+4}\rho_k E + (-1)^{2k+3} h_{k+2}^{k-1} F = \rho_k E - h_{k+2}^{k-1} F \qquad (20)$$

where

$$E = \det \begin{bmatrix}
g_{1,1}^0 & g_{1,2}^0 & \cdots & g_{1,k}^0 & h_1^0 \\
0 & g_{2,2}^1 & \cdots & g_{2,k}^1 & h_2^1 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & g_{k,k}^{k-1} & h_k^{k-1} \\
h_1^0 & h_2^0 & \cdots & h_k^0 & h_{k+1}^0
\end{bmatrix} \qquad (21)$$

and

$$F = \det \begin{bmatrix}
g_{1,1}^0 & g_{1,2}^0 & \cdots & g_{1,k}^0 & g_{1,k+1}^0 \\
0 & g_{2,2}^1 & \cdots & g_{2,k}^1 & g_{2,k+1}^1 \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & \cdots & g_{k,k}^{k-1} & g_{k,k+1}^{k-1} \\
h_1^0 & h_2^0 & \cdots & h_k^0 & h_{k+2}^0
\end{bmatrix} \qquad (22)$$

Because Gaussian elimination is isovolumetric (volume-preserving), we have

$$E = \det(\mathbf{N}_{k+1}), \qquad \det(\mathbf{G}_{k+1}) = \det(\mathbf{M}'_{k+1}) \qquad (23)$$

Moreover, from Cramer's rule (Meyer 2000) we have the relation

$$\hat{x}_{k+1} = F / E \qquad (24)$$

where $\hat{x}_{k+1}$ is the (k+1)th parameter of the LSS, already obtained by (17). From (23), (24) and (20), then

$$\rho_{k+1} = \frac{\det(\mathbf{G}_{k+1})}{\det(\mathbf{N}_{k+1})} = \frac{\det(\mathbf{M}'_{k+1})}{\det(\mathbf{N}_{k+1})} = \frac{\rho_k E - h_{k+2}^{k-1} F}{E} = \rho_k - h_{k+2}^{k-1}\,\hat{x}_{k+1} \qquad (25)$$

where $h_{k+2}^{k-1}$ is given by (14), and the quantity $-h_{k+2}^{k-1}\hat{x}_{k+1}$ is the change of the WSSR due to the newly introduced parameter; only if $-h_{k+2}^{k-1}\hat{x}_{k+1}$ is negative will the WSSR decrease.
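Continuing the toy example from the sketch above (the variables `t` and `b` are the hypothetical data defined there), the recursion (25) can be watched term by term: each call reports the newly introduced parameter and the resulting drop of the WSSR.

```python
# Watch the WSSR recursion of Eq. (25) on the toy data from the sketch above.
re = RecursiveElimination(b, np.eye(t.size))
rho_prev = re.rho
for k in range(1, 6):
    x_new, rho = re.add_column(t ** (k - 1))      # base function t^(k-1)
    print(f"k={k}:  xhat_k={x_new:+.4f}   WSSR drop={rho_prev - rho:.3e}")
    rho_prev = rho
```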

Computational complexity analyses

For solving the normal equation system, the total number of multiplications and divisions in Gaussian elimination is on the order of $k^3/3$. Further considering the WSSR calculated in parallel, the total number of multiplications and divisions is on the order of $(k+1)^3/3$. Therefore, the computational cost of Gaussian elimination for solving all the candidate models in (1) is about

$$\sum_{k=1}^{K} (k+1)^3/3 \approx (K+2)^2(K+1)^2/12 \qquad (26)$$

However, the proposed recursive elimination only needs a small additional cost at each stage, composed of the following parts: (a1) $k(k+1)/2$ for (14); (a2) $k(k+1)/2$ for (16); (a3) $k(k+1)/2$ for the back substitution. The computational cost of the proposed recursive elimination is therefore

$$\sum_{k=1}^{K} \frac{3}{2}(k^2+k) = \frac{3}{2}\sum_{k=1}^{K} k^2 + \frac{3}{2}\sum_{k=1}^{K} k = K(K+1)(2K+1)/4 + 3K(K+1)/4 = K(K+1)(K+2)/2 \qquad (27)$$

and it reduces the computational cost by at least

$$\left[1 - \frac{K(K+1)(K+2)/2}{(K+2)^2(K+1)^2/12}\right] \times 100\% = \frac{(K-1)(K-2)}{(K+1)(K+2)} \times 100\% \qquad (28)$$

Besides this, if one further uses the parallel strategy, the cost (27) becomes

$$\sum_{k=1}^{K} (k^2+k) = K(K+1)(K+2)/3 \qquad (29)$$

implying even greater computational savings. In short, formula (29) shows that the total cost of the recursive elimination is approximately equal to that of solving just the last model in (1).
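As a quick numerical illustration of (26)-(29) (the choice K = 30 matches the timing experiment below; the snippet itself is not from the paper):

```python
K = 30                                               # number of candidate models
gauss = (K + 2) ** 2 * (K + 1) ** 2 / 12             # Eq. (26): every model solved from scratch
recursive = K * (K + 1) * (K + 2) / 2                # Eq. (27): recursive elimination
parallel = K * (K + 1) * (K + 2) / 3                 # Eq. (29): with the parallel strategy
saving = (K - 1) * (K - 2) / ((K + 1) * (K + 2))     # Eq. (28)
print(gauss, recursive, parallel, round(100 * saving, 1))   # -> 82005.33..., 14880.0, 9920.0, 81.9
```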

Polynomial fitting of a GNSS orbit

For GNSS orbit fitting, let $f_1(t)=1, f_2(t)=t, \ldots, f_k(t)=t^{k-1}$ be the base functions, let $\mathbf{b}$ be the time series of the coordinate X, Y, or Z, and let $\mathbf{x}_k$ be the unknown polynomial coefficients; the model selection then determines the optimal order of the fitting model. We use the precise ephemeris file igs13124.sp3 with a sampling interval of 15 minutes, which contains data for the GPS satellites on 2005-03-03. The following calculations use the first 10 hours of data, i.e., 40 samples. Among these 40 samples, the 20 odd-numbered samples are used to establish the fitting models, while the 20 even-numbered samples are used to check them. The fitting models are evaluated by the criterion (8) and by the check points, respectively.

Fig. 1 Optimal fitting model evaluated by BIC sequences

The BIC (7) defined previously generally penalizes free parameters more strongly than the AIC, $\mathrm{AIC}_k = n\ln(\rho_k/n) + 2k + n(1 + \ln 2\pi)$, although this depends on the size of $n$ and on the relative magnitudes of $n$ and $k$. For the satellite PRN01, the BICs are given in Figure 1, where "R" and "I" in the panel titles denote the recursive elimination algorithm and the inversion algorithm, respectively. The figure shows that the optimal orders obtained from the recursive elimination algorithm are 16, 16 and 15 for the X, Y and Z components, respectively, while the optimal orders obtained from the inversion algorithm are 13, 15 and 13. The BICs obtained from the two algorithms are the same when k ≤ 12. This indicates that the recursive elimination algorithm can reach the minimum of the information criterion owing to its good numerical stability, while the inversion algorithm becomes numerically unstable when the normal equation system is seriously ill-conditioned as the number of parameters increases.


Fig. 2 Optimal fitting model evaluated by check points (panels a and b)

The Runge phenomenon detected by the check points indicates that both ends of the fitting model are not applicable, as discussed by Horemuž and Andersson (2006). For this reason, we use the 10 check points in the middle 5 hours to check the applicability of the fitting model. The fitting precision is evaluated by the mean error

$$m_k = \sum_{i=1}^{10} \mathrm{abs}\big(X_{k,i}^{\mathrm{fitting}} - X_i^{\mathrm{check}}\big) \Big/ 10 \qquad (30)$$

where $X_{k,i}^{\mathrm{fitting}}$ is the position predicted by the kth fitting model, $X_i^{\mathrm{check}}$ is the position of the ith check point, and $\mathrm{abs}(\cdot)$ is the absolute value of the fitting error. As shown in Figure 2 (a) and (b), the optimal fitting orders evaluated by the check points are 15, 16 and 14 for the X, Y and Z components, respectively, corresponding to the fitting precisions $6.36\times10^{-8}$ m, $3.52\times10^{-8}$ m and $6.49\times10^{-10}$ m, among which the highest fitting precision corresponds to the lowest BIC of the Z component shown in Figure 1. Slightly different from the optimal models obtained by the check points, the fitting precisions of the fitting models with the lowest BICs are $9.73\times10^{-8}$ m and $6.50\times10^{-10}$ m for X and Z, respectively. This shows that the model selection performed by the BIC criterion is very close to the actual situation, and the consistency is partly due to the numerical stability of the proposed elimination algorithm. It also indicates that the 5-hour arc can be replaced by the best polynomial fitting model obtained. With 12 hours of data, the best polynomial fitting model can produce the 6-hour arc almost without any accuracy loss, which is convenient for calculating the satellite position at arbitrary epochs.
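Once the predicted and check positions are available, the statistic (30) is straightforward to evaluate; a small hedged helper (array names are illustrative, not from the paper):

```python
import numpy as np

def mean_check_error(x_fitting, x_check):
    """Mean absolute error of Eq. (30) over the check points."""
    return float(np.mean(np.abs(np.asarray(x_fitting) - np.asarray(x_check))))

# e.g. m_k = mean_check_error(predicted_positions_k, check_point_positions)
```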

Fig. 3 Running time comparison

To measure the running time accurately, the proposed recursive elimination and Gaussian elimination are each repeated 100000 times on a Matlab® 7.0 platform (double precision, 4×1.9 GHz CPU, 2 GB RAM), the total time T is recorded, and the mean running time T/100000 is taken as the time of a single run. Figure 3 shows that the recursive elimination is more efficient than Gaussian elimination. For the accumulated time of evaluating and solving all candidate models, the recursive elimination takes about 41 microseconds while Gaussian elimination takes about 110 microseconds. For the mean running time of solving the last candidate model containing 30 parameters, the recursive elimination takes 3.75 microseconds, only about one-third of the 10.2 microseconds consumed by Gaussian elimination; this indicates that the larger K is, the more running time the recursive elimination saves.

Conclusions

Gaussian elimination provides an efficient tool for quickly solving least squares problems. The recursive elimination proposed here exploits a simple relation between the elimination history and the information contributed by a newly added parameter. This makes the proposed recursive elimination algorithm highly efficient, greatly reducing the computational cost of testing candidate function models for fitting data. Although the recursive elimination is derived here in the context of Gaussian elimination, the algorithm can also be extended to matrix decompositions such as the Cholesky and LU decompositions. Formula (19) is a simple expression with a clear geometric meaning: it shows that the weighted sum of squared residuals is invariant under any isovolumetric matrix transformation, such as the QR decomposition. By employing this relation, one can devise fast algorithms for calculating the weighted sum of squared residuals. The recursive relation (25) for the WSSR provides a new approach to dynamically and efficiently evaluating the gain in the degree of fit from a newly introduced parameter.

The recursive elimination, in which the improvement at each recursive stage is judged by the information criterion, can be used to examine the fitting model through the gain from the newly introduced parameter, to dynamically assess the fitting model, and then to choose the optimal model efficiently. The experiment shows that the optimal fitting model with the lowest BIC is very close to the real situation as verified by the check points. Beyond GNSS orbit fitting and interpolation, the proposed recursive elimination may be applied to many other GNSS fitting or modeling tasks.

Acknowledgements

We would like to thank the referees for their helpful comments. This work is partly supported by the National Science Foundation of China (grants No. 41020144004 and 41104018), the National High-tech R&D Program (grants No. 2009AA121405 and 2012BAB16B01), and GFZX0301040308-06.

References

Akaike, H., 1974. A new look at the statistical model identification. IEEE Transactions on Automatic Control, 19(6): 716-723.
Amiri-Simkooei, A.R., Asgari, J., Zangeneh-Nejad, F. and Zaminpardaz, S., 2012. Basic concepts of optimization and design of geodetic networks. Journal of Surveying Engineering, 138(4): 172-183.
Björck, Å., 1996. Numerical methods for least squares problems. SIAM, Philadelphia.
Cavanaugh, J.E., 1997. Unifying the derivations for the Akaike and corrected Akaike information criteria. Statistics & Probability Letters, 33(2): 201-208.
Efroymson, M., 1960. Multiple regression analysis. In: Ralston, A. and Wilf, H.S. (eds), Mathematical Methods for Digital Computers. Wiley.
Feng, Y. and Zheng, Y., 2005. Efficient interpolations to GPS orbits for precise wide area applications. GPS Solutions, 9(4): 273-282.
Horemuž, M. and Andersson, J.V., 2006. Polynomial interpolation of GPS satellite coordinates. GPS Solutions, 10(1): 67-72.
Huber, P.J., 1981. Robust statistics. Wiley Series in Probability and Mathematical Statistics. Wiley, New York.
Kadane, J.B. and Lazar, N.A., 2004. Methods and criteria for model selection. Journal of the American Statistical Association, 99(465): 279-290.
Leick, A., 2004. GPS satellite surveying. John Wiley & Sons.
Mathews, J.H. and Fink, K.D., 2004. Numerical methods using MATLAB. Pearson Prentice Hall, Upper Saddle River, N.J.
Meyer, C.D., 2000. Matrix analysis and applied linear algebra. Society for Industrial and Applied Mathematics, Philadelphia.
Schenewerk, M., 2003. A brief review of basic GPS orbit interpolation strategies. GPS Solutions, 6(4): 265-266.
Schwarz, G., 1978. Estimating the dimension of a model. The Annals of Statistics, 6(2): 461-464.
Seemkooei, A.A., 1998. Analytical methods in optimization and design of geodetic networks. Department of Surveying Engineering, K.N. Toosi University of Technology, Tehran, Iran.
Stewart, G.W., 1995. Gauss, statistics, and Gaussian elimination. Journal of Computational and Graphical Statistics, 4(1): 1-13.
Teunissen, P.J.G., 2000. Adjustment theory: an introduction. Delft University Press, Delft.
Xu, P., 2011. Parallel Cholesky-based reduction for the weighted integer least squares problem. Journal of Geodesy, 86(1): 35-52.
Yang, Y. and Gao, W., 2006. An optimal adaptive Kalman filter. Journal of Geodesy, 80(4): 177-183.
Yousif, H. and El-Rabbany, A., 2007. Assessment of several interpolation methods for precise GPS orbit. Journal of Navigation, 60(3): 443-455.

Author Biographies

Xue Shuqiang is an Associate Professor at the Chinese Academy of Surveying and Mapping, China. His PhD research deals with nonlinear adjustment, adjustment system optimization, and dynamic and discrete configuration optimization. His main interests are nonlinear least squares, geodetic network and satellite constellation design, and GNSS data processing and analysis.

Yang Yuanxi is currently a Professor of Geodesy and Navigation at the Xi'an Research Institute of Surveying and Mapping and the China National Administration of GNSS and Applications (CNAGA). He received his PhD from the Institute of Geodesy and Geophysics of the Chinese Academy of Sciences in Wuhan, and was elected an Academician of the Chinese Academy of Sciences in 2007. His research interests mainly include geodetic data processing, geodetic coordinate systems, crustal deformation analysis and integrated navigation.

