Some remarks on GNSS integer ambiguity validation methods

T. Li and J. Wang*

School of Surveying and Spatial Information Systems, University of New South Wales, NSW 2052, Sydney, Australia
*Corresponding author, email [email protected]

Ambiguity resolution is an indispensable step in fast and high precision Global Navigation Satellite System (GNSS) based positioning. In general, ambiguity resolution consists of three steps. The first step is to estimate the ambiguities through a least-squares estimation process, from which the so-called 'float' (real valued) solution is obtained. In the second step, the float solution is used to search for the integer ambiguities. Once the integer ambiguities are resolved, the last step is to apply them to the models so that a fairly accurate fixed solution can be generated. Owing to the importance of the integer ambiguities, one indispensable procedure in the second step is integer ambiguity validation. Over the past decades, considerable work has been devoted to this procedure and various approaches have been proposed, such as the R-ratio test, the F-ratio test, the W-ratio test and the integer aperture estimator. However, their performances remain a matter of debate. In this contribution, an overview of the existing ambiguity validation methods is first presented, and numerical analysis is then carried out to evaluate their performances.

Keywords: GNSS, Integer least-squares, Ambiguity validation, Integer aperture, Ratio test

Received 5 December 2011; accepted 5 December 2011. DOI 10.1179/1752270611Y.0000000027
Introduction

In Global Navigation Satellite System (GNSS) positioning, carrier phase measurements are more precise than pseudoranges. However, one troublesome problem with carrier phase measurements is that each carrier phase contains an ambiguity in the number of wavelengths, which is by nature an integer. Consequently, the most important issue in precise GNSS positioning is that the integer ambiguities have to be resolved correctly; once they are obtained, the precision of the carrier phases can be fully exploited in positioning. The whole process of ambiguity resolution consists of three steps. Initially, with an appropriate mathematical model, the float (real valued) solution is estimated by least-squares or a Kalman filter. Then a search is carried out around the float solution to find the integer candidates [3]. As there may be several sets of integer ambiguity candidates, a validation procedure is required, in which the most likely candidate is validated and separated from the others based on statistical theory or other methods. To fix the real valued ambiguities to integer candidates, LAMBDA [12], [13], [14] is adopted in this paper. The last step is to estimate the unknown parameters with the correct integer ambiguities.
For the ambiguity validation step, several procedures have been developed based on the quadratic form of the residuals associated with the most likely set of integer ambiguities and the quadratic form associated with the second most likely set, such as the F-ratio [5], the R-ratio [4], the W-ratio [24], the difference test [23] and the projector test [7]. With a given critical value, the best candidate is then identified. In such validation procedures, the fixed ambiguities are treated as constant values. In another approach to ambiguity validation, the integer aperture (IA) estimator [18] has been developed. In [28], the IA estimator was presented as a framework embracing the classical hypothesis testing methods, with the geometries of the different validation methods reflected in their aperture pull-in regions. Under this framework, ambiguity validation is carried out by determining the critical values from a pre-defined fail-rate, which is more defensible. However, as commented in [9], the theory of the IA estimator assumes at the design stage that the most likely set of candidates is correct (i.e. equals the true integer values). Numerical tests show that determining the critical values according to the IA estimator is not straightforward, especially in real applications, so ambiguity validation by the IA estimator has some limitations. In this contribution, the performances of different ambiguity validation methods are studied and analysed. The paper first introduces the mathematical models; the following section then gives an overview of the validation methods together with some discussion. After that, based on numerical analysis, the performance of the different ambiguity
validation methods and the limitations of the IA theory are analysed and discussed in detail. Conclusions and suggestions are given in the final section.
Mathematical models

The initial GNSS observation models contain unknown parameters such as coordinates, integer ambiguities, ionospheric delay and tropospheric delay. Since the ionospheric and tropospheric delays degrade the accuracy, double differencing is employed to largely eliminate them, giving the functional models

$$\nabla\Delta\varphi = \frac{1}{\lambda}\nabla\Delta\rho + \nabla\Delta N + e_{\varphi} \quad (1)$$

$$\nabla\Delta P = \nabla\Delta\rho + e_{P} \quad (2)$$

where $\nabla\Delta$ is the double differencing operator between satellites and receivers, $\varphi$ and $P$ are the carrier phase and code measurements respectively, $\lambda$ is the carrier phase wavelength, $\rho$ is the geometric distance between satellites and receivers, $N$ is the integer ambiguity in cycles and $e$ represents the measurement errors. With an approximate rover position given, the above models are linearised as

$$l = Ax + v \quad (3)$$

where $l = \left(\nabla\Delta\varphi - \frac{1}{\lambda}\nabla\Delta\rho^{*},\; \nabla\Delta P - \nabla\Delta\rho^{*}\right)^{T}$, $v = (e_{\varphi}, e_{P})^{T}$, $x = (x_{r}, \nabla\Delta N)^{T}$, $A$ is the design matrix of both the coordinates ($A_{x_r}$) and the ambiguities ($A_a$), $\rho^{*}$ is the approximate distance and $x_{r}$ is the coordinate vector. The stochastic model for the double differenced measurements can be expressed as

$$D(y) = \sigma_0^2 Q = \sigma_0^2 P^{-1} \quad (4)$$

where $D(\cdot)$ denotes the covariance matrix, $\sigma_0^2$ is the a priori variance factor, and $Q$ and $P$ are the cofactor matrix and weight matrix of the measurements, respectively. By applying the classical least-squares criterion $v^T P v = \min$, the unknown parameters and their cofactor matrix are uniquely estimated as [25]

$$\hat{x} = \begin{pmatrix} \hat{x}_r \\ \hat{a} \end{pmatrix} = (A^T P A)^{-1} A^T P l, \qquad Q_{\hat{x}} = (A^T P A)^{-1} = \begin{bmatrix} Q_{\hat{x}_r} & Q_{\hat{x}_r \hat{a}} \\ Q_{\hat{a} \hat{x}_r} & Q_{\hat{a}} \end{bmatrix} \quad (5)$$

where $\hat{a}$ represents the real valued (float) solution of the integer ambiguities. The a posteriori variance factor is then

$$\hat{\sigma}_0^2 = \frac{\hat{v}^T P \hat{v}}{f} = \frac{V_0}{f} \quad (6)$$

where $f$ is the degree of freedom.
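As an illustration of equations (3)-(6), the following minimal numpy sketch computes the float solution, its cofactor matrix and the a posteriori variance factor from a given design matrix, observation vector and weight matrix; assembling A, l and P from raw GNSS data is assumed to be done elsewhere, and the names simply mirror the notation above:

```python
import numpy as np

def float_solution(A, l, P):
    """Weighted least-squares float solution of l = Ax + v.

    A : design matrix (coordinate and ambiguity columns)
    l : observed-minus-computed vector
    P : weight matrix of the double differenced measurements
    Returns x_hat (coordinates and float ambiguities), the cofactor
    matrix of eq. (5) and the a posteriori variance factor of eq. (6).
    """
    N = A.T @ P @ A                   # normal equation matrix
    Q_x = np.linalg.inv(N)            # cofactor matrix of the estimates
    x_hat = Q_x @ (A.T @ P @ l)       # eq. (5)
    v = l - A @ x_hat                 # least-squares residuals
    f = A.shape[0] - A.shape[1]       # degrees of freedom
    sigma0_sq = float(v @ P @ v) / f  # eq. (6): V0 / f
    return x_hat, Q_x, sigma0_sq
```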
An overview of existing ambiguity validation methods

The unknown parameters estimated from a standard least-squares procedure form the so-called float solution. Considering the integer nature of the ambiguities, an integer least-squares (ILS) problem needs to be solved, e.g. [8], [12], [16]. In this paper, LAMBDA is used to search for the integer ambiguities within the constructed hyper-ellipsoid. The remaining problem is that the best integer candidate has to be selected from all the integer candidates, and traditionally this is accomplished by investigating the quadratic forms of the residuals. Another proposed theory is integer aperture estimation, which has the hybrid nature of yielding integer as well as non-integer outcomes. This approach allows users to control a pre-defined fail-rate themselves, yet its application in practice requires more numerical tests and analysis.

Based on the least-squares estimation, ambiguity validation starts from $\hat{a}$, $\hat{\sigma}_0^2$ and $Q_{\hat{a}}$, and the float ambiguity solution $\hat{a}$ is assumed to be normally distributed as $N(a, Q_{\hat{a}})$. The quadratic form of the residuals for the float solution is defined as $V_0$, while the quadratic form of the fixed solution residuals is $V = V_0 + R$, where

$$R = (\hat{a} - \check{a})^T Q_{\hat{a}}^{-1} (\hat{a} - \check{a})$$

measures the distance between $V_0$ and $V$ [25], with $\check{a}$ the fixed (integer) solution. We assume $V_1$ and $V_2$ are the minimum and second minimum quadratic forms of the residuals among the ambiguity fixed solutions, corresponding to $\check{a}_1$, $\check{a}_2$ and $R_1$, $R_2$. Ambiguity validation methods can then be divided into two main groups [9].
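The quantities R1 and R2 (and hence V1 = V0 + R1 and V2 = V0 + R2) can be evaluated for any list of integer candidates; a small sketch, assuming the candidate list comes from a search such as LAMBDA:

```python
import numpy as np

def best_two(a_hat, Q_a, candidates):
    """Evaluate R(z) = (a_hat - z)^T Q_a^{-1} (a_hat - z) for every
    integer candidate z and return the best and second best
    candidates together with R1 and R2."""
    Q_inv = np.linalg.inv(Q_a)
    R = np.array([(a_hat - z) @ Q_inv @ (a_hat - z) for z in candidates])
    i1, i2 = np.argsort(R)[:2]
    return candidates[i1], candidates[i2], R[i1], R[i2]
```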
Statistical tests based on the best and second best ambiguity candidates

Traditionally, ambiguity validation methods consider both the best and the second best ambiguity candidates, and the relationship between them is analysed from a statistical point of view. The most commonly used methods are the F-ratio, R-ratio, W-ratio, difference test and projector test; their detailed definitions are as follows.

Ratio tests
The first approach proposed for ambiguity validation is the F-ratio test [5], [27] (here the inverse version is used for analysis), which is the ratio between $V_1$ and $V_2$

$$F = \frac{V_1}{V_2} = \frac{V_0 + R_1}{V_0 + R_2} \quad (7)$$

Normally, the F-ratio test statistic is assumed to have an F-distribution, with the degrees of freedom of $V_1$ and $V_2$ respectively, and a critical value $c$ can then be obtained by specifying a significance level. However, according to e.g. [24] and [15], the numerator and denominator are not independent, so the F-ratio does not follow an F-distribution. Sometimes there are large discrepancies between the obtained results and their true values, which clearly shows that the F-ratio is not that reliable, even though certain empirical critical values, e.g. 1.5 (inversed 0.67), 2 (inversed 0.5) and 3 (inversed 0.3), perform satisfactorily [2], [10].

An alternative and popular test similar to the F-ratio is the ratio between $R_1$ and $R_2$, known as the R-ratio [4], [27]. The inverse of the R-ratio is constructed as

$$R = \frac{R_1}{R_2} \leq c \quad (8)$$

As the above formula suggests, the distribution of the R-ratio is also unknown. Hence it would be incorrect to determine the performance of the ratio test on the basis of the distributional results provided by the classical theory of hypothesis testing [22], [28].
Theoretically, the choice of a critical value by an empirical value is groundless. In the original work of [4], critical values ranging from 5 to 10 were given. In practice, however, many researchers validate the ambiguities based on the R-ratio with critical values such as 2 (inversed 0.5) or 2.5 (inversed 0.4), see e.g. [6], [11], [22].

In [24], the discrimination procedure is constructed either by comparing the likelihood of two integer candidates or by artificially nesting the two compared models with a nesting parameter, and the W-ratio is then defined as

$$W = \frac{d}{[\mathrm{var}(d)]^{1/2}} \geq c \quad (9)$$

where

$$d = V_2 - V_1, \qquad \mathrm{var}(d) = \sigma^2 Q_d \quad (10)$$

where $\sigma^2$ can be chosen by the user either as the a priori variance $\sigma_0^2$ or as the a posteriori variance $\hat{\sigma}_0^2$. By applying the variance-covariance propagation law, the variance term is $Q_d = 4(\check{a}_2 - \check{a}_1)^T Q_{\hat{a}}^{-1} (\check{a}_2 - \check{a}_1)$.

Assuming that $W_a$ and $W_s$ are the two ratios corresponding to the a priori variance $\sigma_0^2$ and the a posteriori variance $\hat{\sigma}_0^2$, they are supposed to have a truncated standard normal distribution and a truncated Student t-distribution respectively, from which the critical values can easily be obtained. Under the assumption that the fixed ambiguities are deterministic quantities, the W-ratios can provide a rigorous confidence level for the validation test.
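A sketch of the W-ratio computation under the above definitions; the variance factor argument selects between the Wa variant (a priori variance) and the Ws variant (a posteriori variance):

```python
import numpy as np

def w_ratio(V1, V2, a1, a2, Q_a, variance_factor):
    """W = d / sqrt(var(d)), eq. (9), with d = V2 - V1 and, from
    variance propagation, Q_d = 4 (a2 - a1)^T Q_a^{-1} (a2 - a1);
    variance_factor is sigma0^2 (a priori, Wa) or its a posteriori
    estimate (Ws)."""
    d = V2 - V1
    diff = a2 - a1
    Q_d = 4.0 * diff @ np.linalg.inv(Q_a) @ diff
    return d / np.sqrt(variance_factor * Q_d)
```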
Difference test

In [23], the validation approach is to analyse the difference between $R_1$ and $R_2$. First, a global model test is carried out; once it is passed, the ambiguity can be validated by comparing the difference statistic with its critical value

$$R_2 - R_1 \geq c \quad (11)$$

where $c$ is a non-negative scalar, a user defined tolerance level. The test accepts the best solution if the float solution is much closer to the best solution than to the second best one. Critical values of 15 and 12 were suggested in [23] and [7] respectively. Still, the determination of the critical value, which is also empirical, is controversial, as the distribution of the difference statistic is unknown. A looser setting of the critical value increases the number of accepted solutions, but also increases the number of wrongly accepted candidates (type I errors). Moreover, the difference statistic depends on the a priori variance factor given in equation (4).
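For comparison, the acceptance rules of the inversed F-ratio (7), the inversed R-ratio (8) and the difference test (11) are one-liners once V0, R1 and R2 are available; the default critical values below are merely the empirical values quoted in the text:

```python
def acceptance_decisions(V0, R1, R2, c_f=0.67, c_r=0.5, c_d=15.0):
    """Accept/reject decisions of three tests on the best candidate.
    The inversed ratios accept when the statistic is small, the
    difference test when R2 - R1 is large."""
    f_ratio = (V0 + R1) / (V0 + R2)   # eq. (7), inversed F-ratio
    r_ratio = R1 / R2                 # eq. (8), inversed R-ratio
    d_stat = R2 - R1                  # eq. (11), difference test
    return f_ratio <= c_f, r_ratio <= c_r, d_stat >= c_d
```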
Projector test

The projector test stems from [1] and was proposed in [7], with a derivation also given in [24]. The null hypothesis is that there is no outlier, so that $\check{a}_1$ is accepted, while the alternative hypothesis is that there is an outlier in the direction of $(\check{a}_2 - \check{a}_1)$; consequently

$$y = A_a \check{a}_1 + A_a(\check{a}_2 - \check{a}_1)c + A_{x_r} x_r + v$$

with the quantity $c$ and its variance estimated as

$$\hat{c} = \frac{(\check{a}_2 - \check{a}_1)^T Q_{\hat{a}}^{-1}(\hat{a} - \check{a}_1)}{(\check{a}_2 - \check{a}_1)^T Q_{\hat{a}}^{-1}(\check{a}_2 - \check{a}_1)}, \qquad \hat{\sigma}_{\hat{c}}^2 = \frac{4V_1 - \hat{c}^2 Q_d}{(n-4)\,Q_d} \quad (12)$$

The test statistic $\hat{c}/\hat{\sigma}_{\hat{c}}$ then follows a $t(n-4)$ distribution (in this contribution, only baseline components are considered) if $\check{a}_1$ is selected; if $\check{a}_2$ is selected, the test statistic follows a non-central t-distribution. The quantity $\hat{c}$ has been proved to be always smaller than or equal to 0.5, so that the definition and the distribution are actually not strict either. Besides, as discussed in [24], the derivation is unfortunately not rigorous because the non-centrality parameter is formulated with the estimated variance instead of its known value.

So far, there has been no criterion identifying which of the proposed ambiguity validation methods is the best, since the critical values are chosen differently for different distributions. Their performances should be evaluated and tested through various means, including theoretical analysis and evaluation against the ground truth for the integer ambiguities.
Integer aperture theory

On the basis of the ILS estimator, IA theory was first introduced by [18], and the IA estimator $\bar{a}$ is defined as

$$\bar{a} = \sum_{z \in Z^n} z\,\omega_z(\hat{a}) + \hat{a}\left(1 - \sum_{z \in Z^n} \omega_z(\hat{a})\right) \quad (13)$$

with the indicator function $\omega_z(x)$ defined as

$$\omega_z(x) = \begin{cases} 1 & \text{if } x \in \Omega_z \\ 0 & \text{otherwise} \end{cases} \quad (14)$$

where the $\Omega_z$ are the aperture pull-in regions and their union $\Omega \subset R^n$ is the aperture space, which is translationally invariant. With the above definition, three outcomes can be distinguished:

$\hat{a} \in \Omega_a$ — success: correct integer estimation
$\hat{a} \in \Omega \setminus \Omega_a$ — failure: incorrect integer estimation
$\hat{a} \notin \Omega$ — undecided: ambiguity not fixed to an integer

The corresponding probabilities of success ($P_s$), failure ($P_f$) and undecided ($P_u$) are given by

$$P_s = P(\bar{a} = a) = \int_{\Omega_a} f_{\hat{a}}(x)\,dx$$

$$P_f = \int_{\Omega \setminus \Omega_a} f_{\hat{a}}(x)\,dx \quad (15)$$

$$P_u = 1 - P_s - P_f$$

In the case of a GNSS model, $f_{\hat{a}}(x)$ represents the probability density function of the float ambiguities and is usually assumed to be normal. The IA estimator allows the user to choose a pre-defined fail-rate and then determine the critical value accordingly. With the critical value determined, the user is able to compare different validation methods by calculating the corresponding success rates.
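Before turning to specific aperture shapes, it is worth noting that the probabilities of equation (15) can be estimated by Monte Carlo simulation: draw float solutions around the true integer vector (taken as 0 without loss of generality, since the pull-in regions are translationally invariant), classify every sample as success, failure or undecided under a chosen aperture test, and count. A sketch, with a simple rounding neighbourhood standing in for a rigorous LAMBDA search:

```python
import itertools
import numpy as np

def ia_rates(Q_a, accept, n_samples=50_000, seed=1):
    """Empirical Ps, Pf, Pu of eq. (15).  'accept' is any aperture
    test taking (R1, R2) of the best and second best candidates and
    returning True when the best candidate is accepted."""
    rng = np.random.default_rng(seed)
    n = Q_a.shape[0]
    Q_inv = np.linalg.inv(Q_a)
    offsets = np.array(list(itertools.product((-1, 0, 1), repeat=n)))
    n_success = n_fail = 0
    for a in rng.multivariate_normal(np.zeros(n), Q_a, n_samples):
        cands = np.round(a) + offsets      # neighbourhood of the float solution
        diff = cands - a
        R = np.einsum('ij,jk,ik->i', diff, Q_inv, diff)
        i1, i2 = np.argsort(R)[:2]
        if accept(R[i1], R[i2]):
            if not np.any(cands[i1]):      # best candidate equals the true vector 0
                n_success += 1
            else:
                n_fail += 1
    Ps, Pf = n_success / n_samples, n_fail / n_samples
    return Ps, Pf, 1.0 - Ps - Pf

# e.g. the R-ratio as an IA test with critical value 0.5:
# Ps, Pf, Pu = ia_rates(Q_a, lambda R1, R2: R1 / R2 <= 0.5)
```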
Ellipsoidal integer aperture

A rather straightforward way of determining the IA pull-in region is the ellipsoidal integer aperture (EIA) [19], which is constructed as

$$E_z = E_0 + z, \qquad E_0 = S_0 \cap C_{\varepsilon,0}, \qquad \forall z \in Z^n \quad (16)$$

with $S_0$ the least-squares pull-in region and $C_{\varepsilon,0} = \{x \in R^n \mid \|x\|^2_{Q_{\hat{a}}} \leq \varepsilon^2\}$ an origin centred ellipsoidal region, the size of which is controlled by the aperture parameter $\varepsilon$ (or critical value; in the sequel 'critical value' is used), where $\|x\|^2_{Q_{\hat{a}}} = x^T Q_{\hat{a}}^{-1} x$.
The EIA acceptance statistic has a Chi-square distribution, and the EIA probabilities of success, failure and undecided are therefore given as

$$P_s = P\left(\chi^2(n, 0) \leq \varepsilon^2\right)$$

$$P_f = \sum_{z \in Z^n \setminus \{0\}} P\left(\chi^2(n, \lambda_z) \leq \varepsilon^2\right) \quad (17)$$

$$P_u = 1 - P_f - P_s$$

where $\chi^2(n, \lambda_z)$ represents a random variable with a non-central Chi-square distribution, $n$ being the degrees of freedom and $\lambda_z = z^T Q_{\hat{a}}^{-1} z$ the non-centrality parameter.
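Equation (17) is directly computable once the infinite sum over z is truncated; a sketch using scipy's central and non-central Chi-square distributions, with the truncation box |z_i| <= z_max as an assumption that is adequate only when Q_a is small:

```python
import itertools
import numpy as np
from scipy.stats import chi2, ncx2

def eia_probabilities(Q_a, eps, z_max=3):
    """Ps, Pf, Pu of eq. (17) for the ellipsoidal integer aperture
    with aperture parameter (critical value) eps."""
    n = Q_a.shape[0]
    Q_inv = np.linalg.inv(Q_a)
    Ps = chi2.cdf(eps**2, df=n)               # central term, z = 0
    Pf = 0.0
    for z in itertools.product(range(-z_max, z_max + 1), repeat=n):
        z = np.asarray(z)
        if z.any():                           # all z in Z^n except 0
            lam = float(z @ Q_inv @ z)        # non-centrality lambda_z
            Pf += ncx2.cdf(eps**2, df=n, nc=lam)
    return Ps, Pf, 1.0 - Ps - Pf
```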
Optimal integer aperture

The optimal integer aperture (OIA), which aims at maximising the success rate, was proposed by [20], [28] as

$$\max P_s \quad \text{subject to} \quad P_f = \beta$$

with the pull-in region $\Omega_0 \subset S_0$, where $\beta$ is the pre-defined fail-rate. The OIA can be realised as a test on the ratio between the PDF of the ambiguity residuals and the PDF of the float ambiguities

$$\frac{f_{\hat{\varepsilon}}(\hat{\varepsilon})}{f_{\hat{a}}(\hat{\varepsilon})} = \frac{\sum_{z \in Z^n} \exp\left(-\frac{1}{2}\|\hat{\varepsilon} + z\|^2_{Q_{\hat{a}}}\right)}{\exp\left(-\frac{1}{2}\|\hat{\varepsilon}\|^2_{Q_{\hat{a}}}\right)} \leq c \quad (18)$$

with $\hat{\varepsilon} = \hat{a} - \check{a}$. By definition, the OIA attains the highest success rate. Its application in practice, however, is quite time consuming, which follows from the PDF of the ambiguity residuals: in [17] an exact formula for $f_{\hat{\varepsilon}}(x)$ is derived, and its evaluation requires a summation over infinitely many integers, which is impossible to carry out in practice. As a consequence, a finite set of integers, searched within an ellipsoidal region, is used instead [29].
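A sketch of the OIA acceptance statistic of equation (18), with the infinite sum over the integers replaced by a finite box in the spirit of the ellipsoidal search of [29]; the box half-width z_max is an assumption:

```python
import itertools
import numpy as np

def oia_statistic(e_hat, Q_a, z_max=3):
    """Ratio f_e(e_hat) / f_a(e_hat) of eq. (18), where
    e_hat = a_float - a_fixed is the ambiguity residual; the fixed
    solution is accepted when the returned value is <= c.  Note the
    ratio is always >= 1, since the z = 0 term equals the denominator,
    consistent with the OIA critical values of Table 2 below."""
    n = Q_a.shape[0]
    Q_inv = np.linalg.inv(Q_a)
    num = 0.0
    for z in itertools.product(range(-z_max, z_max + 1), repeat=n):
        d = e_hat + np.asarray(z)
        num += np.exp(-0.5 * float(d @ Q_inv @ d))
    den = np.exp(-0.5 * float(e_hat @ Q_inv @ e_hat))
    return num / den
```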
In the next section, we discuss the following two questions through numerical analysis: (1) how to compare different validation methods; and (2) how to determine critical values from a given fail-rate for different validation methods.

Numerical analysis

As stated above, different ambiguity validation approaches are hard to compare directly because of their differently chosen critical values. Within the framework of IA estimation, however, a comparison is possible if a pre-defined fail-rate is provided. In this section, the comparison of different validation methods is first studied by simulation, then the limitations of the IA theory are discussed, and finally the performance of both IA and non-IA based methods is analysed in real applications.

1 Determination of critical values with a given fail-rate
Comparison of different validation methods

With the same pre-defined fail-rate, one is able to select the critical value for each validation method and then determine the corresponding success rates. By comparing the success rates, we can identify which method may perform best; the detailed procedure is shown in Fig. 1. The geometry information of a GNSS model is entirely reflected in $Q_{\hat{a}}$, so Monte Carlo simulations were used to investigate the performance of the different validation methods for a given $Q_{\hat{a}}$. Following Fig. 1, a random generator was used to simulate a number of samples (e.g. 50 000) of the estimated float ambiguities. A bivariate normal distribution with the following variance-covariance matrix is considered first [28]

$$Q_{\hat{a}} = \begin{bmatrix} 0.1021 & -0.0364 \\ -0.0364 & 0.1100 \end{bmatrix}$$
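The procedure of Fig. 1 can be sketched for the R-ratio with this matrix: simulate float samples around the true vector 0, record the inversed ratio R1/R2 and whether the best candidate is correct, pick the critical value whose empirical fail-rate matches the target, and read off the success rate. The rounding neighbourhood again stands in for LAMBDA, and the quantile step assumes enough failed samples exist:

```python
import itertools
import numpy as np

Q_a = np.array([[ 0.1021, -0.0364],
                [-0.0364,  0.1100]])

def simulate_r_ratio(Q_a, n_samples=50_000, seed=2):
    """Per-sample inversed R-ratio and a correctness flag."""
    rng = np.random.default_rng(seed)
    n = Q_a.shape[0]
    Q_inv = np.linalg.inv(Q_a)
    offsets = np.array(list(itertools.product((-1, 0, 1), repeat=n)))
    ratio = np.empty(n_samples)
    correct = np.empty(n_samples, dtype=bool)
    for k, a in enumerate(rng.multivariate_normal(np.zeros(n), Q_a, n_samples)):
        cands = np.round(a) + offsets
        diff = cands - a
        R = np.einsum('ij,jk,ik->i', diff, Q_inv, diff)
        i1, i2 = np.argsort(R)[:2]
        ratio[k] = R[i1] / R[i2]
        correct[k] = not np.any(cands[i1])
    return ratio, correct

ratio, correct = simulate_r_ratio(Q_a)
wrong = np.sort(ratio[~correct])
k = int(0.01 * ratio.size)               # target fail-rate Pf = 0.01
c = wrong[k - 1]                         # k wrong acceptances => empirical Pf = 0.01
Ps = np.mean((ratio <= c) & correct)     # success rate at this critical value
```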
The relationship between the fail-rate and the critical value is one to one for each ambiguity validation method, which implies that the fail-rate changes with the chosen critical value for each method. The success rates obtained with given fail-rates are shown in Table 1. The OIA, which has the highest success rate, performs best regardless of the size of the fail-rate. In this case the R-ratio, Wa-ratio and difference test, each treated as a separate IA method, perform quite close to the optimal IA estimator. When the fail-rate increases from 0.005 to 0.02, these three tests have quite similar success rates, and it is impossible to judge which of the three is best, because the sample size of the simulation has some impact on the results; such impacts are large enough to cause the slight differences in the success rates. The success rate of the projector test as an IA method is smaller than the above three. The EIA is easy to implement, but its performance is not as good. In [9], a combination of the R-ratio and an overlapped EIA was proposed: to avoid the conservativeness of the EIA, the EIA regions are allowed to overlap so as to increase the success rate, and to ensure that the best ambiguity candidate is statistically better than the others, the R-ratio is combined with the overlapped EIA. Theoretically, however, the EIA is defined without overlap, and overlapping the EIA increases the fail-rate as well. Apart from this, the combination of the EIA with the R-ratio decreases the success rate, which defeats the purpose of the overlapped EIA. The F-ratio as an IA method performs slightly worse than the other validation methods. The reason could be that, as shown in equation (7) and also commented in [28], both the float solution residuals and the ambiguity residuals are involved, so that the dimension of the aperture space is different.

Table 1 Two-dimensional case: success rates and critical values with the pre-defined fail-rates

Pf          FTIA   RTIA   WTIA   DTIA   PTIA   EIA    OIA
0.005  c    0.237  0.043  1.148  0.494  0.568  1.047  7.447
       Ps   0.146  0.150  0.150  0.148  0.149  0.162  0.151
0.01   c    0.288  0.084  0.981  0.594  0.748  1.066  6.424
       Ps   0.244  0.252  0.250  0.246  0.244  0.253  0.247
0.02   c    0.355  0.146  0.801  0.835  0.940  1.112  5.230
       Ps   0.353  0.361  0.358  0.355  0.356  0.363  0.359

Table 2 presents a four-dimensional real data example, with the variance-covariance matrix

$$Q_{\hat{a}} = \begin{bmatrix} 1.1834 & -1.3348 & 0.2814 & -0.6537 \\ -1.3348 & 1.6088 & -0.2619 & 0.8123 \\ 0.2814 & -0.2619 & 0.0975 & -0.1139 \\ -0.6537 & 0.8123 & -0.1139 & 0.4177 \end{bmatrix}$$

Table 2 Four-dimensional case: success rates and critical values with the pre-defined fail-rates

Pf          FTIA    RTIA    WTIA    DTIA    PTIA    EIA     OIA
0.001  c    0.9815  0.3884  0.8316  7.9241  1.6938  2.1351  1.0193
       Ps   0.8978  0.8803  0.9015  0.9071  0.9012  0.6644  0.9072
0.01   c    0.9981  0.9020  0.0867  1.0887  2.3774  2.7963  1.5860
       Ps   0.9831  0.9834  0.9835  0.9825  0.9832  0.9016  0.9836
For the fail-rate of 0.001, the OIA still yields the highest success rate, and the difference test performs much better than the other conventional tests in this case. When the fail-rate is 0.01, the W-ratio test is preferable to any other non-IA validation method. In both cases, the EIA has the worst performance. It should be mentioned that a four-dimensional simulation consumes hours of computation for each method when the user chooses the Matlab built-in function 'fzero' (a tool for finding a zero of a function near a roughly given starting value) to run the simulation. The numerical results reveal three issues: (1) the OIA performs best in these two simulated cases, but its efficiency is the worst among all the validation methods, which indicates that it is hardly applicable to real time GNSS positioning; (2) the higher the dimension, the more time the simulations consume; (3) when using 'fzero' to search for the critical value, the pre-defined fail-rate should be chosen sensibly by the user, since for a given satellite geometry there is an upper bound on the achievable fail-rate.
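Where Matlab's 'fzero' is mentioned, any scalar root finder can be substituted; a sketch with scipy's brentq, where pf_of_c is a simulation-based function such as one built from the sampling code above. The bracket [lo, hi] must enclose a sign change, which is exactly the point made in issue (3): the target fail-rate must lie below the geometry-dependent upper bound, otherwise no root exists:

```python
from scipy.optimize import brentq

def critical_value_for(pf_of_c, pf_target, lo=1e-6, hi=1.0):
    """Solve Pf(c) = pf_target for the critical value c.  Pf(c) is
    monotone in c for the ratio-type tests, so a bracketing root
    finder converges; each evaluation is itself a Monte Carlo run,
    which is why this search is slow in high dimensions."""
    return brentq(lambda c: pf_of_c(c) - pf_target, lo, hi)

# e.g. with the simulated samples from the previous sketch:
# pf_of_c = lambda c: np.mean((ratio <= c) & ~correct)
# c = critical_value_for(pf_of_c, 0.01)
```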
Limitations of the IA methods: pre-defined fail-rate versus the critical value

The integer aperture theory allows the user to determine the critical values based on the user preferred fail-rate. Theoretically, once the satellite geometry is known, the relationship between the pre-defined fail-rate and the critical value is already determined: when a fail-rate is pre-defined, the critical value is uniquely fixed under the current satellite geometry. The problem, however, is that this relationship cannot be represented analytically; only simulations are feasible, so calculating a critical value from a pre-defined fail-rate depends entirely on simulation. In order to generate reliable results, the number of samples should be as large as possible; as suggested in [19], [28], a desirable sample size N should be larger than 5000. To show the influence of the sample size N, two simulations were carried out for comparison with the values given in [28], using the same Q_a = [0.0865, -0.0364; -0.0364, 0.0847]. First, the sample size was increased. As listed in Table 3, the same critical value was utilised, and the simulated results fluctuated around the given values.

Table 3 Comparison of R-ratio as an IA estimator

N          c      Pf        Ps
Ref. 28    0.035  0.001     0.168
5000       0.035  0.001400  0.165600
10 000     0.035  0.000700  0.172000
50 000     0.035  0.000480  0.170780
100 000    0.035  0.001260  0.166340
500 000    0.035  0.001342  0.167228
5 000 000  0.035  0.000733  0.171139
2 Fail-rate changes with both sample size and number of simulations for R-ratio

3 Fail-rate changes with both sample size and number of simulations for Wa-ratio

4 Fail-rate changes with both sample size and number of simulations for difference test

Even if the sample size is held constant, as shown in Table 4, the fail-rate and success rate are not stable. The sample size N undoubtedly influences the fail-rate for a known critical value (or the critical value for a given fail-rate); hence the sample size N, to some extent, determines the reliability of integer aperture theory. As shown in Figs. 2-4, when the sample size increases from 5000 to 50 000, the fail-rate and the success rate become smoother: the larger the sample size, the more reliable the results. As a routine, however, the simulated results may be problematic whenever the accuracy of the critical value matters. According to the numerical analysis, a sample size larger than 50 000 gives a more reliable solution, but even on a powerful PC such a simulation (calculating the fail-rate for a given critical value) takes over 1.5 s per solution and is thus hardly usable for real time operations.

Table 4 Ten simulations of R-ratio with the same sample size

N          c      Pf        Ps
Ref. 28    0.035  0.001     0.168
5 000 000  0.035  0.001099  0.167604
5 000 000  0.035  0.001221  0.170165
5 000 000  0.035  0.000977  0.168333
5 000 000  0.035  0.000833  0.169142
5 000 000  0.035  0.001020  0.166257
5 000 000  0.035  0.000976  0.168333
5 000 000  0.035  0.001026  0.166258
5 000 000  0.035  0.001120  0.166259
5 000 000  0.035  0.001143  0.167235
5 000 000  0.035  0.000932  0.170143
Limitations of the IA methods in real applications

A further problem is that increasing the sample size aggravates the computational burden, which is troublesome in kinematic positioning. Determining the critical value more rapidly and reliably from the pre-defined fail-rate is therefore another major issue. A feasible way is to use a look-up table (here only for the R-ratio), as created in [21], [22]. The look-up table is generated based on the lower bound of the ILS fail-rate (expressed by the fail-rate of integer bootstrapping), and linear interpolation is then applied to find the corresponding critical value. In order to analyse the performance of the look-up table, real data were collected at a sampling rate of 1 s on 12 September 2009 in Sydney, Australia, and processed with different session lengths. With single frequency carrier phase measurements and an elevation cut-off angle of 15°, five satellites were used to form the double differences, so that four ambiguities needed to be resolved. As shown in Table 5, when the critical value c is determined from the look-up table, the resolved ambiguities are accepted as undoubtedly correct for session lengths of 3 min and longer. The truth, however, is that the correct ambiguities (given by processing the whole dataset) are only obtained when the session length is longer than 6 min.
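The interpolation step itself is straightforward; below is an illustrative sketch in which the table axis is the bootstrapped lower bound of the ILS fail-rate and the entries are R-ratio critical values for one pre-defined fail-rate. The grid values are placeholders chosen for illustration, not the published table of [21], [22]:

```python
import numpy as np

# Placeholder grid: ILS fail-rate (bootstrapped lower bound) vs critical value
ils_fail_rate_grid = np.array([1e-4, 1e-3, 1e-2, 1e-1])   # assumed axis
critical_value_grid = np.array([1.00, 0.60, 0.25, 0.07])  # assumed entries

def c_from_lookup(ils_fail_rate):
    """Linear interpolation in the fixed look-up table; outside the
    grid the end values are returned, mirroring a fixed table's
    inability to follow the actual satellite geometry."""
    return float(np.interp(ils_fail_rate, ils_fail_rate_grid,
                           critical_value_grid))
```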
This tells us that using a look-up table to determine the critical value is not versatile: a fixed look-up table sometimes fails to reflect the changing satellite geometry properly, even though it is far more efficient than running the simulations. Table 6 shows the differences between the look-up table and simulation. With the critical values from the look-up table, the simulated fail-rates are not particularly close to 0.001 and 0.01 respectively, with notable errors of 10-20%; conversely, for the same given fail-rates, the simulated critical values also show some discrepancies from the look-up table results. Note that the Matlab built-in function 'fzero' was utilised here to find the corresponding critical values more efficiently and rigorously (compared with repeated simulation). An even more noticeable phenomenon, however, is that neither the look-up table nor the simulation gives the correct decision in this case (a type I error).

Table 5 Ambiguity validation using the look-up tables to determine the critical values

Session        Integer ambiguities        RTIA    Pf = 0.001      Pf = 0.01
length/min   sv17   sv3   sv18   sv21     c       c01     Ps      c02     Ps      Validation results*
0.75          18    -17    41    -25      0.899   0.068   0.034   0.233   0.247   CR
1            -23    -35   -22    -16      0.947   0.236   0.569   0.611   0.914   CR
3            -22    -31   -27    -18      0.267   1       0.969   1       1.0     WA
5            -22    -31   -27    -18      0.439   1       1.0     1       1.0     WA
6            -22    -31   -27    -18      0.980   1       1.0     1       1.0     WA
7            -23    -36   -21    -15      0.525   1       1.0     1       1.0     CA
8            -23    -36   -21    -15      0.391   1       1.0     1       1.0     CA
15           -23    -36   -21    -15      0.009   1       1.0     1       1.0     CA

*CR = correctly rejected, WA = wrongly accepted, CA = correctly accepted.

Table 6 An insight into the determination of critical values from simulations and a look-up table

                              0.75 min                    1 min
Look-up c -> simulated Pf     c = 0.068 -> Pf = 0.0009    c = 0.236 -> Pf = 0.0008
                              c = 0.233 -> Pf = 0.0087    c = 0.611 -> Pf = 0.0095
Given Pf -> simulated c       Pf = 0.001 -> c = 0.076     Pf = 0.001 -> c = 0.283
                              Pf = 0.01 -> c = 0.248      Pf = 0.01 -> c = 0.652
Performances of both the IA and non-IA based methods in a real application

Another real dataset was collected on 9 June 2010 in Sydney, Australia. A session length of 10 min (sampling rate 1 s) is used here to compare the performance of the traditionally used methods of determining the critical value with that of simulations based on the IA theory. Only single frequency code and carrier phase measurements were utilised, and after double differencing there are
seven ambiguities to be estimated, with 4 degrees of freedom; the data were processed on an epoch by epoch basis. Figures 5-7 show the test values computed from the best and the second best solutions.

5 W-ratio (Wa and Ws) value changes with the epoch number

6 R-ratio and F-ratio values change with epoch number

7 Difference and projector test values change with epoch number

As shown in Table 7, the F-ratio with the commonly used critical value 0.67 yields 560 accepted epochs out of the total of 600. The R-ratio accepts 506 epochs with an empirical critical value of 0.5. For the Wa-ratio and Ws-ratio, with the truncated normal distribution and the truncated Student t-distribution respectively, the corresponding critical values can be generated from a significance level: 568 epochs were accepted for Wa at a confidence level of 0.99, while for Ws a confidence level of 0.99 with 4 degrees of freedom allowed 438 epochs to be accepted. For the difference test, the empirical critical value of 15 is too conservative here and the number of accepted epochs is zero; the empirical value suggested in [7] is also too conservative in this case. With a critical value of 0.87 specified for the projector test [28], 412 epochs were accepted. Among the 600 best ambiguity combinations, 597 are correct; the W-ratio tests and the projector test correctly rejected the three sets of wrong ambiguities, whereas the F-ratio and R-ratio tests wrongly accepted those three epochs.

Table 7 The performance of the non-IA based methods

Non-IA based methods            F-ratio  R-ratio  Wa-ratio  Ws-ratio  D-test  P-test
Critical values                 0.67     0.50     2.32      3.75      15.0    0.87
No. of accepted solutions       560      506      568       438       0       421
Performance in three
special epochs*                 WA       WA       CR        CR        CR      CR

*The best solutions from those three epochs are not correct.

In the case of running simulations to determine the critical value, for each epoch the geometry of the float solution was used, with a given fail-rate, to obtain the critical values. Owing to the short observation time span, the satellite geometry changes only slightly, and so do the critical values determined from the simulations. Meanwhile, as mentioned before, the computational burden is heavy for higher dimensional cases (more than 2 h per simulation); consequently, approximate critical values were used. The performance of the IA theory in this real application is shown in Table 8. The results are, however, not preferable compared with the non-IA results listed in Table 7 (except for the WTIA, which performed best in both approaches). With a given fail-rate of 0.01, the corresponding critical values were simulated and then applied. For the EIA, once the critical value is obtained, its real fail-rate can be compared with the pre-defined fail-rate. For the OIA, equation (18) was implemented. Besides the heavy computational burden, it can be seen that in this case the OIA does not make the
optimal decision, and the non-IA methods outperform the IA based methods most of the time in this real application. Another issue is that the success rates are extremely low, much smaller than the ILS success rate (59.3%), and the probability of an 'undecided' solution is quite high. This is not consistent with the actual performance, which can be judged from the number of accepted solutions. Moreover, the IA estimator committed the same type I error as some of the conventional methods. Obviously, in this real application the success rate does not properly reflect the true correct rate, and the IA estimator does not perform reasonably.

Table 8 The performance of the IA based methods

IA based methods                FTIA   RTIA   WTIA   DTIA   PTIA   EIA    OIA
Critical values (Pf = 0.01)     0.57   0.36   0.97   5.33   0.34   1.71   1.03
No. of accepted solutions       507    388    591    226    173    221    443
Ps                              0.057  0.058  0.057  0.054  0.051  0.051  0.059
Pu                              0.933  0.932  0.933  0.936  0.939  0.939  0.931
Performance in three
special epochs*                 WA     WA     CR     CR     CR     CR     WA

*The best solutions from those three epochs are not correct.
Concluding remarks

To sum up, this contribution presents an overview of current ambiguity validation approaches, which have been numerically compared and analysed. Traditionally, comparison among different ambiguity validation methods is difficult because the determination of the critical values varies from method to method. Within the framework of the IA theory, the user is able to control the fail-rate of ambiguity validation, and different validation methods can then be compared. The two-dimensional simulation indicates that the OIA, which maximises the success rate, performs best, yet its computational burden is extremely heavy; among the other validation methods, the R-ratio, W-ratio and difference test are preferred. A higher dimensional case, however, gives slightly different results, showing that in practical applications such simulations cannot identify the best ambiguity validation method in terms of the ambiguity success rate, because the success rate is not the correct rate of the ambiguities, as also demonstrated by the practical example discussed with Table 8. Therefore, for the IA based methods, some issues need to be emphasised: (1) even if the critical value can be determined from a given fail-rate by a look-up table or by simulation, the reliability of both needs attention, as both are affected by the sample size. To obtain reliable results the sample size should be as large as possible, which, however, conflicts with computational efficiency. Numerical results have shown that simulations are
extremely time consuming and not stable, and accordingly not applicable in practice; (2) it should be noted that the success rate defined under the framework of the IA theory is not equal to the 'correct rate', which is unknown during ambiguity resolution. A real GPS dataset has been used to evaluate both the conventional and the IA based methods against the ground truth; in terms of the ambiguity correct rate, the Wa-ratio outperformed all the other validation methods. Moreover, it should be stressed that the current ambiguity validation methods endeavour to discriminate the best integer ambiguity candidate from the others, whereas the correctness of the resolved best integer ambiguity is only validated indirectly. Their performances in validating the correctness of the ambiguities should therefore be evaluated with the ground truth in various application scenarios.
Acknowledgements

The first author is a PhD student sponsored by the Chinese Scholarship Council (CSC) for his studies at the University of New South Wales.
References

1. Baarda, W., 1968. A Testing Procedure for Use in Geodetic Networks. Publications on Geodesy, 2/5, Netherlands Geodetic Commission, Delft.
2. Chen, Y., 1997. An Approach to Validate the Resolved Ambiguities in GPS Rapid Positioning. Proceedings of the International Symposium on Kinematic Systems in Geodesy, Geomatics and Navigation, 3-6 June, Banff, Alta, Canada.
3. Counselman, C. C. and Abbot, R. I., 1989. Method of Resolving Radio Phase Ambiguity in Satellite Orbit Determination. Journal of Geophysical Research, 94(B6): 7058-7064.
4. Euler, H. J. and Schaffrin, B., 1991. On a Measure for the Discernability between Different Ambiguity Solutions in the Static-Kinematic GPS Mode. IAG Symposia no. 107, Kinematic Systems in Geodesy, Surveying, and Remote Sensing, Springer, Berlin/Heidelberg/New York: 285-295.
5. Frei, E. and Beutler, G., 1990. Rapid Static Positioning Based on the Fast Ambiguity Resolution Approach FARA: Theory and First Results. Manuscripta Geodaetica, 15(4): 325-356.
6. Han, S. and Rizos, C., 1996. Validation and Rejection Criteria for Integer Least-Squares Estimation. Survey Review, 33(260): 375-382.
7. Han, S., 1997. Quality Control Issues Relating to Instantaneous Ambiguity Resolution for Real-Time GPS Kinematic Positioning. Journal of Geodesy, 71(6): 351-361.
8. Hassibi, A. and Boyd, S., 1998. Integer Parameter Estimation in Linear Models with Applications to GPS. IEEE Transactions on Signal Processing, 46(11): 3219-3225.
9. Ji, S. Y., Chen, W., Ding, X. L., Chen, Y. Q., Zhao, C. M. and Hu, C. W., 2010. Ambiguity Validation with Combined Ratio Test and Ellipsoidal Integer Aperture Estimator. Journal of Geodesy, 84(8): 597-604.
10. Leick, A., 2003. GPS Satellite Surveying. 3rd edition, John Wiley & Sons, New York.
11. Takasu, T. and Yasuda, A., 2009. Development of the Low-Cost RTK-GPS Receiver with an Open Source Program Package RTKLIB. Proceedings of the International Symposium on GPS/GNSS, 4-6 November, Jeju, Korea.
12. Teunissen, P. J. G., 1993. Least-Squares Estimation of the Integer GPS Ambiguities. Invited lecture, Section IV, Theory and Methodology, Proceedings of the IAG General Meeting, 8-15 August, Beijing, China.
13. Teunissen, P. J. G., 1994. A New Method for Fast Carrier Phase Ambiguity Estimation. Proceedings of the IEEE Position Location and Navigation Symposium, 11-15 April, Las Vegas, NV: 562-573.
14. Teunissen, P. J. G., 1995. The Least-Squares Ambiguity Decorrelation Adjustment: A Method for Fast GPS Integer Ambiguity Estimation. Journal of Geodesy, 70(1-2): 65-82.
15. Teunissen, P. J. G., 1998. Success Probability of Integer GPS Ambiguity Rounding and Bootstrapping. Journal of Geodesy, 72(10): 602-612.
16. Teunissen, P. J. G., 1999. An Optimality Property of the Integer Least-Squares Estimator. Journal of Geodesy, 73(11): 587-593.
17. Teunissen, P. J. G., 2002. The Parameter Distributions of the Integer GPS Model. Journal of Geodesy, 76(1): 41-48.
18. Teunissen, P. J. G., 2003a. Integer Aperture GNSS Ambiguity Resolution. Artificial Satellites, 38(3): 79-88.
19. Teunissen, P. J. G., 2003b. A Carrier Phase Ambiguity Estimator with Easy-to-Evaluate Fail-Rate. Artificial Satellites, 38(3): 89-96.
20. Teunissen, P. J. G., 2004. Optimal Integer Aperture Estimation. Artificial Satellites, to be published.
21. Teunissen, P. J. G., 2007. On GNSS Ambiguity Acceptance Tests. Proceedings of the International Global Navigation Satellite Systems Society IGNSS Symposium 2007, 4-6 December, Sydney, NSW, Australia.
22. Teunissen, P. J. G., 2009. The GNSS Ambiguity Ratio-Test Revisited: A Better Way of Using It. Survey Review, 41(312): 138-151.
23. Tiberius, C. C. J. M. and de Jonge, P. J., 1995. Fast Positioning Using the LAMBDA Method. Proceedings of DSNS-95, 24-28 April, Bergen, Norway, no. 30.
24. Wang, J., Stewart, M. P. and Tsakiri, M., 1998. A Discrimination Test Procedure for Ambiguity Resolution On-the-fly. Journal of Geodesy, 72(11): 644-653.
25. Wang, J., Stewart, M. P. and Tsakiri, M., 2000. A Comparative Study of the Integer Ambiguity Validation Procedures. Earth, Planets & Space, 52(10): 813-817.
26. Wang, J., 2000. Stochastic Modeling for RTK GPS/Glonass Positioning. Journal of the US Institute of Navigation, 46(4): 297-305.
27. Verhagen, S., 2004. Integer Ambiguity Validation: An Open Problem? GPS Solutions, 8(1): 36-43.
28. Verhagen, S., 2005. The GNSS Integer Ambiguities: Estimation and Validation. PhD thesis, Publications on Geodesy, 58, Netherlands Geodetic Commission, Delft.
29. Verhagen, S., 2006. On the Probability Density Function of the GNSS Ambiguity Residuals. GPS Solutions, 10(1): 21-28.