Exploiting the Orthogonality of L2C Code Delays for a Fast Acquisition

Ahmad R. Abdolhosseini Moghaddam (1), Robert Watson (1), Gérard Lachapelle (1), John Nielsen (2)

(1) Position, Location and Navigation (PLAN) Research Group, Department of Geomatics Engineering
(2) Department of Electrical and Computer Engineering
Schulich School of Engineering, University of Calgary, Alberta, Canada

ION GNSS 2006, Fort Worth TX, 26-29 September 2006

BIOGRAPHIES

Ahmad R. Abdolhosseini Moghaddam is an MSc candidate in the Department of Geomatics Engineering at the University of Calgary. He received an MSc in Biomedical Engineering from the Amir Kabir University of Technology, Tehran, Iran in 1998 and a bachelor's degree in Electrical Engineering from the Sharif University of Technology, Tehran, Iran in 1996. His research interests include stochastic signal processing, GNSS receiver design, and ground-based wireless location.

Robert Watson is a Research Engineer with the PLAN Research Group. He completed his MSc in Geomatics Engineering in 2005, following a BSc in Electrical Engineering in 2002, both at the University of Calgary. His research interests include signal processing for indoor and high-sensitivity GPS, and modernized GPS tracking.

Dr. Gérard Lachapelle holds a CRC/iCORE Chair in Wireless Location in the Department of Geomatics Engineering. He has been involved with GPS developments and applications since 1980 and has authored/co-authored numerous related publications and software. More information is available at http://PLAN.geomatics.ucalgary.ca

Dr. John Nielsen is a professor in the Department of Electrical and Computer Engineering at the University of Calgary. He is involved with physical layer wireless communications research and with wireless position and location signal processing.

ABSTRACT

The GPS modernization effort is now providing the first new signals for general civilian use since the initial launch of GPS. The newly launched L2C (civilian) signal is a Time Division Multiplexed (TDM) signal with a unique structure designed to alleviate some inherent limitations in the structure of the legacy C/A code on L1. The two TDM codes forming the overall L2C signal structure—namely the CM (Moderate Length) and CL (Long Length) codes—are combined to form the full L2C code. The dataless property of the CL code is potentially valuable for situations where receiver sensitivity is critical. However, the CL code is very long (in comparison with the C/A code or the CM part of L2C), making direct acquisition of the code phase challenging. In addition, the hardware or software resources required to conduct a direct CL code search are costly or prohibitively complex for many implementations. Normally, therefore, a CM code acquisition is conducted prior to CL acquisition. A fast scheme for acquisition of the CL code is desirable for situations in which acquisition of CM is difficult or impossible.

In this paper, one approach for fast acquisition of long PRN codes is discussed and analyzed with respect to performance on the L2C code. The acquisition approach is based on the orthogonality of shifted versions of a long PRN code such as CL, exploiting the excellent cross-correlation properties of shifted versions of such a code in the process. New local codes—referred to as Hyper Codes herein—are created that contain more information per chip than the original local code. By using this class of local codes in the receiver, fast acquisition of the code offset in the CL segment of L2C is viable in an acquisition based on the standard FFT approach, without CM assistance. In this method there is a tradeoff between speed and reliability of acquisition; the amount of degradation in reliability is analyzed and discussed with respect to the complexity improvements.

INTRODUCTION

The L2C signal now becoming available was designed to improve upon some of the shortcomings of the legacy


C/A code, while remaining within the limited bandwidth available. The introduction of more advanced receiver technologies has allowed the use of longer PRNs, with the shorter of the two L2C components already 10 times longer than the C/A code. The longer code is 750 times longer than the C/A code, resulting in a significant improvement over the worst-case 21.1 dB auto- and cross-correlation interference isolation of the C/A code [1]. Another major improvement is the inclusion of a dataless "pilot" segment in the signal, which allows true carrier phase tracking and long coherent integration for enhanced sensitivity. These improvements are above and beyond the basic fact that L2C finally provides an authorized civilian signal on a second carrier frequency, allowing civilian users to compute real-time ionospheric corrections more successfully.

Whereas the tracking method for L2C signals is nearly identical to that required for legacy signals, the new signal structure invites the development of different acquisition schemes. This paper focuses on implementation and evaluation of an acquisition scheme suited for direct acquisition of the longer PRN component of L2C; an equivalent technique was first developed for application to the extremely long P(Y) code [2] and denoted XFAST (Extended Replica Folding Acquisition Search Technique). The research reported herein was developed independently of the prior work, however, so a different terminology is used—the basis of the technique is the use of local PRN codes herein referred to as Hyper Codes.

This paper begins with a brief description of the L2C signal structure and acquisition techniques, including FFT-based acquisition. The implementation details of the fast acquisition technique are then presented and analyzed theoretically. The technique requires a tradeoff between how quickly and how reliably acquisition can be accomplished, which is discussed in detail. The paper concludes with simulation results using the fast acquisition algorithm, with corresponding analyses to illustrate performance and reliability issues related to Hyper Code use.

L2C SIGNAL AND ACQUISITION

The L2C PRN code consists of two code sets—the civil moderate (CM) and civil long (CL) PRNs—each at a rate of 511.5 kcps. These two codes are multiplexed on a chip-by-chip basis to form a final code with a rate of 1.023 Mcps. CM is the shorter-period code, with a length of 10,230 chips (20 ms); it is modulated by data messages at a symbol rate of 50 Hz, the 20 ms symbol duration corresponding exactly to the CM code period. The CL code is unmodulated by data and has a length of 767,250 chips (1.5 s). The CM and CL codes are synchronized such that each CL code period contains exactly 75 CM code periods, as illustrated in Figure 1.

Figure 1: L2C PRN Code Generation
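To make the chip-by-chip multiplexing concrete, the following sketch interleaves CM and CL chips into the composite 1.023 Mcps code. The code is illustrative only: the random ±1 vectors stand in for real CM and CL generator outputs, and the 50 Hz data modulation on CM is omitted.

```python
import numpy as np

CM_LEN = 10_230    # CM period in chips (20 ms at 511.5 kcps)
CL_LEN = 767_250   # CL period in chips (1.5 s), i.e. 75 CM periods

# Placeholder +/-1 codes; a real receiver would run the L2C
# shift-register generators for the desired PRN instead.
rng = np.random.default_rng(0)
cm = np.tile(rng.choice([-1, 1], size=CM_LEN), CL_LEN // CM_LEN)
cl = rng.choice([-1, 1], size=CL_LEN)

# Chip-by-chip time multiplexing: alternate chips carry CM and CL,
# doubling the composite rate to 2 x 511.5 kcps = 1.023 Mcps.
l2c = np.empty(2 * CL_LEN, dtype=int)
l2c[0::2] = cm
l2c[1::2] = cl
```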


The L2C signal is transmitted by modulating a carrier at 1227.60 MHz with the composite PRN-plus-data signal described above.

Acquisition and Tracking

The acquisition of direct-sequence spread spectrum signals is the well-known process of determining the PRN code phase offset and carrier Doppler frequency for each signal present. Based on the TDM structure of the L2C code, three different classes of acquisition suitable for this particular signal can be readily conceived: acquisition based on CM only, acquisition based on CL only, and joint acquisition of both CM and CL [3, 6]. In general, for L2C, it is desirable to use only the CM code during acquisition due to its shorter code length and correspondingly smaller code phase ambiguity search region. The CL code offers better sensitivity thresholds, though, due to its dataless property and higher immunity to cross-correlation interference [3]. Thus, direct CL code acquisition may be required in conditions where CM acquisition is impossible, although this is only an option if sufficient receiver resources are available.

Direct CL code acquisition can present two significant challenges to receiver resources. First, the much larger ambiguity search region requires either a large number of sequential correlation tests, or a large amount of parallelism (either "hard" or "soft"). A second challenge arises if longer coherent integration is used on this dataless signal: for every increase in coherent integration time by a factor of N, the tolerable frequency error decreases by that same factor, so the overall search time or complexity can increase by a factor of N². This is particularly important when implementing a software receiver with limited resources, but is equally applicable when considering hardware receivers. The third approach—joint acquisition using both code parts—is more complex still than direct CL acquisition, offering only a small increase in sensitivity. The acquisition scheme described in this paper is directed at reducing the acquisition complexity to an extent that direct CL code acquisition becomes feasible with limited resources.


Standard Code Acquisition Algorithms

A standard acquisition process begins with selection of a certain satellite (SV) to search for. Normally this is done simultaneously on multiple parallel channels; since receivers often have dedicated resources for each channel, it is reasonable to focus attention on a single channel. Furthermore, successful code phase and Doppler acquisition must be conducted simultaneously to actually receive a signal. The code and Doppler search spaces will nevertheless be considered separately for the majority of this research: the technique researched herein is intended to improve code phase acquisition search times, and the resultant effects of code integration time on Doppler search time will be revisited later in this paper.

Traditional acquisition schemes attempt to locally replicate the known (or presumed) PRN code being received. This local code is correlated with the incoming signal, after Doppler removal, and the level of correlation is used to indicate successful matching of the code phase offset. The actual implementation of correlation can be either a direct approach or a block scheme using frequency-domain transforms (specifically an FFT).

In the direct approach, the incoming signal is correlated coherently with local references having different code offsets in a single unit known as a correlator. One such correlator can test a single code phase offset with a given amount of incoming data corresponding to the coherent integration period (TCOH). Thus, testing multiple code phase offsets quickly requires multiple parallel correlators. This is the "hard" parallelism mentioned earlier, and is often implemented in hardware receiver chipsets.

The second common correlation technique, often used in software, uses block processing. In software, limited resources usually prevent the use of hard parallelism. Instead, incoming data is often buffered for the desired coherent integration time, and a correlation is then performed using frequency-domain complex multiplication (typically using the FFT). This "soft" parallelism simultaneously tests all possible code offsets. In fact, the FFT-based method requires fewer overall operations than a direct approach testing the same number of code offsets, but suffers from some inherent delay due to the need to buffer data. Furthermore, the complexity of an FFT-based algorithm increases on the order of N·log(N) [4]—more quickly than the direct method when increasing coherent integration length.

The approach for fast acquisition researched in this paper is suitable for either direct correlation or FFT-based methods. Improvements in computational efficiency are based on the use of more computationally efficient Hyper Codes, rather than the simple PRN codes themselves.


Comparisons of complexity are based on operations in a microprocessor using the FFT approach, but can be extended easily to direct acquisition.

FAST ACQUISITION SCHEME

The fast acquisition scheme is specifically suited to acquisition of very long PRN codes with good cross-correlation properties. Thus, in this context, application to the CM code is of little interest. In all references to a local code, therefore, it is assumed that the CM component of the multiplexed L2C code has been replaced by zeros. For convenience, the remaining CL code component for any given PRN can be considered to consist of 75 segments, each of length 20 ms (or 10,230 chips). These segments will be referred to using subscript notation, where CLi refers to the ith 20-ms segment of any given CL code. The concatenation of CL1 through CL75 creates a version of the entire L2C code with the CM part replaced by zeros.

The so-called Hyper Code technique is hereby introduced in the context of FFT-based correlation; extension to hardware-based serial correlators is straightforward. It is assumed that the reader is familiar with the mechanics of the FFT method [4-5], which are not discussed herein.

Considering a received signal with completely unknown CL code phase offset, the possible range of code offsets is [0, 767,250). This range of offsets may be tested in a variety of ways with FFT-based correlation. First, and most directly, a full 1.5 s of data may be buffered in a receiver's memory. It is assumed at this point that Doppler removal has been conducted using the presumed residual Doppler frequency. A local code consisting of the entire 1.5-s CL code may then be created, and the two series correlated using frequency-domain multiplication. This method is straightforward, but suffers from two main drawbacks. First, the amount of memory required to buffer 1.5 s of data can be significant at high sampling rates. Second, and more importantly, the correlation described above represents a 1.5 s coherent integration, which reduces the frequency error tolerance of the Doppler removal to a very small value. In fact, an error of only 0.67 Hz would destroy all received power, and such an error can accumulate solely due to satellite dynamics over 1.5 s. For these reasons alone, 1.5-second FFT-based correlation is generally not attempted in the context of L2C acquisition.

A less direct method involves sequentially reading shorter segments of data into memory and correlating each with an equal-length local segment of the CL code. For a data segment length of 20 ms, for example, the local code generated may be only the first 20 ms of the desired CL code, or CL1. Repeatedly correlating this local segment with sequentially arriving segments of the incoming data


should eventually result in a positive correlation in one of the segments. By using shorter data segments, the effective length of coherent integration is also shorter than 1.5 s, which increases the tolerance to frequency errors during acquisition. Additionally, the average amount of data needed to find a correlation peak is reduced from 1.5 s to approximately 750 ms, so the total acquisition process can proceed more quickly than correlation using the full CL code. Reduced memory usage is a further advantage over full CL correlation.

The increase in acquisition speed is achieved at the cost of increased noise and cross-correlation interference. The processing gain effected by coherent integration is reduced by a factor equal to the reduction in integration time. Thus, using (e.g.) 20 ms segments instead of the full 1.5 s code causes a 10·log10(1.5/0.02) ≈ 18.8 dB increase in relative noise. Cross-correlation immunity is also degraded significantly, with the worst-case cross-correlation between CL codes worsening from -43.9 dB for a 1.5 s segment to only -25.4 dB for 20 ms segments [6].

Hyper Codes

Although it may seem counter-productive, the Hyper Code method further sacrifices noise and correlation immunity for processing efficiency. In concept, different segments of the CL code are mutually orthogonal. Thus, a linear superposition of multiple segments of the CL code results in a shorter code containing all of the information in the total CL code, but in a more compact form. Such a code is referred to as a Hyper Code herein because it has the potential to allow simultaneous operations, thereby increasing acquisition speed. The binary "chips" composing the CL code are replaced with multi-valued elements in the Hyper Code. To illustrate, the 1.5 s CL code may be broken into M different segments, which can then be summed to form a Hyper Code with length 767,250/M elements. This is illustrated in Figure 2.

Figure 2: Hyper Code Construction
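As a minimal sketch of the construction in Figure 2 (again with a random ±1 vector standing in for a real CL code, and NumPy assumed), the folding is a reshape and a sum:

```python
import numpy as np

CL_LEN, M = 767_250, 3                     # M must divide the code length
rng = np.random.default_rng(0)
cl = rng.choice([-1.0, 1.0], size=CL_LEN)  # placeholder CL code

# Fold the 1.5 s code into M segments and sum them element-by-element.
# Each Hyper Code element is now multi-valued in the range [-M, +M].
hyper = cl.reshape(M, CL_LEN // M).sum(axis=0)
print(hyper.shape)   # (255750,) = 767,250 / M elements
```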

When the Hyper Code is used as a local reference, a segment of received data may be compared against multiple segments of the CL code simultaneously, reducing the overall number of correlation operations that must be conducted. Of course, the various short segments of a CL code are not actually orthogonal, but have some finite, non-zero cross-correlation. The effects of using a Hyper Code in the correlation process are now considered with this knowledge in mind. A segment of incoming data will be denoted as S, which is presumed to consist of some 20 ms segment of the CL code plus additive white Gaussian noise:

S = CLi + N   (1)

Denoting individual samples within the incoming signal as s(k), the signal can alternately be represented as

S = [s(1), s(2), ..., s(L)]   (2)

where L is the number of samples in 20 ms of data. For simplicity, if we assume one sample per chip of L2C code, then L = 20,460. A zero-padded version of the incoming signal is denoted Szp, and consists of the vector S concatenated with a vector of zeros of the same length, denoted by 0:

Szp = [S, 0]   (3)

Zero padding as in Equation 3 is required for correlating segments of CL code shorter than the full period—20 ms in this case. The zero-padded input data is twice as long as necessary, so only half of the correlation output is meaningful. The local code of interest is not zero-padded, though; it instead consists of a segment of CL code that is twice as long as necessary. For example, to search for CL code segment q, the local code would consist of segments q and q+1. Correlation of this local code with the zero-padded signal of Equation 3 gives the correlation values corresponding to different offsets for a 20 ms effective coherent integration time. Equation 4 demonstrates the basic operations required to find the correlation output, Xi, for the ith segment of data:

Xi = IFFT{ FFT(CLi) .* conj[ FFT(Szp) ] }   (4)

In Equation 4, IFFT is an inverse fast Fourier transform operation and .* denotes element-by-element multiplication. Xi is the resultant correlation vector, which contains code offsets corresponding to the ith segment of the CL code. It should be emphasized that, because of the periodic nature of the FFT process, half of Xi is useful and the remaining half contains only partial correlations, which should be disregarded. In general, after a successful correlation operation, the output takes the form

Xi = Ri(τ) + {CLi · N},   (5)

where the output is seen to consist of an autocorrelation term Ri with some offset τ, and a noise term.
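In NumPy terms, Equations 3 through 5 amount to the following sketch (function and variable names are illustrative, and one sample per chip is assumed as above):

```python
import numpy as np

def segment_correlate(s_seg, local_code):
    """FFT-based correlation of one 20 ms data segment (Equation 4).

    s_seg      : L Doppler-wiped samples of incoming data (20 ms)
    local_code : 2L samples of local code (CL segments q and q+1)

    Only the first L output lags are complete correlations; the rest
    are partial correlations and are discarded (see Equation 5).
    """
    L = len(s_seg)
    s_zp = np.concatenate([s_seg, np.zeros(L)])                      # Eq. 3
    X = np.fft.ifft(np.fft.fft(local_code) * np.conj(np.fft.fft(s_zp)))
    return X[:L]                                                     # Eq. 4
```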

To determine the effects of correlation with Hyper Codes, we first generate a representation of a particular Hyper Code for test, with M=2 in this case. This code is again 40 ms long, with the first 20 ms consisting of a summation of two adjacent segments of the CL code, and the second 20 ms consisting of a similar summation but shifted by one segment:

HCi = [CLi CLi+1] + [CLi+1 CLi+2]   (6)

Note that CLi+75 = CLi, so CL76 = CL1. The same incoming signal S is correlated with the Hyper Code, and the second half of the output once again discarded. If we assume positive correlation between S and the ith segment of the local code, the result is similar to that reported above, but includes two new terms:

Xi = Ri(τ) + {CLi · CLi+1} + {(CLi + CLi+1) · N}   (7)

The correlation with Hyper Codes results in a non-zero cross-correlation between the summed codes, seen in the second term of Equation 7, and an amplified noise term visible in the third term. The effects of these extra terms are considered in the following. A key point to note, however, is that the correlation output of Equation 7 would appear fundamentally the same if the input, S, were positively correlated with segment i+1 of the local code, rather than segment i. It is this interchangeability that gives the Hyper Code method its strength, as two correlations can effectively be done for the processing complexity of a single operation.

Complexity Improvements

Improvements introduced by the use of Hyper Codes must be considered from various bases for comparison. For a given length of optimal FFT, a Hyper Code method may perform code phase acquisition approximately M times faster than a standard acquisition, where M is the number of CL code segments linearly summed. A factor not included in this improvement is the potential complexity increase due to using non-binary local codes, which could decrease the above factor. In cases where noise dominates, as opposed to strong cross-correlations, the complexity tradeoff may be reasonable. If FFT length is not fixed, improvements resulting from a decrease in FFT length can be on the order of M·log(M) in the code phase domain, and M in the Doppler search domain [4]. In this case, an overall search time improvement on the order of M²·log(M) is possible. It is the latter case which seems most attractive to receiver designers, as a full code phase offset search could be conducted with only a small amount of received data in an efficient manner.

There are some problems and issues related to the use of Hyper Codes, which are discussed in the next sections.

PROBLEMS IN HYPER CODE ACQUISITION

As mentioned previously, Hyper Codes are designed to improve acquisition time at the expense of reliability. The major drawbacks of the technique are listed here.

Ambiguity in Correlation Peak

Using different orthogonal segments of the CL code for creating Hyper Codes introduces an ambiguity when a strong peak is found in the correlation domain. The peak may be contributed by any one of the segments linearly added. This is a minor problem, however, that is easily resolved by testing the possible ambiguities in a subsequent correlation. This additional overhead is negligible in comparison with the complexity reduction potentially realized.

Cross-Correlation Interference

It was noted previously that the cross-correlation properties of the CL code are degraded as shorter segments of data are used. The use of Hyper Codes adds additional cross-correlation interference dependent on the number of linear summations conducted to construct the Hyper Code. Without loss of generality, if the received signal is S = CLi + N, successful correlation with a Hyper Code results in a correlated component and an additional interference component. The correlation output with a simple M=2 Hyper Code was illustrated in Equation 7, where the first output term corresponds to successful autocorrelation, the second output term is Hyper Code interference, and the third is noise, which has evidently been amplified by the Hyper Code as well. When using the original code set, the second term (i.e., {CLi · CLi+1}) does not appear in the right hand side of the equation. Normally this term is negligible in comparison with the noise-induced term (i.e., {N · (CLi + CLi+1)}). As the number of superimposed segments is increased, however, this self-interference becomes significant. Figures 3 and 4 depict a statistical comparison of autocorrelation side-peak immunity for short segments of a standard CL code, versus the same property for Hyper Codes of the same length with a given number of linear summations (M).

Figure 3: Autocorrelation Immunity of CL Code Segments of Various Lengths

Figure 4: Autocorrelation Immunity of CL Hyper Code Segments with Various Numbers of Linear Superpositions

When comparing segments of the same length, the extra cross-correlation interference induced by Hyper Codes is indicated by a shift to the right (lower immunity) in the Hyper Code distributions. Table 1 illustrates the level of additional Hyper Code interference as a function of M. It is evident when comparing correlations of equal length (e.g. 300 ms and M=5) that the amount of interference is proportional to M.

Table 1: Selected Levels of Side Peak Immunity for Standard and Hyper Code Correlation

The loss of autocorrelation side-peak and cross-correlation immunity may be justifiable in cases where the strongest available signal is being sought, in which situation the cross-correlation effects are largely irrelevant. This factor will be discussed further later.

Noise Analysis

Increasing the noise level is the most serious side effect of using a Hyper Code in the correlation process. In Equation 7, the noise-induced term is {N · (CLi + CLi+1)}, whereas when using the original reference it is {N · CLi}. Expanding the noise-induced term by using the linearity of the correlation process gives the effective noise after correlation as

{N · (CLi + CLi+1)} = {N · CLi} + {N · CLi+1}   (8)
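A quick Monte Carlo check of this scaling is sketched below (shortened random ±1 codes stand in for the CL segments, and NumPy is assumed):

```python
import numpy as np

L, M = 1_023, 2                      # shortened segments for speed
rng = np.random.default_rng(1)
segs = rng.choice([-1.0, 1.0], size=(M, L))   # stand-ins for CL_i, CL_i+1
noise = rng.standard_normal((5_000, L))       # independent noise draws

single = noise @ segs[0]             # {N . CL_i}: reference correlator noise
summed = noise @ segs.sum(axis=0)    # {N . (CL_i + CL_i+1)}: Equation 8

print(summed.var() / single.var())   # ~M: noise power up by a factor of M
```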

Given that the noise process in the received signal is independent and identically distributed (IID) Gaussian, the two noise terms appearing after expansion are independent, because they are produced by multiplying with the largely uncorrelated sequences CLi and CLi+1. This assumption holds provided the integration time T is long enough that the two segments of CL code can themselves be treated as uncorrelated; as T increases, the correlation between the two noise sequences asymptotically goes to zero, so they can be considered independent. The conclusion is that by using the Hyper Code, in all cases except for an extremely large number of Hyper Code summations M (which yields a very short integration time T), the noise power is increased by a factor of M. The net effect of this noise enhancement on the reliability of acquisition can now be considered. The quantitative measure of the degradation in reliability is the probability of detection Pdet for a fixed probability of false alarm Pfa. Equation 9 gives the relationship between Pdet and Pfa in a binary hypothesis test for detection of a signal buried in noise with a given signal-to-noise ratio:

Pdet = Q( Q⁻¹(Pfa) − √SNR )   (9)

where

Q(α) = (1/√(2π)) ∫_α^∞ e^(−x²/2) dx   (10)

As a reference, a post-correlation SNR of 15 dB is considered. With this SNR, setting Pfa to 0.1% yields a probability of detection Pdet of 99%. This is a relatively good detection scenario, and we shall thus consider 15 dB post-correlation SNR to be a reasonable target. On the other hand, a 3 dB degradation in post-correlation SNR for the same Pfa yields a probability of detection of Pdet = 81%. This is a major sacrifice in terms of detection reliability, and indicates that any acquisition method yielding a post-correlation SNR of less than about 15 dB will likely be unsuccessful.
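These reference figures follow directly from Equations 9 and 10; a small SciPy check (a sketch, where Q(α) is the Gaussian tail probability norm.sf and Q⁻¹ is norm.isf):

```python
from math import sqrt
from scipy.stats import norm

def p_det(p_fa, snr_db):
    """Equation 9 with the post-correlation SNR given in dB."""
    snr = 10.0 ** (snr_db / 10.0)                 # dB -> linear power ratio
    return norm.sf(norm.isf(p_fa) - sqrt(snr))    # Q(Q^-1(Pfa) - sqrt(SNR))

print(p_det(1e-3, 15.0))   # ~0.99: the 15 dB reference case
print(p_det(1e-3, 12.0))   # ~0.81: after a 3 dB degradation
```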


The applicability of the Hyper Code technique thus depends on analyzing several factors. First, one must consider whether or not CM code acquisition will likely achieve 15 dB SNR in a reasonable time using coherent or non-coherent methods. If so, CL acquisition is unnecessary. If this condition is not met, the second stage is to consider whether or not coherent CL code acquisition can improve on either acquisition time or sensitivity as compared with the CM acquisition mentioned previously. If this potential exists, the final stage is an evaluation of potential CL acquisition performance to determine if any noise or cross-correlation margins exist which can be sacrificed for improved acquisition time or complexity. The number of factors involved in determining the usefulness of the Hyper Code method is very large, and must include presumed signal propagation conditions, as well as a priori knowledge of factors such as Doppler frequency or timing. Given such a complicated situation, the nature of these design factors has yet to be fully evaluated. The focus of this work has been only on verification of the functionality of the Hyper Code method for L2C. As such, we conclude this paper with verification results from a set of hardware-simulated L2C data indicating that the performance of Hyper Codes in acquisition is at least as expected.

SIMULATION RESULTS

To verify the theoretical and software-simulated performance of the Hyper Code technique with respect to noise and interference, realistic IF samples were collected from a Spirent GSS 7700 hardware L2C simulator using a data acquisition system, as illustrated in Figure 5.

Figure 5: Live Test Configuration

Received samples were processed to extract the signal from a given satellite using both a standard FFT-based acquisition technique and two FFT-based Hyper Code techniques with different numbers of code segments. Details of the implementation follow.

For the standard technique, a 20 ms segment of incoming data was buffered, and then zero-padded to create an overall 40 ms segment. The FFT technique was then used to correlate this received data with 40 ms segments of the full CL code for the desired PRN, after having removed the Doppler, which was determined in advance by examination of simulator outputs. After FFT correlation, only the first 20 ms of correlation outputs were retained. Using this technique, the 20 ms of incoming data samples were effectively correlated with the entire CL code, in 75 different segments.

For the Hyper Code techniques, the incoming data samples were buffered in the same manner as above and had identical Doppler removal techniques applied. However, each FFT-based correlation used a stacked set of 3 or 6 adjacent 20-ms CL code segments instead of a single segment as in the previous example. With this method, the total number of output correlation segments was reduced to 25 (for M=3) and 13 (for M=6). Note that the final (13th) segment corresponding to the M=6 case used only 3 stacked CL code segments instead of 6. The segment of incoming data and local codes used for each test segment are illustrated for both standard correlation and Hyper Code correlation in Figure 6.
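The Hyper Code search just described reduces to a short loop. The sketch below mimics the M=3 case with noiseless synthetic data; random ±1 segments stand in for a real CL code, segment_correlate is the function sketched earlier, and the exact local-code layout (an Equation 6 style two-half Hyper Code) is an assumption of this illustration:

```python
import numpy as np

M, N_SEG, SEG = 3, 75, 10_230     # 25 Hyper Code tests instead of 75
rng = np.random.default_rng(2)
segs = rng.choice([-1.0, 1.0], size=(N_SEG, SEG))   # placeholder CL segments

def hyper_code(i):
    """40 ms local Hyper Code starting at segment i (wraps: CL_76 = CL_1)."""
    first = sum(segs[(i + k) % N_SEG] for k in range(M))       # stacked half
    second = sum(segs[(i + k + 1) % N_SEG] for k in range(M))  # shifted half
    return np.concatenate([first, second])

# Fake received data: 20 ms of signal aligned with CL segment 41.
s_seg = segs[41]

peaks = []
for block in range(N_SEG // M):                     # 25 correlations
    X = segment_correlate(s_seg, hyper_code(block * M))
    peaks.append(np.abs(X).max())

hit = int(np.argmax(peaks))   # block 13: peak is ambiguous among segments
print(hit)                    # 39, 40 and 41; resolve the ambiguity with
                              # one follow-up standard correlation
```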


Figure 6: Comparison of Local Codes for Live Data Test (Standard and Hyper Codes)

In all cases, the output segments were examined and evidence of a strong correlation peak identified. The segment containing this peak, and the location within the segment, were used to determine the overall code phase. In each case, the deflection coefficient of the peak versus the correlation noise floor was also calculated for comparison of the degradation. The deflection coefficient is a relevant way to represent SNR, calculated as the difference between the peak power and the mean noise power, divided by the standard deviation of the noise power.

Figure 7 illustrates correlation power in the code offset domain as determined using the standard technique. The domain shown is code offsets in Segment 14 of the correlation. The peak correlation power is normalized to unity. As can be seen, the code peak was located at a code offset of approximately 1,850 chips within the segment, plus an offset of 132,990 chips corresponding to 13 previous segments with no correlation peak. The final unambiguous code phase is 134,840 chips. The SNR calculated from the illustrated segment is 18.6 dB.
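The deflection coefficient quoted here can be computed directly from a vector of correlation powers (a sketch; the peak bin is excluded from the noise statistics):

```python
import numpy as np

def deflection_db(power):
    """(peak power - mean noise power) / std(noise power), in dB."""
    k = int(np.argmax(power))
    noise = np.delete(power, k)        # all bins except the peak
    d = (power[k] - noise.mean()) / noise.std()
    return 10.0 * np.log10(d)
```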


Figure 7: Output Code Offset with Standard Code Acquisition Technique

The results of acquisition searches using Hyper Codes are shown in Figure 8. The upper panel shows the result from a Hyper Code with M=3, and the lower panel a code with M=6. As expected, the signal is detected in both cases using Hyper Codes. However, two points should be noted. First, the output 20-ms segments shown in each panel actually correspond to an ambiguous set of possible code phase offsets. In the M=3 case, the code phase offset is approximately 1,850 chips plus an ambiguity corresponding to 12, 13, or 14 segments of 10,230 chips. Similarly, the peak in the M=6 case corresponds to a code phase of 1,850 chips plus an ambiguity of between 12 and 17 segments of 10,230 chips. The second point to note is the SNR degradation: the SNR is approximately 15.3 and 12.8 dB for the cases of M=3 and M=6, respectively.

Figure 8: Ambiguous Output Code Offset with Hyper Code Acquisitions (M=3, 6)

When evaluating the performance of the technique, it is apparent that the immunity to noise and/or cross-correlation interference is compromised by the Hyper Code technique, as was expected. The decrease in performance from M=1 to M=3 was observed to be approximately 3.3 dB, as opposed to a theoretically expected value of 4.7 dB (see Table 1). Similarly, when increasing further to M=6, the degradation was only a further 2.5 dB instead of the expected 3 dB. The possible reasons for this lack of correspondence between theory and practice include the effect of measurement noise, which has not been accounted for in the correlation peak; non-ideal effects of dealing with only a single PRN, as opposed to an average over all PRNs as was done in theory; and the varying cross-correlation effects present when PRNs have varying power. In addition, while the two effects of noise amplification and cross-correlation immunity reduction have been quantified, their joint effect has not been fully considered.

To date, only a limited set of practical verifications has been conducted. Further work is being undertaken to conclusively identify the sources of error present, which will be required for determining the utility of this technique in actual receivers.

CONCLUSIONS

A technique for reducing the computational complexity of L2C acquisition algorithms has been presented, consisting of folding and stacking parts of the long (CL) code and correlating with a short segment of the received code. This technique was previously suggested for use with the P(Y) code present on both L1 and L2 [2], but deserves reconsideration at this time for use with L2C receivers. The CL code is too long to be conveniently acquired using direct acquisition, but is still almost 7 orders of magnitude shorter than the P(Y) code, for which XFAST was previously suggested [2], making this technique once again relevant as a potential acquisition approach. The Hyper Code technique offers an improvement in code phase acquisition speed or complexity by some factor M, which comes at the cost of reduced cross-correlation immunity and noise margins. The most appropriate conditions for use of such a code are those in which the L2C signal is too weak to acquire with non-coherent integration of the CM code, but may be acquired with coherent integration of the CL code, even with some loss. The exact determination of how and when to apply the technique—as opposed to simpler techniques—is a matter for the receiver designer. Further research is planned to provide some basic guidelines in this regard.


ACKNOWLEDGMENTS

The authors acknowledge the assistance of Mr. Surendran Shanmugam with various technical matters that arose during this work.

REFERENCES

[1] Ward, P. W., J. W. Betz, and C. J. Hegarty (2006) "GPS Satellite Signal Characteristics," Chapter 4 in Understanding GPS: Principles and Applications, 2nd Edition, E. D. Kaplan & C. J. Hegarty (eds), Artech House Publishers, Boston MA.

[2] Yang, C., J. Vasquez, and J. Chaffee (1999) "Fast Direct P(Y)-Code Acquisition Using XFAST," in Proceedings of ION GPS 1999, U.S. Institute of Navigation, 14-17 September, Nashville TN, pp. 317-324.

[3] Fontana, R. D., W. Cheung, and T. Stansell (2001) "The Modernized L2 Civil Signal: Leaping Forward in the 21st Century," GPS World, September 2001.

[4] Lin, D. M. and J. B. Y. Tsui (2000) "Comparison of Acquisition Methods for Software GPS Receiver," in Proceedings of ION GPS 2000, U.S. Institute of Navigation, 19-22 September, Salt Lake City UT, pp. 2385-2390.

[5] Tsui, J. B. Y. (2000) Fundamentals of Global Positioning System Receivers: A Software Approach, John Wiley & Sons.

[6] Tran, M. and C. Hegarty (2002) "Receiver Algorithms for the New Civil GPS Signals," in Proceedings of ION NTM 2002, U.S. Institute of Navigation, 28-30 January, San Diego CA, pp. 778-789.
