JOURNAL OF ATMOSPHERIC AND OCEANIC TECHNOLOGY, VOLUME 20, APRIL 2003

Correction of Dual PRF Velocity Errors for Operational Doppler Weather Radars

P. JOE
Meteorological Service of Canada, Downsview, Ontario, Canada

P. T. MAY
Bureau of Meteorology Research Centre, Melbourne, Victoria, Australia

(Manuscript received 26 April 2002, in final form 13 August 2002)

ABSTRACT

Dual pulse repetition frequency (PRF) sampling is a commonly used technique in operational Doppler weather radar networks that extends the unambiguous Doppler velocity. The technique requires the measurement of the radial velocity at two different PRFs and assumes that both measurements sample the same velocity. In practice, the data are collected from adjacent, alternating-PRF radials as the antenna rotates, so high azimuthal shear or statistical uncertainty in the measurements can create dealiasing errors. This paper proposes two algorithms to correct these dealiasing errors and analyzes the results. Simulations and real cases are presented to illustrate the benefits and limitations.

1. Introduction

Doppler weather radars suffer from limitations associated with the trade-off between range and velocity aliasing. Methods to surmount this problem are an active area of research and development (Frush and Doviak 2001; Sachidananda et al. 1998). The trade-off is particularly severe as the radar wavelength decreases. There are many operational C-band (5-cm wavelength) Doppler radars around the world where the unambiguous velocity is only about 13 m s⁻¹ for a maximum range of about 150 km. Increasing the unambiguous velocity decreases the maximum unambiguous range, which is not acceptable for operational radar networks. Wind speeds in the atmosphere often exceed this relatively small unambiguous velocity, making the resulting radial velocity products difficult to interpret. Automated techniques to dealias the resulting radar data are of limited use for such a small Nyquist interval.

An alternative strategy, first mooted in the mid-1970s (Doviak et al. 1976; Sirmans et al. 1976) and commonly employed in operational radar networks, is the use of multiple pulse repetition frequencies. This may be on a pulse-pair-to-pulse-pair basis, where the time between pulses is varied with every pulse pair; this is referred to as staggered pulse repetition time (PRT) sampling (Doviak et al. 1976; Sirmans et al. 1976). Or, it may be on a

Corresponding author address: Dr. P. Joe, Meteorological Service of Canada, 4905 Dufferin St., Downsview, ON M3H 5T4, Canada.
E-mail: [email protected]

© 2003 American Meteorological Society

batch basis, where two blocks of pulses (e.g., 32 pulses) are transmitted at a single pulse repetition frequency (PRF) within a block, but with different PRFs between blocks; this is termed dual PRF or dual PRT sampling. The former has intrinsic advantages for unambiguous dealiasing since the data may be collected from a single radar sample volume, quasi-simultaneously in time and in space. However, a time domain implementation of this staggered sampling technique limits the performance of simple ground clutter filters (e.g., Banjanin and Zrnic 1991). It can also stretch the performance of the pulse modulator beyond its capability. For these reasons the dual PRF technique is used operationally: it is employed on operational Doppler radars in Canada, Australia, and Europe and is available on commercial signal processors (e.g., SIGMET 1997). However, there are problems dealiasing the velocity data if the azimuthal gradient of the radial velocity is large or if there are random errors.

The goal of this paper is to evaluate and demonstrate the benefits and limitations of the dual PRF method in the presence of wind fields with large horizontal shears, to discuss error correction methods when there are dealiasing errors, and to assess the magnitude of the inherent problems in this approach.

2. Dual PRF radar operation

Velocity dealiasing with a dual PRF radar is described in detail in commercial signal processor documentation (SIGMET 1997) and in the literature (e.g., Sauvageot 1992; Doviak and Zrnic 1993; Joe et al. 1998; May and



FIG. 1. Schematic of a dual PRF dealiasing scheme for a single radar bin showing how two folded estimates, V_1 and V_2, with Nyquist velocities of V_N1 and V_N2, respectively, are combined to form a dealiased radial velocity estimate V_c using Eq. (2), or V_2 + 2V_N2 using Eq. (3). The latter estimate has a lower uncertainty.

Joe 2001). Only a brief description will be given here. The following discussion follows May (2001). For the purpose of this discussion, suppose pulse-pair processing is used. The radial velocity along a ray i is given by

    V_i = (λ/4π)(θ_i/T_i)  or  V_i = (λ/4π) PRF_i θ_i,    (1)

where θ_i, T_i, and PRF_i are the phase of the autocorrelation function at the first lag, the time between pulses, and the pulse repetition frequency, respectively, and λ is the radar wavelength. Velocity estimates on adjacent rays use different values of T_i, usually in a 4:3 or 3:2 ratio, allowing a tripling or doubling, respectively, of the Nyquist velocity of the ray with the shorter T_i. Let σ_i be the error associated with V_i.

Dealiasing is usually performed by comparing the radial velocities along two rays (1 and 2), where the second ray is the target ray for dealiasing (see Fig. 1). Assuming a uniform radial velocity, the estimates can be combined to give

    V_c = (λ/4π)(θ_2 − θ_1)/(T_2 − T_1).    (2)

This is done on a per-sample-volume basis and represents the estimated extended radial velocity at the target sample volume. The difference in PRFs determines


the extended Nyquist velocity. The accuracy is poor, as it depends on the error in the difference of the two phases rather than on a single phase (Holleman and Beekhuis 2003). As the difference between the pulse repetition times decreases, the uncertainty in the estimate of the dealiased velocity increases. To improve the estimate, Eq. (2) is only used to estimate the number (n) of velocity aliases. Then n multiples of the Nyquist interval (twice the magnitude of the Nyquist velocity) for ray 2 (V_N2) are added to V_2 to retrieve the dealiased radial velocity estimate. That is,

    V_c = V_2 + n 2V_N2.    (3)

Note that n can be negative or positive, that this estimate lies within the Nyquist interval centered on V_c and, most importantly, that the uncertainty in the estimate is determined by the variance of only one (not two) velocity estimate. An example is shown in Fig. 1. The radial velocity measurements of the two rays (V_1, V_2) are combined to produce an estimate V_c. Then the dual PRF dealiased estimate for radial 2 is given by V_2 + 2V_N2, with an error of just σ_2. The original phase measurements are preserved, so each pair of adjacent velocities is treated independently as successive rays are processed. In the above example, the original V_2, and not V_2 + 2V_N2, would be used to dealias the next (adjacent) radial velocity estimate. Two important properties of this technique are that 1) dealiasing errors in the velocity are localized, and 2) the errors are distinct and easily identifiable.

The basic assumption in the dual PRF technique is that the data making up a single radar bin are collected from targets with the same velocity. If the velocities change significantly from azimuth to azimuth (i.e., from resolution volume to resolution volume), the assumption is violated and errors may arise.
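To make the two-step estimate concrete, the scheme of Eqs. (2) and (3) can be sketched in velocity form, using T_i ∝ 1/V_Ni to eliminate the phases. This is an illustrative sketch only: the function names are ours, and the 16 and 12 m s⁻¹ Nyquist pair anticipates the C-band parameters used later in the paper.

```python
def fold(v_true, vn):
    """Alias a radial velocity into the Nyquist interval (-vn, vn]."""
    return v_true - 2.0 * vn * round(v_true / (2.0 * vn))

def dual_prf_dealias(v1, vn1, v2, vn2):
    """Dealias the target-ray estimate v2 (Nyquist vn2) using the
    adjacent-ray estimate v1 (Nyquist vn1 > vn2), per Eqs. (2)-(3)."""
    # Extended Nyquist velocity of the pair (48 m/s for 16/12 m/s).
    vn_ext = vn1 * vn2 / (vn1 - vn2)
    # Eq. (2) in velocity form: Vc from the difference of the two
    # aliased estimates, wrapped into (-vn_ext, vn_ext].
    vc = (v2 / vn2 - v1 / vn1) * vn_ext
    vc = vc - 2.0 * vn_ext * round(vc / (2.0 * vn_ext))
    # Eq. (3): use Vc only to pick the fold number n, then unfold v2,
    # so the final estimate carries the error of v2 alone.
    n = round((vc - v2) / (2.0 * vn2))
    return v2 + 2.0 * vn2 * n

# A true 45 m/s wind folds to 13 m/s at VN1 = 16 and to -3 m/s at
# VN2 = 12, and is recovered exactly by the combined estimate.
v1 = fold(45.0, 16.0)
v2 = fold(45.0, 12.0)
assert dual_prf_dealias(v1, 16.0, v2, 12.0) == 45.0
```

Note that the final value is v2 plus an integer number of Nyquist intervals, so its statistical error is that of the single-ray estimate, as stated above.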
In practice, errors occur when the difference in the true radial velocities on adjacent radar sample volumes exceeds a certain threshold. Rewriting Eq. (3), we have V_c − V_2 − 2nV_N2 = 0. In practice, we select n such that |V_c − V_2 − 2nV_N2| < V_N2. Conversely, an error occurs when |V_c − V_2 − 2nV_N2| > V_N2. Without loss of generality, we can drop the 2nV_N2 term since it scales with the V_c term. Using Eqs. (1) and (2), we can write the following inequality that describes the error condition of the dual PRF dealiasing technique:

    |(T_2 V_2 − T_1 V_1)/(T_2 − T_1) − V_2| > V_N2.    (4)

For the ith ray, this becomes

    |T_{i−1}(V_i − V_{i−1})/(T_i − T_{i−1})| > V_Ni  or  |PRF_i (V_i − V_{i−1})/(PRF_{i−1} − PRF_i)| > V_Ni

or

    |V_i − V_{i−1}| > |(PRF_{i−1} − PRF_i)/PRF_i| V_Ni.    (5)
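The error condition (5) is straightforward to evaluate numerically. A minimal sketch (function name ours), using the 1200/900-Hz C-band pair discussed next:

```python
def shear_threshold(prf_i, prf_prev, vn_i):
    """Largest ray-to-ray velocity difference |Vi - Vi-1| that still
    dealiases correctly, from Eq. (5); vn_i is the Nyquist velocity
    of the target ray i."""
    return vn_i * abs(prf_prev - prf_i) / prf_i

# Because vn is proportional to PRF, the margin is the same whichever
# ray is the target: it equals the difference of the two Nyquist
# velocities (16 - 12 = 4 m/s here).
t_low = shear_threshold(900.0, 1200.0, 12.0)    # target = low-PRF ray
t_high = shear_threshold(1200.0, 900.0, 16.0)   # target = high-PRF ray

# A closer PRF pair shrinks the margin; e.g., a hypothetical 5:4 pair
# of 1200/960 Hz (Nyquist 16/12.8 m/s) gives only 3.2 m/s.
```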


For example, for a C-band radar using PRFs of 1200 and 900 Hz (a 4:3 ratio), with corresponding Nyquist velocities of 16 and 12 m s⁻¹, errors are expected when |V_i − V_{i−1}| > 4 m s⁻¹. (We use these parameters for illustration throughout this paper to provide clarity.) This condition can be exceeded because of random errors in the velocity measurements as well as because of velocity gradients. That is, the dual PRF approach can have folding errors even where the original data are not folded.

Another way of understanding this result is to graphically explore the expected velocity differences as a function of the true velocity. This is shown in Fig. 2 for the radar parameters of the previous paragraph. If we compute the velocity difference from the measured dual PRF velocities, this graph can be used to determine the fold number by comparing the measured difference with the expected differences and then finding the corresponding true velocity for that fold number. Note that in Fig. 2 these differences are separated by 8 m s⁻¹ from their neighbors. To correctly determine the fold number, the estimated velocity difference must be closer to its expected value than to its neighbor, that is, within half the separation distance of 8 m s⁻¹, or 4 m s⁻¹. If it is farther than 4 m s⁻¹ away, it will be matched to an incorrect velocity difference and hence to an incorrect fold number.

Note that Eq. (5) indicates that attempts to use 5:4 or 7:5 PRF ratios, which allow for greater extended Nyquist velocities or greater maximum unambiguous ranges, will have larger error rates. Also important to note is that signal processing artifacts are reproduced in the extended interval. In particular, clutter filter artifacts are repeated at velocities of ±2V_Ni, . . . . This may cause problems in velocity measurement if the clutter is poorly filtered. This problem is explored in section 6.

FIG. 2. Schematic of the dual PRF dealiasing scheme showing the relationship between the velocity difference (ordinate at bottom) and the true velocity (abscissa), and (top) the folded radial velocity (ordinate) for a C-band radar pulsing at 1200 and 900 Hz. The Nyquist velocities are 16 and 12 m s⁻¹, respectively. The arrows on the right are 8 m s⁻¹ apart and indicate the spacing between the expected velocity differences. Therefore, the velocity differences must be within 4 m s⁻¹ of their expected value to be dealiased correctly. (bottom) The arrows indicate where a velocity value would be dealiased incorrectly.

3. Algorithms for correcting noisy data

Two techniques are presented here to correct the dealiasing errors. A cursory examination of raw velocity images from a dual PRF Doppler radar shows two distinct types of errors. The first, and easiest to address, is speckle. The second, which is more difficult to deal with, is "coherent patches of suspect data," where the dealiasing errors are not isolated but appear as small areas of anomalous velocity data.

a. Median filtering of the data/image

An obvious approach to removing speckle from an image is simply to smooth the image with a median filter. This filter retains strong average gradients in the field while removing isolated bad pixels. The cost is lower spatial resolution. There are other possible approaches, such as consensus averaging of the velocities over a small region in a manner similar to that used with wind profilers (e.g., Strauch et al. 1984), but for this application there is little obvious advantage. In this paper, the median filter serves as a reference for the more complex method presented later.

The approach taken here is to replace the target pixel's velocity with the median value of a 3 × 3 array of pixels centered on the target bin. A 3 × 3 array was chosen (rather than, for example, a 5 × 5 array) after examination of data with some significant real velocity gradients: the higher-order median filters were subjectively rejected as smoothing the data too much. This effect depends on the range and resolution of the data and also on the velocity features. Smoothing is not desirable when looking for very small-scale features.

For real radar data, the filter algorithm must handle missing data; that is, not all the pixels in the 3 × 3 array may have valid velocity estimates. The median filtering algorithm described here requires that (a) the target pixel be valid and (b) the median be calculated over all valid points within the 3 × 3 array. There was some experimentation on whether a minimum number of valid points should be required before applying the median filter or displaying the pixel. However, setting any kind of threshold severely decreased the availability of clear-air velocity data, particularly at longer ranges. Therefore, no thresholds were set, resulting in some speckle in the data. Isolated data points are filtered out by the signal processor. (Australia uses


the Sigmet RVP6 and Canada uses the Sigmet RVP7 signal processors.)

TABLE 1. The Laplacian filtering performance. The analysis is for a Nyquist velocity of 16 m s⁻¹. When the center value has a dealiasing error (indicated by 32 m s⁻¹ in the second column), the technique is able to detect it (indicated by "error" in the fifth column) for a variety of error patterns.

Case | Center value (m s⁻¹) | Neighbors | χ | Detection criterion |χ| > 16 m s⁻¹
1 | 0 | 0 | 0 | No error
2 | 32 | 0 | −32 | Error
3 | 0 | 32 | 4 | No error
4 | 32 | 32 | −28 | Error
5 | 32 | −32 | −36 | Error
6 | 32 | 32, 32 | −24 | Error
7 | 0 | 32, 32 | 8 | No error
8 | 32 | 32, 32, 32 | −20 | Error
9 | 32 | 32, 32, 32, 32 | −16 | Ambiguous

b. A detection/correction technique

The median method above is straightforward but has at least two potential drawbacks: it smooths the small-scale velocity field, which may not be desirable, and it does not take advantage of the discrete nature of the velocity dealiasing [Eq. (3)]. The technique described in this section tries to solve both of these problems as well as to deal with coherent patches of suspect data. The nature of the discrete dealiasing errors suggests a "detection then correction" technique in which (i) a dealiasing error is detected by searching for isolated radial velocity values that are approximately one Nyquist interval different from their neighbors, and (ii) the value is corrected by adding or subtracting integral multiples of the Nyquist interval until the resulting value is approximately the same as that of its neighbors. "Approximately" means that the value is within a Nyquist velocity of the average value of the neighbors.

In many implementations of the dual PRF scheme, the PRF of the ray is not available from the signal processor (e.g., RVP6 and early versions of the RVP7). This complicates the detection/correction scheme, since it is not clear which Nyquist interval to use to detect or to correct the dealiasing error. Since there are many existing operational systems where the PRF is not known, we address both situations.

1) DETECTION

Detection is done by applying a "Laplacian" operator to the radial velocity field to compute a discrimination parameter (χ) that is used to detect an error. The Laplacian is defined in the digital convolution sense (Gonzalez and Woods 1992; Marr 1982), where a spatial operator of the form

    1  1  1
    1 −8  1    (6)
    1  1  1

is convolved with the two-dimensional data field. This is a slight modification of the standard Laplacian operator, in which the corner elements are zero and the central value is −4 instead of −8. By using more neighbors than the standard approach, the operator is more robust to dealiasing errors in the nearest neighbors (discussed below) and therefore handles the coherent-patch case. We compute the following at each grid point:

    χ(x,y) = Σ_{i=−1..1} Σ_{j=−1..1} w_{i,j} V(x+i, y+j),    (7)

where x, y refer to the ray and azimuth of the target grid point; w_{i,j} equals 1 except when i = j = 0, where it equals −8, as in Eq. (6), when there are no missing data; χ is the value of the modified Laplacian discrimination parameter; and V is the radial velocity. For convenience, the value χ(x,y) is divided by the number of valid neighboring points to scale it to the Nyquist interval. The algorithm can be implemented in either radar (range, azimuth) or Cartesian space. Note that the nonuniform distance between the nearest neighbors does not play a role, since we are only interested in finding discontinuities. In the following, we use the term "target pixel" to refer to either an element of a Cartesian grid or a radar sample volume in range–azimuth space.

2) MISSING DATA

If there are missing data, the central weight is adjusted: its value is given by the negative of the number of valid neighboring points. For example, if the target pixel is in error by a Nyquist interval and the neighbors are all correct, then χ(x,y) will have a value equal to the negative of a Nyquist interval. This assumes that the velocity field is locally constant or odd symmetric, which is generally a safe assumption. In practice, the velocities have an associated error and the previous statement is only approximate.

3) COHERENT PATCHES OF SUSPECT DATA

Table 1 shows how the algorithm works for various error patterns. The first column in the table indicates the case number. The second column indicates the velocity difference of the target pixel from the mean in the neighborhood; the absolute values are not important, as the Laplacian computes differences and the mean is inherently subtracted out. The third column indicates, by the number of entries and by their values, the number of errors in the neighbors; so "32, 32" means that two of the eight neighbors have errors of a Nyquist interval (32 m s⁻¹ in this case) and the rest of the neighbors have no error. The fourth column shows the value of χ computed from Eq. (7). The fifth column indicates whether the algorithm detects a dealiasing error by comparing χ to the Nyquist velocity (16 m s⁻¹). If |χ| is less than the Nyquist velocity, the target pixel is assumed to be correct; if it is greater, an error is assumed to be detected. In this analysis, the measurement is assumed to have zero variance. There are other detection approaches, such as comparing the target pixel to the median of some local-area velocity estimate (Holleman and Beekhuis 2003), but they are not needed in this algorithm.

In the null case, where there are no errors at all (case 1), the value of χ is zero and no correction is needed. In the isolated-pixel case, where only the center point is in error (case 2), the absolute value of χ equals the Nyquist interval, the dealiasing error is detected, and the value is corrected; this is the most common case. If one of the neighboring values, but not the center point, is in error (case 3), then the absolute value of χ is an eighth of the Nyquist interval and the target pixel is not corrected, which is the appropriate result. Cases 4–9 address the issue of coherent areas of suspect data.
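The detection step summarized in Table 1, together with the missing-data weighting of section 3b(2), can be sketched as follows. The function names are ours, and the correction shown is a simplified nearest-multiple unfold rather than the explicit candidate search over both Nyquist intervals described in section 3b(4).

```python
def laplacian_chi(center, neighbors):
    """Modified-Laplacian discrimination parameter, Eq. (7), scaled by
    the number of valid neighbors; None marks missing data."""
    valid = [v for v in neighbors if v is not None]
    if len(valid) < 2:              # points threshold: need >= 2 neighbors
        return None
    # With the central weight set to minus the number of valid
    # neighbors, the scaled chi is the neighbor mean minus the center.
    return sum(valid) / len(valid) - center

def correct_pixel(center, neighbors, vn):
    """Detect a dealiasing error (|chi| > vn) and unfold the center by
    the nearest whole number of Nyquist intervals (2 * vn)."""
    chi = laplacian_chi(center, neighbors)
    if chi is None or abs(chi) <= vn:   # |chi| == vn is the ambiguous case
        return center
    return center + 2.0 * vn * round(chi / (2.0 * vn))

# Table 1, case 2: an isolated error of one Nyquist interval is corrected.
assert correct_pixel(32.0, [0.0] * 8, 16.0) == 0.0
# Table 1, case 9: chi = -16 is ambiguous and the pixel is left untouched.
assert correct_pixel(32.0, [32.0] * 4 + [0.0] * 4, 16.0) == 32.0
```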
If both the center and one neighbor are in error (case 4), then the value of χ decreases by only an eighth of the Nyquist interval, and the algorithm detects and appropriately corrects the target pixel. If the errors are in the opposite sense (case 5), then there is a greater difference, χ is even larger in magnitude, and again the target pixel is appropriately corrected. Without belaboring the analysis, the results in Table 1 indicate that the Laplacian technique can handle up to about four suspect neighbors. If the errors negate each other, the technique will work with even more suspect neighbors. This opens up the possibility that even more aggressive dual PRF techniques can be pursued. In the limit, if there is only one neighbor, it is not possible to determine which of the two points is correct. So we impose a points threshold: at least two neighbors are needed in order to be able to make a comparison. Table 2 shows an analysis of the limiting cases.

TABLE 2. The Laplacian filtering performance for the limiting case of two points (including the target pixel).

Case | Center value (m s⁻¹) | Neighbors | χ | Detection criterion |χ| > 16 m s⁻¹
1 | 32 | 0 | −32 | Error
2 | 32 | 32 | −16 | Ambiguous

4) CORRECTION

The correction algorithm depends on whether the PRF of the ray is known. If the PRF of the ray is unknown, the algorithm computes all the possibilities using both Nyquist velocities and then chooses the velocity that has the smallest magnitude of χ computed from Eq. (7). Note that these corrections are added to the original dealiased velocity estimate given by Eq. (3). For a 4:3 dual PRF radar, they are

    C_1 = ±2V_N1 or ±4V_N1,    (8)

and

    C_2 = ±2V_N2 or ±4V_N2 or ±6V_N2.    (9)

If the PRF of the radial velocity is known, then we can reduce the number of candidates and apply the corrections from Eq. (8) or Eq. (9), depending on the PRF of the ray. This reduces the number of computations, thereby speeding up the algorithm, and it also reduces the possibility of a wrong correction.

4. The effect of random measurement errors

Since the dual PRF technique relies on accurate measurements of the radial velocity, an obvious question to ask is, what are the expected error rates given the uncertainties in real velocity measurements, and how do these compare with observed error rates? The Canadian and Australian operational dual PRF Doppler radars use slightly different scanning and sampling strategies (Table 3). This section examines the expected differences in performance.

TABLE 3. The dual PRF radar sampling characteristics in Australia and Canada.

Country | PRFs (Hz) | Extended Nyquist (m s⁻¹) | Angular resolution (°) | Antenna rotation rate (rpm) | Number of samples | Beamwidth (°)
Canada | 1200/900 | 48 | 0.5 | 1.5 | 64 | 0.65
Australia | 1000/750 | 40 | 1.0 | 3.0 | 40 | 1.0

There are well-documented theoretical relations describing the random measurement errors (Zrnic 1977; Doviak and Zrnic 1993). Figure 3 shows the magnitude of the expected errors as a function of spectral width, as well as the errors as a function of signal-to-noise ratio for two extreme values of spectral width, for each of the four PRF regimes given in Table 3. The statistical errors for the two strategies are similar.

The next question is, what dealiasing error rates do these measurement uncertainties translate into? This can be assessed using Eq. (5) and knowing that σ(V_i − V_{i−1}) = 1.5σ(V_i) for independent errors. For a given uncertainty in the velocity measurements and Gaussian errors, we can estimate how often the condition of Eq. (5) will be exceeded. To test this and the following questions, we create a model wind field and compute the errors. First, an ideal wind field is mapped onto a radar grid and the radial wind component is calculated at each grid point. This wind field may include features such as mesocyclones and large-scale wind shifts or, as in the first case to be discussed, a uniform wind. Gaussian distributed noise is then added to the field with a standard deviation corresponding to the measurement errors under consideration. Then, if the radial velocities along individual beams exceed the Nyquist velocity for that beam, the velocity is folded. The resulting data field is then dealiased using the techniques described in section 2, and we can compare this "measured" field with the original "true" field to derive "observed" error rates. The error rate is defined as the number of incorrectly dealiased data points divided by the total number of data points.

Figure 3 shows the error rates for the 1000-/750-Hz (4:3) and 1200-/800-Hz (3:2) simulations (the Australian radar setup). Reasonable agreement with the theory is seen, although the observed error rates are, in general, a little smaller. The rates increase as σ(V) increases and are higher for smaller Nyquist velocity differences. However, the error rates for even small measurement

uncertainties will produce problems for automated algorithms for the detection of circulations and microbursts. Therefore, error corrections are required. These are seen to reduce the error rates to very small values for even very large statistical errors in the winds (Table 4). The Laplacian technique is significantly more efficient at reducing the random errors than a simple median filter. Applying the median filter after the Laplacian correction produces a small further improvement in error rates (not shown).

FIG. 3. Theoretical error rates for 1000-/750-Hz (solid), 1200-/900-Hz (dashed), 1000-/667-Hz (dotted), and 1200-/800-Hz (dash–dotted) dual PRF operation as a function of the standard deviation of the velocity estimates. The squares are the observed error rates in simulations for the 1000-/750-Hz dealiasing and the triangles for 1200/800 Hz.

TABLE 4. Error rates derived from simulation for a variety of velocity measurement uncertainties for both the median and Laplacian techniques.

σ(V) (m s⁻¹) | Raw error rate (%) | After median filtering (%) | After Laplacian filtering (%)
0.5 | ~0 | ~0 | ~0
1.0 | 1.4 | 0.1 | 0.06
1.25 | 5.2 | 0.4 | 0.4
1.5 | 10.5 | 1.4 | 2.3
1.75 | 16.1 | 4.5 |
2.0 | 22.1 | 9.6 |
2.5 | 32.5 | 21.5 |
3.0 | 41 | |
4.0 | 54 | |

In practice, there are often missing data. Table 5 shows the simulated performance of the Laplacian correction algorithm for an increasing number of missing points for uncertainties of 1.25 and 2 m s⁻¹. The error rates were not affected much by one or two missing neighbors, but the corrections failed when half the values were missing. The median filter performed worse (not shown).

TABLE 5. Effect of missing values on the Laplacian technique. The last five columns give the corrected error rate (%) for the indicated number of missing neighbors.

σ(V) | Uncorrected error rate (%) | 1 | 2 | 3 | 4 | 5
1.25 | 5.2 | 0 | 0 | 0.2 | 3.5 | 5.2
2 | 22 | 0.1 | 0.6 | 3.4 | 14 | 21

In order to compare this analysis with real data, we have selected a case from Canada of an intense synoptic-scale storm, but with a fairly uniform wind field and widespread strong echoes. Figure 4 shows data from rain, so the signal-to-noise ratio of the data is high. The precipitation is predominantly stratiform in nature with some embedded convection, so there are no large-amplitude wind shear features such as microbursts. The example shows a strong synoptic-scale wind field with a low-level jet of about 36 m s⁻¹ from the northwest. Incorrectly dealiased data show up as speckling; in particular, the area around range 80–120 km at an azimuth of 10° shows the most speckling. First, the strong wind field shows the need for the dual PRF dealiasing technique, since the wind field would otherwise be aliased two or three times and would be difficult to interpret. The middle figure is Laplacian filtered and the bottom figure is median filtered. Some speckling still appears with the median filter. For synoptic-scale wind fields, either filtering technique works effectively, since the filters operate on very small scales and the important features in this case are on the large scale.

FIG. 4. An example of an intense synoptic-scale storm from the Marble Mountain radar in Newfoundland, Canada. The C-band radar operates at 1200/900 Hz, alternating on 0.5° azimuths. (top) Dealiased radial velocity image before filtering. The middle and bottom panels show the filtered data (median and Laplacian, as marked). The Laplacian technique is almost perfect. The median technique improves the results but is not as good as the Laplacian technique.

The error rate here is approximately 2.3%. The estimate is approximate because the algorithm may not have caught all the pixels that are in error and may have corrected some that did not need to be corrected; however, manual examination of the results indicates that these factors are negligible. These rates are consistent with uncertainties in the wind field of about 1.2 m s⁻¹. Given the high signal-to-noise ratio of the data, the statistical uncertainty should be much less than this (Fig. 3), which indicates that even in this fairly benign environment it is meteorological noise that is the dominant source of wind variance and hence of dealiasing error. Further evidence to support this is that the error rates vary only slightly with reflectivity levels between 0 and 40 dBZ. The most obvious nonrandom regions of "speckle" in the raw velocity images occur not in areas of large mean velocity shear but rather in areas where there is a significant reflectivity gradient, which indicates that it is regions of very small-scale flow distortion around cells that are producing the enhanced error rates.

5. Effects of meteorological features

In this section, we analyze the behavior of the correction algorithms through simulation and examples.

a. Strong azimuthal gradients—Simulation

The main source of error for data with a high signal-to-noise ratio is associated with significant azimuthal gradients of the radial wind component. As discussed above and by May (2001), if the gradient is large, errors may occur even if the data from the individual pixels are not aliased. To examine the limitations of the correction techniques, a wind field with an arbitrary discontinuity in azimuth and in range is created. We also add noise to the field to illustrate its contribution in creating dealiasing errors. Radial gradients do not cause dealiasing errors, although they do affect our ability to automatically correct dealiasing errors.

FIG. 5. Simulation of linear wind shifts in azimuth and in range. (a) The "input" wind field; (b) the radial wind field as sampled by a dual PRF radar; (c), (d) Laplacian and median filtered. This is an unrealistic example but illustrates the limitations of the techniques. (See text for full description.)

FIG. 6. Same as Fig. 5, except random noise has been added to the data. Note the patch of uncorrected data. This is somewhat an artifact of the simulation data but nonetheless demonstrates that when the velocity simultaneity assumption is violated, unpredictable results arise.

First, we will consider a field with a step in velocity along a radial and a second step at constant range, without noise (Fig. 5, top). This wind field violates

the inherent "same velocity" assumption of the dual PRF technique. The raw dual PRF estimate (second panel from the top) shows, as expected, a ray with dealiasing errors along the region of the strong azimuthal gradient. Since the step is constant in magnitude, the aliasing error is also constant. The second panel from the bottom shows the Laplacian filtering results and the bottom panel shows the median filtering results. What is interesting is the effect of the discontinuities on the Laplacian technique. In this instance, the median filtering is able to correct the dealiasing errors because of its generic nature, but the Laplacian technique is strongly coupled to the assumptions of the dual PRF technique and therefore breaks down in this case. This example is a limiting, pathological situation, and we have not seen real examples of it.

The addition of measurement noise to the simulations can have a considerable effect when the small-scale variability is large (Fig. 6). When the standard deviation exceeds about 1.5 m s⁻¹, errors start appearing in the corrected radial velocity field (bottom two panels). These errors appear at random positions along the wind shift boundaries, with some correct and some incorrect dealiasing. The smoothing effect of the median filtering is evident in the bottom panel. A situation where the median filter would break down is when the true wind velocity exceeds the extended Nyquist velocity: the median filter may then choose a velocity near zero instead of the correct velocity near the extended Nyquist velocity.

b. Wind shift—Cold frontal passage

In practice, radial wind fields have either simpler, large synoptic-scale structure or much finer, small convective-scale structure. Let us first consider a large-scale wind shift structure. Figure 7 shows data collected in Canada using a 1200-/900-Hz (4:3) sampling strategy.
As expected, the largest amount of speckle is seen near the wind shift boundary, which is clearly visible in the data. However, both the median and Laplacian filtering produce high quality data, with the Laplacian filtering giving a very clean result. This case is straightforward, as the data coverage is almost complete.

FIG. 7. Example of a wind shift from the Woodlands radar, Manitoba, Canada, along a cold frontal passage. Again, the Laplacian filtering is almost perfect while the median filtering leaves some speckles.

c. Wind shift—Thin line on a gust front boundary/low SNR case

In situations where there is an atmospheric boundary such as a frontal passage or a gust front from a thunderstorm (in clear air), the azimuthal radial shear across the boundary can be a main source of error. Figure 8 presents an example from Sydney of a ‘‘southerly change’’—a low-intensity example of a topographically modulated front known as a Southerly Buster (McInnes and McBride 1993). This is a situation where the signal-to-noise ratio is relatively low, random errors are thus enhanced, and significant dealiasing errors are expected. Figure 8 shows the filtering results. At this scale, the median filter removes the speckle, but the ‘‘smoothing’’ becomes more evident. When the data are sparse, as in this case, not all the errors are corrected properly (as discussed earlier).

FIG. 8. An example of a thin line along a Southerly Buster from the Sydney radar. The Southerly Buster is a wind shift feature, evident on radar as a thin line. This is a difficult feature to correct since the data are sparse.

d. Mesocyclones and microbursts

The weather systems where these dealiasing problems are expected to be most severe are tornadoes and microbursts. These are finescale features that have the most significance for severe weather forecasts. We analyze a microburst observed in Canada and a mesocyclone observed in a severe thunderstorm during the Sydney 2000 Forecast Demonstration Project (Keenan et al. 2003). See May (2001) for a simulation study and discussion. Data from the tilt where the mesocyclone was at its most intense (∼3 km) are shown in Fig. 9. A clear hook echo is seen in the reflectivity data, and this extends to lower elevations (not shown). The uncorrected radial velocity shows some similarity to the striped structures shown by May (2001). The data have been subjected to both the median filtering and Laplacian correction procedures, with somewhat similar results. The median filtering gives a slightly smoother field, but the same data points have been corrected in both cases, except for a small number of pixels in the low-reflectivity regions. In any case, the corrected data are clearly of high enough quality for automated detection algorithms to find the circulation, despite the large number of pixels (25%) that were corrected. Figure 10 shows an example of a microburst case. Similar to the mesocyclone case, the data have been almost perfectly corrected, particularly in the Laplacian filtering case.

FIG. 9. An example of a mesocyclone from the Sydney radar taken during the World Weather Research Programme's Sydney 2000 Forecast Demonstration Project. Both techniques come up with the same excellent result, with smoothing evident in the median filter technique.

6. Clutter problems

In the previous section, we demonstrated that in the absence of significant clutter, high quality data can be obtained when the signal levels are strong enough that there is some continuity between pixels. However, in real data there are often areas of significant ground clutter that can affect the velocity measurements. Ground clutter filters are often applied to the data to mitigate this effect, but they can create problems if they do not work perfectly (filtering either too much or too little). First, zero-velocity clutter residue is aliased and appears at values 2 and 4 times the Nyquist velocity of the individual rays within the dealiased data. These velocities are referred to as ‘‘blind velocities’’ (Skolnik 2000). The Australian system uses infinite impulse response (IIR) filters and the Canadian system uses adaptive fast Fourier transform filters. It is beyond the scope of this paper to address the nuances of the filtering techniques (Doviak and Zrnic 1993; Passarelli et al. 1981; Joe et al. 1998; Seltmann 2001; Lapczak et al. 1999). To demonstrate the problems, the worst case we have encountered is discussed. The radial velocity estimates around the blind velocities are undoubtedly less accurate than other velocities, particularly if the clutter is broad, variable, and poorly filtered.

Figure 11 shows velocity fields measured by the Sydney Doppler radar over the Blue Mountains, approximately 70 km west of central Sydney. Unfiltered data from this area show clutter signals with an intensity of 40 dBZ and greater. These data were collected on a day with high wind speeds. IIR filters were applied to the time series and pulse-pair processing was employed. The wind speed was approximately 25 m s⁻¹ at the lowest elevation over the high terrain, covering the velocity range near twice the Nyquist velocity of 12 m s⁻¹. The clutter contamination has clearly caused extreme problems at the lower tilt shown. There are bands of spurious winds and a large degree of banded reflectivity structure that is clearly erroneous. (The spoking is thought to result from using the same IIR filter coefficients for the two PRFs, which filters some of the weather signal.) The spatial scales of these structures are such that neither the median filter nor the Laplacian filter helps much, although the median filter results are slightly better than the Laplacian filter results. While this example appears disastrous, the occurrence of such data is quite unusual, and quality control algorithms, such as simple masking, can be used to remove clutter-affected data and leave usefully clean images. However, it is clear that such contaminated data should not be passed on to users or automated algorithms.

7. Conclusions

The dual PRF technique for resolving velocity ambiguities is considered de rigueur for operational forecast use with C-band radars. It is no longer acceptable to present folded data to operational forecasters for interpretation. Both the Australian and Canadian Doppler radars use 4:3 dual PRF sampling, extending the Nyquist velocity range to 40 and 48 m s⁻¹, respectively. Even at these extended Nyquist intervals, folding can occur, although it is generally only seen in intense synoptic-scale storms.
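The unfolding step at the heart of the technique can be sketched as follows. This is a minimal illustration under idealized assumptions (noise-free estimates, exhaustive candidate search), not the operational implementation, and the function name is hypothetical:

```python
def dealias_dual_prf(v1, v2, va1, va2):
    """Recover a radial velocity from two aliased dual PRF estimates.

    v1, v2   : velocities (m/s) measured on the high- and low-PRF rays
    va1, va2 : the corresponding single-PRF Nyquist velocities (va1 > va2)
    Returns the candidate within the extended Nyquist interval on which
    the two measurements agree most closely.
    """
    ve = va1 * va2 / (va1 - va2)          # extended Nyquist velocity
    n1_max = int(round(ve / va1))
    n2_max = int(round(ve / va2))
    best, best_diff = 0.0, float("inf")
    for n1 in range(-n1_max, n1_max + 1):
        c1 = v1 + 2.0 * n1 * va1          # possible true velocities for v1
        if abs(c1) > ve:
            continue
        for n2 in range(-n2_max, n2_max + 1):
            c2 = v2 + 2.0 * n2 * va2      # possible true velocities for v2
            if abs(c2) > ve:
                continue
            if abs(c1 - c2) < best_diff:  # keep the most consistent pair
                best, best_diff = 0.5 * (c1 + c2), abs(c1 - c2)
    return best

# A true wind of 30 m/s folds to -2 m/s at va1 = 16 and to 6 m/s at va2 = 12:
print(dealias_dual_prf(-2.0, 6.0, 16.0, 12.0))  # → 30.0
```

Measurement noise perturbs v1 and v2 independently, so the most consistent pair can land on the wrong alias — which is exactly the class of dealiasing error the correction filters target.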

FIG. 10. An example of a microburst from the Woodlands radar at 2310 UTC 4 Jul 2002 in Canada. Again, Laplacian filtering is superior to median filtering in this case.

The paper focused on C-band radars, but the results and techniques presented can be applied to Doppler weather radars at other wavelengths. The dual PRF technique is implemented on alternating azimuths; the inherent assumption is that the data from azimuthally adjacent range bins sample the same velocity. Dealiasing errors can occur when this assumption is violated or because of statistical errors. The dealiasing error correction routines discussed in this paper, applied to the dual PRF velocity data, provide clean datasets that greatly aid analysis by forecasters, particularly in severe weather situations. These corrections are also needed for automated shear-based feature algorithms for mesocyclones and microbursts (Zrnic et al. 1985). The correction algorithms are straightforward and can be implemented as part of the routine signal processing code or during the data analysis stage.

The median filtering technique is straightforward and preserves gradients but suppresses local extrema. It may have problems when the velocity data straddle the extended Nyquist velocity. The Laplacian technique is a detection-then-correction scheme that attempts to detect a dealiasing error and then correct it based on knowledge of the nature of the dealiasing scheme. It preserves local peaks, relying on the inherent assumption that adjacent velocities differ by less than a Nyquist velocity. Because it preserves local velocity peaks, the Laplacian technique is preferred. However, if smoothing can be tolerated and small-scale features (at the azimuthal resolution of the data) are not significant, then the median filter is acceptable. Both techniques are robust with respect to missing and sparse data. The Laplacian filter performance was robust to various error patterns, indicating that it might be successful with higher PRF ratios.
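As an illustration of the median approach, here is a generic sketch, a toy under stated assumptions rather than the operational code: a gate whose estimate deviates from the local median by more than the per-ray Nyquist velocity is shifted by the multiple of twice that Nyquist velocity that lands closest to the median (assuming, as the dual PRF scheme implies, that dealiasing errors are multiples of twice the per-ray Nyquist velocity):

```python
import numpy as np

def median_correct(v, va_ray, window=5):
    """Correct isolated dual PRF dealiasing errors along one ray.

    v      : sequence of dual PRF velocity estimates (m/s)
    va_ray : Nyquist velocity of the PRF used for this ray (m/s)
    A gate deviating from the median of its neighbours by more than
    va_ray is shifted by the multiple of 2*va_ray nearest the median.
    """
    v = np.asarray(v, dtype=float)
    out = v.copy()
    half = window // 2
    for i in range(len(v)):
        lo, hi = max(0, i - half), min(len(v), i + half + 1)
        neighbours = np.delete(v[lo:hi], i - lo)   # exclude the gate itself
        med = np.median(neighbours)
        if abs(v[i] - med) > va_ray:
            # shift by the multiple of 2*va_ray closest to the median
            n = round((med - v[i]) / (2.0 * va_ray))
            out[i] = v[i] + 2.0 * n * va_ray
    return out
```

For example, a lone gate reading 10 − 2 × 12 = −14 m s⁻¹ amid 10 m s⁻¹ neighbours (a classic single-Nyquist error at va_ray = 12) is restored to 10 m s⁻¹, while uniform fields pass through unchanged; note that this formulation also exhibits the straddling failure described above when the true field sits near the extended Nyquist velocity.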
The algorithms described in this paper were formulated, for polar coordinate radar data, both for the case where the PRF of each ray is known and for the case where it is not. The PRF of the ray is not always available from the signal processor, and there are a large number of signal processors in operation for which this is the case. The techniques work well in both situations. The Laplacian technique has also been applied, with good success, to Cartesian image data, where the ray information is lost and the data are very coarse (32 levels).

With this error correction we have demonstrated that high quality data can be obtained even in cases where the data coverage is quite limited and where significant severe weather events are occurring. Examples ranging from widespread precipitation to mesocyclones and fronts show highly detailed, accurate wind estimates. In extreme severe weather cases, where the velocity difference between adjacent range bins exceeds the Nyquist velocity, as in a tornadic vortex signature, there may still be a problem, but this has not been seen in the Australian or Canadian weather regimes.

There has been substantial work on retrieving second

FIG. 11. An example where clutter-contaminated echo poses additional difficulties. The scan is from the lowest tilt (0.5°). The source of the ground clutter is the Blue Mountains to the west of the Sydney basin.

trip echoes for operational applications (Joe et al. 1998; Sachidananda and Zrnic 1998; Frush and Doviak 2001). Another benefit of the dual PRF approach is that the ‘‘dead zone’’ between the first and second trips, caused by the receiver being switched off during pulse transmission, is also staggered, since the unambiguous range depends on the PRF. The dead zone can therefore be filled with reflectivity and velocity data from the appropriate PRF. The largest problems occur where there is significant clutter contamination of the velocity estimates. This is worst when the velocity is large and the aliased weather spectral peak lies near the clutter peak at zero velocity.

Acknowledgments. This paper originated during the World Weather Research Programme's Sydney 2000 Forecast Demonstration Project, which provided a wonderful opportunity for scientific interaction and collaboration. Many people were involved in its planning, but the leadership and efforts of Tom Keenan are very much appreciated. The cooperation of the technical staff and management of the New South Wales Forecast Office, as well as the Radar Unit of the Bureau of Meteorology, also deserves mention.

REFERENCES

Banjanin, Z. B., and D. S. Zrnic, 1991: Clutter rejection for Doppler weather radars which use staggered pulses. IEEE Trans. Geosci. Remote Sens., GRS-29, 610–620.
Doviak, R. J., and D. S. Zrnic, 1993: Doppler Radar and Weather Observations. 2d ed. Academic Press, 562 pp.
——, D. Sirmans, D. S. Zrnic, and G. Walker, 1976: Resolution of pulse-Doppler radar range and velocity ambiguities in severe storms. Preprints, 17th Conf. on Radar Meteorology, Seattle, WA, Amer. Meteor. Soc., 15–22.
Frush, C., and R. J. Doviak, 2001: Performance assessment of the SZ(8/64) phase code to separate weather signals in a research WSR-88D. Preprints, 17th Int. Conf. on Interactive Information and Processing Systems, Albuquerque, NM, Amer. Meteor. Soc., 133–136.
Gonzalez, R. C., and R. E. Woods, 1992: Digital Image Processing. Addison-Wesley, 716 pp.
Holleman, I., and J. Beekhuis, 2003: Analysis and correction of dual-PRF velocity data. J. Atmos. Oceanic Technol., 20, 443–453.
Joe, P., D. Hudak, C. Crozier, J. Scott, R. Passarelli Jr., and A. Siggia, 1998: Signal processing and digital IF on the Canadian Doppler radar network. Advanced Weather Radar Systems, COST 75 International Seminar, Locarno, Switzerland, European Commission, 544–556.
Keenan, T., and Coauthors, 2003: The Sydney 2000 World Weather Research Programme Forecast Demonstration Project: Overview and current status. Bull. Amer. Meteor. Soc., in press.
Lapczak, S., and Coauthors, 1999: The Canadian National Radar

Project. Preprints, 29th Conf. on Radar Meteorology, Montreal, QC, Canada, Amer. Meteor. Soc., 327–330.
Marr, D., 1982: Vision. W. H. Freeman and Co., 397 pp.
May, P. T., 2001: Mesocyclone and microburst signature distortion with dual-PRT radars. J. Atmos. Oceanic Technol., 18, 1229–1233.
——, and P. Joe, 2001: The production of high quality Doppler velocity fields for dual-PRT weather radar. Preprints, 30th Conf. on Radar Meteorology, Munich, Germany, Amer. Meteor. Soc., 286–288.
McInnes, K. L., and J. L. McBride, 1993: Australian southerly busters. Part I: Analysis of a numerically simulated case study. Mon. Wea. Rev., 121, 1904–1920.
Passarelli, R., P. Romanik, S. G. Geotis, and A. D. Siggia, 1981: Ground clutter rejection in the frequency domain. Preprints, 20th Conf. on Radar Meteorology, Boston, MA, Amer. Meteor. Soc., 308–313.
Sachidananda, M., D. S. Zrnic, R. J. Doviak, and S. Torres, 1998: Signal design and processing techniques for WSR-88D ambiguity resolution, Part 2. National Severe Storms Laboratory, Norman, OK, 105 pp.

Sauvageot, H., 1992: Radar Meteorology. Artech House, 366 pp.
Seltmann, J., 2001: Quantitative effects of clutter highpass filtering as used by DWD. Preprints, 30th Conf. on Radar Meteorology, Munich, Germany, Amer. Meteor. Soc., 328–330.
SIGMET, 1997: RVP 6 Doppler signal processor user's manual. Sigmet Inc., Westford, MA, 218 pp.
Sirmans, D., D. S. Zrnic, and W. Baumgarner, 1976: Extension of maximum unambiguous Doppler velocity by use of two sampling rates. Preprints, 17th Conf. on Radar Meteorology, Seattle, WA, Amer. Meteor. Soc., 23–28.
Skolnik, M., 2000: Introduction to Radar Systems. 3d ed. McGraw-Hill, 784 pp.
Strauch, R. G., D. A. Merritt, K. P. Moran, K. B. Earnshaw, and D. van de Kamp, 1984: The Colorado wind profiling network. J. Atmos. Oceanic Technol., 1, 37–49.
Zrnic, D. S., 1977: Spectral moment estimates from correlated pulse pairs. IEEE Trans. Aerosp. Electron. Syst., AES-13, 344–354.
——, D. Burgess, and L. Hennington, 1985: Automatic detection of mesocyclonic shear with Doppler radar. J. Atmos. Oceanic Technol., 2, 425–438.