
IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 53, NO. 7, JULY 2007

Signal Sampling and Recovery Under Dependent Errors

Mirosław Pawlak, Member, IEEE, and Ulrich Stadtmüller

Abstract—The paper examines the impact of additive correlated noise on the accuracy of the signal reconstruction algorithm originating from the Whittaker–Shannon (WS) sampling interpolation formula. A class of band-limited signals as well as signals which are non-band-limited are taken into consideration. The proposed reconstruction method is a smooth post-filtering correction of the classical WS interpolation series. We assess both the pointwise and global accuracy of the proposed reconstruction algorithm for a broad class of dependent noise processes. This includes short- and long-memory stationary errors that are independent of the sampling rate. We also examine a class of noise processes for which the correlation function depends on the sampling rate. Whereas the short-memory errors have relatively small influence on the reconstruction accuracy, the long-memory errors can greatly slow down the convergence rate. In the case of the noise model depending on the sampling rate, further degradation of the algorithm accuracy is observed. We give quantitative explanations of these phenomena by deriving the rates at which the reconstruction error tends to zero. We argue that the obtained rates are close to being optimal; in fact, in a number of special cases they agree with known optimal minimax rates. The problem of the limit distribution of the L₂ distance of the proposed reconstruction algorithm is also addressed. This result allows us to tackle the important problem of designing nonparametric lack-of-fit tests. The theory of the asymptotic behavior of quadratic forms of stationary sequences is utilized in this case.

Index Terms—Convergence, dependent noise, lack-of-fit tests, limit theorems, long-memory noise, quadratic forms, signal recovery, smoothing, Whittaker–Shannon (WS) interpolation.

I. INTRODUCTION

The Whittaker–Shannon (WS) interpolation series plays a fundamental role in representing signals/images in the discrete domain. In fact, it is commonly recognized as a milestone in signal processing, communication systems, as well as Fourier analysis [22], [27], [36], [39], [41], [43]. The WS reconstruction theorem says that every signal f ∈ L₂(ℝ) such that

  f(t) = (2π)⁻¹ ∫_{−Ω}^{Ω} F(ω) e^{iωt} dω,   (1.1)

where Ω < ∞ and F is the Fourier transform of f, can be perfectly reconstructed from its discrete values f(kτ), k = 0, ±1, ±2, …

Manuscript received October 14, 2005; revised November 28, 2006.
M. Pawlak is with the Department of Electrical and Computer Engineering, University of Manitoba, Winnipeg, Manitoba, R3T 5V6, Canada (e-mail: [email protected]).
U. Stadtmüller is with the Department of Number Theory and Probability Theory, University of Ulm, 89069 Ulm, Germany (e-mail: [email protected]).
Communicated by P. L. Bartlett, Associate Editor for Pattern Recognition, Statistical Learning and Inference.
Color versions of Figures 1 and 2 in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TIT.2007.899531

Indeed, we have the following celebrated WS reconstruction formula:

  f(t) = Σ_{k=−∞}^{∞} f(kτ) sinc((t − kτ)/τ),   (1.2)

where sinc(t) = sin(πt)/(πt) and the sampling interval τ is such that 0 < τ ≤ π/Ω. The finite number Ω in (1.1) is called the bandwidth, and the class of signals for which (1.1) holds is named band-limited. This class will be denoted in the sequel as BL(Ω).

The WS interpolation series has been extended to a number of circumstances including multiple dimensions, random signals, non-band-limited signals, sampling in generalized spaces, and reconstruction from irregularly spaced data [22], [27], [36], [39], [41], [43].

In practice, one rarely has access to perfect samples but rather to their noisy versions, due to measurement and transmission errors. Hence, suppose that the observed data obey the model

  y_k = f(kτ) + ε_k,   (1.3)

where τ is the sampling rate. The noise process {ε_k} is a second-order stationary stochastic process with E(ε_k) = 0, var(ε_k) = σ² < ∞, and cov(ε_k, ε_{k+j}) = r_τ(j), where the covariance function r_τ(·) generally depends on τ.

The problem of designing reconstruction formulas from noisy samples has been addressed lately [30]–[36]. It has been noted that the presence of noise causes the reconstruction formula in (1.2) to break down; this is due to the fact that it is dangerous to interpolate noisy data. A number of smoothly corrected versions of the interpolation series have been developed, and their convergence properties have been established. The convergence results obtained in [30]–[36] were established under the condition that cov(ε_k, ε_{k+j}) = 0 for j ≠ 0, i.e., only the white noise process was considered. In [32], these results were extended to the case when {ε_k} is a linear stationary process with a fast-decaying correlation function; such a situation is often called the short-range-dependent errors case. Note also that in [32] only global convergence results were established. In [30], on the other hand, a general discussion of the emerging problems of dealing with dependent noise in sampling theorems was given and some preliminary results were pointed out.
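The mechanics of (1.2) and the sampling model (1.3) can be illustrated with a short sketch. This is not code from the paper; `ws_interpolate` is a hypothetical helper name, and the series is necessarily truncated to finitely many samples.

```python
import numpy as np

# Illustrative sketch: the WS interpolation formula (1.2) applied to a
# truncated set of samples of a band-limited signal.
# np.sinc(x) = sin(pi*x)/(pi*x), the normalized sinc appearing in (1.2).

def ws_interpolate(t, samples, k, tau):
    """f(t) = sum_k f(k*tau) * sinc((t - k*tau)/tau), truncated to the given k."""
    return np.sinc((t[:, None] - k[None, :] * tau) / tau) @ samples

# f(t) = sinc(t)^2 is band-limited with bandwidth 2*pi rad/s, so any
# sampling interval tau <= pi/(2*pi) = 1/2 suffices.
f = lambda t: np.sinc(t) ** 2
tau = 0.5
k = np.arange(-400, 401)                # truncation of the infinite series
t = np.linspace(-2.0, 2.0, 81)
f_rec = ws_interpolate(t, f(k * tau), k, tau)
max_err = np.max(np.abs(f_rec - f(t)))  # only a tiny truncation error remains
```

With noisy samples y_k = f(kτ) + ε_k plugged into the same formula, the interpolant passes through the noise, which is exactly the breakdown the smoothed estimate discussed below is designed to avoid.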
An interesting account of generalized sampling theorems for noisy data and their links to learning theory has been given in [38]. It should be noted, however, that the methodology presented in [38], although very general, is confined to independent errors. The data observation model in (1.3) has been used in these contributions and is also adopted in this paper. In Section II, we

0018-9448/$25.00 © 2007 IEEE


give a detailed discussion of noise models with various correlation structures.

Among many possible reconstruction algorithms, we shall utilize the post-filtering correction of the WS series introduced in [33] and further examined in [32], [34], [35]. Hence, throughout the paper, the following estimate of f(t) will be employed:

  f̂(t) = τ Σ_{k=−n}^{n} y_k K_Ω(t − kτ),   (1.4)

where

  K_Ω(t) = sin(Ωt)/(πt).   (1.5)

The kernel K_Ω is known to reproduce the space BL(Ω) but has slow O(|t|⁻¹) decay and cannot be very practical in numerical implementations of the reconstruction algorithm. Numerous modifications of the above kernel are available and can be found in [25], [42], [32]–[35]. It has been shown that such kernels are able to improve the pointwise bias of the corresponding reconstruction algorithm [33], but they do not offer any advantages over the estimate in (1.4) in terms of variance reduction. On the contrary, it has been demonstrated in [33] that the estimate utilizing the fast-decaying kernels has larger variance than that of the standard estimate in (1.4). Hence, we concentrate exclusively on the post-filtering method, since it is the most representative technique in view of its simplicity and the aforementioned properties. Furthermore, the obtained results can easily be extended to other reconstruction methods examined in [31]–[36]. Indeed, the prime goal of this work is to evaluate the impact of various types of dependent errors on the choice of the sampling rate and on the accuracy of the reconstruction method. These results seem to be universal in the sense that other possible reconstruction algorithms will exhibit similar, if not identical, behavior. In fact, we show that in many special cases our rates agree with known optimal minimax rates obtained in the signal processing and statistical literature.

The issue of sampling representations analogous to (1.2) for non-band-limited signals is much more delicate. Generalized sampling theorems exist for some specific subspaces of L₂(ℝ). In particular, they exist for the so-called shift-invariant subspaces of L₂(ℝ); see [39]–[41] for a discussion of the theory of sampling theorems in shift-invariant subspaces. In this paper, we approximate non-band-limited signals by using the reproducing kernel in (1.5) with increasing Ω, such that Ω → ∞ with a certain rate. In this case, our algorithm requires not only an optimal choice of the sampling period τ but also of the parameter Ω.

Our asymptotic analysis will mostly focus on the stochastic part of the reconstruction error. Concerning the deterministic part (bias) of the error, we require assumptions on the decay and the smoothness of the underlying signals. If f ∈ BL(Ω), then it is known that f is an analytic function and f(t) → 0 as |t| → ∞. Therefore, we only need an assumption on the decay of f(t) at infinity, and this is described by the following condition.

Assumption 1: Let f ∈ BL(Ω). There exist constants c > 0, t₀ > 0, and γ > 1/2 such that

  |f(t)| ≤ c|t|^{−γ}, |t| ≥ t₀.

Let us note that Assumption 1 can also be expressed in terms of the number of existing derivatives of the Fourier transform F on [−Ω, Ω].

In the case of non-band-limited signals, we need both information about the decay of f(t) at infinity and some smoothness conditions on f. The first requirement is similar to Assumption 1 and has the following form.

Assumption 2: Let f ∈ L₂(ℝ). There exist constants c > 0, t₀ > 0, and γ > 1/2 such that

  |f(t)| ≤ c|t|^{−γ}, |t| ≥ t₀.

On the other hand, the smoothness of f is described by the following condition.

Assumption 3: Let f ∈ L₂(ℝ), with all derivatives of order s, 1 ≤ s ≤ r, being bounded and in L₂(ℝ).

Assumption 3 can also be expressed in terms of the behavior of F(ω) at infinity. In fact, under Assumption 3, F(ω) decays as |ω|^{−r} for |ω| → ∞.

We assess the accuracy of our estimate by the pointwise mean squared error

  MSE(t) = E(f̂(t) − f(t))²   (1.6)

and by its global counterpart, the mean integrated squared error

  MISE = E ∫_{−∞}^{∞} (f̂(t) − f(t))² dt.   (1.7)

We observe that the global and local properties are quite different, and in both cases we establish the rates. The rates are obtained for a wide class of dependent noise processes (see Section II) and are believed to be close to the best possible rates. This includes both short- and long-range-dependent errors, as well as a noise model depending on the sampling rate. The long-range-dependent case corresponds to processes for which the covariance sequence is not absolutely summable. There has recently been growing interest in the theory and practical aspects of long-memory noise processes. In fact, such processes have been empirically observed in a wide range of natural processes, communication systems, and signal processing; see [1], [8], [11], [24], [26] and the references cited therein.

We obtain three types of convergence rates. First, we conclude that the rate for short-range-dependent error processes is the same as for independent noise. In the case of long-range dependence, the rates are substantially slower. Further deterioration of the rates is observed for noise processes with a correlation function depending on the sampling rate.

Furthermore, the limit distribution of the global L₂ distance is established. This limit theorem is found to be useful for testing hypotheses about a parametric form of a signal [4]. In order to capture the shape of the signal, we measure the global L₂ distance between f̂ and the signal that is expected under the null hypothesis. If the distance is sufficiently small, then there is evidence that the null hypothesis is true; otherwise it is rejected.
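A minimal sketch of the post-filtered series (1.4) with the reproducing kernel (1.5) follows. The function name `estimate` and all numeric choices are hypothetical; the test signal, oversampling factor, and noise level are chosen only to illustrate that, unlike raw interpolation, the estimate remains well behaved on noisy data.

```python
import numpy as np

# Sketch of the post-filtered estimate (1.4) with the kernel (1.5):
# K_Omega(t) = sin(Omega*t)/(pi*t) = (Omega/pi) * sinc(Omega*t/pi).
rng = np.random.default_rng(0)

def estimate(t, y, k, tau, omega):
    """f_hat(t) = tau * sum_{|k|<=n} y_k * K_Omega(t - k*tau)."""
    K = (omega / np.pi) * np.sinc(omega * (t[:, None] - k[None, :] * tau) / np.pi)
    return tau * (K @ y)

f = lambda t: np.sinc(t) ** 2      # band-limited, bandwidth 2*pi rad/s
omega = 3 * np.pi                  # Omega at least the signal bandwidth
tau = 0.25                         # oversampling: tau * omega = 0.75*pi < pi
n = 600
k = np.arange(-n, n + 1)
t = np.linspace(-2.0, 2.0, 81)

clean = estimate(t, f(k * tau), k, tau, omega)   # reproduces f (no noise)
noisy = estimate(t, f(k * tau) + 0.1 * rng.standard_normal(k.size), k, tau, omega)
repro_err = np.max(np.abs(clean - f(t)))
```

On noiseless samples the estimate reproduces the band-limited signal up to series truncation, which reflects the reproducing property of K_Ω; on noisy samples the deviation stays of the order of the (smoothed) noise level rather than interpolating every error.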
The advantage of our approach compared to the classical detection theory is that in the case of rejecting the null hypothesis we can display our estimate and obtain a great deal of information about the


shape of the underlying signal. Such a nonparametric approach to the problem of testing the fit of a parametric signal model has rarely been addressed in the signal processing literature. The first important result in this direction was obtained in [4], where independent data were considered. Our paper gives an extension of these results to dependent errors.

The paper is organized as follows. Section II concerns the structure of the noise process used in the paper. Sections III and IV give our basic results on the accuracy of the reconstruction method. Hence, Section III describes the global L₂ error behavior of the estimate, whereas Section IV is concerned with the pointwise properties. These sections are divided into subsections dealing with the aforementioned three types of noise models. In all situations, we give rates for both band-limited and non-band-limited signals. Section V provides further insight into the accuracy of the reconstruction method by establishing its limit distribution. Here our derivations rely heavily on the existing theory of the asymptotic behavior of quadratic forms of stationary sequences. We generalize this theory to our case and find sufficient conditions for the limiting distribution of the error of our estimate. The established central limit theorem allows us to design a lack-of-fit test for verifying a parametric assumption on a signal. Some more involved technical derivations are deferred to the Appendix.

II. STRUCTURE OF NOISE PROCESS: SHORT-RANGE- AND LONG-RANGE-DEPENDENT ERRORS

There has recently been much interest in the examination of various statistical models in the case when the classical white noise assumption is not valid. This seems to be the case also in the problem of signal sampling and recovery.
In fact, the presence of dependent errors is typical in a number of applications related to signal processing and communication engineering [1], [8], [11], [24], [26], where a signal is transmitted through multipath random environments. The aggregation effect of such random environments generates a noise process which is not only correlated but also exhibits the so-called long-range dependence [1], [8], [11], [17], [24], [26]. In particular, this takes place in wireless networks, where the multipath random environment yields a fading effect resulting in dependent noise with a long-memory component. Nevertheless, the noise structure is independent of the sampling rate of the transmitted signal. Thus, the correlation function r_τ(j) of the noise process in (1.3) is independent of τ.

On the other hand, the model in (1.3) can be viewed as the sampling process of the analog signal f(t) submerged in a continuous-time colored process ξ(t), such that ε_k = ξ(kτ). In this case, the correlation function of ε_k must depend on τ in such a way that ε_k becomes more dependent with higher oversampling, i.e., when τ gets smaller. This situation may appear during the actual sampling process of a signal embedded in a continuous stationary stochastic process.

The above discussion motivates the two types of dependence models examined in this paper. In the first model, we assume that

  r_τ(j) = r(j) for all τ,   (2.1)

i.e., the covariance function does not depend on the sampling rate.

Regardless of whether the covariance function depends on the sampling rate or not, there are two distinct categories of noise processes. The first one is characterized by the following condition and will be called short-range dependent (SRD):

  Σ_{j=−∞}^{∞} |r(j)| < ∞.   (2.2)

On the other hand, the second class is characterized by the following requirement and is named the class of long-range dependent (LRD) processes:

  Σ_{j=−∞}^{∞} |r(j)| = ∞.   (2.3)

In the LRD case, we wish to characterize the decay of the covariance function in a more specific way, and this is given in the assumption below. Here we use the concept of a slowly varying function: a positive function L is slowly varying at infinity if L(tx)/L(x) → 1 as x → ∞ for every t > 0.

Assumption 4: Let {ε_k} be a weakly stationary stochastic process with E(ε_k) = 0 and var(ε_k) = σ² < ∞. Let

  r(j) = j^{−α} L(j), 0 < α < 1,

where L is a slowly varying function at infinity.

An important class of stationary models which can satisfy both (2.2) and (2.3) is the class of linear processes, defined as follows:

  ε_k = Σ_{j=0}^{∞} a_j ξ_{k−j},   (2.4)

where {ξ_j} is a sequence of independent and identically distributed (i.i.d.) random variables with E(ξ_j) = 0, var(ξ_j) = σ² < ∞, and Σ_{j=0}^{∞} a_j² < ∞. Since the correlation function of this process is given by

  r(k) = σ² Σ_{j=0}^{∞} a_j a_{j+k},   (2.5)

the decay of r(k) is entirely controlled by the sequence {a_j}. In fact, one can show that

  Σ_{k=−∞}^{∞} |r(k)| ≤ σ² (Σ_{j=0}^{∞} |a_j|)²,   (2.6)

and therefore the linear process is SRD if

  Σ_{j=0}^{∞} |a_j| < ∞.   (2.7)

In particular, this condition is satisfied by autoregressive moving average (ARMA) processes whose autoregressive polynomial has no zeros on the unit circle. The precise characterization of the decay of r(k) in terms of the decay of {a_j} is given in the
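A quick numerical sketch of (2.4)–(2.7), using a geometric impulse response a_j = ρʲ (an AR(1)-type process) chosen purely for illustration, checks the covariance formula (2.5) against its closed form and the SRD bound (2.6):

```python
import numpy as np

# Sketch: covariance (2.5) of the linear process (2.4) with a_j = rho**j,
# compared with the closed form r(k) = sigma^2 * rho^k / (1 - rho^2).
rho, sigma2 = 0.6, 1.0
a = rho ** np.arange(4000)          # absolutely summable -> SRD by (2.7)

def r(k):
    """r(k) = sigma^2 * sum_j a_j a_{j+k}; cf. (2.5), truncated numerically."""
    return sigma2 * np.dot(a[: a.size - k], a[k:])

closed = lambda k: sigma2 * rho ** k / (1 - rho ** 2)
errs = [abs(r(k) - closed(k)) for k in range(5)]

# SRD check (2.6): sum over k of |r(k)| is bounded by sigma^2 (sum_j |a_j|)^2
total = r(0) + 2 * sum(r(k) for k in range(1, 200))
bound = sigma2 * a.sum() ** 2
```

For nonnegative a_j the bound (2.6) is attained in the limit, which the truncated sums reflect.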


following lemma. Here and throughout the whole paper, a_n ∼ b_n means that a_n/b_n → 1 as n → ∞.

Lemma 2.1: Let {ε_k} be a linear process defined in (2.4) with

  a_j ∼ j^{−β} L(j), 1/2 < β < 1,

where L is a slowly varying function at infinity. Then we can easily obtain that

  r(k) ∼ k^{−(2β−1)} L̃(k),   (2.8)

where L̃ is another slowly varying function at infinity.

The proof of Lemma 2.1 can be found in [3] if L is constant and in [28] for the general case. Clearly, for 1/2 < β < 1, the linear process is LRD. In particular, this takes place for fractional ARIMA processes whose autoregressive polynomial has no zeros on the unit circle. Other important examples of LRD processes include fractional Gaussian noise and linear autoregressive conditional heteroskedasticity (ARCH) models [2], [26].

The concept of short- and long-memory processes can also be defined in the frequency domain. In fact, let

  g(λ) = (2π)⁻¹ Σ_{k=−∞}^{∞} r(k) e^{−ikλ}

be the spectral density of {ε_k}. Then it is seen that g(λ) → ∞ as λ → 0 corresponds to property (2.3). The precise behavior of g(λ) at λ = 0, implied by Assumption 4, is given by the following formula:

  g(λ) ∼ c(α) λ^{α−1} L(1/λ) as λ → 0⁺,   (2.9)

where c(α) = π⁻¹ Γ(1−α) sin(απ/2) and Γ is the Gamma function. The formula in (2.9) can easily be inferred from the general theory of the asymptotic behavior of Fourier coefficients [5]; see also [18] for a recent discussion of the controversy concerning this issue. Similar characterizations as in (2.9) can be given for the cases 0 < g(0) < ∞ and g(0) = 0. If 0 < g(0) < ∞, then we have short-memory processes, whereas g(0) = 0 is often called antipersistence, a rare type of process met in practice.

In the second model, the noise process {ε_k} is obtained by sampling a continuous-time stationary process {ξ(t)} with covariance function ρ(t), i.e., ε_k = ξ(kτ). In this case, we will use the following model of dependence of the covariance function on the sampling rate.

Assumption 5: Let {ξ(t)} be a weakly stationary stochastic process with E ξ(t) = 0 and var ξ(t) = σ² < ∞, and let ε_k = ξ(kτ). Let, moreover, the covariance function depend on τ in such a way that

  r_τ(j) = ρ(j τ β_τ),

where ρ is nonincreasing and integrable on [0, ∞) and β_τ → ∞ as τ → 0.

The factor β_τ is a measure of the strength of the dependence of the noise process. In fact, if β_τ increases fast enough to infinity, e.g., β_τ = τ⁻¹, then the noise process is almost independent. On the contrary, if β_τ increases too slowly, the process {ε_k} may exhibit long-range dependence. Indeed, to see the influence of β_τ on the behavior of the covariance sum, note that, since ρ is nonincreasing on [0, ∞) and ∫₀^∞ ρ(t) dt exists, we have

  Σ_{j=1}^{n} ρ(jτβ_τ) ≤ (τβ_τ)⁻¹ ∫₀^{nτβ_τ} ρ(t) dt.   (2.10)

Allowing n → ∞ along with τβ_τ → 0, we have proven that

  Σ_{j=1}^{∞} ρ(jτβ_τ) ∼ (τβ_τ)⁻¹ ∫₀^{∞} ρ(t) dt.   (2.11)
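The Riemann-sum asymptotics behind (2.11) can be checked numerically; the covariance shape ρ(t) = e^{−t} below is an illustrative choice (any nonincreasing, integrable shape would do), with the product τβ_τ abbreviated to a single step size:

```python
import numpy as np

# Numerical check of (2.11): for the nonincreasing, integrable shape
# rho(t) = exp(-t), the sum over the grid behaves like
# step^{-1} * integral_0^inf rho(t) dt = step^{-1} as step -> 0.
for step in (0.1, 0.01, 0.001):
    s = np.exp(-step * np.arange(1, int(50 / step))).sum()  # sum_j rho(j*step)
    print(step, step * s)  # approaches integral_0^inf e^{-t} dt = 1
```

The product step·s converges to the integral, so the covariance sum indeed blows up at the rate (τβ_τ)⁻¹.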

Thus, the sum of the covariances of {ε_k} may be unbounded if τβ_τ → 0, i.e., the property in (2.3) holds. It is also worth noting that if τβ_τ → c for some 0 < c < ∞, then r_τ(j) → ρ(jc) and we obtain the first model for the noise process. On the contrary, if β_τ remains bounded, then the covariance sum grows as fast as τ⁻¹ and no consistency can be expected in this case.

Our asymptotic results will be given for both aforementioned noise models. It is not surprising that the obtained rates depend distinctly on whether we are dealing with noise processes satisfying (2.2) (SRD) or with those meeting condition (2.3) (LRD). Furthermore, the dependence of the covariance function of {ε_k} on the sampling rate (quantified by Assumption 5) yields an additional reduction of the accuracy of the reconstruction formula.

III. ACCURACY ANALYSIS—L₂ CONVERGENCE

We start our discussion of the global properties of f̂ with the standard observation that the reconstruction error can be decomposed into variance and bias components, i.e., we have

  MISE = VAR + BIAS,

where

  VAR = E ∫_{−∞}^{∞} (f̂(t) − E f̂(t))² dt   (3.1)

and

  BIAS = ∫_{−∞}^{∞} (E f̂(t) − f(t))² dt.   (3.2)

It is clear that only VAR is influenced by the correlation structure of the noise process, and it will be evaluated under the proposed noise models. On the other hand, the bias term has already been examined in [34], and we summarize those results in the following two lemmas. The first one describes the bias for signals of the BL(Ω) class, whereas the second one deals with non-band-limited signals satisfying Assumptions 2 and 3.


Lemma 3.1: Let Assumption 1 be satisfied. Then we have

  BIAS ≤ c₁ (nτ)^{−(2γ−1)},

where the constant c₁ depends only on γ and the L₂ norm of f.

Lemma 3.1 reveals that BIAS → 0 provided that nτ → ∞. This is satisfied by all the sampling strategies employed in this paper.

Lemma 3.2: Let f ∈ L₂(ℝ) and let Assumptions 2 and 3 be met. Then we have

  BIAS ≤ c₁ (nτ)^{−(2γ−1)} + c₂ Ω^{−2r},

where the constants c₁ and c₂ depend only on the tail parameter γ, the smoothness factor r, and the L₂ norm of f.

It should be noted that in Lemma 3.2 the parameter Ω depends on n such that Ω → ∞ with a certain rate. In fact, in order to have BIAS → 0 in Lemma 3.2, we need the following constraints for n, τ, and Ω: nτ → ∞ and Ω → ∞ as n → ∞. Thus, for τ → 0 and Ω → ∞, we also need the restriction τΩ ≤ π. This defines a triangular region in the (τ, Ω⁻¹)-plane.

We are now in a position to evaluate the term VAR. To do so, let us first observe that the Fourier transform of K_Ω(· − kτ) is given by e^{−iωkτ} on [−Ω, Ω] and zero elsewhere. Then, due to Parseval's identity, we obtain

  ∫_{−∞}^{∞} K_Ω(t − kτ) K_Ω(t − lτ) dt = K_Ω((k − l)τ),

and consequently

  VAR = τ² Σ_{k=−n}^{n} Σ_{l=−n}^{n} r_τ(k − l) K_Ω((k − l)τ).   (3.3)

Due to the stationarity of {ε_k} and the evenness of the covariance function, we can readily obtain the following equivalent expression for the integrated variance:

  VAR = (2n+1) τ² { σ² Ω/π + 2 Σ_{j=1}^{2n} (1 − j/(2n+1)) r_τ(j) sin(Ωτj)/(πτj) },   (3.4)

where we used the elementary fact that K_Ω(0) = Ω/π.

This explicit formula will be essential for bounding VAR for various dependent noise processes. It is also worth noting that if τΩ = π, then the second term in (3.4) is equal to zero. Nevertheless, in this paper we have to use the oversampling strategy, and thus we always have τΩ < π. In Sections III-A–C, we examine the three separate noise cases introduced in Section II, namely, the SRD and LRD processes and the noise model with the covariance function depending on the sampling rate; see Assumption 5.
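The passage from a stationary double sum to a Cesàro-weighted single sum, the algebraic step behind the variance expression above, can be verified numerically. The AR(1)-type covariance and all parameter values below are illustrative:

```python
import numpy as np

# Sketch: for a stationary covariance r and kernel values K(j*tau), the
# double sum  tau^2 * sum_{k,l=-n}^{n} r(k-l) K((k-l)*tau)  equals the
# Cesaro-weighted single sum
# (2n+1) tau^2 [ r(0)K(0) + 2 sum_{j=1}^{2n} (1 - j/(2n+1)) r(j) K(j*tau) ].
n, tau, omega, rho = 50, 0.2, 2.0, 0.6
r = lambda j: rho ** abs(j)                                  # AR(1)-type covariance
K = lambda t: (omega / np.pi) * np.sinc(omega * t / np.pi)   # sin(omega*t)/(pi*t)

idx = np.arange(-n, n + 1)
double = tau ** 2 * sum(r(k - l) * K((k - l) * tau) for k in idx for l in idx)
j = np.arange(1, 2 * n + 1)
single = (2 * n + 1) * tau ** 2 * (
    r(0) * K(0.0) + 2 * np.sum((1 - j / (2 * n + 1)) * r(j) * K(j * tau))
)
```

The two expressions agree to machine precision, since grouping the double sum by the lag k − l produces exactly the triangular (Cesàro) weights.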

A. Mean Integrated Squared Error (MISE) With SRD Noise

Let us now consider the general class of SRD processes characterized by condition (2.2) and falling into the first model discussed in Section II, i.e., when the covariance function r(j) is independent of the sampling period τ. By virtue of (3.4) and condition (2.2), we can readily obtain the following upper bound for VAR:

  VAR ≤ (2n+1) τ² (Ω/π) Σ_{j=−∞}^{∞} |r(j)|.   (3.5)

On the other hand, an asymptotic formula for VAR is given in the following lemma.

Lemma 3.3: Suppose that the noise process {ε_k} satisfies condition (2.2). Then, as n → ∞ and Ωτ → 0, we have

  VAR ∼ (2n+1) τ² (Ω/π) S, where S = Σ_{j=−∞}^{∞} r(j).

The proof of this fact results from the regularity of the Cesàro summation method and the fact that sin(Ωτj)/(πτj) → Ω/π as Ωτ → 0. The asymptotic formula given in Lemma 3.3 reveals that the reconstruction error can be reduced for negatively correlated data, i.e., when Σ_{j≠0} r(j) < 0. This phenomenon will be seen to hold for any finite sample size in the case of the noise process discussed at the end of this section.

The following theorem gives the rate at which MISE → 0 for both band-limited and non-band-limited signals.

Theorem 3.1: Let a, b be positive constants and assume that the noise process {ε_k} satisfies condition (2.2).
(a) Let f ∈ BL(Ω) and satisfy Assumption 1 with γ > 1/2. Then for

  τ = a n^{−2γ/(2γ+1)}   (3.6)

we have

  MISE = O(n^{−(2γ−1)/(2γ+1)}).   (3.7)

(b) Let f ∈ L₂(ℝ) and satisfy Assumptions 2 and 3. Then for

  τ = a n^{−2γ/(2γ+1)}, Ω = b n^{(2γ−1)/((2γ+1)(2r+1))},   (3.8)

we have

  MISE = O(n^{−2r(2γ−1)/((2γ+1)(2r+1))}).   (3.9)


It can easily be derived that the constant a in (3.6) can be specified as follows:

  a = ( (2γ−1) c₁ π / (4ΩS) )^{1/(2γ+1)},

where S = Σ_{j=−∞}^{∞} r(j) and c₁ is the constant of Lemma 3.1. Hence, the effect of the correlation sum on the optimized choice of the sampling rate is easily seen. Indeed, if Σ_{j≠0} r(j) > 0, then we should choose the sampling period smaller than in the case of uncorrelated noise. Conversely, if Σ_{j≠0} r(j) < 0, then a larger choice of τ would be preferable. It is also worth noting that in the non-band-limited case, the optimized sampling rate is independent of the smoothness of f and decreases with the correlation sum S. On the other hand, the optimized resolution degree Ω depends both on the tail parameter γ and the smoothness factor r, and it can easily be shown that its optimal choice is also affected by the correlation sum.

The rate of convergence obtained in Theorem 3.1 seems to be close to the optimal one. In fact, if f is time limited (this corresponds to taking a very large γ), we obtain in (3.9) the rate O(n^{−2r/(2r+1)}) with the choices τ ∼ n⁻¹ and Ω ∼ n^{1/(2r+1)}. This rate is known [13] to be the best possible for any nonparametric estimate of a signal belonging to a class of compactly supported functions satisfying Assumption 3.

An important special case of SRD noise is the linear process defined in (2.4) that satisfies condition (2.7). In this case, the upper bound and the asymptotic constant for VAR can be expressed in terms of the impulse response sequence {a_j}. In fact, recalling (2.6), we can find that the bound in (3.5) is given by

  VAR ≤ (2n+1) τ² (Ω/π) σ² (Σ_{j=0}^{∞} |a_j|)².   (3.10)

The asymptotic constant appearing in Lemma 3.3 can also be given in this case by applying Fubini's theorem. In fact, we can easily obtain the following asymptotic formula for VAR in the case of linear noise processes of the SRD type:

  VAR ∼ (2n+1) τ² (Ω/π) σ² (Σ_{j=0}^{∞} a_j)².   (3.11)

To get further insight into the behavior of the correlation term appearing in (3.4), let us consider the autoregressive noise process ε_k = ρ ε_{k−1} + ξ_k, |ρ| < 1, where {ξ_k} is a sequence of i.i.d. random variables with E(ξ_k) = 0 and var(ξ_k) = σ². Noting that in this case r(j) = σ² ρ^{j}/(1 − ρ²), j ≥ 0, let us consider the following formula (it is assumed that σ²/(1 − ρ²) = 1):

  A_n(Ω, τ) = 2 Σ_{j=1}^{2n} (1 − j/(2n+1)) ρ^{j} sin(Ωτj)/(πτj).

Fig. 1. A_n(Ω, τ) for AR(1) noise versus Ω, ρ = 0.1, 0.55, 0.95; n = 125.

Fig. 2. A_n(Ω, τ) for AR(1) noise versus τ, ρ = −0.6, 0.6; n = 125.

Note that A_n(Ω, τ) represents the principal part of VAR describing the correlation structure of the noise process; see formula (3.4). Observe also that the correlation sum appearing in Lemma 3.3 is given here by S = (1 + ρ)/(1 − ρ).

In Fig. 1, we depict A_n(Ω, τ) as a function of Ω for various values of ρ and fixed τ. It can be seen that positively correlated noise reduces the estimate accuracy, whereas negatively correlated data may lower the error below that for uncorrelated noise. In fact, the factor A_n(Ω, τ) is negative for all Ω when ρ < 0. Furthermore, higher oversampling (smaller τ) makes A_n(Ω, τ) larger. If, however, ρ < 0, then a smaller τ helps to reduce A_n(Ω, τ). In Fig. 2, we show how A_n(Ω, τ) depends on τ for fixed values of ρ. Again it is seen that for negatively correlated data a smaller sampling period helps to reduce the reconstruction error. It should be noted, however, that in practice positively correlated data are encountered much more often.

B. MISE With LRD Noise

Let us now turn to the LRD case. At first, we consider the noise model in which the covariance function is independent of τ, i.e., the noise model characterized by Assumption 4. The following lemma gives the asymptotic behavior of VAR for long-memory noise processes of this kind.


Lemma 3.4: Let Assumption 4 be met. Then, for Ωτ → 0 and nΩτ → ∞, we have

  VAR ∼ c_α (2n+1) τ^{1+α} Ω^{α} L̃(1/(Ωτ)),   (3.12)

where L̃ is a suitably selected slowly varying function and c_α = 2Γ(1−α) sin(απ/2)/(απ).

Proof: We begin with the identity in (3.4) and evaluate its second term. The Cesàro weights can be replaced by one at the cost of a remainder which, by simple algebra, is negligibly small, since the sum effectively concentrates on lags of order (Ωτ)⁻¹, which is of smaller order than n by assumption. For the main term, we use the fact that

  Σ_{j=1}^{∞} j^{−(1+α)} sin(jx) ∼ x^{α} ∫₀^{∞} u^{−(1+α)} sin(u) du = x^{α} Γ(1−α) sin(απ/2)/α as x → 0⁺,   (3.14)

applied with x = Ωτ, together with the fact that L(j)/L(1/(Ωτ)) → 1 uniformly on any compact interval of the rescaled lag variable. This yields the formula in (3.12) and completes the proof.

Combining Lemma 3.4 with Lemma 3.1 and Lemma 3.2, and then optimizing the resulting formulas with respect to τ and Ω, we can derive the following MISE rates in the case of LRD data.

Theorem 3.2: Let a, b be positive constants and assume that the noise process {ε_k} obeys Assumption 4 with 0 < α < 1 and L(u) → L₀ as u → ∞.
(a) Let f satisfy Assumption 1. Then selecting

  τ = a n^{−2γ/(2γ+α)}

we obtain

  MISE = O(n^{−α(2γ−1)/(2γ+α)}).

(b) Let f ∈ L₂(ℝ) and satisfy Assumptions 2 and 3. Then selecting

  τ = a n^{−2γ/(2γ+α)}, Ω = b n^{α(2γ−1)/((2γ+α)(2r+α))},   (3.15)

we have

  MISE = O(n^{−2rα(2γ−1)/((2γ+α)(2r+α))}).   (3.16)

The result obtained in Theorem 3.2 reveals that if f is a time-limited signal, then with the selection τ ∼ n⁻¹ and Ω ∼ n^{α/(2r+α)} we obtain the rate

  MISE = O(n^{−2rα/(2r+α)}).   (3.17)

This rate is known to be the optimal minimax rate for long-range-dependent errors [12], [19], [29].

In Theorem 3.2 we assume that the slowly varying factor converges to a finite limit. Nevertheless, the general case can be treated in a very similar way, yielding identical rates with an additional slowly varying factor.

It is also worth noting that, contrary to the SRD errors (see Theorem 3.1(b)), the optimized smoothing parameters for non-band-limited signals depend jointly on the tail index γ, the smoothness parameter r, and the dependence exponent α. Furthermore, the sampling period is required to be smaller for LRD data than for SRD data. It is also clear that there is an apparent deterioration of the rate for the LRD errors: the exponent in (3.16) ranges between 0 (for α → 0) and the SRD exponent in (3.9) (for α → 1).
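The source of the slower LRD rates is the growth of the weighted covariance sum: for r(j) ~ j^{−α} with 0 < α < 1, the Cesàro-weighted sum grows like n^{1−α} instead of converging. A short check (α = 0.5 is an illustrative choice):

```python
import numpy as np

# Sketch: for r(j) = j^{-alpha}, 0 < alpha < 1, the Cesaro-weighted sum
# 2 * sum_{j=1}^{2n} (1 - j/(2n+1)) j^{-alpha} grows like n^{1-alpha};
# the limiting ratio is 2 * 2^{1-alpha} / ((1-alpha)(2-alpha)).
alpha = 0.5
for n in (10**3, 10**4, 10**5):
    j = np.arange(1, 2 * n + 1)
    s = 2 * np.sum((1 - j / (2 * n + 1)) * j ** (-alpha))
    print(n, s / n ** (1 - alpha))   # stabilizes near 2*sqrt(2)/0.75 ~ 3.77
```

Under short-range dependence the same sum converges to a constant, which is why the SRD rates coincide with those for independent noise.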


The latter is the rate obtained for the SRD case as well as for independent errors. Finally, let us note that by recalling the result of Lemma 2.1 we can easily establish the rates for the linear noise process: we have only to replace the coefficient α in Theorem 3.2 by 2β−1, where β is the coefficient defining the rate of decay of the impulse response function {a_j} of the linear noise process. In particular, for 1/2 < β < 1 we obtain the LRD errors. This special case, in the context of time-limited signals, has been thoroughly examined in [19].

C. MISE With the τ-Dependent Correlation Function

In this subsection, we deal with the noise model satisfying Assumption 5. As we have noticed in the discussion below Assumption 5, the case τβ_τ → 0 as τ → 0 yields the LRD errors. Let us first examine the variance part of the reconstruction error.

Lemma 3.5: Under Assumption 5 with β_τ = τ^{−ν}, 0 < ν < 1, and Ωτ → 0 as τ → 0, we have

  VAR ≤ c (2n+1) τ^{1+ν} Ω.   (3.18)

Proof: Using (3.4) with r(j) replaced by ρ(jτβ_τ), and bounding sin(Ωτj)/(πτj) by Ω/π, we get for the dominating second term

  2(2n+1) τ² (Ω/π) Σ_{j=1}^{2n} ρ(jτβ_τ) ≤ 2(2n+1) τ² (Ω/π) (τβ_τ)⁻¹ ∫₀^{∞} ρ(t) dt = c (2n+1) τ^{1+ν} Ω

for all τ small enough, which gives the desired result.

Combining the above lemma with Lemmas 3.1 and 3.2, and then optimizing the resulting formulas with respect to τ and Ω, we can derive the following rates for the τ-dependent noise process.

Theorem 3.3: Let a, b be positive constants and assume that the noise process {ε_k} obeys Assumption 5 with β_τ = τ^{−ν} and 0 < ν < 1.
(a) Let f satisfy Assumption 1. Then selecting

  τ = a n^{−2γ/(2γ+ν)}

we obtain

  MISE = O(n^{−ν(2γ−1)/(2γ+ν)}).

(b) Let f ∈ L₂(ℝ) and satisfy Assumptions 2 and 3. Then selecting

  τ = a n^{−2γ/(2γ+ν)}, Ω = b n^{ν(2γ−1)/((2γ+ν)(2r+1))},   (3.19)

we obtain

  MISE = O(n^{−2rν(2γ−1)/((2γ+ν)(2r+1))}).   (3.20)

The proof of this theorem follows the same lines as that of Theorem 3.2.

It is worth noting that the rates in Theorem 3.3 can be arbitrarily slow if ν is approaching zero. In the extreme case ν = 0, there is no convergence of the estimate to the true signal at all. This corresponds to the strong dependence of the correlation function on the sampling rate, i.e., when r_τ(j) = ρ(jτ) with ρ being an integrable function. It remains an open problem how to tackle this type of noise process.

It is also expected that the rate obtained in Theorem 3.3 is slower than that in Theorem 3.2. In fact, in order to compare these rates, consider the case of band-limited signals and specify the parameter α defining the decay of the correlation function in Assumption 4 so that the partial correlation sums grow at the same speed for both noise models. By Assumption 4, the partial sum up to n terms is of order n^{1−α}, whereas for the τ-dependent noise process this sum is of order τ^{ν−1}. Choosing the optimal value of the sampling period as given in Theorem 3.3, we see that the two speeds agree for α = ν(2γ+1)/(2γ+ν), which is larger than ν. Since the rate in Theorem 3.2 improves as α grows, this leads to a faster rate in Theorem 3.2 than in Theorem 3.3; clearly, the latter rate is slower than the former one.

IV. ACCURACY ANALYSIS—POINTWISE CONVERGENCE

In this section, we examine the pointwise properties of the estimate f̂(t). It is worth noting that the pointwise behavior of the estimate is quite different from the global one, and the pointwise error analysis of sampling expansions has been commonly examined in the literature [27]. Let us begin with two lemmas describing the pointwise bias of f̂(t). The first result concerns the case when f ∈ BL(Ω).

Lemma 4.1: Let f satisfy Assumption 1. Then, for every fixed t and nτ → ∞, we have

  |E f̂(t) − f(t)| ≤ c (nτ)^{−γ},

where c is a positive constant independent of n and τ.


The proof of this result can be found in [35]. We should also note that as Let us consider the pointwise bias of when satis, i.e., when is a non-bandfies Assumptions 2, 3 with limited signal. This case has not been studied in the literature. Lemma 4.2: Let we have

Theorem 4.1: Let , be positive constants and assume that satisfies condition (2.2). the noise process . Then selecting (a) Let satisfy Assumption 1 with

we have for

satisfy Assumptions 2, 3. Then for (b) Let

some constants

,

,

satisfy Assumptions 2, 3. Then selecting

.

The proof of this lemma is given in the Appendix. The following lemma gives a universal bound for the pointwise variance of for an arbitrary covariance structure of the noise process. This is a crucial result for the future evaluation of the pointwise error, and its lengthy proof is deferred to the Appendix. Lemma 4.3: Let the noise process be stationary with the covariance function . Then we have

we have for

It is again seen that if is a time-limited signal satisfying Assumption 3, we obtain the optimal pointwise minimax rate , [13]. B. MSE With LRD Noise Combining Lemmas 4.1 and 4.2 with the result in (4.4) we can readily obtain the following pointwise rates for the LRD noise.

(4.1)

Remark 4.1: The constant appearing in the second term of (4.1) was obtained (see the proof of Lemma 4.3 in the Appendix) under the tacit assumption that . It is clear that this is no restriction on the rate of decay of . Corollary 4.1: Let . Then we get

Theorem 4.2: Let , , be positive constants and assume that the noise process satisfies Assumption 4 with and let as . (a) Let satisfy Assumption 1. Then selecting

we obtain for

, i.e., we have the SRD noise.

(4.2)

(b) Let

satisfy Assumptions 2, 3. Then selecting

On the other hand, for the LRD noise, i.e., when with some , we obtain (4.3)

we obtain for

The proof of (4.3) is given in the Appendix. Using the integral identity in (3.14) we can rewrite (4.3) in the following form: (4.4) where

.

A. Mean Squared Error (MSE) With SRD Noise Combining Lemmas 4.1 and 4.2 with the result in (4.2) we can readily obtain the following pointwise rates.

It is worth noting that the pointwise rate is faster than the global one. In fact, if, e.g., , , then the rate in Theorem 3.2 (a) is , whereas the rate in Theorem 4.2 (a) is . Furthermore, if is a time-limited signal satisfying Assumption 3, then we can recover the optimal pointwise minimax rate for the LRD noise errors, i.e., , [12], [29].
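As a point of reference for the estimate studied here, the plain (unfiltered) Whittaker–Shannon interpolation series can be sketched numerically. The helper below is a hypothetical illustration with a truncated index set and noise-free samples; it is not the authors' post-filtered estimate.

```python
import numpy as np

def ws_interpolate(t, sample_times, samples, tau):
    # Truncated Whittaker-Shannon series:
    # f_hat(t) = sum_k f(k*tau) * sinc((t - k*tau) / tau)
    return float(np.sum(samples * np.sinc((t - sample_times) / tau)))

tau = 0.5                                   # sampling period (oversampled)
sample_times = np.arange(-200, 201) * tau   # truncated index set
signal = np.sinc                            # band-limited test signal
samples = signal(sample_times)

# pointwise reconstruction error at an off-grid point
err = abs(ws_interpolate(0.3, sample_times, samples, tau) - signal(0.3))
```

With noise-free samples the only error left is the series truncation, which decays slowly; the post-filtering correction discussed in the text targets the additional variance contributed by dependent noise.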

PAWLAK AND STADTMÜLLER: SIGNAL SAMPLING AND RECOVERY UNDER DEPENDENT ERRORS

C. MSE With the -Dependent Correlation Function Similarly as in Section III-C, let us consider the most interesting case of Assumption 5, i.e., when as . As we have already argued, this leads to the long-memory errors case. First, we need a formula for corresponding to Assumption 5. By virtue of Lemma 4.3 and arguing as in the proof of Lemma 3.5 we obtain the following result. Lemma 4.4: Under Assumption 5 with as and we have


knowledge, the existing results in the field are mostly confined to linear processes with both short-range and long-range dependence behavior; see [14]–[16] and the references cited therein. Hence, let us consider the linear process defined in (2.4) with an i.i.d. random sequence having moments , , and . Let us recall, see (2.5), that the covariance function of is given by

and

Also, let be the Fourier transform of the impulse response sequence , i.e.,

Combining the above result with Lemmas 4.1 and 4.2 we can readily obtain the following pointwise rates for the dependent noise. Theorem 4.3: Let , be positive constants and assume that the noise process satisfies Assumption 5 with and , . (a) Let satisfy Assumption 1. Then selecting

It is known that the power spectral density of is given by . The main assumption in our derivation of the central limit theorem is the asymptotic behavior of at . Thus, we require that be continuous at or that for some as

(5.1)

we obtain for

(b) Let

satisfy Assumptions 2, 3. Then selecting

where is a slowly varying function at infinity, see [3]. It is worth noting that by virtue of Lemma 2.1 and (2.9), the required behavior of in (5.1) is implied by the following limit behavior of the impulse response function of the linear process: (5.2)

we obtain for

Analogously to the case of the global error, see the discussion below Theorem 4.3, we have no convergence if .
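The long-memory behavior discussed above can be checked numerically for a concrete linear-process noise model. The coefficient choice c_j = j^(-(1+gamma)/2) below is a hypothetical example (not the paper's general assumption); it yields a correlation function r(k) decaying like k^(-gamma), so the ratio r(2k)/r(k) approaches 2^(-gamma).

```python
import numpy as np

gamma = 0.4                       # hypothetical memory parameter, 0 < gamma < 1
B = 20000                         # truncation level for the impulse response
j = np.arange(1, B + 1)
c = j ** (-(1.0 + gamma) / 2.0)   # impulse response c_j ~ j^{-(1+gamma)/2}

def corr(k):
    # covariance of eps_t = sum_j c_j xi_{t-j} for unit-variance i.i.d. xi:
    # r(k) = sum_j c_j c_{j+k}
    return float(np.sum(c[:-k] * c[k:]))

# long memory: r(k) ~ C * k^{-gamma}, hence r(100)/r(50) is close to 2**-gamma
ratio = corr(100) / corr(50)
```

For short-memory coefficients (summable c_j) the same ratio would decay much faster, which is the SRD/LRD dichotomy driving the different convergence rates above.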

as and where is a slowly varying function which . Also, the correlation function of the is proportional to as . noise process is characterized by Hence, the assumption in (5.1) describes the LRD case. In the theorem below we also establish the limit distribution for SRD errors, i.e., when restriction (2.7) holds. class Let, for signals of the

V. LIMIT DISTRIBUTIONS AND LACK-OF-FIT TESTS In this section, we shall discuss the problem of limit distributions of our reconstruction method. We consider only the case of the error since it is a natural distance measure in the important problem of testing a parametric form of a signal. The problem of designing consistent nonparametric lack-of-fit tests for the functional form of a regression function has been extensively studied in the statistical literature, see [21], [10], and the references cited therein. On the other hand, this problem has been rarely addressed in the context of signal recovery and, as we have already mentioned, the first attempt to tackle this question was made in [4], where the noise process is assumed to be of the i.i.d. type. We extend the results of [4] to the case of correlated errors. Nevertheless, we only consider the linear noise process defined in (2.4) due to the fact that our technical developments rely heavily on the presently known theory of central limit theorems for quadratic forms of dependent processes. In fact, to our best

be the stochastic part of the integrated squared error. Let the following be the centered version of :

Let us observe that (5.3) Hence, it is seen that the statistic is expressed as the quadratic form of the noise process . This is a crucial observation for our further developments in establishing the central limit theorem for and then designing a nonparametric lack-of-fit test for verifying the functional form of a signal. Let be the standard normal random variable
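The centering of such a statistic rests on the standard fact that a quadratic form eps' A eps in zero-mean noise has expectation tr(A R), where R is the noise covariance matrix. A small Monte Carlo check of this identity follows; the weight matrix and the geometric covariance are hypothetical placeholders, not the paper's specific kernel.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
# hypothetical symmetric weights a_{jk} and a short-memory covariance r(|j-k|)
A = np.fromfunction(lambda j, k: 1.0 / (1.0 + abs(j - k)), (n, n))
R = np.fromfunction(lambda j, k: 0.5 ** abs(j - k), (n, n))

L = np.linalg.cholesky(R)
eps = rng.standard_normal((20000, n)) @ L.T   # correlated zero-mean noise draws
q = np.einsum("ij,jk,ik->i", eps, A, eps)     # quadratic form eps' A eps per draw

expected = np.trace(A @ R)                    # E[eps' A eps] = tr(A R)
```

Subtracting this expectation is exactly what turns the raw quadratic form into the centered statistic whose limit distribution is studied below.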

and let be its distribution function. Convergence in distribution is denoted by . The following theorem gives the central limit theorem for .

Theorem 5.1: Let be the linear noise process defined in (1.6) with . If such that , then we have

The preceding result forms the basis for designing a nonparametric lack-of-fit test. Hence, we wish to test the null hypothesis , where is a fixed band-limited signal satisfying Assumption 1. We will test using a statistic based on the estimate . At the first attempt one would use a natural test statistic which is the distance between and , i.e.,

(5.4) where

is given in (5.5) and (5.6) at the bottom of the page.

The proof of Theorem 5.1 is given in the Appendix. Remark 5.1: The expression in (5.5) gives a general formula for the asymptotic variance of . On the other hand, the first expression in (5.6) describes the behavior of the variance in the SRD case. Since

we can obtain the following central limit theorem for SRD errors:

Note, however, that under the null hypothesis we have , where the mapping is defined as follows:

Hence, we can consider the following test statistic as a tool for verifying the hypothesis :

Let us begin with the investigation of the behavior of when is true. Then clearly takes the folthe null hypothesis lowing form:

(5.7) Owing to Theorem 5.1 and the fact that we have

where

(5.10)

(5.8) where . As a special case of the above, let us consider the i.i.d. noise process, i.e., . Then and Theorem 5.1 reduces to Theorem 1 in [4]. Remark 5.2: Formula (5.6) yields the following central limit theorem for LRD errors: (5.9) where

,

and

is defined in (5.5), (5.6) and

see also (3.4). Let us now consider the case that the alternative hypothesis is true, i.e., that is a true signal but not . Then, assuming that we can distinguish between and , i.e., that , we have the decomposition for the test statistic shown in the equation at the bottom of the page. Note first that

.

(5.11)

(5.5)

if is continuous at
if meets (5.1) with

(5.6)

Consequently, we have

(5.12)


can be found in [2], [8], [11]. Particularly interesting for the problem of designing data-driven lack-of-fit tests are methods that allow one to estimate the memory parameters of the noise without requiring any preliminary estimator of the signal; see [23], [37] for an examination of such methods in the context of classical regression analysis. VI. CONCLUDING REMARKS

Let us choose the sampling period as follows: in the SRD case, in the LRD case. (5.13) Note that this choice ensures the validity of (5.10) and that . Hence, by (5.10) we can easily see that the first term in (5.12) is asymptotically normal and hence is stochastically bounded. By this fact and (5.11) we can see that the last term in (5.12) is smaller than the second one. Regarding the second term in (5.12), we observe that it tends to infinity since for the above selected . Hence, we have shown that tends to infinity under the alternative hypothesis. All the aforementioned considerations allow us to design a practical way of testing the hypothesis , where is a fixed band-limited signal satisfying Assumption 1. Indeed, for the selected confidence level we reject if with

(5.14) where is the upper quantile of the standard normal distribution. If the inequality in (5.14) is not valid, we accept the hypothesis . There are a number of computational issues related to the proposed test. First, we need to evaluate the test statistic . This can be done in the frequency domain since, due to Parseval's formula, we have

where is the Fourier transform of . The above integral can be easily evaluated by various numerical integration methods. Furthermore, the calculation of the test statistic requires the knowledge of , , and the noise correlation . The sampling interval can be selected according to (5.13). The tail parameter appearing in (5.13) can be found by applying linear regression analysis to the logarithmic transform of the absolute value of a preliminary data set . Then the slope of the linear fit can define an estimate of . The bandwidth is often known in many practical applications. Nevertheless, as has been noted [32]–[36], we only need to know an upper bound for . The formulas for as well as for in (5.14) require estimating the correlation . This is the most challenging problem, and we refer to [20] for a class of consistent difference-based methods for estimating the noise covariance in the classical nonparametric setting with SRD time series errors. In the LRD case we can use the asymptotic formulas for and established in Lemma 3.4 and Theorem 5.1, respectively. In both expressions we need to estimate the memory parameter . Current methods of estimating the memory parameter of an LRD time series
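Once the statistic and its centering and scale have been computed along the lines above, the decision rule in (5.14) is just a one-sided comparison of the studentized statistic with the upper-alpha normal quantile. The sketch below uses placeholder arguments for quantities that, in the text, must be estimated from the data:

```python
from statistics import NormalDist

def reject_h0(stat, center, scale, alpha=0.05):
    # Reject H0: f = f0 when the studentized statistic exceeds the
    # upper-alpha quantile of the standard normal law, as in (5.14)
    z = (stat - center) / scale
    return z > NormalDist().inv_cdf(1.0 - alpha)

# hypothetical numbers: a large normalized statistic triggers rejection
print(reject_h0(stat=3.1, center=0.0, scale=1.0))   # True
print(reject_h0(stat=1.0, center=0.0, scale=1.0))   # False
```

The one-sided form reflects the fact that, under the alternative, the statistic tends to infinity, so only large positive values are evidence against the null hypothesis.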

The problem of recovering both band-limited and non-band-limited signals in the presence of dependent errors has been examined. The rates for the global and pointwise estimation errors of the post-filtering reconstruction method originating from the WS interpolation series were evaluated. The rates are believed to be near-optimal since, in the case of time-limited signals, they agree with known minimax optimal rates. The limiting distribution of the global reconstruction error of the estimate was obtained. This result finds a direct application in the important problem of testing parametric assumptions on signals observed in the presence of noise. The central limit theorem was proved only for band-limited signals, but the extension to the non-band-limited case is rather straightforward. All the aforementioned convergence results were obtained under three different dependent noise models. This includes the SRD and LRD noise models, and the noise process whose covariance function depends on the sampling rate. We have observed that SRD errors have a minimal effect on the estimate accuracy, i.e., the rates are identical to those for independent noise. On the other hand, in the LRD case we have obtained an essential reduction of the estimate accuracy. In fact, for the -dependent noise model we may have an arbitrarily slow rate of convergence. It has been observed [9], [12] (in the context of classical regression analysis) that this curse of long-memory errors can be partially overcome if, instead of fixed sampling points, one uses random points. It is an interesting issue to incorporate this idea into the signal recovery problem by replacing the points by some random sampling scheme. We conjecture that recovery methods based on random sampling can have an improved rate of convergence in the case of LRD errors. In fact, the rate obtained in Theorem 4.2 is expected to be replaced by . Clearly, the latter is a faster rate; e.g., for , we obtain and , respectively.
The problem of establishing exact rates of signal recovery methods utilizing random sampling is left for future research. In many practical situations we sample a continuous-time stochastic process which can be decomposed into the deterministic trend and the covariance stationary residual , i.e., . In this case, the covariance function of the sampled process is of the form and, as we have already pointed out, our estimation method cannot converge to the true signal. The noise model described in Assumption 5 is an introductory proposition for dealing with this interesting problem. APPENDIX In this section we give proofs of the results presented in the paper. We start with the following two auxiliary facts.

Lemma A.1: Let then

,

. If

where

(A4) The proof of Lemma A.1 readily results from the fact that the set of functions forms an orthonormal basis in provided that . Lemma A.2: Let , we have

If, moreover,

. Then for all

(A5)

and

. It is worth noting that Let us first consider the term since and that therefore , i.e., we consider the range of those ’s which are small compared to . Then we observe that then the right-hand side of the above

inequality takes the following form:

.

Here is the norm. The proof of Lemma A.2 can be found in [34]. We now present the proofs of Lemmas 4.2 and 4.3. Due to the importance of Lemma 4.3, we start by proving it first; then we follow with the proof of Lemma 4.2. Proof of Lemma 4.3: Recalling the definition of the estimate we can write

(A6) The result in (A3) readily yields (A7) . On the other hand, by Lemma A.1 the second term in (A6) is of order as

(A1) First by splitting the summation and then by changing variables we can obtain the following equivalent formula for (A1)

(A8) , . , Let us first note that the Fourier transforms of are , , respectively. By this, (A8), and Parseval’s theorem we can conclude that the second term in (A6) is of order where

(A9) (A2) in (A2) can be easily evaluated by using Lemma The term A.1. Indeed, by denoting we have

(A3) where we used the fact that . is much more involved. Let The evaluation of the term into us first split the first summation in the definition of and two disjoint sets of indices, i.e., . This gives the following decomposition for :

Taking into account (A7) and (A9) we finally obtain (A10) in (A5). This requires some Let us now turn to the term more involved analysis. Let us start by splitting the set of admissible indices

into four disjoint subsets. Hence, we have

Let us now turn to the term we have


. It is clear that for

(A14) In like manner, we evaluate the term Fig. 3. Partition of the set of admissible indices I in the domain (v , l ). The white region corresponds to I , whereas the black one to I . The subsets ,I are depicted in the intermediate gray levels. I

This partition in the domain is graphically illustrated in Fig. 3. The proposed partition gives the following representation : for

(A15) Assuming, without loss of generality, that clude from (A12)–(A15) and (A11) that

we can con-

(A16)

(A11)

The proof of Lemma 4.3 is thus complete. Proof of Lemma 4.2: Arguing as in the proof of Theorem 1 in [35] we can write

where

In order to evaluate , let us assume, without loss of generality, that . We begin with . The set consists of two disjoint subsets and both give of indices the same upper bound. Hence, let us consider the subset . Therefore, for , we have

(A12) Regarding the term

The arguments used in the proof of Theorem 1 in [35] and Lemma A.2 yield

we get

(A13) where

is the ceiling function of .

(A17)

Similarly, we get

Let us consider the term

in (A17). Let us write

By the arguments used in the proof of Theorem 1 in [35] we obtain that

Regarding the term , let us observe that it represents the and the orthogonal projecpointwise distance between onto defined by the convolution operator tion of . To evaluate let us define

so that it is seen that

Since

, see [6] for similar arguments. Then

and

Let us now turn to the sum in the last term of (4.1). Similarly as above we have

for arbitrary small . This completes the proof of formula (4.3). Proof of Theorem 5.1: Consider the quantity

with given by

. The Fourier transform of

is

We follow the lines of the proof of Theorems 2 and 3 in [15] where the central limit theorem of the quadratic form with was established. The main difthe general sequence ference between that paper and our case is that our sequence depends on , since . We explain here which changes we must make in the proofs given in [15] in order to establish our result. in [15, p. 91] In the definition of the signed measure have to be replaced by the argument in the function , . Now we substitute this quantity by and then we get . This leads to the following new signed measure on :

therefore we obtain that

where

Consequently

denotes the Dirichlet kernel. Since as , we can still get Lemma 7.1 in [14] or Lemma 2 in [15],

i.e., Since

satisfies Assumption 3, we get

where is the Fourier transform of the proof of Lemma 4.2.

. This completes

Proof of (4.3): To proof the inequality in (4.3) let us first evaluate the sum in the second term of (4.1). To do so, let us and some large , the following sum: consider, for

for all bounded measurable functions . Now we may proceed as in [15] to approximate the quadratic form by the linear form for which one can show the central limit theorem in the standard way. Next, we can show . Then by denoting by that the fourth-order cumulant of we have (note we have a different normalization of the Fourier transforms)

which yields the desired result observing that and (see (4.3) ) differ by the sample size which is for . Moreby the factor in order to over, we have to multiply . get To get the first asymptotic expression in (5.6) we note that

provided that is continuous at using assumption (5.1) and the fact that

. On the other hand

for all , and then that , we can obtain the second formula in (5.6). Theorem 5.1 is thus proved.

REFERENCES

[1] J. Beran, Statistics for Long-Memory Processes. New York: Chapman and Hall, 1994.
[2] R. Bhansali and P. Kokoszka, “Estimation of the long-memory parameter: A review of recent developments and an extension,” in Proc. Symp. Inference for Stochastic Processes, I. V. Basawa, C. C. Heyde, and R. L. Taylor, Eds., 2001, IMS Lecture Notes, pp. 125–150.
[3] N. H. Bingham, C. M. Goldie, and J. L. Teugels, Regular Variation. Cambridge, U.K.: Cambridge Univ. Press, 1987.
[4] N. Bissantz, H. Holzmann, and A. Munk, “Testing parametric assumptions on band- or time-limited signals under noise,” IEEE Trans. Inf. Theory, vol. 51, no. 11, pp. 3796–3805, Nov. 2005.
[5] R. P. Boas, Integrability Theorems for Trigonometric Transforms. New York: Springer-Verlag, 1967.
[6] J. L. Brown, “On the error in reconstructing a nonbandlimited function by means of the bandpass sampling theorem,” J. Math. Anal. Applic., vol. 18, pp. 75–84, 1967.
[7] P. L. Butzer, W. Engels, and U. Scheben, “Magnitude of the truncation error in sampling expansions of band-limited signals,” IEEE Trans. Acoust., Speech, Signal Process., vol. ASSP-30, no. 6, pp. 906–912, Dec. 1982.
[8] O. Cappe, E. Moulines, J. C. Pesquet, A. Petropulu, and X. Yang, “Long-range dependence and heavy-tail modeling for teletraffic data,” IEEE Signal Process. Mag., vol. 19, pp. 14–27, 2002.
[9] S. Csörgő and J. Mielniczuk, “Nonparametric regression under long-range dependent normal errors,” Ann. Statist., vol. 23, pp. 1000–1014, 1995.
[10] H. Dette and A. Munk, “Validation of linear regression models,” Ann. Statist., vol. 26, pp. 778–800, 1998.
[11] P. Doukhan, G. Oppenheim, and M. Taqqu, Eds., Theory and Applications of Long-Range Dependence. Boston, MA: Birkhäuser, 2003.
[12] S. Efromovich, “How to overcome the curse of long-memory errors,” IEEE Trans. Inf. Theory, vol. 45, no. 5, pp. 1735–1741, Jul. 1999.
[13] S. Efromovich, Nonparametric Curve Estimation: Methods, Theory and Applications. New York: Springer-Verlag, 1999.
[14] R. Fox and M. Taqqu, “Limit theorems for quadratic forms in random variables having long-range dependence,” Probab. Theory Related Fields, vol. 74, pp. 213–240, 1987.
[15] L. Giraitis and D. Surgailis, “A central limit theorem for quadratic forms in strongly dependent linear variables and its application to asymptotical normality of Whittle’s estimate,” Probab. Theory Related Fields, vol. 86, pp. 87–104, 1990.
[16] L. Giraitis and M. Taqqu, “Convergence of normalized quadratic forms,” J. Statist. Planning and Inference, vol. 80, pp. 15–35, 1999.
[17] C. W. Granger, “Long memory relationships and the aggregation of dynamic models,” J. Econometrics, vol. 14, pp. 227–238, 1980.
[18] J. A. Gubner, “Theorems and fallacies in the theory of long-range-dependent processes,” IEEE Trans. Inf. Theory, vol. 51, no. 3, pp. 1234–1239, Mar. 2005.
[19] P. Hall and J. D. Hart, “Nonparametric regression with long-range dependence,” Stochastic Processes Their Applic., vol. 36, pp. 339–351, 1990.
[20] P. Hall and I. Van Keilegom, “Using difference-based methods for inference in nonparametric regression with time series errors,” J. Roy. Statist. Soc. B, vol. 65, pp. 443–456, 2003.
[21] W. Härdle and E. Mammen, “Comparing nonparametric versus parametric regression fits,” Ann. Statist., vol. 21, pp. 1926–1947, 1993.
[22] J. R. Higgins, Sampling Theory in Fourier and Signal Analysis. Oxford, U.K.: Clarendon, 1996.
[23] C. Hurvich, G. Lang, and P. Soulier, “Estimation of long memory in the presence of a smooth nonparametric trend,” J. Amer. Statist. Assoc., vol. 100, pp. 853–871, 2005.
[24] M. Keshner, “1/f noise,” Proc. IEEE, vol. 70, no. 3, pp. 212–218, Mar. 1982.
[25] A. J. Lee, “Approximate interpolation and the sampling theorem,” SIAM J. Appl. Math., vol. 32, pp. 731–744, 1977.
[26] B. B. Mandelbrot and J. W. Van Ness, “Fractional Brownian motions, fractional noises and applications,” SIAM Rev., vol. 10, pp. 422–437, 1968.
[27] R. J. Marks, Introduction to Shannon Sampling and Interpolation Theory. New York: Springer-Verlag, 1991.
[28] J. Mielniczuk, “Long- and short-range dependent sums of infinite-order moving averages and regression estimation,” Acta Sci. Math. (Szeged), vol. 63, pp. 301–316, 1997.
[29] J. Opsomer, Y. Wang, and Y. Yang, “Nonparametric regression with correlated errors,” Statist. Sci., vol. 16, pp. 134–153, 2001.
[30] M. Pawlak, “Signal sampling and recovery under dependent noise,” J. Sampling Theory in Signal and Image Processing, vol. 1, pp. 77–86, 2002.
[31] M. Pawlak and E. Rafajłowicz, “On restoring band-limited signals,” IEEE Trans. Inf. Theory, vol. 40, no. 5, pp. 1490–1503, Sep. 1994.
[32] M. Pawlak, E. Rafajłowicz, and A. Krzyżak, “Post-filtering versus pre-filtering for signal recovery from noisy samples,” IEEE Trans. Inf. Theory, vol. 49, no. 12, pp. 3195–3212, Dec. 2003.
[33] M. Pawlak and U. Stadtmüller, “Recovering band-limited signals under noise,” IEEE Trans. Inf. Theory, vol. 42, pp. 1425–1438, 1996.
[34] M. Pawlak and U. Stadtmüller, “Kernel regression estimators for signal recovery,” Statist. Probab. Lett., vol. 31, pp. 185–198, 1997.
[35] M. Pawlak and U. Stadtmüller, “Nonparametric estimation of a class of smooth functions,” J. Nonparametric Statist., vol. 8, pp. 149–183, 1997.
[36] M. Pawlak and U. Stadtmüller, “Statistical aspects of sampling for noisy and grouped data,” in Advances in Shannon Sampling Theory: Mathematics and Applications, J. Benedetto and P. Ferreira, Eds. Boston, MA: Birkhäuser, 2001, pp. 317–342.
[37] P. M. Robinson, “Large-sample inference for nonparametric regression with dependent errors,” Ann. Statist., vol. 25, pp. 2054–2083, 1997.
[38] S. Smale and D. X. Zhou, “Shannon sampling and function reconstruction from point values,” Bull. Amer. Math. Soc., vol. 41, pp. 279–305, 2004.
[39] M. Unser, “Sampling—50 years after Shannon,” Proc. IEEE, vol. 88, no. 4, pp. 569–587, Apr. 2000.
[40] M. Unser and I. Daubechies, “On the approximation power of convolution-based least squares versus interpolation,” IEEE Trans. Signal Process., vol. 45, no. 7, pp. 1697–1711, Jul. 1997.
[41] P. P. Vaidyanathan, “Generalizations of the sampling theorem: Seven decades after Nyquist,” IEEE Trans. Circuits Syst. I: Fundam. Theory Appl., vol. 48, no. 9, pp. 1094–1109, Sep. 2001.
[42] M. Zakai, “Band-limited functions and the sampling theorem,” Inf. Contr., vol. 8, pp. 143–158, 1965.
[43] A. I. Zayed, Advances in Shannon’s Sampling Theory. Boca Raton, FL: CRC, 1994.
