IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 59, NO. 5, MAY 2011
Covariance Matrices for Second-Order Vector Random Fields in Space and Time

Chunsheng Ma
Abstract—This paper deals with vector (or multivariate) random fields in space and/or time with second-order moments, for which a framework is needed to specify not only the properties of each component but also the possible cross relationships among the components. We derive basic properties of the covariance matrix function of a vector random field and propose three approaches to construct covariance matrix functions for Gaussian or non-Gaussian random fields. The first approach is to take derivatives of a univariate covariance function; the second is to work on a univariate random field whose index domain lies in a higher dimension; and the third is based on scale mixtures of separable spatio-temporal covariance matrix functions. To illustrate these methods, many parametric or semiparametric examples are formulated.

Index Terms—Covariance matrix function, cross covariance, direct covariance, elliptically contoured random field, Gaussian random field.
I. INTRODUCTION
STOCHASTIC and statistical modeling of phenomena over space and/or time through stochastic processes or random fields is important in various areas of application, with examples in many disciplines including signal processing [10], [13], image processing [8], [29], communications [16], engineering [4], [27], [31], [35], earth sciences [25], atmospheric studies [7], geology [33], hydrogeology [30], oceanography [28], [32], natural resources [14], neural networks [1], acoustics [26], and so on. Many variables of interest are contemporaneous aggregates of variables observed over time and across space, and there is an increasing need for analyzing multivariate measurements observed in space and time. For instance, in engineering one may be interested in the simultaneous behavior over time or space of current and voltage; of pressure, temperature, and volume; or of strength, modulus of elasticity, and fracture energy. The difficulty in analyzing such multivariate or vector spatial or spatio-temporal data is that, in a vector setting, a framework is needed for describing not only the properties of each component but also the possible cross relationships among the components, which are often hard to formulate. As a result, there are few correlation structures in the literature known for practical use, except for vector (multiple, or multivariate) time series defined on the set of integers [15]. This motivates us to investigate the correlation and cross-correlation structure of second-order vector random fields in this paper, towards potential applications in practice.

Manuscript received June 21, 2010; revised November 15, 2010; accepted January 16, 2011. Date of publication February 10, 2011; date of current version April 13, 2011. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Jean-Christophe Pesquet. This work was supported in part by the US Department of Energy by Grant DE-SC0005359, in part by the Kansas NSF EPSCoR by Grant EPS 0903806, and in part by a Kansas Technology Enterprise Corporation grant on Understanding Climate Change in the Great Plains: Source, Impact and Mitigation. The author is with the Department of Mathematics and Statistics, Wichita State University, Wichita, KS 67260 USA (e-mail: [email protected]). Digital Object Identifier 10.1109/TSP.2011.2112651

In this paper we use $\mathbf{Z}(x) = (Z_1(x), \dots, Z_m(x))'$ to denote an $m$-variate random function or field, which is a family of $m$-variate real random vectors on the same probability space over an index set $D$, where $m$ is a positive integer and $\mathbf{A}'$ stands for the transpose of a vector or matrix $\mathbf{A}$. Alternative names are often used for a vector random function associated with a particularly specified index domain $D$. More specifically, it is called a vector (or multiple) time series when $D$ is the set of integers, a stochastic vector process when $D$ is the set of real numbers [6], and a multidimensional (or vector) random field when $D$ is $\mathbb{R}^d$ or a subset of $\mathbb{R}^d$ with $d \ge 2$. In the particular case where $m = 1$, it reduces to a univariate random function. In a spatio-temporal setting, the index set will be denoted as $D \times T$, where $D$ is a spatial domain and $T$ is a temporal domain.

A random field $\{\mathbf{Z}(x), x \in D\}$ is called a second-order random field if the variances of its components, $\operatorname{var}(Z_k(x))$, $k = 1, \dots, m$, exist for all $x \in D$. Under this assumption the mean vector (function), $\operatorname{E}\mathbf{Z}(x)$, is well defined, and so is the covariance matrix (function) of the random field, which is the $m \times m$ matrix function
$$\mathbf{C}(x_1, x_2) = \operatorname{cov}\bigl(\mathbf{Z}(x_1), \mathbf{Z}(x_2)\bigr) = \operatorname{E}\bigl\{ [\mathbf{Z}(x_1) - \operatorname{E}\mathbf{Z}(x_1)]\, [\mathbf{Z}(x_2) - \operatorname{E}\mathbf{Z}(x_2)]' \bigr\} = \bigl( C_{ij}(x_1, x_2) \bigr)_{i, j = 1, \dots, m}.$$
A diagonal entry $C_{kk}(x_1, x_2)$ of the above matrix is the covariance function of the component random function $\{Z_k(x), x \in D\}$ and is called a direct covariance function. An off-diagonal entry $C_{ij}(x_1, x_2)$, $i \ne j$, is called the cross covariance function between the component random functions $\{Z_i(x)\}$ and $\{Z_j(x)\}$. The square matrix $\mathbf{C}(x_1, x_2)$ is not necessarily symmetric, since the cross covariances $C_{ij}(x_1, x_2)$ and $C_{ji}(x_1, x_2)$ may not match each other for distinct $i$ and $j$ and for distinct $x_1$ and $x_2$. Nevertheless, the transpose of $\mathbf{C}(x_1, x_2)$ is identical to $\mathbf{C}(x_2, x_1)$. Thus, this square matrix itself may not be positive definite, although the symmetric matrix $\mathbf{C}(x, x)$ is. One property of the cross covariance function is the Cauchy-Schwarz inequality
$$C_{ij}^2(x_1, x_2) \le C_{ii}(x_1, x_1)\, C_{jj}(x_2, x_2).$$
Except for this, little seems to be known about the cross covariance function. We will study basic properties of the covariance matrix in Section II.
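As a quick numerical illustration of these two properties, the following sketch (not from the paper; the bivariate exponential model and all parameter values are assumptions made for the demonstration) builds a simple stationary covariance matrix function and checks the symmetry relation and the Cauchy-Schwarz bound on a grid.

```python
import numpy as np

# Assumed bivariate model: C(h) = R * exp(-|h|) with R positive definite,
# a valid covariance matrix function by Corollary 3.1 below.
R = np.array([[1.0, 0.6], [0.6, 2.0]])
C = lambda h: R * np.exp(-abs(h))

xs = np.linspace(-3.0, 3.0, 25)
for x1 in xs:
    for x2 in xs:
        M = C(x1 - x2)
        assert np.allclose(M.T, C(x2 - x1))                     # C(x1,x2)' = C(x2,x1)
        assert M[0, 1] ** 2 <= C(0)[0, 0] * C(0)[1, 1] + 1e-12  # Cauchy-Schwarz
print("symmetry relation and Cauchy-Schwarz hold on the grid")
```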
It is known that if a real $m \times m$ matrix function $\mathbf{C}(x_1, x_2)$ is a covariance matrix, then $\mathbf{C}'(x_1, x_2) = \mathbf{C}(x_2, x_1)$ and
$$\sum_{i=1}^{n} \sum_{j=1}^{n} \mathbf{a}_i'\, \mathbf{C}(x_i, x_j)\, \mathbf{a}_j \ge 0 \tag{1}$$
holds for every positive integer $n$, any $x_1, \dots, x_n \in D$, and any $\mathbf{a}_1, \dots, \mathbf{a}_n \in \mathbb{R}^m$. Conversely, for a given matrix function with these properties, there is a zero-mean Gaussian or elliptically contoured random field on $D$ with $\mathbf{C}(x_1, x_2)$ as its covariance matrix [6], [12], [21], [38]. Thus, under the assumption that the underlying random field is Gaussian or elliptically contoured, the issue becomes the specification of the covariance matrix. On the other hand, it should be remarked that positive definiteness may not be enough in a non-Gaussian setting, even in the univariate case [20], [24].

A second-order $m$-variate random field $\{\mathbf{Z}(x), x \in \mathbb{R}^d\}$ is said to be (weakly) stationary or homogeneous if its mean is a constant vector and its covariance matrix function depends only on the lag $x_1 - x_2$. In this case we rewrite $\mathbf{C}(x_1, x_2)$ as $\mathbf{C}(x_1 - x_2)$ and call it a stationary covariance matrix. For a stationary covariance matrix $\mathbf{C}(x)$, it is easy to see that
$$\mathbf{C}'(x) = \mathbf{C}(-x), \qquad x \in \mathbb{R}^d.$$
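Condition (1) can be spot-checked numerically by assembling the $mn \times mn$ block matrix $\bigl(\mathbf{C}(x_i, x_j)\bigr)_{i,j=1}^{n}$ and inspecting its eigenvalues; the sketch below does this for the same assumed bivariate exponential model used earlier (a demonstration, not part of the paper).

```python
import numpy as np

R = np.array([[1.0, 0.6], [0.6, 2.0]])     # assumed colocated covariance matrix
C = lambda h: R * np.exp(-abs(h))          # stationary covariance matrix function

x = np.linspace(0.0, 5.0, 12)              # arbitrary sites x_1, ..., x_n
# block matrix with (i, j) block C(x_i - x_j); condition (1) says it is PSD
K = np.block([[C(a - b) for b in x] for a in x])
print(np.linalg.eigvalsh(K).min() >= -1e-10)   # True up to round-off
```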
An example of a nonstationary covariance matrix is the one with entries
$$C_{ij}(x_1, x_2) = \operatorname{cov}(V_{0i}, V_{0j}) + \operatorname{cov}(V_{1i}, V_{1j})\, x_1 x_2, \qquad i, j = 1, \dots, m,$$
which may be associated with an $m$-variate random field $\mathbf{Z}(x) = \mathbf{V}_0 + \mathbf{V}_1 x$, where $\mathbf{V}_0$ and $\mathbf{V}_1$ are uncorrelated $m$-variate random vectors with second-order moments.

In the early 1940s Cramér characterized the covariance matrix in the particular case where the corresponding vector random field is stationary, continuous in mean square, and possesses spectral densities. The same result was found independently by A. Kolmogorov [5]. We refer the reader to [6, Ch. 8] for the Cramér-Kolmogorov characterization. In this paper we deal with second-order vector random fields not only with the Cramér-Kolmogorov feature but also in more general settings, such as nonstationarity. Three approaches are proposed for constructing covariance matrices
of Gaussian or non-Gaussian vector random fields that may or may not be stationary. The first two methods, described in Sections III and IV, start from a univariate second-order random field or its covariance function and proceed to the multivariate case via an appropriate operation. In Section III we assume that a univariate spatio-temporal random field is differentiable in mean square with respect to the temporal variable [18], so that its covariance function has even-order partial derivatives with respect to the time lag; we then take partial derivatives to form a matrix function, which may be treated as the covariance matrix of a vector Gaussian or elliptically contoured random field. Section IV works on a univariate random field whose index domain has a dimension higher than the desired one, and yields vector random fields by fixing some coordinates of the index of the original univariate random field; in particular, it constructs covariance matrices using the conditionally negative definite matrix as a key ingredient. The third method is presented in Section V and is based on scale mixtures of separable spatio-temporal covariance matrices. Proofs of our theorems appear in Section VI, and some concluding remarks are given in Section VII.

II. BASIC PROPERTIES OF COVARIANCE MATRICES

This section presents basic properties of covariance matrix functions of vector Gaussian or second-order elliptically contoured random fields. The set of $m \times m$ covariance matrices is a convex cone, as the following theorem describes.

Theorem 1: If $\mathbf{C}_1(x_1, x_2)$ and $\mathbf{C}_2(x_1, x_2)$ are $m \times m$ covariance matrices, then so is $a_1 \mathbf{C}_1(x_1, x_2) + a_2 \mathbf{C}_2(x_1, x_2)$, where $a_1$ and $a_2$ are nonnegative constants.

Theorem 2: If $\mathbf{C}_n(x_1, x_2)$, $n = 1, 2, \dots$, is a sequence of $m \times m$ covariance matrices, then so is their limit $\lim_{n \to \infty} \mathbf{C}_n(x_1, x_2)$, whenever it exists.

Each of Theorems 1 and 2 may be thought of as a typical extension from the univariate case to the multivariate case. One might wonder whether the product of two covariance matrices is still a covariance matrix for $m \ge 2$. The answer is no, as the following counterexample shows.

1) Example 1: Let $\mathbf{C}_1(x_1, x_2)$ be a covariance matrix and let $\mathbf{C}_2(x_1, x_2)$ be a second matrix function,
which is a covariance matrix as well. The product $\mathbf{C}_1(x_1, x_2)\,\mathbf{C}_2(x_1, x_2)$ of the two covariance matrices is not a covariance matrix for certain values of the parameters involved; for instance, it may fail the symmetry relation $\mathbf{C}'(x_1, x_2) = \mathbf{C}(x_2, x_1)$ or inequality (1).

Recall that the Hadamard (or Schur) product of two matrices $A$ and $B$ of the same size is just their element-wise product and is denoted by $A \circ B$. The Hadamard product of two covariance matrices is a covariance matrix, although their ordinary product is not so.

Theorem 3: If $\mathbf{C}_1(x_1, x_2)$ and $\mathbf{C}_2(x_1, x_2)$ are $m \times m$ covariance matrices, then so is their Hadamard product $\mathbf{C}_1(x_1, x_2) \circ \mathbf{C}_2(x_1, x_2)$.

The following consequence of Theorem 3 follows by noticing that a positive definite matrix whose entries are constants is a covariance matrix.

Corollary 3.1: If $\mathbf{C}(x_1, x_2)$ is an $m \times m$ covariance matrix, then so is $A \circ \mathbf{C}(x_1, x_2)$, where $A$ is an $m \times m$ positive definite matrix.

It might be of interest to see whether the matrix $A$ in Corollary 3.1 could be replaced by a nonnegative matrix, all of whose entries are nonnegative. This is, however, not true. As a counterexample, let
$$A = \begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix} \qquad \text{and} \qquad \mathbf{C}(x_1, x_2) \equiv E,$$
where $E$ is a $2 \times 2$ matrix with all entries equal to 1. Clearly, $E$ is a covariance matrix, but
$$A \circ E = A$$
is not a covariance matrix, because the Cauchy-Schwarz inequality does not hold. This example also indicates that a square matrix with each entry being a univariate covariance function is not necessarily a covariance matrix. The next corollary follows from Corollary 3.1 and Theorem 1.

Corollary 3.2: If $\mathbf{C}_1(x_1, x_2)$ and $\mathbf{C}_2(x_1, x_2)$ are $m \times m$ covariance matrices, then so is $A_1 \circ \mathbf{C}_1(x_1, x_2) + A_2 \circ \mathbf{C}_2(x_1, x_2)$, where $A_1$ and $A_2$ are $m \times m$ positive definite matrices.

As we know, each direct covariance in a covariance matrix is a univariate covariance function. A related question is: what is a cross covariance? The following identity is one answer to this question:
$$C_{ij}(x_1, x_2) + C_{ji}(x_1, x_2) = \tfrac{1}{2}\operatorname{cov}\bigl(Z_i(x_1) + Z_j(x_1),\ Z_i(x_2) + Z_j(x_2)\bigr) - \tfrac{1}{2}\operatorname{cov}\bigl(Z_i(x_1) - Z_j(x_1),\ Z_i(x_2) - Z_j(x_2)\bigr),$$
which may be interpreted as saying that the symmetrized cross covariance is half the difference of the covariances of the two univariate random fields $\{Z_i(x) + Z_j(x)\}$ and $\{Z_i(x) - Z_j(x)\}$.

The method developed in Section V may be thought of as another version of the following theorem on scale mixtures of covariance matrices. See also Lemma 2 of [22].

Theorem 4: If $g(u)$ is a nonnegative function of $u \in [0, \infty)$ and $\mathbf{C}(x_1, x_2; u)$ is an $m \times m$ covariance matrix on $D$ for every fixed $u$, then there is an $m$-variate second-order random field with direct and cross covariances
$$C_{ij}(x_1, x_2) = \int_0^{\infty} C_{ij}(x_1, x_2; u)\, g(u)\, du, \qquad i, j = 1, \dots, m,$$
assuming that the above integrals exist.
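A short numerical check (a sketch with assumed random test matrices, not code from the paper) makes Theorem 3 and Example 1 concrete: the Schur product theorem keeps the Hadamard product positive semidefinite, while the ordinary matrix product of the same factors need not even be symmetric, and the nonnegative matrix $A$ above breaks the Cauchy-Schwarz bound.

```python
import numpy as np

rng = np.random.default_rng(0)
F1, F2 = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
C1, C2 = F1 @ F1.T, F2 @ F2.T          # two PSD (covariance-type) matrices

hadamard = C1 * C2                     # element-wise (Schur) product
ordinary = C1 @ C2                     # ordinary matrix product
print(np.linalg.eigvalsh(hadamard).min() >= -1e-10)  # True (Theorem 3)
print(np.allclose(ordinary, ordinary.T))             # generally False (Example 1)

A = np.array([[1.0, 2.0], [2.0, 1.0]])  # nonnegative but not positive definite
E = np.ones((2, 2))                     # valid covariance matrix
M = A * E                               # = A; Cauchy-Schwarz fails: 2^2 > 1*1
print(M[0, 1] ** 2 <= M[0, 0] * M[1, 1])             # False
```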
III. COVARIANCE MATRICES GENERATED BY DIFFERENTIATION

In this section we introduce some spatio-temporal covariance matrices obtained by taking partial derivatives of a univariate covariance function. In what follows, $C(x_1, x_2; t)$ denotes a univariate spatio-temporal covariance function on $D \times \mathbb{R}$ that is stationary in time, where $D$ is a spatial index set, $\mathbb{R}$ is the temporal index set, and $t$ is the temporal lag.

Theorem 5: If a univariate, spatio-temporal covariance function $C(x_1, x_2; t)$ on $D \times \mathbb{R}$ is stationary in time and has the partial derivative $\frac{\partial^{2(m-1)} C(x_1, x_2; t)}{\partial t^{2(m-1)}}$ with respect to $t$, then the matrix function (2), shown at the bottom of the page, is the covariance matrix of an $m$-variate Gaussian or elliptically contoured random field on $D \times \mathbb{R}$.
$$\left( (-1)^{j-1}\, \frac{\partial^{i+j-2} C(x_1, x_2; t)}{\partial t^{i+j-2}} \right)_{i, j = 1, \dots, m} \tag{2}$$
It is easy to see that the off-diagonal entries of the matrix (2) are connected through the relation
$$C_{ij}(x_1, x_2; t) = (-1)^{i+j}\, C_{ji}(x_1, x_2; t), \qquad i, j = 1, \dots, m,$$
and that each diagonal entry,
$$C_{kk}(x_1, x_2; t) = (-1)^{k-1}\, \frac{\partial^{2(k-1)} C(x_1, x_2; t)}{\partial t^{2(k-1)}}, \tag{3}$$
is a covariance function for every fixed $k \in \{1, \dots, m\}$.
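To make the recipe in Theorem 5 concrete, the following sketch (an illustration under an assumed smooth covariance $C(t) = 1/(1+t^2)$, taken purely temporal for simplicity) generates the matrix (2) symbolically and then confirms positive semidefiniteness numerically on a time grid.

```python
import numpy as np
import sympy as sp

t = sp.symbols('t', real=True)
C = 1 / (1 + t**2)        # assumed smooth covariance, stationary in time
m = 2
# (i, j) entry of (2): (-1)^(j-1) * d^(i+j-2) C / dt^(i+j-2)  (1-based i, j)
M = sp.Matrix(m, m, lambda i, j: (-1)**j * sp.diff(C, t, i + j))
f = sp.lambdify(t, M, "numpy")

times = np.linspace(0.0, 4.0, 10)
K = np.block([[np.array(f(a - b), dtype=float) for b in times] for a in times])
print(np.linalg.eigvalsh(K).min() >= -1e-9)   # PSD, as Theorem 5 asserts
```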
The assumptions of Theorem 5 mean that the underlying univariate random field is stationary in time and possesses mean square partial derivatives with respect to the temporal variable up to order $m - 1$. Some univariate spatio-temporal random fields that are mean square differentiable in space and/or time are illustrated in [20].

1) Example 2: As a simple example, the purely temporal covariance function $C(t) = e^{-t^2}$, $t \in \mathbb{R}$, is infinitely differentiable on $\mathbb{R}$. By Theorem 5 we obtain an $m \times m$ covariance matrix
$$\Bigl( (-1)^{i-1}\, H_{i+j-2}(t)\, e^{-t^2} \Bigr)_{i, j = 1, \dots, m},$$
where $H_n(t) = (-1)^n e^{t^2} \frac{d^n}{dt^n} e^{-t^2}$, $n = 0, 1, 2, \dots$, are the Hermite polynomials. As another example, consider the purely temporal covariance function (see, for example, [37, (2.150)])
$$C(t) = (1 + a|t|)\, e^{-a|t|}, \qquad t \in \mathbb{R},$$
where $a$ is a positive constant. Obviously, it is only twice differentiable on $\mathbb{R}$, and a $2 \times 2$ covariance matrix is obtained by using Theorem 5, with direct and cross covariances
$$C_{11}(t) = (1 + a|t|)\, e^{-a|t|}, \qquad C_{22}(t) = a^2 (1 - a|t|)\, e^{-a|t|},$$
$$C_{12}(t) = a^2 t\, e^{-a|t|} = -C_{21}(t).$$
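The following numerical sketch (a demonstration with an assumed value of $a$, not from the paper) assembles the block covariance of this bivariate model on a time grid and confirms that it is positive semidefinite, as Theorem 5 guarantees.

```python
import numpy as np

a = 1.5                                          # assumed positive constant
def block(t):                                    # 2x2 covariance matrix at lag t
    e = np.exp(-a * abs(t))
    return np.array([[(1 + a * abs(t)) * e,  a**2 * t * e],
                     [-a**2 * t * e,          a**2 * (1 - a * abs(t)) * e]])

times = np.linspace(-4.0, 4.0, 17)
K = np.block([[block(s - u) for u in times] for s in times])
print(np.linalg.eigvalsh(K).min() >= -1e-9)      # PSD: a valid covariance matrix
```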
2) Example 3: Let $\gamma(x)$ be a univariate variogram on $\mathbb{R}^d$ and let $b$ and $c$ be positive constants. By Theorem 4 with $g(u) = e^{-cu}$, the function
$$C(x; t) = \frac{1}{\gamma(x) + b t^2 + c}$$
is a covariance function associated with a univariate random field on $\mathbb{R}^d \times \mathbb{R}$, since it can be rewritten as
$$C(x; t) = \int_0^{\infty} e^{-u \gamma(x)}\, e^{-u b t^2}\, e^{-cu}\, du,$$
in which, for each fixed $u \ge 0$, $e^{-u \gamma(x)}$ is a purely spatial covariance function and $e^{-u b t^2}$ is a purely temporal one. Since $C(x; t)$ has partial derivatives of all orders with respect to $t$, one can apply Theorem 5 to obtain a covariance matrix of any order. For example, writing $D(x; t) = \gamma(x) + b t^2 + c$, a $2 \times 2$ covariance matrix has the entries
$$C_{11}(x; t) = \frac{1}{D(x; t)}, \qquad C_{12}(x; t) = \frac{2 b t}{D^2(x; t)} = -C_{21}(x; t), \qquad C_{22}(x; t) = \frac{2 b}{D^2(x; t)} - \frac{8 b^2 t^2}{D^3(x; t)}.$$
Similarly, one may apply Theorem 5 to the univariate covariance function
$$C(x; t) = \exp\bigl\{ -\gamma(x) - b t^2 \bigr\},$$
which is stationary in time and has partial derivatives of all orders with respect to $t$, and obtain a spatio-temporal covariance matrix of any order.

IV. COVARIANCE MATRICES GENERATED FROM UNIVARIATE RANDOM FIELDS WITH LATENT DIMENSIONS

This section constructs Gaussian or non-Gaussian vector random fields from a univariate random field with a higher dimensional index domain. More precisely, in order to obtain an $m$-variate random field $\{\mathbf{Z}(x), x \in \mathbb{R}^d\}$ on $\mathbb{R}^d$, we start from a univariate random field $\{Z(y), y \in \mathbb{R}^{d+\kappa}\}$ on $\mathbb{R}^{d+\kappa}$, where the extra $\kappa$ coordinates play the role of latent dimensions. Select $m$ constant vectors $u_1, \dots, u_m$ in $\mathbb{R}^{\kappa}$. Then we formulate an $m$-variate random field on $\mathbb{R}^d$ with components
$$Z_k(x) = Z(x, u_k), \qquad k = 1, \dots, m, \quad x \in \mathbb{R}^d,$$
where $(x, u_k)$ denotes the vector of $\mathbb{R}^{d+\kappa}$ obtained by appending $u_k$ to $x$.
Apparently, this is a vector Gaussian random field whenever $\{Z(y)\}$ is a univariate Gaussian random field, and it is a vector elliptically contoured (or another non-Gaussian) random field whenever $\{Z(y)\}$ is a univariate elliptically contoured (or another non-Gaussian) random field. In the particular case where the original random field $\{Z(y)\}$ has second-order moments, we obtain the following theorem.
Theorem 6: If $C(y_1, y_2)$, $y_1, y_2 \in \mathbb{R}^{d+\kappa}$, is a univariate covariance function and $u_1, \dots, u_m$ are constant vectors in $\mathbb{R}^{\kappa}$, then there is an $m$-variate second-order random field on $\mathbb{R}^d$ with covariance matrix
$$\mathbf{C}(x_1, x_2) = \Bigl( C\bigl( (x_1, u_i), (x_2, u_j) \bigr) \Bigr)_{i, j = 1, \dots, m}. \tag{4}$$
1) Example 4: For a positive constant $\alpha$ with $0 < \alpha \le 1$, consider a univariate fractional Brownian random field in $\mathbb{R}^{d+\kappa}$, which is a Gaussian random field with covariance
$$C(y_1, y_2) = \frac{1}{2} \bigl( \| y_1 \|^{2\alpha} + \| y_2 \|^{2\alpha} - \| y_1 - y_2 \|^{2\alpha} \bigr), \qquad y_1, y_2 \in \mathbb{R}^{d+\kappa},$$
where $\| y \|$ denotes the usual Euclidean norm of $y$. Starting from it and using (4), we obtain a vector Gaussian random field with direct and cross covariances
$$C_{ij}(x_1, x_2) = \frac{1}{2} \bigl( \| (x_1, u_i) \|^{2\alpha} + \| (x_2, u_j) \|^{2\alpha} - \| (x_1 - x_2, u_i - u_j) \|^{2\alpha} \bigr),$$
where $u_1, \dots, u_m$ are constant vectors in $\mathbb{R}^{\kappa}$. Here the $m \times m$ matrix whose $(i, j)$ entry is $\| u_i - u_j \|^{2\alpha}$ is an example of the so-called conditionally negative definite matrix [2], which is briefly reviewed below for our construction of covariance matrices.

2) Example 5: The turning bands method is a simulation method which enables the construction of simulations of a univariate random field in space from simulations on lines [23]. Starting from a univariate, stationary covariance function on the real line and using the turning bands method, we obtain a univariate, stationary and isotropic random field in $\mathbb{R}^{d+\kappa}$ whose covariance is of the form $C(\| y_1 - y_2 \|)$. For given constant vectors $u_1, \dots, u_m$ in $\mathbb{R}^{\kappa}$, the $m$-variate random field with components $Z_k(x) = Z(x, u_k)$, $k = 1, \dots, m$, is isotropic with direct and cross covariances
$$C_{ij}(x_1, x_2) = C\Bigl( \bigl( \| x_1 - x_2 \|^2 + \| u_i - u_j \|^2 \bigr)^{1/2} \Bigr), \qquad i, j = 1, \dots, m.$$

For $m \ge 2$, an $m \times m$ symmetric matrix $B = (b_{ij})$ is said to be a conditionally negative definite matrix if
$$\sum_{i=1}^{m} \sum_{j=1}^{m} b_{ij}\, w_i w_j \le 0$$
holds for any real numbers $w_1, \dots, w_m$ subject to $\sum_{i=1}^{m} w_i = 0$. One such example is a symmetric matrix whose entries are identical to a constant. Another example is a symmetric matrix whose diagonal entries are zero and whose off-diagonal entries are equal to a nonnegative constant. A third example is a symmetric matrix with entries $\| u_i - u_j \|^{2\alpha}$, $0 < \alpha \le 1$. It is known [2] that $B$ is a conditionally negative definite matrix if and only if $\exp_{\circ}[-cB]$ is a positive definite matrix for every nonnegative constant $c$, where $\exp_{\circ}[A]$ denotes the Hadamard exponential of a matrix $A$, i.e.,
$$\exp_{\circ}[A] = \bigl( e^{a_{ij}} \bigr) = \sum_{k=0}^{\infty} \frac{1}{k!}\, A^{\circ k}, \qquad A^{\circ 0} = E,$$
where $E$ is a matrix of the same size as $A$ with all its entries equal to 1.
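A quick numerical illustration of this characterization (a sketch; the latent vectors and the exponent are arbitrary assumptions): build the matrix with entries $\|u_i - u_j\|^{2\alpha}$, confirm it is conditionally negative definite on the zero-sum subspace, and confirm that its Hadamard exponentials $\exp_\circ[-cB]$ are positive semidefinite, which is exactly what the examples below exploit.

```python
import numpy as np

rng = np.random.default_rng(2)
m, kappa, alpha = 4, 2, 0.7               # assumed sizes and exponent, 0 < alpha <= 1
U = rng.standard_normal((m, kappa))       # latent vectors u_1, ..., u_m
B = np.array([[np.linalg.norm(U[i] - U[j]) ** (2 * alpha)
               for j in range(m)] for i in range(m)])

# conditional negative definiteness: w' B w <= 0 whenever sum(w) = 0
for _ in range(1000):
    w = rng.standard_normal(m)
    w -= w.mean()                          # project onto the zero-sum subspace
    assert w @ B @ w <= 1e-10

# equivalent characterization via the Hadamard exponential
for c in (0.01, 0.5, 3.0):
    assert np.linalg.eigvalsh(np.exp(-c * B)).min() >= -1e-12
print("B is conditionally negative definite; exp_o[-cB] is PSD")
```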
Next we illustrate how to construct covariance matrices with the conditionally negative definite matrix as a key ingredient, with the help of Theorem 4.

3) Example 6: We are going to show that there is an $m$-variate random field on $\mathbb{R}^d \times \mathbb{R}$ that is stationary in time and has the direct and cross covariances shown in the equation at the bottom of the page, where $\gamma(x)$ is a univariate variogram on $\mathbb{R}^d$, $B = (b_{ij})$ is an $m \times m$ conditionally negative definite matrix with nonnegative entries, $c$ is a positive constant, and $\operatorname{erfc}(v)$ denotes the complementary error function
$$\operatorname{erfc}(v) = \frac{2}{\sqrt{\pi}} \int_{v}^{\infty} e^{-w^2}\, dw.$$
To this end, notice that (see [3, p. 15])
$$\int_0^{\infty} e^{-p u}\, \operatorname{erfc}\Bigl( \frac{a}{2\sqrt{u}} \Bigr)\, du = \frac{1}{p}\, e^{-a \sqrt{p}}, \qquad a \ge 0,\ p > 0.$$
Substituting $p$ and $a$ in the above formula by appropriate functions of the spatial lag, the temporal lag, and the entries $b_{ij}$, we rewrite each direct and cross covariance as an integral over $u \in (0, \infty)$ whose integrand factors into $e^{-u b_{ij}}$, the function $e^{-u \gamma(x_1 - x_2)}$, and a nonnegative weight free of $i$ and $j$. Since $B = (b_{ij})$ is conditionally negative definite, the matrix $\bigl( e^{-u b_{ij}} \bigr)$ is positive definite. The matrix function $e^{-u \gamma(x_1 - x_2)} E$, where $E$ is an $m \times m$ matrix with all entries equal to 1, is obviously a covariance matrix. Thus, it follows from Corollary 3.1 that the matrix function whose $(i, j)$ entry is $e^{-u (\gamma(x_1 - x_2) + b_{ij})}$ is a covariance matrix, and from Theorem 4 that there is an $m$-variate random field with the displayed entries as direct and cross covariances. Its spatial margin, obtained by letting $t = 0$, is shown in (5) at the bottom of the page.

4) Example 7: Suppose that $a$ and $c$ are positive constants, and denote by $K_{\nu}(v)$ the modified Bessel function of the second kind of order $\nu$ [36]. We will verify that there is an $m$-variate stationary random field in $\mathbb{R}^d$ with direct and cross covariances given by (6), shown at the bottom of the page, assuming that the constants appearing there are positive. Clearly, each of those direct and cross covariances in (6) is a linear combination of the so-called von Kármán-Whittle model [18]. The range of the order parameter here is chosen to be a proper subset of that in [18, Corollary 7], in order to ensure that it varies from entry to entry in the matrix function.

Using the following formula (see, for example, [3, p. 146]):
$$\int_0^{\infty} u^{\nu - 1} \exp\Bigl( -p u - \frac{a}{u} \Bigr)\, du = 2 \Bigl( \frac{a}{p} \Bigr)^{\nu/2} K_{\nu}\bigl( 2 \sqrt{a p} \bigr), \qquad a > 0,\ p > 0,$$
we can rewrite (6) as a scale mixture of the form covered by Theorem 4. Now Theorem 4 is applicable to this case, since the $m \times m$ matrix appearing in the resulting integrand is positive definite for each fixed $u$ and, according to [19, Theorem 1 (i)], the spatial function in the integrand is positive definite in $\mathbb{R}^d$, so that the $m \times m$ matrix function formed by the integrand is a covariance matrix for each fixed $u$.

5) Example 8: Let $\gamma(x)$ be a univariate variogram on $\mathbb{R}^d$, let $b$ and $c$ be positive constants, and let the $m \times m$ matrix $B = (b_{ij})$ be conditionally negative definite with nonnegative entries. Then there is an $m$-variate second-order random field with direct and cross covariances shown at the bottom of the page. To see this, we need an identity that expresses each entry as a scale mixture over $u$ of terms of the form $e^{-u (\gamma(x_1 - x_2) + b_{ij})}$ times a nonnegative weight, and we use this identity to rewrite the direct and cross covariances in a form to which Theorem 4 can be applied, since the matrix function whose $(i, j)$ entry is $e^{-u (\gamma(x_1 - x_2) + b_{ij})}$ is a covariance matrix for any fixed $u \ge 0$.
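The mechanism shared by Examples 6, 8, and 9 can be checked numerically. In the sketch below (an illustration with assumed ingredients: the variogram $\gamma(h) = |h|$, a small conditionally negative definite matrix $B$, and the mixing weight $e^{-u}$), each fixed $u$ gives the covariance matrix with entries $e^{-u\gamma(x_1-x_2)}\,e^{-u b_{ij}}$ by Corollary 3.1, and mixing over $u$ as in Theorem 4 preserves positive semidefiniteness.

```python
import numpy as np

m, n = 2, 8
B = np.array([[0.0, 1.0], [1.0, 0.0]])   # CND with nonnegative entries
x = np.linspace(0.0, 3.0, n)             # spatial sites on the real line
us = np.linspace(0.01, 8.0, 200)         # quadrature grid for the mixture
w = np.exp(-us)                          # assumed nonnegative mixing weight g(u)

K = np.zeros((n * m, n * m))
for p in range(n):
    for q in range(n):
        g = abs(x[p] - x[q])             # variogram gamma(h) = |h|
        blocks = np.exp(-us[:, None, None] * (g + B))   # e^{-u(gamma + b_ij)}
        K[p*m:(p+1)*m, q*m:(q+1)*m] = np.trapz(blocks * w[:, None, None], us, axis=0)

print(np.linalg.eigvalsh(K).min() >= -1e-9)   # the mixed matrix function is PSD
```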
6) Example 9: Suppose that $\gamma(x)$ is a univariate variogram on $\mathbb{R}^d$, that $b$ and $c$ are positive constants, and that $B = (b_{ij})$ is an $m \times m$ conditionally negative definite matrix with positive entries. By Theorem 4, there is an $m$-variate second-order random field on $\mathbb{R}^d$ with direct and cross covariances
$$C_{ij}(x_1, x_2) = \bigl( b\, \gamma(x_1 - x_2) + b_{ij} \bigr)^{-c}, \qquad i, j = 1, \dots, m,$$
since each entry can be re-expressed as
$$\bigl( b\, \gamma(x_1 - x_2) + b_{ij} \bigr)^{-c} = \frac{1}{\Gamma(c)} \int_0^{\infty} u^{c-1}\, e^{-u ( b\, \gamma(x_1 - x_2) + b_{ij} )}\, du,$$
where, for each fixed $u \ge 0$, the matrix function with $(i, j)$ entry $e^{-u ( b\, \gamma(x_1 - x_2) + b_{ij} )}$ is a covariance matrix.

V. COVARIANCE MATRICES GENERATED BY SCALE MIXTURES

Suppose that two $m \times m$ matrix functions $\mathbf{C}_S(s)$, $s \in \mathbb{R}^d$, and $\mathbf{C}_T(t)$, $t \in \mathbb{R}$, are purely spatial and purely temporal covariance matrices in $\mathbb{R}^d$ and $\mathbb{R}$, respectively, whose index sets contain the origin. Based on these given matrices, we are going to formulate a spatio-temporal covariance matrix on $\mathbb{R}^d \times \mathbb{R}$. Evidently, it is possible to extend this approach to a more general context than a spatio-temporal one, so that the temporal variable would be a vector-valued one. Two examples of $\mathbf{C}_T(t)$ are given in Example 2, and (5) is an example of $\mathbf{C}_S(s)$. The resulting models would be useful, for instance, in video processing, where an emerging demand for nonseparable spatio-temporal covariance structures is noted.

It follows from Theorem 3 that $\mathbf{C}_S(s_1 - s_2) \circ \mathbf{C}_T(t_1 - t_2)$ is a spatio-temporal covariance matrix on $\mathbb{R}^d \times \mathbb{R}$. Obviously, each entry in this covariance matrix is separable. To construct a nonseparable one, we employ a scale mixture approach, a univariate version of which was developed in [17]. For this purpose suppose that $\{\mathbf{Z}_0(s; t), (s; t) \in \mathbb{R}^d \times \mathbb{R}\}$ is an $m$-variate random field with mean zero and covariance matrix $\mathbf{C}_S(s_1 - s_2) \circ \mathbf{C}_T(t_1 - t_2)$. Now define a new spatio-temporal vector random field by
$$\mathbf{Z}(s; t) = \mathbf{Z}_0(V_1 s;\, V_2 t), \qquad (s; t) \in \mathbb{R}^d \times \mathbb{R},$$
where $(V_1, V_2)'$ is a bivariate random vector with distribution $F(v_1, v_2)$ and is independent of $\{\mathbf{Z}_0(s; t)\}$. This is a second-order random field with direct and cross covariances
$$C_{ij}(s_1 - s_2;\, t_1 - t_2) = \int C_{S, ij}\bigl( v_1 (s_1 - s_2) \bigr)\, C_{T, ij}\bigl( v_2 (t_1 - t_2) \bigr)\, dF(v_1, v_2). \tag{7}$$
Obviously, one benefit of the mixing approach is that it generates a large variety of valid covariance matrices through appropriate choices of the mixing distribution $F(v_1, v_2)$ and the purely spatial and purely temporal covariance matrices $\mathbf{C}_S(s)$ and $\mathbf{C}_T(t)$.

1) Example 10: Let $(V_1, V_2)'$ be a bivariate Bernoulli random vector with
$$P\bigl( (V_1, V_2) = (1, 0) \bigr) = p_1, \quad P\bigl( (V_1, V_2) = (0, 1) \bigr) = p_2, \quad P\bigl( (V_1, V_2) = (1, 1) \bigr) = p_3,$$
where the sum of the nonnegative constants $p_1, p_2, p_3$ is 1. In this case, from (7) we obtain the direct and cross covariances
$$C_{ij}(s; t) = p_1\, C_{S, ij}(s)\, C_{T, ij}(0) + p_2\, C_{S, ij}(0)\, C_{T, ij}(t) + p_3\, C_{S, ij}(s)\, C_{T, ij}(t),$$
each of which has a product-sum form, since the mixture can be re-expressed as a weighted sum over the three support points. The resulting covariance matrix may be written as
$$\mathbf{C}(s; t) = p_1\, \mathbf{C}_S(s) \circ \mathbf{C}_T(0) + p_2\, \mathbf{C}_S(0) \circ \mathbf{C}_T(t) + p_3\, \mathbf{C}_S(s) \circ \mathbf{C}_T(t).$$
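A numerical sketch of Example 10 (with assumed ingredients: an exponential $\mathbf{C}_S$, a Gaussian-shaped $\mathbf{C}_T$, both built via Corollary 3.1, and assumed weights $p_1, p_2, p_3$) assembles the product-sum covariance matrix over a small space-time design and confirms positive semidefiniteness.

```python
import numpy as np

# assumed ingredients: C_S(h) = R o (e^{-|h|} E) and C_T(u) = R o (e^{-u^2} E),
# both valid m x m covariance matrix functions by Corollary 3.1
R = np.array([[1.0, 0.6], [0.6, 1.0]])
Cs = lambda h: R * np.exp(-abs(h))
Ct = lambda u: R * np.exp(-u * u)

p1, p2, p3 = 0.3, 0.3, 0.4                       # assumed weights: nonnegative, sum 1
def C(h, u):                                     # product-sum model of Example 10
    return p1 * Cs(h) * Ct(0) + p2 * Cs(0) * Ct(u) + p3 * Cs(h) * Ct(u)

pts = [(s, t) for s in (0.0, 1.0, 2.5) for t in (0.0, 0.7)]
K = np.block([[C(a[0] - b[0], a[1] - b[1]) for b in pts] for a in pts])
print(np.linalg.eigvalsh(K).min() >= -1e-10)     # PSD: a valid covariance matrix
```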
VI. PROOFS

Proof of Theorem 1: One approach to prove this result starts by assuming that $\{\mathbf{Z}_1(x)\}$ and $\{\mathbf{Z}_2(x)\}$ are $m$-variate random fields defined on the same probability space with covariance matrices $\mathbf{C}_1(x_1, x_2)$ and $\mathbf{C}_2(x_1, x_2)$, respectively, and that they are independent. Then it is easy to verify that the new random field $a_1^{1/2}\, \mathbf{Z}_1(x) + a_2^{1/2}\, \mathbf{Z}_2(x)$ possesses the covariance matrix $a_1 \mathbf{C}_1(x_1, x_2) + a_2 \mathbf{C}_2(x_1, x_2)$.

Proof of Theorem 2: Since $\mathbf{C}_n(x_1, x_2)$ satisfies (1) for every positive integer $n$, taking the limit as $n \to \infty$ yields that $\lim_{n \to \infty} \mathbf{C}_n(x_1, x_2)$ also satisfies (1). Thus, there is an $m$-variate zero-mean Gaussian or elliptically contoured random field [21] with $\lim_{n \to \infty} \mathbf{C}_n(x_1, x_2)$ as its covariance matrix.

Proof of Theorem 3: Suppose that two $m$-dimensional second-order random fields $\{\mathbf{Z}_1(x)\}$ and $\{\mathbf{Z}_2(x)\}$, with mean zero, are independent and possess covariance matrices $\mathbf{C}_1(x_1, x_2)$ and $\mathbf{C}_2(x_1, x_2)$, respectively. Define a new $m$-dimensional random field by the component-wise product
$$\mathbf{Z}(x) = \bigl( Z_{11}(x) Z_{21}(x), \dots, Z_{1m}(x) Z_{2m}(x) \bigr)'.$$
Clearly, this is a second-order random field with mean zero and covariance matrix $\mathbf{C}_1(x_1, x_2) \circ \mathbf{C}_2(x_1, x_2)$.

Proof of Theorem 4: It suffices to verify (1). Since $\mathbf{C}(x_1, x_2; u)$ is an $m \times m$ covariance matrix for every fixed $u$,
$$\sum_{i=1}^{n} \sum_{j=1}^{n} \mathbf{a}_i'\, \mathbf{C}(x_i, x_j; u)\, \mathbf{a}_j \ge 0$$
holds for every positive integer $n$, any $x_1, \dots, x_n \in D$, and any $\mathbf{a}_1, \dots, \mathbf{a}_n \in \mathbb{R}^m$. Therefore
$$\sum_{i=1}^{n} \sum_{j=1}^{n} \mathbf{a}_i' \Bigl( \int_0^{\infty} \mathbf{C}(x_i, x_j; u)\, g(u)\, du \Bigr) \mathbf{a}_j = \int_0^{\infty} \Bigl( \sum_{i=1}^{n} \sum_{j=1}^{n} \mathbf{a}_i'\, \mathbf{C}(x_i, x_j; u)\, \mathbf{a}_j \Bigr) g(u)\, du \ge 0.$$
Proof of Theorem 5: Suppose that $\{Z(x; t), (x; t) \in D \times \mathbb{R}\}$ is a second-order random field with mean zero and covariance $C(x_1, x_2; t_1 - t_2)$. Let $h$ be a positive constant, and define an $m$-variate random field whose $k$th component is the $(k-1)$th backward difference quotient of $Z$ in time,
$$Z_k^{(h)}(x; t) = h^{-(k-1)} \sum_{l=0}^{k-1} (-1)^l \binom{k-1}{l} Z(x; t - l h), \qquad k = 1, \dots, m.$$
Clearly, it has second-order moments and is stationary in time. To obtain (2), we evaluate the covariance matrix of this random field and then apply Theorem 2, taking the limit of the covariance matrix as $h \to 0$. In fact, for the $(1, 2)$ entry we have
$$\lim_{h \to 0} \operatorname{cov}\Bigl( Z(x_1; t_1),\, \frac{Z(x_2; t_2) - Z(x_2; t_2 - h)}{h} \Bigr) = -\frac{\partial C(x_1, x_2; t)}{\partial t}\Big|_{t = t_1 - t_2},$$
and in the same way we are able to derive the other entries in (2).

Proof of Theorem 6: Suppose that $\{Z(y), y \in \mathbb{R}^{d+\kappa}\}$ is a univariate second-order random field with covariance $C(y_1, y_2)$. Then it is easy to verify that the $m$-variate random field
$$\mathbf{Z}(x) = \bigl( Z(x, u_1), \dots, Z(x, u_m) \bigr)', \qquad x \in \mathbb{R}^d,$$
possesses second-order moments and has the covariance matrix (4).

VII. CONCLUSION

Given $m$ univariate second-order random fields $\{Z_k(x), x \in D\}$, $k = 1, \dots, m$, on the same probability space, if they are assumed to be independent, then one may easily formulate a second-order vector random field, say, $\mathbf{Z}(x) = (Z_1(x), \dots, Z_m(x))'$ or $\mathbf{Z}(x) = A\, (Z_1(x), \dots, Z_m(x))'$, where $A$ is an $m \times m$ constant matrix. Without the independence assumption, the above formulations may be meaningless, since no cross relationships are provided among the components. These relationships are stored in the covariance matrix, whose diagonal entries reveal the correlation structure of the individual components and whose off-diagonal entries reveal the cross-correlation structure among the components. Although a cross covariance cannot be arbitrarily specified, there is no clear standalone condition applicable to the cross covariance, unlike the positive definiteness for a direct covariance. But, as we have shown, the cross covariances depend on the direct covariances in a certain way, so that (1) holds. Directly verifying inequality (1) is often a hard job. For the stationary case, the Cramér-Kolmogorov characterization is a useful tool, reducing the verification to an inspection of the Fourier transform matrix. Another approach to constructing a covariance matrix function is based on the convolution operation [11], [34]. Moreover, this paper proposes three simple approaches to formulate the covariance matrix of a second-order vector random field. The first two approaches start from a univariate random field, either with (partial) mean square derivatives or with latent dimensions. The third approach is a kind of scale mixture of separable spatio-temporal covariance matrices. Our approaches are applicable to the Gaussian or elliptically contoured case, but should be applied to other non-Gaussian cases with caution, since inequality (1) may be a necessary but not sufficient condition [20], [24]. Also, these approaches are useful for deriving vector random fields with either short range dependence or long range dependence [22].

ACKNOWLEDGMENT

Helpful comments and suggestions from two anonymous reviewers are gratefully acknowledged.
REFERENCES

[1] S. Bannour and M. R. Azimi-Sadjadi, "Principal component extraction using recursive least squares learning," IEEE Trans. Neural Netw., vol. 6, pp. 457-469, 1995.
[2] R. B. Bapat and T. E. S. Raghavan, Nonnegative Matrices and Applications. Cambridge, U.K.: Cambridge Univ. Press, 1997.
[3] H. Bateman, Tables of Integral Transforms. Boston, MA: McGraw-Hill, 1954, vol. 1.
[4] A. Chakraborty and S. Rahman, "Stochastic multiscale models for fracture analysis of functionally graded materials," Eng. Fract. Mech., vol. 75, pp. 2062-2086, 2008.
[5] H. Cramér, "On the theory of stationary random processes," Ann. Math., vol. 41, pp. 215-230, 1940.
[6] H. Cramér and M. R. Leadbetter, Stationary and Related Stochastic Processes: Sample Function Properties and Their Applications. New York: Wiley, 1967.
[7] R. Daley, Atmospheric Data Analysis. Cambridge, U.K.: Cambridge Univ. Press, 1991.
[8] K. B. Eom, "Long-correlation image models for textures with circular and elliptical correlation structures," IEEE Trans. Image Process., vol. 10, pp. 1047-1055, 2001.
[9] R. Fisch, "Random-field models for relaxor ferroelectric behavior," Phys. Rev. B, vol. 67, p. 094110, 2003.
[10] J. Flgwer, "Multidimensional random process synthesis and simulation," Mult. Syst. Signal Process., vol. 11, pp. 381-394, 2000.
[11] G. Gaspari and S. E. Cohn, "Construction of correlation functions in two and three dimensions," Q. J. R. Meteorol. Soc., vol. 125, pp. 723-757, 1999.
[12] I. I. Gikhman and A. V. Skorokhod, Introduction to the Theory of Random Processes. Philadelphia, PA: W. B. Saunders, 1969.
[13] R. M. Gray and L. D. Davisson, An Introduction to Statistical Signal Processing. Cambridge, U.K.: Cambridge Univ. Press, 2004.
[14] P. Goovaerts, Geostatistics for Natural Resources Evaluation. New York: Oxford Univ. Press, 1997.
[15] E. J. Hannan, Multiple Time Series. New York: Wiley, 1970.
[16] X. Li, S. Jin, X. Gao, and K.-K. Wong, "Near-optimal power allocation for MIMO channels with mean or covariance feedback," IEEE Trans. Commun., vol. 58, pp. 289-300, 2010.
[17] C. Ma, "Spatio-temporal covariance functions generated by mixtures," Math. Geol., vol. 34, pp. 965-975, 2002.
[18] C. Ma, "Spatio-temporal variograms and covariance models," Adv. Appl. Prob., vol. 37, pp. 706-725, 2005.
[19] C. Ma, "Linear combinations of space-time covariance functions and variograms," IEEE Trans. Signal Process., vol. 53, pp. 857-864, 2005.
[20] C. Ma, "— random fields in space and time," IEEE Trans. Signal Process., vol. 58, pp. 378-383, 2010.
[21] C. Ma, "Vector random fields with second-order moments or second-order increments," Stoch. Anal. Appl., vol. 29, pp. 197-215, 2011.
[22] C. Ma, "Vector random fields with long range dependence," Fractals, 2011.
[23] G. Matheron, "The intrinsic random functions and their applications," Adv. Appl. Prob., vol. 5, pp. 439-468, 1973.
[24] G. Matheron, "The internal consistency of models in geostatistics," in Geostatistics, M. Armstrong, Ed. Boston, MA: Kluwer Academic, 1989, vol. 1, pp. 21-38.
[25] R. A. Olea, Geostatistics for Engineers and Earth Scientists. Boston, MA: Kluwer Academic, 1999.
[26] V. E. Ostashev, V. Mellert, R. Wandelt, and F. Gerdes, "Propagation of sound in a turbulent medium. I. Plane wave," J. Acoust. Soc. Amer., vol. 102, pp. 2561-2570, 1997.
[27] M. Ostoja-Starzewski, "Microstructural randomness versus representative volume element in thermomechanics," J. Appl. Mechan., vol. 69, pp. 25-35, 2002.
[28] T. M. Özgökmen, L. I. Piterbarg, A. J. Mariano, and E. H. Ryan, "Predictability of drifter trajectories in the tropical Pacific Ocean," J. Phys. Oceanogr., vol. 31, pp. 2691-2720, 2001.
[29] S. J. Reeves, "Imaging a class of non-Gaussian fields beyond the diffraction limit," J. Opt. Soc. Amer. A, vol. 16, pp. 264-275, 1999.
[30] Y. Rubin, Applied Stochastic Hydrogeology. Oxford, U.K.: Oxford Univ. Press, 2002.
[31] M. Shakeri, K. R. Pattipati, and D. L. Kleinman, "Optimal measurement scheduling for state estimation," IEEE Trans. Aerosp. Electron. Syst., vol. 31, pp. 716-729, 1995.
[32] M. D. Tsyroulnikov, "Proportionality of scales: An isotropy-like property of geophysical fields," Q. J. R. Meteorol. Soc., vol. 127, pp. 2741-2760, 2001.
[33] J. A. Vargas-Guzmán, "Geostatistics for power models of Gaussian fields," Math. Geol., vol. 36, pp. 307-322, 2004.
[34] J. M. Ver Hoef and R. P. Barry, "Constructing and fitting models for cokriging and multivariable spatial prediction," J. Statist. Plan. Infer., vol. 69, pp. 275-294, 1998.
[35] M. Vorechovsky, "Simulation of simple cross correlated random fields by series expansion methods," Structural Safety, vol. 30, pp. 337-363, 2008.
[36] G. N. Watson, A Treatise on the Theory of Bessel Functions, 2nd ed. Cambridge, U.K.: Cambridge Univ. Press, 1954.
[37] A. M. Yaglom, An Introduction to the Theory of Stationary Random Functions. Englewood Cliffs, NJ: Prentice-Hall, 1962.
[38] A. M. Yaglom, Correlation Theory of Stationary and Related Random Functions: Basic Results. New York: Springer, 1987, vol. 1.
Chunsheng Ma received the Ph.D. degree from the University of Sydney, Australia, in 1997. After two years with the University of British Columbia as a Postdoctoral Fellow, he joined Wichita State University, Wichita, KS, in 1999, where he is currently a Professor. During the 2006-2007 academic year, he was a SAMSI University Fellow with the Statistical and Applied Mathematical Sciences Institute (SAMSI). His research areas include statistics and probability, and his current research interests are in vector random fields and spatio-temporal statistics.