Recovering gradients from sparsely observed functional data

Sara López-Pintado and Ian W. McKeague

Department of Biostatistics, Mailman School of Public Health,
Columbia University, 722 West 168th Street, 6th Floor, New York, NY 10032, U.S.A.

e-mail: [email protected], [email protected]

October 12, 2012

Abstract

The recovery of gradients of sparsely observed functional data is a challenging ill-posed inverse problem. Given observations of smooth curves (e.g., growth curves) at isolated time points, the aim is to provide estimates of the underlying gradients (or growth velocities). To address this problem, we develop a Bayesian inversion approach that models the gradient in the gaps between the observation times by a tied-down Brownian motion, conditionally on its values at the observation times. The posterior mean and covariance kernel of the growth velocities are then found to have explicit and computationally tractable representations in terms of quadratic splines. The hyperparameters in the prior are specified via nonparametric empirical Bayes, with the prior precision matrix at the observation times estimated by constrained $\ell_1$ minimization. The infinitesimal variance of the Brownian motion prior is selected by cross-validation. The approach is illustrated using both simulated and real data examples.

KEY WORDS: Growth trajectories; Functional data analysis; Ill-posed inverse problem; Nonparametric empirical Bayes; Tied-down Brownian motion.

1 Introduction

The extensive development of functional data analysis over the last decade has led to many useful techniques for studying samples of trajectories (Ramsay and Silverman, 2005, Ferraty and Vieu, 2006). Typically, a crucial first step is needed before such analyses are possible: the trajectories need to be reconstructed on a fine grid of equally spaced time points (if they are not already in such a form). Methods for reconstructing trajectories in this way have been studied using kernel smoothing (Ferraty and Vieu, 2006), smoothing splines (Ramsay and Silverman, 2005), local linear smoothing (Hall et al., 2006), mixed effects models (James et al., 2000, Rice and Wu, 2000), and principal components analysis through conditional expectations (Yao et al., 2005a,b).

In this paper we study the problem of reconstructing gradients of trajectories on the basis of sparse (or widely separated) observations. The proposed approach is specifically designed for reconstructing growth velocities from longitudinal observations of childhood developmental indices, e.g., height, weight, BMI, head circumference, or measures of brain maturity obtained via fMRI (Dosenbach et al., 2010). Growth velocities based on such indices play a central role in life course epidemiology, often providing fundamental indicators of prenatal or childhood development that are related to adult health outcomes (Barker et al., 2005). Repeated measurements of childhood developmental indices may be available on most subjects in a study, but usually only sparse temporal sampling is feasible (McKeague et al., 2011). It can thus be challenging to gain a detailed understanding of growth patterns. Moreover, the problem is exacerbated by the presence of large fluctuations in growth velocity during early infancy, and high variability between subjects. In addition, although the patterns of examination times vary among children, they tend to cluster around "nominal" ages (e.g., birthdays), so there can be large gaps without data. We call this regular sparsity, in contrast to irregular sparsity in which the observation times for each individual are widely separated but become dense when merged over all subjects.

The problem of reconstructing gradients under irregular sparsity (even with only one observation time per trajectory) has recently been studied by Liu and Müller (2009). In their approach, the best linear predictor of the gradient is estimated (assuming Gaussian trajectories) in terms of estimated functional principal component scores. The accuracy of the reconstruction depends on how well each individual gradient can be represented in terms of a small number of estimated principal component functions. This in turn requires an accurate estimate of the covariance kernel of the trajectories, which is not possible in the case of regular sparsity.

Numerical analysis methods can be used to reconstruct gradients using only individual-level data in the case of regular sparsity. For example, difference quotients between observation times provide simple approximate gradients, but these estimates are piecewise constant and would not be suitable for use in functional data analysis unless the observation times are dense. Spline smoothing to approximate the gradient of the trajectory over a fine grid is recommended by Ramsay and Silverman (2005). More generally, methods of numerical differentiation, including spline smoothing, are an integral part of the extensive literature on ill-posed inverse problems for linear operator equations. In this literature, the observation times are usually viewed as becoming dense (for the purpose of showing convergence), see Kirsch (1996); in particular, the assumption of asymptotically dense observation times plays a key role in the study of penalized least squares estimation and cross-validation (Nashed and Wahba, 1974, Wahba, 1977).

In this paper we develop a flexible Bayesian approach to reconstructing gradients, focusing on the regular sparsity case. The prior gradient is specified by a general multivariate normal distribution at $n$ fixed observation times, and a (conditional) tied-down Brownian motion between the observation times. This leads to a simple and explicit representation of the posterior distribution of the gradient in terms of the prior mean and the prior precision matrix at the observation times. Based on a sample of subjects, the nonparametric empirical Bayes method along with cross-validation is used to specify the hyperparameters in the prior, with the prior precision matrix estimated by constrained $\ell_1$ minimization (Cai et al., 2011). An important aspect of the proposed approach is that the reconstructed gradients can be computed rapidly over a fine grid, and then used directly as input into existing software, without the need for sophisticated smoothing techniques. In addition, our approach furnishes ways of assessing the errors in the reconstruction (using credible intervals around the posterior mean) and of assessing uncertainties in the conclusions of standard functional data analyses that use the reconstructed gradients as predictors (e.g., using repeated draws from the posterior distribution in a sensitivity analysis).

For background and an introduction to empirical Bayes methods we refer the interested reader to Efron (2010). Previous work on the use of such methods in the setting of growth curve modeling includes reconstructing individual growth velocity curves from parametric growth models (Shohoji et al., 1991), and nonparametric testing for differences in growth patterns between groups of individuals (Barry, 1995). A nonparametric hierarchical Bayesian growth curve model for reconstructing individual growth curves has also been developed (Arjas et al., 1997), but it requires the use of computationally intensive Markov chain Monte Carlo (MCMC) methods and may not be suitable for exploratory analyses.

As already mentioned, our motivation for developing the proposed Bayesian reconstruction method comes from the problem of carrying out functional data analysis for growth velocities given measurements of some developmental index at various ages. The first panel of Figure 1 shows cubic-spline-interpolated growth curves for 10 children based on their height (or length) measurements at birth, four, eight, and twelve months, and three, four and seven years. There are no data to fill in the gaps between these observation times. The corresponding growth velocities, obtained by differentiating the cubic splines (Ramsay and Silverman, 2005), are displayed in the second panel. Unfortunately, such growth rate curves are unsuitable surrogates for the actual growth rates: artifacts of the spline interpolation emerge as the dominant features, and there is no justification for ignoring all random variation between observation times. Moreover, it is not easy to see how to quantify the error involved in such reconstructions. Our proposed reconstruction method, as developed in Section 2, provides a way around this problem.

Simulation and real growth data examples are used to illustrate the performance of the proposed method in Sections 3 and 4. In Section 5 we compare our approach with the popular method of analyzing growth trajectories via latent variable models. Proofs of the main results are provided in the Appendix. We have developed an R package growthrate implementing the proposed reconstruction method (López-Pintado and McKeague, 2011); this package is available on the CRAN archive, and includes the real data set used in Section 4.
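To make the spline construction behind Figure 1 concrete, here is a minimal R sketch using base R's splinefun; the height values are hypothetical placeholders for real measurements.

```r
## A sketch of the Figure 1 construction: natural cubic spline interpolation
## of sparse height measurements, then differentiation to obtain a growth
## velocity curve. The height values below are made up for illustration.
age    <- c(0, 1/3, 2/3, 1, 3, 4, 7)               # ages in years
height <- c(51, 61, 68, 74, 95, 102, 122)          # heights in cm (hypothetical)
f  <- splinefun(age, height, method = "natural")   # interpolating cubic spline
tt <- seq(0, 7, by = 0.01)                         # fine grid
h  <- f(tt)                                        # interpolated growth curve
v  <- f(tt, deriv = 1)                             # growth velocity (cm/year)
```

As discussed above, the resulting velocity curve is dominated by interpolation artifacts, which is precisely what the Bayesian reconstruction of Section 2 is designed to avoid.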

2 Gradients of sparsely observed trajectories

In this section we develop the proposed Bayesian approach to recovering gradients. Explicit formulae for the posterior mean and covariance kernel of the gradients are provided. An empirical Bayes approach to estimating the hyperparameters in the prior is also developed.


2.1 Posterior gradients

We first consider in detail how to reconstruct the gradient for a single subject. In the growth velocity context, the observation times will typically vary slightly across the sample, but will be clustered around certain nominal ages. Let the observation times for the specific individual be $0 = t_1 < t_2 < \ldots < t_n = T$, and assume that the endpoints of the time interval over which the reconstruction is needed are included. The statistical problem is to estimate the whole growth velocity curve $X = \{X(t), 0 \le t \le T\}$ from data on its integral (i.e., growth) over the gaps between the observation times. Equivalently, we observe
$$y_i = \frac{1}{\Delta_i}\int_{t_i}^{t_{i+1}} X(s)\,ds, \qquad i = 1, \ldots, n-1, \tag{1}$$
where the $\Delta_i = t_{i+1} - t_i$ are the lengths of the intervals between the observation times. Here $y_i$ is the one-sided difference quotient estimate of $X(t)$ for $t \in [t_i, t_{i+1}]$, as commonly used in numerical differentiation.

Reconstructing $X$ based on such data is an ill-posed inverse problem in the sense that no unique solution exists, so some type of regularization is needed to produce a unique solution. When the observation times are equally spaced, the one-sided difference quotient can be derived as the $X$ minimizing $\|X\|_{L^2}$ under the constraint (1), see Kirsch (1996), page 97. A more sophisticated approach is to take into account the proximity of $t_i$ to the neighboring observation times $t_{i-1}$ and $t_{i+1}$, and estimate $X(t_i)$ by the second-order difference quotient $\bar{y}_i = w_i y_{i-1} + (1 - w_i)y_i$, where $w_i = \Delta_i/(\Delta_{i-1} + \Delta_i)$, for $i = 2, \ldots, n-1$. As we shall see, the building blocks needed to construct the proposed Bayes estimator consist of the $y_i$ and the $n$-vector $y = (y_1, \bar{y}_2, \ldots, \bar{y}_{n-1}, y_{n-1})^T$.

The central problem is to specify a flexible class of prior distributions for $X$ in such a way that it is tractable to find its posterior distribution. An unusual feature of our problem, however, is that a direct approach via Bayes formula does not work: the conditional distribution of $y_i$ given $X$ is degenerate, so there is no common dominating measure for all values of $X$ (i.e., there is no full likelihood). Our way around this difficulty is to first find the marginal posterior distribution of $X$ restricted to the observation times, for which the usual Bayes formula applies, and then show that this leads to a full posterior distribution. This approach turns out to be tractable when we specify the prior using the following hierarchical model: $X$ has a multivariate normal distribution at the observation times, and is a tied-down Brownian motion in the gaps between observation times (conditional on $X$ at those time points). This provides a fully coherent prior distribution of $X$. In some cases the prior of $X$ can be specified unconditionally (as with the shifted Brownian motion discussed in Section 2.4), but in general the hierarchical specification we use is simpler and provides greater flexibility. More precisely, we specify the prior on $X$ as follows:

1. At the observation times: $\mathbf{X} \equiv (X(t_1), \ldots, X(t_n))^T \sim N(\mu_0, \Sigma_0)$, where $\Sigma_0$ is non-singular.

2. The conditional distribution of $X$ given $\mathbf{X}$ is a tied-down Brownian motion with given infinitesimal variance $\sigma^2 > 0$.
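The following sketch simulates one curve from this hierarchical prior on a fine grid, with the hyperparameters $\mu_0$, $\Sigma_0$ and $\sigma$ taken as given; the tied-down construction in each gap follows the bridge representation (6) given in the Appendix.

```r
## Simulate one gradient curve from the hierarchical prior (a sketch).
## Step 1: multivariate normal at the knots; step 2: a scaled Brownian
## bridge in each gap, tied down at the knot values.
simulate_prior_curve <- function(tobs, mu0, Sigma0, sigma, m = 50) {
  n <- length(tobs)
  Xknots <- drop(mu0 + t(chol(Sigma0)) %*% rnorm(n))  # draw X at the knots
  grid <- tobs[1]; path <- Xknots[1]
  u <- seq(0, 1, length.out = m + 1)[-1]              # u = 1/m, ..., 1
  for (i in 1:(n - 1)) {
    Di <- tobs[i + 1] - tobs[i]
    W  <- cumsum(rnorm(m, sd = sqrt(1 / m)))          # Brownian motion at u
    B0 <- W - u * W[m]                                # Brownian bridge (0 at both ends)
    path <- c(path, sigma * sqrt(Di) * B0 +
                      Xknots[i] + (Xknots[i + 1] - Xknots[i]) * u)
    grid <- c(grid, tobs[i] + Di * u)
  }
  list(grid = grid, X = path)
}
```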

In the growth velocity setting, the observation times are typically chosen to concentrate data collection in periods of high variability (e.g., the first year of life), so it is natural that the prior should reflect such information. Moreover, allowing an arbitrary (multivariate normal) prior at the observation times provides flexibility that would not be possible using a Brownian motion prior for the whole of $X$. In addition, the availability of data at these time points makes it possible to specify the hyperparameters in the multivariate normal (as we see later), which is crucial for practical implementation of our approach.

We now state our main result giving the posterior distribution of $X$. In particular, the result shows that the posterior mean takes the computationally tractable form of a quadratic spline with knots at the observation times. The posterior mean is the best linear predictor of $X$, providing the optimal reconstruction of $X$ in the sense of mean-squared error.

Theorem 2.1. The posterior distribution of $X$ is Gaussian with mean
$$\hat\mu(t) = \hat\mu_i + [\hat\mu_{i+1} - \hat\mu_i](t - t_i)/\Delta_i + 6(t - t_i)(t_{i+1} - t)\left[y_i - (\hat\mu_i + \hat\mu_{i+1})/2\right]/\Delta_i^2$$
for $t \in [t_i, t_{i+1}]$, where
$$\hat{\boldsymbol{\mu}} = (\hat\mu_i) = (\Sigma_0^{-1} + Q)^{-1}(\Sigma_0^{-1}\mu_0 + Dy)$$

is the posterior mean of $\mathbf{X}$,
$$Q = \frac{3}{\sigma^2}\begin{pmatrix}
\frac{1}{\Delta_1} & \frac{1}{\Delta_1} & 0 & \cdots & 0 \\
\frac{1}{\Delta_1} & \frac{1}{\Delta_1} + \frac{1}{\Delta_2} & \frac{1}{\Delta_2} & \ddots & \vdots \\
0 & \frac{1}{\Delta_2} & \ddots & \ddots & 0 \\
\vdots & \ddots & \ddots & \ddots & \frac{1}{\Delta_{n-1}} \\
0 & \cdots & 0 & \frac{1}{\Delta_{n-1}} & \frac{1}{\Delta_{n-1}}
\end{pmatrix},$$
and
$$D = \frac{6}{\sigma^2}\,\mathrm{diag}\!\left(\frac{1}{\Delta_1},\; \ldots,\; \frac{1}{\Delta_{i-1}} + \frac{1}{\Delta_i},\; \ldots,\; \frac{1}{\Delta_{n-1}}\right).$$

The posterior covariance kernel of $X$ is $\hat K = \sigma^2\tilde K + K^*$, where
$$\tilde K(s,t) = (s \wedge t - t_i) - (s - t_i)(t - t_i)/\Delta_i - 3(s - t_i)(t - t_i)(t_{i+1} - s)(t_{i+1} - t)/\Delta_i^3$$
for $s, t \in [t_i, t_{i+1})$, with $\tilde K(s,t) = 0$ if $s$ and $t$ are in disjoint intervals;
$$K^*(s,t) = (a_k(s), b_k(s))\begin{pmatrix} \hat\Sigma_{kl} & \hat\Sigma_{k,l+1} \\ \hat\Sigma_{k+1,l} & \hat\Sigma_{k+1,l+1} \end{pmatrix}(a_l(t), b_l(t))^T$$
for $s \in [t_k, t_{k+1})$ and $t \in [t_l, t_{l+1})$, with $k, l = 1, \ldots, n-1$, where
$$a_i(t) = 1 - (t - t_i)/\Delta_i - 3(t - t_i)(t_{i+1} - t)/\Delta_i^2, \qquad b_i(t) = (t - t_i)/\Delta_i - 3(t - t_i)(t_{i+1} - t)/\Delta_i^2,$$
for $t \in [t_i, t_{i+1})$, $i = 1, \ldots, n-1$, and $\hat\Sigma = (\hat\Sigma_{ij}) = (\Sigma_0^{-1} + Q)^{-1}$ is the posterior covariance matrix of $\mathbf{X}$.

Remarks

1. The posterior mean of $X$ provided by Theorem 2.1 has a simple and explicit form that can be computed rapidly once the hyperparameters in the prior (namely $\mu_0$, $\Sigma_0$ and $\sigma^2$) are provided. We discuss various ways of specifying the hyperparameters in the next section.

2. The infinitesimal variance $\sigma^2$ can be regarded as a smoothing parameter, and plays the role of a time-scale, cf. the adaptive Bayesian estimation procedure of van der Vaart and van Zanten (2009) based on Gaussian random field priors with an unknown time-scale. When $\sigma^2 \to \infty$, the posterior distribution of $\mathbf{X}$ converges to its prior distribution, and between observation times the posterior variance of $X$ tends to infinity.

3. The matrix $Q$ represents the posterior precision (at the observation times) corresponding to a non-informative (improper) prior, and it is singular, reflecting the fact that we are dealing with $n$ parameters to be estimated from $n-1$ observations $y_1, \ldots, y_{n-1}$. The problem is ill-posed, but an "informative" prior provides regularization: the posterior precision matrix $\Sigma_0^{-1} + Q$ is non-singular.

4. In the special case that the prior distribution of $\mathbf{X}$ is a Gaussian Markov random field, i.e., non-neighboring components are conditionally independent given the rest (Rue and Held, 2005), its posterior distribution is also a Gaussian Markov random field. That is, since $Q$ is tridiagonal, whenever the prior precision matrix $\Sigma_0^{-1}$ is tridiagonal, so is the posterior precision matrix $\Sigma_0^{-1} + Q$. Tridiagonal matrices often arise in inverse problems, and efficient algorithms for computing their inverses and eigenvalues are available.

5. In typical Bayesian settings, the information in the data tends to swamp the prior information as the sample size increases. It can often be shown in such settings that the posterior contracts to the true parameter given that it is in the support of the prior distribution (Ghosal and van der Vaart, 2007, Knapik et al., 2011), but such results rely on the existence of a full likelihood and are not applicable in our setting.
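For concreteness, the posterior at the observation times can be computed directly from Theorem 2.1; the sketch below assumes the hyperparameters are given, with yraw holding the raw difference quotients $y_1, \ldots, y_{n-1}$.

```r
## A minimal sketch of Theorem 2.1 at the observation times; mu0, the prior
## precision Omega0 = solve(Sigma0), and sigma are assumed given.
posterior_at_knots <- function(tobs, yraw, mu0, Omega0, sigma) {
  n <- length(tobs)
  Delta <- diff(tobs)
  d <- 1 / Delta
  dd <- c(d[1], head(d, -1) + d[-1], d[n - 1])     # shared diagonal pattern
  ## the n-vector y = (y_1, ybar_2, ..., ybar_{n-1}, y_{n-1}) of Section 2.1
  w <- Delta[-1] / (head(Delta, -1) + Delta[-1])   # w_i, i = 2, ..., n-1
  y <- c(yraw[1], w * head(yraw, -1) + (1 - w) * yraw[-1], yraw[n - 1])
  ## tridiagonal Q (note the positive off-diagonal entries) and diagonal D
  Q <- diag(dd)
  Q[cbind(1:(n - 1), 2:n)] <- d
  Q[cbind(2:n, 1:(n - 1))] <- d
  Q <- (3 / sigma^2) * Q
  D <- (6 / sigma^2) * diag(dd)
  Sigmahat <- solve(Omega0 + Q)                    # posterior covariance of X
  muhat <- drop(Sigmahat %*% (Omega0 %*% mu0 + D %*% y))
  list(muhat = muhat, Sigmahat = Sigmahat)
}
```

Between the knots, $\hat\mu(t)$ is then filled in by the quadratic-spline formula of the theorem.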

2.2 Specifying the hyperparameters

Suppose that we are given data on $y$ for a sample of $N$ individuals, each having the same fixed set of observation times $t_1, \ldots, t_n$. How can we use such data to specify the hyperparameters? The hyperparameters are not identifiable in general, even if the prior distribution is correctly specified. To see this, note that the (Gaussian) marginal distribution of $y$ is determined by $p = n - 1 + n(n-1)/2$ means and covariances, but there are $n + n(n+1)/2 + 1 > p$ hyperparameters in the prior. Although it is possible to identify these hyperparameters by imposing extra structure on $\Sigma_0$ and using a parametric empirical Bayes approach (as we show in Section 2.4), the presence of prior misspecification would be a serious issue for applications. Another possibility would be to define a flexible higher-level prior for the hyperparameters, but this would again require the use of computationally intensive methods (MCMC).

Instead we adopt the following nonparametric empirical Bayes approach. The prior mean $\mu_0$ is naturally specified by the sample mean of $y$, and this does not require the prior to be correctly specified. The corresponding sample covariance matrix, $\hat\Sigma_N$, however, is singular [having rank $\min(N, n-1) < n$] and hence unstable for estimating $\Sigma_0$, and cannot specify the posterior distribution, which depends on the prior precision matrix $\Omega_0 = \Sigma_0^{-1}$. We use the constrained $\ell_1$ minimization method of sparse precision matrix estimation (CLIME) recently developed by Cai et al. (2011): $\Omega_0$ is specified as the (appropriately symmetrized) solution of
$$\min \|\Omega\|_1 \quad \text{subject to} \quad |\hat\Sigma_N \Omega - I|_\infty \le \lambda_N, \quad \Omega \in \mathbb{R}^{n \times n},$$
where the tuning parameter $\lambda_N$ is selected by 5-fold cross-validation using the loss function $\mathrm{Tr}[\mathrm{diag}(\Sigma\Omega - I)^2]$.

The infinitesimal variance $\sigma^2$, or equivalently $\sigma$, is selected by a form of cross-validation introduced by Wahba (1977). The prediction error based on leaving out an interior observation time ($t_{i+1}$, for some fixed $i = 1, \ldots, n-2$) is given by
$$\mathrm{CV}(\sigma) = \frac{1}{N}\sum_{j=1}^N E_{ij}\left(y_{ij} - \frac{1}{\Delta_i}\int_{t_i}^{t_{i+1}} \hat X_j^{-(i+1)}(s)\,ds\right)^2,$$
where $j$ indexes the subjects, and the expectation $E_{ij}$ is over draws $\hat X_j^{-(i+1)}(\cdot)$ from the posterior distribution of $X_j(\cdot)$ based on the reduced data with $t_{i+1}$ removed; here $\mu_0$ and $\Omega_0$ are specified as above, except the $(i+1)$th component is not used. This expression can be written explicitly using the bias-variance decomposition as
$$\mathrm{CV}(\sigma) = \frac{1}{N}\sum_{j=1}^N \left(y_{ij} - \frac{1}{\Delta_i}\int_{t_i}^{t_{i+1}} \hat\mu_j^{-(i+1)}(s)\,ds\right)^2 + \frac{1}{\Delta_i^2}\int_{t_i}^{t_{i+1}}\!\!\int_{t_i}^{t_{i+1}} \hat K^{-(i+1)}(s,u)\,ds\,du,$$
in terms of the mean and covariance kernel of $\hat X_j^{-(i+1)}(\cdot)$ that are available from Theorem 2.1; note that the covariance kernel only depends on the observation times, so it is not indexed by $j$.
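A minimal sketch of this empirical Bayes step follows. The CLIME optimization itself is delegated to an existing implementation; the sugm() interface of the CRAN package flare shown here is our assumption about that package and should be checked against its documentation.

```r
## Empirical Bayes hyperparameters (sketch). Y is an N x n matrix whose rows
## are the vectors y of Section 2.1, one per subject. The flare interface
## below is assumed, not verified.
library(flare)

mu0 <- colMeans(Y)                         # prior mean = sample mean of y
fit <- sugm(Y, method = "clime")           # CLIME path over a grid of lambda_N
sel <- sugm.select(fit, criterion = "cv")  # cross-validated choice of lambda_N
Omega0 <- sel$opt.icov                     # (symmetrized) precision estimate
```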

2.3 Variations in observation times among subjects

In this section we discuss how our approach to specifying the hyperparameters can be adapted to handle situations in which the observation times vary among subjects. In the applications we have in mind, most observation times tend to be close to "nominal" observation times $\{t_i, i = 1, \ldots, n\}$, so it is reasonable to use these as a first approximation. That is, in terms of the nominal observation times, and using the procedure described above, we find initial estimates $\hat\mu_j(\cdot)$ for all subjects, and a value of $\sigma$. Then, for the purpose of specifying $\mu_0$ and $\Omega_0$ in a way that is tailored to the observation times of the $k$th subject, we adjust the data on the other subjects to become
$$y_{ij}^{(k)} = \frac{1}{\Delta_i^{(k)}}\int_{t_{ik}}^{t_{i+1,k}} \hat\mu_j(s)\,ds, \qquad i = 1, \ldots, n_k, \; j \ne k,$$
where $\{t_{ik}, i = 1, \ldots, n_k\}$ are the actual observation times for the $k$th subject and $\Delta_i^{(k)} = t_{i+1,k} - t_{ik}$. The final estimate $\hat\mu_k(\cdot)$ is then calculated by applying our formula for $\hat\mu(\cdot)$ to the data on the $k$th subject, with the hyperparameters estimated from the adjusted data (thus borrowing strength from the whole sample). The adjustment has no effect when the observation times agree with their nominal values, and small perturbations around the nominal values would have little effect on the reconstructed gradients.

A computationally simpler approach, essentially equivalent to the one just described, is to restrict the posterior covariance kernel and mean based on the nominal observation times to the actual observation times for each given subject, thus directly obtaining suitable hyperparameters across the whole sample that adjust for any changes from the nominal observation times; this is the

approach implemented in the growthrate package.
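A sketch of the adjustment step follows, where muhat_j is an initial estimate from the nominal fit (as a function of age, e.g., the quadratic spline of Theorem 2.1) and tk holds the $k$th subject's actual observation times; the helper below is hypothetical and is not part of the growthrate package.

```r
## Average an initial estimate muhat_j over subject k's actual intervals,
## producing the adjusted pseudo-data y^(k) of this section (a sketch).
adjust_to_subject <- function(muhat_j, tk) {
  sapply(seq_len(length(tk) - 1), function(i)
    integrate(muhat_j, tk[i], tk[i + 1])$value / (tk[i + 1] - tk[i]))
}
```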

2.4 Example: shifted Brownian motion prior

Shifted Brownian motion priors have been used in various nonparametric Bayesian settings in recent years (van der Vaart and van Zanten, 2008a,b) and provide a simple illustration of Theorem 2.1. Suppose the prior distribution of $X$ at the observation times (i.e., the prior of $\mathbf{X}$) is specified so that
$$X(t_i) = \mu_i + \sigma_1 Z + \gamma B(t_i), \qquad i = 1, \ldots, n, \tag{2}$$
where $Z \sim N(0,1)$, $B$ is an independent standard Brownian motion, $\sigma_1 > 0$, $\gamma > 0$, and the prior mean is $\mu_0 = (\mu_1, \ldots, \mu_n)^T$ as before. In particular, when $\gamma = \sigma$ the prior distribution for the entire trajectory $X$ takes the form of a shifted Brownian motion. Under (2), the prior covariance matrix at the observation times has $(i,j)$th entry
$$(\Sigma_0)_{ij} = \sigma_1^2 + \gamma^2\min(t_i, t_j). \tag{3}$$

The prior precision matrix has a simple (tridiagonal) form similar to $Q$, namely
$$\Sigma_0^{-1} = \frac{1}{\gamma^2}\begin{pmatrix}
\frac{\gamma^2}{\sigma_1^2} + \frac{1}{\Delta_1} & -\frac{1}{\Delta_1} & 0 & \cdots & 0 \\
-\frac{1}{\Delta_1} & \frac{1}{\Delta_1} + \frac{1}{\Delta_2} & -\frac{1}{\Delta_2} & \ddots & \vdots \\
0 & -\frac{1}{\Delta_2} & \ddots & \ddots & 0 \\
\vdots & \ddots & \ddots & \ddots & -\frac{1}{\Delta_{n-1}} \\
0 & \cdots & 0 & -\frac{1}{\Delta_{n-1}} & \frac{1}{\Delta_{n-1}}
\end{pmatrix},$$
cf. Rue and Held (2005), page 99. In the special case that $\gamma^2 = \sigma^2/3$ the posterior covariance matrix becomes diagonal:
$$\hat\Sigma = (\Sigma_0^{-1} + Q)^{-1} = \frac{\sigma^2}{6}\,\mathrm{diag}(\Delta_{i-1} w_i,\; i = 1, \ldots, n),$$

where $w_1 \equiv \Delta_1/(\Delta_0 + \Delta_1)$, $\Delta_0 = 6\sigma_1^2/\sigma^2$, $w_n \equiv 1$, and the other $w_i$ are defined as in Section 2.1. The components of the posterior mean at the observation times are then given by
$$\hat\mu_i = \begin{cases}
(1 - w_1)y_1 + w_1(\mu_1 + \mu_2)/2, & i = 1; \\
\bar y_i + [\mu_i - (w_i\mu_{i-1} + (1 - w_i)\mu_{i+1})]/2, & i = 2, \ldots, n-1; \\
y_{n-1} + (\mu_n - \mu_{n-1})/2, & i = n.
\end{cases}$$
It can then be seen that $\hat\mu$ provides a uniformly consistent estimator of $X$ in the numerical analysis sense: if $\mu_i = \mu(t_i)$, where $\mu(\cdot)$ is a fixed continuous function, and $\max_{i=1,\ldots,n-1}\Delta_i \to 0$, then $\hat\mu(t) \to X(t)$ uniformly in $t$ for any continuous $X$.

A parametric empirical Bayes approach to specifying the hyperparameters can be developed in this setting. Conditioning on $X(t_i)$ and $X(t_{i+1})$ yields $2(n-1)$ estimating equations involving means and second moments:
$$E y_i = (\mu_i + \mu_{i+1})/2, \qquad E y_i^2 = (\mu_i + \mu_{i+1})^2/4 + \sigma^2\Delta_i/12 + (\sigma_i^2 + \sigma_{i+1}^2 + 2\sigma_{i,i+1})/4,$$

for $i = 1, \ldots, n-1$, where $\sigma_{i,i+1}$ is the prior covariance of $X(t_i)$ and $X(t_{i+1})$, and $\sigma_i^2$ is the prior variance of $X(t_i)$. Under the shifted Brownian motion model (2), the prior distribution has only $n+3$ parameters ($\mu_1, \ldots, \mu_n$, $\sigma_1^2$, $\gamma^2$ and $\sigma^2$), and the second-moment estimating equation simplifies to
$$E y_i^2 = (\mu_i + \mu_{i+1})^2/4 + \sigma^2\Delta_i/12 + \sigma_1^2 + \gamma^2(t_i + \Delta_i/4).$$
Another $(n-1)(n-2)/2$ estimating equations are obtained from the covariances of the $y_i$:
$$E y_i y_j = (\mu_i + \mu_{i+1})(\mu_j + \mu_{j+1})/4 + \sigma_1^2 + \gamma^2(t_i + \Delta_i/2),$$
for $i = 1, \ldots, n-2$, $j = i+1, \ldots, n-1$, provided $n \ge 3$. The marginal distribution of the data is Gaussian with mean and covariance only depending on the prior means $\mu_i$ through the sums

$\mu_i + \mu_{i+1}$, $i = 1, \ldots, n-1$, so these means are not identifiable unless one of them (say $\mu_1$) is known. Once $\mu_1$ is given (or specified, say, using the sample mean of $y_1$), all the other parameters in the shifted Brownian motion prior are identifiable.

In view of known convergence-rate results for the CLIME estimator (Cai et al., 2011), it may be possible to extend the consistency result shown above to the general setting, to the effect that the empirical Bayes version of each $\hat\mu(t)$ converges uniformly to $X(t)$ as $\max_{i=1,\ldots,n-1}\Delta_i \to 0$ and $N \to \infty$. However, $\hat\mu(t)$ is not analytically tractable in general (only in the special case discussed in this section), so this would be a challenging problem.
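For reference, the closed-form posterior mean in the special case $\gamma^2 = \sigma^2/3$ is simple to code directly; the following sketch uses the paper's notation, with y the raw difference quotients $y_1, \ldots, y_{n-1}$ and mu the prior means at the knots.

```r
## Closed-form posterior mean at the knots when gamma^2 = sigma^2/3 (sketch,
## requiring n >= 3; Delta holds the n-1 interval lengths).
posterior_mean_special <- function(y, mu, Delta, sigma, sigma1) {
  n <- length(mu)
  Delta0 <- 6 * sigma1^2 / sigma^2
  w <- numeric(n)
  w[1] <- Delta[1] / (Delta0 + Delta[1]); w[n] <- 1
  for (i in 2:(n - 1)) w[i] <- Delta[i] / (Delta[i - 1] + Delta[i])
  muhat <- numeric(n)
  muhat[1] <- (1 - w[1]) * y[1] + w[1] * (mu[1] + mu[2]) / 2
  for (i in 2:(n - 1)) {
    ybar <- w[i] * y[i - 1] + (1 - w[i]) * y[i]  # second-order quotient
    muhat[i] <- ybar + (mu[i] - (w[i] * mu[i - 1] + (1 - w[i]) * mu[i + 1])) / 2
  }
  muhat[n] <- y[n - 1] + (mu[n] - mu[n - 1]) / 2
  muhat
}
```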

3 Simulation study

In this section we report the results of a simulation study designed to assess the performance of $\hat\mu(t)$ as a method of estimating $X$. In order to calibrate $\hat\mu(t)$, the prior mean and precision matrix are specified from the data as in Section 2.2. We also examine the performance of the cross-validation method for choosing $\sigma$. The simulation model for generating the underlying $X$ is defined in a parallel fashion to the prior:

1. At the observation times, $\mathbf{X} \equiv (X(t_1), \ldots, X(t_n))^T$ is a zero-mean stationary Gaussian Markov random field with covariance matrix having $(i,j)$th entry $e^{-\alpha|t_i - t_j|}$, where $\alpha > 0$.

2. The conditional distribution of $X$ given $\mathbf{X}$ is a tied-down fractional Brownian motion (fBm) with Hurst exponent $0 < H < 1$.

The tied-down fBm used here is represented (conditionally on $\mathbf{X}$) as in (6), except that $B_i^0(t) = B_i(t) - tB_i(1)$, $t \in [0,1]$, where the $B_i$ are independent standard fBms, and $\sigma = 1$. The Hurst exponent $H$ is a measure of the smoothness of the sample paths between the observation times: $H = 1/2$ gives standard tied-down Brownian motion and agrees with the prior (provided $\sigma = 1$); we also consider the cases $H = 0.7$ and $0.9$ to give examples with much smoother sample path behavior than Brownian motion. We considered two values of the simulation model parameter, $\alpha = 3$ and $6$, representing "high" and "low" levels of correlation in $\mathbf{X}$, respectively. The sample size is fixed at $N = 100$, and we consider $n = 5$ and $n = 10$ equispaced observation times on the interval $[0,1]$ (that is, $T = 1$).

Figure 2 shows boxplots comparing the MSE of our approach with the MSE of the spline interpolation approach described in the Introduction. The boxplots are based on 50 independent samples, with $\alpha = 3$ in the simulation model. Web Figure 1 shows the corresponding boxplots for $\alpha = 6$, and the results are very similar. Here the MSE of $\hat\mu(\cdot)$ is defined by
$$\mathrm{MSE} = \frac{1}{N}\sum_{j=1}^N \frac{1}{n}\sum_{i=1}^n (\hat\mu_j(t_i) - X_j(t_i))^2,$$
where $\hat\mu_j(\cdot)$ is the calibrated posterior mean of $X_j(\cdot)$ with $\sigma = 1$. In all cases, $\hat\mu(\cdot)$ has a smaller median MSE than the spline estimator, and in some cases the reduction is more than 50% (namely for $n = 10$ and $H = 0.9$). Note that the improvement of $\hat\mu(\cdot)$ over the spline method increases with $H$ (and also with $n$). This indicates that $\hat\mu(\cdot)$ is robust to departures from the prior that involve smoother trajectories $X$. Similar results (not shown) can be obtained in terms of the mean absolute deviation.

We applied the cross-validation method for choosing $\sigma$ to a single sample generated by the above simulation model with $\alpha = 3$, $H = 1/2$ and $n = 10$. Figure 3 shows plots of the cross-validation error $\mathrm{CV}(\sigma)$ based on removing three of the interior observation times, as well as based on averaging $\mathrm{CV}(\sigma)$ with each interior point successively removed. In all cases, the minimum is located close to the true value of $\sigma = 1$, so we calibrated $\hat\mu(\cdot)$ using $\sigma = 1$ in all the simulations reported above. We have also carried out extensive simulations (not shown) based on shifted Brownian motion priors, and found that the two competing approaches have comparable MSE; this is not surprising because, as can be seen from the explicit form in Section 2.4, $\hat\mu(\cdot)$ is very close to the spline estimator in this case. In addition, we found that measurement error in the observations $y$ has little effect on the accuracy of the reconstructions.
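For completeness, here is a sketch of one draw from the simulation model above (items 1 and 2), generating the fBm in each gap from its covariance matrix via a Cholesky factor; the grid sizes are illustrative.

```r
## One realization from the simulation model (sketch).
set.seed(1)
n <- 10; tobs <- seq(0, 1, length.out = n)   # equispaced knots, T = 1
alpha <- 3; H <- 0.5; sigma <- 1; m <- 50    # m = fine grid points per gap

## 1. X at the observation times: zero-mean GMRF, cov exp(-alpha |t_i - t_j|)
S <- exp(-alpha * abs(outer(tobs, tobs, "-")))
Xobs <- drop(t(chol(S)) %*% rnorm(n))

## 2. Tied-down fBm in each gap, generated from the fBm covariance
fbm_cov <- function(u) {
  0.5 * outer(u, u, function(a, b) a^(2 * H) + b^(2 * H) - abs(a - b)^(2 * H))
}
u <- seq(0, 1, length.out = m + 1)[-1]       # u = 1/m, ..., 1
L <- t(chol(fbm_cov(u)))                     # reused for every gap
grid <- tobs[1]; path <- Xobs[1]
for (i in 1:(n - 1)) {
  Di <- tobs[i + 1] - tobs[i]
  B  <- drop(L %*% rnorm(m))                 # fBm at the u's (B(0) = 0 implicit)
  B0 <- B - u * B[m]                         # tied down: B0(t) = B(t) - t B(1)
  path <- c(path, sigma * sqrt(Di) * B0 + Xobs[i] + (Xobs[i + 1] - Xobs[i]) * u)
  grid <- c(grid, tobs[i] + Di * u)
}
## (grid, path) is one realization of X over [0, 1]
```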

4 Growth velocity curves

In this section we illustrate our approach using data from the Collaborative Perinatal Project (CPP). This was an NIH study of prenatal and familial antecedents of childhood growth and development conducted during 1959–1974 at 12 medical centers across the United States. There were approximately 58,000 study pregnancies, with mothers examined during pregnancy, labor and delivery. The children were given neonatal examinations and follow-up examinations at four, eight, and twelve months, and three, four, and seven years. We restrict attention to the subsample of girls having birthweight 1500–4000 g and gestational age 37–42 weeks, who were not breast-fed, whose mothers were aged 20–40 years and did not smoke during pregnancy, and for whom complete data on height (at these ages) and all the covariates are available. This gave a sample of size $N = 532$. The data are included in the growthrate package, and additional discussion can be found in McKeague et al. (2011).

Figure 5 shows the cross-validation error based on removing each of the 5 interior observation times (1/3, 2/3, 1, 3, 4 years) in turn. Note that, due to the non-equispaced observation times, the various curves are ordered according to which time point is removed. The curves suggest that a


choice of $\sigma$ in the range 1–3 is reasonable, although, since cross-validation tends to overfit, the lower end of this range might be preferable. Figure 4 gives the reconstructed growth velocity curves for two of these subjects, and for three choices of $\sigma$. The choice $\sigma = 1$ produces very tight bands, which may be unrealistic because the growth rate is unlikely to have sharp bends at the observation times; the more conservative choices $\sigma = 2$ and $3$ allow enough flexibility in this regard and appear to be more reasonable. Notice that the $\sigma = 2$ and $\sigma = 3$ bands bulge between observation times (this is especially noticeable in the last observation time interval), which is a desirable feature since we would expect greater precision in the estimates close to the observation times. Plots of the posterior mean growth velocity curves for a random subset of 200 subjects based on $\sigma = 1, 2, 3$ are provided in Web Figure 2.

5 Discussion

The standard approach to the longitudinal data analysis of growth trajectories is via mixed-effects or latent variable models (Bollen and Curran, 2006, Wu and Zhang, 2006), and in this section we compare it with the proposed approach.

The hierarchical prior we use for growth velocity can be seen as a flexible nonparametric model for an infinite-dimensional latent process. This contrasts with the standard mixed model approach of representing a trajectory by a polynomial, allowing additive uncorrelated random disturbances at the observation times, and treating some or all of the coefficients as random, i.e., a finite-dimensional latent structure. Moreover, our approach does not model within-subject (within-curve) and between-subject (between-curve) variations separately, as in mixed-effects models. Rather, the between-curve variation is represented by the prior, a random effect specified in terms of a tied-down Brownian motion process; this prior also suffices to provide the within-subject variation that in mixed models is typically provided by the uncorrelated disturbance terms, or white noise. At the infinitesimal scale, Brownian motion is white noise, so in a sense the two approaches are parallel in their handling of within-subject variation, but the advantage of using a single prior is that the full power of the Bayesian approach comes into play. In particular, this allows a closed-form calculation of the estimated growth velocity curves, without the need for the sophisticated numerical methods that play a role in fitting complex mixed models. In summary, although mixed models provide an array of effective techniques for understanding trajectories, and longitudinal data more generally, in the context of growth velocity reconstruction we believe that the proposed approach offers some advantages: greater flexibility and computational efficiency.

A referee raised the question of whether the proposed approach is sensitive to outliers. The sample mean of $y$ could indeed be a poor estimate of the prior mean $\mu_0$ if there are outliers. The same issue could be raised about the CLIME estimator we use for $\Sigma_0^{-1}$ (as with any method based on a sample mean or sample covariance). One recourse would be to use robust estimators for $\mu_0$ and $\Sigma_0^{-1}$, but, as our reconstruction is linear in the data $y$, it would still be sensitive to outliers. A better recourse is to carefully prescreen the data for outliers before using the method. In the real data examples (of childhood growth curves) that we have studied, it has not been necessary to remove outliers.

Supplementary Materials

Web Figures referenced in Sections 3 and 4, and R code implementing the simulation study in Section 3, are available with this paper at the Biometrics website on Wiley Online Library.


Acknowledgments. The work of Ian McKeague was supported by NIH Grant R01GM095722-01. The authors thank Russell Millar for his helpful suggestions.

References

Arjas, E., Liu, L., and Maglaperidze, N. (1997). Prediction of growth: a hierarchical Bayesian approach. Biometrical Journal 39, 741–759.

Barker, D. J. P., Osmond, C., Forsén, T. J., Kajantie, E., and Eriksson, J. G. (2005). Trajectories of growth among children who have coronary events as adults. The New England Journal of Medicine 353, 1802–1809.

Barry, D. (1995). A Bayesian model for growth curve analysis. Biometrics 51, 639–655.

Bollen, K. A. and Curran, P. J. (2006). Latent Curve Models: A Structural Equation Perspective. Wiley Series in Probability and Statistics. Hoboken: Wiley-Interscience.

Cai, T., Liu, W., and Luo, X. (2011). A constrained $\ell_1$ minimization approach to sparse precision matrix estimation. Journal of the American Statistical Association 106, 594–607.

Dosenbach, N. U. F., Nardos, B., Cohen, A. L., Fair, D. A., Power, J. D., Church, J. A., et al. (2010). Prediction of individual brain maturity using fMRI. Science 329, 1358–1361.

Efron, B. (2010). Large-Scale Inference: Empirical Bayes Methods for Estimation, Testing, and Prediction. Institute of Mathematical Statistics Monographs. Cambridge: Cambridge University Press.

Ferraty, F. and Vieu, P. (2006). Nonparametric Functional Data Analysis. New York: Springer.

Ghosal, S. and van der Vaart, A. W. (2007). Convergence rates of posterior distributions for non-i.i.d. observations. The Annals of Statistics 35, 192–223.

Hall, P., Müller, H. G., and Wang, J. L. (2006). Properties of principal components methods for functional and longitudinal data analysis. The Annals of Statistics 34, 1493–1517.

James, G., Hastie, T. J., and Sugar, C. A. (2000). Principal component models for sparse functional data. Biometrika 87, 587–602.

Kirsch, A. (1996). An Introduction to the Mathematical Theory of Inverse Problems. New York: Springer.

Knapik, B. T., van der Vaart, A. W., and van Zanten, J. H. (2011). Bayesian inverse problems with Gaussian priors. The Annals of Statistics 39, 2626–2657.

Liu, B. and Müller, H. G. (2009). Estimating derivatives for samples of sparsely observed functions, with application to online auction dynamics. Journal of the American Statistical Association 104, 704–717.

López-Pintado, S. and McKeague, I. W. (2011). growthrate: Bayesian reconstruction of growth velocity. R package version 1.0, http://cran.r-project.org/package=growthrate.

McKeague, I. W., López-Pintado, S., Hallin, M., and Šiman, M. (2011). Analyzing growth trajectories. Journal of Developmental Origins of Health and Disease 2, 322–329.

Nashed, M. Z. and Wahba, G. (1974). Convergence rates of approximate least squares solutions of linear integral and operator equations of the first kind. Mathematics of Computation 28, 69–80.

Ramsay, J. O. and Silverman, B. W. (2005). Functional Data Analysis. New York: Springer.

Rice, J. and Wu, C. (2000). Nonparametric mixed effects models for unequally sampled noisy curves. Biometrics 57, 253–259.

Rue, H. and Held, L. (2005). Gaussian Markov Random Fields. Boca Raton: Chapman & Hall/CRC.

Shohoji, T., Kanefuji, K., Sumiya, T., and Qin, T. (1991). A prediction of individual growth of height according to an empirical Bayesian approach. Annals of the Institute of Statistical Mathematics 43, 607–619.

van der Vaart, A. W. and van Zanten, J. H. (2008a). Rates of contraction of posterior distributions based on Gaussian process priors. The Annals of Statistics 36, 1435–1463.

van der Vaart, A. W. and van Zanten, J. H. (2008b). Reproducing kernel Hilbert spaces of Gaussian priors. In Pushing the Limits of Contemporary Statistics: Contributions in Honor of Jayanta K. Ghosh, volume 3 of Inst. Math. Stat. Collect., pages 200–222. Beachwood, OH: Institute of Mathematical Statistics.

van der Vaart, A. W. and van Zanten, J. H. (2009). Adaptive Bayesian estimation using a Gaussian random field with inverse gamma bandwidth. The Annals of Statistics 37, 2655–2675.

Wahba, G. (1977). Practical approximate solutions to linear operator equations when the data are noisy. SIAM Journal on Numerical Analysis 14, 651–667.

Wu, H. and Zhang, J. T. (2006). Nonparametric Regression Methods for Longitudinal Data Analysis. Wiley Series in Probability and Statistics. Hoboken: Wiley-Interscience.

Yao, F., Müller, H. G., and Wang, J. L. (2005a). Functional data analysis for sparse longitudinal data. Journal of the American Statistical Association 100, 577–590.

Yao, F., Müller, H. G., and Wang, J. L. (2005b). Functional linear regression analysis for longitudinal data. The Annals of Statistics 33, 2873–2903.

Appendix

Proof of Theorem 2.1. The first part of the proof is to determine the posterior distribution of $\mathbf{X}$. In terms of its prior density $p(x)$, the posterior density of $\mathbf{X}$ is given by Bayes formula as
$$p(x|y_1, \ldots, y_{n-1}) \propto p(y_1, \ldots, y_{n-1}|x)\,p(x) \propto \left[\prod_{i=1}^{n-1} p(y_i|x_i, x_{i+1})\right]p(x), \tag{4}$$
where $x = (x_1, \ldots, x_n)^T$, the observation in the $i$th time interval is $y_i$, and we have used the independent increments property of Brownian motion to separate the terms. Also, using standard properties of Brownian motion, it can be shown that $p(y_i|x_i, x_{i+1})$ is normal with mean $(x_i + x_{i+1})/2$ and variance $\sigma^2\Delta_i/12$. Apart from the addition of a constant, the log-likelihood term above is given (as a function of $x$ for fixed $y_1, \ldots, y_{n-1}$) by
$$\begin{aligned}
\log[p(y_1, \ldots, y_{n-1}|x)] &= -\frac{6}{\sigma^2}\sum_{i=1}^{n-1}\left[(x_i^2 + x_{i+1}^2 + 2x_i x_{i+1})/4 - y_i(x_i + x_{i+1})\right]\big/\Delta_i \\
&= -\frac{3}{2\sigma^2}\left[\frac{x_1^2}{\Delta_1} + \sum_{i=2}^{n-1}\left(\frac{1}{\Delta_{i-1}} + \frac{1}{\Delta_i}\right)x_i^2 + \frac{x_n^2}{\Delta_{n-1}} + 2\sum_{i=1}^{n-1}\frac{x_i x_{i+1}}{\Delta_i}\right] \\
&\quad + \frac{6}{\sigma^2}\left[\frac{y_1}{\Delta_1}x_1 + \sum_{i=2}^{n-1}\left(\frac{y_{i-1}}{\Delta_{i-1}} + \frac{y_i}{\Delta_i}\right)x_i + \frac{y_{n-1}}{\Delta_{n-1}}x_n\right] \\
&= -\frac{1}{2}x^T Q x + b^T x, \tag{5}
\end{aligned}$$
where $Q$ is defined in the statement of the theorem and
$$b = \frac{6}{\sigma^2}\left(\frac{y_1}{\Delta_1},\; \ldots,\; \frac{y_{i-1}}{\Delta_{i-1}} + \frac{y_i}{\Delta_i},\; \ldots,\; \frac{y_{n-1}}{\Delta_{n-1}}\right)^T.$$

Writing the prior density of $\mathbf{X}$ in the form
$$p(x) \propto \exp\left(-\frac{1}{2}x^T Q_0 x + b_0^T x\right),$$
where $Q_0 = \Sigma_0^{-1}$ and $b_0 = \Sigma_0^{-1}\mu_0$ (see Rue and Held (2005), page 27), and using (4) and (5), we obtain
$$p(x|y_1, \ldots, y_{n-1}) \propto \exp\left(-\frac{1}{2}x^T \hat Q x + \hat b^T x\right),$$
where $\hat Q = \Sigma_0^{-1} + Q$ and $\hat b = \Sigma_0^{-1}\mu_0 + b$. This implies that the posterior distribution of $\mathbf{X}$ is Gaussian with covariance matrix $\hat\Sigma = (\Sigma_0^{-1} + Q)^{-1}$ and mean
$$\hat{\boldsymbol{\mu}} = \hat Q^{-1}\hat b = \hat\Sigma(\Sigma_0^{-1}\mu_0 + b) = \hat\Sigma(\Sigma_0^{-1}\mu_0 + Dy).$$

The next part of the proof is to determine the conditional distribution of $X$ given $\mathbf{X}$ and the data. From the structure of the prior distribution of $X$ between observation times, the conditional distribution of $X$ given $\mathbf{X}$ and the data coincides with the distribution of the process

$$X(t) = \sigma\Delta_i^{1/2} B_i^0((t - t_i)/\Delta_i) + X(t_i) + [X(t_{i+1}) - X(t_i)](t - t_i)/\Delta_i \tag{6}$$
for $t \in [t_i, t_{i+1})$, where $B_i^0$, $i = 1, \ldots, n-1$ are independent standard Brownian bridges, subject to the constraint (1) imposed on $X$ by the data, i.e.,
$$\sigma\Delta_i^{1/2}\int_0^1 B_i^0(s)\,ds + (X(t_i) + X(t_{i+1}))/2 = y_i.$$
From Lemma 1 below, which provides the conditional distribution of $B_i^0$ given $\int_0^1 B_i^0(s)\,ds$, it is then seen that the conditional distribution of $X$ given $\mathbf{X}$ and the data is Gaussian

with mean
$$E(X(t)|\mathbf{X}, y_1, \ldots, y_{n-1}) = X(t_i) + [X(t_{i+1}) - X(t_i)](t - t_i)/\Delta_i + 6(t - t_i)(t_{i+1} - t)\left[y_i - (X(t_i) + X(t_{i+1}))/2\right]/\Delta_i^2$$
for $t \in [t_i, t_{i+1})$, and covariance kernel $\sigma^2\tilde K$, where $\tilde K$ is defined in the statement of the theorem. Setting $z_i = X(t_i) - \hat\mu_i$ and rearranging terms in the above display then gives
$$E(X(t)|\mathbf{X}, y_1, \ldots, y_{n-1}) = \hat\mu(t) + Z(t),$$
where
$$Z(t) = z_i + (z_{i+1} - z_i)(t - t_i)/\Delta_i - 3(t - t_i)(t_{i+1} - t)(z_i + z_{i+1})/\Delta_i^2 = a_i(t)z_i + b_i(t)z_{i+1}$$
for $t \in [t_i, t_{i+1})$, $i = 1, \ldots, n-1$. Here $a_i(t)$ and $b_i(t)$ are defined in the statement of the theorem.

The final step of the proof is to remove the conditioning on $\mathbf{X}$. From the first part of the proof, we have that the posterior distribution of $(z_1, \ldots, z_n)^T = \mathbf{X} - \hat{\boldsymbol{\mu}}$ is Gaussian with mean zero and covariance matrix $\hat\Sigma$. It follows immediately that the posterior distribution of the process $Z$ is Gaussian with mean zero and covariance kernel $K^*$, as defined in terms of $\hat\Sigma$ in the statement of the theorem. As we showed above, the conditional distribution of $X$ given $Z$ (or $\mathbf{X}$) and the data is Gaussian with mean function $\hat\mu + Z$ and covariance kernel $\sigma^2\tilde K$. Since $\hat\mu$ and $\tilde K$ do not depend on $Z$, it follows using the convolution formula that the posterior distribution of $X$ is Gaussian with mean function $\hat\mu$ and covariance kernel $\hat K = \sigma^2\tilde K + K^*$.

Lemma 1. Let $B^0$ be a standard Brownian bridge. The conditional distribution of $B^0$ given $\int_0^1 B^0(s)\,ds$ is Gaussian with mean $\mu^0(t) = 6t(1-t)\int_0^1 B^0(s)\,ds$ and covariance kernel
$$K^0(s,t) = s \wedge t - st - 3ts(1-t)(1-s), \qquad s, t \in [0,1].$$

Proof. Note that $W = (B^0(s), B^0(t), \int_0^1 B^0(u)\,du)^T$ is a zero-mean Gaussian random vector. Partition its covariance matrix as
$$\Sigma = \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix},$$
where $\Sigma_{11}$ and $\Sigma_{22}$ are the covariance matrices of $W^{(1)} = (B^0(s), B^0(t))^T$ and $W^{(2)} = \int_0^1 B^0(u)\,du$, respectively. Then, from the covariance of $B^0$,
$$\Sigma_{11} = \begin{pmatrix} s(1-s) & s \wedge t - st \\ s \wedge t - st & t(1-t) \end{pmatrix}, \qquad \Sigma_{12} = \Sigma_{21}^T = (s(1-s)/2,\; t(1-t)/2)^T,$$
and $\Sigma_{22} = 1/12$. The conditional distribution of $W^{(1)}$ given $W^{(2)}$ is Gaussian with mean $\mu^{(1)} + \Sigma_{12}\Sigma_{22}^{-1}(W^{(2)} - \mu^{(2)})$, where $\mu^{(1)}$ and $\mu^{(2)}$ are their respective means, and covariance $\Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}$. The result now follows since the finite-dimensional distributions of a Gaussian process are determined by its mean and covariance kernel.
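As a quick sanity check on Lemma 1, one can verify by Monte Carlo that $\mathrm{Cov}(B^0(t), \int_0^1 B^0(s)\,ds) = t(1-t)/2$, the cross-covariance $\Sigma_{12}$ used in the proof; a sketch:

```r
## Monte Carlo check of Lemma 1 (sketch): the covariance between B0(t) and
## the integral of B0 should equal t(1 - t)/2.
set.seed(1)
m <- 500; nrep <- 5000
u <- (1:m) / m
sims <- replicate(nrep, {
  W <- cumsum(rnorm(m, sd = sqrt(1 / m)))  # Brownian motion on the grid
  W - u * W[m]                             # tie down to a Brownian bridge
})
ints <- colMeans(sims)                     # Riemann sums approximating the integral
t0 <- 0.3
cov(sims[which.min(abs(u - t0)), ], ints)  # should be near 0.3 * 0.7 / 2 = 0.105
```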

[Figure 1: Growth curves based on natural cubic spline interpolation between the observation times (left panel; length in cm vs. age in years), and the corresponding growth velocities (right panel; growth rate in cm/year vs. age in years).]

[Figure 2: Simulation model with $\alpha = 3$. Boxplots comparing the MSE of the proposed estimator $\hat\mu(\cdot)$ ("muhat") and the spline estimator of $X$ at the observation times; $n = 5$ (first row), $n = 10$ (second row), $H = 0.5$ (first column), $H = 0.7$ (second column), $H = 0.9$ (third column).]

[Figure 3: Simulation model with $\alpha = 3$, $n = 10$, $H = 0.5$. The cross-validation error $\mathrm{CV}(\sigma)$ over a fine grid of values of $\sigma$, based on removing each of the first 3 interior observation times (left panel), and based on averaging $\mathrm{CV}(\sigma)$ with all interior points successively removed (right panel).]

[Figure 4: Reconstructed growth velocity curves (in cm/year vs. age in years) for two subjects, one per row; posterior mean (solid line) and pointwise 95% credible intervals (dashed lines) based on $\sigma = 1, 2, 3$ for the first, second and third plots in each row, respectively.]

[Figure 5: The cross-validation error $\mathrm{CV}(\sigma)$ over a fine grid of values of $\sigma$ in the growth velocity example, based on removing each of the 5 interior observation times (1/3, 2/3, 1, 3, 4 years), CV2, ..., CV6, respectively.]
