arXiv:1009.2707v2 [math.ST] 14 Mar 2011
Electronic Journal of Statistics Vol. 5 (2011) 127–145
ISSN: 1935-7524
DOI: 10.1214/11-EJS601

PAC-Bayesian bounds for sparse regression estimation with exponential weights

Pierre Alquier*
CREST - ENSAE, 3 avenue Pierre Larousse, 92240 Malakoff, France
and LPMA - Université Paris 7, 175 rue du Chevaleret, 75205 Paris Cedex 13, France
e-mail: [email protected]
url: http://alquier.ensae.net/

and

Karim Lounici
School of Mathematics, Georgia Institute of Technology, Atlanta, GA 30332-0160, USA
e-mail: [email protected]
url: http://people.math.gatech.edu/~klounici6/

Abstract: We consider the sparse regression model where the number of parameters p is larger than the sample size n. The difficulty when considering high-dimensional problems is to propose estimators achieving a good compromise between statistical and computational performances. The Lasso is the solution of a convex minimization problem, hence computable for large values of p. However, stringent conditions on the design are required to establish fast rates of convergence for this estimator. Dalalyan and Tsybakov [17–19] proposed an exponential weights procedure achieving a good compromise between the statistical and computational aspects. This estimator can be computed for reasonably large p and satisfies a sparsity oracle inequality in expectation for the empirical excess risk only under mild assumptions on the design. In this paper, we propose an exponential weights estimator similar to that of [17] but with improved statistical performances. Our main result is a sparsity oracle inequality in probability for the true excess risk.

AMS 2000 subject classifications: Primary 62J07; secondary 62J05, 62G08, 62F15, 62B10, 68T05.
Keywords and phrases: Sparsity oracle inequality, high-dimensional regression, exponential weights, PAC-Bayesian inequalities.

Received September 2010.

*Research partially supported by the French "Agence Nationale pour la Recherche" under grant ANR-09-BLAN-0128 "PARCIMONIE".
Contents

1 Introduction ................................................ 128
2 Sparsity oracle inequality in expectation .................. 131
3 Sparsity oracle inequality in probability .................. 132
4 Proofs ...................................................... 135
  4.1 Proofs of Section 2 ..................................... 135
  4.2 Proof of Theorem 3.1 .................................... 139
References .................................................... 143
1. Introduction

We observe n independent pairs $(X_1, Y_1), \ldots, (X_n, Y_n) \in \mathcal{X} \times \mathbb{R}$ (where $\mathcal{X}$ is any measurable set) such that

$$ Y_i = f(X_i) + W_i, \quad 1 \le i \le n, \qquad (1.1) $$

where $f : \mathcal{X} \to \mathbb{R}$ is the unknown regression function and the noise variables $W_1, \ldots, W_n$ are independent of the design $(X_1, \ldots, X_n)$ and satisfy $\mathbb{E} W_i = 0$ and $\mathbb{E} W_i^2 \le \sigma^2$ for any $1 \le i \le n$ and some known $\sigma^2 > 0$. The distribution of the sample is denoted by $\mathbb{P}$ and the corresponding expectation by $\mathbb{E}$.

For any function $g : \mathcal{X} \to \mathbb{R}$ define $\|g\|_n = \big( \frac{1}{n} \sum_{i=1}^n g(X_i)^2 \big)^{1/2}$ and $\|g\| = \big( \mathbb{E} \|g\|_n^2 \big)^{1/2}$. Let $F = \{\phi_1, \ldots, \phi_p\}$ be a set—called dictionary—of functions $\phi_j : \mathcal{X} \to \mathbb{R}$ such that $\|\phi_j\| = 1$ for any $j$ (this assumption can be relaxed). For any $\theta \in \mathbb{R}^p$ define $f_\theta = \sum_{j=1}^p \theta_j \phi_j$, the empirical risk

$$ r(\theta) = \frac{1}{n} \sum_{i=1}^n \big( Y_i - f_\theta(X_i) \big)^2, $$

and the integrated risk

$$ R(\theta) = \mathbb{E}\left[ \frac{1}{n} \sum_{i=1}^n \big( Y_i' - f_\theta(X_i') \big)^2 \right], $$

where $\{(X_1', Y_1'), \ldots, (X_n', Y_n')\}$ is an independent replication of $\{(X_1, Y_1), \ldots, (X_n, Y_n)\}$. Let us choose $\bar\theta \in \arg\min_{\theta \in \mathbb{R}^p} R(\theta)$. Note that the minimum may not be unique; however, we do not need to treat the identifiability question since we consider in this paper the prediction problem, i.e., finding an estimator $\hat\theta_n$ such that $R(\hat\theta_n)$ is close to $\min_{\theta \in \mathbb{R}^p} R(\theta)$ up to a positive remainder term as small as possible.

It is a known fact that the least squares estimator $\hat\theta_n^{LSE} \in \arg\min_{\theta \in \mathbb{R}^p} r(\theta)$ performs poorly in high dimension $p > n$. Indeed, consider for instance the deterministic design case with i.i.d. $\mathcal{N}(0, \sigma^2)$ noise variables and a full-rank design matrix; then $\hat\theta_n^{LSE}$ satisfies

$$ \mathbb{E}\big[ \|f_{\hat\theta_n^{LSE}} - f\|_n^2 - \|f_{\bar\theta} - f\|_n^2 \big] = \sigma^2. $$
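To fix ideas, the following minimal Python sketch makes the notation concrete: it builds a design matrix $A = (\phi_j(X_i))$, evaluates $f_\theta$ at the design points and computes the empirical risk $r(\theta)$. The cosine dictionary on $[0,1]$, the sparse target vector and all numerical constants are arbitrary choices made for this illustration only; they are not part of the model assumptions above.

```python
import numpy as np

# Minimal illustration of the notation: a dictionary {phi_1, ..., phi_p}, the
# matrix A = (phi_j(X_i)), f_theta = sum_j theta_j * phi_j evaluated at the
# design points, and the empirical risk r(theta).  The cosine dictionary
# (normalized so that ||phi_j|| = 1 under a uniform design) and the sparse
# target are illustration choices only.

rng = np.random.default_rng(0)
n, p, sigma = 50, 200, 0.5

X = rng.uniform(0.0, 1.0, size=n)                    # design points in [0, 1]
freqs = np.arange(1, p + 1)
A = np.sqrt(2.0) * np.cos(np.pi * np.outer(X, freqs))  # A[i, j] = phi_{j+1}(X_i)

theta_true = np.zeros(p)
theta_true[[0, 3, 7]] = [1.0, -0.7, 0.5]             # a sparse parameter vector
Y = A @ theta_true + sigma * rng.standard_normal(n)  # Y_i = f(X_i) + W_i

def empirical_risk(theta, A, Y):
    """r(theta) = (1/n) * sum_i (Y_i - f_theta(X_i))^2."""
    return np.mean((Y - A @ theta) ** 2)

print(empirical_risk(theta_true, A, Y))              # approximately sigma^2
```

The integrated risk $R(\theta)$ would be approximated in the same way by averaging this quantity over independent replications of the sample.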
In the same context, assume now that there exists a vector $\bar\theta \in \arg\min_{\theta \in \mathbb{R}^p} R(\theta)$ with a number of nonzero coordinates $p_0 \le n$. If the indices of these coordinates are known, then we can construct an estimator $\hat\theta_n^0$ such that

$$ \mathbb{E}\big[ \|f_{\hat\theta_n^0} - f\|_n^2 - \|f_{\bar\theta} - f\|_n^2 \big] = \sigma^2 \frac{p_0}{n}. $$

The estimator $\hat\theta_n^0$ is called an oracle estimator since the set of indices of the nonzero coordinates of $\bar\theta$ is unknown in practice. The issue is now to build an estimator, when the set of nonzero coordinates of $\bar\theta$ is unknown, with statistical performances close to those of the oracle estimator $\hat\theta_n^0$. A possible approach is to consider solutions of penalized empirical risk minimization problems:

$$ \hat\theta_{pen} \in \arg\min_{\theta \in \mathbb{R}^p} \left\{ \frac{1}{n} \sum_{i=1}^n \big( Y_i - f_\theta(X_i) \big)^2 + \mathrm{pen}(\theta) \right\}, $$

where the penalty $\mathrm{pen}(\theta)$ is proportional to the number of nonzero components of $\theta$, as for instance in the AIC, $C_p$ and BIC criteria [1, 31, 37]. Bunea, Tsybakov and Wegkamp [9] established for the BIC estimator $\hat\theta_n^{BIC}$ the following non-asymptotic sparsity oracle inequality: for any $\epsilon > 0$ there exists a constant $C(\epsilon) > 0$ such that for any $p \ge 2$, $n \ge 1$ we have

$$ \mathbb{E}\big[ \|f_{\hat\theta_n^{BIC}} - f\|_n^2 \big] \le (1 + \epsilon) \|f_{\bar\theta} - f\|_n^2 + C(\epsilon)\, \sigma^2\, \frac{p_0}{n} \log\left( \frac{ep}{p_0 \vee 1} \right). $$

Despite good statistical properties, these estimators can only be computed in practice for $p$ of the order of at most a few tens, since they are solutions of nonconvex combinatorial optimization problems. Considering convex penalty functions leads to computationally feasible optimization problems. A popular example is the Lasso estimator (cf. Frank and Friedman [22], Tibshirani [39], and the parallel work of Chen, Donoho and Saunders [15] on basis pursuit) with the penalty term $\mathrm{pen}(\theta) = \lambda |\theta|_1$, where $\lambda > 0$ is a regularization parameter and, for any integer $d \ge 2$, real $q > 0$ and vector $z \in \mathbb{R}^d$, we define $|z|_q = \big( \sum_{j=1}^d |z_j|^q \big)^{1/q}$ and $|z|_\infty = \max_{1 \le j \le d} |z_j|$. Several algorithms make it possible to compute the Lasso for very large $p$; one of the most popular is LARS, introduced by Efron, Hastie, Johnstone and Tibshirani [21]. However, the Lasso estimator requires strong assumptions on the matrix $A = (\phi_j(X_i))_{1 \le i \le n, 1 \le j \le p}$ to establish fast rates of convergence. Bunea, Tsybakov and Wegkamp [8] and Lounici [30] assume a mutual coherence condition on the dictionary. Bickel, Ritov and Tsybakov [7] and Koltchinskii [25] established sparsity oracle inequalities for the Lasso under a restricted eigenvalue condition. Candès and Tao [11] and Koltchinskii [26] studied the Dantzig selector, which is related to the Lasso estimator and suffers from the same restrictions. See, e.g., Bickel, Ritov and Tsybakov [7] for more details. Several alternative penalties were recently considered. Zou [45] proposed the adaptive Lasso, which is the solution of a penalized empirical risk minimization problem with the penalty $\mathrm{pen}(\theta) = \lambda \sum_{j=1}^p \frac{|\theta_j|}{|\hat w_j|}$, where $\hat w$ is
an initial estimator of $\theta$. Zou and Hastie [46] proposed the elastic net, with the penalty $\mathrm{pen}(\theta) = \lambda_1 |\theta|_1 + \lambda_2 |\theta|_2^2$, $\lambda_1, \lambda_2 > 0$. Meinshausen and Bühlmann [35] and Bach [6] considered the bootstrapped Lasso. See also Ghosh [23] or Cai, Xu and Zhang [10] for more alternatives to the Lasso. All these methods were motivated by their superior performances over the Lasso, either from the theoretical or from the practical point of view. However, strong assumptions on the design are still required to establish the statistical properties of these methods (when such results exist). A recent paper by van de Geer and Bühlmann [42] provides a complete survey and comparison of all these assumptions.

Simultaneously, the PAC-Bayesian approach for regression estimation was developed by Audibert [4, 5] and Alquier [2, 3], based on previous works in the classification context by Catoni [12–14], McAllester [34] and Shawe-Taylor and Williamson [38]; see also Zhang [44] in the context of density estimation. This framework is very well adapted to the study of the excess risk $R(\cdot) - R(\bar\theta)$ in the regression context since it requires very weak conditions on the dictionary. However, the methods of these papers are not designed to cover the high-dimensional setting under the sparsity assumption. Dalalyan and Tsybakov [16–19] propose an exponential weights procedure related to the PAC-Bayesian approach with good statistical and computational performances. However, they consider deterministic design and establish their statistical result only for the empirical excess risk instead of the true excess risk $R(\cdot) - R(\bar\theta)$.

In this paper, we study two exponential weights estimation procedures. The first one is an exponential weights combination of the least squares estimators in all the possible sub-models. This estimator was initially proposed by Leung and Barron [28] in the deterministic design setting. Note that in the literature on aggregation and exponential weights, the elements of the dictionary are often arbitrary preliminary estimators computed from a frozen fraction of the initial sample, so that these estimators can be treated as deterministic functions; the aggregate is then computed using this dictionary and the remaining data. This scheme is referred to as 'data splitting'. See for instance Dalalyan and Tsybakov [18] and Yang [43]. Leung and Barron [28] proved that data splitting is not necessary in order to aggregate least squares estimators and raised the question of the computation of this estimator in high dimension. In this paper we make explicit the oracle inequality satisfied by this estimator in the high-dimensional case. For the second procedure, the design may be either random or deterministic. We adapt to the regression framework PAC-Bayesian techniques developed by Catoni [14] in the classification framework to build an estimator satisfying a sparsity oracle inequality for the true excess risk. Even though we do not study the computational aspect in this paper, it should be noted that efficient Monte Carlo algorithms are available to compute these exponential weights estimators for reasonably large dimensions p (p ≃ 5000); see in particular the monograph of Marin and Robert [32] for an introduction to MCMC methods. Note also that in a work parallel to ours, Rigollet and Tsybakov [36] also consider an exponential weights procedure with discrete priors and suggest a version of the Metropolis–Hastings algorithm to compute it.
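As a concrete counterpart to the computational comparison drawn above, the $\ell_1$-penalized least squares problem can be solved at large scale with standard convex solvers. The sketch below uses scikit-learn; note that scikit-learn minimizes $\frac{1}{2n}\|Y - A\theta\|_2^2 + \alpha |\theta|_1$, so its `alpha` corresponds to the regularization parameter $\lambda$ above only up to this normalization, and the simulated data and the value of `alpha` are illustration choices only.

```python
import numpy as np
from sklearn.linear_model import Lasso, LassoLars

# Sketch: the l1-penalized least squares estimator on a high-dimensional design,
# solved by coordinate descent (Lasso) and by the LARS path algorithm (LassoLars).
# The simulated design and the value of alpha are illustration choices only.

rng = np.random.default_rng(1)
n, p, sigma = 100, 500, 0.5
A = rng.standard_normal((n, p))
theta_true = np.zeros(p)
theta_true[:5] = 1.0
Y = A @ theta_true + sigma * rng.standard_normal(n)

lasso = Lasso(alpha=0.1, fit_intercept=False).fit(A, Y)
lars = LassoLars(alpha=0.1, fit_intercept=False).fit(A, Y)

print(np.count_nonzero(lasso.coef_), np.count_nonzero(lars.coef_))
```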
The paper is organized as follows. In Section 2 we define an exponential weights procedure and derive a sparsity oracle inequality in expectation when the design is deterministic. In Section 3, the design can be either deterministic or random; we propose a modification of the first exponential weights procedure for which we can establish a sparsity oracle inequality in probability for the true excess risk. Finally, Section 4 contains all the proofs of our results.

2. Sparsity oracle inequality in expectation

Throughout this section, we assume that the design is deterministic and the noise variables $W_1, \ldots, W_n$ are i.i.d. Gaussian $\mathcal{N}(0, \sigma^2)$. For any $J \subset \{1, \ldots, p\}$ and $K > 0$ define

$$ \Theta(J) = \big\{ \theta \in \mathbb{R}^p : \forall j \notin J,\ \theta_j = 0 \big\}, \qquad (2.1) $$

and

$$ \Theta_K(J) = \big\{ \theta \in \mathbb{R}^p : |\theta|_1 \le K \ \text{and}\ \forall j \notin J,\ \theta_j = 0 \big\}. \qquad (2.2) $$

For the sake of simplicity we write $\Theta_K = \Theta_K(\{1, \ldots, p\})$. For any subset $J \subset \{1, \ldots, p\}$ define

$$ \hat\theta_J \in \arg\min_{\theta \in \Theta(J)} r(\theta), $$

where $r(\theta) = \frac{1}{n} \sum_{i=1}^n (Y_i - f_\theta(X_i))^2 = \|Y - f_\theta\|_n^2$ with $Y = (Y_1, \ldots, Y_n)^T$. Denote by $\mathcal{P}_n(\{1, \ldots, p\})$ the set of all subsets of $\{1, \ldots, p\}$ containing at most $n$ elements. The aggregate $\hat f_n$ is defined as follows:

$$ \hat f_n = f_{\hat\theta_n}, \qquad \hat\theta_n = \hat\theta_n(\lambda, \pi) \stackrel{\triangle}{=} \frac{ \sum_{J \in \mathcal{P}_n(\{1,\ldots,p\})} \pi_J\, e^{-\lambda \left( r(\hat\theta_J) + \frac{2\sigma^2 |J|}{n} \right)}\, \hat\theta_J }{ \sum_{J \in \mathcal{P}_n(\{1,\ldots,p\})} \pi_J\, e^{-\lambda \left( r(\hat\theta_J) + \frac{2\sigma^2 |J|}{n} \right)} }, \qquad (2.3) $$

where $\lambda > 0$ is the temperature parameter and $\pi$ is a prior probability distribution on $\mathcal{P}(\{1, \ldots, p\})$, the set of all subsets of $\{1, \ldots, p\}$; that is, $\pi_J \ge 0$ for any $J \in \mathcal{P}(\{1, \ldots, p\})$ and $\sum_{J \in \mathcal{P}(\{1,\ldots,p\})} \pi_J = 1$. The next result is a reformulation in our context of Theorem 8 of [28].

Proposition 2.1. Assume that the noise variables $W_1, \ldots, W_n$ are i.i.d. $\mathcal{N}(0, \sigma^2)$. Then the aggregate $\hat\theta_n$ defined by (2.3) with $0 < \lambda \le \frac{n}{4\sigma^2}$ satisfies

$$ \mathbb{E}\big[ R(\hat\theta_n) \big] \le \min_{J \in \mathcal{P}_n(\{1,\ldots,p\})} \left\{ \mathbb{E}\big[ R(\hat\theta_J) \big] + \frac{1}{\lambda} \log\frac{1}{\pi_J} \right\}. \qquad (2.4) $$

Proposition 2.1 holds true for any prior $\pi$. We suggest using the following prior: fix $\alpha \in (0, 1)$ and define

$$ \pi_J = \frac{ \alpha^{|J|} \binom{p}{|J|}^{-1} }{ \sum_{j=0}^n \alpha^j }, \quad \forall J \in \mathcal{P}_n(\{1,\ldots,p\}), \qquad \text{and} \qquad \pi_J = 0 \ \text{if}\ |J| > n. \qquad (2.5) $$
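For small p, the aggregate (2.3) with the prior (2.5) can be computed by brute-force enumeration of the subsets J, which makes the definition concrete; for large p this is of course infeasible and one resorts to the Monte Carlo methods mentioned in the Introduction. The Python sketch below enumerates only subsets of size at most `s_max` (an illustration shortcut, not part of the definition) and uses the temperature $\lambda = n/(4\sigma^2)$ allowed by Proposition 2.1; the toy data are arbitrary.

```python
import numpy as np
from itertools import combinations
from math import comb, log

# Brute-force sketch of the exponential-weights aggregate (2.3) with prior (2.5).
# Subsets J are enumerated only up to size s_max, an illustration shortcut:
# (2.3) sums over all subsets of size at most n.

def exponential_weights_aggregate(A, Y, sigma2, alpha=0.5, s_max=3):
    n, p = A.shape
    lam = n / (4.0 * sigma2)                      # temperature, 0 < lambda <= n/(4 sigma^2)
    log_norm = log(sum(alpha ** j for j in range(n + 1)))
    log_w, thetas = [], []
    for size in range(s_max + 1):
        for J in combinations(range(p), size):
            theta_J = np.zeros(p)
            if size > 0:                          # least squares restricted to Theta(J)
                A_J = A[:, list(J)]
                theta_J[list(J)], *_ = np.linalg.lstsq(A_J, Y, rcond=None)
            r_J = np.mean((Y - A @ theta_J) ** 2)                 # r(hat theta_J)
            log_pi_J = size * log(alpha) - log(comb(p, size)) - log_norm   # prior (2.5)
            log_w.append(log_pi_J - lam * (r_J + 2.0 * sigma2 * size / n))
            thetas.append(theta_J)
    log_w = np.array(log_w)
    w = np.exp(log_w - log_w.max())
    w /= w.sum()                                  # normalized exponential weights
    return np.array(thetas).T @ w                 # hat theta_n of (2.3), truncated to |J| <= s_max

rng = np.random.default_rng(2)
n, p, sigma = 40, 8, 0.3
A = rng.standard_normal((n, p))
theta_star = np.zeros(p)
theta_star[[1, 4]] = [1.0, -1.0]
Y = A @ theta_star + sigma * rng.standard_normal(n)
print(np.round(exponential_weights_aggregate(A, Y, sigma ** 2), 2))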
As a consequence, we obtain the following immediate corollary of Proposition 2.1.

Theorem 2.1. Assume that the noise variables $W_1, \ldots, W_n$ are i.i.d. $\mathcal{N}(0, \sigma^2)$. Then the aggregate $\hat f_n = f_{\hat\theta_n}$, with $\lambda = \frac{n}{4\sigma^2}$ and $\pi$ taken as in (2.5), satisfies

$$ \mathbb{E}\big[ \|\hat f_n - f\|_n^2 \big] \le \min_{\theta \in \mathbb{R}^p} \left\{ \|f_\theta - f\|_n^2 + \frac{\sigma^2 |J(\theta)|}{n} \left( 4 \log\frac{pe}{|J(\theta)|\,\alpha} + 1 \right) + \frac{4\sigma^2}{n} \log\frac{1}{1-\alpha} \right\}, \qquad (2.6) $$

where, for any $\theta \in \mathbb{R}^p$, $J(\theta) = \{ j : \theta_j \ne 0 \}$.

This result improves upon [17], which established in its Theorem 6, for Gaussian noise and deterministic design,

$$ \mathbb{E}\big[ \|\hat f_n - f\|_n^2 \big] \le \min_{\theta \in \mathbb{R}^p} \left\{ \|f_\theta - f\|_n^2 + \frac{16\sigma^2 |J(\theta)|}{n} \left( 1 + \log_+\!\left( \frac{ |\theta|_1 \sqrt{\mathrm{Tr}(A^\top A)} }{ M(\theta)\,\sigma } \right) \right) + \frac{\sigma^2}{n} \right\}, $$

where $\log_+ x = \max\{\log x, 0\}$, $M(\theta)$ denotes the number of nonzero components of $\theta$, and we recall that $A = (\phi_j(X_i))_{1 \le i \le n, 1 \le j \le p}$. Note that our bound is faster by a factor $\log_+ |\theta|_1$. Note also that the bound in the above display grows worse for large values of $|\theta|_1$.

In order to evaluate the performance of these exponential weights procedures, [40] developed a notion of optimal rate of sparse prediction. In particular, [36] established that there exists a numerical constant $c^* > 0$ such that for every estimator $T_n$,

$$ \sup_{\substack{\theta \in \mathbb{R}^p \setminus \{0\}: \\ |J(\theta)| \le s}} \ \sup_{f} \ \left\{ \mathbb{E}\big[ \|T_n - f\|_n^2 \big] - \|f_\theta - f\|_n^2 \right\} \ \ge\ c^* \frac{\sigma^2}{n} \left[ \mathrm{rank}(A) \wedge s \log\left( 1 + \frac{ep}{s} \right) \right]. $$
The above display combined with Theorem 2.1 shows that the exponential weights procedure (2.3) with the prior (2.5) achieves the optimal rate of sparse prediction for any vector $\theta$ satisfying $|J(\theta)| \le \frac{\mathrm{rank}(A)}{\log(1+ep)}$.

3. Sparsity oracle inequality in probability

In Section 2 we assumed that the design is deterministic and we established an oracle inequality in expectation with the optimal rate of sparse prediction. We now want to establish an oracle inequality in probability that holds for deterministic and random designs. From now on, the design can be either deterministic or random. We make the following mild assumption:

$$ L = \max_{1 \le j \le p} \|\phi_j\|_\infty < \infty. $$
We assume in this section that the noise variables are subgaussian. More precisely, we make the following assumption.

Assumption 3.1. The noise variables $W_1, \ldots, W_n$ are independent and independent of $X_1, \ldots, X_n$. We assume also that there exist two known constants $\sigma > 0$ and $\xi > 0$ such that

$$ \mathbb{E}(W_i^2) \le \sigma^2 \qquad \text{and} \qquad \forall k \ge 3, \quad \mathbb{E}\big( |W_i|^k \big) \le \sigma^2 k!\, \xi^{k-2}. $$

The estimation method is a version of the Gibbs estimator introduced in [13, 14]. Fix $K \ge 1$. First we define the prior probability distribution as follows. For any $J \subset \{1, \ldots, p\}$ let $u_J$ denote the uniform measure on $\Theta_{K+1}(J)$. We define

$$ m(d\theta) = \sum_{J \subset \{1,\ldots,p\}} \pi_J\, u_J(d\theta), $$

with $\pi$ taken as in (2.5). We are now ready to define our estimator. For any $\lambda > 0$ we consider the probability measure $\tilde\rho_\lambda$ admitting the following density w.r.t. the probability measure $m$:

$$ \frac{d\tilde\rho_\lambda}{dm}(\theta) = \frac{ e^{-\lambda r(\theta)} }{ \int_{\Theta_{K+1}} e^{-\lambda r}\, dm }. \qquad (3.1) $$

The aggregate $\tilde f_n$ is defined as follows:

$$ \tilde f_n = f_{\tilde\theta_n}, \qquad \tilde\theta_n = \tilde\theta_n(\lambda, m) = \int_{\Theta_{K+1}} \theta\, \tilde\rho_\lambda(d\theta). \qquad (3.2) $$

Define

$$ C_1 = \Big[ 8\sigma^2 + \big( 2\|f\|_\infty + L(2K+1) \big)^2 \Big] \vee \Big[ 8 \big( \xi + 2\|f\|_\infty + L(2K+1) \big) L(2K+1) \Big]. $$
We can now state the main result of this section.
Theorem 3.1. Let Assumption 3.1 be satisfied. Take $K \ge 1$ and $\lambda = \lambda^* = \frac{n}{2C_1}$. Assume that $\arg\min_{\theta \in \mathbb{R}^p} R(\theta) \cap \Theta_K \ne \emptyset$. Then, for any $\varepsilon \in (0, 1)$ and any $\bar\theta \in \arg\min_{\theta \in \mathbb{R}^p} R(\theta) \cap \Theta_K$, with probability at least $1 - \varepsilon$,

$$ R(\tilde\theta_n) \le R(\bar\theta) + \frac{3L^2}{n^2} + \frac{8C_1}{n} \left[ |J(\bar\theta)| \log(K+1) + |J(\bar\theta)| \log\frac{enp}{\alpha |J(\bar\theta)|} + \log\frac{2}{\varepsilon(1-\alpha)} \right]. $$

The choice $\lambda = \lambda^*$ comes from the optimization of a (rather pessimistic) upper bound on the risk $R$ (see Inequality (4.9) in the proof of this theorem, page 142). However, this choice is not necessarily the best one in practice, even though it gives the right order of magnitude for $\lambda$. The practitioner may use cross-validation to properly tune the temperature parameter.
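The paper does not prescribe a specific algorithm for computing $\tilde\theta_n$; it points to the MCMC literature [32] and to the Metropolis–Hastings scheme suggested in [36]. As a purely illustrative sketch, the Python code below approximates a Gibbs-mean aggregate of the form (3.2) by a random-walk Metropolis chain. To keep the sketch simple, the mixture-of-uniforms prior m defined above is replaced by a smooth heavy-tailed product prior in the spirit of Dalalyan and Tsybakov [19], and the temperature, prior scale and step size are heuristic choices (in practice $\lambda$ could be tuned by cross-validation, as suggested above).

```python
import numpy as np

# Hedged sketch: Monte Carlo approximation of a Gibbs-mean aggregate like (3.2).
# The sparsity prior m of Section 3 is REPLACED here by a product prior with
# density proportional to prod_j 1 / (tau^2 + theta_j^2)^2 (a choice in the
# spirit of [19], not the prior of the present paper), and all tuning constants
# are illustrative.  The estimate is the average of theta along the chain.

def gibbs_mean_rwm(A, Y, lam, tau=0.1, step=0.02, n_iter=20000, burn_in=5000, seed=0):
    rng = np.random.default_rng(seed)
    n, p = A.shape

    def log_target(theta):
        r = np.mean((Y - A @ theta) ** 2)                 # empirical risk r(theta)
        log_prior = -2.0 * np.sum(np.log(tau ** 2 + theta ** 2))
        return -lam * r + log_prior

    theta = np.zeros(p)
    current = log_target(theta)
    total, kept = np.zeros(p), 0
    for t in range(n_iter):
        proposal = theta + step * rng.standard_normal(p)  # random-walk proposal
        cand = log_target(proposal)
        if np.log(rng.uniform()) < cand - current:        # Metropolis accept/reject
            theta, current = proposal, cand
        if t >= burn_in:
            total += theta
            kept += 1
    return total / kept

rng = np.random.default_rng(3)
n, p, sigma = 60, 150, 0.5
A = rng.standard_normal((n, p))
theta_star = np.zeros(p)
theta_star[:4] = [1.0, -1.0, 0.5, -0.5]
Y = A @ theta_star + sigma * rng.standard_normal(n)

# Heuristic temperature of the same order as n / sigma^2; the lambda* of
# Theorem 3.1 depends on constants (C_1, L, ||f||_inf) rarely known in practice.
theta_tilde = gibbs_mean_rwm(A, Y, lam=n / (8.0 * sigma ** 2))
print(np.round(theta_tilde[:6], 2))
```

More efficient samplers (Langevin dynamics as in [19], or the Metropolis scheme of [36]) are preferable in genuinely high dimension; the point of the sketch is only that (3.2) requires a single posterior-mean computation.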
Theorem 3.1 improves upon previous results on the following points.

1) Our oracle inequality is sharp in the sense that the leading factor in front of $R(\bar\theta)$ is equal to 1, and we only require $\max_j \|\phi_j\|_\infty < \infty$, whereas $\ell_1$-penalized empirical risk minimization procedures such as the Lasso or the Dantzig selector have a leading factor strictly larger than 1 and impose in addition stringent conditions on the dictionary (cf. [7, 11]). For instance, [7] requires the design matrix $A = (\phi_j(X_i))_{1 \le i \le n, 1 \le j \le p}$ to be deterministic and to satisfy the following restricted eigenvalue condition for the Lasso:

$$ \kappa(s) \stackrel{\triangle}{=} \min_{\substack{J_0 \subset \{1,\ldots,p\}: \\ |J_0| \le s}} \ \min_{\substack{\Delta \in \mathbb{R}^p \setminus \{0\}: \\ |\Delta_{J_0^c}|_1 \le 3 |\Delta_{J_0}|_1}} \ \frac{ |A\Delta|_2 }{ \sqrt{n}\, |\Delta_{J_0}|_2 } > 0, $$

where for any $\Delta \in \mathbb{R}^p$ and $J \subset \{1, \ldots, p\}$ we denote by $\Delta_J$ the vector in $\mathbb{R}^p$ which has the same components as $\Delta$ on $J$ and zero coordinates on the complement $J^c$. Assuming in addition that the noise is Gaussian $\mathcal{N}(0, \sigma^2)$ and taking $\lambda = A\sigma \sqrt{\frac{\log p}{n}}$, $A > 8$, [7] proved that the Lasso $\hat\theta^L$ satisfies, with probability at least $1 - p^{1 - A^2/8}$,

$$ \|f_{\hat\theta^L} - f\|_n^2 \le (1 + \eta) \|f_\theta - f\|_n^2 + C(\eta)\, A^2 \sigma^2 |J(\theta)| \frac{\log p}{n}, \qquad \forall \theta \in \mathbb{R}^p, $$

where $\eta, C(\eta) > 0$ and $C(\eta)$ increases to $+\infty$ as $\eta$ tends to 0.

On the downside, our estimator requires the additional condition $|\bar\theta|_1 \le K$. This condition is common in the PAC-Bayesian literature. Removing it is a difficult problem and does not seem possible with the current techniques of proof, where it is needed in order to apply Bernstein's inequality.

2) We establish a sparsity oracle inequality in probability for the integrated risk $R(\cdot)$, whereas previous results on exponential weights are given in expectation [16, 17, 19, 24, 29, 36].

3) Unlike mirror averaging or progressive mixture rules, which satisfy similar inequalities in expectation, our estimator does not involve an averaging step [18, 24, 29]. As a consequence, its computational complexity is significantly reduced compared to those procedures with an averaging step. For instance, [29] considered the model (1.1) with random design and i.i.d. observations $(X_1, Y_1), \ldots, (X_n, Y_n)$, $n \ge 2$. The studied estimator is the following mirror averaging scheme:

$$ f_{\hat\theta^{MA}}, \qquad \hat\theta^{MA} = \frac{1}{n} \sum_{k=0}^{n-1} \tilde\theta_k, $$

where $\tilde\theta_0 = \int_{\Theta_K} \theta\, \Pi(d\theta)$ for a prior $\Pi$ and, for any $1 \le k \le n-1$, $\tilde\theta_k$ is defined similarly to $\tilde\theta_n$ in (3.1)–(3.2) with $r(\theta)$ replaced by $r_k(\theta) = \frac{1}{k} \sum_{i=1}^k (Y_i - f_\theta(X_i))^2$. These estimators can be implemented for example by MCMC. In this case, computing the integral $\int \theta\, \tilde\rho_\lambda(d\theta)$ is the most time-consuming part of the procedure.
The procedure (3.1)–(3.2) requires computing this integral only once, whereas the mirror averaging procedure of [29] requires computing integrals of this form $n$ times.

4) Under the assumption $|\bar\theta|_1 \le K$ for some absolute constant $K$, and taking $\varepsilon = n^{-1}$, we have with probability at least $1 - n^{-1}$

$$ R(\tilde\theta_n) \le R(\bar\theta) + C \left[ \frac{C_1 |J(\bar\theta)|}{n} \log\frac{enp}{\alpha |J(\bar\theta)|} + \frac{3L^2}{n^2} \right], \qquad (3.3) $$

for some absolute constant $C > 0$. In [36] a minimax lower bound in expectation is established for deterministic design and Gaussian noise. A similar result holds in probability, with the same proof combined with Theorem 2.7 of [41]: there exist absolute constants $c_1, c_2 > 0$ such that

$$ \inf_{T_n} \ \sup_{\substack{\theta \in \mathbb{R}^p: \\ |J(\theta)| \le s}} \ \sup_{f} \ \mathbb{P}\left\{ \|T_n - f\|_n^2 \ge \|f_\theta - f\|_n^2 + c_1 \frac{\sigma^2}{n} \left[ \mathrm{rank}(A) \wedge s \log\left( 1 + \frac{ep}{s} \right) \right] \right\} \ge c_2. $$

If $s \le \frac{\mathrm{rank}(A)}{\log(1+ep)}$, then the upper bound in (3.3) is optimal up to the additional logarithmic factor $\log n$. Note however that if $p \ge n^{1+\delta}$ for some $\delta > 0$, which is relevant in the high-dimensional setting considered in this paper, then our bound is rate optimal.
4. Proofs

4.1. Proofs of Section 2

The proof of Proposition 2.1 uses an argument from Leung and Barron [28].

Proof of Proposition 2.1. The mapping $Y \to \hat f_n(Y) \stackrel{\triangle}{=} (\hat f_n(X_1, Y), \ldots, \hat f_n(X_n, Y))^T$ is clearly continuously differentiable as a composition of elementary differentiable functions. For any subset $J \subset \{1, \ldots, p\}$ define $A_J = (\phi_j(X_i))_{1 \le i \le n,\, j \in J}$, $\Sigma_J = \frac{1}{n} A_J^T A_J$, $\Phi_J(\cdot) = (\phi_j(\cdot))_{j \in J}$ and

$$ g_J = e^{-\lambda \left( \|Y - f_J\|_n^2 + \frac{2\sigma^2 |J|}{n} \right)}, \qquad \text{where} \quad f_J(x, Y) = \frac{1}{n}\, Y^T A_J \Sigma_J^+ \Phi_J(x), $$

and $\Sigma_J^+$ denotes the pseudo-inverse of $\Sigma_J$. Denote by $\partial_i$ the derivative w.r.t. $Y_i$. Simple computations give

$$ \partial_i f_J(x, Y) = \frac{1}{n}\, \Phi_J(X_i)^T \Sigma_J^+ \Phi_J(x), $$

$$ \big( \partial_i f_J(X_1, Y), \ldots, \partial_i f_J(X_n, Y) \big)\, Y = f_J(X_i, Y), $$
and

$$ \sum_{l=1}^n f_J(X_l, Y)\, \partial_i f_J(X_l, Y) = f_J(X_i, Y). $$

Thus we have

$$ \partial_i(g_J) = -\lambda\, \partial_i\big( \|Y - f_J\|_n^2 \big)\, g_J = -\frac{2\lambda}{n} \left[ \big( Y_i - f_J(X_i, Y) \big) - \sum_{l=1}^n \partial_i f_J(X_l, Y) \big( Y_l - f_J(X_l, Y) \big) \right] g_J = -\frac{2\lambda}{n} \big( Y_i - f_J(X_i, Y) \big)\, g_J. $$

Recall that

$$ \hat f_n(\cdot, Y) = \frac{ \sum_{J \in \mathcal{P}_n(\{1,\ldots,p\})} \pi_J\, g_J\, f_J(\cdot, Y) }{ \sum_{J \in \mathcal{P}_n(\{1,\ldots,p\})} \pi_J\, g_J }. $$

We have

$$ \partial_i \hat f_n(X_i, Y) = \frac{ \sum_{J} \pi_J \big( \partial_i(g_J)\, f_J(X_i, Y) + g_J\, \partial_i(f_J(X_i, Y)) \big) }{ \sum_{J} \pi_J g_J } - \frac{ \sum_{J} \pi_J\, \partial_i(g_J)\ \sum_{J} \pi_J g_J f_J(X_i, Y) }{ \big( \sum_{J} \pi_J g_J \big)^2 } $$
$$ = -\frac{2\lambda}{n} Y_i \hat f_n(X_i, Y) + \frac{2\lambda}{n} \frac{ \sum_{J} f_J(X_i, Y)^2\, \pi_J g_J }{ \sum_{J} \pi_J g_J } + \frac{1}{n} \frac{ \sum_{J} \Phi_J(X_i)^T \Sigma_J^+ \Phi_J(X_i)\, \pi_J g_J }{ \sum_{J} \pi_J g_J } + \frac{2\lambda}{n} Y_i \hat f_n(X_i, Y) - \frac{2\lambda}{n} \hat f_n^2(X_i, Y) $$
$$ = \frac{2\lambda}{n} \frac{ \sum_{J} \big( f_J(X_i, Y) - \hat f_n(X_i, Y) \big)^2\, \pi_J g_J }{ \sum_{J} \pi_J g_J } + \frac{1}{n} \frac{ \sum_{J} \Phi_J(X_i)^T \Sigma_J^+ \Phi_J(X_i)\, \pi_J g_J }{ \sum_{J} \pi_J g_J } \ \ge\ 0, \qquad (4.1) $$

where all the sums over $J$ run over $\mathcal{P}_n(\{1,\ldots,p\})$. Consider the following estimator of the risk:

$$ \hat r_n(Y) = \|\hat f_n(Y) - Y\|_n^2 + \frac{2\sigma^2}{n} \sum_{i=1}^n \partial_i \hat f_n(X_i, Y) - \sigma^2. $$

Using an argument based on Stein's identity as in [27], we now prove that

$$ \mathbb{E}[\hat r_n(Y)] = \mathbb{E}\big[ \|\hat f_n(Y) - f\|_n^2 \big]. \qquad (4.2) $$
We have

$$ \mathbb{E}\big[ \|\hat f_n(Y) - f\|_n^2 \big] = \mathbb{E}\left[ \|\hat f_n(Y) - Y\|_n^2 + \frac{2}{n} \sum_{i=1}^n W_i \big( \hat f_n(X_i, Y) - f(X_i) \big) \right] - \sigma^2 = \mathbb{E}\left[ \|\hat f_n(Y) - Y\|_n^2 + \frac{2}{n} \sum_{i=1}^n W_i \hat f_n(X_i, Y) \right] - \sigma^2. \qquad (4.3) $$

For $z = (z_1, \ldots, z_n)^T \in \mathbb{R}^n$ write $F_{W,i}(z) = \prod_{j \ne i} F_W(z_j)$, where $F_W$ denotes the c.d.f. of the random variable $W_1$. Since $\mathbb{E}(W_i) = 0$, we have

$$ \mathbb{E}\big[ W_i \hat f_n(X_i, Y) \big] = \mathbb{E}\left[ W_i \int_0^{W_i} \partial_i \hat f_n(X_i, Y_1, \ldots, Y_{i-1}, f(X_i) + z, Y_{i+1}, \ldots, Y_n)\, dz \right] = \int_{\mathbb{R}^{n-1}} \left[ \int_{\mathbb{R}} y \int_0^y \partial_i \hat f_n(X_i, f + z)\, dz_i\, dF_W(y) \right] dF_{W,i}(z). \qquad (4.4) $$

In view of (4.1) we can apply Fubini's theorem to the right-hand side of (4.4). Under the assumption $W \sim \mathcal{N}(0, \sigma^2)$ we obtain

$$ \int_{\mathbb{R}_+} \int_0^y \partial_i \hat f_n(X_i, f + z)\, dz_i\, dF_W(y) = \int_{\mathbb{R}_+} \int_{z_i}^{\infty} y\, dF_W(y)\, \partial_i \hat f_n(X_i, f + z)\, dz_i = \int_{\mathbb{R}_+} \sigma^2\, \partial_i \hat f_n(X_i, f + z)\, dF_W(z_i). $$

A similar equality holds for the integral over $\mathbb{R}_-$. Thus we obtain

$$ \mathbb{E}\big[ W_i \hat f_n(X_i, Y) \big] = \sigma^2\, \mathbb{E}\big[ \partial_i \hat f_n(X_i, Y) \big]. $$

Combining (4.3) and the above display gives (4.2).

Since $\hat f_n(\cdot, Y)$ is the expectation of $f_J(\cdot, Y)$ w.r.t. the probability distribution proportional to $g \cdot \pi$, we have

$$ \|\hat f_n(\cdot, Y) - Y\|_n^2 = \frac{ \sum_{J \in \mathcal{P}_n(\{1,\ldots,p\})} \big( \|f_J(\cdot, Y) - Y\|_n^2 - \|f_J(\cdot, Y) - \hat f_n(Y)\|_n^2 \big)\, g_J \pi_J }{ \sum_{J \in \mathcal{P}_n(\{1,\ldots,p\})} g_J \pi_J }. $$

For the sake of simplicity set $f_J = f_J(\cdot, Y)$ and $\hat f_n = \hat f_n(\cdot, Y)$. Combining the above display, the definition of $\hat r_n(Y)$, (4.1) and $\lambda \le \frac{n}{4\sigma^2}$ yields

$$ \hat r_n(Y) = \frac{ \sum_{J} \left( \|f_J - Y\|_n^2 + \big( \frac{4\lambda\sigma^2}{n} - 1 \big) \|f_J - \hat f_n\|_n^2 \right) g_J \pi_J }{ \sum_{J} \pi_J g_J } + \frac{2\sigma^2}{n^2} \sum_{i=1}^n \frac{ \sum_{J} \Phi_J(X_i)^T \Sigma_J^+ \Phi_J(X_i)\, \pi_J g_J }{ \sum_{J} \pi_J g_J } - \sigma^2 \le \sum_{J} \left( \|f_J - Y\|_n^2 + \frac{2\sigma^2}{n} |J| \right) \frac{ g_J \pi_J }{ \sum_{J'} g_{J'} \pi_{J'} } - \sigma^2, $$

where the sums over $J$ run over $\mathcal{P}_n(\{1,\ldots,p\})$.
By definition of $g_J$ we have

$$ \|f_J - Y\|_n^2 + \frac{2\sigma^2 |J|}{n} = -\frac{1}{\lambda} \log\left( \frac{ g_J }{ \sum_{J' \in \mathcal{P}_n(\{1,\ldots,p\})} g_{J'} \pi_{J'} } \right) - \frac{1}{\lambda} \log\left( \sum_{J' \in \mathcal{P}_n(\{1,\ldots,p\})} g_{J'} \pi_{J'} \right). $$

Integrating the above identity w.r.t. the probability distribution $\frac{1}{C}\, g \cdot \pi$ (where $C = \sum_{J \in \mathcal{P}_n(\{1,\ldots,p\})} g_J \pi_J$ is the normalization factor), using the fact that

$$ \sum_{J \in \mathcal{P}_n(\{1,\ldots,p\})} \frac{ g_J \pi_J }{ C } \log\frac{ g_J }{ C } = \mathcal{K}\left( \frac{1}{C}\, g \cdot \pi,\ \pi \right) \ge 0, $$

as well as a convex duality argument (cf., e.g., [20], p. 264), we get

$$ \hat r_n(Y) \le \sum_{J \in \mathcal{P}_n(\{1,\ldots,p\})} \left( \|Y - f_J\|_n^2 + \frac{2\sigma^2}{n} |J| \right) \pi_J' + \frac{1}{\lambda} \mathcal{K}(\pi', \pi) - \sigma^2, $$

for every probability measure $\pi'$ on $\mathcal{P}(\{1, \ldots, p\})$. Taking the expectation in the last inequality we get, for any $\pi'$,

$$ \mathbb{E}\big[ \|\hat f_n - f\|_n^2 \big] = \mathbb{E}[\hat r_n(Y)] \le \sum_{J \in \mathcal{P}_n(\{1,\ldots,p\})} \left( \mathbb{E}\big[ \|f_J - Y\|_n^2 \big] + \frac{2\sigma^2}{n} |J| \right) \pi_J' + \frac{1}{\lambda} \mathcal{K}(\pi', \pi) - \sigma^2 $$
$$ \le \sum_{J \in \mathcal{P}_n(\{1,\ldots,p\})} \left( \mathbb{E}\big[ \|f_J - f\|_n^2 \big] + \frac{2}{n} \sum_{i=1}^n \mathbb{E}\big[ W_i f_J(X_i, Y) \big] + \frac{2\sigma^2}{n} |J| \right) \pi_J' + \frac{1}{\lambda} \mathcal{K}(\pi', \pi) $$
$$ \le \sum_{J \in \mathcal{P}_n(\{1,\ldots,p\})} \left( \mathbb{E}\big[ \|f_J - f\|_n^2 \big] + \frac{4\sigma^2 |J|}{n} \right) \pi_J' + \frac{1}{\lambda} \mathcal{K}(\pi', \pi), $$

where we used Stein's identity $\mathbb{E}[W_i f_J(X_i, Y)] = \sigma^2 \mathbb{E}[\partial_i f_J(X_i, Y)]$ and the fact that $\sum_{i=1}^n \partial_i f_J(X_i, Y) \le |J|$ in the last line. Finally, taking $\pi'$ in the set of Dirac distributions on the subsets $J$ of $\{1, \ldots, p\}$ yields the result.
Proof of Theorem 2.1. First note that any minimizer $\theta \in \mathbb{R}^p$ of the right-hand side of (2.6) is such that $|J(\theta)| \le \mathrm{rank}(A) \le n$, where we recall that $A = (\phi_j(X_i))_{1 \le i \le n, 1 \le j \le p}$. Indeed, for any $\theta \in \mathbb{R}^p$ such that $|J(\theta)| > \mathrm{rank}(A)$ we can construct a vector $\theta' \in \mathbb{R}^p$ such that $f_\theta = f_{\theta'}$ and $|J(\theta')| \le \mathrm{rank}(A)$, and the mapping $x \mapsto x \log\frac{ep}{\alpha x}$ is nondecreasing on $(0, p]$.

Next, for any $J \in \mathcal{P}_n(\{1, \ldots, p\})$ we have

$$ \mathbb{E}\big[ \|f_J - f\|_n^2 \big] = \min_{\theta \in \Theta(J)} \|f_\theta - f\|_n^2 + \frac{\sigma^2 |J|}{n}. $$
Thus

$$ \min_{J \in \mathcal{P}_n(\{1,\ldots,p\})} \left\{ \mathbb{E}\big[ \|f_J - f\|_n^2 \big] + \frac{1}{\lambda} \log\frac{1}{\pi_J} \right\} = \min_{J \in \mathcal{P}_n(\{1,\ldots,p\})} \min_{\theta \in \Theta(J)} \left\{ \|f_\theta - f\|_n^2 + \frac{\sigma^2 |J|}{n} + \frac{1}{\lambda} \log\frac{1}{\pi_J} \right\} = \min_{\theta \in \mathbb{R}^p} \left\{ \|f_\theta - f\|_n^2 + \frac{\sigma^2 |J(\theta)|}{n} + \frac{1}{\lambda} \log\frac{1}{\pi_{J(\theta)}} \right\}. $$

Combining the above display with Proposition 2.1 and our definition of the prior $\pi$ gives the result.

4.2. Proof of Theorem 3.1

We state below a version of Bernstein's inequality used in the proof of Theorem 3.1; see Proposition 2.9, page 24, in [33], more precisely Inequality (2.21).

Lemma 4.1. Let $T_1, \ldots, T_n$ be independent real-valued random variables. Assume that there are two constants $v$ and $w$ such that

$$ \sum_{i=1}^n \mathbb{E}[T_i^2] \le v $$

and, for all integers $k \ge 3$,

$$ \sum_{i=1}^n \mathbb{E}\big[ (T_i)_+^k \big] \le \frac{ k!\, w^{k-2} }{2}\, v. $$

Then, for any $\zeta \in (0, 1/w)$,

$$ \mathbb{E}\left[ \exp\left( \zeta \sum_{i=1}^n \big[ T_i - \mathbb{E}(T_i) \big] \right) \right] \le \exp\left( \frac{ v \zeta^2 }{ 2 (1 - w\zeta) } \right). $$
Proof of Theorem 3.1. For any $\theta \in \Theta_{K+1}$ define the random variables

$$ T_i = T_i(\theta) = -\big( Y_i - f_\theta(X_i) \big)^2 + \big( Y_i - f_{\bar\theta}(X_i) \big)^2. $$

Note that these variables are independent. We have

$$ \sum_{i=1}^n \mathbb{E}[T_i^2] = \sum_{i=1}^n \mathbb{E}\Big[ \big( 2Y_i - f_{\bar\theta}(X_i) - f_\theta(X_i) \big)^2 \big( f_{\bar\theta}(X_i) - f_\theta(X_i) \big)^2 \Big] = \sum_{i=1}^n \mathbb{E}\Big[ \big( 2W_i + 2f(X_i) - f_{\bar\theta}(X_i) - f_\theta(X_i) \big)^2 \big( f_{\bar\theta}(X_i) - f_\theta(X_i) \big)^2 \Big] $$
$$ \le \sum_{i=1}^n \mathbb{E}\Big[ \big( 8W_i^2 + 2(2\|f\|_\infty + L(2K+1))^2 \big) \big( f_{\bar\theta}(X_i) - f_\theta(X_i) \big)^2 \Big] = \sum_{i=1}^n \mathbb{E}\big[ 8W_i^2 + 2(2\|f\|_\infty + L(2K+1))^2 \big]\ \mathbb{E}\Big[ \big( f_{\bar\theta}(X_i) - f_\theta(X_i) \big)^2 \Big] $$
$$ \le n \big( 8\sigma^2 + 2(2\|f\|_\infty + L(2K+1))^2 \big) \big( R(\theta) - R(\bar\theta) \big) =: v(\theta, \bar\theta) = v, $$

where in the last line we used Pythagoras' theorem to obtain $\|f_\theta - f_{\bar\theta}\|^2 = R(\theta) - R(\bar\theta)$. Next we have, for any integer $k \ge 3$,

$$ \sum_{i=1}^n \mathbb{E}\big[ (T_i)_+^k \big] \le \sum_{i=1}^n \mathbb{E}\Big[ \big| 2Y_i - f_{\bar\theta}(X_i) - f_\theta(X_i) \big|^k \big| f_{\bar\theta}(X_i) - f_\theta(X_i) \big|^k \Big] \le \sum_{i=1}^n \mathbb{E}\Big[ 2^{2k-1} \big( |W_i|^k + (\|f\|_\infty + L(K+1/2))^k \big) \big| f_{\bar\theta}(X_i) - f_\theta(X_i) \big|^k \Big] $$
$$ \le \sum_{i=1}^n \mathbb{E}\Big[ 2^{2k-1} \big( |W_i|^k + (\|f\|_\infty + L(K+1/2))^k \big) \big[ L(2K+1) \big]^{k-2} \big( f_{\bar\theta}(X_i) - f_\theta(X_i) \big)^2 \Big] $$
$$ \le 2^{2k-1} \big( \sigma^2 k!\, \xi^{k-2} + (\|f\|_\infty + L(K+1/2))^k \big) \big[ L(2K+1) \big]^{k-2} \sum_{i=1}^n \mathbb{E}\Big[ \big( f_{\bar\theta}(X_i) - f_\theta(X_i) \big)^2 \Big] $$
$$ \le \big( \sigma^2 k!\, \xi^{k-2} + (\|f\|_\infty + L(K+1/2))^k \big) \big[ 4L(2K+1) \big]^{k-2} \frac{ v }{ 4 \big( \sigma^2 + (\|f\|_\infty + L(K+1/2))^2 \big) } $$
$$ \le \frac{1}{4} \Big( k!\, \xi^{k-2} + \big[ \|f\|_\infty + L(K+1/2) \big]^{k-2} \Big) \big[ 4L(2K+1) \big]^{k-2} v \le \frac{ k! }{ 4 } \big( \xi + \big[ \|f\|_\infty + L(K+1/2) \big] \big)^{k-2} \big[ 4L(2K+1) \big]^{k-2} v \le \frac{ k!\, w^{k-2} }{ 2 }\, v, $$

with $w := 8 \big( \xi + \big[ \|f\|_\infty + L(K+1/2) \big] \big) L(K+1/2)$. Next, for any $\lambda \in (0, n/w)$ and $\theta \in \Theta_{K+1}$, applying Lemma 4.1 with $\zeta = \lambda/n$ gives

$$ \mathbb{E}\left[ \exp\Big( \lambda \big[ R(\theta) - R(\bar\theta) - r(\theta) + r(\bar\theta) \big] \Big) \right] \le \exp\left( \frac{ v \lambda^2 }{ 2 n^2 \big( 1 - \frac{w\lambda}{n} \big) } \right). $$

Set $C = 8 \big( \sigma^2 + \big[ \|f\|_\infty + L(K+1/2) \big]^2 \big)$. For the sake of simplicity let us put

$$ \beta = \lambda - \frac{ \lambda^2 C }{ 2 n \big( 1 - \frac{w\lambda}{n} \big) }. \qquad (4.5) $$

For any $\varepsilon > 0$ the last display yields

$$ \mathbb{E}\left[ \exp\Big( \beta \big[ R(\theta) - R(\bar\theta) \big] + \lambda \big[ -r(\theta) + r(\bar\theta) \big] - \log\frac{2}{\varepsilon} \Big) \right] \le \frac{\varepsilon}{2}. $$
Integrating w.r.t. the probability distribution $m(\cdot)$ we get

$$ \int \mathbb{E}\left[ \exp\Big( \beta \big[ R(\theta) - R(\bar\theta) \big] + \lambda \big[ -r(\theta) + r(\bar\theta) \big] - \log\frac{2}{\varepsilon} \Big) \right] m(d\theta) \le \frac{\varepsilon}{2}. $$

Next, Fubini's theorem gives

$$ \int \mathbb{E}\left[ \exp\Big( \beta \big[ R(\theta) - R(\bar\theta) \big] + \lambda \big[ -r(\theta) + r(\bar\theta) \big] - \log\frac{2}{\varepsilon} \Big) \right] m(d\theta) = \mathbb{E}\left[ \int \exp\Big( \beta \big[ R(\theta) - R(\bar\theta) \big] + \lambda \big[ -r(\theta) + r(\bar\theta) \big] - \log\frac{d\tilde\rho_\lambda}{dm}(\theta) - \log\frac{2}{\varepsilon} \Big)\, \tilde\rho_\lambda(d\theta) \right] \le \frac{\varepsilon}{2}. $$

Jensen's inequality yields

$$ \mathbb{E}\left[ \exp\Big( \beta \Big[ \int R\, d\tilde\rho_\lambda - R(\bar\theta) \Big] + \lambda \Big[ -\int r\, d\tilde\rho_\lambda + r(\bar\theta) \Big] - \mathcal{K}(\tilde\rho_\lambda, m) - \log\frac{2}{\varepsilon} \Big) \right] \le \frac{\varepsilon}{2}. $$

Now, using the basic inequality $\exp(x) \ge \mathbf{1}_{\mathbb{R}_+}(x)$, we get

$$ \mathbb{P}\left\{ \beta \Big[ \int R\, d\tilde\rho_\lambda - R(\bar\theta) \Big] + \lambda \Big[ -\int r\, d\tilde\rho_\lambda + r(\bar\theta) \Big] - \mathcal{K}(\tilde\rho_\lambda, m) - \log\frac{2}{\varepsilon} \ge 0 \right\} \le \frac{\varepsilon}{2}. $$

Using Jensen's inequality again gives

$$ \int R\, d\tilde\rho_\lambda \ge R\left( \int \theta\, \tilde\rho_\lambda(d\theta) \right) = R(\tilde\theta_\lambda). $$

Combining the last two displays we obtain

$$ \mathbb{P}\left\{ R(\tilde\theta_\lambda) - R(\bar\theta) \le \frac{ \int r\, d\tilde\rho_\lambda - r(\bar\theta) + \frac{1}{\lambda} \big[ \mathcal{K}(\tilde\rho_\lambda, m) + \log\frac{2}{\varepsilon} \big] }{ \beta / \lambda } \right\} \ge 1 - \frac{\varepsilon}{2}. $$

Now, using Lemma 1.1.3 in Catoni [14] we obtain

$$ \mathbb{P}\left\{ R(\tilde\theta_\lambda) - R(\bar\theta) \le \inf_{\rho \in \mathcal{M}_+^1(\Theta_{K+1})} \frac{ \int r\, d\rho - r(\bar\theta) + \frac{1}{\lambda} \big[ \mathcal{K}(\rho, m) + \log\frac{2}{\varepsilon} \big] }{ \beta / \lambda } \right\} \ge 1 - \frac{\varepsilon}{2}. \qquad (4.6) $$

We now want to bound $r(\theta) - r(\bar\theta)$ from above by $R(\theta) - R(\bar\theta)$. Applying Lemma 4.1 to $\tilde T_i(\theta) = -T_i(\theta)$, similar computations as above yield successively

$$ \mathbb{E}\left[ \exp\Big( \lambda \big[ R(\bar\theta) - R(\theta) + r(\theta) - r(\bar\theta) \big] \Big) \right] \le \exp\left( \frac{ v \lambda^2 }{ 2 n^2 \big( 1 - \frac{w\lambda}{n} \big) } \right), $$
and so, for any (data-dependent) $\rho$,

$$ \mathbb{E}\left[ \exp\Big( \gamma \Big[ -\int R\, d\rho + R(\bar\theta) \Big] + \lambda \Big[ \int r\, d\rho - r(\bar\theta) \Big] - \mathcal{K}(\rho, m) - \log\frac{2}{\varepsilon} \Big) \right] \le \frac{\varepsilon}{2}, $$

where

$$ \gamma = \lambda + \frac{ \lambda^2 C }{ 2 n \big( 1 - \frac{w\lambda}{n} \big) }. \qquad (4.7) $$

Now,

$$ \mathbb{P}\left\{ \int r\, d\rho - r(\bar\theta) \le \frac{\gamma}{\lambda} \Big[ \int R\, d\rho - R(\bar\theta) \Big] + \frac{1}{\lambda} \Big[ \mathcal{K}(\rho, m) + \log\frac{2}{\varepsilon} \Big] \right\} \ge 1 - \frac{\varepsilon}{2}. \qquad (4.8) $$

Combining (4.8) and (4.6) with a union bound argument gives

$$ \mathbb{P}\left\{ R(\tilde\theta_\lambda) - R(\bar\theta) \le \inf_{\rho \in \mathcal{M}_+^1(\Theta_{K+1})} \frac{ \gamma \big[ \int R\, d\rho - R(\bar\theta) \big] + 2 \big[ \mathcal{K}(\rho, m) + \log\frac{2}{\varepsilon} \big] }{ \beta } \right\} \ge 1 - \varepsilon, $$

where $\mathcal{M}_+^1(\Theta_{K+1})$ is the set of all probability measures on $\Theta_{K+1}$. Now, for any $\delta \in (0, 1]$, taking $\rho$ as the uniform probability measure on the set $\{ t \in \Theta(J(\bar\theta)) : |t - \bar\theta|_1 \le \delta \} \subset \Theta_{K+1}(J(\bar\theta))$ gives

$$ \mathbb{P}\left\{ R(\tilde\theta_\lambda) \le R(\bar\theta) + \frac{1}{\beta} \left[ \gamma L^2 \delta^2 + 2 \left( |J(\bar\theta)| \log\frac{K+1}{\delta} + |J(\bar\theta)| \log\frac{1}{\alpha} + \log\binom{p}{|J(\bar\theta)|} + \log\frac{1}{1-\alpha} + \log\frac{2}{\varepsilon} \right) \right] \right\} \ge 1 - \varepsilon. $$

Taking $\delta = n^{-1}$ and using the inequality $\log\binom{p}{|J(\bar\theta)|} \le |J(\bar\theta)| \log\frac{pe}{|J(\bar\theta)|}$ gives

$$ \mathbb{P}\left\{ R(\tilde\theta_\lambda) \le R(\bar\theta) + \frac{1}{ 1 - \frac{\lambda C}{2(n - w\lambda)} } \left[ \left( 1 + \frac{\lambda C}{2(n - w\lambda)} \right) \frac{L^2}{n^2} + \frac{2}{\lambda} \left( |J(\bar\theta)| \log(K+1) + |J(\bar\theta)| \log\frac{epn}{\alpha |J(\bar\theta)|} + \log\frac{2}{\varepsilon(1-\alpha)} \right) \right] \right\} \ge 1 - \varepsilon, \qquad (4.9) $$

where we replaced $\gamma$ and $\beta$ by their definitions, see (4.5) and (4.7). Taking now $\lambda = n/(2C_1)$ (where we recall that $C_1 = C \vee w$) in (4.9) gives

$$ \mathbb{P}\left\{ R(\tilde\theta_\lambda) \le R(\bar\theta) + \frac{3L^2}{n^2} + \frac{8C_1}{n} \left( |J(\bar\theta)| \log(K+1) + |J(\bar\theta)| \log\frac{enp}{\alpha |J(\bar\theta)|} + \log\frac{2}{\varepsilon(1-\alpha)} \right) \right\} \ge 1 - \varepsilon, $$

where we have used that $1 - \frac{\lambda C}{2(n - w\lambda)} \ge 1/2$ and $1 + \frac{\lambda C}{2(n - w\lambda)} \le 3/2$.
References

[1] Akaike, H. (1973). Information Theory and an Extension of the Maximum Likelihood Principle. In: 2nd International Symposium on Information Theory (B. N. Petrov and F. Csaki, eds.). Akademia Kiado, Budapest, 267–281.
[2] Alquier, P. (2006). Transductive and Inductive Adaptative Inference for Regression and Density Estimation. PhD Thesis, Université Paris 6.
[3] Alquier, P. (2008). PAC-Bayesian bounds for randomized empirical risk minimizers. Mathematical Methods of Statistics 17 (4) 279–304. MR2483458
[4] Audibert, J.-Y. (2004). Aggregated Estimators and Empirical Complexity for Least Square Regression. Annales de l'Institut Henri Poincaré B: Probability and Statistics 40 (6) 685–736. MR2096215
[5] Audibert, J.-Y. (2004). PAC-Bayesian Statistical Learning Theory. PhD Thesis, Université Paris 6.
[6] Bach, F. (2008). Bolasso: model consistent Lasso estimation through the bootstrap. Proceedings of the 25th International Conference on Machine Learning, ACM, New York.
[7] Bickel, P. J., Ritov, Y. and Tsybakov, A. (2009). Simultaneous Analysis of Lasso and Dantzig Selector. The Annals of Statistics 37 (4) 1705–1732. MR2533469
[8] Bunea, F., Tsybakov, A. and Wegkamp, M. (2007). Sparsity oracle inequalities for the Lasso. Electronic Journal of Statistics 1 169–194. MR2397610
[9] Bunea, F., Tsybakov, A. and Wegkamp, M. (2007). Aggregation for Gaussian regression. The Annals of Statistics 35 1674–1697. MR2351101
[10] Cai, T., Xu, G. and Zhang, J. (2009). On Recovery of Sparse Signals via ℓ1 Minimization. IEEE Transactions on Information Theory 55 3388–3397. MR2598028
[11] Candès, E. and Tao, T. (2007). The Dantzig selector: statistical estimation when p is much larger than n. The Annals of Statistics 35. MR2382644
[12] Catoni, O. (2003). A PAC-Bayesian approach to adaptative classification. Preprint, Laboratoire de Probabilités et Modèles Aléatoires, PMA-840.
[13] Catoni, O. (2004). Statistical Learning Theory and Stochastic Optimization. Saint-Flour Summer School on Probability Theory 2001 (Jean Picard, ed.), Lecture Notes in Mathematics, Springer. MR2163920
[14] Catoni, O. (2007). PAC-Bayesian Supervised Classification (The Thermodynamics of Statistical Learning). IMS Lecture Notes - Monograph Series 56. MR2483528
[15] Chen, S. S., Donoho, D. L. and Saunders, M. A. (2001). Atomic Decomposition by Basis Pursuit. SIAM Review 43 (1) 129–159. MR1639094
[16] Dalalyan, A. and Tsybakov, A. (2007). Aggregation by exponential weighting and sharp oracle inequalities. Proceedings of the 20th Annual Conference on Learning Theory, Springer-Verlag, Berlin, Heidelberg. MR2397581
[17] Dalalyan, A. and Tsybakov, A. (2008). Aggregation by exponential weighting, sharp oracle inequalities and sparsity. Machine Learning 72 (1-2) 39–61.
[18] Dalalyan, A. and Tsybakov, A. (2010). Mirror averaging with sparsity priors. In minor revision for Bernoulli, arXiv:1003.1189.
[19] Dalalyan, A. and Tsybakov, A. (2010). Sparse Regression Learning by Aggregation and Langevin Monte-Carlo. Preprint arXiv:0903.1223.
[20] Dembo, A. and Zeitouni, O. (1998). Large Deviations Techniques and Applications. Springer. MR1619036
[21] Efron, B., Hastie, T., Johnstone, I. and Tibshirani, R. (2004). Least Angle Regression. The Annals of Statistics 32 (2) 407–499. MR2060166
[22] Frank, L. and Friedman, J. (1993). A Statistical View on Some Chemometrics Regression Tools. Technometrics 16 199–511.
[23] Ghosh, S. (2007). Adaptive Elastic Net: an Improvement of Elastic Net to achieve Oracle Properties. Preprint.
[24] Juditsky, A., Rigollet, P. and Tsybakov, A. (2008). Learning by mirror averaging. The Annals of Statistics 36 (5) 2183–2206. MR2458184
[25] Koltchinskii, V. (2010). Sparsity in Empirical Risk Minimization. Annales de l'Institut Henri Poincaré B: Probability and Statistics, to appear.
[26] Koltchinskii, V. (2009). The Dantzig selector and sparsity oracle inequalities. Bernoulli 15 (3) 799–828. MR2555200
[27] Lehmann, E. L. and Casella, G. (1998). Theory of Point Estimation. Springer. MR1639875
[28] Leung, G. and Barron, A. R. (2006). Information theory and mixing least-squares regressions. IEEE Transactions on Information Theory 52 (8) 3396–3410. MR2242356
[29] Lounici, K. (2007). Generalized mirror averaging and d-convex aggregation. Mathematical Methods of Statistics 16 (3). MR2356820
[30] Lounici, K. (2008). Sup-norm convergence rate and sign concentration property of Lasso and Dantzig estimators. Electronic Journal of Statistics 2 90–102. MR2386087
[31] Mallows, C. L. (1973). Some comments on Cp. Technometrics 15 661–676. MR1365719
[32] Marin, J.-M. and Robert, C. P. (2007). Bayesian Core: A practical approach to computational Bayesian analysis. Springer. MR2289769
[33] Massart, P. (2007). Concentration Inequalities and Model Selection. Saint-Flour Summer School on Probability Theory 2003 (Jean Picard, ed.), Lecture Notes in Mathematics, Springer. MR2319879
[34] McAllester, D. A. (1998). Some PAC-Bayesian theorems. Proceedings of the Eleventh Annual Conference on Computational Learning Theory. ACM, 230–234. MR1811587
[35] Meinshausen, N. and Bühlmann, P. (2010). Stability selection. Journal of the Royal Statistical Society B, to appear.
[36] Rigollet, P. and Tsybakov, A. (2010). Exponential Screening and optimal rates of sparse estimation. The Annals of Statistics, to appear.
[37] Schwarz, G. (1978). Estimating the dimension of a model. The Annals of Statistics 6 461–464. MR0468014
[38] Shawe-Taylor, J. and Williamson, R. (1997). A PAC analysis of a Bayes estimator. Proceedings of the Tenth Annual Conference on Computational Learning Theory. ACM, 2–9.
[39] Tibshirani, R. (1996). Regression shrinkage and selection via the LASSO. Journal of the Royal Statistical Society B 58 (1) 267–288. MR1379242
[40] Tsybakov, A. (2003). Optimal rates of aggregation. Computational Learning Theory and Kernel Machines, Lecture Notes in Artificial Intelligence 2777, Springer, Heidelberg, 303–313.
[41] Tsybakov, A. (2009). Introduction to Nonparametric Estimation. Springer, New York. MR2724359
[42] van de Geer, S. A. and Bühlmann, P. (2009). On the conditions used to prove oracle results for the LASSO. Electronic Journal of Statistics 3 1360–1392. MR2576316
[43] Yang, Y. (2004). Aggregating regression procedures to improve performance. Bernoulli 10 (1) 25–47. MR2044592
[44] Zhang, T. (2008). From epsilon-entropy to KL-entropy: analysis of minimum information complexity density estimation. The Annals of Statistics 34 2180–2210. MR2291500
[45] Zou, H. (2006). The adaptive lasso and its oracle properties. Journal of the American Statistical Association 101 1418–1429. MR2279469
[46] Zou, H. and Hastie, T. (2005). Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society B 67 (2) 301–320. MR2137327