arXiv:1801.08898v1 [math.ST] 26 Jan 2018

A note on “MLE in logistic regression with a diverging dimension”

Huiming Zhang

January 29, 2018

Abstract. This short note points out that the proof of the high-dimensional asymptotic normality of the MLE for logistic regression under the regime $p_n = o(n)$, given in the paper “Maximum likelihood estimation in logistic regression models with a diverging number of covariates. Electronic Journal of Statistics, 6, 1838-1846.”, is wrong.

Keywords: high-dimensional logistic regression; generalized linear models; asymptotic normality.

In order to maintain the preciseness of the statistical scientific record, I write and post this note on arXiv to avoid misleading readers. Under mild conditions, [1] appears to give a concise proof that the MLE for logistic regression is asymptotically normal when the number of covariates $p_n$ goes to infinity with the sample size $n$ at the rate $p_n = o(n)$. [1] claimed that this result sharpens the existing asymptotic normality results: for example, [2] studied the MLE for GLMs with the dimension increasing at the rate $p_n = o(n^{2/3})$, and [4] analysed the GEE estimator for logistic regression with $p_n = o(n^{1/3})$. The result of [1] has been cited by some recent papers, such as [3] on the high-dimensional likelihood ratio test in logistic regression when $p_n/n < 1/2$. Nevertheless, a careful reading shows that Lemma 3 in [1], which is adapted from claims (18) and (19) in [6] for quasi-likelihood estimates, is not true. We restate Lemma 3 and the first part of its proof in [1] below:

Lemma 3 ([1]). Under the conditions of Theorem 1, we have
$$\sup_{\beta \in N_n(\delta)} \left| u^T G_n^{-1/2}(\beta_0) Q_n(\beta) G_n^{-1/2}(\beta_0) u - 1 \right| \to 0 \quad (u^T u = 1),$$

where $Q_n(\beta) := \frac{\partial L_n(\beta)}{\partial \beta^T}$, $G_n(\beta_0) = \sum_{i=1}^n x_i H(x_i^T \beta_0) x_i^T$ and $N_n(\delta) := \{\beta : \| G_n^{-1/2}(\beta_0)(\beta - \beta_0) \| \le \delta\}$.

Proof. Let $\varepsilon_i := y_i - h(x_i^T \beta_0)$, $\sigma_i^2 = \operatorname{var} \varepsilon_i$. A direct calculation yields

$$u^T G_n^{-1/2}(\beta_0) Q_n(\beta) G_n^{-1/2}(\beta_0) u - 1 = A_n(\beta) - B_n(\beta_0) - C_n(\beta), \qquad (1)$$
where
$$A_n(\beta) = u^T G_n^{-1/2}(\beta_0) G_n(\beta) G_n^{-1/2}(\beta_0) u - 1,$$
$$B_n(\beta_0) = \sum_{i=1}^n u^T G_n^{-1/2}(\beta_0) x_i x_i^T G_n^{-1/2}(\beta_0) u \, \varepsilon_i,$$
$$C_n(\beta) = \sum_{i=1}^n u^T G_n^{-1/2}(\beta_0) x_i x_i^T G_n^{-1/2}(\beta_0) u \, [h(x_i^T \beta_0) - h(x_i^T \beta)], \quad \cdots$$

The so-called “direct calculation” verifying the decomposition (1) (modified from decomposition (21) in [6]) does not hold. Notice that $G_n(\beta)$ and $Q_n(\beta)$ are in fact equal in the setting of [1] (first paragraph of p. 1840); thus we must have
$$B_n(\beta_0) + C_n(\beta) = \sum_{i=1}^n u^T G_n^{-1/2}(\beta_0) x_i x_i^T G_n^{-1/2}(\beta_0) u \, [y_i - h(x_i^T \beta)] \equiv 0,$$

which is impossible. In fact, the definition $Q_n(\beta) := \frac{\partial L_n(\beta)}{\partial \beta^T}$ in Lemma 3 of [1] is borrowed directly from (21) in [6], which concerns the partial derivatives of the log quasi-likelihood w.r.t. $\beta$,
$$L_n^{\mathrm{quasi}}(\beta) := \sum_{i=1}^n x_i H(x_i^T \beta) [\Sigma(x_i^T \beta)]^{-1} [y_i - h(x_i^T \beta)],$$
where $H(t) := \frac{dh(t)}{dt}$ and $\Sigma(x_i^T \beta) := \operatorname{Cov}_\beta(y_i)$. Directly applying decomposition (21) of [6] does not make sense, since the partial derivatives of the log quasi-likelihood w.r.t. $\beta$ depend on the responses $\{y_i\}_{i=1}^n$, whereas the responses in the partial derivatives of the log likelihood

$$L_n^{\mathrm{mle}}(\beta) := \sum_{i=1}^n x_i [y_i - h(x_i^T \beta)]$$

are cancelled, i.e. $Q_n(\beta) = \sum_{i=1}^n x_i H(x_i^T \beta) x_i^T$, which is not random in Lemma 3 of [1], since

the covariates are assumed to be deterministic. Actually, the symbol “Q” stands for “quasi-”; however, [1] confusingly used it for the log likelihood of logistic regression. For the rest of the proof, the asymptotic derivations are almost the same as in [6], whose matrix computations and mathematical arguments (such as the local inverse function theorem) play an essential role.

In the last paragraph of [1], the authors say: “We believe that the procedure can be extended to other generalized linear models and similar theoretical results may be established with straightforward derivations. One potential complication for other generalized linear models is that the response y may not be bounded as in logistic regression models. Other possible extensions are to the Cox model, robust regression, and procedures based on quasi-likelihood functions. Further effort is needed to build up similar procedure and theoretical results under these settings.” Six years after publication, a Google Scholar citation search turns up no papers that extend their rate $p_n = o(n)$ to other generalized linear models with straightforward derivations. Consider, for example, Theorem 2 of [5], which cites [1]: this Russian paper extends [4] to multivariate logistic regression with a diverging number of covariates, yet it only obtains asymptotic normality of the MLE with $p_n = o(n^{1/3})$. The proof techniques in [5] are also borrowed from [6].
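The cancellation of the responses noted above can be checked numerically. The following NumPy sketch is my own illustration (none of this code comes from [1] or [6], and the helper names are ad hoc): it differentiates the logistic score by central finite differences for two different response vectors and confirms that the resulting matrix does not change, and that it equals (up to sign convention) the deterministic matrix $\sum_{i=1}^n H(x_i^T \beta) x_i x_i^T$.

```python
import numpy as np

# Sketch: the logistic score sum_i x_i (y_i - h(x_i^T beta)) depends on the
# responses y_i, but its derivative w.r.t. beta^T does not -- the y_i terms
# are constant in beta and vanish upon differentiation, so Q_n(beta) is
# deterministic for a fixed design.

def h(t):
    """Logistic mean function h(t) = 1 / (1 + e^{-t})."""
    return 1.0 / (1.0 + np.exp(-t))

def score(X, y, beta):
    """MLE score: sum_i x_i (y_i - h(x_i^T beta))."""
    return X.T @ (y - h(X @ beta))

def num_jacobian(X, y, beta, eps=1e-6):
    """Central finite-difference derivative of the score w.r.t. beta^T."""
    p = beta.size
    J = np.zeros((p, p))
    for j in range(p):
        e = np.zeros(p)
        e[j] = eps
        J[:, j] = (score(X, y, beta + e) - score(X, y, beta - e)) / (2 * eps)
    return J

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))            # fixed (deterministic) design
beta = rng.normal(size=3)
y1 = rng.integers(0, 2, size=50).astype(float)
y2 = 1.0 - y1                           # a completely different response vector

Q1 = num_jacobian(X, y1, beta)
Q2 = num_jacobian(X, y2, beta)

# The scores differ, yet their derivatives coincide and match the
# sign-flipped deterministic matrix sum_i H(x_i^T beta) x_i x_i^T,
# where H(t) = h(t)(1 - h(t)) for the logistic link.
w = h(X @ beta) * (1.0 - h(X @ beta))
G = (X * w[:, None]).T @ X
print(np.allclose(Q1, Q2), np.allclose(-Q1, G, atol=1e-5))  # prints: True True
```

The minus sign appears because differentiating $y_i - h(x_i^T \beta)$ in $\beta$ gives $-H(x_i^T \beta) x_i^T$; whichever sign convention is used, the derivative is free of $\{y_i\}$.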


References

[1] Liang, H., Du, P. (2012). Maximum likelihood estimation in logistic regression models with a diverging number of covariates. Electronic Journal of Statistics, 6, 1838-1846.

[2] Portnoy, S. (1988). Asymptotic behavior of likelihood methods for exponential families when the number of parameters tends to infinity. The Annals of Statistics, 16(1), 356-366.

[3] Sur, P., Chen, Y., Candès, E. J. (2017). The Likelihood Ratio Test in High-Dimensional Logistic Regression Is Asymptotically a Rescaled Chi-Square. arXiv preprint arXiv:1706.01191.

[4] Wang, L. (2011). GEE analysis of clustered binary data with diverging number of covariates. The Annals of Statistics, 39(1), 389-417.

[5] Khaplanov, A. Yu. (2013). Asymptotic normality of the estimation of the multivariate logistic regression. Informatics and its Applications, 7(2), 69-74. In Russian, http://mi.mathnet.ru/ia262

[6] Yin, C., Zhao, L., Wei, C. (2006). Asymptotic normality and strong consistency of maximum quasi-likelihood estimates in generalized linear models. Science in China Series A, 49(2), 145-157.

School of Mathematical Sciences, Peking University, Beijing, P.R. China
[email protected]
