Li et al. Journal of Inequalities and Applications (2016) 2016:142 DOI 10.1186/s13660-016-1087-z
RESEARCH
Open Access
Uniform convergence of estimator for nonparametric regression with dependent data

Xiaoqin Li, Wenzhi Yang* and Shuhe Hu
Correspondence:
[email protected] School of Mathematical Science, Anhui University, Hefei, 230601, China
Abstract

In this paper, the authors investigate the internal estimator of nonparametric regression with dependent data such as α-mixing. Under suitable conditions, such as arithmetically α-mixing and $E|Y_1|^s < \infty$ ($s > 2$), the convergence rate $|\hat m_n(x) - m(x)| = O_P(a_n) + O(h^2)$ and the uniform convergence rate $\sup_{x \in S_f'} |\hat m_n(x) - m(x)| = O_P(a_n) + O(h^2)$ are presented, where $a_n = (\frac{\ln n}{nh^d})^{1/2} \to 0$. We generalize some results in Shen and Xie (Stat. Probab. Lett. 83:1915-1925, 2013).

MSC: 62G05; 62G08

Keywords: nonparametric regression; internal estimator; arithmetically α-mixing; uniform convergence rate
1 Introduction

Kernel-type estimators of the regression function are widely used in various situations because of their flexibility and efficiency, in the dependent case as well as in the independent case. This paper is concerned with the nonparametric regression model
$$Y_i = m(X_i) + U_i, \quad 1 \le i \le n,\ n \ge 1, \tag{1.1}$$
where $(X_i, Y_i) \in \mathbb R^d \times \mathbb R$, $1 \le i \le n$, and the $U_i$ are random variables such that $E(U_i \mid X_i) = 0$, $1 \le i \le n$. Then one has
$$E(Y_i \mid X_i = x) = m(x), \quad 1 \le i \le n,\ n \ge 1.$$
Let $K(x)$ be a kernel function and define $K_h(x) = h^{-d} K(x/h)$, where $h = h_n$ is a sequence of positive bandwidths tending to zero as $n \to \infty$. The most popular nonparametric estimators of the unknown function $m(x)$ are the Nadaraya-Watson estimator $\hat m_{NW}(x)$ given below and local polynomial fitting. For independent data, Nadaraya [2] and Watson [3] gave the most popular nonparametric estimator of the unknown function $m(x)$, named the Nadaraya-Watson estimator
$$\hat m_{NW}(x) = \frac{\sum_{i=1}^n Y_i K_h(x - X_i)}{\sum_{i=1}^n K_h(x - X_i)}. \tag{1.2}$$
Jones et al. [4] considered various versions of kernel-type regression estimators, such as the Nadaraya-Watson estimator (1.2) and the local linear estimator. They also investigated the following internal estimator:
$$\hat m_n(x) = \frac{1}{n}\sum_{i=1}^n \frac{Y_i K_h(x - X_i)}{f(X_i)} \tag{1.3}$$
for a known density $f(\cdot)$. The term 'internal' stands for the fact that the factor $f(X_i)$ is internal to the summation, while the estimator $\hat m_{NW}(x)$ has the factor $\hat f(x) = n^{-1}\sum_{i=1}^n K_h(x - X_i)$ externally to the summation. The internal estimator was first proposed by Mack and Müller [5]. Jones et al. [4] studied various kernel-type regression estimators, including the internal estimator (1.3). Linton and Nielsen [6] introduced an 'integration method' based on direct integration of the initial pilot estimator (1.3). Linton and Jacho-Chávez [7] studied two internal nonparametric estimators similar to the estimator (1.3), but with the classical kernel estimator $\hat f(x) = n^{-1}\sum_{i=1}^n K_h(x - X_i)$ used in place of an unknown density $f(\cdot)$.

Much work has been done on kernel density estimation. For example, Masry [8] gave recursive probability density estimation for a mixing dependent sample, Roussas et al. [9] and Tran et al. [10] investigated fixed design regression for dependent data, Liebscher [11] studied the strong convergence of sums of α-mixing random variables and gave its application to density estimation, and Hansen [12] obtained uniform convergence rates for kernel estimation with dependent data. For more work as regards kernel estimation, we refer to [13-30] and the references therein.

Let $(\Omega, \mathcal F, P)$ be a fixed probability space. Denote $N = \{1, 2, \ldots, n, \ldots\}$. Let $\mathcal F_m^n = \sigma(X_i, m \le i \le n, i \in N)$ be the σ-field generated by the random variables $X_m, X_{m+1}, \ldots, X_n$, $1 \le m \le n$. For $n \ge 1$, we define
$$\alpha(n) = \sup_{m \in N}\ \sup_{A \in \mathcal F_1^m,\ B \in \mathcal F_{m+n}^{\infty}} \bigl|P(AB) - P(A)P(B)\bigr|.$$
Definition 1.1. If $\alpha(n) \downarrow 0$ as $n \to \infty$, then $\{X_n, n \ge 1\}$ is called a strong mixing or α-mixing sequence.

Recently, Shen and Xie [1] obtained the strong consistency of the internal estimator (1.3) under α-mixing data. In their paper, the process is assumed to be a geometrically α-mixing sequence, i.e. the mixing coefficients $\alpha(n)$ satisfy $\alpha(n) \le \beta_1 e^{-\beta_2 n}$, where $\beta_1 > 0$ and $\beta_2 > 0$. They also supposed that the sequence $\{Y_n, n \ge 1\}$ is bounded, as well as the density $f(x)$ of $X_1$. Inspired by Hansen [12], Shen and Xie [1], and the other papers above, we also investigate the convergence of the internal estimator (1.3) under α-mixing data. The process is supposed to be an arithmetically α-mixing sequence, i.e. the mixing coefficients satisfy $\alpha(n) \le Cn^{-\beta}$ with $C > 0$ and $\beta > 0$. Without the boundedness conditions on $\{Y_n, n \ge 1\}$ and on the density $f(x)$ of $X_1$, we establish the convergence rate and the uniform convergence rate for the
internal estimator (1.3). For the details, please see our results in Section 2. The conclusion, and the lemmas and proofs of the main results, are presented in Section 3 and Section 4, respectively.

Regarding notation, for $x = (x_1, \ldots, x_d) \in \mathbb R^d$, set $\|x\| = \max(|x_1|, \ldots, |x_d|)$. Throughout the paper, $C, C_1, C_2, C_3, \ldots$ denote positive constants not depending on $n$, which may be different in various places. $\lfloor x \rfloor$ denotes the largest integer not exceeding $x$; $\to$ means taking the limit as $n \to \infty$, and $\xrightarrow{P}$ means convergence in probability; $X \overset{d}{=} Y$ means that the random variables $X$ and $Y$ have the same distribution. A sequence $\{X_n, n \ge 1\}$ is said to be second-order stationary if $(X_1, X_{1+k}) \overset{d}{=} (X_i, X_{i+k})$, $i \ge 1$, $k \ge 1$.
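To make the contrast between (1.2) and (1.3) concrete, here is a minimal numerical sketch (ours, not part of the original paper; the function names and the Gaussian-kernel example are purely illustrative), assuming the design density $f$ is known, as in (1.3):

```python
import numpy as np

def internal_estimator(x, X, Y, f, h, K):
    """Internal estimator (1.3): the known density f enters inside the sum."""
    d = X.shape[1]
    Kh = K((x - X) / h) / h**d        # K_h(x - X_i) = h^{-d} K((x - X_i)/h)
    return np.mean(Y * Kh / f(X))

def nw_estimator(x, X, Y, h, K):
    """Nadaraya-Watson estimator (1.2): the density estimate sits outside the sum."""
    d = X.shape[1]
    Kh = K((x - X) / h) / h**d
    return np.sum(Y * Kh) / np.sum(Kh)

# Illustrative product Gaussian kernel on R^d; x is (d,), X is (n, d), Y is (n,).
K = lambda u: np.exp(-0.5 * np.sum(u**2, axis=-1)) / (2 * np.pi) ** (u.shape[-1] / 2)
```

The only difference between the two estimators is where the density factor sits; when $f$ is genuinely known, (1.3) avoids the random denominator of (1.2).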
2 Results and discussion

2.1 Some assumptions

Assumption 2.1. We assume that the observed data $\{(X_n, Y_n), n \ge 1\}$, valued in $\mathbb R^d \times \mathbb R$, come from a second-order stationary stochastic sequence. The sequence $\{(X_n, Y_n), n \ge 1\}$ is also assumed to be arithmetically α-mixing with mixing coefficients $\alpha(n)$ such that
$$\alpha(n) \le An^{-\beta}, \tag{2.1}$$
where $A < \infty$, and for some $s > 2$,
$$E|Y_1|^s < \infty \tag{2.2}$$
and
$$\beta \ge \frac{2s - 2}{s - 2}. \tag{2.3}$$
The known density $f(\cdot)$ of $X_1$ has compact support $S_f$, and it is also assumed that $\inf_{x \in S_f} f(x) > 0$. Let $B_1$ be a positive constant such that
$$\sup_{x \in S_f} E\bigl(|Y_1|^s \mid X_1 = x\bigr) f(x) \le B_1. \tag{2.4}$$
Also, there is a $j^* < \infty$ such that for all $j \ge j^*$,
$$\sup_{x_1 \in S_f,\, x_{j+1} \in S_f} E\bigl(|Y_1 Y_{j+1}| \mid X_1 = x_1, X_{j+1} = x_{j+1}\bigr) f_j(x_1, x_{j+1}) \le B_2, \tag{2.5}$$
where $B_2$ is a positive constant and $f_j(x_1, x_{j+1})$ denotes the joint density of $(X_1, X_{j+1})$.

Assumption 2.2. There exist two positive constants $\bar K > 0$ and $\mu > 0$ such that
$$\sup_{u \in \mathbb R^d} K(u) \le \bar K \quad\text{and}\quad \int_{\mathbb R^d} K(u)\,du = \mu. \tag{2.6}$$
Assumption 2.3. Denote by $S_f^0$ the interior of $S_f$. For $x \in S_f^0$, the function $m(x)$ is twice differentiable, and there exists a positive constant $b$ such that
$$\Bigl|\frac{\partial^2 m(x)}{\partial x_i\,\partial x_j}\Bigr| \le b, \quad \forall i, j = 1, 2, \ldots, d.$$
The kernel density function $K(\cdot)$ is symmetric and satisfies
$$\int_{\mathbb R^d} |v_i||v_j| K(v)\,dv < \infty, \quad \forall i, j = 1, 2, \ldots, d.$$
Assumption 2.4. The kernel function satisfies the Lipschitz condition, i.e. there exists an $L > 0$ such that
$$\bigl|K(u_1) - K(u_2)\bigr| \le L\|u_1 - u_2\|, \quad u_1, u_2 \in \mathbb R^d.$$
Remark 2.1. Similarly to the assumptions of Hansen [12], Assumption 2.1 specifies that the serial dependence in the data is of strong mixing type, and equations (2.1)-(2.3) specify a required decay rate. Condition (2.4) controls the tail behavior of the conditional expectation $E(|Y_1|^s \mid X_1 = x)$, and condition (2.5) places a similar bound on the joint density and conditional expectation. Assumptions 2.2-2.4 are conditions on the kernel function $K(u)$: Assumption 2.2 is a general condition, Assumption 2.3 is used to estimate the convergence rate of $|E\hat m_n(x) - m(x)|$, and Assumption 2.4 is used to investigate the uniform convergence rate of the internal estimator $\hat m_n(x)$.
2.2 Main results

First, we investigate the variance bound of the estimator $\hat m_n(x)$. For $1 \le r \le s$ and $s > 2$, denote
$$\bar\mu(r, s) := \frac{(B_1)^{r/s}\,\bar K^{r-1}\,\mu}{(\inf_{x \in S_f} f(x))^{r-1+r/s}},$$
where $B_1$, $\inf_{x \in S_f} f(x)$, $\bar K$, and $\mu$ are defined in Assumptions 2.1 and 2.2.

Theorem 2.1. Let Assumption 2.1 and Assumption 2.2 be fulfilled. Then there exists a $\Theta < \infty$ such that, for $n$ sufficiently large and $x \in S_f$,
$$\operatorname{Var}\bigl(\hat m_n(x)\bigr) \le \frac{\Theta}{nh^d}, \tag{2.7}$$
where $\Theta := \bar\mu(2, s) + 2j^*\bar\mu(2, s) + 2\bigl(\bar\mu^2(1, s) + \frac{B_2\mu^2}{(\inf_{x \in S_f} f)^2}\bigr) + \frac{16s}{s-2}A^{1-2/s}\bar\mu^{2/s}(s, s)$.
As an application of Theorem 2.1, we obtain the weak consistency of the estimator $\hat m_n(x)$.

Corollary 2.1. Let Assumption 2.1 and Assumption 2.2 be fulfilled, and let $K(\cdot)$ be a density function. For $x \in S_f$, $m(x)$ is supposed to be continuous at $x$. If $nh^d \to \infty$ as $n \to \infty$, then
$$\hat m_n(x) \xrightarrow{P} m(x). \tag{2.8}$$
Next, the convergence rate of the estimator $\hat m_n(x)$ is presented.

Theorem 2.2. For $0 < \theta < 1$ and $s > 2$, let Assumptions 2.1-2.3 hold, where the mixing exponent $\beta$ satisfies
$$\beta > \max\Bigl\{\frac{s - 2\theta + \theta s}{(1-\theta)(s-2)},\ \frac{2s-2}{s-2}\Bigr\}. \tag{2.9}$$
Denote $a_n = (\frac{\ln n}{nh^d})^{1/2}$ and take $h = n^{-\theta/d}$. Then, for $x \in S_f^0$, one has
$$\bigl|\hat m_n(x) - m(x)\bigr| = O_P(a_n) + O\bigl(h^2\bigr). \tag{2.10}$$
Third, we now investigate the uniform convergence rate of the estimator $\hat m_n(x)$ over a compact set. Let $S_f'$ be any compact set contained in $S_f^0$.

Theorem 2.3. For $0 < \theta < 1$ and $s > 2$, let Assumptions 2.1-2.3 be fulfilled, where the mixing exponent $\beta$ satisfies
$$\beta > \max\Bigl\{\frac{s\theta d + 3\theta s + sd + s - 2\theta d - 4\theta}{(1-\theta)(s-2)},\ \frac{2s-2}{s-2}\Bigr\}. \tag{2.11}$$
Suppose that Assumption 2.4 is also fulfilled. Denote $a_n = (\frac{\ln n}{nh^d})^{1/2}$ and take $h = n^{-\theta/d}$. Then
$$\sup_{x \in S_f'}\bigl|\hat m_n(x) - m(x)\bigr| = O_P(a_n) + O\bigl(h^2\bigr). \tag{2.12}$$
2.3 Discussion

The parameter $\theta$ in Theorem 2.2 and Theorem 2.3 plays the role of a bridge between the process (i.e. the mixing exponent) and the choice of the positive bandwidth $h$. For example, if $d = 1$, $\theta = 1/3$, and $\beta > \max\{\frac{2s-1}{s-2}, \frac{2s-2}{s-2}\}$, then we take $h = n^{-1/3}$ in Theorem 2.2 and obtain the convergence rate $|\hat m_n(x) - m(x)| = O_P((\ln n)^{1/2} n^{-1/3})$. Similarly, if $d = 1$, $\theta = 1/3$, and $\beta > \frac{5s-3}{s-2}$, then we choose $h = n^{-1/3}$ in Theorem 2.3 and establish the uniform convergence rate $\sup_{x \in S_f'}|\hat m_n(x) - m(x)| = O_P((\ln n)^{1/2} n^{-1/3})$.
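The arithmetic behind these rates is immediate; as a check (our computation, using only the definitions above):
$$h = n^{-\theta/d} \;\Rightarrow\; nh^d = n^{1-\theta} \;\Rightarrow\; a_n = \Bigl(\frac{\ln n}{nh^d}\Bigr)^{1/2} = (\ln n)^{1/2}\,n^{-(1-\theta)/2}, \qquad h^2 = n^{-2\theta/d}.$$
For $d = 1$ and $\theta = 1/3$ this gives $a_n = (\ln n)^{1/2} n^{-1/3}$ and $h^2 = n^{-2/3} = o(a_n)$, so the stochastic term $O_P(a_n)$ dominates the bias term $O(h^2)$ in (2.10) and (2.12).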
3 Conclusion

On the one hand, similarly to Theorem 2.1, Hansen [12] investigated the kernel average estimator
$$\hat\Psi(x) = \frac{1}{n}\sum_{i=1}^n Y_i K_h(x - X_i)$$
and obtained the variance bound $\operatorname{Var}(\hat\Psi(x)) \le \frac{\Theta}{nh^d}$, where $\Theta$ is a positive constant; for the details, see Hansen [12]. Under some other conditions, Hansen [12] also gave uniform convergence rates such as $\sup_{\|x\| \le c_n}|\hat\Psi(x) - E\hat\Psi(x)| = O_P(a_n)$, where $a_n = (\frac{\ln n}{nh^d})^{1/2} \to 0$ and $\{c_n\}$ is a sequence of positive constants.

On the other hand, under conditions such as geometric α-mixing and the boundedness of $\{Y_n, n \ge 1\}$ as well as of the density function $f(x)$ of $X_1$, Shen and Xie [1] obtained the complete convergence $|\hat m_n(x) - m(x)| \xrightarrow{a.c.} 0$ if $\frac{\ln n}{nh^d} \to 0$, and the uniform complete convergence $\sup_{x \in S_f'}|\hat m_n(x) - m(x)| \xrightarrow{a.c.} 0$ if $\frac{\ln n}{nh^d} \to 0$ (see Shen and Xie [1]). In this paper, we do not need the boundedness conditions on $\{Y_n, n \ge 1\}$ and on $f(x)$ of $X_1$, and we also investigate the convergence of the internal estimator $\hat m_n(x)$. Under some weak conditions, such as arithmetic α-mixing and $E|Y_1|^s < \infty$, $s > 2$, we establish the convergence rate in Theorem 2.2, $|\hat m_n(x) - m(x)| = O_P(a_n) + O(h^2)$ if $a_n = (\frac{\ln n}{nh^d})^{1/2} \to 0$, and the uniform convergence rate in Theorem 2.3, $\sup_{x \in S_f'}|\hat m_n(x) - m(x)| = O_P(a_n) + O(h^2)$ if $a_n = (\frac{\ln n}{nh^d})^{1/2} \to 0$. In Theorem 2.2 and Theorem 2.3, we have $|\hat m_n(x) - E\hat m_n(x)| = O_P(a_n)$ and $\sup_{x \in S_f'}|\hat m_n(x) - E\hat m_n(x)| = O_P(a_n)$, where the convergence rates are the same as those obtained by Hansen [12]. So we relatively generalize the results in Shen and Xie [1].
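As an empirical illustration of Theorem 2.3, the following minimal simulation sketch is ours, not part of the paper. It assumes a Gaussian kernel, a hypothetical regression function $m(x) = \sin(2\pi x)$, and a design obtained by the probability integral transform of an AR(1) process, which is geometrically (hence arithmetically) α-mixing with known uniform design density $f \equiv 1$:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def K(u):                                   # Gaussian kernel, d = 1
    return np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)

def m(x):                                   # hypothetical regression function
    return np.sin(2.0 * np.pi * x)

def simulate(n, rho=0.5):
    """AR(1)-driven design: X_i = Phi(z_i) is Uniform(0,1) marginally, alpha-mixing."""
    z = np.empty(n)
    z[0] = rng.standard_normal()            # stationary start (unit variance)
    for i in range(1, n):
        z[i] = rho * z[i - 1] + np.sqrt(1.0 - rho**2) * rng.standard_normal()
    X = norm.cdf(z)                         # known design density: f = 1 on (0, 1)
    Y = m(X) + 0.2 * rng.standard_normal(n)
    return X, Y

def mhat(xgrid, X, Y, h):
    """Internal estimator (1.3) with f = 1: (1/n) sum_i Y_i K_h(x - X_i)."""
    Kh = K((xgrid[:, None] - X[None, :]) / h) / h
    return (Kh * Y[None, :]).mean(axis=1)

xg = np.linspace(0.1, 0.9, 81)              # a compact set interior to the support
for n in (500, 2000, 8000):
    X, Y = simulate(n)
    sup_err = np.max(np.abs(mhat(xg, X, Y, h=n ** (-1 / 3)) - m(xg)))
    print(n, sup_err)                       # should shrink roughly like (ln n)^{1/2} n^{-1/3}
```

With $d = 1$ and $h = n^{-1/3}$, this matches the setting of the discussion in Section 2.3.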
4 Some lemmas and the proofs of the main results

Lemma 4.1 (Hall and Heyde [31], Corollary A.2, i.e. Davydov's lemma). Suppose that $X$ and $Y$ are random variables which are $\mathcal G$-measurable and $\mathcal H$-measurable, respectively, and $E|X|^p < \infty$, $E|Y|^q < \infty$, where $p, q > 1$, $p^{-1} + q^{-1} < 1$. Then
$$\bigl|E(XY) - EX\,EY\bigr| \le 8\bigl(E|X|^p\bigr)^{1/p}\bigl(E|Y|^q\bigr)^{1/q}\bigl(\alpha(\mathcal G, \mathcal H)\bigr)^{1-p^{-1}-q^{-1}}.$$

Lemma 4.2 (Liebscher [32], Proposition 5.1). Let $\{X_n, n \ge 1\}$ be a stationary α-mixing sequence with mixing coefficients $\alpha(k)$. Assume that $EX_i = 0$ and $|X_i| \le S < \infty$ a.s., $i = 1, 2, \ldots, n$. Then, for $n, m \in N$, $0 < m \le n/2$, and all $\varepsilon > 0$,
$$P\Bigl(\Bigl|\sum_{i=1}^n X_i\Bigr| > \varepsilon\Bigr) \le 4\exp\Bigl(-\frac{\varepsilon^2}{64\frac{n}{m}D_m + \frac{8}{3}\varepsilon S m}\Bigr) + 32\frac{S}{\varepsilon}\,n\,\alpha(m),$$
where $D_m = \max_{1 \le j \le 2m}\operatorname{Var}(\sum_{i=1}^j X_i)$.
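To orient the reader (our summary, inferred from how Lemma 4.2 is applied below): with $S \asymp h^{-d}\tau_n$, $D_m \le Cmh^{-d}$, $\varepsilon = na_n$, and the block length $m = \lfloor a_n^{-1}\tau_n^{-1}\rfloor$, both terms in the denominator of the exponent are of order $nh^{-d}$, so
$$\frac{\varepsilon^2}{64\frac{n}{m}D_m + \frac{8}{3}\varepsilon Sm} \asymp \frac{n^2a_n^2}{nh^{-d}} = na_n^2h^d = \ln n,$$
and the exponential term decays polynomially in $n$.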
Lemma 4.3 (Shen and Xie [1]). Under Assumption 2.3, for $x \in S_f^0$, one has
$$\bigl|E\hat m_n(x) - m(x)\bigr| = O\bigl(h^2\bigr).$$

Proof of Theorem 2.1. For $x \in S_f$, let $Z_i := \frac{Y_i K_h(x - X_i)}{f(X_i)}$, $1 \le i \le n$. Consider now
$$\hat m_n(x) = \frac{1}{n}\sum_{i=1}^n \frac{Y_i K_h(x - X_i)}{f(X_i)} = \frac{1}{n}\sum_{i=1}^n Z_i, \quad n \ge 1.$$
For any $1 \le r \le s$ and $s > 2$, it follows from (2.4) and (2.6) that
$$\begin{aligned}
h^{d(r-1)} E|Z_1|^r &= h^{d(r-1)} E\Bigl|\frac{K_h(x - X_1)Y_1}{f(X_1)}\Bigr|^r = h^{d(r-1)} E\Bigl[\frac{|K_h(x - X_1)|^r}{f^r(X_1)}E\bigl(|Y_1|^r \mid X_1\bigr)\Bigr] \\
&= \int_{S_f}\frac{1}{h^d}K^r\Bigl(\frac{x - u}{h}\Bigr)\frac{E(|Y_1|^r \mid X_1 = u)}{f^r(u)}f(u)\,du \\
&\le \int_{S_f}\frac{1}{h^d}K^r\Bigl(\frac{x - u}{h}\Bigr)\frac{[E(|Y_1|^s \mid X_1 = u)f(u)]^{r/s}}{f^{r-1+r/s}(u)}\,du \\
&\le \frac{(B_1)^{r/s}\bar K^{r-1}\mu}{(\inf_{x \in S_f} f)^{r-1+r/s}} := \bar\mu(r, s) < \infty.
\end{aligned} \tag{4.1}$$
For $j \ge j^*$, by (2.5), one has
$$\begin{aligned}
E|Z_1 Z_{j+1}| &= E\Bigl|\frac{K_h(x - X_1)K_h(x - X_{j+1})Y_1 Y_{j+1}}{f(X_1)f(X_{j+1})}\Bigr| = E\Bigl[\frac{|K_h(x - X_1)K_h(x - X_{j+1})|}{f(X_1)f(X_{j+1})}E\bigl(|Y_1 Y_{j+1}| \mid X_1, X_{j+1}\bigr)\Bigr] \\
&= \int_{S_f}\int_{S_f}\frac{1}{h^{2d}}K\Bigl(\frac{x - u_1}{h}\Bigr)K\Bigl(\frac{x - u_{j+1}}{h}\Bigr)E\bigl(|Y_1 Y_{j+1}| \mid X_1 = u_1, X_{j+1} = u_{j+1}\bigr)\frac{f_j(u_1, u_{j+1})}{f(u_1)f(u_{j+1})}\,du_1\,du_{j+1} \\
&\le \frac{B_2}{(\inf_{x \in S_f} f)^2}\int_{\mathbb R^d}\int_{\mathbb R^d}\frac{1}{h^{2d}}K\Bigl(\frac{x - u_1}{h}\Bigr)K\Bigl(\frac{x - u_{j+1}}{h}\Bigr)\,du_1\,du_{j+1} \le \frac{B_2\mu^2}{(\inf_{x \in S_f} f)^2} < \infty.
\end{aligned} \tag{4.2}$$
Define the covariances $\gamma_j = \operatorname{Cov}(Z_1, Z_{j+1})$, $j \ge 1$. Assume that $n$ is sufficiently large so that $h^{-d} \ge j^*$. We now bound the $\gamma_j$ separately for $1 \le j \le j^*$, $j^* < j \le h^{-d}$, and $h^{-d} < j < \infty$. First, for $1 \le j \le j^*$, by the Cauchy-Schwarz inequality and (4.1) with $r = 2$,
$$|\gamma_j| \le \sqrt{\operatorname{Var}(Z_1)\cdot\operatorname{Var}(Z_{j+1})} = \operatorname{Var}(Z_1) \le EZ_1^2 \le \bar\mu(2, s)h^{-d}. \tag{4.3}$$
Second, for $j^* < j \le h^{-d}$, in view of (4.1) ($r = 1$) and (4.2), we establish that
$$|\gamma_j| \le E|Z_1 Z_{j+1}| + \bigl(E|Z_1|\bigr)^2 \le \frac{B_2\mu^2}{(\inf_{x \in S_f} f)^2} + \bar\mu^2(1, s). \tag{4.4}$$
Third, for $j > h^{-d}$, we apply Lemma 4.1, (2.1), and (4.1) with $r = s$ ($s > 2$), and we thus obtain
$$|\gamma_j| \le 8\bigl(\alpha(j)\bigr)^{1-2/s}\bigl(E|Z_1|^s\bigr)^{2/s} \le 8A^{1-2/s}j^{-\beta(1-2/s)}\bar\mu^{2/s}(s, s)h^{-2d(s-1)/s}. \tag{4.5}$$
Consequently, in view of the property of second-order stationarity and (4.3)-(4.5), for $n$ sufficiently large, we establish
$$\operatorname{Var}\bigl(\hat m_n(x)\bigr) = \operatorname{Var}\Bigl(\frac{1}{n}\sum_{i=1}^n Z_i\Bigr) = \frac{1}{n}\gamma_0 + \frac{2}{n}\sum_{j=1}^{n-1}\Bigl(1 - \frac{j}{n}\Bigr)\gamma_j \le \frac{1}{n}\Bigl(h^{-d}\bar\mu(2, s) + 2\sum_{1 \le j \le j^*}|\gamma_j| + 2\sum_{j^* < j \le h^{-d}}|\gamma_j| + 2\sum_{j > h^{-d}}|\gamma_j|\Bigr) \le \frac{\Theta}{nh^d},$$
which is the desired result (2.7). $\square$

Proof of Theorem 2.2. Take $\tau_n = a_n^{-1/(s-1)}$ and denote
$$R_n := \frac{1}{n}\sum_{i=1}^n \frac{Y_i K_h(x - X_i)}{f(X_i)}I\bigl(|Y_i| > \tau_n\bigr).$$
By (2.4) and (2.6), one has
$$\begin{aligned}
E|R_n| &\le E\Bigl[\frac{K_h(x - X_1)}{f(X_1)}E\bigl(|Y_1|I(|Y_1| > \tau_n) \mid X_1\bigr)\Bigr] \le \frac{1}{\inf_{S_f} f}\int_{S_f}K(u)E\bigl(|Y_1|I(|Y_1| > \tau_n) \mid X_1 = x - hu\bigr)f(x - hu)\,du \\
&\le \frac{1}{(\inf_{S_f} f)\tau_n^{s-1}}\int_{S_f}K(u)E\bigl(|Y_1|^s \mid X_1 = x - hu\bigr)f(x - hu)\,du \le \frac{\mu B_1}{(\inf_{S_f} f)\tau_n^{s-1}}.
\end{aligned} \tag{4.6}$$
Combining Markov’s inequality with (.), one has |Rn – ERn | = OP τn–(s–) = OP (an ).
(.)
Denote
$$\tilde m_n(x) = \frac{1}{n}\sum_{i=1}^n \frac{Y_i K_h(x - X_i)}{f(X_i)}I\bigl(|Y_i| \le \tau_n\bigr) := \frac{1}{n}\sum_{i=1}^n \tilde Z_i, \quad n \ge 1. \tag{4.8}$$
It can be seen that
$$\bigl|\hat m_n(x) - m(x)\bigr| \le \bigl|\hat m_n(x) - E\hat m_n(x)\bigr| + \bigl|E\hat m_n(x) - m(x)\bigr| \le \bigl|\tilde m_n(x) - E\tilde m_n(x)\bigr| + |R_n - ER_n| + \bigl|E\hat m_n(x) - m(x)\bigr|. \tag{4.9}$$
Similarly to the proof of (2.7), it can be argued that
$$\operatorname{Var}\Bigl(\sum_{i=1}^j \tilde Z_i\Bigr) \le C_1 j h^{-d}, \tag{4.10}$$
which implies
$$D_m = \max_{1 \le j \le 2m}\operatorname{Var}\Bigl(\sum_{i=1}^j \tilde Z_i\Bigr) \le C_2 m h^{-d}.$$
Meanwhile, one has $|\tilde Z_i - E\tilde Z_i| \le C_3 h^{-d}\tau_n$, $1 \le i \le n$. Setting $m = \lfloor a_n^{-1}\tau_n^{-1}\rfloor$ and using (4.10), $h = n^{-\theta/d}$, and Lemma 4.2 with $\varepsilon = na_n$, we obtain for $n$ sufficiently large
$$\begin{aligned}
P\bigl(\bigl|\tilde m_n(x) - E\tilde m_n(x)\bigr| > a_n\bigr) &= P\Bigl(\Bigl|\sum_{i=1}^n(\tilde Z_i - E\tilde Z_i)\Bigr| > na_n\Bigr) \\
&\le 4\exp\Bigl(-\frac{n^2a_n^2}{(C_4 + C_5)nh^{-d}}\Bigr) + 32\frac{C_3h^{-d}\tau_n}{na_n}\,n\,A(a_n\tau_n)^{\beta} \\
&\le 4\exp\Bigl(-\frac{\ln n}{C_4 + C_5}\Bigr) + C_6h^{-d}a_n^{\frac{\beta(s-2)-s}{s-1}} \\
&\le o(1) + C_6 n^{\theta}\,n^{\frac{(\theta-1)(\beta(s-2)-s)}{2(s-1)}}(\ln n)^{\frac{\beta(s-2)-s}{2(s-1)}} \\
&= o(1) + C_6 n^{\frac{\beta(\theta-1)(s-2)+\theta s+s-2\theta}{2(s-1)}}(\ln n)^{\frac{\beta(s-2)-s}{2(s-1)}} = o(1), 
\end{aligned} \tag{4.11}$$
in view of $s > 2$, $0 < \theta < 1$, $\beta > \max\{\frac{\theta s+s-2\theta}{(1-\theta)(s-2)}, \frac{2s-2}{s-2}\}$, and $\frac{\beta(\theta-1)(s-2)+\theta s+s-2\theta}{2(s-1)} < 0$.

Consequently, by (4.7), (4.9), (4.11), and Lemma 4.3, we establish the result of (2.10). $\square$
Proof of Theorem 2.3. We use some notation similar to that in the proof of Theorem 2.2. Obviously, one has
$$\sup_{x \in S_f'}\bigl|\hat m_n(x) - m(x)\bigr| \le \sup_{x \in S_f'}\bigl|\hat m_n(x) - E\hat m_n(x)\bigr| + \sup_{x \in S_f'}\bigl|E\hat m_n(x) - m(x)\bigr|. \tag{4.12}$$
By the proof of Shen and Xie [1], we establish that
$$\bigl|E\hat m_n(x) - m(x)\bigr| \le \frac{b}{2}h^2\sum_{1 \le i, j \le d}\int_{\mathbb R^d}K(v)|v_i v_j|\,dv \le C_1 h^2,$$
which implies
$$\sup_{x \in S_f'}\bigl|E\hat m_n(x) - m(x)\bigr| = O\bigl(h^2\bigr). \tag{4.13}$$
Since $\hat m_n(x) = R_n(x) + \tilde m_n(x)$,
$$\sup_{x \in S_f'}\bigl|\hat m_n(x) - E\hat m_n(x)\bigr| \le \sup_{x \in S_f'}\bigl|\tilde m_n(x) - E\tilde m_n(x)\bigr| + \sup_{x \in S_f'}|R_n - ER_n|. \tag{4.14}$$
It follows from the proof of (4.7) that
$$\sup_{x \in S_f'}|R_n - ER_n| = O_P(a_n). \tag{4.15}$$
Since $S_f'$ is a compact set, there exists a $\xi > 0$ such that $S_f' \subset B := \{x : \|x\| \le \xi\}$. Let $v_n$ be a positive integer. Take an open covering $\bigcup_{j=1}^{v_n^d}B_{jn}$ of $B$, where $B_{jn} \subset \{x : \|x - x_{jn}\| \le \frac{\xi}{v_n}\}$, $j = 1, 2, \ldots, v_n^d$, and their interiors are disjoint. So it follows that
$$\begin{aligned}
\sup_{x \in S_f'}\bigl|\tilde m_n(x) - E\tilde m_n(x)\bigr| &\le \max_{1 \le j \le v_n^d}\sup_{x \in B_{jn}\cap S_f'}\bigl|\tilde m_n(x) - E\tilde m_n(x)\bigr| \\
&\le \max_{1 \le j \le v_n^d}\sup_{x \in B_{jn}\cap S_f'}\bigl|\tilde m_n(x) - \tilde m_n(x_{jn})\bigr| + \max_{1 \le j \le v_n^d}\bigl|\tilde m_n(x_{jn}) - E\tilde m_n(x_{jn})\bigr| \\
&\quad + \max_{1 \le j \le v_n^d}\sup_{x \in B_{jn}\cap S_f'}\bigl|E\tilde m_n(x_{jn}) - E\tilde m_n(x)\bigr| \\
&:= I_1 + I_2 + I_3.
\end{aligned} \tag{4.16}$$
By the definition of $\tilde m_n(x)$ in (4.8) and the Lipschitz condition on $K$,
$$\bigl|\tilde m_n(x) - \tilde m_n(x_{jn})\bigr| \le \frac{\tau_n}{nh^d\inf_{S_f}f}\sum_{i=1}^n\Bigl|K\Bigl(\frac{x - X_i}{h}\Bigr) - K\Bigl(\frac{x_{jn} - X_i}{h}\Bigr)\Bigr| \le \frac{L\tau_n}{h^{d+1}\inf_{S_f}f}\|x - x_{jn}\|, \quad x \in S_f'.$$
Taking $v_n = \lfloor\frac{\tau_n}{h^{d+1}a_n}\rfloor + 1$, we obtain
$$\sup_{x \in B_{jn}\cap S_f'}\bigl|\tilde m_n(x) - \tilde m_n(x_{jn})\bigr| \le \frac{L\xi a_n}{\inf_{S_f}f}, \quad 1 \le j \le v_n^d, \tag{4.17}$$
and
$$I_1 = \max_{1 \le j \le v_n^d}\sup_{x \in B_{jn}\cap S_f'}\bigl|\tilde m_n(x) - \tilde m_n(x_{jn})\bigr| = O(a_n). \tag{4.18}$$
In view of $|E\tilde m_n(x_{jn}) - E\tilde m_n(x)| \le E|\tilde m_n(x_{jn}) - \tilde m_n(x)|$, we have by (4.17)
$$I_3 = \max_{1 \le j \le v_n^d}\sup_{x \in B_{jn}\cap S_f'}\bigl|E\tilde m_n(x_{jn}) - E\tilde m_n(x)\bigr| \le \frac{L\xi a_n}{\inf_{S_f}f} = O(a_n). \tag{4.19}$$
For $1 \le i \le n$ and $1 \le j \le v_n^d$, denote $\tilde Z_i(j) = \frac{Y_iK_h(x_{jn} - X_i)}{f(X_i)}I(|Y_i| \le \tau_n)$. Then, similarly to the proof of (4.11), we obtain by Lemma 4.2 with $m = \lfloor a_n^{-1}\tau_n^{-1}\rfloor$ and $\varepsilon = Mna_n$, for $n$ sufficiently large,
$$\begin{aligned}
P\bigl(|I_2| > Ma_n\bigr) &= P\Bigl(\max_{1 \le j \le v_n^d}\bigl|\tilde m_n(x_{jn}) - E\tilde m_n(x_{jn})\bigr| > Ma_n\Bigr) \le \sum_{j=1}^{v_n^d}P\Bigl(\Bigl|\sum_{i=1}^n\bigl(\tilde Z_i(j) - E\tilde Z_i(j)\bigr)\Bigr| > Mna_n\Bigr) \\
&\le 4v_n^d\exp\Bigl(-\frac{M^2n^2a_n^2}{(C_1 + MC_2)nh^{-d}}\Bigr) + v_n^d\frac{C_3\tau_n}{Ma_nh^d}A(a_n\tau_n)^{\beta} := I_{21} + I_{22},
\end{aligned} \tag{4.20}$$
where the value of $M$ will be given in (4.22).
In view of $0 < \theta < 1$, $s > 2$, $h = n^{-\theta/d}$, and $a_n = (\frac{\ln n}{nh^d})^{1/2}$, one has $h^{-d(d+1)} = n^{\theta(d+1)}$ and
$$a_n^{-\frac{sd}{s-1}} = (\ln n)^{-\frac{sd}{2(s-1)}}\,n^{\frac{sd(1-\theta)}{2(s-1)}}.$$
Therefore, by $v_n = \lfloor\frac{\tau_n}{h^{d+1}a_n}\rfloor + 1$ and $\tau_n = a_n^{-\frac{1}{s-1}}$, we obtain for $n$ sufficiently large
$$\begin{aligned}
I_{21} &= 4v_n^d\exp\Bigl(-\frac{M^2na_n^2h^d}{C_1 + MC_2}\Bigr) \le C_3h^{-d(d+1)}a_n^{-\frac{sd}{s-1}}\exp\Bigl(-\frac{M^2\ln n}{C_1 + MC_2}\Bigr) \\
&\le C_3(\ln n)^{-\frac{sd}{2(s-1)}}\,n^{\theta(d+1)+\frac{sd(1-\theta)}{2(s-1)}-\frac{M^2}{C_1+MC_2}} = o(1), 
\end{aligned} \tag{4.21}$$
where $M$ is sufficiently large such that
$$\frac{M^2}{C_1 + MC_2} \ge \theta(d+1) + \frac{sd(1-\theta)}{2(s-1)}. \tag{4.22}$$
Meanwhile, by (2.1) and $h = n^{-\theta/d}$, one has for $n$ sufficiently large
$$\begin{aligned}
I_{22} &= v_n^d\frac{C_3\tau_n}{Ma_nh^d}A(a_n\tau_n)^{\beta} \le \frac{C_4}{M}h^{-d(d+2)}a_n^{\frac{\beta(s-2)-s(d+1)}{s-1}} \\
&= \frac{C_4}{M}(\ln n)^{\frac{\beta(s-2)-s(d+1)}{2(s-1)}}\,n^{\frac{\beta(\theta-1)(s-2)+s\theta d+3\theta s+sd+s-2\theta d-4\theta}{2(s-1)}} = o(1), 
\end{aligned} \tag{4.23}$$
in which is used the fact that $s > 2$, $0 < \theta < 1$,
$$\beta > \max\Bigl\{\frac{s\theta d + 3\theta s + sd + s - 2\theta d - 4\theta}{(1-\theta)(s-2)},\ \frac{2s-2}{s-2}\Bigr\},$$
and
$$\frac{\beta(\theta-1)(s-2) + s\theta d + 3\theta s + sd + s - 2\theta d - 4\theta}{2(s-1)} < 0.$$
Thus, by (4.20)-(4.23), we establish that
$$|I_2| = O_P(a_n). \tag{4.24}$$
Finally, the result of (2.12) follows from (4.12)-(4.16), (4.18), (4.19), and (4.24) immediately. $\square$

Competing interests
The authors declare that they have no competing interests.

Authors' contributions
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
Acknowledgements
The authors are deeply grateful to the editors and the anonymous referees for their careful reading and insightful comments, which helped in improving the earlier version of this paper. This work is supported by the National Natural Science Foundation of China (11171001, 11426032, 11501005), the Natural Science Foundation of Anhui Province (1408085QA02, 1508085QA01, 1508085J06, 1608085QA02), the Provincial Natural Science Research Project of Anhui Colleges (KJ2014A010, KJ2014A020, KJ2015A065), the Quality Engineering Project of Anhui Province (2015jyxm054), the Applied Teaching Model Curriculum Project of Anhui University (XJYYKC1401, ZLTS2015053), and the Doctoral Research Start-up Funds Projects of Anhui University.

Received: 28 January 2016 Accepted: 13 May 2016

References
1. Shen, J, Xie, Y: Strong consistency of the internal estimator of nonparametric regression with dependent data. Stat. Probab. Lett. 83, 1915-1925 (2013)
2. Nadaraya, EA: On estimating regression. Theory Probab. Appl. 9, 141-142 (1964)
3. Watson, GS: Smooth regression analysis. Sankhya, Ser. A 26, 359-372 (1964)
4. Jones, MC, Davies, SJ, Park, BU: Versions of kernel-type regression estimators. J. Am. Stat. Assoc. 89, 825-832 (1994)
5. Mack, YP, Müller, HG: Derivative estimation in nonparametric regression with random predictor variable. Sankhya, Ser. A 51, 59-72 (1989)
6. Linton, O, Nielsen, J: A kernel method of estimating structured nonparametric regression based on marginal integration. Biometrika 82, 93-100 (1995)
7. Linton, O, Jacho-Chávez, D: On internally corrected and symmetrized kernel estimators for nonparametric regression. Test 19, 166-186 (2010)
8. Masry, E: Recursive probability density estimation for weakly dependent stationary processes. IEEE Trans. Inf. Theory 32, 254-267 (1986)
9. Roussas, GG, Tran, LT, Ioannides, DA: Fixed design regression for time series: asymptotic normality. J. Multivar. Anal. 40, 262-291 (1992)
10. Tran, LT, Roussas, GG, Yakowitz, S, Van, BT: Fixed-design regression for linear time series. Ann. Stat. 24, 975-991 (1996)
11. Liebscher, E: Strong convergence of sums of α-mixing random variables with applications to density estimation. Stoch. Process. Appl. 65, 69-80 (1996)
12. Hansen, BE: Uniform convergence rates for kernel estimation with dependent data. Econom. Theory 24, 726-748 (2008)
13. Andrews, DWK: Nonparametric kernel estimation for semiparametric models. Econom. Theory 11, 560-596 (1995)
14. Bakouch, HS, Ristić, MM, Sandhya, E, Satheesh, S: Random products and product auto-regression. Filomat 27, 1197-1203 (2013)
15. Bosq, D: Nonparametric Statistics for Stochastic Processes: Estimation and Prediction, 2nd edn. Lecture Notes in Statistics, vol. 110. Springer, Berlin (1998)
16. Bosq, D, Blanke, D: Inference and Prediction in Large Dimensions. Wiley, Chichester (2007)
17. Fan, JQ: Design-adaptive nonparametric regression. J. Am. Stat. Assoc. 87, 998-1004 (1992)
18. Fan, JQ: Local linear regression smoothers and their minimax efficiency. Ann. Stat. 21, 196-216 (1993)
19. Fan, JQ, Yao, QW: Nonlinear Time Series: Nonparametric and Parametric Methods. Springer, New York (2003)
20. Gao, JT, Lu, ZD, Tjostheim, D: Estimation in semiparametric spatial regression. Ann. Stat. 34, 1395-1435 (2006)
21. Gao, JT, Tjostheim, D, Yin, JY: Estimation in threshold autoregressive models with a stationary and a unit root regime. J. Econom. 172, 1-13 (2013)
22. Györfi, L, Kohler, M, Krzyżak, A, Walk, H: A Distribution-Free Theory of Nonparametric Regression. Springer, New York (2002)
23. Ling, NX, Wu, YH: Consistency of modified regression estimation for functional data. Statistics 46, 149-158 (2012)
24. Ling, NX, Xu, Q: Asymptotic normality of conditional density estimation in the single index model for functional time series data. Stat. Probab. Lett. 82, 2235-2243 (2012)
25. Ling, NX, Li, ZH, Yang, WZ: Conditional density estimation in the single functional index model for α-mixing functional data. Commun. Stat., Theory Methods 43, 441-454 (2014)
26. Masry, E: Multivariate local polynomial regression for time series: uniform strong consistency and rates. J. Time Ser. Anal. 17, 571-599 (1996)
27. Newey, WK: Kernel estimation of partial means and a generalized variance estimator. Econom. Theory 10, 233-253 (1994)
28. Peligrad, M: Properties of uniform consistency of the kernel estimators of density and of regression functions under dependence conditions. Stoch. Stoch. Rep. 40, 147-168 (1992)
29. Rosenblatt, M: Remarks on some non-parametric estimates of a density function. Ann. Math. Stat. 27, 832-837 (1956)
30. Stone, CJ: Consistent nonparametric regression. Ann. Stat. 5, 595-645 (1977)
31. Hall, P, Heyde, CC: Martingale Limit Theory and Its Application. Academic Press, New York (1980)
32. Liebscher, E: Estimation of the density and the regression function under mixing conditions. Stat. Decis. 19, 9-26 (2001)