Bulletin of the International Statistical Institute, Volume LXII (2007)
PROCEEDINGS

GUEST EDITORS
M. Ivette Gomes, DEIO (FCUL) and CEAUL, Universidade de Lisboa
José Alberto Pinto Martins, Instituto Nacional de Estatística
José Alberto Silva, Instituto Nacional de Estatística
CHAPTER 3. CONTRIBUTED PAPERS
CPM 014: Linear Models
Maximum Likelihood Estimator in Models with Commutative Orthogonal Block Structure

Carvalho, Francisco
Instituto Politécnico de Tomar - Escola Superior de Gestão de Tomar, Área Interdepartamental de Matemática
Estrada da Serra - Quinta do Contador
2300-313 Tomar, Portugal
E-mail: [email protected]

Mexia, João T.
Faculdade de Ciências e Tecnologia - Universidade Nova de Lisboa
Monte da Caparica
2829-516 Caparica, Portugal
E-mail: [email protected]

Oliveira, M. Manuela
Department of Mathematics - Évora University
Colégio Luís António Verney, R. Romão Ramalho, 59
7002 Évora, Portugal
E-mail: [email protected]

Models with Commutative Orthogonal Block Structure

We assume that we have a normal model with Commutative Orthogonal Block Structure (COBS). A model with this structure has variance-covariance matrix given by

$$V(\sigma^2) = \sum_{i=1}^{m} \sigma_i^2 P_i,$$
where $P_1, \ldots, P_m$ are known orthogonal projection matrices, pairwise orthogonal, that commute with $T$, the orthogonal projection matrix on $\Omega$, the range space $R(X)$ of $X$, which is spanned by the mean vector $\mu = X\beta$. Since $V$ and $T$ commute, even without assuming normality, the Least Squares Estimators (LSE) of estimable vectors are BLUE, see [4]. We recall that $\Psi = A\beta$ is estimable if $R(A^t) \subseteq R(X^t)$, and that its LSE is $\widehat{\Psi} = A\widehat{\beta}$ with

$$\widehat{\beta} = (X^t X)^+ X^t y,$$

where $^+$ indicates the Moore-Penrose inverse, and

$$\widehat{\Psi} \sim N\!\left(\Psi,\; A\left(X^t V^{-1}(\sigma^2) X\right)^+ A^t\right).$$

Likelihood

We will now assume normality. From [1], we know that

$$\det(V(\sigma^2)) = \prod_{i=1}^{m} (\sigma_i^2)^{g_i},$$

with $g_i = \operatorname{rank}(P_i)$, and

$$V^{-1}(\sigma^2) = \sum_{i=1}^{m} \frac{1}{\sigma_i^2} P_i.$$
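As a numerical illustration, the structure above can be checked on a small hypothetical COBS model (our own toy construction, not taken from the paper): $n = 4$, the mean space spanned by a single indicator column, and two pairwise orthogonal projectors summing to the identity.

```python
import numpy as np

# Hypothetical toy COBS model (illustration only): n = 4, mean space R(X)
# spanned by one indicator column, P1 and P2 pairwise orthogonal projectors.
n = 4
X = np.array([[1.0], [1.0], [0.0], [0.0]])
T = X @ np.linalg.pinv(X)               # orthogonal projection onto R(X)
P1 = np.diag([1.0, 1.0, 0.0, 0.0])
P2 = np.diag([0.0, 0.0, 1.0, 1.0])

# COBS conditions: pairwise orthogonality and commutation with T.
assert np.allclose(P1 @ P2, np.zeros((n, n)))
assert np.allclose(P1 @ T, T @ P1) and np.allclose(P2 @ T, T @ P2)

s2 = np.array([2.0, 0.5])               # variance components sigma_i^2 (arbitrary)
g = [round(np.trace(P)) for P in (P1, P2)]   # g_i = rank(P_i) = trace(P_i)
V = s2[0] * P1 + s2[1] * P2

# Check V^{-1} = sum_i (1/sigma_i^2) P_i and det V = prod_i (sigma_i^2)^{g_i}.
Vinv = (1.0 / s2[0]) * P1 + (1.0 / s2[1]) * P2
assert np.allclose(Vinv, np.linalg.inv(V))
assert np.isclose(np.linalg.det(V), s2[0] ** g[0] * s2[1] ** g[1])
print("inverse and determinant formulas verified")
```

The inverse and determinant formulas hold because $V$ is a linear combination of projectors onto complementary orthogonal subspaces, so it acts as the scalar $\sigma_i^2$ on the range of each $P_i$.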
It is easy to write the log-likelihood as

$$l(\beta, \sigma^2 \mid y) = -\frac{1}{2}(y - X\beta)^t V^{-1}(\sigma^2)(y - X\beta) - \sum_{i=1}^{m} \frac{g_i}{2} \log \sigma_i^2 - \frac{n}{2} \log 2\pi.$$
To carry out the maximization of the log-likelihood, we use a two-step approach. In the first step, we minimize $(y - X\beta)^t V^{-1}(\sigma^2)(y - X\beta)$ as a function of $\beta$ for a given $\sigma^2$. Putting $T^c = I - T$, we have

$$\begin{aligned}
(y - X\beta)^t (T + T^c) V^{-1}(\sigma^2)(T + T^c)(y - X\beta)
&= (y - X\beta)^t T V^{-1}(\sigma^2) T (y - X\beta) + (y - X\beta)^t T^c V^{-1}(\sigma^2) T^c (y - X\beta) \\
&= (Ty - X\beta)^t V^{-1}(\sigma^2)(Ty - X\beta) + (T^c y)^t V^{-1}(\sigma^2)(T^c y),
\end{aligned}$$

since $T V^{-1}(\sigma^2) T^c = V^{-1}(\sigma^2) T T^c = 0$, because $T T^c = 0$. Therefore we only have to minimize the first term. The minimum is attained for $\beta = \widehat{\beta}$ with

$$X\widehat{\beta} = T y,$$

that is, for the LSE of $\beta$. Going over to the second step, we have
$$\begin{aligned}
-\frac{1}{2}(y - X\widehat{\beta})^t V^{-1}(\sigma^2)(y - X\widehat{\beta}) - \sum_{i=1}^{m} \frac{g_i}{2} \log \sigma_i^2 - \frac{n}{2}\log 2\pi
&= -\frac{1}{2}(T^c y)^t V^{-1}(\sigma^2)(T^c y) - \sum_{i=1}^{m} \frac{g_i}{2}\log \sigma_i^2 - \frac{n}{2}\log 2\pi \\
&= -\frac{1}{2}\sum_{i=1}^{m}\left[\frac{1}{\sigma_i^2}(T^c y)^t P_i (T^c y) + g_i \log \sigma_i^2\right] - \frac{n}{2}\log 2\pi,
\end{aligned}$$

since $T^c X = 0$ and $T X = X$. Thus, the maximum for the $\sigma_i^2$, $i = 1, \ldots, m$, will be given by

$$\widehat{\sigma}_i^2 = \frac{S_i}{g_i}, \quad i = 1, \ldots, m,$$

with

$$S_i = (T^c y)^t P_i (T^c y) = y^t T^c P_i T^c y, \quad i = 1, \ldots, m.$$
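Both maximization steps can be sketched numerically on the same hypothetical toy model used above (data and projectors are illustrative assumptions, not from the paper): the GLS fit equals $Ty$ for every $\sigma^2$, and the variance-component MLEs are $S_i/g_i$.

```python
import numpy as np

# Two-step maximization on a hypothetical toy COBS model (illustration only).
rng = np.random.default_rng(1)
n = 4
X = np.array([[1.0], [1.0], [0.0], [0.0]])
T = X @ np.linalg.pinv(X)               # projection onto R(X)
Tc = np.eye(n) - T
P = [np.diag([1.0, 1.0, 0.0, 0.0]), np.diag([0.0, 0.0, 1.0, 1.0])]
g = [2, 2]                              # g_i = rank(P_i)
y = rng.normal(size=n)

# Step 1: for any sigma^2, the GLS minimizer of the quadratic form
# satisfies X beta_hat = T y, i.e. it coincides with the LSE.
for s2 in ([1.0, 1.0], [2.0, 0.5], [0.1, 9.0]):
    Vinv = (1.0 / s2[0]) * P[0] + (1.0 / s2[1]) * P[1]
    beta = np.linalg.pinv(X.T @ Vinv @ X) @ (X.T @ Vinv @ y)
    assert np.allclose(X @ beta, T @ y)

# Step 2: with beta fixed at the LSE, the profile log-likelihood is
# maximized at sigma_i^2 = S_i / g_i, where S_i = y' Tc P_i Tc y.
S = [y @ Tc @ Pi @ Tc @ y for Pi in P]
s2_mle = [Si / gi for Si, gi in zip(S, g)]
assert np.isclose(sum(S), y @ Tc @ y)   # the S_i decompose the residual sum of squares
print("MLE variance components:", s2_mle)
```

Since $\sum_i P_i = I$ in this toy model, the $S_i$ add up to the residual sum of squares $\|T^c y\|^2$, which the loop above checks.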
Estimable Vectors

The density function of $y$ may be written as

$$n\!\left(y \mid X\beta, V(\sigma^2)\right) = \frac{1}{(2\pi)^{n/2} \prod_{i=1}^{m} (\sigma_i^2)^{g_i/2}} \exp\left\{-\frac{1}{2}\left[(\widehat{\beta} - \beta)^t X^t V^{-1}(\sigma^2) X (\widehat{\beta} - \beta) + \sum_{i=1}^{m} \frac{S_i}{\sigma_i^2}\right]\right\},$$
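This factorization can be verified numerically on the hypothetical toy model used above (all concrete matrices and parameter values are illustrative assumptions): the factorized log-density agrees with the direct evaluation of the $N(X\beta, V(\sigma^2))$ log-density.

```python
import numpy as np

# Check of the factorized density on a hypothetical toy COBS model
# (the matrices, beta and sigma^2 values are illustrative assumptions).
rng = np.random.default_rng(2)
n = 4
X = np.array([[1.0], [1.0], [0.0], [0.0]])
T = X @ np.linalg.pinv(X)
Tc = np.eye(n) - T
P = [np.diag([1.0, 1.0, 0.0, 0.0]), np.diag([0.0, 0.0, 1.0, 1.0])]
g = np.array([2, 2])
s2 = np.array([2.0, 0.5])
V = s2[0] * P[0] + s2[1] * P[1]
Vinv = np.linalg.inv(V)

beta = np.array([0.7])
y = rng.normal(size=n)
beta_hat = np.linalg.pinv(X) @ y        # LSE: X beta_hat = T y
S = np.array([y @ Tc @ Pi @ Tc @ y for Pi in P])

# Direct log-density of N(X beta, V(sigma^2)) ...
d = y - X @ beta
direct = (-0.5 * d @ Vinv @ d
          - 0.5 * np.log(np.linalg.det(V))
          - 0.5 * n * np.log(2.0 * np.pi))

# ... versus the factorized form: quadratic in (beta_hat - beta) plus
# sum S_i/sigma_i^2, normalized by (2 pi)^{n/2} prod (sigma_i^2)^{g_i/2}.
quad = ((beta_hat - beta) @ X.T @ Vinv @ X @ (beta_hat - beta)
        + np.sum(S / s2))
factored = -0.5 * quad - 0.5 * np.sum(g * np.log(s2)) - 0.5 * n * np.log(2.0 * np.pi)
assert np.isclose(direct, factored)
print("factorized density matches direct evaluation")
```

The agreement follows from the orthogonal split of the quadratic form derived earlier, together with $\log\det V(\sigma^2) = \sum_i g_i \log \sigma_i^2$.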
so $\widehat{\beta}$ and $S_1, \ldots, S_m$ constitute a complete sufficient statistic. According to the Blackwell-Lehmann-Scheffé theorem, the maximum likelihood estimator of any estimable vector will therefore be UMVUE, and

$$\widetilde{\sigma}_i^2 = \frac{S_i}{p_i}, \quad i = 1, \ldots, m,$$

with

$$p_i = \operatorname{rank}(T^c P_i T^c) = \operatorname{rank}(T^c P_i), \quad i = 1, \ldots, m,$$

will be UMVUE for the variance components. We point out that

$$\widetilde{\sigma}_i^2 = \frac{g_i}{p_i}\, \widehat{\sigma}_i^2, \quad i = 1, \ldots, m,$$

so we have the connection factors $g_i/p_i$, $i = 1, \ldots, m$, to derive the UMVUE from the maximum likelihood estimators of the variance components.
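On the same hypothetical toy model used above (again an illustrative assumption, not from the paper), $P_1$ overlaps the mean space $R(X)$, so $p_1 = \operatorname{rank}(T^c P_1 T^c) = 1 < g_1 = 2$ and the connection factor $g_1/p_1$ is nontrivial.

```python
import numpy as np

# Connection factors g_i / p_i on a hypothetical toy COBS model
# (illustration only): P1 overlaps the mean space R(X), so p_1 < g_1.
n = 4
X = np.array([[1.0], [1.0], [0.0], [0.0]])
T = X @ np.linalg.pinv(X)
Tc = np.eye(n) - T
P = [np.diag([1.0, 1.0, 0.0, 0.0]), np.diag([0.0, 0.0, 1.0, 1.0])]
g = np.array([2, 2])                    # g_i = rank(P_i)

p = np.array([np.linalg.matrix_rank(Tc @ Pi @ Tc) for Pi in P])
assert list(p) == [1, 2]                # rank(Tc P1 Tc) = 1 here, rank(Tc P2 Tc) = 2

rng = np.random.default_rng(3)
y = rng.normal(size=n)
S = np.array([y @ Tc @ Pi @ Tc @ y for Pi in P])
s2_mle = S / g                          # maximum likelihood estimators S_i / g_i
s2_umvue = S / p                        # UMVUE S_i / p_i
assert np.allclose(s2_umvue, (g / p) * s2_mle)
print("connection factors g/p =", g / p)
```

Here the factor $g_1/p_1 = 2$ corrects the MLE for the part of $R(P_1)$ absorbed by the mean space, while $g_2/p_2 = 1$ leaves the second component unchanged.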
References

[1] Fonseca, M., Mexia, J.T., Zmyślony, R. (2006). Binary operations on Jordan algebras and orthogonal normal models. Linear Algebra and Its Applications, 75-86.
[2] Mexia, J.T. (1990). Best linear unbiased estimates, duality of F tests and the Scheffé multiple comparison method in the presence of controlled heteroscedasticity. Computational Statistics & Data Analysis, Vol. 10, no. 3.
[3] Mexia, J.T. (1995). Introdução à Inferência Estatística Linear. Edições Universitárias Lusófonas.
[4] Zmyślony, R. (1980). A characterization of best linear unbiased estimators in the general linear model. Mathematical Statistics and Probability Theory, Proc. 6th Int. Conf., Wisła, Poland 1978, Lecture Notes in Statistics 2, 365-373.
ABSTRACT

A model with Orthogonal Block Structure has variance-covariance matrix given by a linear combination of known orthogonal projection matrices that are pairwise orthogonal; the variance components are the coefficients of that linear combination. If those projection matrices commute with the orthogonal projection matrix on the space spanned by the mean vector, the model has Commutative Orthogonal Block Structure. We show that, for normal models with Commutative Orthogonal Block Structure, the Least Squares Estimators of estimable vectors are both maximum likelihood estimators and UMVUE. We also show that the UMVUE for the variance components are obtained by multiplying the maximum likelihood estimators by known connection factors.