Finally let us note that (as shown in [6]) under fairly general conditions an exact (not asymptotic) coincidence of the Pitman estimator with the polynomial estimator takes place only for a normal density $f(x)$. In this case the Pitman estimator coincides with the sample mean $\bar{x} = \frac{1}{n}\sum_{k=1}^{n} x_k$.
LITERATURE CITED
1. A. M. Kagan, "On the estimation theory of location parameter," Sankhya, A28, 4 (1966).
2. A. M. Kagan, Yu. V. Linnik, and C. R. Rao, Characterization Problems of Mathematical Statistics [in Russian], Moscow (1972).
3. E. J. G. Pitman, "The estimation of location and scale parameters of a continuous population of any given form," Biometrika, 30, III-IV (1938).
4. H. Cramér, Mathematical Methods of Statistics, Princeton (1957).
5. I. A. Ibragimov and R. Z. Khas'minskii, "Asymptotic behavior of certain statistical estimates in the smooth case," Teor. Veroyatn. Ee Primen., 17, 3 (1972); 18, 1 (1973).
6. L. B. Klebanov, "Inadmissibility of polynomial estimates of location parameter," Mat. Zametki, 14, 6 (1973).
UNBIASED ESTIMATES AND CONVEX LOSS FUNCTIONS

L. B. Klebanov

UDC 519.281
Under fairly general assumptions it is proved that unbiased estimates of minimal variance are optimal in the class of unbiased estimates with respect to any convex loss function. The analysis is extended to the case of matrix loss functions. A class of loss functions is introduced and studied that are universal for a given family of distributions.

§1. Introduction and Principal Definitions

On a measurable space $(\mathfrak{X}, \mathfrak{A})$ let us consider a family $\{P_\theta,\ \theta \in \Theta\}$ of probability measures. We shall assume that on the basis of an observation $x$ that has a distribution $P_\theta$ with unknown $\theta$, it is necessary to estimate the value of a given parametric function $\gamma : \Theta \to \mathfrak{M}_m$, where $\mathfrak{M}_m$ is the set of all square matrices of dimension $m \times m$ with real elements. We shall assume that the losses due to the use of the statistic $t : \mathfrak{X} \to \mathfrak{M}_m$ as an estimate of $\gamma$ are specified by a matrix-valued function $w : \mathfrak{M}_m \times \mathfrak{M}_m \to \mathfrak{M}_m$. We shall also assume that the set $\mathfrak{M}_m$ has an order relation $\succeq$.* The function $w$ is assumed to be convex in the first argument for any fixed value of the second argument (the term "loss function" will be applied to such functions only). If $t$ is an estimate of $\gamma$, then its risk corresponding to the loss function $w$ will be defined by the matrix

\[ R_\theta(t) = E_\theta\, w\bigl(t, \gamma(\theta)\bigr). \]

*It is assumed that $\mathfrak{M}_m$ is an ordered vector space over the field of real numbers.

Translated from Zapiski Nauchnykh Seminarov Leningradskogo Otdeleniya Matematicheskogo Instituta im. V. A. Steklova AN SSSR, Vol. 34, pp. 40-52, 1974.
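For orientation, in the scalar case $m = 1$ with the quadratic loss $w(t, \gamma) = (t - \gamma)^2$, the risk of an unbiased estimate reduces to its variance:

\[ R_\theta(t) = E_\theta \bigl(t - \gamma(\theta)\bigr)^2 = \mathrm{Var}_\theta\, t, \]

so that for this loss the minimum-risk problem over unbiased estimates is the classical problem of minimum-variance unbiased estimation considered in §2.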
We shall say that an estimate $t$ is better than an estimate $\tilde{t}$ (or that $\tilde{t}$ is worse than $t$) if for any $\theta \in \Theta$ the matrix $R_\theta(\tilde{t}) - R_\theta(t)$ is nonnegative (with respect to the order relation $\succeq$), and strictly better than $\tilde{t}$ if this matrix is nonzero for at least one $\theta \in \Theta$.

Let $\mathfrak{K}$ be a class of estimates. A statistic $t \in \mathfrak{K}$ is said to be optimal in the class $\mathfrak{K}$ (or admissible in the class $\mathfrak{K}$) if it is better (or not worse) than any estimate $\tilde{t} \in \mathfrak{K}$.
As a rule we shall consider the case that the class $\mathfrak{K}$ is a subclass of the class of unbiased estimates (u.e.) $t$ of a parametric function $\gamma$ with finite covariance matrices, i.e., of estimates $t$ such that

\[ E_\theta\, t = \gamma(\theta) \qquad (1) \]

for any $\theta \in \Theta$, and there exists

\[ E_\theta \bigl(t - \gamma(\theta)\bigr)\bigl(t - \gamma(\theta)\bigr)^T \qquad (2) \]

(here $T$ denotes transposition).

An estimate $t$ that is optimal in the class $\mathfrak{K}$ of unbiased estimates with a finite covariance matrix (for a given order relation $\succeq$ and loss function $w$) is called an unbiased estimate of minimum risk (u.e.m.r.) in the class $\mathfrak{K}$.
The principal object of study in this paper is the relationship between the u.e.m.r. for various loss functions $w$ and order relations $\succeq$ in the set $\mathfrak{M}_m$. Similar investigations for the case of real-valued parametric functions, estimates, and loss functions were carried out in [1, 2]. Matrix loss functions were studied in [3-5]. Here we present results that extend the corresponding theorems of [5, 6].

§2. Optimal Estimates of Minimum Variance
Now let us consider the scalar case, when $\gamma : \Theta \to \mathbb{R}^1$, $w : \mathbb{R}^1 \times \mathbb{R}^1 \to \mathbb{R}^1$, and $\succeq$ is the usual order of the field of real numbers. The u.e.m.r. corresponding to the loss function $w(t, \gamma) = (t - \gamma)^2$ is called an unbiased estimate of minimum variance (u.e.m.v.).

Let $\mathfrak{K}$ be the class of u.e. with finite variance for the parametric function $\gamma$, and let

\[ \mathfrak{U}_\varepsilon = \bigl\{\psi(x) : E_\theta\,\psi = 0,\ E_\theta|\psi|^{2+\varepsilon} < \infty,\ \theta \in \Theta\bigr\}, \qquad \varepsilon > 0. \]
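For example, the difference of any two unbiased estimates $t$ and $\tilde{t}$ of the same parametric function $\gamma$ is an unbiased estimate of zero:

\[ E_\theta(t - \tilde{t}) = \gamma(\theta) - \gamma(\theta) = 0, \qquad \theta \in \Theta, \]

so the class of such statistics describes exactly the nonuniqueness of unbiased estimation of $\gamma$.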
By $\mathfrak{U}_0$ we shall denote the set of statistics $\psi$ that satisfy the following condition: if $F(z, \theta)$ is the distribution function (d.f.) of a statistic $\psi \in \mathfrak{U}_0$, then the polynomials of $z$ are dense in the space $L_2(F(z, \theta))$ for any $\theta \in \Theta$.

THEOREM 1. Let $t$ be a u.e.m.v., and let the set $\mathfrak{U}_0' = \mathfrak{U}_0 \cap \mathfrak{U}_\varepsilon$ be dense, in the metric $L_2(P_\theta)$ for any $\theta \in \Theta$, in the set $\mathfrak{U}$ of all u.e. of zero (u.e.z.) with finite variance. Then $t$ will be a u.e.m.r. for any loss function $w$.

Proof. It is well known [7] that $t$ is a u.e.m.v. if and only if

\[ E_\theta\{t\,\psi\} = 0 \qquad (3) \]

for any $\theta \in \Theta$ and all u.e.z. $\psi(x)$ with finite variance. By virtue of the conditions of the theorem this is equivalent to the conditions

\[ E_\theta\{t\,\psi\} = 0, \qquad \theta \in \Theta, \quad \psi \in \mathfrak{U}_0'. \]
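The reasoning behind the classical criterion (3) (cf. [7]) can be written out in one line: if $t$ is unbiased and $\psi$ is a u.e.z. with finite variance, then $t + \lambda\psi$ is unbiased for every real $\lambda$, and since $E_\theta\,\psi = 0$,

\[ \mathrm{Var}_\theta(t + \lambda\psi) = \mathrm{Var}_\theta\, t + 2\lambda\, E_\theta\{t\,\psi\} + \lambda^2\, \mathrm{Var}_\theta\,\psi. \]

This quadratic in $\lambda$ attains its minimum at $\lambda = 0$ for every $\theta \in \Theta$ if and only if $E_\theta\{t\,\psi\} = 0$.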
Thus $\psi \in \mathfrak{U}_0$. If, however, $E_\theta\,\psi = 0$ and $E_\theta|\psi|^{2+\varepsilon} < \infty$ for $\varepsilon > 0$, then $\psi \in \mathfrak{U}_\varepsilon$.

...

From Lemma 2 it follows that $E_\theta\{w_1'\bigl(t - \gamma(\theta)\bigr)\,\psi\} = 0$. Since $t$ is bounded, it follows in the same way as in Theorem 1 (using that $w_1'$ is continuous and strictly monotonic) that $E_\theta\{\psi \mid t\} = 0$. According to the assumption of the theorem, $\mathfrak{U}_0'$ is dense in $\mathfrak{U}$, and therefore this equation holds for any $\psi \in \mathfrak{U}$. Thus the statistic $t$ will be partially sufficient in the class $\mathfrak{U}$, and it now follows from the proof of Theorem 1 that $t$ is a u.e.m.r. for any loss function $w_1$.
This completes the proof of the theorem.

Let us note that among the loss functions $w(t, \gamma) = w(t - \gamma)$, $w(u) \ge 0$, $w(0) = 0$, a representation (10) is admissible only for the functions

\[ w_1(t - \gamma) = A(t - \gamma)^2, \qquad w_2(t - \gamma) = B\bigl(\cosh\bigl(c(t - \gamma)\bigr) - 1\bigr), \]

where $A$, $B$, $c$ are constants. Here $w_1$ can be obtained from $w_2$ by a limit transition for $c \to 0$, $B \to \infty$ such that $B \sim 2A/c^2$.
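Assuming the hyperbolic form of $w_2$ reconstructed above, the limit transition is verified directly from the Taylor expansion of $\cosh$: with $B = 2A/c^2$,

\[ B\bigl(\cosh(cu) - 1\bigr) = \frac{2A}{c^2}\left(\frac{c^2 u^2}{2} + \frac{c^4 u^4}{24} + \cdots\right) = A u^2 + \frac{A c^2 u^4}{12} + \cdots \longrightarrow A u^2 = w_1(u) \quad (c \to 0). \]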
It is clear that not every loss function is universal.
THEOREM 7. Let $w(t, \gamma) = w(t - \gamma)$ be a convex, piecewise continuously differentiable (with only finitely many points at which there is no derivative) loss function that is not strictly convex. Then there exists a family $\{Q_\theta,\ \theta \in \Theta\}$ for which $w$ is not universal, i.e., there exists a bounded statistic $t$ which is a u.e.m.r. for its mean with respect to the loss function $w$, but which is not a u.e.m.v.

Proof. Since the function $w$ is not strictly convex, there exists an interval $(a, b) \subset \mathbb{R}^1$ in which $w$ coincides with a linear function. Let $\{Q_\theta,\ \theta \in \Theta\}$ be a suitable family of distributions concentrated on three points (its construction involves a parameter $\alpha > 0$ that is sufficiently small). If $\psi = (\psi_1, \psi_2, \psi_3)$ is a u.e.z., then
\[ E_\theta\,\psi = 0, \qquad \theta \in \Theta. \]

Hence it follows that $\psi_3 = 0$ and $\psi_2 = -\psi_1$, i.e., all the u.e.z. have the form $(\psi_1, -\psi_1, 0)$ for any $\psi_1$.

Let $t$ be a u.e.m.v. for its mean. Then $E_\theta\{t\,\psi\} = 0$ for any u.e.z. $\psi$, i.e., $t_1 = t_2$. Hence $t$ will be a u.e.m.v. for its mean if and only if $t = (t_1, t_1, t_3)$ for any $t_1$, $t_3$.
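A minimal sketch of this step, under the assumption (not preserved in the text) that the first two atoms carry equal $Q_\theta$-mass $p_\theta > 0$: for a u.e.z. $\psi = (\psi_1, -\psi_1, 0)$,

\[ E_\theta\{t\,\psi\} = p_\theta\,\psi_1\,(t_1 - t_2), \]

and this vanishes for every choice of $\psi_1$ if and only if $t_1 = t_2$.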
In considering the above-mentioned loss function $w$, it is easy to see that the condition of optimality of the statistic $t$ in the class $\mathfrak{K}$ will be written in the form

\[ \dots \]

Let $t = (t_1, t_2, 0)$. Then this condition of optimality will take the form

\[ \dots \]

If $t_1$, $t_2$, and $\alpha$ are such that this condition holds with $t_1 \ne t_2$, then our statistic $t = (t_1, t_2, 0)$ will be a u.e.m.r. for its mean with respect to $w$, but it is not a u.e.m.v.
Uniqueness of Optimal Estimates

Let us note that the u.e.m.v. $\tilde{t}$ constructed in the proof of Theorem 7 is a complete sufficient statistic, and therefore the statistic $\hat{t} = E_\theta\{t \mid \tilde{t}\}$ is both a u.e.m.v. and a u.e.m.r. for $E_\theta\, t$. Thus for the loss function mentioned in Theorem 7 both $t$ and $\hat{t}$ were u.e.m.r. for $E_\theta\, t$, i.e., the optimal estimate was not unique in the class $\mathfrak{K}$. Therefore it is of interest to ascertain the conditions of uniqueness of optimal estimates.
THEOREM 8. Let the set of bounded u.e.z. $\mathfrak{U}_b$ be dense, in the metric $L_2(P_\theta)$, $\theta \in \Theta$, in the set $\mathfrak{U}$, and let $w$ be a twice continuously differentiable loss function that is strictly convex in the first argument for any fixed value of the second argument. Then the u.e.m.r. will be unique* in the class $\mathfrak{K}$.

*The uniqueness is understood in the sense that two u.e.m.r. coincide with $P_\theta$-probability 1 for any $\theta \in \Theta$.

Proof. Let us assume the contrary. Let $t_1$ and $t_2$ be two u.e.m.r.; then for any $\lambda \in (0, 1)$ the function $\lambda t_1 + (1 - \lambda)t_2$ will be an unbiased estimate of the parametric function, and we have

\[ R_\theta\bigl(\lambda t_1 + (1 - \lambda)t_2\bigr) \le \lambda R_\theta(t_1) + (1 - \lambda) R_\theta(t_2) = R_\theta(t_1), \]

i.e., equality must hold here. By virtue of the strict convexity of the function $w$, this equation holds only if $t_1(x) = t_2(x)$ almost certainly with respect to the measure $P_\theta$. This completes the proof of the theorem.
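The last step deserves to be written out: if $t_1 \ne t_2$ on a set of positive $P_\theta$-measure, the strict convexity of $w$ in the first argument gives, for $\lambda \in (0, 1)$,

\[ E_\theta\, w\bigl(\lambda t_1 + (1 - \lambda)t_2,\ \gamma(\theta)\bigr) < \lambda\, E_\theta\, w\bigl(t_1, \gamma(\theta)\bigr) + (1 - \lambda)\, E_\theta\, w\bigl(t_2, \gamma(\theta)\bigr) = R_\theta(t_1), \]

which would contradict the optimality of $t_1$.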
Together with Theorem 7 we have also proved the following.

THEOREM 9. There exists a family $\{P_\theta,\ \theta \in \Theta\}$ and two statistics $t_1$ and $t_2$, $t_1 \ne t_2$, such that $E_\theta\, t_1 = E_\theta\, t_2$, and $t_1$ and $t_2$ are u.e.m.r. with respect to the loss function mentioned in Theorem 7.

Let us note that the uniqueness of an optimal unbiased estimate for the case of a quadratic loss function has been proved in [13].
Optimal Subalgebras

Bahadur [9] has shown that if there exists a bounded nontrivial u.e.m.v., then there exists also a nontrivial $\sigma$-algebra $\mathfrak{B}$ that has the following property: any statistic with finite variance that is measurable with respect to $\mathfrak{B}$ is a u.e.m.v. for its mean.

A subalgebra $\mathfrak{B}_w \subset \mathfrak{A}$ is said to be optimal with respect to a loss function $w$ if any estimate with finite $w$-risk that is measurable with respect to $\mathfrak{B}_w$ is a u.e.m.r. for its mean.

THEOREM 10. Let the set $\mathfrak{U}_b$ be dense in $\mathfrak{U}$ in the metric $L_2(P_\theta)$, $\theta \in \Theta$, and let $w(t, \gamma) = w(t - \gamma)$ be a twice continuously differentiable loss function that satisfies the condition $w(u) = 0$ only for $u = 0$. Then $\mathfrak{B}_w = \mathfrak{B}$.

Proof. By virtue of Theorem 1 and Remark 2 we have $\mathfrak{B} \subset \mathfrak{B}_w$. Let $A \in \mathfrak{B}_w$. Then the indicator function $I_A$ of the set $A$ will be a u.e.m.r. with respect to $w$. By virtue of Lemma 2, for any bounded u.e.z. $\psi$,

\[ E_\theta\bigl\{ w'\bigl(I_A - P_\theta(A)\bigr)\,\psi \bigr\} = 0, \]

i.e.,

\[ \int_A w'\bigl(1 - P_\theta(A)\bigr)\,\psi\, dP_\theta + \int_{\mathfrak{X} \setminus A} w'\bigl(-P_\theta(A)\bigr)\,\psi\, dP_\theta = 0. \]

But $\psi$ is a u.e.z., i.e.,

\[ \int_A \psi\, dP_\theta + \int_{\mathfrak{X} \setminus A} \psi\, dP_\theta = 0. \]

From the last two equations it follows that

\[ \bigl[ w'\bigl(1 - P_\theta(A)\bigr) - w'\bigl(-P_\theta(A)\bigr) \bigr] \int_A \psi\, dP_\theta = 0. \]

By virtue of our assumption concerning $w$ (since $w$ is convex, $w \ge 0$, and $w(u) = 0$ only for $u = 0$, we have $w'(-P_\theta(A)) < 0 < w'(1 - P_\theta(A))$ whenever $0 < P_\theta(A) < 1$, so the bracket is positive), it hence follows that

\[ \int_A \psi\, dP_\theta = 0, \qquad \theta \in \Theta. \]

These equations signify that $A \in \mathfrak{B}$. This completes the proof of the theorem.

LITERATURE CITED
1. A. Padmanabhan, "Some results on minimum variance unbiased estimation," Sankhya, A32, 1 (1970).
2. Yu. V. Linnik and A. L. Rukhin, "Convex loss functions in the theory of unbiased estimation," Dokl. Akad. Nauk SSSR, 198, 3 (1971).
3. N. A. Lebedev, Yu. V. Linnik, and A. L. Rukhin, "Monotonic convex matrix loss functions in statistics," Tr. Mat. Inst. Akad. Nauk SSSR, 112 (1971).
4. Yu. V. Linnik and A. L. Rukhin, "Matrix loss functions admitting the Rao-Blackwellization," Sankhya, A34, 1 (1972).
5. L. B. Klebanov, Yu. V. Linnik, and A. L. Rukhin, "Unbiased estimation and matrix loss functions," Dokl. Akad. Nauk SSSR, 200, 5 (1971).
6. L. B. Klebanov, "Universal loss functions and unbiased estimates," Dokl. Akad. Nauk SSSR, 203, 6 (1972).
7. E. L. Lehmann and H. Scheffé, "Completeness, similar regions and unbiased estimation," Sankhya, 10 (1950).
8. N. I. Akhiezer, Classical Moment Problem, Hafner (1965).
9. R. R. Bahadur, "On unbiased estimates of uniformly minimum variance," Sankhya, A18, 304 (1957).
10. A. M. Kagan, "Two remarks about characterization of sufficiency," in: Limit Theorems and Statistical Estimation, Tashkent (1966).
11. H. Strasser, "Sufficiency and unbiased estimation," Metrika, 2-3 (1972).
12. L. Fuks, Partially Ordered Algebraic Systems [in Russian], Moscow (1955).
13. C. Stein, "Unbiased estimates with minimum variance," Ann. Math. Statist., 21, 3 (1950).