Information 2010, 1, 13-27; doi:10.3390/info1010013
ISSN 2075-1710, www.mdpi.com/journal/information

OPEN ACCESS

Article

New Information Measures for the Generalized Normal Distribution

Christos P. Kitsos * and Thomas L. Toulias

Department of Mathematics, Technological Educational Institute of Athens, Agiou Spyridonos & Palikaridi, 12210 Egaleo, Athens, Greece; E-Mail: [email protected]

* Author to whom correspondence should be addressed; E-Mail: [email protected].

Received: 24 June 2010; in revised form: 4 August 2010 / Accepted: 5 August 2010 / Published: 20 August 2010
Abstract: We introduce a three-parameter generalized normal distribution, belonging to the Kotz type distribution family, in order to study generalized entropy type measures of information. For this generalized normal, the Kullback-Leibler information is evaluated; it extends the well known result for the normal distribution and plays an important role for the introduced generalized information measures. These generalized entropy type measures of information are also evaluated and presented.

Keywords: entropy power; information measures; Kotz type distribution; Kullback-Leibler information
1. Introduction

The aim of this paper is to study the new entropy type information measures introduced by Kitsos and Tavoularis [1] and the multivariate hyper-normal distribution defined by them. These information measures are defined, adopting a parameter α, as the α-moment of the score function (see Section 2), where α is an integer, while in principle α ≥ 2. One of the merits of this generalized normal distribution is that it belongs to the Kotz-type distribution family [2], i.e., it is an elliptically contoured distribution (see Section 3). Therefore it has all the characteristics and applications discussed in Baringhaus and Henze [3], Liang et al. [4] and Nadarajah [5]. The parametric information measures related to the entropy are often crucial for applications of optimal design theory, see [6]. Moreover, it is proved that the defined generalized normal distribution provides equality in a new generalized
information inequality (Kitsos and Tavoularis [1,7]) regarding the generalized information measure as well as the generalized Shannon entropy power (see Section 3). In principle, the information measures are divided into three main categories: parametric (a typical example is Fisher's information [8]), non-parametric (with Shannon's information measure being the most well known) and entropy type [9]. The new generalized entropy type measure of information J_α(X), defined by Kitsos and Tavoularis [1], is a function of the density, as:

J_α(X) = ∫_{ℝ^p} f(x) ‖∇ ln f(x)‖^α dx   (1)

From (1), we obtain that J_α(X) equals:

J_α(X) = ∫_{ℝ^p} f(x) ‖∇f(x) / f(x)‖^α dx = ∫_{ℝ^p} f(x)^{1−α} ‖∇f(x)‖^α dx   (2)

For α = 2, the measure of information J_2(X) is Fisher's information measure:

J(X) = ∫_{ℝ^p} ‖∇f(x)‖² / f(x) dx = 4 ∫_{ℝ^p} ‖∇√f(x)‖² dx   (3)
i.e., J_2(X) = J(X). That is, J_α(X) is a generalized Fisher's entropy type information measure and, like the entropy, it is a function of the density.

Proposition 1.1. When θ is a location parameter, then J_α(X) = J_α(θ).

Proof. Considering the parameter θ as a location parameter and transforming the family of densities to f(x; θ) = f(x − θ), differentiation with respect to θ is equivalent to differentiation with respect to x. Therefore we can prove that J_α(X) = J_α(θ). Indeed, from (1) we have:

J_α(X) = ∫_{ℝ^p} f(x − θ) ‖∇ ln f(x − θ)‖^α dx = ∫_{ℝ^p} f(x) ‖∇ ln f(x)‖^α dx = J_α(θ)

and the proposition has been proved.

Recall that the score function is defined as:

U = ∇_θ ln f(X; θ) = ∇_θ f(X; θ) / f(X; θ)   (4)

with E(U) = 0 and E(‖U‖²) = J(θ) under some regularity conditions; see Schervish [10] for details. It can be easily shown that, for α = 1, J_1(X) = E(‖U‖). Therefore J_α(X) behaves as the α-moment of the score function of f(X; θ). The generalized entropy power is still the power of the white Gaussian noise with the same entropy, see [11]. Recall that the Shannon entropy H(X) is defined as H(X) = −∫_{ℝ^p} f(x) ln f(x) dx, see [9]. The entropy power N(X) is defined through H(X) as:
N(X) = (2πe)^{−1} e^{(2/p) H(X)}   (5)
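As a small numerical companion (our own Python sketch, not part of the original paper, which used Mathematica), the measures (1)-(3) and the entropy power (5) can be evaluated by quadrature for a univariate normal N(0, σ²), where the score is −x/σ², so J_2(X) = 1/σ² (Fisher's information) and N(X) = σ²:

```python
import numpy as np

def trap(y, x):
    """Trapezoidal rule on a uniform grid."""
    return (y.sum() - 0.5 * (y[0] + y[-1])) * (x[1] - x[0])

def j_alpha(f, df, alpha, x):
    """J_alpha(X) = integral of f |f'/f|^alpha, as in (1)-(2), for p = 1."""
    fx = f(x)
    return trap(fx * np.abs(df(x) / fx) ** alpha, x)

def shannon_entropy(f, x):
    """H(X) = -integral of f ln f (univariate)."""
    fx = f(x)
    return -trap(fx * np.log(fx), x)

def entropy_power(H, p=1):
    """N(X) of (5): the variance of the white Gaussian noise with entropy H."""
    return np.exp(2.0 * H / p) / (2.0 * np.pi * np.e)

sigma = 1.7
x = np.linspace(-25.0, 25.0, 250001)
f = lambda t: np.exp(-t**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
df = lambda t: -t / sigma**2 * f(t)
print(j_alpha(f, df, 2, x))                   # ≈ 1/sigma^2 ≈ 0.3460
print(entropy_power(shannon_entropy(f, x)))   # ≈ sigma^2 = 2.89
```

Both quantities recover the Gaussian benchmarks, which is exactly the "role of the normal" the paper builds on.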
The entropy power of a random variable X was introduced by Shannon in 1948 [11] as the power (the variance per component) of the p-dimensional white Gaussian random variable, with independent and identically distributed components, that has entropy H(X). The generalized entropy power N_α(X) is of the form:

N_α(X) = M_α e^{(α/p) H(X)}   (6)

with the normalizing factor M_α being the appropriate generalization of (2πe)^{−1}, i.e.:

M_α = ( (α−1)/(αe) )^{α−1} π^{−α/2} [ Γ(p/2 + 1) / Γ( p (α−1)/α + 1 ) ]^{α/p}   (7)
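Since the factor (7) is reconstructed here, a quick consistency check is useful (our sketch): for α = 2 it must reduce to the classical constant (2πe)^{−1} of (5), for every dimension p:

```python
import numpy as np
from math import gamma

def M(a, p):
    """Normalizing factor M_alpha of (7), as reconstructed above."""
    c = (a - 1.0) / a
    return (c / np.e) ** (a - 1.0) * np.pi ** (-a / 2.0) \
        * (gamma(p / 2.0 + 1) / gamma(p * c + 1)) ** (a / p)

for p in (1, 2, 5):
    print(p, M(2.0, p) * (2 * np.pi * np.e))   # each ≈ 1.0, i.e., M_2 = (2 pi e)^(-1)
```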
The generalized entropy power is still the power of the white Gaussian noise with the same entropy. Trivially, for α = 2, the definition in (6) reduces to the entropy power, i.e., N_2(X) = N(X). In turn, the quantity:

Γ(p/2 + 1) / Γ( p (α−1)/α + 1 )

appears very often when we define various normalizing factors along this line of thought.

Theorem 1.1 (Generalizing the Information Inequality, Kitsos-Tavoularis [1]). For the variance Var(X) of X and the generalized Fisher's entropy type information measure J_α(X), it holds:

( (2πe/p) Var(X) )^{1/2} ( (1/p) M_α J_α(X) )^{1/α} ≥ 1
with M_α as in (7).

Corollary 1.1. When α = 2, then Var(X) J_2(X) ≥ p², and the Cramér-Rao inequality ([9], Th. 11.10.1) holds.

Proof. Indeed, set α = 2 in Theorem 1.1, as M_2 = (2πe)^{−1}.

For the above introduced generalized entropy measures of information we need a distribution to play the "role of the normal", as the normal does for Fisher's information measure and the Shannon entropy. Kitsos and Tavoularis [12] extended the normal distribution in the light of the introduced generalized information measures and of the optimal function satisfying the extension of the logarithmic Sobolev inequality (LSI). We form the following general definition for an extension of the multivariate normal distribution, the γ-order generalized normal, as follows:
Definition 1.1. The p-dimensional random variable X follows the γ-order generalized normal, with mean μ and covariance matrix Σ, when its density function is of the form:

KT_γ^p(μ, Σ) : f(x) = C_γ(p) (det Σ)^{−1/2} exp{ −((γ−1)/γ) Q(x)^{γ/(2(γ−1))} }   (8)

with Q(x) = ⟨(x−μ)^t, Σ^{−1}(x−μ)⟩, where ⟨u, v⟩ is the inner product of u, v ∈ ℝ^p and u^t is the transpose of u ∈ ℝ^p. We shall write X ~ KT_γ^p(μ, Σ). The normality factor C_γ(p) is defined as:

C_γ(p) = π^{−p/2} [ Γ(p/2 + 1) / Γ( p (γ−1)/γ + 1 ) ] ( (γ−1)/γ )^{p (γ−1)/γ}
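The normality factor above is reconstructed from the Kotz-type constant recalled below; as a sanity check (our sketch, p = 1, with our own helper names), the resulting univariate density should integrate to one for any order γ, and reduce to the standard normal for γ = 2:

```python
import numpy as np
from math import gamma

def trap(y, x):
    """Trapezoidal rule on a uniform grid."""
    return (y.sum() - 0.5 * (y[0] + y[-1])) * (x[1] - x[0])

def C1(g):
    """Normality factor C_gamma(p) of Definition 1.1 for p = 1 (reconstructed)."""
    c = (g - 1.0) / g
    return gamma(1.5) * c ** c / (np.sqrt(np.pi) * gamma(c + 1.0))

def kt_pdf(x, g, mu=0.0, sigma=1.0):
    """Univariate gamma-order generalized normal density of (8)."""
    c = (g - 1.0) / g
    q = ((x - mu) / sigma) ** 2
    return C1(g) / sigma * np.exp(-c * q ** (g / (2.0 * (g - 1.0))))

x = np.linspace(-30.0, 30.0, 300001)
for g in (2.0, 5.0, 10.0):
    print(g, trap(kt_pdf(x, g), x))   # each ≈ 1.0
```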
Notice that for γ = 2, KT_2^p(μ, Σ) is the well known multivariate normal distribution. Recall that the symmetric Kotz type multivariate distribution [2] has density:

Kotz_{m,r,s}(μ, Σ) : f(x) = K(m, r, s) (det Σ)^{−1/2} Q^{m−1} exp{ −r Q^s }   (9)

where r > 0, s > 0, 2m + p > 2, and the normalizing constant K(m, r, s) is given by:

K(m, r, s) = s Γ(p/2) r^{(2m+p−2)/(2s)} / [ π^{p/2} Γ( (2m+p−2)/(2s) ) ]
see also [1] and [12]. Therefore, it can be shown that the distribution KT_γ^p(μ, Σ) follows from the symmetric Kotz type multivariate distribution for m = 1, r = (γ−1)/γ and s = γ/(2(γ−1)), i.e., KT_γ^p(μ, Σ) = Kotz_{1, (γ−1)/γ, γ/(2(γ−1))}(μ, Σ).
Also note that for the normal distribution it holds N(μ, Σ) = KT_2^p(μ, Σ) = Kotz_{1, 1/2, 1}(μ, Σ), while the normalizing factor is C_2(p) = K(1, 1/2, 1).

2. The Kullback-Leibler Information for the γ-Order Generalized Normal Distribution

Recall that the Kullback-Leibler (K-L) information for two p-variate density functions f, g is defined as [13]:

KLI(f, g) = ∫_{ℝ^p} f(x) log( f(x) / g(x) ) dx
The following lemma provides a generalization of the Kullback-Leibler information measure for the introduced generalized normal distribution.

Lemma 2.1. The K-L information KLI_γ^p of the generalized normals KT_γ^p(μ_1, σ_1² I_p) and KT_γ^p(μ_0, σ_0² I_p) is equal to:

KLI_γ^p = (C_γ(p)/σ_1^p) [ (p/2) log(σ_0²/σ_1²) ∫_{ℝ^p} e^{−q_1(x)} dx − ∫_{ℝ^p} e^{−q_1(x)} q_1(x) dx + ∫_{ℝ^p} e^{−q_1(x)} q_0(x) dx ]

where q_i(x) = ((γ−1)/γ) ‖σ_i^{−1} (x − μ_i)‖^{γ/(γ−1)}, x ∈ ℝ^p, i = 0, 1.
Proof. We have consecutively:

KLI_γ^p = KLI( KT_γ^p(μ_1, Σ_1), KT_γ^p(μ_0, Σ_0) )
  = ∫_{ℝ^p} KT_γ^p(μ_1, Σ_1)(x) log[ KT_γ^p(μ_1, Σ_1)(x) / KT_γ^p(μ_0, Σ_0)(x) ] dx
  = C_γ(p) (det Σ_1)^{−1/2} ∫_{ℝ^p} exp{ −((γ−1)/γ) Q_1(x)^{γ/(2(γ−1))} } log[ ( det Σ_0 / det Σ_1 )^{1/2} exp{ −((γ−1)/γ) ( Q_1(x)^{γ/(2(γ−1))} − Q_0(x)^{γ/(2(γ−1))} ) } ] dx
  = C_γ(p) (det Σ_1)^{−1/2} ∫_{ℝ^p} exp{ −((γ−1)/γ) Q_1(x)^{γ/(2(γ−1))} } [ (1/2) log( det Σ_0 / det Σ_1 ) − ((γ−1)/γ) Q_1(x)^{γ/(2(γ−1))} + ((γ−1)/γ) Q_0(x)^{γ/(2(γ−1))} ] dx

and thus:

KLI_γ^p = C_γ(p) (det Σ_1)^{−1/2} [ (1/2) log( det Σ_0 / det Σ_1 ) ∫_{ℝ^p} exp{ −((γ−1)/γ) Q_1(x)^{γ/(2(γ−1))} } dx − ∫_{ℝ^p} exp{ −((γ−1)/γ) Q_1(x)^{γ/(2(γ−1))} } ((γ−1)/γ) Q_1(x)^{γ/(2(γ−1))} dx + ∫_{ℝ^p} exp{ −((γ−1)/γ) Q_1(x)^{γ/(2(γ−1))} } ((γ−1)/γ) Q_0(x)^{γ/(2(γ−1))} dx ]

For Σ_1 = σ_1² I_p and Σ_0 = σ_0² I_p, we finally obtain:

KLI_γ^p = (C_γ(p)/σ_1^p) [ (p/2) log(σ_0²/σ_1²) ∫_{ℝ^p} e^{−q_1(x)} dx − ∫_{ℝ^p} e^{−q_1(x)} q_1(x) dx + ∫_{ℝ^p} e^{−q_1(x)} q_0(x) dx ]

where q_i(x) = ((γ−1)/γ) Q_i(x)^{γ/(2(γ−1))}, x ∈ ℝ^p, i = 0, 1. Notice that the quadratic forms Q_i(x) = (x − μ_i)^t Σ_i^{−1} (x − μ_i), i = 0, 1, can be written in the form Q_i(x) = σ_i^{−2} ‖x − μ_i‖², i = 0, 1, respectively, and thus the lemma has been proved.
Recall now that the well known multivariate K-L information measure between the two multivariate normal distributions N(μ_1, σ_1² I_p) and N(μ_0, σ_0² I_p) is:

KLI_2^p = I^p = (1/2) [ p log(σ_0²/σ_1²) + p σ_1²/σ_0² + ‖μ_1 − μ_0‖²/σ_0² − p ]

which, for the univariate case, is:

KLI_2^1 = I^1 = (1/2) [ log(σ_0²/σ_1²) + σ_1²/σ_0² + (μ_1 − μ_0)²/σ_0² − 1 ]
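These classical formulas are easy to confirm by direct quadrature (our univariate Python sketch, with arbitrarily chosen parameter values):

```python
import numpy as np

def trap(y, x):
    """Trapezoidal rule on a uniform grid."""
    return (y.sum() - 0.5 * (y[0] + y[-1])) * (x[1] - x[0])

def normal(mu, s):
    return lambda t: np.exp(-(t - mu) ** 2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))

m1, s1, m0, s0 = 0.3, 1.2, 0.0, 2.0
x = np.linspace(-20.0, 20.0, 200001)
f, g = normal(m1, s1)(x), normal(m0, s0)(x)
kl_numeric = trap(f * np.log(f / g), x)
kl_closed = 0.5 * (np.log(s0**2 / s1**2) + s1**2 / s0**2 + (m1 - m0) ** 2 / s0**2 - 1)
print(kl_numeric, kl_closed)   # both ≈ 0.20208
```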
In what follows, an investigation of the K-L information measure is presented and discussed, concerning the introduced generalized normal. New results are provided that generalize the notion of the K-L information. In fact, the following Theorem 2.1 generalizes I^p (= KLI_2^p) for the γ-order generalized normal, assuming that μ_1 = μ_0. Various "sequences" of the K-L information measures
KLI_γ^p are discussed in Corollary 2.1, while the generalized Fisher's information of the generalized normal KT_α^p(0, I_p) is provided in Theorem 2.2.

Theorem 2.1. For μ_1 = μ_0 the K-L information KLI_γ^p is equal to:

KLI_γ^p = (p/2) log(σ_0²/σ_1²) + p ((γ−1)/γ) [ (σ_1/σ_0)^{γ/(γ−1)} − 1 ]   (10)
Proof. We write the result of the previous Lemma 2.1 in the form:

KLI_γ^p = (C_γ(p)/σ_1^p) [ (p/2) log(σ_0²/σ_1²) I_1 − I_2 + I_3 ]   (11)

where:

I_1 = ∫_{ℝ^p} exp{ −((γ−1)/γ) ‖σ_1^{−1}(x − μ_1)‖^{γ/(γ−1)} } dx

I_2 = ∫_{ℝ^p} exp{ −((γ−1)/γ) ‖σ_1^{−1}(x − μ_1)‖^{γ/(γ−1)} } ((γ−1)/γ) ‖σ_1^{−1}(x − μ_1)‖^{γ/(γ−1)} dx

I_3 = ∫_{ℝ^p} exp{ −((γ−1)/γ) ‖σ_1^{−1}(x − μ_1)‖^{γ/(γ−1)} } ((γ−1)/γ) ‖σ_0^{−1}(x − μ_0)‖^{γ/(γ−1)} dx

We can calculate the above integrals by writing them as, e.g.:

I_1 = ∫_{ℝ^p} exp{ −‖ ((γ−1)/γ)^{(γ−1)/γ} σ_1^{−1}(x − μ_1) ‖^{γ/(γ−1)} } dx

and similarly for I_2 and I_3, and then substituting z = ((γ−1)/γ)^{(γ−1)/γ} σ_1^{−1}(x − μ_1), so that ‖z‖^{γ/(γ−1)} = q_1(x). Thus, we get respectively:

I_1 = ( γ/(γ−1) )^{p(γ−1)/γ} σ_1^p ∫_{ℝ^p} exp{ −‖z‖^{γ/(γ−1)} } dz

I_2 = ( γ/(γ−1) )^{p(γ−1)/γ} σ_1^p ∫_{ℝ^p} exp{ −‖z‖^{γ/(γ−1)} } ‖z‖^{γ/(γ−1)} dz

and:

I_3 = ( γ/(γ−1) )^{p(γ−1)/γ} σ_1^p ∫_{ℝ^p} exp{ −‖z‖^{γ/(γ−1)} } ((γ−1)/γ) ‖ σ_0^{−1} ( σ_1 (γ/(γ−1))^{(γ−1)/γ} z + μ_1 − μ_0 ) ‖^{γ/(γ−1)} dz   (12)
Using the known integrals:

∫_{ℝ^p} e^{−‖z‖^{γ/(γ−1)}} dz = 2 π^{p/2} Γ( p(γ−1)/γ + 1 ) / ( p Γ(p/2) )   (13)

and:

∫_{ℝ^p} e^{−‖z‖^{γ/(γ−1)}} ‖z‖^{γ/(γ−1)} dz = 2 π^{p/2} ((γ−1)/γ) Γ( p(γ−1)/γ + 1 ) / Γ(p/2)   (14)

I_1 and I_2 can be calculated as:

I_1 = ( γ/(γ−1) )^{p(γ−1)/γ} σ_1^p π^{p/2} Γ( p(γ−1)/γ + 1 ) / Γ(p/2 + 1)  and  I_2 = p ((γ−1)/γ) I_1   (15)

respectively. Thus, (11) can be written as:

KLI_γ^p = (C_γ(p)/σ_1^p) [ ( (p/2) log(σ_0²/σ_1²) − p(γ−1)/γ ) I_1 + I_3 ]

and, by substitution of I_1 from (15) and of C_γ(p) from Definition 1.1, for which (C_γ(p)/σ_1^p) I_1 = 1, we get:

KLI_γ^p = (p/2) log(σ_0²/σ_1²) − p (γ−1)/γ + (C_γ(p)/σ_1^p) I_3   (16)

Assuming μ_1 = μ_0, from (12), I_3 is equal to:

I_3 = ( γ/(γ−1) )^{p(γ−1)/γ} σ_1^p (σ_1/σ_0)^{γ/(γ−1)} ∫_{ℝ^p} exp{ −‖z‖^{γ/(γ−1)} } ‖z‖^{γ/(γ−1)} dz

and using the known integral (14), we have:

I_3 = p ((γ−1)/γ) (σ_1/σ_0)^{γ/(γ−1)} I_1

Thus, (16) finally takes the form:

KLI_γ^p = (p/2) log(σ_0²/σ_1²) − p (γ−1)/γ + p ((γ−1)/γ) (σ_1/σ_0)^{γ/(γ−1)}
        = (p/2) log(σ_0²/σ_1²) + p ((γ−1)/γ) [ (σ_1/σ_0)^{γ/(γ−1)} − 1 ]

where the identities Γ(p/2 + 1) = (p/2) Γ(p/2) and Γ( p(γ−1)/γ + 1 ) = ( p(γ−1)/γ ) Γ( p(γ−1)/γ ) have been used, and so (10) has been proved.

For μ_1 ≠ μ_0 and for γ = 2, from (16) we obtain:

KLI_2^p = (p/2) log(σ_0²/σ_1²) − p/2 + ( (2π)^{−p/2} / σ_1^p ) I_3   (17)

where, from (12):
I_3 = 2^{p/2} σ_1^p ∫_{ℝ^p} e^{−‖z‖²} (1/(2σ_0²)) ‖ √2 σ_1 z + μ_1 − μ_0 ‖² dz   (18)

For γ = 2 we have the only case in which γ/(γ−1) = 2 ∈ ℕ*, and therefore the integrand of I_3 in (12) can be expanded. Specifically, by setting z = (z_i)_{i=1}^p ∈ ℝ^p, μ_0 = (μ_i^0)_{i=1}^p ∈ ℝ^p and μ_1 = (μ_i^1)_{i=1}^p ∈ ℝ^p, from (18) we have consecutively:

I_3 = ( 2^{p/2} σ_1^p / (2σ_0²) ) ∫_{ℝ^p} e^{−‖z‖²} [ 2σ_1² ‖z‖² + 2√2 σ_1 Σ_{i=1}^p (μ_i^1 − μ_i^0) z_i + ‖μ_1 − μ_0‖² ] dz
    = ( 2^{p/2} σ_1^p / (2σ_0²) ) [ 2σ_1² ∫_{ℝ^p} e^{−‖z‖²} ‖z‖² dz + ‖μ_1 − μ_0‖² ∫_{ℝ^p} e^{−‖z‖²} dz + 2√2 σ_1 Σ_{i=1}^p (μ_i^1 − μ_i^0) ∫_{ℝ^p} e^{−(z_1² + ... + z_p²)} z_i dz_1 ... dz_p ]

Due to the known integrals (13) and (14), which for γ = 2 reduce respectively to:

∫_{ℝ^p} e^{−‖z‖²} dz = π^{p/2}  and  ∫_{ℝ^p} e^{−‖z‖²} ‖z‖² dz = (p/2) π^{p/2}

and since each integral ∫_{ℝ^p} e^{−(z_1² + ... + z_p²)} z_i dz_1 ... dz_p vanishes (the integrand is an odd function of z_i, with exp{−z_i²} z_i → 0 as |z_i| → ∞), we obtain:

I_3 = (2π)^{p/2} σ_1^p [ (p/2) σ_1²/σ_0² + ‖μ_1 − μ_0‖²/(2σ_0²) ]
Finally, using the above relationship for I_3, (17) implies:

KLI_2^p = (p/2) log(σ_0²/σ_1²) − p/2 + (p/2) σ_1²/σ_0² + ‖μ_1 − μ_0‖²/(2σ_0²)
        = (1/2) [ p log(σ_0²/σ_1²) + p σ_1²/σ_0² + ‖μ_1 − μ_0‖²/σ_0² − p ] = I^p

and the theorem has been proved.

Corollary 2.1. For the Kullback-Leibler information KLI_γ^p it holds:

(i) Provided that σ_0 ≠ σ_1, KLI_2^p has a strict ascending order as the dimension p ∈ ℕ* rises, i.e., KLI_2^1 < KLI_2^2 < ... < KLI_2^p < ...

(ii) Provided that σ_0 ≠ σ_1 and μ_1 = μ_0, and for a given γ, we have KLI_γ^1 < KLI_γ^2 < ... < KLI_γ^p < ...

(iii) For γ → ∞ in KLI_γ^p, we obtain KLI_∞^1 < KLI_∞^2 < ... < KLI_∞^p < ...

(iv) Given p, σ_0 ≠ σ_1 and μ_1 = μ_0, the K-L information KLI_γ^p has a strict descending order as γ ∈ ℕ∖{1} rises, i.e., KLI_2^p > KLI_3^p > ... > KLI_γ^p > ...

(v) KLI_∞^p := lim_{γ→∞} KLI_γ^p = (p/2) log(σ_0²/σ_1²) + p (σ_1/σ_0 − 1) is a lower bound of all the KLI_γ^p, γ = 2, 3, ...

Proof. From Theorem 2.1, KLI_2^p is, up to the constant ‖μ_1 − μ_0‖²/(2σ_0²), a linear expression of p ∈ ℕ*, i.e.:

KLI_2^p = λ p + ‖μ_1 − μ_0‖²/(2σ_0²)

with a non-negative slope:

λ = (1/2) [ log(σ_0²/σ_1²) + σ_1²/σ_0² − 1 ]

This is because, applying the known logarithmic inequality log x ≤ x − 1, x ∈ ℝ*₊ (where the equality holds only for x = 1), to x = σ_1²/σ_0², we obtain:

log(σ_1²/σ_0²) ≤ σ_1²/σ_0² − 1,  i.e.  −log(σ_0²/σ_1²) ≤ σ_1²/σ_0² − 1

and hence λ ≥ 0, where the equality holds only for σ_1²/σ_0² = 1, i.e., only for σ_0 = σ_1. Thus, as the dimension p ∈ ℕ* rises, KLI_2^p also rises.

In the general p-variate case, KLI_γ^p is a linear expression of p ∈ ℕ*, i.e., KLI_γ^p = λ_γ p, provided that μ_1 = μ_0, with a non-negative slope:

λ_γ = (1/2) log(σ_0²/σ_1²) + ((γ−1)/γ) [ (σ_1/σ_0)^{γ/(γ−1)} − 1 ]

Indeed, λ_γ ≥ 0: setting u = (σ_1/σ_0)^{γ/(γ−1)}, we have (1/2) log(σ_0²/σ_1²) = −((γ−1)/γ) log u, and since log u ≤ u − 1 we get:

λ_γ = ((γ−1)/γ) ( u − 1 − log u ) ≥ 0

where the equality holds only for σ_1²/σ_0² = 1, i.e., only for σ_0 = σ_1. Thus, as the dimension p ∈ ℕ* rises, KLI_γ^p also rises, i.e., KLI_γ^1 < KLI_γ^2 < ..., and, in the limit, KLI_∞^1 < KLI_∞^2 < ..., provided that μ_1 = μ_0 (see Figure 1).

Now, for given p, σ_0 ≠ σ_1 and μ_1 = μ_0, consider λ_γ as a function of c = (γ−1)/γ, which rises with γ. With r = σ_1/σ_0 and u = r^{1/c}, we have λ_γ = c (u − 1) − log r, and:

dλ_γ/dc = u − 1 − u log u ≤ 0

for every u > 0, since the function u − 1 − u log u attains its maximum, zero, at u = 1. Hence, if we choose γ_1 < γ_2, then λ_{γ_1} > λ_{γ_2} for σ_0 ≠ σ_1, i.e., KLI_{γ_1}^p > KLI_{γ_2}^p. Consequently, KLI_2^p > KLI_3^p > ... and so KLI_γ^p > KLI_∞^p, γ = 2, 3, ... (see Figure 2).

Figure 1. The graphs of the KLI_γ^p, for dimensions p = 1, 2, ..., 20, as functions of the quotient σ_0/σ_1 (provided that σ_0 ≠ σ_1), where we can see that KLI_γ^1 < KLI_γ^2 < ...

Figure 2. The graphs of the trivariate KLI_γ^3, for γ = 2, 3, ..., and of KLI_∞^3, as functions of the quotient σ_0/σ_1 (provided that σ_0 ≠ σ_1), where we can see that KLI_2^3 > KLI_3^3 > ... and KLI_γ^3 > KLI_∞^3, γ = 2, 3, ...
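Both the closed form (10) and the orderings (iv)-(v) of Corollary 2.1 can be checked numerically (our sketch for p = 1, using the density of Definition 1.1 as reconstructed above):

```python
import numpy as np
from math import gamma

def trap(y, x):
    """Trapezoidal rule on a uniform grid."""
    return (y.sum() - 0.5 * (y[0] + y[-1])) * (x[1] - x[0])

def kt_pdf(x, g, sigma):
    """Univariate gamma-order generalized normal (Definition 1.1, mu = 0)."""
    c = (g - 1.0) / g
    C1 = gamma(1.5) * c ** c / (np.sqrt(np.pi) * gamma(c + 1.0))
    return C1 / sigma * np.exp(-c * np.abs(x / sigma) ** (g / (g - 1.0)))

def kli_closed(g, s1, s0, p=1):
    """Closed form (10) of the K-L information, equal means."""
    c = (g - 1.0) / g
    return 0.5 * p * np.log(s0**2 / s1**2) + p * c * ((s1 / s0) ** (g / (g - 1.0)) - 1)

# Theorem 2.1: direct quadrature of the K-L integral versus (10), p = 1.
g, s1, s0 = 3.0, 0.8, 1.5
x = np.linspace(-40.0, 40.0, 400001)
f1, f0 = kt_pdf(x, g, s1), kt_pdf(x, g, s0)
kl_num = trap(f1 * np.log(f1 / f0), x)
print(kl_num, kli_closed(g, s1, s0))   # both ≈ 0.2216

# Corollary 2.1 (iv)-(v): descending in gamma, bounded below by the limit.
s1, s0 = 0.6, 1.0
vals = [kli_closed(g, s1, s0) for g in (2, 3, 5, 10, 100)]
limit = 0.5 * np.log(s0**2 / s1**2) + (s1 / s0 - 1)
print([round(v, 4) for v in vals], round(limit, 4))   # strictly decreasing toward the limit
```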
The Shannon entropy of a random variable X which follows the generalized normal KT_γ^p(μ, Σ) is:

H( KT_γ^p(μ, Σ) ) = p (γ−1)/γ + log( (det Σ)^{1/2} / C_γ(p) )   (19)

This is due to the entropy of the symmetric Kotz type distribution (9), which has been calculated (see [7]) for m = 1, s = γ/(2(γ−1)), r = (γ−1)/γ.

Theorem 2.2. The generalized Fisher's information of the generalized normal KT_α^p(0, I_p) is:

J_α( KT_α^p(0, I_p) ) = p² ((α−1)/α) Γ( p(α−1)/α ) / Γ( 1 + p(α−1)/α )   (20)
Proof. For f = KT_α^p(0, I_p) we have ∇ ln f(x) = −((α−1)/α) ∇( ‖x‖^{α/(α−1)} ), and hence:

‖∇ ln f(x)‖ = ((α−1)/α) (α/(α−1)) ‖x‖^{α/(α−1) − 1} = ‖x‖^{1/(α−1)}

which implies:

f(x)^{1−α} ‖∇f(x)‖^α = f(x) ‖x‖^{α/(α−1)}

Therefore, from (2), J_α(X) eventually equals:

J_α( KT_α^p(0, I_p) ) = C_α(p) ∫_{ℝ^p} ‖x‖^{α/(α−1)} exp{ −((α−1)/α) ‖x‖^{α/(α−1)} } dx

Switching to hyperspherical coordinates and taking into account the value of C_α(p), we have the result (see [7] for details).

Corollary 2.2. Due to Theorem 2.2, it holds that J_α( KT_α^p(0, I_p) ) = p and N_α( KT_α^p(0, I_p) ) = 1.

Proof. Indeed, from (20) and the identity Γ( 1 + p(α−1)/α ) = ( p(α−1)/α ) Γ( p(α−1)/α ), we get J_α( KT_α^p(0, I_p) ) = p, and from the fact that J_α(X) N_α(X) ≥ p (which has been extensively studied under the logarithmic Sobolev inequalities, see [1] for details and [14]), Corollary 2.2 holds. Notice that, also from (20):

N_α( KT_α^p(0, I_p) ) = p J_α^{−1}( KT_α^p(0, I_p) )
and therefore, N_α( KT_α^p(0, I_p) ) = 1.

3. Discussion and Further Analysis

We examine the behavior of the multivariate γ-order generalized normal distribution. Using Mathematica, we proceed to the following helpful calculations to analyze further the above theoretical results, see also [12].
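One such calculation, carried out here in Python rather than Mathematica (a sketch relying on our reconstructed forms of C, M and (19)), supports Theorem 2.2 and Corollary 2.2: for matching orders, J_α integrates to p (= 1 in the quadrature below) and the generalized entropy power N_α comes out as 1:

```python
import numpy as np
from math import gamma

def trap(y, x):
    """Trapezoidal rule on a uniform grid."""
    return (y.sum() - 0.5 * (y[0] + y[-1])) * (x[1] - x[0])

def j_alpha_kt(a):
    """J_alpha(KT_alpha(0,1)) by quadrature for p = 1; Theorem 2.2 gives p."""
    c = (a - 1.0) / a
    C1 = gamma(1.5) * c ** c / (np.sqrt(np.pi) * gamma(c + 1.0))
    x = np.linspace(-40.0, 40.0, 400001)
    f = C1 * np.exp(-c * np.abs(x) ** (a / (a - 1.0)))
    # |grad ln f|^alpha = |x|^(alpha/(alpha-1)), as in the proof of Theorem 2.2
    return trap(f * np.abs(x) ** (a / (a - 1.0)), x)

def n_alpha_kt(a, p=1):
    """N_alpha(KT_alpha(0, I_p)) from the reconstructed (6), (7) and (19)."""
    c = (a - 1.0) / a
    C = gamma(p / 2.0 + 1) * c ** (p * c) / (np.pi ** (p / 2.0) * gamma(p * c + 1))
    M = (c / np.e) ** (a - 1.0) * np.pi ** (-a / 2.0) \
        * (gamma(p / 2.0 + 1) / gamma(p * c + 1)) ** (a / p)
    H = p * c + np.log(1.0 / C)          # entropy (19) with det(Sigma) = 1
    return M * np.exp(a * H / p)         # generalized entropy power (6)

for a in (2.0, 3.0, 7.0):
    print(a, j_alpha_kt(a), n_alpha_kt(a), n_alpha_kt(a, p=3))   # ≈ 1.0, 1.0, 1.0
```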
Figure 3 represents the univariate γ-order generalized normal distribution for various values of γ: γ = 2 (normal distribution), γ = 5, γ = 10, γ = 100, while Figure 4 represents the bivariate 10-order generalized normal KT_10^2(0, I_2) with mean 0 and covariance matrix I_2.

Figure 3. The univariate γ-order generalized normals KT_γ^1(0, 1) for γ = 2, 5, 10, 100.

Figure 4. The bivariate 10-order generalized normal KT_10^2(0, I_2).

Figure 5 provides the behavior of the generalized information measure for the bivariate generalized normal distribution, i.e., J_α( KT_γ^2(0, I_2) ), where (α, γ) ∈ [2, 20] × [2, 10].

Figure 5. J_α( KT_10^5(0, I_5) ), 1 ≤ α ≤ 20.
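The shape change seen in Figure 3 can also be read off numerically (our sketch, p = 1): the value of the density at the origin is just the normality factor C_γ(1) as reconstructed in Definition 1.1, which rises from the normal value 1/√(2π) ≈ 0.399 towards 0.5 as γ grows:

```python
import numpy as np
from math import gamma

def kt_peak(g):
    """Density of KT_gamma(0,1) at x = 0, i.e., the normality factor C_gamma(1)."""
    c = (g - 1.0) / g
    return gamma(1.5) * c ** c / (np.sqrt(np.pi) * gamma(c + 1.0))

for g in (2.0, 5.0, 10.0, 100.0):
    print(g, kt_peak(g))   # increases from 1/sqrt(2*pi) toward 0.5
```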
For the introduced generalized information measure of the generalized normal distribution it holds that:

J_α( KT_γ^p(0, I_p) )  { > p, for α > γ ;  = p, for α = γ ;  < p, for α < γ }
For α = 2 and γ = 2 we are reduced to the typical case of the normal distribution. The greater the number of variables involved in the multivariate γ-order generalized normal, the larger the generalized information J_α( KT_γ^p(0, I_p) ), i.e., J_α( KT_γ^{p_1}(0, I_{p_1}) ) < J_α( KT_γ^{p_2}(0, I_{p_2}) ) for p_1 < p_2. In other words, the generalized information measure of the multivariate generalized normal distribution is an increasing function of the number of the involved variables. Let us consider α and p to be constants. If we let γ vary, then the generalized information of the multivariate γ-order generalized normal is a decreasing function of γ, i.e., for γ_1 < γ_2 we have J_α( KT_{γ_1}^p(0, I_p) ) > J_α( KT_{γ_2}^p(0, I_p) ), except for p = 1.

Proposition 3.1. The lower bound of J_α( KT_γ^p(0, I_p) ) is the Fisher's entropy type information.

Proof. Letting α vary and excluding the case p = 1, the generalized information of the multivariate generalized normal distribution is an increasing function of α, i.e., J_{α_1}( KT_γ^p(0, I_p) ) < J_{α_2}( KT_γ^p(0, I_p) ) for α_1 < α_2. That is, the Fisher's information J_2( KT_γ^p(0, I_p) ) is smaller than the generalized information measure J_α( KT_γ^p(0, I_p) ) for all α > 2, provided that p ≠ 1.

When α < γ, the more variables involved in the multivariate generalized normal distribution, the larger the generalized entropy power N_α( KT_γ^p(0, I_p) ), i.e., N_α( KT_γ^{p_1}(0, I_{p_1}) ) < N_α( KT_γ^{p_2}(0, I_{p_2}) ) for p_1 < p_2. The dual occurs when α > γ. That is, considering α and γ to be constants and letting p vary: when α < γ the number of involved variables defines an increasing generalized entropy power N_α( KT_γ^p(0, I_p) ), while for α > γ the generalized entropy power is a decreasing function of the number of involved variables.
If we let γ vary with γ_1 < γ_2 then, for the generalized entropy power of the multivariate generalized normal distribution, we have N_α( KT_{γ_1}^p(0, I_p) ) < N_α( KT_{γ_2}^p(0, I_p) ) for given α and p, i.e., the generalized entropy power of the multivariate generalized normal distribution is an increasing function of γ. Now, letting α vary, N_α( KT_γ^p(0, I_p) ) is a decreasing function of α, i.e., N_{α_1}( KT_γ^p(0, I_p) ) > N_{α_2}( KT_γ^p(0, I_p) ) provided that α_1 < α_2. The well known entropy power thus provides an upper bound for the generalized entropy power, i.e., 0 < N_α( KT_γ^p(0, I_p) ) ≤ N_2( KT_γ^p(0, I_p) ), for given γ and p.
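For p = 1 these comparisons can be probed directly (our sketch; the score of the γ-order density gives |(ln f)′(x)| = |x|^{1/(γ−1)}, so J_α( KT_γ^1(0, 1) ) = E‖X‖^{α/(γ−1)}):

```python
import numpy as np
from math import gamma

def trap(y, x):
    """Trapezoidal rule on a uniform grid."""
    return (y.sum() - 0.5 * (y[0] + y[-1])) * (x[1] - x[0])

def j_alpha_kt(a, g):
    """J_alpha(KT_gamma(0,1)) = E|X|^(alpha/(gamma-1)) for p = 1, by quadrature."""
    c = (g - 1.0) / g
    C1 = gamma(1.5) * c ** c / (np.sqrt(np.pi) * gamma(c + 1.0))
    x = np.linspace(-50.0, 50.0, 500001)
    f = C1 * np.exp(-c * np.abs(x) ** (g / (g - 1.0)))
    return trap(f * np.abs(x) ** (a / (g - 1.0)), x)

print(j_alpha_kt(3.0, 2.0))   # alpha > gamma: > 1 = p (here E|X|^3 of N(0,1) ≈ 1.596)
print(j_alpha_kt(3.0, 3.0))   # alpha = gamma: ≈ 1 = p
print(j_alpha_kt(2.0, 4.0))   # alpha < gamma: < 1 = p
```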
4. Conclusions

In this paper new information measures were defined, discussed and analyzed. The introduced γ-order generalized normal acts on these measures as the well known normal distribution does on the Fisher's information measure in the classical cases. The provided computations offer an insightful analysis of these new measures, which can also be considered as the α-moments of the score function. For the γ-order generalized normal distribution, the Kullback-Leibler information measure was evaluated, providing evidence that the generalization "behaves well".

Acknowledgements

The authors would like to thank I. Vlachaki for helpful discussions during the preparation of this paper. The referees' comments are very much appreciated.

References

1. Kitsos, C.P.; Tavoularis, N.K. Logarithmic Sobolev inequalities for information measures. IEEE Trans. Inform. Theory 2009, 55, 2554-2561.
2. Kotz, S. Multivariate distribution at a cross-road. In Statistical Distributions in Scientific Work; Patil, G.P., Kotz, S., Ord, J.F., Eds.; D. Reidel Publishing Company: Dordrecht, The Netherlands, 1975; Volume 1, pp. 247-270.
3. Baringhaus, L.; Henze, N. Limit distributions for Mardia's measure of multivariate skewness. Ann. Statist. 1992, 20, 1889-1902.
4. Liang, J.; Li, R.; Fang, H.; Fang, K.T. Testing multinormality based on low-dimensional projection. J. Stat. Plan. Infer. 2000, 86, 129-141.
5. Nadarajah, S. The Kotz-type distribution with applications. Statistics 2003, 37, 341-358.
6. Kitsos, C.P. Optimal designs for estimating the percentiles of the risk in multistage models of carcinogenesis. Biometr. J. 1999, 41, 33-43.
7. Kitsos, C.P.; Tavoularis, N.K. New entropy type information measures. In Proceedings of Information Technology Interfaces; Luzar-Stiffer, V., Jarec, I., Bekic, Z., Eds.; Dubrovnik, Croatia, 24 June 2009; pp. 255-259.
8. Fisher, R.A. Theory of statistical estimation. Proc. Cambridge Phil. Soc. 1925, 22, 700-725.
9. Cover, T.M.; Thomas, J.A. Elements of Information Theory, 2nd ed.; Wiley: New York, NY, USA, 2006.
10. Schervish, M.J. Theory of Statistics; Springer-Verlag: New York, NY, USA, 1995.
11. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379-423, 623-656.
12. Kitsos, C.P.; Tavoularis, N.K. On the hyper-multivariate normal distribution. In Proceedings of International Workshop on Applied Probabilities, Paris, France, 7-10 July 2008.
13. Kullback, S.; Leibler, R.A. On information and sufficiency. Ann. Math. Statist. 1951, 22, 79-86.
14. Darbellay, G.A.; Vajda, I. Entropy expressions for multivariate continuous distributions. IEEE Trans. Inform. Theory 2000, 46, 709-712. © 2010 by the authors; licensee MDPI, Basel, Switzerland. This article is an Open Access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).