IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 45, NO. 7, NOVEMBER 1999

Optimistic Shannon Coding Theorems for Arbitrary Single-User Systems

Po-Ning Chen and Fady Alajaji, Member, IEEE

Abstract—The conventional definitions of the source coding rate and of channel capacity require the existence of reliable codes for all sufficiently large block lengths. Alternatively, if it is required that good codes exist for infinitely many block lengths, then optimistic definitions of source coding rate and channel capacity are obtained. In this work, formulas for the optimistic minimum achievable fixed-length source coding rate and the minimum ε-achievable source coding rate for arbitrary finite-alphabet sources are established. The expressions for the optimistic capacity and the optimistic ε-capacity of arbitrary single-user channels are also provided. The expressions of the optimistic source coding rate and capacity are examined for the class of information stable sources and channels, respectively. Finally, examples for the computation of optimistic capacity are presented.


Index Terms—Error probability, optimistic channel capacity, optimistic source coding rate, Shannon theory, source-channel separation theorem.

I. INTRODUCTION

The conventional definition of the minimum achievable fixed-length source coding rate $T$ for a source Z [13, Definition 4] requires the existence of reliable source codes for all sufficiently large block lengths. Alternatively, if it is required that reliable codes exist for infinitely many block lengths, a new, more optimistic definition of source coding rate (denoted by $\bar{T}$) is obtained [13]. Similarly, the optimistic capacity $\bar{C}$ is defined by requiring the existence of reliable channel codes for infinitely many block lengths, as opposed to the definition of the conventional channel capacity $C$ [14, Definition 1].

This concept of optimistic source coding rate and capacity has recently been investigated by Verdú et al. for arbitrary (not necessarily stationary, ergodic, information stable, etc.) sources and single-user channels [13], [14]. More specifically, they establish an additional operational characterization for the optimistic minimum achievable source coding rate ($\bar{T}$) by demonstrating that for a given source, the classical statement of the source-channel separation theorem¹ holds for every channel if $\bar{T} = T$ [13]. In a dual fashion, they also show that for channels with $C = \bar{C}$, the classical separation theorem holds for every source. They also conjecture that $\bar{T}$ and $\bar{C}$ do not seem to admit a simple expression.

In this work, we demonstrate that $\bar{T}$ and $\bar{C}$ do indeed have a general formula. The key to these results is the application of the generalized entropy/information rates introduced in [3] and [4] to the existing proofs by Verdú and Han [7], [14] of the direct and converse parts of the conventional coding theorems. We also provide a general expression for the optimistic minimum ε-achievable source coding rate and the optimistic ε-capacity.

In Section II, we briefly introduce the generalized sup/inf-information/entropy rates which will play a key role in proving our optimistic coding theorems. In Section III, we provide the optimistic source coding theorems. They are shown based on two recent bounds due to Han [7] on the error probability of a source code as a function of its size. Interestingly, these bounds constitute the natural counterparts of the upper bound provided by Feinstein's Lemma and the Verdú–Han lower bound to the error probability of a channel code. Furthermore, we show that for information-stable sources, the formula for $\bar{T}$ reduces to
$$\bar{T} = \liminf_{n\to\infty} \frac{1}{n} H(X^n).$$
This is in contrast to the expression for $T$, which is known to be
$$T = \limsup_{n\to\infty} \frac{1}{n} H(X^n).$$
The above result leads us to observe that for sources that are both stationary and information-stable, the classical separation theorem is valid for every channel. In Section IV, we present (without proving) the general optimistic channel coding theorems, and prove that for the class of information-stable channels the expression of $\bar{C}$ becomes
$$\bar{C} = \limsup_{n\to\infty} \sup_{X^n} \frac{1}{n} I(X^n; Y^n)$$
while the expression of $C$ is
$$C = \liminf_{n\to\infty} \sup_{X^n} \frac{1}{n} I(X^n; Y^n).$$
Finally, in Section V, we present examples for the computation of $\bar{C}$ and $C$ for information-stable as well as information-unstable channels.

¹By the "classical statement of the source-channel separation theorem," we mean the following. Given a source Z with (conventional) source coding rate T(Z) and a channel W with capacity C, then Z can be reliably transmitted over W if T(Z) < C. Conversely, if T(Z) > C, then Z cannot be reliably transmitted over W. By reliable transmissibility of the source over the channel, we mean that there exists a sequence of source-channel codes such that the decoding error probability vanishes as the block length n → ∞ (cf. [13]).

Manuscript received September 1, 1998; revised April 19, 1999. This work was supported in part under an ARC Grant from Queen's University, the Natural Sciences and Engineering Research Council of Canada (NSERC) under Grant OGP0183645, and the National Science Council of Taiwan, ROC, under Grant NSC 87-2213-E-009-139. The material in this correspondence was presented in part at the IEEE International Symposium on Information Theory, MIT, Cambridge, MA, August 1998. P.-N. Chen is with the Department of Communication Engineering, National Chiao-Tung University, HsinChu, Taiwan, ROC. F. Alajaji is with the Department of Mathematics and Statistics, Queen's University, Kingston, Ont. K7L 3N6, Canada. Communicated by I. Csiszár, Associate Editor for Shannon Theory. Publisher Item Identifier S 0018-9448(99)07305-8.

II. ε-INF/SUP-INFORMATION/ENTROPY RATES

Consider an input process X defined by a sequence of finite-dimensional distributions [14]: $X = \{X^n = (X_1^{(n)}, \cdots, X_n^{(n)})\}_{n=1}^{\infty}$. Denote by $Y = \{Y^n = (Y_1^{(n)}, \cdots, Y_n^{(n)})\}_{n=1}^{\infty}$ the corresponding output process induced by X via the channel
$$W \triangleq \{W^n = P_{Y^n|X^n}: \mathcal{X}^n \to \mathcal{Y}^n\}_{n=1}^{\infty}$$
which is an arbitrary sequence of n-dimensional conditional distributions from $\mathcal{X}^n$ to $\mathcal{Y}^n$, where $\mathcal{X}$ and $\mathcal{Y}$ are the input and output alphabets, respectively. We assume throughout this correspondence that $\mathcal{X}$ and $\mathcal{Y}$ are finite.

In [8] and [14], Han and Verdú introduce the notions of inf/sup-information/entropy rates and illustrate the key role these information measures play in proving a general lossless (block) source coding theorem and a general channel coding theorem.


The inf-information rate $\underline{I}(X;Y)$ (resp., sup-information rate $\bar{I}(X;Y)$) between processes X and Y is defined in [8] as the liminf in probability (resp., limsup in probability) of the sequence of normalized information densities $(1/n)\, i_{X^n W^n}(X^n;Y^n)$, where
$$\frac{1}{n}\, i_{X^n W^n}(a^n; b^n) \triangleq \frac{1}{n} \log \frac{P_{Y^n|X^n}(b^n \mid a^n)}{P_{Y^n}(b^n)}.$$
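To make the normalized information density concrete, here is a small numerical sketch (not part of the correspondence): it evaluates $(1/n)\,i_{X^nW^n}(x^n;y^n)$ for $n$ independent uses of a binary symmetric channel with crossover probability $p$ driven by a Bernoulli(1/2) input, so that $P_{Y^n}$ is uniform. The function name and parameter values are illustrative assumptions.

```python
import numpy as np

def normalized_information_density(x, y, p):
    """(1/n) log2 [ P_{Y^n|X^n}(y|x) / P_{Y^n}(y) ] for n independent
    BSC(p) uses with a Bernoulli(1/2) input (so P_{Y^n}(y) = 2^{-n})."""
    x, y = np.asarray(x), np.asarray(y)
    n = x.size
    flips = int(np.count_nonzero(x != y))
    log_cond = flips * np.log2(p) + (n - flips) * np.log2(1.0 - p)
    return (log_cond - (-float(n))) / n   # subtract log2 of 2^{-n}

rng = np.random.default_rng(0)
n, p = 2000, 0.11
x = rng.integers(0, 2, size=n)
y = x ^ (rng.random(n) < p)               # pass x through the BSC(p)
print(normalized_information_density(x, y, p))  # concentrates near 1 - h_b(0.11) ~ 0.5
```

For this memoryless example the density concentrates, so its liminf and limsup in probability coincide; the information-unstable channels of Section V are precisely the cases where they do not.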

When X is equal to Y, $\bar{I}(X;X)$ (resp., $\underline{I}(X;X)$) is referred to as the sup (resp., inf) entropy rate of X and is denoted by $\bar{H}(X)$ (resp., $\underline{H}(X)$). The liminf in probability of a sequence of random variables is defined as follows [8]: if $\{A_n\}$ is a sequence of random variables, then its liminf in probability is the largest extended real number $U$ such that

$$\lim_{n\to\infty} \Pr[A_n < U] = 0. \qquad (1)$$
Similarly, its limsup in probability is the smallest extended real number $\bar{U}$ such that
$$\lim_{n\to\infty} \Pr[A_n > \bar{U}] = 0. \qquad (2)$$

Note that these two quantities are always defined; if they are equal, then the sequence of random variables converges in probability to a constant. It is straightforward to deduce that (1) and (2) are, respectively, equivalent to

$$\liminf_{n\to\infty} \Pr[A_n < U] = \limsup_{n\to\infty} \Pr[A_n < U] = 0 \qquad (3)$$
and
$$\liminf_{n\to\infty} \Pr[A_n > \bar{U}] = \limsup_{n\to\infty} \Pr[A_n > \bar{U}] = 0. \qquad (4)$$

We can observe, however, that there might exist cases of interest where only the liminf's of the probabilities in (3) and (4) are equal to zero, while the limsup's do not vanish. There are also other cases where both the liminf's and limsup's in (3) and (4) do not vanish, but they are upper-bounded by a prescribed threshold ε. Furthermore, there are situations where the interval $[U, \bar{U}]$ does not contain only one point, e.g., when $A_n$ converges in distribution to another random variable. This remark constitutes the motivation for the recent work in [3] and [4], where generalized versions of the inf/sup-information/entropy rates are established.

Definition 2.1—Inf/Sup-Spectrums [3], [4]: If $\{A_n\}_{n=1}^{\infty}$ is a sequence of random variables taking values in a finite set $\mathcal{A}$, then its inf-spectrum $\underline{u}(\cdot)$ and its sup-spectrum $\bar{u}(\cdot)$ are defined by
$$\underline{u}(\theta) \triangleq \liminf_{n\to\infty} \Pr\{A_n \le \theta\} \quad\text{and}\quad \bar{u}(\theta) \triangleq \limsup_{n\to\infty} \Pr\{A_n \le \theta\}.$$
In other words, $\underline{u}(\cdot)$ and $\bar{u}(\cdot)$ are, respectively, the liminf and the limsup of the cumulative distribution function (CDF) of $A_n$. Note that by definition, the CDF of $A_n$—$\Pr\{A_n \le \theta\}$—is nondecreasing and right-continuous. However, for $\underline{u}(\cdot)$ and $\bar{u}(\cdot)$, only the nondecreasing property remains.

Definition 2.2—Quantiles of the Inf/Sup-Spectrum [3], [4]: For any $0 \le \varepsilon \le 1$, the quantiles $U_\varepsilon$ and $\bar{U}_\varepsilon$ of the sup-spectrum and the inf-spectrum are defined by
$$U_\varepsilon \triangleq \sup\{\theta: \bar{u}(\theta) \le \varepsilon\} \quad\text{and}\quad \bar{U}_\varepsilon \triangleq \sup\{\theta: \underline{u}(\theta) \le \varepsilon\}$$
respectively. It follows from the above definitions that $U_\varepsilon$ and $\bar{U}_\varepsilon$ are right-continuous and nondecreasing in ε. Note that Han and Verdú's liminf/limsup in probability of $A_n$ are special cases of $U_\varepsilon$ and $\bar{U}_\varepsilon$. More specifically, the following hold:
$$U = U_0 \quad\text{and}\quad \bar{U} = \bar{U}_{1^-}$$
where the superscript "−" denotes a strict inequality in the definition of $\bar{U}_{\varepsilon^-}$; i.e.,
$$\bar{U}_{\varepsilon^-} \triangleq \sup\{\theta: \underline{u}(\theta) < \varepsilon\}.$$
Note also that $U \le U_\varepsilon \le \bar{U}_\varepsilon \le \bar{U}$. Remark that $U_\varepsilon$ and $\bar{U}_\varepsilon$ always exist. For a better understanding of the quantities defined above, we depict them in Fig. 1.

[Fig. 1. The asymptotic CDF's of a sequence of random variables $\{A_n\}_{n=1}^{\infty}$: $\bar{u}(\cdot)$ = sup-spectrum and $\underline{u}(\cdot)$ = inf-spectrum.]

If we replace $A_n$ by the normalized information (resp., entropy) density, we get the following definitions.

Definition 2.3—ε-Inf/Sup-Information Rates [3], [4]: The ε-inf-information rate $\underline{I}_\varepsilon(X;Y)$ (resp., ε-sup-information rate $\bar{I}_\varepsilon(X;Y)$) between X and Y is defined as the quantile of the sup-spectrum (resp., inf-spectrum) of the normalized information density. More specifically,
$$\underline{I}_\varepsilon(X;Y) \triangleq \sup\{\theta: \bar{i}_{XW}(\theta) \le \varepsilon\} \quad\text{where}\quad \bar{i}_{XW}(\theta) \triangleq \limsup_{n\to\infty} \Pr\left\{\frac{1}{n}\, i_{X^n W^n}(X^n;Y^n) \le \theta\right\}$$
and
$$\bar{I}_\varepsilon(X;Y) \triangleq \sup\{\theta: \underline{i}_{XW}(\theta) \le \varepsilon\} \quad\text{where}\quad \underline{i}_{XW}(\theta) \triangleq \liminf_{n\to\infty} \Pr\left\{\frac{1}{n}\, i_{X^n W^n}(X^n;Y^n) \le \theta\right\}.$$

Definition 2.4—ε-Inf/Sup-Entropy Rates [3], [4]: The ε-inf-entropy rate $\underline{H}_\varepsilon(X)$ (resp., ε-sup-entropy rate $\bar{H}_\varepsilon(X)$) for a source X is defined as the quantile of the sup-spectrum (resp., inf-spectrum) of the normalized entropy density. More specifically,
$$\underline{H}_\varepsilon(X) \triangleq \sup\{\theta: \bar{h}_X(\theta) \le \varepsilon\} \quad\text{where}\quad \bar{h}_X(\theta) \triangleq \limsup_{n\to\infty} \Pr\left\{\frac{1}{n}\, h_{X^n}(X^n) \le \theta\right\}$$
and
$$\bar{H}_\varepsilon(X) \triangleq \sup\{\theta: \underline{h}_X(\theta) \le \varepsilon\} \quad\text{where}\quad \underline{h}_X(\theta) \triangleq \liminf_{n\to\infty} \Pr\left\{\frac{1}{n}\, h_{X^n}(X^n) \le \theta\right\}$$
and
$$\frac{1}{n}\, h_{X^n}(X^n) \triangleq \frac{1}{n} \log \frac{1}{P_{X^n}(X^n)}$$
is the normalized entropy density.
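The following sketch (written for this rendering, not taken from [3], [4]) estimates the inf/sup-spectra and the quantiles $U_\varepsilon$, $\bar{U}_\varepsilon$ by Monte Carlo for a toy sequence $A_n$ whose even- and odd-indexed terms concentrate around different constants; the distribution, grid, and block-length range are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_A(n, trials=2000):
    """Toy sequence: A_n ~ Normal(0.3, 1/sqrt(n)) for even n and
    Normal(0.7, 1/sqrt(n)) for odd n, so the two spectra differ."""
    mean = 0.3 if n % 2 == 0 else 0.7
    return rng.normal(mean, 1.0 / np.sqrt(n), size=trials)

grid = np.linspace(0.0, 1.0, 201)        # candidate thresholds theta
n_values = list(range(400, 420))          # "large" block lengths as a proxy
samples = {n: sample_A(n) for n in n_values}
cdf = np.array([[np.mean(samples[n] <= th) for th in grid] for n in n_values])

inf_spectrum = cdf.min(axis=0)   # stands in for liminf_n Pr{A_n <= theta}
sup_spectrum = cdf.max(axis=0)   # stands in for limsup_n Pr{A_n <= theta}

def quantile(spectrum, eps):
    """U_eps = sup{ theta : spectrum(theta) <= eps }, evaluated on the grid."""
    ok = grid[spectrum <= eps]
    return ok.max() if ok.size else -np.inf

for eps in (0.05, 0.5, 0.95):
    print(eps, quantile(sup_spectrum, eps), quantile(inf_spectrum, eps))
# The sup-spectrum quantile stays below ~0.3 while the inf-spectrum quantile
# stays below ~0.7, illustrating U_eps <= bar{U}_eps.
```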


III. OPTIMISTIC SOURCE CODING THEOREMS

In [13], Vembu et al. characterize the sources for which the classical separation theorem holds for every channel. They demonstrate that for a given source X, the separation theorem holds for every channel if its optimistic minimum achievable source coding rate ($\bar{T}(X)$) coincides with its conventional (or pessimistic) minimum achievable source coding rate ($T(X)$); i.e., if $\bar{T}(X) = T(X)$. We herein establish a general formula for $\bar{T}(X)$. We prove that for any source X
$$\bar{T}(X) = \underline{H}_{1^-}(X);$$
in other words, $\underline{H}_{1^-}(X)$ can be regarded as the optimistic source coding rate. We also provide the general expression for the optimistic minimum ε-achievable source coding rate. We show these results based on two new bounds due to Han (one upper bound and one lower bound) on the error probability of a source code [7, Ch. 1]. The upper bound (Lemma 3.1) consists of the counterpart of Feinstein's lemma for channel codes (cf., for example, [14, Theorem 1]), while the lower bound (Lemma 3.2) consists of the counterpart of the Verdú–Han lower bound on the error probability of a channel code [14, Theorem 4]. As in the case of the channel coding bounds, both source coding bounds (Lemmas 3.1 and 3.2) hold for arbitrary sources and for arbitrary fixed block length.

Definition 3.5: An (n, M) fixed-length source code for $X^n$ is a collection of M n-tuples $\mathcal{C}_n = \{c_1^n, \cdots, c_M^n\}$. The error probability of the code is $P_e^{(n)} \triangleq \Pr[X^n \notin \mathcal{C}_n]$.

Definition 3.6—Optimistic ε-Achievable Source Coding Rate: Fix 0 < ε < 1. R ≥ 0 is an optimistic ε-achievable rate if, for every γ > 0, there exists a sequence of (n, M) fixed-length source codes $\mathcal{C}_n$ such that
$$\frac{1}{n}\log M \le R + \gamma \quad\text{and}\quad P_e^{(n)} \le \varepsilon \quad\text{for infinitely many } n.$$
The optimistic minimum ε-achievable source coding rate, denoted by $\bar{T}_\varepsilon(X)$, is the infimum of all optimistic ε-achievable rates.

Theorem 3.1—Optimistic Minimum ε-Achievable Source Coding Rate: Fix 0 < ε < 1. For any source X,
$$\underline{H}_{\varepsilon^-}(X) \le \bar{T}_{1-\varepsilon}(X) \le \underline{H}_\varepsilon(X).$$
Note that actually $\bar{T}_{1-\varepsilon}(X) = \underline{H}_\varepsilon(X)$, except possibly at the points of discontinuity of $\underline{H}_\varepsilon(X)$ (which are countable).

Proof:
1) Forward Part (Achievability)—$\bar{T}_{1-\varepsilon}(X) \le \underline{H}_\varepsilon(X)$: We need to prove the existence of a sequence of block codes $\{\mathcal{C}_n\}$ such that, for every γ > 0, $(1/n)\log|\mathcal{C}_n| < \underline{H}_\varepsilon(X) + \gamma$ and $P_e^{(n)} \le 1 - \varepsilon$ for infinitely many n. Lemma 3.1 ensures the existence (for any γ > 0) of a source block code $\mathcal{C}_n = (n, \exp\{n(\underline{H}_\varepsilon(X) + \gamma/2)\})$ with error probability
$$P_e^{(n)} \le \Pr\left[\frac{1}{n}\, h_{X^n}(X^n) \ge \underline{H}_\varepsilon(X) + \frac{\gamma}{2}\right].$$
Therefore,
$$\liminf_{n\to\infty} P_e^{(n)} \le \liminf_{n\to\infty} \Pr\left[\frac{1}{n}\, h_{X^n}(X^n) \ge \underline{H}_\varepsilon(X) + \frac{\gamma}{2}\right] = 1 - \limsup_{n\to\infty} \Pr\left[\frac{1}{n}\, h_{X^n}(X^n) < \underline{H}_\varepsilon(X) + \frac{\gamma}{2}\right] < 1 - \varepsilon$$
where the last (strict) inequality follows from the definition of $\underline{H}_\varepsilon(X)$. Hence $P_e^{(n)} \le 1 - \varepsilon$ for infinitely many n, and $\bar{T}_{1-\varepsilon}(X) \le \underline{H}_\varepsilon(X)$.

2) Converse Part—$\bar{T}_{1-\varepsilon}(X) \ge \underline{H}_{\varepsilon^-}(X)$: We will prove the converse by contradiction. Suppose that $\bar{T}_{1-\varepsilon}(X) < \underline{H}_{\varepsilon^-}(X)$. Then $(\exists\, \gamma > 0)$ $\bar{T}_{1-\varepsilon}(X) < \underline{H}_{\varepsilon^-}(X) - 3\gamma$. By definition of $\bar{T}_{1-\varepsilon}(X)$, there exists a sequence of codes $\mathcal{C}_n$ such that
$$\frac{1}{n}\log|\mathcal{C}_n| < [\underline{H}_{\varepsilon^-}(X) - 3\gamma] + \gamma \quad\text{and}\quad P_e^{(n)} \le 1 - \varepsilon \quad\text{for infinitely many } n. \qquad (6)$$
By Lemma 3.2,
$$P_e^{(n)} \ge \Pr\left[\frac{1}{n}\, h_{X^n}(X^n) > \frac{1}{n}\log|\mathcal{C}_n| + \gamma\right] - e^{-n\gamma} \ge \Pr\left[\frac{1}{n}\, h_{X^n}(X^n) > (\underline{H}_{\varepsilon^-}(X) - 2\gamma) + \gamma\right] - e^{-n\gamma}.$$
Therefore,
$$\liminf_{n\to\infty} P_e^{(n)} \ge 1 - \limsup_{n\to\infty} \Pr\left[\frac{1}{n}\, h_{X^n}(X^n) \le \underline{H}_{\varepsilon^-}(X) - \gamma\right] > 1 - \varepsilon$$
where the last inequality follows from the definition of $\underline{H}_{\varepsilon^-}(X)$. Thus a contradiction to (6) is obtained.

3) Equality: $\underline{H}_\varepsilon(X)$ is a nondecreasing function of ε; hence its number of discontinuity points is countable. For any continuity point ε, we have that $\underline{H}_{\varepsilon^-}(X) = \underline{H}_\varepsilon(X)$, and thus $\bar{T}_{1-\varepsilon}(X) = \underline{H}_\varepsilon(X)$.

Theorem 3.2—Optimistic Minimum Achievable Source Coding Rate Formula: For any source X
$$\bar{T}(X) = \underline{H}_{1^-}(X).$$
Proof: By definition of $\bar{T}(X)$ and Theorem 3.1,
$$\bar{T}(X) = \sup_{0<\varepsilon<1} \bar{T}_\varepsilon(X) \ge \sup_{0<\varepsilon<1} \underline{H}_{(1-\varepsilon)^-}(X) = \underline{H}_{1^-}(X).$$
Conversely, suppose that $\bar{T}(X) > \underline{H}_{1^-}(X)$. Then there exists δ > 0 such that
$$\underline{H}_{1^-}(X) < \bar{T}(X) - \delta.$$
But by definition of $\bar{T}(X)$, there exists 0 < ε = ε(δ) < 1 such that
$$\bar{T}(X) - \delta < \bar{T}_\varepsilon(X).$$


Therefore,
$$\underline{H}_{1^-}(X) < \bar{T}(X) - \delta < \bar{T}_\varepsilon(X) \le \underline{H}_{1-\varepsilon}(X) \le \underline{H}_{1^-}(X)$$
and a contradiction is obtained.

We conclude this section by examining the expression of $\bar{T}(X)$ for information-stable sources. It is already known (cf., for example, [13]) that for an information-stable source X
$$T(X) = \limsup_{n\to\infty} \frac{1}{n} H(X^n).$$
We herein prove a parallel expression for $\bar{T}(X)$.

Definition 3.7—Information-Stable Sources [13]: A source X is said to be information-stable if $H(X^n) > 0$ for n sufficiently large, and $h_{X^n}(X^n)/H(X^n)$ converges in probability to one as $n \to \infty$; i.e.,
$$\limsup_{n\to\infty} \Pr\left[\left|\frac{h_{X^n}(X^n)}{H(X^n)} - 1\right| > \gamma\right] = 0, \qquad \forall\, \gamma > 0$$
where $H(X^n) = E[h_{X^n}(X^n)]$ is the entropy of $X^n$.

Lemma 3.3: Every information-stable source X satisfies
$$\bar{T}(X) = \liminf_{n\to\infty} \frac{1}{n} H(X^n).$$
Proof: 1) ($\bar{T}(X) \ge \liminf_{n\to\infty} (1/n) H(X^n)$): Fix ε > 0 arbitrarily small. Using the fact that $(1/n)\, h_{X^n}(X^n)$ is a (finite-alphabet) nonnegative bounded random variable, we can write the normalized block entropy as
$$\frac{1}{n} H(X^n) = E\left[\frac{1}{n} h_{X^n}(X^n)\right] = E\left[\frac{1}{n} h_{X^n}(X^n)\, 1\!\left\{\frac{1}{n} h_{X^n}(X^n) \le \underline{H}_{1^-}(X) + \varepsilon\right\}\right] + E\left[\frac{1}{n} h_{X^n}(X^n)\, 1\!\left\{\frac{1}{n} h_{X^n}(X^n) > \underline{H}_{1^-}(X) + \varepsilon\right\}\right]. \qquad (7)$$
From the definition of $\underline{H}_{1^-}(X)$, it directly follows that the first term on the right-hand side of (7) is upper-bounded by $\underline{H}_{1^-}(X) + \varepsilon$, and that the liminf of the second term is zero. Thus
$$\bar{T}(X) = \underline{H}_{1^-}(X) \ge \liminf_{n\to\infty} \frac{1}{n} H(X^n).$$
2) ($\bar{T}(X) \le \liminf_{n\to\infty} (1/n) H(X^n)$): Fix ε > 0. Then for infinitely many n,
$$\Pr\left[\frac{h_{X^n}(X^n)}{H(X^n)} - 1 > \varepsilon\right] = \Pr\left[\frac{1}{n} h_{X^n}(X^n) > (1+\varepsilon)\frac{1}{n}H(X^n)\right] \ge \Pr\left[\frac{1}{n} h_{X^n}(X^n) > (1+\varepsilon)\left(\liminf_{n\to\infty}\frac{1}{n}H(X^n) + \varepsilon\right)\right].$$
Since X is information-stable, we obtain that
$$\liminf_{n\to\infty} \Pr\left[\frac{1}{n} h_{X^n}(X^n) > (1+\varepsilon)\left(\liminf_{n\to\infty}\frac{1}{n}H(X^n) + \varepsilon\right)\right] = 0.$$
By the definition of $\underline{H}_{1^-}(X)$, the above implies that
$$\bar{T}(X) = \underline{H}_{1^-}(X) \le (1+\varepsilon)\left(\liminf_{n\to\infty}\frac{1}{n}H(X^n) + \varepsilon\right).$$
The proof is completed by noting that ε can be made arbitrarily small.
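As a quick numerical illustration of Lemma 3.3 (a toy construction, not a source from the correspondence), take a binary source that is i.i.d. within each block but whose bias depends on the parity of the block length, so that $(1/n)H(X^n)$ oscillates and $\bar{T}(X) < T(X)$.

```python
import numpy as np

def h_b(a):
    """Binary entropy function in bits."""
    return 0.0 if a in (0.0, 1.0) else -a * np.log2(a) - (1 - a) * np.log2(1 - a)

def normalized_block_entropy(n):
    # Illustrative source: X^n is i.i.d. Bernoulli(p_n) with p_n = 0.11 for
    # odd n and p_n = 0.25 for even n, so (1/n) H(X^n) = h_b(p_n).
    return h_b(0.11 if n % 2 else 0.25)

rates = [normalized_block_entropy(n) for n in range(1, 2001)]
print("T(X)     ~ limsup (1/n)H(X^n) =", max(rates[-100:]))   # h_b(1/4) ~ 0.811
print("T-bar(X) ~ liminf (1/n)H(X^n) =", min(rates[-100:]))   # h_b(0.11) ~ 0.500
```

Such a source is information-stable but not stationary, which is why the two rates differ; the stationary case of the first observation below forces them to coincide.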

Observations:
• If the source X is both information-stable and stationary, the above Lemma yields
$$\bar{T}(X) = T(X) = \lim_{n\to\infty}\frac{1}{n}H(X^n).$$
This implies that given a stationary and information-stable source X, the classical separation theorem holds for every channel.
• Recall that both Lemmas 3.1 and 3.2 hold not only for arbitrary sources X, but also for arbitrary fixed block length n. This leads us to conclude that they can analogously be employed to provide a simple proof of the conventional source coding theorems [8]:
$$T(X) = \bar{H}(X) \quad\text{and}\quad \bar{H}_{\varepsilon^-}(X) \le T_{1-\varepsilon}(X) \le \bar{H}_\varepsilon(X).$$

IV. OPTIMISTIC CHANNEL CODING THEOREMS

In this section, we state without proving the general expressions for the optimistic ε-capacity² ($\bar{C}_\varepsilon$) and for the optimistic capacity ($\bar{C}$) of arbitrary single-user channels. The proofs of these expressions are straightforward once the right definition (of $\bar{I}_\varepsilon(X;Y)$) is made. They employ Feinstein's lemma and the Verdú–Han lower bound [14, Theorem 4], and follow the same arguments used in [14] to show the general expressions of the conventional channel capacity
$$C = \sup_X \underline{I}_0(X;Y) = \sup_X \underline{I}(X;Y)$$
and the conventional ε-capacity
$$\sup_X \underline{I}_{\varepsilon^-}(X;Y) \le C_\varepsilon \le \sup_X \underline{I}_\varepsilon(X;Y).$$
We close this section by proving the formula of $\bar{C}$ for information-stable channels.

Definition 4.8—Channel Block Code: An (n, M) code for the channel $W^n$ with input alphabet $\mathcal{X}$ and output alphabet $\mathcal{Y}$ is a pair of mappings
$$f: \{1, 2, \cdots, M\} \to \mathcal{X}^n \quad\text{and}\quad g: \mathcal{Y}^n \to \{1, 2, \cdots, M\}.$$
Its average error probability is given by
$$P_e^{(n)} = \frac{1}{M}\sum_{m=1}^{M} \sum_{\{y^n:\, g(y^n)\ne m\}} W^n(y^n \mid f(m)).$$

Definition 4.9—Optimistic ε-Achievable Rate: Fix 0 < ε < 1. R ≥ 0 is an optimistic ε-achievable rate if, for every γ > 0, there exists a sequence of (n, M) channel block codes such that
$$\frac{1}{n}\log M > R - \gamma \quad\text{and}\quad P_e^{(n)} \le \varepsilon \quad\text{for infinitely many } n.$$

Definition 4.10—Optimistic ε-Capacity $\bar{C}_\varepsilon$: Fix 0 < ε < 1. The supremum of optimistic ε-achievable rates is called the optimistic ε-capacity, $\bar{C}_\varepsilon$.

Definition 4.11—Optimistic Capacity $\bar{C}$: The optimistic channel capacity $\bar{C}$ is defined as the supremum of the rates that are optimistic ε-achievable for all 0 < ε < 1. It follows immediately from the definition that
$$\bar{C} = \lim_{\varepsilon\downarrow 0}\bar{C}_\varepsilon = \inf_{0<\varepsilon<1}\bar{C}_\varepsilon.$$
Furthermore, note that, if $C = \bar{C}$ and there exists an input distribution $P_{\hat{X}}$ that achieves $\bar{C}$, then $P_{\hat{X}}$ also achieves $C$.

²The authors would like to point out that the expression of $\bar{C}_\varepsilon$ was also separately obtained in [11, Theorem 7].
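As a concrete (and purely illustrative) instance of Definition 4.8, the sketch below computes the average error probability of a tiny (n, M) = (3, 2) code over a memoryless BSC(p) by summing $W^n(y^n\mid f(m))$ over the outputs that decode incorrectly; the code, channel, and decoder are assumptions chosen for the example.

```python
import itertools

p = 0.1                                    # BSC crossover probability (illustrative)
f = {1: (0, 0, 0), 2: (1, 1, 1)}           # encoder f : {1, 2} -> X^3

def W_n(y, x):
    """Product channel W^n(y^n | x^n) for n independent BSC(p) uses."""
    flips = sum(yi != xi for yi, xi in zip(y, x))
    return (p ** flips) * ((1 - p) ** (len(x) - flips))

def g(y):
    """Minimum-distance decoder g : Y^3 -> {1, 2} (also ML for p < 1/2)."""
    return min(f, key=lambda m: sum(yi != xi for yi, xi in zip(y, f[m])))

M = len(f)
P_e = (1.0 / M) * sum(W_n(y, f[m])
                      for m in f
                      for y in itertools.product((0, 1), repeat=3)
                      if g(y) != m)
print(P_e)   # 3*p^2*(1-p) + p^3 = 0.028 for p = 0.1
```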


Theorem 4.3—Optimistic ε-Capacity Formula: Fix 0 < ε < 1. The optimistic ε-capacity $\bar{C}_\varepsilon$ satisfies
$$\sup_X \bar{I}_{\varepsilon^-}(X;Y) \le \bar{C}_\varepsilon \le \sup_X \bar{I}_\varepsilon(X;Y).$$
By the definition of $\bar{C}$, the above immediately implies that
$$\bar{C} = \sup_X \bar{I}_0(X;Y).$$
We next investigate the expression of $\bar{C}$ for information-stable channels. The expression for the capacity of information-stable channels is already known (cf., for example, [13]):
$$C = \liminf_{n\to\infty} \sup_{X^n} \frac{1}{n} I(X^n;Y^n).$$
We prove a dual formula for $\bar{C}$: every information-stable channel W satisfies
$$\bar{C} = \limsup_{n\to\infty} \sup_{X^n} \frac{1}{n} I(X^n;Y^n).$$
Proof: Let $\tilde{X}$ be an input process attesting to the information stability of W [13], and put $C_n \triangleq \sup_{X^n} (1/n) I(X^n;Y^n)$. Fix γ > 0. Then for infinitely many n, $C_n \ge \limsup_{m\to\infty} C_m - \gamma$. For each n, Feinstein's lemma guarantees an (n, M) channel block code with $(1/n)\log M = C_n - 2\gamma$ and error probability
$$P_e^{(n)} \le \Pr\left[\frac{1}{n}\, i_{\tilde{X}^n W^n}(\tilde{X}^n; Y^n) \le C_n - \gamma\right] + e^{-n\gamma}.$$
Since the channel is information-stable, we get that
$$\lim_{n\to\infty} \Pr\left[\frac{1}{n}\, i_{\tilde{X}^n W^n}(\tilde{X}^n; Y^n) \le C_n - \gamma\right] = 0.$$
Hence, for every 0 < ε < 1, there are codes with rate above $\limsup_{m\to\infty} C_m - 3\gamma$ and $P_e^{(n)} \le \varepsilon$ for infinitely many n. By the definition of $\bar{C}$, the above immediately implies that $\bar{C} \ge \limsup_{n\to\infty} C_n - 3\gamma$; since γ is arbitrary, $\bar{C} \ge \limsup_{n\to\infty} C_n$. The reverse inequality follows from $\bar{C} = \sup_X \bar{I}_0(X;Y)$, since $\bar{I}_0(X;Y) \le \limsup_{n\to\infty} (1/n) I(X^n;Y^n)$ for every input process X.

³The strong (or strong converse) capacity $C_{SC}$ is defined [2] as the infimum of the numbers R for which there exists γ > 0 such that all (n, M) codes with $(1/n)\log M > R - \gamma$ satisfy $\liminf_{n\to\infty} P_e^{(n)} = 1$. This definition of $C_{SC}$ implies that for any sequence of (n, M) codes with $\liminf_{n\to\infty}(1/n)\log M > C_{SC}$, $P_e^{(n)} > 1 - \varepsilon$ for every ε > 0 and for n sufficiently large. It is shown in [2] that $C_{SC} = \lim_{\varepsilon\uparrow 1} C_\varepsilon = \sup_X \bar{I}(X;Y)$.

V. EXAMPLES

A. Information-Stable Channels

Example 5.1: Consider an information-stable binary channel for which the per-block capacity $C_n \triangleq \sup_{X^n}(1/n) I(X^n;Y^n)$ is given by
$$C_n = \begin{cases} 1 - h_b(1/8), & \text{for } n \text{ odd}\\ 1 - h_b(1/4), & \text{for } n \text{ even}\end{cases}$$
where
$$h_b(a) \triangleq -a\log_2 a - (1-a)\log_2(1-a)$$
is the binary entropy function. Therefore,
$$C = \liminf_{n\to\infty} C_n = 1 - h_b(1/4)$$
and
$$\bar{C} = \limsup_{n\to\infty} C_n = 1 - h_b(1/8) > C.$$


Example 5.2: Here we use the information-stable channel provided in [13, Sec. III] to show that $\bar{C} > C$. Let $\mathcal{N}$ be the set of all positive integers. Define the set J as
$$J \triangleq \{n \in \mathcal{N}: 2^{2i+1} \le n < 2^{2i+2},\ i = 0,1,2,\cdots\} = \{2,3,8,9,10,11,12,13,14,15,32,33,\cdots,63,128,129,\cdots,255,\cdots\}.$$
Consider the following nonstationary symmetric channel W. At times $n \in J$, $W_n$ is a BSC(0), whereas at times $n \notin J$, $W_n$ is a BSC(1/2). Put $W^n \triangleq W_1 \times W_2 \times \cdots \times W_n$. Here again $C_n$ is achieved by a Bernoulli(1/2) input $X^n$. We then obtain
$$C_n = \frac{1}{n}\sum_{i=1}^{n} I(\hat{X}_i; Y_i) = \frac{1}{n}\bigl[J(n)\cdot(1) + (n - J(n))\cdot(0)\bigr] = \frac{J(n)}{n}$$
where $J(n) \triangleq |J \cap \{1,2,\cdots,n\}|$. It can be shown that
$$\frac{J(n)}{n} = \begin{cases} 1 - \dfrac{2}{3}\cdot\dfrac{2^{\lfloor\log_2 n\rfloor}}{n} + \dfrac{1}{3n}, & \text{for } \lfloor\log_2 n\rfloor \text{ odd}\\[2mm] \dfrac{2}{3}\cdot\dfrac{2^{\lfloor\log_2 n\rfloor}}{n} - \dfrac{2}{3n}, & \text{for } \lfloor\log_2 n\rfloor \text{ even.}\end{cases}$$
Consequently,
$$C = \liminf_{n\to\infty} C_n = 1/3 \quad\text{and}\quad \bar{C} = \limsup_{n\to\infty} C_n = 2/3.$$
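The expression for J(n)/n can also be checked numerically; the sketch below (illustrative) counts membership in J directly and confirms that J(n)/n oscillates between 1/3 and 2/3 for large n.

```python
def in_J(n):
    # n is in J iff floor(log2 n) is odd, i.e. 2^(2i+1) <= n < 2^(2i+2).
    return (n.bit_length() - 1) % 2 == 1

J_count, lo, hi = 0, 1.0, 0.0
for n in range(1, 1 << 20):
    J_count += in_J(n)
    if n >= 1024:                     # ignore small-n transients
        lo, hi = min(lo, J_count / n), max(hi, J_count / n)

print(lo, hi)   # close to liminf J(n)/n = 1/3 and limsup J(n)/n = 2/3
```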

B. Information-Unstable Channels

Example 5.3—The Polya-Contagion Channel: Consider a discrete additive channel with binary input and output alphabets {0, 1} described by
$$Y_i = X_i \oplus Z_i, \qquad i = 1, 2, \cdots$$
where $X_i$, $Y_i$, and $Z_i$ are, respectively, the ith input, ith output, and ith noise, and ⊕ represents modulo-2 addition. Suppose that the input process is independent of the noise process. Also assume that the noise sequence $\{Z_n\}_{n\ge 1}$ is drawn according to the Polya contagion urn scheme [1], [10], as follows: an urn originally contains R red balls and B black balls with R < B; we make successive draws from the urn; after each draw, we return to the urn $1 + \Delta$ balls of the same color as was just drawn ($\Delta > 0$). The noise sequence $\{Z_i\}$ corresponds to the outcomes of the draws from the Polya urn: $Z_i = 1$ if the ith ball drawn is red and $Z_i = 0$, otherwise. Let $\rho \triangleq R/(R+B) < 1/2$ and $\delta \triangleq \Delta/(R+B)$. It is shown in [1] that the noise process $\{Z_i\}$ is stationary and nonergodic; thus the channel is information-unstable. From Lemma 2 and [4, Sec. IV, Pt. I], we obtain
$$1 - \bar{H}_{1-\varepsilon}(Z) \le C_\varepsilon \le 1 - \bar{H}_{(1-\varepsilon)^-}(Z)$$
and
$$1 - \underline{H}_{1-\varepsilon}(Z) \le \bar{C}_\varepsilon \le 1 - \underline{H}_{(1-\varepsilon)^-}(Z).$$
It has been shown [1] that $-(1/n)\log P_{Z^n}(Z^n)$ converges in distribution to the continuous random variable $V \triangleq h_b(U)$, where U is beta-distributed with parameters $(\rho/\delta, (1-\rho)/\delta)$, and $h_b(\cdot)$ is the binary entropy function. Thus
$$\bar{H}_{1-\varepsilon}(Z) = \bar{H}_{(1-\varepsilon)^-}(Z) = \underline{H}_{1-\varepsilon}(Z) = \underline{H}_{(1-\varepsilon)^-}(Z) = F_V^{-1}(1-\varepsilon)$$
where $F_V(a) \triangleq \Pr\{V \le a\}$ is the cumulative distribution function of V, and $F_V^{-1}(\cdot)$ is its inverse [1]. Consequently,
$$C_\varepsilon = \bar{C}_\varepsilon = 1 - F_V^{-1}(1-\varepsilon)$$
and
$$C = \bar{C} = \lim_{\varepsilon\downarrow 0}\bigl[1 - F_V^{-1}(1-\varepsilon)\bigr] = 0.$$
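To visualize the limiting behavior invoked in Example 5.3, the following simulation (written for this rendering, with arbitrary illustrative values of ρ and δ) draws Polya-contagion noise sequences, evaluates the normalized entropy density $-(1/n)\log_2 P_{Z^n}(Z^n)$ exactly from the exchangeable Polya distribution, and compares its empirical quantiles with those of $h_b(U)$, $U \sim \mathrm{Beta}(\rho/\delta, (1-\rho)/\delta)$.

```python
import numpy as np

rng = np.random.default_rng(7)
rho, delta = 0.2, 0.5     # rho = R/(R+B) < 1/2, delta = Delta/(R+B) (illustrative)
n, trials = 2000, 200

def polya_noise(n):
    """One Polya-contagion noise sequence Z^n (Z_i = 1 <=> red ball drawn)."""
    red, total = rho, 1.0
    z = np.empty(n, dtype=int)
    for i in range(n):
        z[i] = rng.random() < red / total
        red += delta * z[i]
        total += delta
    return z

def norm_entropy_density(z):
    """-(1/n) log2 P_{Z^n}(z^n) for the exchangeable Polya distribution."""
    n, k = z.size, int(z.sum())
    log_num = (np.sum(np.log2(rho + delta * np.arange(k))) +
               np.sum(np.log2(1 - rho + delta * np.arange(n - k))))
    log_den = np.sum(np.log2(1 + delta * np.arange(n)))
    return -(log_num - log_den) / n

def h_b(a):
    a = np.clip(a, 1e-12, 1 - 1e-12)
    return -a * np.log2(a) - (1 - a) * np.log2(1 - a)

sim = np.array([norm_entropy_density(polya_noise(n)) for _ in range(trials)])
ref = h_b(rng.beta(rho / delta, (1 - rho) / delta, size=trials))
print(np.quantile(sim, [0.25, 0.5, 0.75]))   # quantiles of -(1/n) log2 P_{Z^n}(Z^n)
print(np.quantile(ref, [0.25, 0.5, 0.75]))   # quantiles of V = h_b(U); roughly agree
```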

Example 5.4: Let $\tilde{W}_1, \tilde{W}_2, \cdots$ consist of the channel in Example 5.2, and let $\hat{W}_1, \hat{W}_2, \cdots$ consist of the channel in Example 5.3. Define a new channel W as follows:
$$W_{2i} = \tilde{W}_i \quad\text{and}\quad W_{2i-1} = \hat{W}_i, \qquad \text{for } i = 1, 2, \cdots.$$
As in the previous examples, the channel is symmetric, and a Bernoulli(1/2) input maximizes the inf/sup-information rates. Therefore, for a Bernoulli(1/2) input X, we have
$$\Pr\left[\frac{1}{n}\log\frac{P_{W^n}(Y^n|X^n)}{P_{Y^n}(Y^n)} \le \theta\right] = \begin{cases} \Pr\left[\dfrac{1}{2i}\left(\log\dfrac{P_{\tilde{W}^i}(Y^i|X^i)}{P_{Y^i}(Y^i)} + \log\dfrac{P_{\hat{W}^i}(Y^i|X^i)}{P_{Y^i}(Y^i)}\right) \le \theta\right], & \text{if } n = 2i\\[2mm] \Pr\left[\dfrac{1}{2i+1}\left(\log\dfrac{P_{\tilde{W}^i}(Y^i|X^i)}{P_{Y^i}(Y^i)} + \log\dfrac{P_{\hat{W}^{i+1}}(Y^{i+1}|X^{i+1})}{P_{Y^{i+1}}(Y^{i+1})}\right) \le \theta\right], & \text{if } n = 2i+1\end{cases}$$
$$= \begin{cases} 1 - \Pr\left[-\dfrac{1}{i}\log P_{Z^i}(Z^i) < 1 - 2\theta + \dfrac{1}{i}J(i)\right], & \text{if } n = 2i\\[2mm] 1 - \Pr\left[-\dfrac{1}{i+1}\log P_{Z^{i+1}}(Z^{i+1}) < 1 - \left(2 - \dfrac{1}{i+1}\right)\theta + \dfrac{1}{i+1}J(i)\right], & \text{if } n = 2i+1.\end{cases}$$
The fact that $-(1/n)\log P_{Z^n}(Z^n)$ converges in distribution to the continuous random variable $V \triangleq h_b(U)$, where U is beta-distributed $(\rho/\delta, (1-\rho)/\delta)$, and the fact that
$$\liminf_{n\to\infty}\frac{1}{n}J(n) = 1/3 \quad\text{and}\quad \limsup_{n\to\infty}\frac{1}{n}J(n) = 2/3$$
imply that
$$\underline{i}_{XW}(\theta) \triangleq \liminf_{n\to\infty}\Pr\left[\frac{1}{n}\log\frac{P_{W^n}(Y^n|X^n)}{P_{Y^n}(Y^n)} \le \theta\right] = 1 - F_V\!\left(\frac{5}{3} - 2\theta\right)$$
and
$$\bar{i}_{XW}(\theta) \triangleq \limsup_{n\to\infty}\Pr\left[\frac{1}{n}\log\frac{P_{W^n}(Y^n|X^n)}{P_{Y^n}(Y^n)} \le \theta\right] = 1 - F_V\!\left(\frac{4}{3} - 2\theta\right).$$
Consequently,
$$\bar{C}_\varepsilon = \frac{5}{6} - \frac{1}{2}F_V^{-1}(1-\varepsilon) \quad\text{and}\quad C_\varepsilon = \frac{2}{3} - \frac{1}{2}F_V^{-1}(1-\varepsilon).$$


Thus
$$C = \lim_{\varepsilon\downarrow 0} C_\varepsilon = \frac{1}{6} \quad\text{and}\quad \bar{C} = \lim_{\varepsilon\downarrow 0} \bar{C}_\varepsilon = \frac{1}{3} > C.$$
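A numerical sketch (illustrative) of the final step: it estimates $F_V^{-1}(1-\varepsilon)$ for $V = h_b(U)$, $U \sim \mathrm{Beta}(\rho/\delta,(1-\rho)/\delta)$ by Monte Carlo and evaluates the two ε-capacity expressions of Example 5.4; the urn parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
rho, delta = 0.2, 0.5                      # illustrative urn parameters

def h_b(a):
    a = np.clip(a, 1e-12, 1 - 1e-12)
    return -a * np.log2(a) - (1 - a) * np.log2(1 - a)

V = h_b(rng.beta(rho / delta, (1 - rho) / delta, size=200000))

for eps in (0.5, 0.1, 0.01, 0.001):
    q = np.quantile(V, 1 - eps)            # Monte Carlo estimate of F_V^{-1}(1 - eps)
    print(eps, 2 / 3 - q / 2, 5 / 6 - q / 2)   # C_eps and C-bar_eps of Example 5.4
# As eps -> 0 the quantile tends to 1, so C -> 1/6 and C-bar -> 1/3.
```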
