
Coding for Linear Operator Channels over Finite Fields

Shenghao Yang
Department of Information Engineering
The Chinese University of Hong Kong, Hong Kong SAR
Email: [email protected]

Jin Meng and En-hui Yang
Department of Electrical and Computer Engineering
University of Waterloo, ON, Canada
Emails: {j4meng, ehyang}@uwaterloo.ca

Abstract: Linear operator channels (LOCs) are motivated by communication through networks employing random linear network coding (RLNC). Following recent information-theoretic results about LOCs, we propose two coding schemes for LOCs and evaluate their performance. These schemes can be used in networks employing RLNC without constraints on the network size or the field size. Our first scheme makes use of rank-metric codes and generalizes the rank-metric approach to subspace coding proposed by Silva et al. Our second scheme applies linear coding. The second scheme can achieve a higher rate than the first, while the first can make use of existing encoding/decoding algorithms of rank-metric codes. Our coding schemes require only the expectation of the rank of the transformation matrix. The second scheme can also be realized ratelessly without any a priori knowledge of channel statistics.

I. INTRODUCTION

Let F be the finite field with q elements. A linear operator channel (LOC) with input X ∈ F^{T×M} and output Y ∈ F^{T×N} is given by

$$Y = XH, \tag{1}$$

where H is the channel transformation matrix distributed over F^{M×N}. Here we focus on non-coherent transmission over LOCs, where the instances of H are unknown to both the transmitter and the receiver. One motivation to study LOCs is random linear network coding (RLNC), a promising technique for transmitting information through communication networks, especially wireless networks [1]–[3]. A communication network employing RLNC is a LOC with H depending on the network topology. In the original RLNC model [1] and many applications [2], [3], H is assumed to have rank M. This assumption is a good approximation when i) M is less than or equal to the maximum flow from the transmitter to the receiver, and ii) the field size for network coding is sufficiently large relative to the number of network nodes [1], [4]. This assumption, however, does not hold for RLNC over small finite fields. For example, in wireless sensor networks, which are characterized by large network size and the limited computing capability of network nodes, RLNC over a small finite field is more attractive.
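As a concrete illustration (ours, not from the paper), the following Python sketch simulates one use of a LOC over GF(2), i.e., q = 2, and estimates E[rk(H)] by Monte Carlo. The i.i.d. uniform model for H is an assumption made only for this demo; in a real network H depends on the topology, as noted above.

```python
import numpy as np

def gf2_rank(A):
    """Rank of a 0/1 matrix over GF(2), via Gaussian elimination."""
    A = A.copy() % 2
    rank, (rows, cols) = 0, A.shape
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if A[r, c]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]   # move the pivot row up
        for r in range(rows):
            if r != rank and A[r, c]:
                A[r] ^= A[rank]               # eliminate column c elsewhere
        rank += 1
    return rank

rng = np.random.default_rng(0)
T, M, N = 8, 4, 4
X = rng.integers(0, 2, size=(T, M))   # channel input over GF(2)
H = rng.integers(0, 2, size=(M, N))   # one instance of the transformation matrix
Y = (X @ H) % 2                       # the LOC of eq. (1): Y = XH over GF(2)

# Estimate E[rk(H)] under the (illustrative) i.i.d. uniform model of H.
ranks = [gf2_rank(rng.integers(0, 2, size=(M, N))) for _ in range(10000)]
print("E[rk(H)] ~", np.mean(ranks))
```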


Yang et al. [5] recently studied LOCs with arbitrary distributions of H, which are suitable for modeling RLNC over small finite fields. They generalized the concept of subspace coding proposed by Kötter and Kschischang [6] and studied the maximum achievable rate of subspace coding for LOCs. Let C̄_SS and C̄ be the normalized maximum achievable rate of subspace coding and the normalized channel capacity of a LOC, respectively. When H changes independently for each input matrix, they obtained that

$$(1 - M/T)\,\mathrm{E}[\mathrm{rk}(H)] + \epsilon(T, q) \le \bar{C}_{SS} \le \bar{C} \le \mathrm{E}[\mathrm{rk}(H)],$$

where E[rk(H)] is the expectation of the rank of H and ε(T, q) > 0 goes to zero as T log₂ q goes to infinity. They also exhibited cases in which the lower bound on C̄_SS is tight.

In this work we study coding schemes for LOCs with arbitrary distributions of H. We focus on the transmission scheme in which all the input matrices X have the formulation

$$X = \begin{bmatrix} I \\ \tilde{X} \end{bmatrix}, \tag{2}$$

where I is an identity matrix. For such a transmission, the received matrix is

$$Y = \begin{bmatrix} I \\ \tilde{X} \end{bmatrix} H = \begin{bmatrix} H \\ \tilde{X}H \end{bmatrix},$$

where H is the instance of H. This kind of transmission is called channel training, since part of the received matrix is the instance of H, and it can be regarded as a special subspace coding scheme. The rank-metric approach for RLNC proposed by Silva et al. [7] is in the class of channel training, and their coding approach includes the RLNC scheme of Ho et al. [1] as a special case.

Our contribution in this paper is summarized as follows. Let C̄_CT be the normalized maximum achievable rate of channel training. We show that C̄_CT = (1 − M/T) E[rk(H)], which gives a good approximation to the lower bound on C̄_SS obtained in [5]. We show that the rank-metric approach [7] can achieve C̄_CT only when H has constant rank. We demonstrate that the throughput of the codes constructed using rank-metric codes can be less than one-fifth of C̄_CT. As far as we know, this is the first evaluation of existing RLNC schemes for H with random rank.

We extend the method of Silva et al. [7] to construct codes for LOCs. The constructed codes are called lifted rank-metric codes. The optimality of lifted rank-metric codes, in terms of achieving C̄_CT, depends on the existence of the maximum-rank-distance (MRD) codes first studied in [8]. Specifically, we show that if T ≫ M, lifted rank-metric codes can approach C̄_CT. Otherwise, since the existence of the required MRD codes is unclear, we do not know whether lifted rank-metric codes can achieve C̄_CT. We further propose a class of codes called lifted linear matrix codes, which can achieve C̄_CT for all T ≥ M. We prove that the decoding error probability of lifted linear matrix codes decreases exponentially with the code length. Both lifted rank-metric codes and lifted linear matrix codes are universal in the sense that i) only E[rk(H)] is required to design the codes and ii) a code has similar performance for all LOCs with the same E[rk(H)]. Moreover, lifted linear matrix codes can be realized ratelessly without any a priori channel knowledge.


II. CHANNEL TRAINING

A. Maximum Achievable Rate

The LOC given in (1) is denoted by LOC(H, T). The dimension of H, M × N, is also called the dimension of the LOC. We assume that X and H are independent, which is consistent with non-coherent transmission. H remains constant within one input matrix, which can equivalently be regarded as T input vectors, and changes independently from one input matrix to the next.

A block code for LOC(H, T) is a subset of (F^{T×M})^n, the nth Cartesian power of F^{T×M}. Here n is the length of the code. Since the components of codewords are matrices, such a code is called a matrix code. The channel capacity of a LOC can be approached using a sequence of matrix codes with n → ∞.

In this work, we focus on the channel training scheme. A matrix code is called a channel training code if all the codewords X have the formulation in (2). As required by the channel training scheme, we consider T ≥ M in the following discussion. Let C̄_CT be the maximum achievable rate of channel training, normalized by T log₂ q.

Proposition 1: For LOC(H, T) with dimension M × N and T ≥ M,

$$\bar{C}_{CT} = (1 - M/T)\,\mathrm{E}[\mathrm{rk}(H)].$$

Proof: Let X̃ be a random matrix over F^{(T−M)×M} and let Ỹ = X̃H. If the input of LOC(H, T) is

$$X = \begin{bmatrix} I \\ \tilde{X} \end{bmatrix},$$

the output is

$$Y = \begin{bmatrix} I \\ \tilde{X} \end{bmatrix} H = \begin{bmatrix} H \\ \tilde{Y} \end{bmatrix}.$$

Since the input is constrained to the form (2), p_X is determined by p_X̃, and thus

$$\bar{C}_{CT} = \max_{p_X} I(X; Y)/(T \log_2 q) = \max_{p_{\tilde{X}}} I(\tilde{X}; \tilde{Y}, H)/(T \log_2 q).$$

Since X̃ and H are independent, we have

$$I(\tilde{X}; \tilde{Y}, H) = I(\tilde{X}; \tilde{Y} \mid H) = H(\tilde{Y} \mid H) \le \mathrm{E}[\mathrm{rk}(H)]\,(T - M) \log_2 q,$$

where the equality is achieved by X̃ with uniformly and independently distributed components. Dividing by T log₂ q proves the proposition.

B. Formulation of Channel Training Codes

A matrix code C^{(n)} ⊂ F^{(T−M)×nM} induces a channel training code for LOC(H, T) with dimension M × N as follows. For X̃^{(n)} ∈ C^{(n)}, we write

$$\tilde{X}^{(n)} = \begin{bmatrix} \tilde{X}_1 & \tilde{X}_2 & \cdots & \tilde{X}_n \end{bmatrix}, \tag{3}$$

where X̃_i ∈ F^{(T−M)×M}. Define the M-lifting of X̃^{(n)}, which extends the definition of lifting in [7], as

$$L_M(\tilde{X}^{(n)}) = \left( \begin{bmatrix} I_M \\ \tilde{X}_1 \end{bmatrix}, \begin{bmatrix} I_M \\ \tilde{X}_2 \end{bmatrix}, \cdots, \begin{bmatrix} I_M \\ \tilde{X}_n \end{bmatrix} \right),$$


where I_M is an M × M identity matrix. We see that L_M(X̃^{(n)}) ∈ (F^{T×M})^n. Define the M-lifting of C^{(n)} as

$$L_M(C^{(n)}) = \{L_M(\tilde{X}^{(n)}) : \tilde{X}^{(n)} \in C^{(n)}\}. \tag{4}$$

We call L_M(C^{(n)}) the lifted matrix code of C^{(n)}. When the context is clear, we write L(X̃^{(n)}) for L_M(X̃^{(n)}) and L(C^{(n)}) for L_M(C^{(n)}). The rate R^{(n)} of L(C^{(n)}) is

$$R^{(n)} = \frac{\log_2 |L(C^{(n)})|}{nT \log_2 q} = \frac{\log_2 |C^{(n)}|}{nT \log_2 q}.$$

Suppose that the transmitted codeword is L(X̃^{(n)}). Each use of LOC(H, T) can transmit one component of L(X̃^{(n)}). The ith output matrix of LOC(H, T) is

$$Y_i = \begin{bmatrix} I_M \\ \tilde{X}_i \end{bmatrix} H_i = \begin{bmatrix} H_i \\ \tilde{Y}_i \end{bmatrix}, \tag{5}$$

where H_i is the ith instance of H and Ỹ_i = X̃_i H_i. Let

$$H^{(n)} = \begin{bmatrix} H_1 & & & \\ & H_2 & & \\ & & \ddots & \\ & & & H_n \end{bmatrix},$$

and

$$\tilde{Y}^{(n)} = \begin{bmatrix} \tilde{Y}_1 & \tilde{Y}_2 & \cdots & \tilde{Y}_n \end{bmatrix}.$$

We obtain the decoding equation of the lifted matrix code L(C^{(n)}) as

$$\tilde{Y}^{(n)} = \tilde{X}^{(n)} H^{(n)}. \tag{6}$$
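The following Python sketch (our illustration, again over GF(2) with an i.i.d. uniform H) walks through one channel use of this scheme: the receiver reads the instance of H off the top M rows of Y_i and, in the easy case where that instance happens to have full rank M, recovers X̃_i from Ỹ_i = X̃_i H_i by Gauss-Jordan elimination. Decoding over a block of n uses is exactly the linear system (6).

```python
import numpy as np

def gf2_solve(A, B):
    """Solve A X = B over GF(2); return None if A lacks full column rank."""
    n_rows, m = A.shape
    aug = np.concatenate([A % 2, B % 2], axis=1)
    row = 0
    for c in range(m):
        pivot = next((r for r in range(row, n_rows) if aug[r, c]), None)
        if pivot is None:
            return None                    # no unique solution; demo resamples H
        aug[[row, pivot]] = aug[[pivot, row]]
        for r in range(n_rows):
            if r != row and aug[r, c]:
                aug[r] ^= aug[row]
        row += 1
    return aug[:m, m:]                     # first m rows now hold the solution

rng = np.random.default_rng(1)
T, M, N = 6, 3, 3
X_tilde = rng.integers(0, 2, size=(T - M, M))              # payload part
X = np.vstack([np.eye(M, dtype=int), X_tilde])             # lifted input, as in (2)

X_rec = None
while X_rec is None:                       # resample until rk(H) = M (easy case)
    H = rng.integers(0, 2, size=(M, N))
    Y = (X @ H) % 2                        # one channel use, eq. (5)
    H_hat, Y_tilde = Y[:M], Y[M:]          # channel training: top M rows reveal H
    sol = gf2_solve(H_hat.T, Y_tilde.T)    # solve X~ H = Y~ via H^T X~^T = Y~^T
    if sol is not None:
        X_rec = sol.T
assert np.array_equal(X_rec % 2, X_tilde)
```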

The decoding of Ỹ^{(n)} can use the knowledge of H^{(n)}.

C. Channel Training and Subspace Coding

Let Pj(M, F^T) be the set of subspaces of F^T with dimension less than or equal to M. A subspace code is a subset of Pj(M, F^T)^n, the nth Cartesian power of Pj(M, F^T). Here n is called the block length of the subspace code. Subspace codes with block length one were first proposed by Kötter and Kschischang for RLNC [6].

There exists a one-to-one mapping from channel training codes to subspace codes. For a matrix X, let ⟨X⟩ be its column space, the subspace spanned by the column vectors of X. Define π : L_M(F^{(T−M)×M}) → Pj(M, F^T) as π(X) = ⟨X⟩. We can check that π is a one-to-one mapping. Given a channel training code, we can use π to map all the matrices in the codewords to subspaces and obtain a subspace code. As shown in [5], the normalized maximum achievable rate of subspace codes satisfies

$$\bar{C}_{SS} \ge (1 - M/T)\,\mathrm{E}[\mathrm{rk}(H)] + \epsilon(T, q) = \bar{C}_{CT} + \epsilon(T, q),$$

where 0 < ε(T, q) < 1.8/(T log₂ q). Thus, using channel training codes does not lose much rate compared with subspace codes when T is large.

III. RANK-METRIC CODES FOR LOCS

In this section, we extend the rank-metric approach of Silva et al. [7] to construct matrix codes for LOCs.


A. Rank-Metric Codes

Define the rank distance between X, X′ ∈ F^{t×m} as d(X, X′) = rk(X − X′). A rank-metric code is a unit-length matrix code equipped with the rank distance [8]. The minimum distance of a rank-metric code C ⊂ F^{t×m} is

$$D(C) = \min_{X \ne X' \in C} d(X, X').$$

When t ≥ m, we have

$$\frac{\log_2 |C|}{t \log_2 q} \le m - D(C) + 1, \tag{7}$$

which is called the Singleton bound for rank-metric codes [8] (see also [7] and the references therein). Codes that achieve this bound are called maximum-rank-distance (MRD) codes. Gabidulin described a class of MRD codes for t ≥ m, which are analogs of generalized Reed-Solomon codes [8].

Suppose the transmitted codeword is X_0 ∈ C and the received matrix is Y = X_0 H. If H is known at the receiver, we can decode Y using the minimum distance decoder defined as

$$\hat{X} = \arg\min_{X \in C} d(Y, XH). \tag{8}$$
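To make the decoder (8) concrete, here is a brute-force sketch in Python over GF(2) (ours, not from the paper). The codebook is just a few random matrices rather than an MRD construction, and the exhaustive search only illustrates the definitions; the proposition below states exactly when this decoder is guaranteed to succeed.

```python
import numpy as np
from itertools import combinations

def gf2_rank(A):
    """Rank of an integer matrix over GF(2)."""
    A = A.copy() % 2
    rank, (rows, cols) = 0, A.shape
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if A[r, c]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]
        for r in range(rows):
            if r != rank and A[r, c]:
                A[r] ^= A[rank]
        rank += 1
    return rank

rng = np.random.default_rng(2)
t, m, n_cols = 4, 3, 3                    # codewords are t x m, H is m x n_cols
code = [rng.integers(0, 2, size=(t, m)) for _ in range(4)]   # toy codebook, not MRD
D = min(gf2_rank(A - B) for A, B in combinations(code, 2))   # minimum rank distance

X0 = code[0]
H = rng.integers(0, 2, size=(m, n_cols))
Y = (X0 @ H) % 2
# Minimum distance decoding, eq. (8): closest codeword in rank distance after H.
X_hat = min(code, key=lambda X: gf2_rank(Y - ((X @ H) % 2)))
print("D(C) =", D, "; decoded correctly:", np.array_equal(X_hat, X0))
```

Decoding can fail here when rk(H) is too small relative to D(C); Prop. 2 below makes that trade-off precise.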

Proposition 2: The minimum distance decoder is guaranteed to return X̂ = X_0 for all H with rk(H) ≥ r if and only if D(C) ≥ m − r + 1, where 0 < r ≤ m.

Remark: Silva et al. proved only the sufficient condition in Prop. 2, in a setting with additive errors. In fact, the necessary condition also holds when additive errors are not considered, as in [6], [7].

Proof: We first prove sufficiency. Assume D(C) ≥ m − r + 1 and rk(H) ≥ r. We know d(Y, X_0 H) = 0. Suppose that there exists a different codeword X_1 ∈ C with d(Y, X_1 H) = 0. Then (X_0 − X_1)H = 0. Using the rank-nullity theorem of linear algebra,

$$d(X_0, X_1) = \mathrm{rk}(X_0 - X_1) \le m - \mathrm{rk}(H) \le m - r,$$

which is a contradiction to D(C) ≥ m − r + 1.

Now we prove necessity. Assume D(C) ≤ m − r. There must exist X_1, X_2 ∈ C such that d(X_1, X_2) = rk(X_1 − X_2) ≤ m − r. Let B = {h ∈ F^{m×1} : (X_1 − X_2)h = 0}. We know dim(B) = m − rk(X_1 − X_2) ≥ r. By juxtaposing basis vectors of B, we can obtain a matrix H with rk(H) ≥ r such that (X_1 − X_2)H = 0. So if the transformation matrix is this H, the decoder cannot always output the correct codeword.

B. Lifted Rank-Metric Codes

Consider LOC(H, T) with dimension M × N. The lifted matrix code L(C^{(n)}), where C^{(n)} ⊂ F^{(T−M)×nM} is a rank-metric code, is also called a lifted rank-metric code. By the Singleton bound of rank-metric codes in (7),

$$\frac{\log_2 |C^{(n)}|}{(T - M) \log_2 q} \le nM - D(C^{(n)}) + 1 \triangleq \bar{D}(C^{(n)}).$$


Thus the rate of L_M(C^{(n)}) satisfies

$$R^{(n)} \le \frac{\bar{D}(C^{(n)})(T - M)\log_2 q}{nT \log_2 q} = (1 - M/T)\,\bar{D}(C^{(n)})/n, \tag{9}$$

where the equality is achieved by MRD codes.

Suppose that the transmitted codeword is L(X̃_0^{(n)}). By the decoding equation in (6), we can decode Ỹ^{(n)} using the minimum distance decoder defined in (8). By Prop. 2, the minimum distance decoder is guaranteed to return X̂^{(n)} = X̃_0^{(n)} for all H^{(n)} with rk(H^{(n)}) ≥ D̄(C^{(n)}).

C. Throughput of Lifted Rank-Metric Codes

Let

$$\boldsymbol{H}^{(n)} = \begin{bmatrix} H_1 & & & \\ & H_2 & & \\ & & \ddots & \\ & & & H_n \end{bmatrix}, \tag{10}$$

in which H_i, i = 1, · · · , n, are independent and follow the same distribution as H. By our analysis above, a receiver using the minimum distance decoder can judge whether its decoding is guaranteed to be correct by checking rk(H^{(n)}), where H^{(n)} is the realized instance of the random matrix $\boldsymbol{H}^{(n)}$ in (10): if rk(H^{(n)}) ≥ D̄(C^{(n)}), the decoding is guaranteed to be correct; otherwise, correct decoding cannot be guaranteed. Define the throughput of L(C^{(n)}) as

$$T_{\mathrm{MDD}}(C^{(n)}) \triangleq R^{(n)} \Pr\{\mathrm{rk}(\boldsymbol{H}^{(n)}) \ge \bar{D}(C^{(n)})\},$$

where MDD stands for minimum distance decoder. Define

$$\rho^{(n)} = \frac{\max_{C^{(n)} \subset F^{(T-M)\times nM}} T_{\mathrm{MDD}}(C^{(n)})}{(1 - M/T)\,\mathrm{E}[\mathrm{rk}(H)]}.$$

ρ^{(n)} is a parameter that characterizes the performance of lifted rank-metric codes. Let N∗ = min{M, N}, the maximum possible rank of H.

Lemma 3: For any positive integer n,

$$\rho^{(n)} \le \frac{\max_{r \le nN^*} r \Pr\{\mathrm{rk}(\boldsymbol{H}^{(n)}) \ge r\}}{\mathrm{E}[\mathrm{rk}(\boldsymbol{H}^{(n)})]} \triangleq \rho(\mathrm{rk}(\boldsymbol{H}^{(n)})),$$

where the equality holds for n ≤ T/M − 1.

Proof: By (9),

$$T_{\mathrm{MDD}}(C^{(n)}) \le \left(1 - \frac{M}{T}\right) \frac{\bar{D}(C^{(n)})}{n} \Pr\left\{\mathrm{rk}(\boldsymbol{H}^{(n)}) \ge \bar{D}(C^{(n)})\right\},$$

where the equality holds for MRD codes. Thus

$$\rho^{(n)} = \frac{\max_{r \le nN^*} \max_{C^{(n)} \subset F^{(T-M)\times nM}:\, \bar{D}(C^{(n)}) = r} T_{\mathrm{MDD}}(C^{(n)})}{(1 - M/T)\,\mathrm{E}[\mathrm{rk}(H)]} \le \frac{\max_{r \le nN^*} r \Pr\{\mathrm{rk}(\boldsymbol{H}^{(n)}) \ge r\}}{n\,\mathrm{E}[\mathrm{rk}(H)]}. \tag{11}$$

TABLE I
THE VALUES OF ρ_min(c, 6)

c             1       2       3       4       5       6
ρ_min(c, 6)   0.408   0.408   0.460   0.526   0.649   1.0

We know that when T − M ≥ nM, for any 0 < r ≤ nN∗ an MRD code C^{(n)} with D̄(C^{(n)}) = r can be constructed using Gabidulin codes [8]. Thus, the equality in (11) holds when n ≤ T/M − 1.

Lemma 4: i) For any positive integer n, $\rho(\mathrm{rk}(\boldsymbol{H}^{(n)})) \le 1$, where the equality holds if and only if H has a constant rank. ii) $\lim_{n\to\infty} \rho(\mathrm{rk}(\boldsymbol{H}^{(n)})) = 1$.

Proof: i) can be verified from the definitions, and ii) can be proved using the weak law of large numbers.

Lemmas 3 and 4 tell us two things about lifted rank-metric codes. First, when H has constant rank or T ≫ M, lifted rank-metric codes can achieve C̄_CT. Second, if there exist MRD codes C^{(n)} ⊂ F^{(T−M)×nM} for large n, lifted rank-metric codes can approach C̄_CT.

D. Performance of Unit-Length Lifted Rank-Metric Codes

Silva et al. [7] first used unit-length lifted rank-metric codes to construct subspace codes, and their codes generalize the widely used coding scheme for random linear network coding proposed by Ho et al. [1]. Here we evaluate the performance of unit-length lifted rank-metric codes for LOCs. Our evaluation also reflects the performance of such codes for random linear network coding. For N∗ > 0 and 0 < c ≤ N∗ define

$$\rho_{\min}(c, N^*) = \min_{p_{\mathrm{rk}(H)}:\, \mathrm{E}[\mathrm{rk}(H)] = c,\, \mathrm{rk}(H) \le N^*} \rho(\mathrm{rk}(H)).$$
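This minimization is a linear program: ρ(rk(H)) is the maximum of finitely many functions that are linear in the distribution p_rk(H), so one auxiliary variable linearizes the objective. The sketch below is our illustration of this (the helper name rho_min is ours); it follows the linear-programming route mentioned in the next paragraph.

```python
import numpy as np
from scipy.optimize import linprog

def rho_min(c, N_star):
    """min over rank distributions with E[rk(H)] = c of max_r r*Pr{rk>=r} / c."""
    nv = N_star + 2                          # variables: p_0..p_{N*} and t
    cost = np.zeros(nv); cost[-1] = 1.0      # minimize the auxiliary variable t
    A_ub, b_ub = [], []
    for r in range(1, N_star + 1):           # enforce r*Pr{rk >= r} <= c*t for all r
        row = np.zeros(nv)
        row[r:N_star + 1] = r
        row[-1] = -c
        A_ub.append(row); b_ub.append(0.0)
    A_eq = [np.r_[np.ones(N_star + 1), 0.0],       # sum_i p_i = 1
            np.r_[np.arange(N_star + 1), 0.0]]     # sum_i i*p_i = c
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0, c],
                  bounds=[(0, None)] * nv)
    return res.fun

print(rho_min(3, 6))   # expected to be close to the 0.460 entry of Table I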

There exists a rank distribution of H for which ρ_min(c, N∗) upper bounds the performance of unit-length lifted rank-metric codes. Linear programming algorithms can be applied to find ρ_min(c, N∗). In Table I, we show the values of ρ_min(c, 6) for c = 1, · · · , 6. We see that ρ_min(6, 6) = 1, which is the case in which the channel has a constant rank. For c < 6, ρ_min(c, 6) is less than 1. As shown in Fig. 1, the value of ρ_min(3, N∗) decreases with N∗; ρ_min(3, 200) is even less than one-fifth, which means that unit-length lifted rank-metric codes can achieve less than one-fifth of C̄_CT.

Fig. 1. The value of ρ_min(3, N∗) for N∗ = 3, 4, · · · , 200.

IV. LINEAR MATRIX CODES FOR LOCS

In this section, we propose another coding scheme, which can achieve C̄_CT for all T ≥ M.

A. Linear Matrix Codes

Consider LOC(H, T) with dimension M × N. For any positive real number s ≤ N, let G^{(n)} be a ⌊ns⌋ × nM matrix, called the generator matrix. The matrix code generated by G^{(n)} is

$$G^{(n)}_{T-M} = \{BG^{(n)} : B \in F^{(T-M)\times \lfloor ns \rfloor}\}.$$
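A minimal Python sketch of the encoder (our illustration over GF(2); all dimensions are arbitrary demo choices): draw a generator matrix, multiply the message matrix into it, and M-lift the result block by block.

```python
import numpy as np

rng = np.random.default_rng(4)
q, T, M, n = 2, 6, 2, 3
s = 1.5                                    # design rate parameter, s <= N
k = int(np.floor(n * s))                   # floor(ns) rows of the generator matrix
G = rng.integers(0, 2, size=(k, n * M))    # purely random generator matrix G^(n)
B = rng.integers(0, 2, size=(T - M, k))    # message matrix
codeword = (B @ G) % 2                     # an element of the code {B G^(n)}
# M-lifting: split into n blocks and stack an identity on top of each, as in (2).
blocks = [np.vstack([np.eye(M, dtype=int), codeword[:, i*M:(i+1)*M]])
          for i in range(n)]
```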



The code for LOC(H, T) is the lifted matrix code L(G^{(n)}_{T−M}), called a lifted linear matrix code. The rate of L(G^{(n)}_{T−M}) is

$$R^{(n)} = \frac{\log_2 |F^{(T-M)\times \lfloor ns \rfloor}|}{nT \log_2 q} = (1 - M/T)\lfloor ns \rfloor / n.$$

When n → ∞, R^{(n)} → (1 − M/T)s.

Suppose that the transmitted codeword is L(B_0 G^{(n)}). The received matrix is given by (5). The decoding equation in (6) now becomes

$$\tilde{Y}^{(n)} = B_0 G^{(n)} H^{(n)}. \tag{12}$$
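Decoding amounts to solving the linear system (12) for B_0 once the receiver has read the instances H_i off the liftings. A minimal GF(2) sketch (ours; dimensions arbitrary, and success requires G^{(n)}H^{(n)} to have full row rank, as discussed next):

```python
import numpy as np

def gf2_solve(A, B):
    """Solve A X = B over GF(2); None signals rank deficiency (decoding failure)."""
    n_rows, m = A.shape
    aug = np.concatenate([A % 2, B % 2], axis=1)
    row = 0
    for c in range(m):
        pivot = next((r for r in range(row, n_rows) if aug[r, c]), None)
        if pivot is None:
            return None
        aug[[row, pivot]] = aug[[pivot, row]]
        for r in range(n_rows):
            if r != row and aug[r, c]:
                aug[r] ^= aug[row]
        row += 1
    return aug[:m, m:]

rng = np.random.default_rng(5)
T, M, N, n, k = 6, 2, 2, 3, 4
G = rng.integers(0, 2, size=(k, n * M))
B0 = rng.integers(0, 2, size=(T - M, k))
H_blocks = [rng.integers(0, 2, size=(M, N)) for _ in range(n)]
Hn = np.zeros((n * M, n * N), dtype=int)            # block-diagonal H^(n), eq. (10)
for i, Hi in enumerate(H_blocks):
    Hn[i*M:(i+1)*M, i*N:(i+1)*N] = Hi               # receiver learns each Hi via lifting
Y_tilde = (B0 @ G @ Hn) % 2                          # right side of eq. (12)
sol = gf2_solve(((G @ Hn) % 2).T, Y_tilde.T)         # unique iff rk(G H^(n)) = k
print("decoded OK:", sol is not None and np.array_equal(sol.T % 2, B0))
```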

Since the receiver knows H^{(n)} and G^{(n)}, the information B_0 can be uniquely determined if and only if G^{(n)} H^{(n)} has full rank. Thus, the decoding error probability P_e^{(n)} of decoding using (12) satisfies

$$P_e^{(n)} \le \Pr\{\mathrm{rk}(G^{(n)} \boldsymbol{H}^{(n)}) < \lfloor ns \rfloor\}.$$

B. Performance of Linear Matrix Codes

Lemma 5 (Chernoff Bound): Let τ_i, i = 1, · · · , n, be independent random variables with the same distribution as τ ∈ {0, 1, · · · , m}. Then

$$\Pr\left\{\sum_i \tau_i < n\alpha\right\} \le \left(\min_{t>0} e^{t\alpha}\, \mathrm{E}[e^{-t\tau}]\right)^n.$$

Moreover, if α < E[τ], then $\min_{t>0} e^{t\alpha}\, \mathrm{E}[e^{-t\tau}] < 1$.

Proof: For any t > 0,

$$\Pr\left\{\sum_i \tau_i < n\alpha\right\} = \Pr\left\{e^{-t\sum_i \tau_i} > e^{-tn\alpha}\right\} \le e^{tn\alpha}\, \mathrm{E}\left[e^{-t\sum_i \tau_i}\right] \tag{13}$$

$$= e^{tn\alpha} \prod_i \mathrm{E}\left[e^{-t\tau_i}\right] = \left(e^{t\alpha}\, \mathrm{E}[e^{-t\tau}]\right)^n, \tag{14}$$

where (13) follows from Markov's inequality and (14) follows from independence. Now assume α < E[τ]. Let f(t) = e^{tα} E[e^{−tτ}]. We know that f(t) is a continuous function for t ≥ 0, f(0) = 1 and f′(0) = α − E[τ] < 0. Thus, there exists t₀ > 0 such that f(t₀) < 1.

A random matrix is said to be purely random if all its components are uniformly and independently distributed.

Lemma 6: Suppose that G^{(n)} is a ⌊ns⌋ × nM purely random matrix independent of $\boldsymbol{H}^{(n)}$. For any s and ε such that 0 < s < s + ε < E[rk(H)], there exists g(s + ε) > 1 such that

$$\Pr\{\mathrm{rk}(G^{(n)} \boldsymbol{H}^{(n)}) < \lfloor ns \rfloor\} < \frac{q^{-\lfloor n\epsilon \rfloor}}{q-1} + g(s+\epsilon)^{-n}.$$

Proof: Let F^{(n)} = G^{(n)}$\boldsymbol{H}^{(n)}$ and let a_n(i) = Pr{rk(F^{(n)}) = ⌊ns⌋ | rk($\boldsymbol{H}^{(n)}$) = i}. Conditioned on rk($\boldsymbol{H}^{(n)}$) = i, the rows of F^{(n)} are uniformly and independently distributed over the i-dimensional row space of $\boldsymbol{H}^{(n)}$, so for i ≥ ⌊n(s + ε)⌋,

$$a_n(i) = \prod_{k=i-\lfloor ns \rfloor+1}^{i} (1 - q^{-k}) \ge \prod_{k=\lfloor n(s+\epsilon)\rfloor - \lfloor ns \rfloor + 1}^{\infty} (1 - q^{-k}) \ge \prod_{k=\lfloor n\epsilon \rfloor + 1}^{\infty} (1 - q^{-k}) \ge 1 - \sum_{k=\lfloor n\epsilon \rfloor + 1}^{\infty} q^{-k} = 1 - \frac{q^{-\lfloor n\epsilon \rfloor}}{q-1}.$$

Moreover, using the Chernoff bound in Lemma 5,

$$\Pr\{\mathrm{rk}(\boldsymbol{H}^{(n)}) < \lfloor n(s+\epsilon) \rfloor\} \le \Pr\{\mathrm{rk}(\boldsymbol{H}^{(n)}) < n(s+\epsilon)\} \le \left(\min_{t>0} e^{t(s+\epsilon)}\, \mathrm{E}[e^{-t\,\mathrm{rk}(H)}]\right)^n.$$

Let $g(s+\epsilon) = 1/\min_{t>0} e^{t(s+\epsilon)}\, \mathrm{E}[e^{-t\,\mathrm{rk}(H)}]$. Since s + ε < E[rk(H)], g(s + ε) > 1. Therefore,

$$\Pr\{\mathrm{rk}(F^{(n)}) = \lfloor ns \rfloor\} \ge \sum_{i \ge \lfloor n(s+\epsilon) \rfloor} a_n(i)\, p_{\mathrm{rk}(\boldsymbol{H}^{(n)})}(i) > \left(1 - \frac{q^{-\lfloor n\epsilon \rfloor}}{q-1}\right) \Pr\{\mathrm{rk}(\boldsymbol{H}^{(n)}) \ge \lfloor n(s+\epsilon) \rfloor\}$$

$$\ge \left(1 - \frac{q^{-\lfloor n\epsilon \rfloor}}{q-1}\right)\left(1 - g(s+\epsilon)^{-n}\right) > 1 - \frac{q^{-\lfloor n\epsilon \rfloor}}{q-1} - g(s+\epsilon)^{-n}.$$

The proof is completed.

Lemma 7: Let 0 ≤ b_i ≤ 1, i = 1, · · · , n, be a sequence of real numbers. If $\sum_{i=1}^{n} b_i / n \le \epsilon/2$ for some ε > 0, then more than half of the numbers in the sequence have values at most ε.

Proof: Let A = {b_i : b_i ≤ ε}. If |A| ≤ n/2, then

$$\sum_{i=1}^{n} b_i = \sum_{i \in A} b_i + \sum_{i \notin A} b_i > (n - |A|)\epsilon \ge n\epsilon/2,$$

a contradiction to $\sum_{i=1}^{n} b_i / n \le \epsilon/2$. Thus, |A| > n/2.

Theorem 8: Consider linear matrix codes for LOC(H, T) with dimension M × N, and (s, ε) satisfying 0 < s < s + ε < E[rk(H)]. More than half of the matrices G^{(n)} ∈ F^{⌊ns⌋×nM}, when used as the generator matrix, give

$$P_e^{(n)} < 2\left(\frac{q^{-\lfloor n\epsilon \rfloor}}{q-1} + g(s+\epsilon)^{-n}\right),$$

where g(s + ε) > 1.

Proof: By Lemma 6,

$$\sum_{G^{(n)} \in F^{\lfloor ns \rfloor \times nM}} \Pr\{\mathrm{rk}(G^{(n)} \boldsymbol{H}^{(n)}) < \lfloor ns \rfloor\}\, q^{-n\lfloor ns \rfloor M} = \sum_{G^{(n)} \in F^{\lfloor ns \rfloor \times nM}} \Pr\{\mathrm{rk}(G^{(n)} \boldsymbol{H}^{(n)}) < \lfloor ns \rfloor\}\, p_{\boldsymbol{G}^{(n)}}(G^{(n)})$$

$$= \Pr\{\mathrm{rk}(\boldsymbol{G}^{(n)} \boldsymbol{H}^{(n)}) < \lfloor ns \rfloor\} \le \frac{q^{-\lfloor n\epsilon \rfloor}}{q-1} + g(s+\epsilon)^{-n},$$

where $\boldsymbol{G}^{(n)}$ is a purely random matrix. The theorem is proved by applying Lemma 7 to the numbers $b_{G^{(n)}} = \Pr\{\mathrm{rk}(G^{(n)} \boldsymbol{H}^{(n)}) < \lfloor ns \rfloor\}$, with ε in Lemma 7 taken to be twice the right-hand side above.

Our analysis in the last two subsections shows that for any R < (1 − M/T) E[rk(H)] = C̄_CT, there exists a sequence of lifted linear matrix codes with rate R^{(n)} → R and P_e^{(n)} → 0 as n → ∞. Moreover, P_e^{(n)} decreases exponentially as n increases.

C. Rateless Coding

Our coding schemes, both the lifted rank-metric codes and the lifted linear matrix codes, require only E[rk(H)]. Here we show that the lifted linear matrix codes can be realized ratelessly, without the knowledge of E[rk(H)], if there exists one-bit feedback from the receiver to the transmitter.

Suppose that we have a sequence of k × M matrices D_i, i = 1, 2, · · · , known by both the transmitter and the receiver. Here k is a design parameter. Write

$$D^{(n)} = \begin{bmatrix} D_1 & D_2 & \cdots & D_n \end{bmatrix}.$$

The transmitter forms its message into a (T − M) × k message matrix B, and it keeps transmitting L(BD_i), i = 1, 2, · · · , until it receives a feedback from the receiver. The ith output of the channel is given in (5). After collecting the ith output, the receiver checks whether D^{(i)} H^{(i)} has rank k. If D^{(i)} H^{(i)} has rank k, the receiver sends a feedback to the transmitter and decodes the message matrix B by solving the equation Ỹ^{(i)} = BD^{(i)} H^{(i)}. After receiving the feedback, the transmitter can transmit another message matrix. This rateless realization of lifted linear matrix codes can be found in applications of RLNC in wireless networks [2], [3]. Our work partially justifies the optimality of their methods.

V. CONCLUDING REMARKS

We evaluate the performance of the existing codes for RLNC based on channel training and show that they cannot achieve the maximum possible rate of channel training in general. We propose two coding schemes for LOCs, lifted rank-metric codes and lifted linear matrix codes, which can achieve higher rates than the existing codes for RLNC. Lifted linear matrix codes can achieve the maximum possible rate of channel training, and they can also be realized ratelessly without knowledge of channel statistics. Lifted rank-metric codes have the advantage that we can make use of existing encoding and decoding algorithms of rank-metric codes.

REFERENCES

[1] T. Ho, M. Médard, R. Koetter, D. R. Karger, M. Effros, J. Shi, and B. Leong, "A random linear network coding approach to multicast," IEEE Trans. Inform. Theory, vol. 52, no. 10, pp. 4413–4430, Oct. 2006.
[2] S. Chachulski, M. Jennings, S. Katti, and D. Katabi, "Trading structure for randomness in wireless opportunistic routing," in Proc. ACM SIGCOMM, 2007.
[3] S. Katti, D. Katabi, H. Balakrishnan, and M. Médard, "Symbol-level network coding for wireless mesh networks," in Proc. ACM SIGCOMM, 2008.
[4] H. Balli, X. Yan, and Z. Zhang, "Error correction capability of random network error correction codes," in Proc. IEEE ISIT'07, Jun. 2007.
[5] S. Yang, S.-W. Ho, J. Meng, and E.-H. Yang, "Optimality of subspace coding for linear operator channels over finite fields," in Proc. IEEE Information Theory Workshop 2010, Cairo, Egypt, 2010.
[6] R. Koetter and F. R. Kschischang, "Coding for errors and erasures in random network coding," IEEE Trans. Inform. Theory, vol. 54, no. 8, pp. 3579–3591, Aug. 2008.
[7] D. Silva, F. R. Kschischang, and R. Koetter, "A rank-metric approach to error control in random network coding," IEEE Trans. Inform. Theory, vol. 54, no. 9, pp. 3951–3967, Sept. 2008.
[8] E. M. Gabidulin, "Theory of codes with maximum rank distance," Probl. Inform. Transm., vol. 21, no. 1, pp. 1–12, 1985.