General error locator polynomials for binary cyclic codes with t ≤ 2 and n < 63

April 22, 2005

Teo Mora ([email protected]), Department of Mathematics, University of Genoa, Italy.
Emmanuela Orsini ([email protected]), Department of Mathematics, University of Milan, Italy.
Massimiliano Sala ([email protected]), Boole Centre for Research in Informatics, UCC Cork, Ireland.

Abstract. We show that a recently proposed algorithm ([9]) for decoding cyclic codes may be applied efficiently to all binary cyclic codes with t ≤ 2 and n < 63.

Keywords: Gröbner bases, cyclic codes, decoding procedure.
1 Introduction
Cyclic codes form a large class of linear block codes, containing in particular the BCH codes. As regards BCH codes, it is known that they can be decoded efficiently ([5]), but also that their decoding performance degrades as the length increases ([8]). As regards cyclic codes, no efficient decoding algorithm is known (up to their actual distance), but they are not known to suffer from the same distance limitation as the BCH codes (and experimentally they become better as the length increases). However, a decoding algorithm for cyclic codes has recently been proposed ([9]), which might be efficient.
This algorithm depends on the notion of "general error locator polynomial" and it is efficient for a given code C if and only if the relevant polynomial is sparse. At present, there is no theoretical proof of the sparsity of general error locator polynomials for arbitrary cyclic codes, but there is some experimental evidence, at least in the binary case. Let t be the number of errors a code can correct (the correction capability of the code). In this paper we show that this algorithm may be applied efficiently to all binary cyclic codes with length less than 63 and t ≤ 2. Apart from two cases, where a direct Gröbner basis computation is needed, we show that all these codes may be grouped into a few classes, each allowing a theoretical interpretation. In particular, the general error locator polynomial for all these codes is sparse.

This paper is organized as follows:
• in Section 2, we provide the preliminaries that we need for cyclic codes, recall the notion of general error locator polynomial for linear block codes and show how it can be applied to decode binary cyclic codes;
• in Section 3, we investigate the case when a binary cyclic code has a defining set of type S = {1, 2i + 1} and t = 2;
• in Section 4, we investigate the case when a binary cyclic code admits S = {1, n − 1, l} as a defining set and t = 2;
• in Section 5, we provide general error locator polynomials for all binary cyclic codes with n < 63 and t ≤ 1;
• in Section 6, we provide general error locator polynomials for all binary cyclic codes with n < 63 and t = 2;
• in Section 7, we draw some conclusions.
2 Preliminaries
Our standard references for classical coding theory are [6] and [10]. In this section n is understood to be an odd number n ≥ 3.
2.1 Some algebraic background and notation
We denote by Z_2 the field with two elements. For any m ≥ 2, we denote by F_{2^m} the finite field with 2^m elements.
Let x^n − 1 ∈ Z_2[x]. We denote by F the splitting field of x^n − 1 over Z_2 and by α a primitive n-th root of unity, that is,

x^n − 1 = Π_{i=0}^{n−1} (x − α^i).

It is well known that F = F_{2^m}, for some m ≥ 2, and that the powers of α, {α^i | 0 ≤ i ≤ n − 1}, are distinct. From now on α will be understood.
2.2 Standard notation for cyclic codes
Let C be an [n, k, d] binary cyclic code with n odd. Let g ∈ Z_2[x] be the generator polynomial of the code C. It is well known that g is a polynomial of degree r = n − k and that g divides x^n − 1. Traditionally, the complete defining set of C is the set S_C = {i | g(α^i) = 0, 0 ≤ i ≤ n − 1}. As S_C is partitioned into cyclotomic classes, there are some subsets S of S_C, each of which suffices to specify the code unambiguously; we call any such S a defining set.

Example 2.1. Let C be the [15, 7, 5] binary BCH code. Then S_C = {1, 2, 3, 4, 6, 8, 9, 12}, but the code may be unambiguously defined by S = {1, 3} or also by S' = {1, 3, 9}, so that both S and S' are defining sets for C.
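As a small computational aid (an illustration, not part of the paper), the complete defining set of Example 2.1 can be recovered from any defining set by closing it under the 2-cyclotomic classes i ↦ 2i (mod n). The helper names below are our own.

def cyclotomic_coset(i, n):
    """Return the 2-cyclotomic coset of i modulo n as a sorted list."""
    coset, j = set(), i % n
    while j not in coset:
        coset.add(j)
        j = (2 * j) % n
    return sorted(coset)

def complete_defining_set(S, n):
    """Union of the cyclotomic cosets of the exponents in the defining set S."""
    full = set()
    for i in S:
        full.update(cyclotomic_coset(i, n))
    return sorted(full)

if __name__ == "__main__":
    n = 15
    print(complete_defining_set({1, 3}, n))     # [1, 2, 3, 4, 6, 8, 9, 12] = S_C
    print(complete_defining_set({1, 3, 9}, n))  # same set, so S' is also a defining set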
2.3 General error locator polynomials for linear codes
Let D be a binary [n, k, d] linear block code and let H be one of its parity-check matrices. We allow the entries of H to lie in the extension field F. We recall that t = ⌊(d − 1)/2⌋. If we denote by c = (c_0, . . . , c_{n−1}), v = (v_0, . . . , v_{n−1}) and e = (e_0, . . . , e_{n−1}), respectively, the transmitted codeword, the received vector and the error vector, then we get

H v^T = H(c^T + e^T) = H c^T + H e^T = 0 + H e^T = s^T,

where the vector s ∈ F^{n−k} is called the syndrome vector associated to v.

Definition 2.2. A correctable syndrome is a syndrome vector s ∈ F^{n−k} corresponding to an error with weight µ ≤ t.
Let e be an error of weight µ ≤ t. We denote by L_e(z) the corresponding error locator polynomial, i.e. a polynomial of degree µ whose roots represent the error locations. In other words, if e is the vector having 1 exactly in the positions k_1 < · · · < k_µ and 0 elsewhere, where k_1, . . . , k_µ are the error positions, then

L_e(z) = Π_{l=1}^{µ} (z − α^{k_l}).
Remark 2.3. Traditionally, the reciprocal of L_e(z), whose roots are the inverses α^{−k_l} = (α^{k_l})^{−1}, is used as the error locator polynomial.

We recall the definition of general error locator polynomial for D:

Definition 2.4 ([9]). Let L be a polynomial in Z_2[X, z], where X = (x_1, . . . , x_{n−k}). Then L is a general error locator polynomial of D if:
1. L(X, z) = z^t + a_{t−1}(X) z^{t−1} + · · · + a_0(X), with a_j ∈ Z_2[X], 0 ≤ j ≤ t − 1;
2. given a correctable syndrome s = (s_1, . . . , s_{n−k}), corresponding to an error of weight µ ≤ t, if we evaluate the X variables at s, then the t roots of L(s, z) are the µ error locations plus zero counted with multiplicity t − µ.

Observe that point 2 of Def. 2.4 is equivalent to L(s, z) = z^{t−µ} L_e(z), where e is the error associated to the syndrome s. Given a generic linear code D, it is not known whether L exists or not. The main result of [9] is the following:

Theorem 2.5 ([9]). Every binary cyclic code possesses a general error locator polynomial.

Remark 2.6. In [9] the results are more general, holding over any field and covering also the cases when some erasures occur.
2.4 General error locator polynomials for cyclic codes
From now on, we shorten "binary cyclic code" to "code". Let C be an [n, k, d] code; we now recall briefly how to obtain the general error locator polynomial of C. Let S_C = {i_1, . . . , i_{n−k}} be the complete defining set of C. We need a few definitions.

Definition 2.7. Let n ∈ N be odd. We denote by p(n, x, y) ∈ Z_2[x, y] the following polynomial:

p(n, x, y) = (x^n − y^n)/(x − y) = Σ_{i=0}^{n−1} x^i y^{n−1−i}.

We are going to define an ideal in Z_2[X, Z], where X = (x_1, . . . , x_{n−k}) and Z = (z_1, . . . , z_t). The x variables play the role of syndromes and the z variables play the role of error locations.

Definition 2.8 ([9]). With the above notation, we denote by I_C the ideal in Z_2[X, Z] generated by the following polynomials:

{ Σ_{l=1}^{t} z_l^{i_j} − x_j | 1 ≤ j ≤ n − k } ∪ { x_j^{2^m} − x_j | 1 ≤ j ≤ n − k } ∪ { z_i^{n+1} − z_i | 1 ≤ i ≤ t } ∪ { z_{l'} · z_l · p(n, z_{l'}, z_l) | 1 ≤ l' < l ≤ t }.

The ideal I_C is called the syndrome ideal of C. Let G be the (completely) reduced Gröbner basis of I_C, w.r.t. the lexicographical order x_1 < x_2 < · · · < x_{n−k} < z_t < · · · < z_1. We denote by G_X and G_{XZ} the following subsets of G:

G_X = G ∩ Z_2[X],    G_{XZ} = G ∩ (Z_2[X, Z] \ Z_2[X]),    G = G_X ∪ G_{XZ}.

The elements of G_{XZ} can be collected into blocks {G_i}_{1≤i≤t}, where, for any i, G_i ⊂ Z_2[X, z_t, . . . , z_{i+1}][z_i] \ Z_2[X, z_t, . . . , z_{i+1}], so that G_{XZ} = ∪_{i=1}^{t} G_i. On the other hand, any G_i, 1 ≤ i ≤ t, can be decomposed into blocks of polynomials according to their degree with respect to the variable z_i:

G_i = ∪_{δ=1}^{∆_i} G_{iδ}.
In this way, if g ∈ G_{iδ}, we have:
• g ∈ Z_2[X, z_t, . . . , z_{i+1}][z_i] \ Z_2[X, z_t, . . . , z_{i+1}],
• deg_{z_i}(g) = δ, i.e. g = a z_i^δ + · · · with a ∈ Z_2[X, z_t, . . . , z_{i+1}].

Let N_{iδ} be the number of elements of G_{iδ}. We name the elements of the set G_{iδ} = {g_{iδj}, 1 ≤ j ≤ N_{iδ}} after their order: h < j ⇔ Lt(g_{iδh}) < Lt(g_{iδj}). The following theorem is a direct consequence of results from [9].

Theorem 2.9 ([9]). Let I_C be the syndrome ideal associated to C and let G be as above. Then:
1. G_i = ∪_{δ=1}^{i} G_{iδ} and G_{iδ} ≠ ∅, for 1 ≤ i ≤ t and 1 ≤ δ ≤ i;
2. G_{ii} = {g_{ii1}}, for 1 ≤ i ≤ t, i.e. exactly one polynomial exists with degree i w.r.t. the variable z_i in G_i. Moreover, its leading term and leading polynomial are Lt(g_{ii1}) = z_i^i and Lp(g_{ii1}) = 1.
The polynomial g_{tt1} ∈ G_t may be shown to satisfy all properties needed by a general error locator polynomial. So, if we want to compute L for a given cyclic code C, we first compute G from I_C and then we take g_{tt1} from G:

L(X, z) = g_{tt1}(x_1, . . . , x_{n−k}, z).

Remark 2.10. In order to write I_C, we do not need to use the complete defining set S_C of C: we may use any other defining set S of C. For example, if {1, 2} ⊂ S_C, we have both Σ_{l=1}^{t} z_l^1 = x_1 and Σ_{l=1}^{t} z_l^2 = x_2, which means x_2 = x_1^2, so that we may remove x_2 (and its corresponding equation).

Once we have computed L for a code C, the decoding algorithm is straightforward:

    Input: s = (s_1, . . . , s_{n−k})
    µ := t
    While a_{t−µ}(s_1, . . . , s_{n−k}) = 0 do µ := µ − 1
    Output: µ, L(z)

so that

(1)    L_e(z) = L(z) / z^{t−µ}.
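As an illustration (not a computation from the paper, which used Singular on much larger codes), the pipeline "generate I_C, compute the reduced lex Gröbner basis G, read off g_{tt1}" can be run with SymPy on a toy instance: the [7, 4, 3] Hamming code with t = 1 and defining set S = {1}, keeping only the syndrome of S as allowed by Remark 2.10. The variable names and the choice of code are our own.

from sympy import symbols, groebner

x1, z1 = symbols('x1 z1')

# Generators of the syndrome ideal I_C of Definition 2.8 (t = 1, n = 7, m = 3).
I_C = [
    z1 - x1,        # sum_{l=1}^{t} z_l^{i_j} - x_j, with i_1 = 1
    x1**8 - x1,     # field equation x_j^(2^m) - x_j
    z1**8 - z1,     # restriction z_i^(n+1) - z_i
]

# Reduced lex Groebner basis with x1 < z1 (z1 is the largest variable).
G = groebner(I_C, z1, x1, order='lex', modulus=2)
print(list(G))      # expected: [x1 + z1, x1**8 + x1]

# g_111 = z1 + x1 is the only element of G_1, so L(x1, z) = z + x1:
# for a single error the error location is the syndrome itself.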
Remark 2.11. If t = 1, the definition of L requires a polynomial in Z_2[X, z] with deg_z(L) = 1 and leading coefficient 1. This means that L is of type

L(X, z) = z + a(X),    a ∈ Z_2[X].

If t = 2, the definition of L requires a polynomial in Z_2[X, z] with deg_z(L) = 2 and leading coefficient 1. This means that L is of type

L(X, z) = z^2 + a(X) z + b(X),    a, b ∈ Z_2[X].

Remark 2.12. Suppose that t = 2, so that L is as in Remark 2.11, and suppose that a two-component syndrome vector s = (x̄_1, x̄_2) suffices to identify a correctable error e. Let z̄_1 and z̄_2 be the two roots of L(s, z) (the error locations). Then there are only three (mutually exclusive) cases:
• either the weight of e is 2, which happens if and only if z̄_1 ≠ 0 and z̄_2 ≠ 0 (and in this case z̄_1 ≠ z̄_2),
• or the weight of e is 1, which happens if and only if exactly one of z̄_1, z̄_2 is 0,
• or e = 0, which happens if and only if z̄_1 = z̄_2 = 0.
3 Case (1, 2i + 1) and t = 2
In this section we consider only codes with a defining set S = {1, 2i + 1}, for some i ≥ 1, and t = 2 (with an odd length). From Def. 2.8, we get:

(2)    z_1 + z_2 = x_1,
(3)    z_1^{2i+1} + z_2^{2i+1} = x_2.

The general error locator polynomial associated to C is (Remark 2.11)

L_C(x_1, x_2, z) = z^2 + a(x_1, x_2) z + b(x_1, x_2).
Remark 3.1. In the z-variable, the polynomial L_C has degree two, so we can express its coefficients in terms of its roots (see Remark 2.12):

z̄_1 + z̄_2 = a(x̄_1, x̄_2),    z̄_1 z̄_2 = b(x̄_1, x̄_2).

From (2) we immediately obtain a(x_1, x_2) = x_1. The main result of this section is the following.

Theorem 3.2. Let C be a code with t = 2 and defining set S = {1, 2i + 1}, for some i ≥ 1 (and odd length). Let s = (x̄_1, x̄_2) be a correctable syndrome and let z̄_1 and z̄_2 be the corresponding error locations. Let b̄ = z̄_1 z̄_2. Then there is a polynomial P ∈ Z_2[y_1, y_2, y_3] such that

P(x̄_1, x̄_2, b̄) = 0,    deg_{y_3}(P) ≤ i.
Moreover, P does not depend on s but only on i.

Theorem 3.2 is an easy consequence of Theorem 3.7. To prove Theorem 3.7, we need a few simple lemmas.

Lemma 3.3. Let D be a commutative ring with identity 1_D. Let l be a natural number, l ≥ 1. Then there are some a_h^{[l]} ∈ D, for 1 ≤ h ≤ l, such that for any d_1 and d_2 in D we have

d_1^{2l+1} + d_2^{2l+1} = (d_1 + d_2)^{2l+1} + Σ_{h=1}^{l} a_h^{[l]} (d_1 d_2)^h (d_1^{2(l−h)+1} + d_2^{2(l−h)+1}).
Proof. Let 1_D be the multiplicative identity in D. Since the binomial theorem holds in any commutative ring, we have

(d_1 + d_2)^{2l+1} = Σ_{j=0}^{2l+1} C(2l+1, j) d_1^j d_2^{2l+1−j},

where C(a, b) denotes the binomial coefficient. Since 2l + 2 is even, we can break the right-hand side of the former equation into two sums with the same number of addenda:

(4)    Σ_{j=0}^{2l+1} C(2l+1, j) d_1^j d_2^{2l+1−j} = Σ_{j=0}^{l} C(2l+1, j) d_1^j d_2^{2l+1−j} + Σ_{j=l+1}^{2l+1} C(2l+1, j) d_1^j d_2^{2l+1−j}.

By two index substitutions, h = j − l − 1 and m = l − h, we get

Σ_{j=l+1}^{2l+1} C(2l+1, j) d_1^j d_2^{2l+1−j} = Σ_{h=0}^{l} C(2l+1, h+l+1) d_1^{h+l+1} d_2^{l−h} = Σ_{m=0}^{l} C(2l+1, 2l+1−m) d_1^{2l+1−m} d_2^{m}.

But then we can collect powers of (d_1 d_2) in (4), getting

Σ_{j=0}^{l} C(2l+1, j) (d_1 d_2)^j d_2^{2l+1−2j} + Σ_{m=0}^{l} C(2l+1, m) (d_1 d_2)^m d_1^{2l+1−2m}

(we have used the obvious fact C(2l+1, 2l+1−m) = C(2l+1, m)), i.e.

(5)    Σ_{j=0}^{2l+1} C(2l+1, j) d_1^j d_2^{2l+1−j} = Σ_{j=0}^{l} C(2l+1, j) (d_1 d_2)^j (d_1^{2l+1−2j} + d_2^{2l+1−2j}).

From (5), our desired result readily follows by setting

a_h^{[l]} = −C(2l+1, h) · 1_D

(the sign is immaterial in characteristic 2, the only case used in the sequel).  □
Note that the a_h^{[l]}'s in Lemma 3.3 do not depend on d_1 or d_2 and that some of the a_h^{[l]}'s may be 0_D, depending solely on the characteristic of D. We define inductively a sequence of polynomials.

Definition 3.4. For any σ ≥ 0, let L_σ ∈ Z_2[y_1, y_2] be such that:
• L_0 = y_1,
• L_σ = y_1^{2σ+1} + Σ_{h=1}^{σ} a_h^{[σ]} y_2^h L_{σ−h},
where the {a_h^{[σ]}} are the same as in Lemma 3.3, with D = Z_2.

Lemma 3.5. Given L_σ as in the above definition, for all σ ≥ 0 we have deg_{y_2}(L_σ) ≤ σ.
Proof. By induction on σ. If σ = 0 we have L_0 = y_1. By induction we assume that, for any 1 ≤ h < σ, deg_{y_2}(L_{σ−h}) ≤ σ − h. By definition, we have L_σ = y_1^{2σ+1} + Σ_{h=1}^{σ} a_h^{[σ]} y_2^h L_{σ−h}, so that

deg_{y_2}(L_σ) = deg_{y_2}( Σ_{h=1}^{σ} a_h^{[σ]} y_2^h L_{σ−h} ) ≤ max_{1≤h≤σ} (h + (σ − h)) = σ.  □

Lemma 3.6. Let l ≥ 1. For any z̃_1, z̃_2 in the algebraic closure of Z_2, we have

z̃_1^{2l+1} + z̃_2^{2l+1} = L_l(z̃_1 + z̃_2, z̃_1 z̃_2).

Proof. It follows immediately from Lemma 3.3.  □

Theorem 3.7. Let l ≥ 1. Let z̃_1, z̃_2 lie in the algebraic closure of Z_2 and let x̃_1, x̃_2, b̃ be such that x̃_1 = z̃_1 + z̃_2, x̃_2 = z̃_1^{2l+1} + z̃_2^{2l+1} and b̃ = z̃_1 z̃_2. Then there is P ∈ Z_2[y_1, y_2, y_3] such that

P(x̃_1, x̃_2, b̃) = 0,    deg_{y_3}(P) ≤ l.

Moreover, P does not depend on the choice of z̃_1 or z̃_2.

Proof. Set P(y_1, y_2, y_3) = L_l(y_1, y_3) − y_2 and apply Lemma 3.6.  □
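The recursion of Definition 3.4 and the identity of Lemma 3.6 can be checked mechanically. The following sketch (an illustration, not part of the paper) computes L_σ with SymPy, taking a_h^{[σ]} to be the binomial coefficient C(2σ+1, h) reduced mod 2 as in the proof of Lemma 3.3; the helper name L_sigma is our own.

from math import comb
from sympy import symbols, expand, Poly

y1, y2, z1, z2 = symbols('y1 y2 z1 z2')

def L_sigma(sigma):
    """The polynomial L_sigma of Definition 3.4, with coefficients reduced mod 2."""
    if sigma == 0:
        return y1
    res = y1**(2 * sigma + 1)
    for h in range(1, sigma + 1):
        if comb(2 * sigma + 1, h) % 2:       # a_h^[sigma] in Z_2
            res += y2**h * L_sigma(sigma - h)
    return expand(res)

# Lemma 3.6: z1^(2l+1) + z2^(2l+1) = L_l(z1 + z2, z1*z2) over Z_2, for small l.
for l in range(1, 6):
    diff = expand(z1**(2*l + 1) + z2**(2*l + 1)
                  - L_sigma(l).subs({y1: z1 + z2, y2: z1 * z2}))
    assert Poly(diff, z1, z2, modulus=2).is_zero      # Lemma 3.6
    assert Poly(L_sigma(l), y2).degree() <= l         # Lemma 3.5
    print(f"l = {l}:  L_l = {L_sigma(l)}")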
Example 3.8. Let C be an [n, k, d] code with S = {1, 3}, n odd and t = 2. We have z_1 + z_2 = x_1, z_1^3 + z_2^3 = x_2 and L_C = z^2 + x_1 z + B. Then

z_1^3 + z_2^3 = (z_1 + z_2)^3 + z_1 z_2 (z_1 + z_2),

that is, B x_1 + x_2 + x_1^3 = 0. Note that C in Example 3.8 is a BCH code and so this particular case can be deduced also from [3], [4].

Example 3.9. Let C be an [n, k, d] code with S = {1, 5}, n odd and t = 2. We have z_1 + z_2 = x_1, z_1^5 + z_2^5 = x_2 and L_C = z^2 + x_1 z + B. We get

z_1^5 + z_2^5 = (z_1 + z_2)^5 + z_1 z_2 (z_1^3 + z_2^3) = (z_1 + z_2)^5 + z_1 z_2 ((z_1 + z_2)^3 + z_1 z_2 (z_1 + z_2)),

i.e. B^2 x_1 + B x_1^3 + x_2 + x_1^5 = 0.
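As a concrete check (an illustration under our own choice of field representation, not code from the paper), the relation of Example 3.8 can be verified numerically over F_{16} and used to decode every weight-two error pattern of the [15, 7, 5] BCH code directly from its two syndromes; here x_1^{14} plays the role of x_1^{−1}, and the helpers gf_mul, gf_pow are ours.

def gf_mul(a, b, poly=0b10011, m=4):
    """Multiplication in GF(2^4) modulo the primitive polynomial x^4 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << m):
            a ^= poly
    return r

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

n, alpha = 15, 0b0010              # alpha = x is primitive in GF(16)

def decode_two_errors(k1, k2):
    """Recover the error positions {k1, k2} from the syndromes alone."""
    # Syndromes of the error with 1's in positions k1, k2 (defining set {1, 3}).
    x1 = gf_pow(alpha, k1) ^ gf_pow(alpha, k2)
    x2 = gf_pow(alpha, 3 * k1) ^ gf_pow(alpha, 3 * k2)
    # Coefficient b of the general error locator polynomial (Example 3.8).
    b = gf_mul(x2, gf_pow(x1, 14)) ^ gf_pow(x1, 2)
    # Roots of z^2 + x1*z + b among 0 and the powers of alpha.
    roots = [z for z in [0] + [gf_pow(alpha, i) for i in range(n)]
             if gf_mul(z, z) ^ gf_mul(x1, z) ^ b == 0]
    return sorted(i for i in range(n) if gf_pow(alpha, i) in roots)

assert all(decode_two_errors(k1, k2) == [k1, k2]
           for k1 in range(n) for k2 in range(k1 + 1, n))
print("all weight-2 error patterns of the [15,7,5] BCH code decoded")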
4 Case (1, n − 1, l) and t = 2
Let C be an [n, k, d] code with defining set S = {1, n − 1, l}, n odd and t = 2. In this case we can consider three syndromes {x_1, x_2, x_3} such that

(6)    z_1 + z_2 = x_1,
(7)    z_1^{n−1} + z_2^{n−1} = x_2,
(8)    z_1^l + z_2^l = x_3.

We also know that z_1^{n+1} = z_1 and z_2^{n+1} = z_2. The general error locator polynomial of C is of type (Remark 2.11)

L = z^2 + a(x_1, x_2, x_3) z + b(x_1, x_2, x_3).

Thanks to Remarks 2.12 and 3.1, we have

L = z^2 + x_1 z + b(x_1, x_2, x_3),    with b(x̄_1, x̄_2, x̄_3) = z̄_1 z̄_2.

To study the case when there are exactly two errors (µ = 2), we need only the first two syndromes. This is done in Subsection 4.1. To extend to the case when there is one error (µ = 1), we also need the third syndrome. This is done in Subsections 4.2 and 4.3. All these cases are summarized in Subsection 4.4.
4.1 µ = 2
We suppose there are exactly two errors. This is equivalent to z̄_1 ≠ 0 and z̄_2 ≠ 0. Under these assumptions, from (6) and (7) we have

x_1 x_2 = (z_1 + z_2)(z_1^{n−1} + z_2^{n−1}) = z_1^n + z_2^n + z_1 z_2 (z_1^{n−2} + z_2^{n−2}) = z_1 z_2 (z_1^{n−2} + z_2^{n−2}),

since z_1^n = z_2^n = 1, and

x_2^2 = (z_1^{n−1} + z_2^{n−1})^2 = z_1^{2n−2} + z_2^{2n−2} = z_1^{n−2} + z_2^{n−2},

so that x_1 x_2 = z_1 z_2 x_2^2, i.e.

b = z_1 z_2 = x_1 x_2 / x_2^2 = x_1 / x_2.

We have a special case when also S' = {0, 1} is a defining set. In this case we have, for some 1 ≤ δ ≤ n − 1,

1 ≡ −2^δ (mod n),

and

z_1 = z_1^{n+1} = z_1^{n−2^δ} = (z_1^{n−2^δ})(z_1^n)^{2^δ−1} = z_1^{n·2^δ − 2^δ} = z_1^{(n−1)·2^δ},

which means

x_2^{2^δ} = (z_1^{n−1} + z_2^{n−1})^{2^δ} = z_1 + z_2 = x_1,

so that b = x_2^{2^δ − 1}.

4.2 µ = 1 and l = 0
There is only one error if and only if exactly one of {z_1, z_2} is zero. In particular, if there is only one error the product z_1 z_2 reduces to 0. Unfortunately, the ratio x_1/x_2 becomes

x_1 / x_2 = z_1 / z_1^{−1} = z_1^2,

and so it cannot be used as b in this case. To take into consideration also the case µ = 1, we may multiply by a polynomial h ∈ Z_2[x_1, x_2, x_3] s.t.
• h(x̄_1, x̄_2, x̄_3) = 1, if (x̄_1, x̄_2, x̄_3) corresponds to two errors,
• h(x̄_1, x̄_2, x̄_3) = 0, if (x̄_1, x̄_2, x̄_3) corresponds to one error.

Such an h will ensure that the product

(x_1 / x_2) h(x_1, x_2, x_3)

takes the value z̄_1 z̄_2 both when µ = 2 and when µ = 1. Moreover, since x_2^{−1} is in practice replaced by x_2^{n−1}, said product will be 0 when µ = 0, which is what we expect from b even in the case µ = 0.

To construct such an h, we use the third syndrome. When l = 0 we may rewrite (8) as

(9)    z_1^n + z_2^n = x_3.

Since z_1^n = 1 if and only if z_1 is non-zero, we have that:
• z_1^n + z_2^n = 1 + 1 = 0, if µ = 2,
• z_1^n + z_2^n = 1, if µ = 1,
• z_1^n + z_2^n = 0 + 0 = 0, if µ = 0.

In other words, we can take h = 1 + x_3.
4.3 µ = 1 and l = n/3
We argue analogously to the previous case. The only difference is that now l = n/3, and so h will be different. From z_1^{n/3} + z_2^{n/3} = x_3, we have

x_3^3 = (z_1^{n/3} + z_2^{n/3})^3 = z_1^n + z_2^n + z_1^{n/3} z_2^{n/3} (z_1^{n/3} + z_2^{n/3}) = z_1^n + z_2^n + B^{n/3} x_3,

and this formula is true for any µ. If µ = 2 we have

(10)    x_3^3 = B^{n/3} x_3 = (x_1 x_2^{−1})^{n/3} x_3 = x_1^{n/3} x_2^{2n/3} x_3.

If µ = 1 we have

(11)    x_3^3 = (z_1^{n/3})^3 = z_1^n = 1.

Thanks to (10) and (11) it is now clear that the following h satisfies our requirements:

h = (x_3^3 + 1) / (x_1^{n/3} x_2^{2n/3} x_3 + 1).
4.4 Summary
We summarize our results in the following theorem.

Theorem 4.1. Let C be an [n, k, d] code, with t = 2 and n odd. Suppose that C admits a defining set S = {1, n − 1, l}. Then

L = z^2 + x_1 z + (x_1/x_2) h(x_1, x_2, x_3),

where h is a function s.t.
• h(x̄_1, x̄_2, x̄_3) = 1, if (x̄_1, x̄_2, x̄_3) corresponds to two errors,
• h(x̄_1, x̄_2, x̄_3) = 0, if (x̄_1, x̄_2, x̄_3) corresponds to one error.

In particular,
• if l = 0, h = 1 + x_3,
• if l = n/3, h = (x_3^3 + 1) / (x_1^{n/3} x_2^{2n/3} x_3 + 1).

Moreover, if C admits also S' = {0, 1} as a defining set, then

L = z^2 + x_1 z + x_2^{2^δ − 1} h(x_1, x_2, x_3),

where 1 ≤ δ ≤ n − 1 is such that 1 ≡ −2^δ (mod n).
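Theorem 4.1 can be checked numerically on the smallest instance of Table 1: the n = 9 code with S = {0, 1}, for which l = 0, 2^3 ≡ −1 (mod 9), δ = 3 and b = x_2^7 (1 + x_3). The sketch below is only an illustration; the representation of GF(2^6) (modulo x^6 + x + 1, with α taken as the 7-th power of the generator) and the helper names are our own choices.

def gf_mul(a, b, poly=0b1000011, m=6):
    """Multiplication in GF(2^6) modulo the primitive polynomial x^6 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << m):
            a ^= poly
    return r

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

n = 9
alpha = gf_pow(0b10, 7)      # x has order 63 in GF(64)*, so x^7 has order 9

# Two errors (mu = 2): x3 = z1^0 + z2^0 = 0, so h = 1 + x3 = 1 and b = x2^7.
for k1 in range(n):
    for k2 in range(k1 + 1, n):
        z1, z2 = gf_pow(alpha, k1), gf_pow(alpha, k2)
        x1, x2 = z1 ^ z2, gf_pow(z1, n - 1) ^ gf_pow(z2, n - 1)
        assert gf_pow(x2, 2**3) == x1            # x2^(2^delta) = x1
        assert gf_pow(x2, 7) == gf_mul(z1, z2)   # b = x2^(2^delta - 1) = z1*z2

# One error (mu = 1): x3 = z1^0 = 1, so h = 1 + x3 = 0 and b = 0, as required.
for k1 in range(n):
    x2, x3 = gf_pow(gf_pow(alpha, k1), n - 1), 1
    assert gf_mul(gf_pow(x2, 7), 1 ^ x3) == 0

print("Theorem 4.1 verified for the n = 9, S = {0, 1} code")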
5 Codes with t ≤ 1 and n < 63
In this section we consider codes with t ≤ 1 and n < 63, n odd. The case t = 0 is trivial. If t = 1 and n < 63, we can distinguish the following cases.

1) S = {m}, with (n, m) = 1. In this case the code C is equivalent to the BCH code with t = 1 and S' = {1}. Since z_1^m = x_1, z_1^{n+1} = z_1 and (n, m) = 1, we may apply Bezout's Lemma and obtain an integer k such that mk ≡ 1 (mod n). This means

(z_1^m)^k = z_1  ⟹  x_1^k = z_1.

In other words, we can say that L_C = z + x_1^k.

2) S = {m, h}, with (m, h) = 1. We have that z_1^m = x_1 and z_1^h = x_2. From Bezout's Lemma, there are integers m' and h' such that mm' + hh' = 1, hence

(z_1^m)^{m'} (z_1^h)^{h'} = z_1,    x_1^{m'} x_2^{h'} = z_1,    L_C = z + x_1^{m'} x_2^{h'}

(see the computational sketch at the end of this section).

3) The code C is a sub-code of a code C' of type 1) or 2). In this case it is possible to decode C using the general error locator polynomial of C'. In fact, if s is a correctable syndrome for C, then s is a correctable syndrome also for C'. By evaluating the L of C' at s, we find the error locations.

4) The code C is equivalent to a code C' of type 1), 2) or 3). Again, it is possible to decode using the general error locator polynomial of C', as follows. Let φ : (Z_2)^n → (Z_2)^n be the coordinate permutation providing the code equivalence, i.e. φ(C') = C. Let e be a correctable error for C. Then e' = φ^{−1}(e) is a correctable error for C' (φ is distance invariant). Let s' be the syndrome corresponding to e'. By evaluating the L of C' at s', we find e' and hence e, since e = φ(e').
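The exponent computations in cases 1) and 2) above are plain Bezout arithmetic. The following sketch (with arbitrarily chosen illustrative values of n, m and h, not taken from the paper) makes them explicit; it requires Python 3.8+ for the modular inverse.

n = 15

# Case 1): S = {m} with gcd(n, m) = 1, so z1 = x1^k where m*k ≡ 1 (mod n).
m = 7
k = pow(m, -1, n)                        # 13, since 7*13 ≡ 1 (mod 15)
assert (m * k) % n == 1

# Case 2): S = {m, h} with gcd(m, h) = 1, so z1 = x1^m' * x2^h' where
# m*m' + h*h' = 1 (extended Euclid); negative exponents can be read modulo n
# on the nonzero syndromes, since these are n-th roots of unity.
def ext_gcd(a, b):
    if b == 0:
        return a, 1, 0
    g, u, v = ext_gcd(b, a % b)
    return g, v, u - (a // b) * v

m, h = 3, 7                              # gcd(3, 7) = 1
g, mp, hp = ext_gcd(m, h)
assert g == 1 and m * mp + h * hp == 1   # e.g. 3*(-2) + 7*1 = 1
print(k, mp, hp)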
6 Codes with t = 2 and n < 63
In this section we consider all binary cyclic codes with t = 2, n < 63 and n odd, summarizing all the cases we have discussed. By direct inspection the following theorem is easily established.

Theorem 6.1. Let C be an [n, k, d] binary cyclic code with d ∈ {5, 6} and 7 ≤ n < 63 (n odd). Then there are only five cases:
1. C admits a defining set of type S = {1, 2i + 1}, i ≥ 1.
2. C admits a defining set of type S = {1, n − 1, l}.
3. C is one of the following two codes: a) n = 39, S = {0, 1}; b) n = 51, S = {0, 1, 5}.
4. C is a sub-code of one of the codes of the above cases.
5. C is equivalent to one of the codes of the above cases.

For all the codes in points 1) and 2) of Theorem 6.1, the general error locator polynomial is of type L = z^2 + a(x_1, x_2, x_3) z + b(x_1, x_2, x_3), with a = x_1 (Sections 3 and 4). It is interesting to note that also for the codes in point 3) of Theorem 6.1 a direct computation shows that a = x_1. Since the codes of points 4) and 5) of Theorem 6.1 can be decoded using the polynomials for the other three cases, we need to exhibit b only for the codes covered by points 1), 2) and 3). Moreover, we need to provide b only for one code in any equivalence class.

The results are presented in the following table, grouped according to the length. Within one group, all codes that may be decoded with a polynomial L are listed to the right of L. Among these codes, the one in bold corresponds to the code actually used to compute L. The value in the third column, "case", indicates which point of Theorem 6.1 has been used to describe the corresponding code family in the last column. Observe that sometimes a code family might be described with either point 1 or point 2.
n
B
case
codes
9
x72
2
{0, 1}
15
2 x2 x14 1 + x1
1
{1, 3}, {3, 7}, {1, 3, 7}, {0, 1, 3}
14 x1 x14 2 + x1 x2 x3
2
{0, 1, 7}
53 x121 + x104 + x87 1 1 1 + x1
1
{1}, {3}, {0, 1}
2 x2 x62 1 + x1
1
62 x1 x62 2 + x1 x2 x3
2
{0, 1, 5}, {0, 1, 5, 7}
25
x127 + x21 1
1
{1}
27
x511 + x511 1 1 x3
2
{0, 1}
245 x2 x20 1 + x1
1
{1, 9}, {0, 1, 9}
24 12 8 10 7 6 5 x13 2 x1 + x2 x1 + x2 x1 + x2 x1
1
{1, 15}, {3, 7}, {5, 11}
1
{1, 5}, {1, 7}, {3, 11} {5, 7}, {1, 11, 15} {1, 15}, {3, 15} {0, 1, 5}, {0, 1, 7} {1, 5, 15}, {1, 7, 15}
17
21
31
2 23 x82 x24 1 + x2 x1
{1, 3}, {5, 9}, {1, 3, 7}, {3, 5, 9} {5, 7, 9}, {1, 3, 9}, {0, 1, 3}, {0, 1, 3, 9}
{1, 3}, {7, 15}, {7, 11}, {1, 11}, {3, 5}, {5, 15} 1 {1, 3, 7}, {0, 1, 3, 7} {3, 7, 11}, {1, 5, 11} {3, 5, 7}, {5, 11, 15} {1, 3, 15}, {3, 7, 15} {5, 7, 11}, {3, 5, 11}
2 x2 x30 1 + x1
33
x21 + x21 x3
2
{0, 1}, {0, 1, 5} {0, 1, 11}, {0, 1, 5, 11}
352 + x317 + x177 + x142 + x107 x22 x27 1 + x1 1 1 1 1
1
{1, 5}, {3, 15} {3, 5, 15}, {1, 5, 15} {0, 1, 5}, {0, 1, 5, 15}
x2 + x21 x4094 1
1
{1, 3}, {1, 3, 7}
x665 + x626 + x587 + x548 + x509 2 2 2 2 2 353 + x197 + x158 + x119 + x2 +x470 + x 2 2 2 2 2 2
3
{0, 1}, {0, 1, 13} {0, 1, 7}, {0, 1, 7, 13}
1
{1}, {0, 1}
x4094 x2 + x21 1
1
{1, 3}, {7, 21} {1, 3, 21}, {3, 7, 21} {1, 3, 9}, {7, 9, 21} {1, 3, 7}, {0, 1, 3} {1, 3, 9, 15}, {1, 3, 15} {1, 3, 15, 21}, {1, 3, 9, 21} {1, 3, 9, 15, 21}, {0, 1, 3, 9} {0, 1, 3, 15}, {0, 1, 3, 21} {0, 1, 3, 9, 15, 21} {1, 3, 7, 9, 21} {0, 1, 3, 9, 15}, {0, 1, 3, 15, 21} {0, 1, 3, 9, 21}
677 + x587 + x42 x81 + x22 x51 + x2 x26 1 + x1 1 +x542 + x407 + x362 + x317 + x272 1 1 1 1 1 + 2 +x92 1 + x1
1
{1, 21}, {3, 7} {0, 1, 15, 21}, {0, 1, 21} {1, 21, 15}
542 + x317 + x272 + x47 + x2 x22 x29 1 + x1 1 1 1 1
1
{1, 9}, {7, 9} {0, 1, 9, 15}, {0, 1, 9, 21} {0, 1, 5, 7, 9}, {0, 1, 5, 9}
x1 x4094 + x1 x4094 x3 2 2
2
5 12 4 19 2 33 x3 x32 1 + x2 x1 + x2 x1 + x2 x1 + 632 + x452 + x407 + x317 + +x677 + x 1 1 1 1 1 2 +x182 + x92 1 1 + x1
2
35
39
43 x518 1
+ x604 + x647 + x690 + x733 x776 1 + 1 1 1 1 475 432 260 174 2 + x1 + x1 + x1 + x1 + x45 1 + x1
45
{0, 1, 7}, {0, 1, 7, 9} {0, 1, 5, 7}
{1, 7, 15} , {1, 5, 7, 15} {0, 1, 7, 15}, {0, 1, 5, 7, 15}
x2 x254 + x21 1
1
51
{1, 3}, {5, 9} {3, 19}, {9, 11} {0, 1, 3}, {0, 1, 3, 17}, {1, 3, 17}
{1, 9}, {9, 19} {3, 11}, {3, 5} {0, 1, 9}, {0, 1, 9, 17} {1, 9, 19}, {0, 1, 9, 19} {1, 9, 17}, {1, 9, 17, 19}
4 17 3 128 x52 x161 + x42 x68 1 1 + x2 x1 + x2 x1 + 2 188 + x2 x35 + x x248 + +x32 x77 2 1 1 + x2 x1 2 1 206 +x2 x197 + x2 x146 + x2 x95 1 1 1 + x1
1
x1 x254 + x1 x254 2 2 x3
2
4 135 + x4 x33 + x52 x130 + x52 x28 1 1 + x2 x1 2 1 2 x145 + x2 x94 + x2 x43 + +x22 x196 + x 1 2 1 2 1 2 1 206 + x104 + x2 +x2 x201 + x2 x99 1 1 + x1 1 1
3
55
x497 + x112 1 1
1
{1}, {3}, {1, 3} {1, 11}, {3, 11}, {1, 3, 11}
57
x511 + x511 2 2 x3
2
{0, 1}, {0, 1, 5} {0, 1, 19}, {0, 1, 5, 19}
{0, 1, 19}, {0, 1, 17, 19} {0, 1, 5, 17, 19}, {0, 1, 5, 11, 17, 19} {0, 1, 9, 17, 19}
{0, 1, 5} {0, 1, 5, 17} {0, 1, 5, 19}
Table 1: Binary cyclic codes with t = 2 and n < 63
7 Conclusions and further research
In this paper we provide general error locator polynomials for all binary cyclic codes with t ≤ 2 and n < 63. In all these cases our polynomials are sparse, suggesting that any decoding procedure based on the general error locator polynomial of a cyclic code might be efficient. In fact, the number of monomials of L appears to grow at most linearly, since |L| ≤ n. We give theoretical explanations for the sparsity of our polynomials in all cases except two. A complete proof for all cases (any n and any t) seems far beyond our means at present, but we plan to investigate further particular cases, hoping eventually to uncover the deeper reason behind the sparsity, whose experimental evidence is apparent (at least in the binary case).
8 Acknowledgments
The authors would like to thank P. Fitzpatrick and C. Traverso for their comments on our work. The second author would like to thank the third author (her supervisor). Special thanks go to M. Giusti, for the use of the computational cluster "MEDICIS", and to the Singular team, for their powerful software package.
References

[1] M. Caboara, T. Mora, "The Chen-Reed-Helleseth-Truong Decoding Algorithm and the Gianni-Kalkbrenner Gröbner Shape Theorem", AAECC, vol. 13, pp. 209-232, 2002.
[2] X. Chen, I. S. Reed, T. Helleseth, T. K. Truong, "Use of Gröbner Bases to Decode Binary Cyclic Codes up to the True Minimum Distance", IEEE Trans. on Inf. Th., vol. 40, pp. 1654-1661, 1994.
[3] A. B. Cooper III, "Direct solution of BCH decoding equations", Comm., Cont. and Sign. Proc., pp. 281-286, 1990.
[4] A. B. Cooper III, "Finding BCH error locator polynomials in one step", Electronics Letters, vol. 27, pp. 2090-2091, 1991.
[5] P. Fitzpatrick, "On the Key Equation", IEEE Trans. on Inf. Th., vol. 41, pp. 1290-1302, 1995.
[6] F. J. MacWilliams, N. J. A. Sloane, The Theory of Error-Correcting Codes, North Holland, 1977.
[7] T. Mora, M. Sala, "On the Groebner bases for some symmetric systems and their application to Coding Theory", JSC, vol. 35, pp. 177-194, 2003.
[8] S. Lin, E. J. Weldon, Jr., "Long BCH codes are bad", Information and Control, vol. 11, no. 4, pp. 445-451, October 1967.
[9] E. Orsini, M. Sala, "Correcting errors and erasures via the syndrome variety", J. Pure Appl. Algebra, accepted for publication.
[10] W. W. Peterson, E. J. Weldon, Jr., Error Correcting Codes, MIT Press, 1972.
[11] M. Sala, "Gröbner bases and distance of cyclic codes", AAECC, vol. 13, no. 2, pp. 137-162, 2002.