An Improved List Decoding Algorithm for the Second Order Reed-Muller Codes and its Applications

Rafaël Fourquet¹ and Cédric Tavernier²

¹ Université de Paris 8 Saint-Denis (MAATICAH), France. Email: [email protected]
² THALES Communications, Colombes, France. Email: [email protected]
Abstract. We propose an algorithm which improves on the Kabatiansky-Tavernier list decoding algorithm for the second order binary Reed-Muller code RM(2, m), of length n = 2^m, and we analyse its theoretical and practical complexity. This improvement yields a better theoretical complexity. Moreover, we conjecture another complexity bound, which matches the results of our simulations. The algorithm has the strong property of being deterministic, and this fact leads us to consider some applications, such as determining lower bounds on the covering radius of the RM(2, m) code.
Key words: list decoding, Reed-Muller codes, covering radius.
1 Introduction
Introduced by P. Elias [1] fifty years ago, the concept of list decoding has recently been revived thanks to M. Sudan's discovery [3] of efficient list decoding algorithms for Reed-Solomon (RS) codes. Despite the similarities between Reed-Solomon and Reed-Muller (RM) codes, no efficient list decoding algorithm for RM codes was known until very recently. O. Goldreich and L.A. Levin [10] suggested a probabilistic algorithm which works in the worst case for the first order binary Reed-Muller code RM(1, m). Subsequently, O. Goldreich, R. Rubinfeld and M. Sudan [9] generalized this algorithm to finite fields and suggested an extension to any order over a large finite field. Then, deterministic algorithms which work for any order were suggested by R. Pellikaan and X.-W. Wu in [4], and subsequently by G. Kabatiansky and C. Tavernier in [2] for the second order and by I. Dumer, G. Kabatiansky and C. Tavernier [5] for any order. This last algorithm considerably improves on the algorithm of [4] by correcting up to the Johnson bound with an almost linear complexity. Here, we are interested in decoding the RM(2, m) code beyond the minimal distance, and we propose an improvement of the algorithm of [2]. This algorithm is related to the techniques used in [10] and [9]; nevertheless, these techniques cannot be adapted in a simple way: the algorithm of [10] is probabilistic and works for the first order Reed-Muller code, whereas we propose a deterministic algorithm for the second order Reed-Muller code. Many difficult problems appear as soon as we want to decode beyond the Johnson bound. There is a natural one-to-one correspondence between the second order Reed-Muller code RM(2, m) and the set of m-variable quadratic Boolean functions: a codeword of RM(2, m) can be represented by the evaluation of a quadratic Boolean function over its definition set. We can then define the Hamming distance d between a Boolean function, which corresponds to the received vector, and the set of quadratic functions RM(2, m). The covering radius ρ(2, m) of the second order Reed-Muller code RM(2, m) is defined as ρ(2, m) = max_{y∈B_m} d(y, RM(2, m)), where B_m is the set of m-variable Boolean functions. This quantity is still unknown for m ≥ 7; however, it has recently been upper bounded in [15], and some lower bounds are also known (see [8], [16]). In this article, we test some cryptographic Boolean functions in order to improve the lower bound on the covering radius up to m = 12. We also present some simulations on the binary symmetric channel. It appears, according to our simulation results, that the performance of this algorithm is comparable to that of Ilya Dumer's algorithm (see [17]) for a "reasonable" number of errors. We do not propose a proof of this observation; nevertheless, it could be a starting point for future work.

Footnote: This article is a rewritten and completed version of [18]. The work of C. Tavernier was supported by DISCREET, IST project no. 027679, funded in part by the European Commission's Information Society Technology 6th Framework Program.
2 The Second Order Reed-Muller Codes

2.1 Basic Definitions and Notations
We denote by F_2 the field with two elements, and by B_m, for some positive integer m, the set of all Boolean functions f : F_2^m → F_2. Such a Boolean function f can be uniquely represented as an m-variable polynomial over F_2 (the Algebraic Normal Form). The algebraic degree of f is the degree of this polynomial. An affine (resp. quadratic) Boolean function is of algebraic degree at most 1 (resp. 2). A Boolean function f can also be identified with the binary vector of length 2^m consisting of all values f(x), x ∈ F_2^m:

f = (f(0, 0, ..., 0), f(1, 0, ..., 0), f(0, 1, ..., 0), f(1, 1, ..., 0), ..., f(1, 1, ..., 1)).

Conversely, any binary vector of length 2^m will also be considered as a Boolean function in B_m. From now on, m will denote a fixed positive integer, and we set n = 2^m. Let f = (f_1, ..., f_n) and g = (g_1, ..., g_n) be two binary vectors of length n, corresponding to m-variable Boolean functions. The Hamming distance d(f, g) between these vectors is the number of coordinates where f and g differ:

d(f, g) = |{i ∈ [1, ..., n] : f_i ≠ g_i}|.
The Hamming weight wt(f) of f is the distance between f and the null vector. We will also denote by f_χ the vector of {−1, 1}^n corresponding to the function (−1)^f (the sign function of f), that is, f_χ = ((−1)^{f_1}, ..., (−1)^{f_n}). For any Boolean function f ∈ B_m, we denote by F(f) the following character sum, related to the Walsh transform of f:

F(f) = Σ_{x∈F_2^m} (−1)^{f(x)} = n − 2wt(f).   (1)
Note that F(f + 1) = −F(f). We will also consider in this paper the usual scalar product, which is well adapted to our problems. Let x = (x_1, ..., x_n) and y = (y_1, ..., y_n) be two vectors of Z^n. We denote by ⟨x, y⟩ the scalar product of x and y, defined by

⟨x, y⟩ = Σ_{i=1}^n x_i y_i.
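As a small illustration (our own sketch, not part of the paper), the truth-table representation, the character sum of (1) and the scalar product of sign vectors can be written directly:

```python
# Minimal sketch: Boolean functions as length-n binary vectors (truth tables),
# the character sum F(f) = sum_x (-1)^f(x), and the scalar product of the
# sign vectors f_chi, which equals n - 2*d(f, g).
def F(f):
    """Character sum F(f) over the truth table f."""
    return sum((-1) ** b for b in f)

def chi(f):
    """Sign vector f_chi in {-1, 1}^n."""
    return [(-1) ** b for b in f]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

f = [0, 1, 1, 0, 0, 0, 1, 1]          # some Boolean function on F_2^3 (n = 8)
g = [0, 0, 1, 0, 1, 0, 1, 0]
n = len(f)
wt = sum(f)                            # Hamming weight
d = sum(a != b for a, b in zip(f, g))  # Hamming distance
assert F(f) == n - 2 * wt              # relation (1)
assert dot(chi(f), chi(g)) == n - 2 * d
```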
The usual Euclidean norm satisfies the relation ||x||² = ⟨x, x⟩, and the useful Cauchy-Schwartz inequality gives ⟨x, y⟩² ≤ ||x||² · ||y||². We also have, for f, g ∈ B_m:

⟨f_χ, g_χ⟩ = F(f + g) = n − 2d(f, g),   (2)

||f_χ||² = n.

Let us now recall the definition of the Reed-Muller codes of orders 1 and 2. The first-order binary Reed-Muller code RM(1, m), of length n, consists of vectors a = (..., a(x_1, ..., x_m), ...), where

a(x) = a(x_1, ..., x_m) = a_0 + a_1 x_1 + ... + a_i x_i + ... + a_m x_m,  a_i ∈ F_2, 0 ≤ i ≤ m,

is any affine Boolean function, and where (x_1, ..., x_m) runs over all 2^m points of the Boolean cube of dimension m. This code has dimension m + 1 and minimal distance n/2. We will also denote by RM(1, m)^# the subset of RM(1, m) consisting of all linear functions (with a null constant term a_0). Now, the second-order binary Reed-Muller code RM(2, m), of length n, consists of vectors q = (..., q(x_1, ..., x_m), ...), where
q(x) = q(x_1, ..., x_m) = a_0 +
Σ_{i=1}^m a_i x_i + Σ_{1≤i<j≤m} q_{i,j} x_i x_j   (3)

is any quadratic Boolean function (with a_i, q_{i,j} ∈ F_2). For a received vector y ∈ F_2^n and ε > 0, we denote by L_{y,ε} the list of codewords q ∈ RM(2, m) such that d(y, q) ≤ n(1/2 − ε), or equivalently F(y + q) ≥ 2nε, and by A_w the number of codewords of weight w in RM(2, m). The classical Johnson bound controls |L_{y,ε}| for ε > 1/(2√2); the following theorem also provides bounds for smaller values of ε.

Theorem 2 ([2]). Let j be an integer such that 1 ≤ j ≤ ⌊m/2⌋. Then, for any received vector y ∈ F_2^n and for any ε > 2^{−1−j/2}, we have

|L_{y,ε}| ≤ [ Σ_{i=0}^{j−1} A_{2^{m−1}−2^{m−1−i}} (2^{−i} − 2^{−j}) ] / (4ε² − 2^{−j}).   (4)
In particular, the case j = 1 corresponds to the Johnson bound.

Proof. Let L = |L_{y,ε}|. By definition of L_{y,ε}, we have 2nεL ≤ Σ_{q∈L_{y,ε}} F(y + q) = ⟨y_χ, Σ_{q∈L_{y,ε}} q_χ⟩. Hence, using the Cauchy-Schwartz inequality, we get

4n²ε²L² ≤ ⟨y_χ, Σ_{q∈L_{y,ε}} q_χ⟩² ≤ ||y_χ||² × ||Σ_{q∈L_{y,ε}} q_χ||²,

with ||y_χ||² = n, which implies that

4nε²L² ≤ ⟨Σ_{q∈L_{y,ε}} q_χ, Σ_{q∈L_{y,ε}} q_χ⟩.   (5)
Denoting, for every q ∈ L_{y,ε}, by E(q) the set {p + q ∈ RM(2, m) : p ∈ L_{y,ε}}, of size L, we get

S(y) := ⟨Σ_{q∈L_{y,ε}} q_χ, Σ_{q∈L_{y,ε}} q_χ⟩ = Σ_{q∈L_{y,ε}} Σ_{f∈E(q)} Σ_{x∈F_2^m} (−1)^{f(x)},

with Σ_{x∈F_2^m} (−1)^{f(x)} = F(f) = n − 2wt(f). In order to upper bound S(y), we will assume that for any q ∈ L_{y,ε}, the set E(q) contains all codewords of weight at most 2^{m−1} − 2^{m−j} (the worst case). Then, the remaining codewords in E(q) have a weight strictly greater than 2^{m−1} − 2^{m−j}, that is, at least 2^{m−1} − 2^{m−j−1}. Then, for any j such that 1 ≤ j ≤ ⌊m/2⌋, we have

S(y) ≤ nL Σ_{i=0}^{j−1} A_{2^{m−1}−2^{m−1−i}}/2^i + (L² − L Σ_{i=0}^{j−1} A_{2^{m−1}−2^{m−1−i}}) n/2^j,

and finally, with (5), we get that L ≤ [ Σ_{i=0}^{j−1} A_{2^{m−1}−2^{m−1−i}} (2^{−i} − 2^{−j}) ] / (4ε² − 2^{−j}). □
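For small m, the weight-distribution values A_w appearing in these bounds can be checked by brute force. The following sketch is ours (not from the paper); it enumerates RM(2, m) directly and verifies, in particular, the bound A_{2^{m−1}−2^{m−1−i}} ≤ 2^{2mi} that is invoked later in the proof of Corollary 2:

```python
from itertools import combinations, product
from collections import Counter

def rm2_weight_distribution(m):
    """Brute-force weight distribution of RM(2, m); feasible only for small m,
    since the code has 2^(1 + m + m(m-1)/2) codewords."""
    n = 1 << m
    # generator truth tables: the constant 1, the x_i, and the x_i * x_j
    gens = [[1] * n]
    gens += [[(x >> i) & 1 for x in range(n)] for i in range(m)]
    gens += [[((x >> i) & (x >> j)) & 1 for x in range(n)]
             for i, j in combinations(range(m), 2)]
    dist = Counter()
    for coeffs in product([0, 1], repeat=len(gens)):
        word = [0] * n
        for c, g in zip(coeffs, gens):
            if c:
                word = [a ^ b for a, b in zip(word, g)]
        dist[sum(word)] += 1
    return dist

dist = rm2_weight_distribution(4)
assert sum(dist.values()) == 2 ** 11            # RM(2,4) has dimension 11
assert set(dist) == {0, 4, 6, 8, 10, 12, 16}    # weights 2^3 +- 2^(3-h), 0, 16
assert dist[4] == dist[12]                      # complement symmetry
assert dist[4] <= 2 ** (2 * 4 * 1)              # A_{2^(m-1)-2^(m-2)} <= 2^(2m)
```

(The 140 weight-4 codewords of RM(2, 4) are exactly the indicators of the 2-dimensional flats of F_2^4.)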
3 Deterministic List Decoding Algorithm for the Second Order Reed-Muller Codes
We shall describe in this section an algorithm (called the "sums-algorithm") whose idea is to construct recursively the quadratic functions belonging to the list L_{y,ε} = {q ∈ RM(2, m) : F(y + q) ≥ 2nε}. This algorithm uses the classical "divide and conquer" method. Let us first introduce a useful notation: for f ∈ B_m, ν ∈ [1, ..., m] and s ∈ F_2^{m−ν}, let f_s ∈ B_ν be the restriction of f to the "facet"

S_s := {(x, s) ∈ F_2^m | x ∈ F_2^ν},

that is, f_s(x) = f(x, s). For the sake of compactness, if s = (s_1, s_2, ..., s_{m−ν}) ∈ F_2^{m−ν} and δ ∈ F_2, we will denote by "δs" (that is, "0s" or "1s") the vector (δ, s_1, s_2, ..., s_{m−ν}) in F_2^{m−ν+1}. Let us note that

F(f) = Σ_{s∈F_2^{m−ν}} Σ_{x∈F_2^ν} (−1)^{f_s(x)} = Σ_{s∈F_2^{m−ν}} F(f_s),   (6)

and that, for s ∈ F_2^{m−ν},

F(f_s) = Σ_{x∈F_2^{ν−1}} ((−1)^{f_s(x,0)} + (−1)^{f_s(x,1)}) = F(f_{0s}) + F(f_{1s}).   (7)
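The decompositions (6) and (7) can be checked numerically; in the sketch below (ours, not from the paper), x_1 is the fastest-varying coordinate, so that the restriction f_s is simply a contiguous chunk of the truth table:

```python
# Numerical check of the facet decomposition (6) and the splitting (7).
import random

def F(f):
    return sum((-1) ** b for b in f)

def facet(f, nu, s):
    """Restriction f_s of f to S_s = {(x, s)}: chunk number s, of size 2^nu."""
    return f[s << nu:(s + 1) << nu]

m, nu = 5, 3
f = [random.randint(0, 1) for _ in range(1 << m)]
# (6): F(f) = sum over s in F_2^{m-nu} of F(f_s)
assert F(f) == sum(F(facet(f, nu, s)) for s in range(1 << (m - nu)))
# (7): F(f_s) = F(f_{0s}) + F(f_{1s}); "delta s" prepends delta, i.e. 2*s + delta
for s in range(1 << (m - nu)):
    assert F(facet(f, nu, s)) == (F(facet(f, nu - 1, 2 * s)) +
                                  F(facet(f, nu - 1, 2 * s + 1)))
```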
The proposed sums-algorithm will exploit the relation (6), with f = y + q, by computing an upper bound of the quantity F(q_s + y_s). The following definition allows us to give a nice description of the restriction q_s of q to a given facet S_s.

Definition 1. Let q(x_1, ..., x_m) = A_1(x_1, ..., x_m) + Σ_{i=2}^m x_i A_i(x_1, ..., x_{i−1}) ∈ RM(2, m) be a quadratic Boolean function, with A_1 affine and each A_i, i ≥ 2, linear. For 2 ≤ ν ≤ m, we define the ν-th prefix q^ν ∈ RM(2, ν)^# of q as the quadratic part of q depending only on the first ν variables:

q^ν(x_1, ..., x_ν) = x_2 A_2(x_1) + ... + x_ν A_ν(x_1, ..., x_{ν−1}).

At the ν-th step of the algorithm, we will determine a list L^ν_{y,ε} of candidates which "could", but may not, coincide with the ν-th prefix of an element q ∈ L_{y,ε}. By definition, for ν ≤ m and for s ∈ F_2^{m−ν}, there exists an affine function l_{q,s} ∈ RM(1, ν) such that q_s = q^ν + l_{q,s}, where l_{q,s} is the restriction to S_s of the function A_1 + Σ_{i | s_i = 1} A_{ν+i}. As a consequence, we can rewrite (6):

F(y + q) = Σ_{s∈F_2^{m−ν}} F(y_s + q^ν + l_{q,s}).   (8)

Now, for each s ∈ F_2^{m−ν}, we can upper bound F(y_s + q^ν + l_{q,s}) by max_{l∈RM(1,ν)} F(y_s + q^ν + l). Finally, we deduce that if q ∈ L_{y,ε}, then:

Γ_y^ν(q) := Σ_{s∈F_2^{m−ν}} max_{l∈RM(1,ν)} F(y_s + q^ν + l) ≥ F(y + q) ≥ 2nε.   (9)
The key point is that Γ_y^ν(q) depends only on the prefix q^ν of q. This means that a function q^ν ∈ RM(2, ν)^# could be the ν-th prefix of a solution q ∈ L_{y,ε} only if it satisfies the "Γ_y^ν-criterion" implied by (9), namely only if Γ_y^ν(q^ν) ≥ 2nε. This motivates the introduction of the list

L^ν_{y,ε} = {q^ν ∈ RM(2, ν)^# : Γ_y^ν(q^ν) ≥ 2nε},

consisting of every potential prefix of any function q ∈ L_{y,ε}. Note also that a function q^ν = q^{ν−1} + x_ν A_ν ∈ RM(2, ν)^# can be in L^ν_{y,ε} only if its prefix q^{ν−1} is in L^{ν−1}_{y,ε} since, by (7),

max_{l∈RM(1,ν)} F(y_s + q^ν + l) = max_{l∈RM(1,ν)} (F(y_{0s} + q^{ν−1} + l_0) + F(y_{1s} + q^{ν−1} + A_ν + l_1))
≤ max_{l∈RM(1,ν−1)} F(y_{0s} + q^{ν−1} + l) + max_{l∈RM(1,ν−1)} F(y_{1s} + q^{ν−1} + l),

where l_0 and l_1 denote the restrictions of l to the facets x_ν = 0 and x_ν = 1.

The proposed sums-algorithm will recursively determine the intermediate lists L^ν_{y,ε}, for 2 ≤ ν ≤ m, in the following way: for each q^{ν−1} ∈ L^{ν−1}_{y,ε}, and for each A_ν ∈ RM(1, ν−1)^#, the corresponding "successor" q^{ν−1} + x_ν A_ν is tested against the Γ_y^ν-criterion to decide whether this candidate belongs or not to L^ν_{y,ε}. To be efficient, the algorithm should generate rather small intermediate lists. In the next section, we give upper bounds for the sizes of the lists L^ν_{y,ε}. In Sect. 4, we give a more precise description of the sums-algorithm and give its complexity.

3.1 Size of the Intermediate Lists
Theorem 3. For 2 ≤ ν ≤ m, let j be such that 1 ≤ j ≤ ⌊ν/2⌋. For any vector y ∈ F_2^n and any ε > 2^{−1−j/2}, we have

|L^ν_{y,ε}| ≤ [ Σ_{i=0}^{j−1} A_{2^{ν−1}−2^{ν−1−i}} (2^{−i} − 2^{−j}) ] / (4ε² − 2^{−j}),

where A_w here denotes the number of codewords of weight w in RM(2, ν).
Proof. Let L = |L^ν_{y,ε}|. For any q ∈ L^ν_{y,ε} and for any s ∈ F_2^{m−ν}, we can fix an arbitrary element c_s^q ∈ RM(1, ν) that satisfies max_{l∈RM(1,ν)} F(y_s + q + l) = F(y_s + q + c_s^q). Then we define the function h^q : F_2^ν × F_2^{m−ν} → F_2 by h^q(x, s) = q(x) + c_s^q(x). Then, by construction, we have

2nεL ≤ Σ_{q∈L^ν_{y,ε}} Σ_{s∈F_2^{m−ν}} F(y_s + h^q_s) = ⟨y_χ, Σ_{q∈L^ν_{y,ε}} h^q_χ⟩.

Hence, using the Cauchy-Schwartz inequality, we get that

4n²ε²L² ≤ ⟨y_χ, Σ_{q∈L^ν_{y,ε}} h^q_χ⟩² ≤ ||y_χ||² · ||Σ_{q∈L^ν_{y,ε}} h^q_χ||².   (10)

Denoting E_s(q) = {p + c_s^p + q + c_s^q ∈ RM(2, ν) : p ∈ L^ν_{y,ε}}, we get that

S(y) := ||Σ_{q∈L^ν_{y,ε}} h^q_χ||² = Σ_{q∈L^ν_{y,ε}} Σ_{s∈F_2^{m−ν}} Σ_{f∈E_s(q)} Σ_{x∈F_2^ν} (−1)^{f(x)},
with Σ_{x∈F_2^ν} (−1)^{f(x)} = F(f) = 2^ν − 2wt(f). Then, if we assume that for any s ∈ F_2^{m−ν} and any q ∈ L^ν_{y,ε}, the set E_s(q) contains all codewords of weight at most 2^{ν−1} − 2^{ν−j}, we know that the remaining codewords in E_s(q) have a weight strictly greater than 2^{ν−1} − 2^{ν−j}, that is, at least 2^{ν−1} − 2^{ν−j−1}. Hence, using the knowledge of the weight distribution of the RM(2, ν) code, we obtain, for any j such that 1 ≤ j ≤ ⌊ν/2⌋, any q ∈ L^ν_{y,ε} and any s ∈ F_2^{m−ν}:

Σ_{f∈E_s(q)} F(f) ≤ 2^ν Σ_{i=0}^{j−1} A_{2^{ν−1}−2^{ν−1−i}}/2^i + (L − Σ_{i=0}^{j−1} A_{2^{ν−1}−2^{ν−1−i}}) 2^{ν−j}.
Then, by summing over q ∈ L^ν_{y,ε} and s ∈ F_2^{m−ν}, we get

S(y) ≤ nL Σ_{i=0}^{j−1} A_{2^{ν−1}−2^{ν−1−i}}/2^i + (L² − L Σ_{i=0}^{j−1} A_{2^{ν−1}−2^{ν−1−i}}) n/2^j.

With equation (10), we get L ≤ [ Σ_{i=0}^{j−1} A_{2^{ν−1}−2^{ν−1−i}} (2^{−i} − 2^{−j}) ] / (4ε² − 2^{−j}), and the theorem is proved. □

Corollary 2. For any ε > 0, let j ≥ 1 be such that 2^{−1−(j−1)/2} ≥ ε > 2^{−1−j/2}. Then, for any ν ≥ 2 such that j ≤ ⌊ν/2⌋,

|L^ν_{y,ε}| ≤ 2^{1+2ν(j−1)} / (4ε² − 2^{−j}).

Otherwise, if j > ⌊ν/2⌋, |L^ν_{y,ε}| ≤ 2^{j(2j−1)}.

Proof. The first claim is a direct consequence of Corollary 1: for 0 ≤ i ≤ j − 1, A_{2^{ν−1}−2^{ν−1−i}} (2^{−i} − 2^{−j}) < A_{2^{ν−1}−2^{ν−1−i}} ≤ 2^{2νi}, and Σ_{i=0}^{j−1} 2^{2νi} < 2^{1+2ν(j−1)}. The second inequality comes from the fact that for the first steps ν such that ν < 2j, the list is, in the worst case (i.e. when the list is exhaustive), of size 2^{ν(ν−1)/2}. □

Remark 1. If we denote θ = ε − 2^{−1−j/2} > 0, we have 4ε² − 2^{−j} = 4θ(2^{−j/2} + θ) = O(θ 2^{−j/2}), and the first inequality of the corollary gives |L^ν_{y,ε}| = O(2^{2ν(j−1)+j/2}/θ). In particular, |L^ν_{y,ε}| = O(1/θ) for ε > 1/√8 and |L^ν_{y,ε}| = O(2^{2ν}/θ) for ε > 1/4.
4 Description and Complexity of the sums-algorithm

In this section, we describe more precisely the sums-algorithm, in a recursive manner. In particular, we describe precisely the procedure to follow in order to apply the Γ-criterion, and give the corresponding complexity. As the sums-algorithm consists in applying this criterion to the successors of each element of the intermediate lists, whose sizes were bounded in the previous section, this yields the complexity of the whole algorithm. We also consider the special case of decoding up to the Johnson bound (Sect. 4.3).
4.1 Recursivity

The Intermediate Lists. Recall that, at the ν-th step, in order to determine L^ν_{y,ε}, we have to apply the Γ_y^ν-criterion to each element q^ν ∈ RM(2, ν)^# of the form q^{ν−1} + x_ν A_ν (for all A_ν ∈ RM(1, ν−1)^#) whose prefix q^{ν−1} is in L^{ν−1}_{y,ε}. The problem is that for small values of ε, the intermediate list L^ν_{y,ε} may be too big (much bigger than the final list L_{y,ε}) to be stored in memory. That is why we give a recursive description, in the sense that for a given element q^{ν−1} ∈ L^{ν−1}_{y,ε}, once all of its valid successors are determined in L^ν_{y,ε}, the algorithm chooses one of them, say q^ν, and then determines all the valid successors of this q^ν before examining another selected candidate in L^{ν−1}_{y,ε}. That means that the algorithm never has to store a complete list L^ν_{y,ε}, but only a "partial" list L^ν consisting of the linear functions A_ν corresponding to the valid successors of q^{ν−1}. Figure 1 represents these lists. As the size |L^ν| of L^ν is less than or equal to 2^{ν−1}, and an element A_ν is stored with less than m bits, the total amount of memory needed to manage the lists will be less than m Σ_{ν=2}^m 2^{ν−1} < m 2^m bits. Another non-negligible advantage is that, proceeding this way, we are able to exploit the recursive structure of the RM(2, m) code to compute the criterion in a faster way.
The Γ-criterion. Let us recall that, to apply the Γ_y^ν-criterion to each successor q^ν = q^{ν−1} + x_ν A_ν of q^{ν−1}, we have to evaluate, for all s ∈ F_2^{m−ν}, the quantity

max_{l∈RM(1,ν)} F(y_s + q^ν + l) = max_{l∈RM(1,ν)^#, δ∈F_2} F(y_s + q^ν + l + δ) = max_{l∈RM(1,ν)^#} |F(y_s + q^ν + l)|.
[Fig. 1. sums with recursive lists L^ν: a tree whose branches at level ν are the elements A_ν of the partial list L^ν, e.g. the path x_2(x_1) + x_3(0) + x_4(x_2 + x_3) + x_5(x_1 + x_4). Diagram omitted.]

This seems to imply that we need to know the value of F(y_s + q^{ν−1} + x_ν A_ν + l) for each A_ν ∈ RM(1, ν−1)^# and each l ∈ RM(1, ν)^#. But we can in fact do better by separating the character sum according to whether x_ν = 0 or x_ν = 1,
using (7):

max_{l∈RM(1,ν)} F(y_s + q^ν + l)
= max_{l∈RM(1,ν−1)^#, l_ν,δ∈F_2} (F(y_{0s} + q^{ν−1} + l + δ) + F(y_{1s} + q^{ν−1} + A_ν + l + l_ν + δ))
= max_{l∈RM(1,ν−1)^#} (|F(y_{0s} + q^{ν−1} + l)| + |F(y_{1s} + q^{ν−1} + A_ν + l)|).

The last equality is obtained by fixing l_ν so that the two terms have the same sign, and δ so that they are nonnegative, which obviously gives the greatest value. We now assume that all the values F^{ν−1}[s, u] := F(y_s + q^{ν−1} + u), for s ∈ F_2^{m−ν+1} and u ∈ RM(1, ν−1)^#, have been stored at the previous step ν−1 in an array F^{ν−1} of size 2^{m−ν+1} × 2^{ν−1} = 2^m = n. Then, denoting Γ^ν[A_ν] = Γ_y^ν(q^{ν−1} + x_ν A_ν) for simplicity, the Γ_y^ν-criterion can be expressed as:

Γ^ν[A_ν] = Σ_{s∈F_2^{m−ν}} M_s[A_ν] ≥ 2nε,

where

M_s[A_ν] := max_{u,v∈RM(1,ν−1)^#, u+v=A_ν} (|F^{ν−1}[0s, u]| + |F^{ν−1}[1s, v]|).   (11)

We see that the complexity to compute M_s[A_ν] for all A_ν ∈ RM(1, ν−1)^# is the number of such A_ν's times the cost of finding a maximum, that is O(2^{ν−1} × 2^{ν−1}) = O(2^{2ν}). We deduce that Γ^ν[A_ν] can be known for all A_ν with complexity O(2^{m−ν} × 2^{2ν}) = O(2^{m+ν}). However, we introduced the notation M_s because
we are able to compute M_s in a much faster way, with a conjectured complexity in O(2^ν). This improvement is discussed in Sect. 4.2. At this point, all the functions A_ν for which Γ^ν[A_ν] ≥ 2nε are stored in the list L^ν, and the algorithm is run successively at step ν+1 for all the corresponding valid candidates q^{ν−1} + x_ν A_ν ∈ L^ν_{y,ε}. As we assumed that F^{ν−1} was computed at step ν−1, we now need to compute the array F^ν corresponding to the chosen A_ν. The following equalities show that it is done with complexity O(n). Let s ∈ F_2^{m−ν} and u = u_{ν−1} + x_ν u_ν ∈ RM(1, ν)^#. We have:

F^ν[s, u] = F(y_s + q^{ν−1} + x_ν A_ν + u_{ν−1} + x_ν u_ν)
= F(y_{0s} + q^{ν−1} + u_{ν−1}) + F(y_{1s} + q^{ν−1} + A_ν + u_{ν−1} + u_ν)
= F^{ν−1}[0s, u_{ν−1}] + (−1)^{u_ν} F^{ν−1}[1s, A_ν + u_{ν−1}].

In the case where ν = m, F^m contains the values of F(y + q^m + u) for all u ∈ RM(1, m)^#. If F^m[u] ≥ 2nε (resp. ≤ −2nε), then q^m + u (resp. q^m + u + 1) belongs to L_{y,ε}. Note that all the arrays F^2, ..., F^{ν−1} have to be kept simultaneously in memory: once all elements of the current partial list L^ν are examined, the next element of the previous list L^{ν−1} is examined, using F^{ν−2}, and so on. So the memory complexity to manage the recursive arrays F^i is O(m² 2^m) bits. Nevertheless, we may prefer an O(m 2^m) memory complexity (it may be useful for large values of m, e.g. m ≈ 26) by computing F^ν in a non-recursive manner, with a Fast Fourier Transform (FFT), at the cost of a time complexity in O(2^{m−ν} × ν 2^ν) = O(ν 2^m) instead of O(2^m).

sums. We now summarize the recursive version of the sums-algorithm. It takes as inputs, at the ν-th step, a function q^{ν−1} ∈ RM(2, ν−1)^#, and the corresponding array F^{ν−1} (note that the information given by the code-vector y is indirectly contained in F^{ν−1}). The output is the set of elements of L_{y,ε} admitting q^{ν−1} as a prefix.
Algorithm sums
input: ε, ν, m, q^{ν−1}, F^{ν−1}
output: valid successors in L_{y,ε} of q^{ν−1}
– if ν ≤ m then:
  – for each s ∈ F_2^{m−ν}, compute M_s from F^{ν−1},
  – deduce Γ^ν = Σ_s M_s,
  – and the list L^ν = {A_ν ∈ RM(1, ν−1)^# : Γ^ν[A_ν] ≥ 2nε};
  – for each A_ν ∈ L^ν do:
    – compute F^ν from F^{ν−1},
    – call sums with input ε, ν+1, m, q^{ν−1} + x_ν A_ν, F^ν.
– if ν = m + 1 then:
  – for each u ∈ RM(1, m)^#:
    – if |F^m[u]| ≥ 2nε, put the corresponding element in L_{y,ε}.
We have implemented the algorithm in a non-recursive manner for efficiency, but the gain is not significant.
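As an illustration, here is a minimal, self-contained Python sketch of the sums-algorithm above. It is our own re-implementation with our own representation choices (a linear form on the first k variables is an integer bitmask u with u(x) = ⟨u, x⟩ mod 2, x_1 is the fastest-varying coordinate), it is not the authors' code, and it omits the sorting optimisation of Sect. 4.2:

```python
# Illustrative sums-algorithm for small m: recursive pruning of quadratic
# prefixes q^nu = [A_2, ..., A_nu] via the Gamma-criterion (9).
def sums_decode(y, m, eps):
    """Return all (prefix, u, c) such that q = q^m + <u,x> + c is in RM(2,m)
    and F(y + q) >= 2*n*eps, i.e. d(y, q) <= n*(1/2 - eps)."""
    n = 1 << m
    thr = 2 * n * eps
    # F^1[s][u]: F(y_s + u) for s in F_2^{m-1}, u in {0, x_1}
    F1 = [[(-1) ** y[2 * s] + (-1) ** y[2 * s + 1],
           (-1) ** y[2 * s] - (-1) ** y[2 * s + 1]] for s in range(n // 2)]
    out = []

    def rec(nu, Fprev, prefix):
        if nu == m + 1:
            for u, val in enumerate(Fprev[0]):
                if abs(val) >= thr:          # q^m + u (+1 if val < 0) is a solution
                    out.append((list(prefix), u, int(val < 0)))
            return
        half = 1 << (nu - 1)
        for A in range(half):                # candidate A_nu in RM(1, nu-1)#
            # Gamma-criterion (9), computed facet by facet as in (11)
            gamma = sum(max(abs(Fprev[2 * s][u]) + abs(Fprev[2 * s + 1][u ^ A])
                            for u in range(half))
                        for s in range(len(Fprev) // 2))
            if gamma >= thr:
                # F^nu[s, u' + x_nu*u_nu] =
                #     F^{nu-1}[0s, u'] + (-1)^{u_nu} F^{nu-1}[1s, A + u']
                Fnu = [[Fprev[2 * s][u] + Fprev[2 * s + 1][A ^ u] for u in range(half)] +
                       [Fprev[2 * s][u] - Fprev[2 * s + 1][A ^ u] for u in range(half)]
                       for s in range(len(Fprev) // 2)]
                rec(nu + 1, Fnu, prefix + [A])

    rec(2, F1, [])
    return out

def quad_vector(m, prefix, u, c):
    """Truth table of q(x) = sum_i x_i*<A_i, x> + <u, x> + c (x_1 fastest)."""
    par = lambda t: bin(t).count("1") & 1
    vec = []
    for x in range(1 << m):
        b = c ^ par(u & x)
        for i, A in enumerate(prefix, start=2):
            if (x >> (i - 1)) & 1:
                b ^= par(A & x)
        vec.append(b)
    return vec

# sanity check: decode a noiseless and a one-error version of q = x1*x2 + x3
q = quad_vector(3, [1, 0], 4, 0)
assert [quad_vector(3, *r) for r in sums_decode(q, 3, 0.5)] == [q]
y = q[:]; y[0] ^= 1                       # one error, d(y, q) = 1
assert q in [quad_vector(3, *r) for r in sums_decode(y, 3, 3 / 8)]
```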
Let us now give the complexity of sums. For each q^ν ∈ L^ν_{y,ε}, we need to compute F^ν and Γ^{ν+1}. The corresponding complexity, in bit operations, is then

O(|L^ν_{y,ε}| × ν × (2^m + 2^{m+ν})),

where ν accounts for the size in bits of q^ν, the term 2^m for the computation of F^ν and the term 2^{m+ν} for that of Γ^{ν+1}. We deduce a total complexity in O(Σ_{ν=2}^m ν |L^ν_{y,ε}| 2^m (1 + 2^ν)).

Remark 2. If we compute F^ν with an FFT, the term 1 + 2^ν has to be replaced by ν + 2^ν: the gain seems not really important. But we will see in Sect. 4.2 that in practice, the complexity for the computation of Γ^ν is in O(2^m). Avoiding the FFT, which would otherwise be the most costly operation, is then interesting.

Theorem 4. For any received vector y, the proposed sums-algorithm evaluates the list of all vectors q ∈ RM(2, m) such that d(y, q) ≤ n(1/2 − ε) with complexity

O(n² log(n) Σ_{ν=2}^m |L^ν_{y,ε}|).   (12)

In particular, denoting ε = a + θ with θ > 0, the complexity equals O(n² log(n)²/θ) for a = 1/√8; O(n⁴ log(n)²/θ) for a = 1/4; and O(n^{2j} 2^{j/2} log(n)²/θ) for a = 2^{−1−j/2} and j ∈ {1, ..., ⌊m/2⌋} fixed correspondingly.

4.2 A Sorting Technique
The aim of this section is to present an algorithm to compute the quantity M_s[A_ν] (defined in (11)) for all A_ν ∈ RM(1, ν−1)^#, with a better complexity than O(2^{2ν}). Denoting M = M_s, V_0[u] = |F^{ν−1}[0s, u]| and V_1[v] = |F^{ν−1}[1s, v]|, the problem is to find

M[A] = max_{u+v=A} (V_0[u] + V_1[v]),

for A, u, v ∈ RM(1, ν−1)^#. Instead of choosing A (2^{ν−1} possibilities) and computing the corresponding maximum (O(2^{ν−1}) complexity), the main idea is to evaluate the values of the set E = {V_0[u] + V_1[v] | u, v ∈ RM(1, ν−1)^#} in decreasing order, until all the M[A] are known. More precisely, the function u + v is computed only if the corresponding sum V_0[u] + V_1[v] belongs to the subset {V ∈ E | V ≥ min_A M[A]}. In practice, only O(2^ν) such sums need to be computed. Figure 2 illustrates the values of E for given V_0 and V_1 (corresponding to a random codeword with random noise). The first observation is that, for ν ≥ 3, all the values V_0[u] (resp. V_1[v]) are equal to each other modulo 4. This can be proved by using relation (2) and the fact that the Hamming weight of u is equal to 0 or 2^{ν−2}. For an efficient implementation, it is then interesting to divide V_0 and V_1 by 4, without loss of information provided that the sum "rem" of the remainders of this division is stored.
[Fig. 2. V_0 + V_1 for sorted arrays V_0 and V_1 (with ν − 1 = 5). Large array of example values omitted.]
The second observation is that, after this division, the list of values of V_0 (resp. V_1) contains, many times each, almost all the integers between 0 and the maximum max_0 of V_0 (resp. max_1 of V_1). So the idea is to group the functions according to the corresponding value of V_0 (resp. V_1): let us set P_i[x] = {u ∈ RM(1, ν−1)^# : V_i[u] = x}, for i = 0, 1. Figure 3 illustrates the same array as in Figure 2 after division by 4 and grouping in P_i.

V_0 \ V_1    3  2  1  0
    4        7  6  5  4
    3        6  5  4  3
    2        5  4  3  2
    1        4  3  2  1
    0        3  2  1  0

Fig. 3. V_0 + V_1 for sorted arrays V_0 and V_1 after division and grouping.

Now, we obtain the greatest value of E by choosing u ∈ P_0[max_0] and v ∈ P_1[max_1], and deduce that for those u's and v's, M[u + v] = 4 × (max_0 + max_1) + rem. Then, the functions u, v such that M[u + v] = 4 × (max_0 + max_1 − 1) + rem (the "second" potential value for M[u + v]) are obtained by choosing (u, v) either in P_0[max_0] × P_1[max_1 − 1] or in P_0[max_0 − 1] × P_1[max_1]. Continuing this process, M[A] is determined for all A.
Algorithm (computation of M_s)
input: ν, V_0, V_1
output: determination of M
rem ← (V_0[0] mod 4) + (V_1[0] mod 4)
max_0, max_1 ← 0
for each u ∈ RM(1, ν−1)^# do:
 – x ← ⌊V_0[u]/4⌋ and y ← ⌊V_1[u]/4⌋
 – put u in P_0[x] and in P_1[y]
 – max_0 ← max(max_0, x) and max_1 ← max(max_1, y)
z ← max_0 + max_1
while M is not fully determined do:
 – for each x, y such that 0 ≤ x ≤ max_0, 0 ≤ y ≤ max_1 and x + y = z:
   – for each u ∈ P_0[x] and each v ∈ P_1[y] do:
     (⋆) if M[u + v] is still unknown, then M[u + v] ← 4 × z + rem
 – z ← z − 1

Clearly, the construction of the sets P_0 and P_1 has a time (and memory) complexity which is linear in 2^{ν−1}. Hence, the complexity of this algorithm is essentially determined by the number of times the inner-most loop (marked by ⋆) is run. We have counted this number during the decoding of some functions, and divided it by 2^{ν−1}. The following array represents the mean of these numbers, for each step ν. The Boolean functions f_1, f_2, f_3 and f_4 are the received vectors. The decoding radius was chosen equal to the distance of these functions from the RM(2, m) code.
3
4
5
6
7
f1 f2 f3 f4
1.66 1.820 1.75 1.77
1.750 1.727 1.92 1.94
2.108 2.029 2.00 1.39
2.542 2.437 1.63 1.33
2.361 3.228 1.45 1.30
8
9
10
11
12
13
14
15
16
17
18
19
20
1.278 3.917 2.727 1.325 1.34 1.32 1.40 1.31 1.41 1.43 1.46 1.18 1.45 1.48 1.04 1.32 1.34 1.34 1.35 1.38 1.38 1.38 1.32 1.37 1.29 1.38 1.002 1.52
The function f_1 (resp. f_2) is the function trace(x^{254}) : F_{2^8} ≃ F_2^8 → F_2 in 8 variables (resp. trace(x^{1022}) in 10 variables), i.e. the trace of the inverse function. It was decoded with ε = 0.17969 (resp. ε = 0.11718). The function f_3 (resp. f_4) is a random quadratic function in 18 (resp. 20) variables, added to an error vector of weight 65536 (resp. 200000), corresponding to ε = 1/4 (resp. ε = 0.30927). These experimental results show that in practice, this algorithm is linear in 2^ν. However, we could not prove this good complexity.

Conjecture 1. With this sorting algorithm, the proposed sums-algorithm evaluates, for any received vector y, the list of all vectors q ∈ RM(2, m) such that d(y, q) ≤ n(1/2 − ε) with complexity O(n log(n) Σ_{ν=2}^m |L^ν_{y,ε}|).

Remark 3. Experimentally, the running time of a decoding has been verified to be proportional to n times the sum of the sizes of the lists.
Footnote: For efficiency, a preliminary step should be to determine in advance the sizes of each P_i[x], so that these lists can be stored in an array of total size Σ_x |P_i[x]| = 2^{ν−1}.
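The grouping technique of this section can be sketched in Python as follows. This is our own simplified version, not the authors' implementation: it buckets the raw values directly, skipping the division by 4 and the "rem" bookkeeping, which only save a constant factor. It is checked against the naive O(2^{2ν}) maximisation:

```python
import random

def compute_M(V0, V1):
    """M[A] = max over u ^ v = A of V0[u] + V1[v], computed by scanning the
    candidate bucket sums z = x + y in decreasing order: the first time a
    mask A = u ^ v is seen, z is the largest achievable sum for it."""
    nmask = len(V0)
    max0, max1 = max(V0), max(V1)
    P0 = [[] for _ in range(max0 + 1)]   # P_i[x] = {u : V_i[u] = x}
    P1 = [[] for _ in range(max1 + 1)]
    for u in range(nmask):
        P0[V0[u]].append(u)
        P1[V1[u]].append(u)
    M = [None] * nmask
    unknown = nmask
    z = max0 + max1
    while unknown:
        for x in range(max0, -1, -1):
            y = z - x
            if 0 <= y <= max1:
                for u in P0[x]:
                    for v in P1[y]:
                        if M[u ^ v] is None:
                            M[u ^ v] = z
                            unknown -= 1
        z -= 1
    return M

V0 = [random.randint(0, 20) for _ in range(16)]   # e.g. |F^{nu-1}[0s, u]|, nu-1 = 4
V1 = [random.randint(0, 20) for _ in range(16)]
M = compute_M(V0, V1)
assert all(M[A] == max(V0[u] + V1[u ^ A] for u in range(16)) for A in range(16))
```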
4.3 Complexity up to the Johnson Bound
We give here another method to compute M_s for decoding up to the Johnson bound, which allows a better complexity than in the general case. At the ν-th step, we are going to suggest a small list of potential candidates A_ν ∈ RM(1, ν−1)^# as continuations. To do this, we use the classical Plotkin construction: consider the derivative of f ∈ B_m with respect to x_ν, defined by

Df(x_1, ..., x_{ν−1}, x_{ν+1}, ..., x_m) = f(x_1, ..., x_{ν−1}, x_ν + 1, x_{ν+1}, ..., x_m) + f(x_1, ..., x_m).

Note that wt(Df) ≤ wt(f). Let q ∈ RM(2, m) and let θ > 0. If d(y, q) ≤ n(1/2 − (1/(2√2) + θ)), then

d(Dy, Dq) ≤ (n/2)(1 − 1/√2 − 2θ) = (n/2)(1/2 − (1/√2 − 1/2 + 2θ)).

Now we can decode the vector Dy with the list decoding algorithm for the first order RM code of [7], in order to obtain all potential Dq (and hence all potential A_ν). In this case, the complexity is in O(n) binary operations, and the size of the list is at most 5, according to the Johnson bound for the RM(1, m) code. Thus the total cost of these derivative steps is given by Σ_{ν=2}^m O(n) = O(n log₂(n)). Then, for each s ∈ F_2^{m−ν}, we only need to compute M_s for a small list (of size ≤ 5) of candidates A_ν. This is done in O(2^{m−ν} × ν 2^ν) = O(ν 2^m) binary operations. We deduce:

Theorem 5. For any received vector y, the proposed sums-algorithm evaluates the list of all vectors q ∈ RM(2, m) such that d(y, q) ≤ n(1/2 − 1/(2√2) − θ) with complexity O(n log₂²(n)/θ).

We remark that the complexity in [5] was in O(n log₂(n)/θ²) for the same decoding radius. Hence, for θ ≤ 1/log₂(n), we improve on the previous algorithm.
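The two ingredients above can be sketched in Python. This is our own illustration: the O(n) first-order decoder of [7] is replaced here by the standard O(n log n) fast Walsh-Hadamard transform, which produces the same list:

```python
def derivative(f, m, i):
    """Df with respect to x_i (1-indexed, x_1 fastest-varying):
    Df = f(.., x_i + 1, ..) + f(.., x_i, ..), a function of m-1 variables."""
    bit = 1 << (i - 1)
    return [f[x] ^ f[x | bit] for x in range(1 << m) if not x & bit]

def rm1_list_decode(f, m, eps):
    """All affine a = <u,x> + c with d(f, a) <= n(1/2 - eps), via the fast
    Walsh-Hadamard transform: after the butterflies, W[u] = F(f + <u,x>)."""
    n = 1 << m
    W = [(-1) ** b for b in f]
    step = 1
    while step < n:
        for i in range(0, n, 2 * step):
            for j in range(i, i + step):
                W[j], W[j + step] = W[j] + W[j + step], W[j] - W[j + step]
        step *= 2
    return [(u, int(W[u] < 0)) for u in range(n) if abs(W[u]) >= 2 * n * eps]

# the derivative of x1*x2 (m = 3) with respect to x1 is x2
assert derivative([0, 0, 0, 1, 0, 0, 0, 1], 3, 1) == [0, 1, 0, 1]
# decoding the exact codeword x1 + x2 + 1 returns only (u = 3, c = 1)
aff = [((x & 1) ^ ((x >> 1) & 1)) ^ 1 for x in range(8)]
assert rm1_list_decode(aff, 3, 0.5) == [(3, 1)]
```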
5 Simulations and Implications for the Covering Radius

We present in this section some experimental results which demonstrate the efficiency of the sums-algorithm.

5.1 The Binary Symmetric Channel (BSC)
We present here the results of the decoding of random quadratic Boolean functions, added to random error vectors. The following figures represent the sizes of the lists; we took each time the mean over 100 decodings. Figure 4 shows the (logarithm of the) size of each intermediate list, for different values of ε (d is the corresponding decoding radius), generated by the decoding of 9-variable Boolean functions. Figure 5 represents the comparison between the sum of the sizes of the intermediate lists and the sum of the upper bounds for these sizes given by Corollary 2. Figure 6 represents the quantity ε² log₂(Σ_{ν=2}^m |L^ν_{y,ε}|), for m = 8, 9, 10, 12 and ε ∈ [0.16, 1/(2√2)]. We see that this quantity is experimentally upper-bounded by 1, that is, Σ_{ν=2}^m |L^ν_{y,ε}| < 2^{ε^{−2}}, and deduce a complexity in O(n 2^{ε^{−2}}) binary operations for the sums-algorithm applied to the BSC.
[Fig. 4. Size of the intermediate lists (m = 9), for ε from 0.293 (d = 106) down to 0.162 (d = 173). Plot omitted.]
[Figure omitted.] Fig. 5. Sum of the sizes of the intermediate lists compared with the sum of the theoretical upper bounds (m = 9), as a function of the weight n(1/2 − ε) of the error.
[Figure omitted.] Fig. 6. The quantity ε^2 log_2(Σ_{ν=2}^{m} |L^ν_{y,ε}|) as a function of ε, for m = 8, 9, 10, 12.
5.2 Cryptographic Functions
We give here the sizes of the lists at each step ν ≥ 5 of the decoding of some cryptographic functions, which play the role of the received vector (the lists for steps ν = 2, 3, 4 are exhaustive, of sizes 2, 8 and 64 respectively). These functions are of the form trace(x^k) : F_{2^m} ≃ F_2^m → F_2, for some integer k. The decoding radius d (associated with ε) was chosen as small as possible so that solutions could be found (except for the Welch function in 13 variables). The number t of hours needed for the computation (on a 3 GHz PC) is also displayed.
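For concreteness, the truth tables of such power functions trace(x^k) can be generated with straightforward field arithmetic. The following Python sketch is illustrative only: the `modulus` argument (the bitmask of an irreducible polynomial of degree m) and the helper names are assumptions of this illustration, not taken from the paper.

```python
def trace_power_tt(m, k, modulus):
    """Truth table (list of 0/1 of length 2^m) of x -> trace(x^k) over F_{2^m}.
    `modulus` is the bitmask of an irreducible polynomial of degree m,
    e.g. 0b1011 = x^3 + x + 1 for m = 3."""
    def gf_mul(a, b):
        # carry-less multiplication with on-the-fly reduction mod `modulus`
        r = 0
        while b:
            if b & 1:
                r ^= a
            b >>= 1
            a <<= 1
            if (a >> m) & 1:
                a ^= modulus
        return r

    def gf_pow(a, e):
        r = 1
        while e:
            if e & 1:
                r = gf_mul(r, a)
            a = gf_mul(a, a)
            e >>= 1
        return r

    def trace(a):
        # Tr(a) = a + a^2 + a^4 + ... + a^(2^(m-1)), an element of F_2
        t, s = 0, a
        for _ in range(m):
            t ^= s
            s = gf_mul(s, s)
        return t

    return [trace(gf_pow(x, k)) for x in range(1 << m)]

# trace(x) is a nonzero linear form, hence balanced: weight 2^(m-1)
print(sum(trace_power_tt(3, 1, 0b1011)))  # 4
```

The resulting truth table is exactly the received vector y fed to the decoder in the experiments described above.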
The Welch function (trace(x^{2^{(m−1)/2}+3})).

m \ ν |    5 |     6 |       7 |         8 |          9 |      10 |      11 | 12 | 13 |    d |      ε |   t
    7 |  256 |   512 |    1027 |           |            |         |         |    |    |   36 |  0.219 |
    9 | 1024 | 31232 |  209408 |     22016 |      23552 |         |         |    |    |  184 |  0.141 |
   11 | 1024 | 32768 | 2097152 | 254615552 |  119707648 | 1603584 | 1628160 |    |    |  848 |  0.086 |   7
   13 | 1024 | 32768 | 2097152 | 268435456 | 1592680448 |       0 |       0 |  0 |  0 | 3487 | 0.0743 | 149
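The decoding radius d and the parameter ε in these tables are related by d = n(1/2 − ε) with n = 2^m. As a quick sanity check on the table entries (a small illustrative script, not part of the original experiments):

```python
def epsilon(m, d):
    """Solve d = n(1/2 - eps), with n = 2^m, for eps."""
    return 0.5 - d / 2 ** m

# (m, d) pairs from the Welch-function table; the results should match the
# table's eps values 0.219, 0.141, 0.086, 0.0743 up to rounding
for m, d in [(7, 36), (9, 184), (11, 848), (13, 3487)]:
    print(m, d, epsilon(m, d))
```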
We see that the decoding of this function in RM(2, 13) with d = 3487 does not give any solution. However, a solution was found at distance 3632. We deduce that 3487 < d(trace(x^67), RM(2, 13)) ≤ 3632. We also obtained that d(trace(x^131), RM(2, 15)) ≤ 15488 and d(trace(x^259), RM(2, 17)) ≤ 63680.
The inverse function (trace(x^{2^m−2})).

m \ ν |    5 |     6 |       7 |         8 |           9 |      10 | 11 | 12 |    d |      ε |         t
    7 |  448 |    76 |      78 |           |             |         |    |    |   36 |  0.219 |
    8 | 1023 |  4495 |      16 |        16 |             |         |    |    |   82 |  0.180 |
    9 | 1024 | 32666 |   38285 |         9 |           9 |         |    |    |  182 |  0.145 |
   10 | 1024 | 32768 | 2016990 |     68303 |           2 |       2 |    |    |  392 |  0.117 |
   11 | 1024 | 32768 | 2097152 | 263217917 |     4492858 |      66 | 66 |    |  842 |  0.089 |         6
   12 | 1024 | 32768 | 2097152 | 268435456 | 52637907827 | 7139776 |  6 |  6 | 1760 | 0.0703 | ≈ 90 days
For m = 13, 14 and 15, we have d(trace(x^8190), RM(2, 13)) ≤ 3696, d(trace(x^16382), RM(2, 14)) ≤ 7580 and d(trace(x^32766), RM(2, 15)) ≤ 15506.

The Kasami functions (trace(x^{2^{2k}−2^k+1}), with gcd(m, k) = 1, k ≤ m/2).

m; k \ ν | 2 | 3 |  4 |    5 |     6 |       7 |        8 |     9 |  10 | 11 |   d |     ε
    7; 3 | 2 | 8 | 64 |   39 |     7 |       7 |          |       |     |    |  32 |  0.25
    8; 3 | 2 | 8 | 64 |  198 |   112 |      56 |       68 |       |     |    |  72 | 0.219
    9; 4 | 2 | 8 | 64 | 1024 | 28393 |    4622 |      137 |   141 |     |    | 176 | 0.156
   10; 3 | 2 | 8 | 64 | 1024 | 32768 | 1201601 |     7144 |   165 | 165 |    | 384 | 0.125
   11; 4 | 2 | 8 | 64 | 1024 | 32768 | 2097152 | 81407968 | 47918 |  99 | 99 | 824 | 0.098

5.3 The Covering Radius of the Second Order Reed-Muller Codes
Known Results. Although the Reed-Muller codes have been studied for many years, their exact covering radii are still unknown. First, let us recall that the covering radius ρ(2, m) of the RM(2, m) code is defined by ρ(2, m) = max_{y∈B_m} d(y, RM(2, m)), where B_m is the set of all Boolean functions. Concerning the upper bound for m ≥ 10, the best known result comes from [15] and states:

Theorem 6 ([15]). For every positive integer m ≥ 10, one has

ρ(2, m) ≤ 2^{m−1} − ⌊ (√15/2) · 2^{m/2} · ( 1 − 122929/(21 · 2^m) − 155582504573/(4410 · 2^{2m}) ) ⌋.

The following result, from [16] (an exact version of the asymptotic lower bound given in [8]), gives a lower bound for the covering radius:

Theorem 7 ([16]). Let c be a positive real number such that c > √(log 2). Then

ρ(2, m) ≥ 2^{m−1} − c √(1 + m + m(m−1)/2) · 2^{(m−1)/2}.

We summarize in the following table the best known bounds for small values of m (see [13], [14], [8], [15] and [16]):

                      m | 2 | 3 | 4 | 5 |  6 |  7 |   8 |   9 |  10 |  11 |   12
lower bounds on ρ(2, m) | 0 | 1 | 2 | 6 | 18 | 40 |  84 | 171 | 372 | 806 | 1714
upper bounds on ρ(2, m) | 0 | 1 | 2 | 6 | 18 | 44 | 100 | 220 | 464 | 956 | 1946

We have proposed in this article a strong list decoding algorithm and tested it on some cryptographic Boolean functions, which allows us to improve the lower bounds on the covering radius of the second order Reed-Muller code for 9 ≤ m ≤ 12.
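The definition ρ(2, m) = max_y min_{q∈RM(2,m)} d(y, q) can be checked directly for very small m by exhaustive search. The following Python sketch reproduces the first two entries of the table above (ρ(2, 2) = 0 and ρ(2, 3) = 1); larger m are far out of reach this way, since the search space has 2^{2^m} functions.

```python
from itertools import combinations

def monomial_table(m, vars_):
    """Truth table, as a 2^m-bit integer, of the product of the given variables."""
    return sum(1 << x for x in range(1 << m)
               if all((x >> i) & 1 for i in vars_))

def rm2_codewords(m):
    """All truth tables of RM(2, m): the GF(2)-span of the monomials of degree <= 2."""
    monos = [monomial_table(m, vs)
             for deg in range(3)
             for vs in combinations(range(m), deg)]
    words = {0}
    for g in monos:
        words |= {w ^ g for w in words}
    return words

def covering_radius_rm2(m):
    """rho(2, m): maximum over all Boolean functions y of d(y, RM(2, m))."""
    code = rm2_codewords(m)
    return max(min(bin(y ^ c).count("1") for c in code)
               for y in range(1 << (1 << m)))

print(covering_radius_rm2(2), covering_radius_rm2(3))  # 0 1
```

The m = 3 case is a useful sanity check: RM(2, 3) is the even-weight code of length 8, whose covering radius is clearly 1.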
Lower Bound on the Covering Radius. In order to lower bound the covering radius for a small number of variables, we tested, for m ≤ 10, all power functions of the form trace(x^k) and all sums of two power functions, that is, functions of the form trace(x^k + x^h) defined over F_{2^m} with k, h < 2^m. We display in the following table the Hamming distance from the RM(2, m) code of the furthest power functions we found, and we compare the values obtained with the lower bounds given in [8] and [16].

 m | f                                           | d(f, RM(2, m)) | Bounds of [8]
 6 | x^21; x^11 + x^21; x^13 + x^21; x^21 + x^27 |             18 | 18 (exact)
 7 | x^7 + x^15                                  |             38 | 40
 8 | x^7; x^73                                   |             84 | 84
 9 | x^73                                        |            196 | 171
10 | x^7; x^35; x^37; x^41; x^49; x^73; x^85     |            400 | 372
11 | x^67; x^73                                  |            848 | 806
12 | x^4094                                      |           1760 | 1714
For 7 variables, we did not succeed in finding such a far Boolean function (d = 40). In the following table, we give the values of (2^{m−1} − d) / (2^{(m−1)/2} · √(1 + m + m(m−1)/2)), related to Theorem 7, which are to be compared with √(log 2) ≈ 0.833.

    m |    4 |    5 |    6 |    7 |    8 |    9 |   10 |   11 |   12
    d |    2 |    6 |   18 |   40 |   84 |  196 |  400 |  848 | 1760
ratio | 0.64 | 0.63 | 0.53 | 0.56 | 0.64 | 0.55 | 0.66 | 0.67 | 0.72
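The ratios in the table above can be recomputed directly from m and d (a small verification script):

```python
import math

def theorem7_ratio(m, d):
    """(2^(m-1) - d) / (2^((m-1)/2) * sqrt(1 + m + m(m-1)/2)),
    to be compared with sqrt(log 2) ~ 0.833."""
    return (2 ** (m - 1) - d) / (
        2 ** ((m - 1) / 2) * math.sqrt(1 + m + m * (m - 1) / 2))

pairs = [(4, 2), (5, 6), (6, 18), (7, 40), (8, 84), (9, 196),
         (10, 400), (11, 848), (12, 1760)]
for m, d in pairs:
    print(m, theorem7_ratio(m, d))
# every ratio stays below sqrt(log 2) ~ 0.833, consistent with Theorem 7
```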
Conclusion
We have proposed a list decoding algorithm for the second order Reed-Muller code and studied its complexity. The results of our simulations indicate that the complexity we have conjectured is plausible. This low complexity allowed us to compute some lower bounds for the covering radius. We do not yet know whether the bounds on the size of the list beyond the Johnson radius given in [2] are tight. It would be very interesting to construct Boolean functions which are optimally far from second order Reed-Muller codes of larger lengths; this is an open problem. Another interesting question concerns the extension of this algorithm to higher order Reed-Muller codes. The main obstacle is that running over all elements of the second order Reed-Muller code is expensive, because the dimension of this code is huge; we do not yet have any idea of how to proceed. We have not considered the case of the generalized Hamming distance, but this problem should not be too difficult; nevertheless, the sizes of the intermediate lists would have to be recalculated, so this problem merits further study. The case of the binary symmetric channel is also very interesting in view of the simulation results, and it merits a thorough theoretical study.

Acknowledgement
We gratefully thank Professor Grigory Kabatiansky for very helpful and stimulating discussions.
References

1. P. Elias, "List decoding for noisy channels", 1957 IRE WESCON Convention Record, Pt. 2, pp. 94–104, 1957.
2. G. Kabatiansky and C. Tavernier, "List decoding of second order Reed-Muller codes", in Proc. 8th Intern. Symp. Comm. Theory and Applications, Ambleside, UK, July 2005.
3. V. Guruswami and M. Sudan, "Improved decoding of Reed-Solomon and algebraic-geometry codes", IEEE Trans. Inform. Theory, vol. 45, pp. 1757–1767, 1999.
4. R. Pellikaan and X.-W. Wu, "List decoding of q-ary Reed-Muller codes", IEEE Trans. Inform. Theory, vol. 50, pp. 679–682, 2004.
5. I. Dumer, G. Kabatiansky and C. Tavernier, "List decoding of Reed-Muller codes up to the Johnson bound with almost linear complexity", in Proc. ISIT 2006, Seattle, USA.
6. G. Kabatiansky and C. Tavernier, "List decoding of Reed-Muller codes of first order", in Proc. ACCT-9, pp. 230–235, Bulgaria, 2004.
7. G. Kabatiansky and C. Tavernier, "List decoding of first order Reed-Muller codes II", in Proc. ACCT-10, Zvenigorod, Russia, 2006.
8. G. Cohen, I. Honkala, S. Litsyn and A. Lobstein, Covering Codes, North-Holland, 1997.
9. O. Goldreich, R. Rubinfeld and M. Sudan, "Learning polynomials with queries: the highly noisy case", in Proc. 36th Symp. on Foundations of Computer Science, pp. 294–303, 1995.
10. O. Goldreich and L.A. Levin, "A hard-core predicate for all one-way functions", in Proc. 21st ACM Symp. on Theory of Computing, pp. 25–32, 1989.
11. F.J. MacWilliams and N.J.A. Sloane, The Theory of Error-Correcting Codes, North-Holland Publishing Company, 1977.
12. N. Sloane and E. Berlekamp, "Weight enumerator for second-order Reed-Muller codes", IEEE Trans. Inform. Theory, vol. 16, no. 6, pp. 745–751, Nov. 1970.
13. X.D. Hou, "Some results on the covering radii of Reed-Muller codes", IEEE Trans. Inform. Theory, vol. 39, no. 2, March 1993.
14. J. Schatz, "The second-order Reed-Muller code of length 64 has covering radius 18", IEEE Trans. Inform. Theory, vol. IT-27, no. 4, pp. 529–530, 1981.
15. C. Carlet and S. Mesnager, "Improving the upper bounds on the covering radii of binary Reed-Muller codes", to appear in IEEE Trans. Inform. Theory, 2006.
16. C. Carlet, "The complexity of Boolean functions from cryptographic viewpoint", Dagstuhl Seminar "Complexity of Boolean Functions", 15 pages, March 2006, M. Krause, P. Pudlak, R. Reischuk and D. van Melkebeek, editors.
17. I. Dumer, "Recursive decoding and its performance for low-rate Reed-Muller codes", IEEE Trans. Inform. Theory, vol. 50, pp. 811–823, 2004.
18. R. Fourquet and C. Tavernier, "List decoding of second order Reed-Muller codes and its covering radius implications", in Proc. Workshop on Coding and Cryptography 2007, pp. 147–156.