2011 IEEE International Symposium on Information Theory Proceedings

Linear Extractors for Extracting Randomness from Noisy Sources

Hongchao Zhou
Electrical Engineering Department
California Institute of Technology
Pasadena, CA 91125
Email: hzhou@caltech.edu

Jehoshua Bruck
Electrical Engineering Department
California Institute of Technology
Pasadena, CA 91125
Email: [email protected]

Abstract—Linear transformations have many applications in information theory, such as data compression and the design of error-correcting codes. In this paper, we study the power of linear transformations in randomness extraction, namely linear extractors, as another important application. Compared to most existing methods for randomness extraction, linear extractors (especially those constructed with sparse matrices) are computationally fast and can be simply implemented with hardware such as FPGAs, which makes them very attractive in practice. We mainly focus on simple, efficient and sparse constructions of linear extractors. Specifically, we demonstrate that random matrices can generate random bits very efficiently from a variety of noisy sources, including noisy coin sources, bit-fixing sources, noisy (hidden) Markov sources, as well as their mixtures. We show that low-density random matrices have almost the same efficiency as high-density random matrices when the input sequence is long, which provides a way to simplify hardware/software implementations. Note that although we construct the matrices with randomness, they are deterministic (seedless) extractors: once a matrix is constructed, the same construction can be used any number of times without any seeds. Another way to construct linear extractors is based on the generator matrices of primitive BCH codes. This method is more explicit, but less practical due to its computational complexity and dimensional constraints.

I. INTRODUCTION

Randomness plays an important role in many fields, including complexity theory, cryptography, information theory and optimization. There are many examples of randomized algorithms that are faster, more space-efficient or simpler than any known deterministic algorithms [1]. Such applications typically assume a supply of independent and unbiased random bits, which leads to a question: how can we extract randomness from imperfect natural sources, like the voltage noise in electrical systems, a user's operating behavior, or the weather? In this paper, we consider this problem using linear transformations, due to their simplicity and efficiency. Namely, given an input binary sequence $X$ of length $n$ generated from an imperfect source, we would like to construct an $n \times m$ binary matrix $M$ such that the output of the extractor, $Y = XM$, is arbitrarily close to truly random bits. Statistical distance is commonly used to measure the distance between two distributions in randomness extraction. Here, we say $Y \in \{0,1\}^m$ is $\epsilon$-close to truly random bits if and only if

$$\frac{1}{2} \sum_{y \in \{0,1\}^m} \left| P[Y = y] - 2^{-m} \right| \le \epsilon \qquad (1)$$

where $\epsilon$ is arbitrarily small.
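As a concrete illustration of this setup (our own sketch, not part of the original paper; the matrix and input are arbitrary), a linear extractor is just a GF(2) matrix-vector product:

```python
import numpy as np

def extract(x: np.ndarray, M: np.ndarray) -> np.ndarray:
    """Linear extractor: y = xM over GF(2), i.e. each output bit is
    the XOR of the input bits selected by one column of M."""
    return (x @ M) % 2

# Hypothetical toy instance: n = 4 input bits, m = 2 output bits.
M = np.array([[1, 0],
              [1, 1],
              [0, 1],
              [1, 1]])
x = np.array([1, 0, 1, 1])
print(extract(x, M))  # -> [0 0] for this particular input
```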

This condition guarantees that in any application, if we replace truly random bits with the sequence $Y$, the additional error probability caused by the replacement is at most $\epsilon$. It is stronger than the normalized divergence used in [2] and [3], where it was shown that optimal source codes can be used as universal random-bit generators for arbitrary stationary ergodic sources. It is also much stronger than the requirement in the problem of constructing small-bias probability spaces studied by J. Naor and M. Naor [4], and in the problem of linear correctors studied by Lacharme [5]. The line of research on extracting randomness from imperfect sources dates back to von Neumann's work [6] in 1951, which considered the problem of generating random bits from a biased coin with unknown probability. His beautiful algorithm was later improved by Elias [7]. In 1986, Blum [8] studied the problem of generating random bits from an arbitrary Markov chain, as a correlated source model, by applying the von Neumann scheme. Later, Santha and Vazirani [9] showed how to generate quasi-random bits from a noisy source where the probability of each bit is slightly unpredictable. A more general weak random source model, characterized only by the amount of randomness it contains, was studied by Zuckerman [10] in 1990. It was shown that it is impossible to devise a single function that extracts even one bit of randomness from such a source. This observation led to the introduction of seeded extractors, which use a small number of truly random bits as a seed (catalyst) for randomness extraction. When simulating a probabilistic algorithm, one can eliminate the requirement of truly random bits by enumerating all possible strings for the seed and taking a majority vote. Seeded extractors have a wide range of applications and have attracted the interest of many researchers over the last two decades; there is a variety of very efficient constructions of seeded extractors, summarized in [11], [12]. Let $X$ be a distribution on $\{0,1\}^n$; with a seeded extractor of seed length $d = O(\log n)$, one is able to extract asymptotically almost $H_{\min}(X)$ bits of randomness from a general weak random source $X$ [13].


Here, $H_{\min}(X)$ is the min-entropy of $X$, defined by

$$H_{\min}(X) = \min_{x \in \{0,1\}^n} \log \frac{1}{P[X = x]}$$
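As a quick illustration (ours; the probability vector is hypothetical), the definition translates directly into code:

```python
import numpy as np

def min_entropy(p: np.ndarray) -> float:
    """H_min(X) = min_x log2(1/P[X=x]) = -log2(max_x P[X=x]), in bits."""
    return -np.log2(p.max())

# A coin with P[1] = 0.6 has H_min = -log2(0.6) ~ 0.74 bits,
# versus exactly 1 bit for a fair coin.
print(min_entropy(np.array([0.4, 0.6])))
```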

In the last few years, the concept of seedless (deterministic) extractors has attracted renewed interest, motivated by reducing the computational complexity of simulating probabilistic algorithms as well as by requirements in cryptography. Several specific classes of sources have been studied, including: independent sources, which can be divided into several independent parts, each containing a certain amount of randomness [14]–[16]; bit-fixing sources, where some bits in a binary sequence are truly random and the remaining bits are fixed [17]–[19]; and samplable sources, where the source is generated by a process with a bounded amount of computational resources, such as space [20], [21]. Our work considers this problem from another angle, focusing on the simplicity and efficiency of generating random bits. Simplicity is certainly an important issue; it is why Intel's random number generator [22] applies the simple von Neumann scheme [6] rather than other, much more efficient extractors. This motivates us to focus on simple linear constructions whose extraction efficiency is competitive with the most efficient known constructions for a variety of noisy sources. Using hardware such as FPGAs, linear transformations can be computed very fast, especially when the corresponding matrices are sparse [23]. In this paper, we demonstrate that random matrices, with each entry being 0 or 1 with probability 1/2, are very powerful for extracting randomness from a variety of weak random sources, including bit-fixing sources, noisy coin sources, noisy (hidden) Markov sources, as well as their mixtures. In addition, we discuss the performance of low-density random matrices and show that they have almost the same efficiency as high-density random matrices. This implies that we can reduce the implementation complexity of linear extractors. Another way of constructing extractor matrices is to apply the generator matrices of primitive BCH codes. Compared to the method of low-density random matrices, this method is more explicit, but less practical due to its computational complexity and dimensional constraints.

II. NOISY COIN SOURCE

In this section, we study a very simple and useful weak random source model, considered by Santha and Vazirani [9], Vazirani [24], Lacharme [5], among others. Here, we generalize it in the following way, namely noisy coin sources: let $X = x_1x_2\ldots x_n \in \{0,1\}^n$ be a sequence generated by flipping a coin. Due to the bias of the coin and the existence of external adversaries, the probability of $x_i = 1$ for $1 \le i \le n$ is not a perfect $\frac{1}{2}$. Instead, it is slightly unpredictable, but we know that it is constrained to $[\frac{1}{2} - e_i/2, \frac{1}{2} + e_i/2]$ for some constant $e_i$. With this model as a starting point, we study the characteristics of good matrices for extracting randomness. Before presenting the results, let us first consider the case that the output length is one, i.e., given a few imperfect random bits, independent of each other, how can we construct exactly one bit that is as unbiased as possible? It is known that the binary sum operation is very efficient for solving this problem. The following lemma bounds the bias of the bit generated by the binary sum operation.

Lemma 1. Let $x_1, x_2, \ldots, x_n$ be $n$ independent bits such that $P[x_i = 1] \in [\frac{1}{2} - e_i/2, \frac{1}{2} + e_i/2]$ for all $1 \le i \le n$, and let $z = x_1 + x_2 + \ldots + x_n \bmod 2$. Then

$$\left| P[z = 1] - \frac{1}{2} \right| \le \frac{\prod_{i=1}^n e_i}{2}$$

Proof: This lemma can be proved by simple induction. It is a straightforward generalization of the result in [25].

To generate multiple random bits, namely $m$ bits instead of a single bit, one way is to divide all the bits into $m$ groups, denoted $S_1, S_2, \ldots, S_m$, such that $\bigcup_{i=1}^m S_i = \{x_1, x_2, \ldots, x_n\}$ and different groups do not overlap. Then, for $1 \le i \le m$, the $i$th output bit, namely $y_i$, is generated by summing up the bits in $S_i$. However, this method is very inefficient. In fact, we can improve the efficiency significantly by allowing overlaps between different groups. In this case, we sacrifice a little independence of the output bits, but the bias of each bit is reduced greatly. An equivalent way to express this method is to use a matrix, denoted $M$, such that $M_{ij} = 1$ if and only if $x_i \in S_j$ for all $1 \le i \le n$ and $1 \le j \le m$, and $M_{ij} = 0$ otherwise. As a result, the output of this method is $Y = XM$ for a given input sequence $X$. In this paper, we use the statistical distance between a random binary sequence $Y$ and the uniform distribution on $\{0,1\}^m$ to measure the quality of $Y$, defined as

$$\rho(Y) = \frac{1}{2} \sum_{y \in \{0,1\}^m} \left| P[Y = y] - 2^{-m} \right|$$

It is also the maximal additional error probability introduced by replacing truly random bits with the sequence $Y$ in any randomized algorithm.

Lemma 2. Let $X = x_1x_2\ldots x_n$ be a binary sequence generated from an arbitrary random source and let $M$ be an $n \times m$ matrix with $m \le n$. Then, given $Y = XM$, we have

$$\rho(Y) \le \sum_{u \in \{0,1\}^m, u \ne 0} \left| P_X[XMu^T = 1] - \frac{1}{2} \right|$$

Proof: Following the idea in [25], for all $y \in \{0,1\}^m$ we define the function $h$ as $h(y) = P[Y = y]$. Its Fourier transform, denoted $F_h$, satisfies

$$\forall y \in \{0,1\}^m, \quad h(y) = 2^{-m} \sum_{u \in \{0,1\}^m} F_h(u)(-1)^{y \cdot u}$$

and

$$\forall u \in \{0,1\}^m, \quad F_h(u) = \sum_{y \in \{0,1\}^m} h(y)(-1)^{y \cdot u}$$

When u = 0, we have


$$|F_h(0)| = \sum_{y \in \{0,1\}^m} h(y) = 1$$

When $u \ne 0$, we have

$$|F_h(u)| = \Big| \sum_{y \in \{0,1\}^m} h(y)(-1)^{y \cdot u} \Big| = \Big| \sum_{y \cdot u = 0} h(y) - \sum_{y \cdot u = 1} h(y) \Big| = \Big| 1 - 2 \sum_{y \cdot u = 1} h(y) \Big| = 2 \left| P[XMu^T = 1] - \frac{1}{2} \right| \qquad (2)$$

Based on the definition of $\rho(Y)$, we get

$$\rho(Y) = \frac{1}{2} \sum_{y \in \{0,1\}^m} \Big| 2^{-m} \sum_{u \in \{0,1\}^m} F_h(u)(-1)^{y \cdot u} - 2^{-m} \Big| \le \frac{1}{2} \sum_{y \in \{0,1\}^m} 2^{-m} \sum_{u \ne 0} |F_h(u)| = \frac{1}{2} \sum_{u \ne 0} |F_h(u)| = \sum_{u \ne 0} \left| P[XMu^T = 1] - \frac{1}{2} \right|$$

This completes the proof.
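To see Lemma 2 in action, the following brute-force sketch (our own illustration; all names and parameters are arbitrary) enumerates a small independent source, computes $\rho(Y)$ exactly, and compares it with the right-hand side of the lemma:

```python
import itertools
import numpy as np

def dist_of_Y(p, M):
    """Exact output distribution of Y = XM for an independent 0/1 source
    with P[x_i = 1] = p[i]; feasible only for small n."""
    n, m = M.shape
    probY = np.zeros(2 ** m)
    for x in itertools.product([0, 1], repeat=n):
        x = np.array(x)
        px = np.prod(np.where(x == 1, p, 1 - p))
        y = tuple((x @ M) % 2)
        probY[int("".join(map(str, y)), 2)] += px
    return probY

def rho(probY):
    m = int(np.log2(len(probY)))
    return 0.5 * np.abs(probY - 2.0 ** (-m)).sum()

def lemma2_bound(p, M):
    """Sum over u != 0 of |P[X M u^T = 1] - 1/2|."""
    n, m = M.shape
    total = 0.0
    for u in itertools.product([0, 1], repeat=m):
        if not any(u):
            continue
        w = (M @ np.array(u)) % 2  # which x_i enter this parity
        # exact bias of a XOR of independent bits: |prod(1 - 2p_i)| / 2
        prod = np.prod([1 - 2 * p[i] for i in range(n) if w[i] == 1])
        total += 0.5 * abs(prod)
    return total

rng = np.random.default_rng(0)
n, m = 10, 3
p = 0.5 + (rng.random(n) - 0.5) * 0.2  # biases within [0.4, 0.6]
M = rng.integers(0, 2, size=(n, m))
print(rho(dist_of_Y(p, M)), "<=", lemma2_bound(p, M))
```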

The following theorem provides an upper bound on $E_M[\rho(Y)]$ when $Y$ is extracted from a noisy coin source using a random matrix.

Theorem 3. Let $X = x_1x_2\ldots x_n$ be an independent sequence such that $P[x_i = 1] \in [\frac{1}{2} - e_i/2, \frac{1}{2} + e_i/2]$. Let $M$ be an $n \times m$ random matrix such that each entry of $M$ is 0 or 1 with probability $\frac{1}{2}$. Then, given $Y = XM$, we have

$$E_M[\rho(Y)] \le 2^{m-n-1} \prod_{i=1}^n (1 + e_i) \qquad (3)$$

Proof: According to Lemma 1, when $u \ne 0$ we have

$$\left| P_X[XMu^T = 1] - \frac{1}{2} \right| \le \frac{\prod_{i=1}^n e_i^{(Mu^T)_i}}{2} \qquad (4)$$

where $(Mu^T)_i$ is the $i$th element of the vector $Mu^T$. Substituting (4) into Lemma 2 yields

$$\rho(Y) \le \frac{1}{2} \sum_{u \ne 0} \prod_{i=1}^n e_i^{(Mu^T)_i} \qquad (5)$$

Now we calculate the expectation of $\rho(Y)$:

$$E_M[\rho(Y)] \le \frac{1}{2} \sum_{u \ne 0} E_M\Big[\prod_{i=1}^n e_i^{(Mu^T)_i}\Big] = \frac{1}{2} \sum_{u \ne 0} \sum_{v \in \{0,1\}^n} \prod_{i=1}^n e_i^{v_i} \, P_M[Mu^T = v^T] \qquad (6)$$

Since $M$ is a random matrix (each entry is either 0 or 1 with probability 1/2), for any $u \ne 0$, $Mu^T$ is a random vector of length $n$ in which each element is 0 or 1 with probability 1/2, so $P_M[Mu^T = v^T] = 2^{-n}$. As a result,

$$E_M[\rho(Y)] \le \frac{1}{2} \sum_{u \ne 0} 2^{-n} \sum_{v \in \{0,1\}^n} \prod_{i=1}^n e_i^{v_i} \le 2^{m-n-1} \prod_{i=1}^n (1 + e_i)$$

This completes the proof.

For a binary sequence $X \in \{0,1\}^n$, its min-entropy $H_{\min}(X)$ is the maximal amount of randomness that can be extracted using seeded extractors. In the following corollary, we show that random matrices are also able to extract almost $H_{\min}(X)$ bits of randomness when the source is a noisy coin source.

Corollary 4. Let $X \in \{0,1\}^n$ be a raw sequence from a noisy coin source and let $M$ be an $n \times m$ random matrix such that each entry of $M$ is 0 or 1 with probability $\frac{1}{2}$. Assume $Y = XM$. If $\frac{m}{H_{\min}(X)} < 1$, then as $n$ becomes large enough, $\rho(Y)$ converges to 0 in probability, i.e., $\rho(Y) \xrightarrow{p} 0$.
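Theorem 3's bound also yields a simple design rule: for a target distance $\epsilon$, one can choose the largest $m$ with $2^{m-n-1}(1+e)^n \le \epsilon$, i.e., $m \approx n \log_2 \frac{2}{1+e} + 1 + \log_2 \epsilon$ in the uniform-bias case $e_i = e$. A small sketch (ours; the numbers are hypothetical):

```python
import math

def max_output_bits(n: int, e: float, eps: float) -> int:
    """Largest m with 2^(m-n-1) * (1+e)^n <= eps (Theorem 3's bound),
    i.e. m <= n*log2(2/(1+e)) + 1 + log2(eps)."""
    return math.floor(n * math.log2(2 / (1 + e)) + 1 + math.log2(eps))

# With n = 4096 input bits of bias at most e = 0.1 and target eps = 2^-64:
print(max_output_bits(4096, 0.1, 2.0 ** -64))  # -> 3469
```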

The corollary above implies that when the length $n$ of the input sequence is large enough, we can extract random bits from a noisy coin source by simply constructing a random matrix. With probability arbitrarily close to 1 (depending on $n$), the constructed matrix extracts randomness optimally. Here, we need to distinguish this method from seeded extractors, which consume additional truly random bits every time they extract randomness. In our method, the matrix is randomly generated, but the extraction itself is deterministic; that means we can use the same matrix to extract randomness any number of times without reconstructing it. Namely, our method is a 'probabilistic construction of deterministic extractors'.

We have seen that random matrices with each entry being 0 or 1 with probability 1/2 are very efficient for randomness extraction, but from the point of view of computational complexity, we prefer low-density matrices to high-density ones. It is natural to ask whether it is possible to reduce the density of 1s in the random matrices without affecting the efficiency too much. Motivated by this, we study a random matrix $M$ in which each entry is 1 with probability $p \ll \frac{1}{2}$. Surprisingly, such a matrix is still able to extract almost $H_{\min}(X)$ bits of randomness when the input sequence is long enough. For simplicity, we consider a noisy coin source $X = x_1x_2\ldots x_n$ with $P[x_i = 1] \in [\frac{1}{2} - e/2, \frac{1}{2} + e/2]$ for a constant $e$.

Theorem 5. Let $X = x_1x_2\ldots x_n$ be an independent sequence such that $P[x_i = 1] \in [\frac{1}{2} - e/2, \frac{1}{2} + e/2]$. Let $M$ be an $n \times m$ random matrix such that each entry of $M$ is 1 with probability $p = \omega(\frac{\log n}{n})$. Here, $p = \omega(\frac{\log n}{n})$ means that $p > k \frac{\log n}{n}$ for any fixed $k$ as $n \to \infty$. Assume $Y = XM$. If $\frac{m}{n \log_2 \frac{2}{1+e}} < 1$, then as $n$ becomes large enough, $\rho(Y)$ converges to 0 in probability, i.e., $\rho(Y) \xrightarrow{p} 0$.
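A minimal sketch of such a low-density construction (our own illustration; the density constant c is an arbitrary finite stand-in, since $p = \omega(\log n / n)$ is an asymptotic condition that no fixed c literally satisfies):

```python
import numpy as np

def sparse_random_matrix(n: int, m: int, c: float = 4.0, seed: int = 1) -> np.ndarray:
    """Low-density random extractor matrix: each entry is 1 with
    probability p = c*log(n)/n, mimicking the regime of Theorem 5."""
    p = min(1.0, c * np.log(n) / n)
    rng = np.random.default_rng(seed)
    return (rng.random((n, m)) < p).astype(np.uint8)

M = sparse_random_matrix(n=4096, m=3200)
print(M.mean())  # density ~ 4*log(4096)/4096 ~ 0.008, versus 0.5 when dense
```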


For practical use, we can impose sparsity constraints on each column of the random matrices. For example, we can require the number of ones in each column of the randomly generated matrix to be a constant, namely $k$. We may also use pseudo-random bits instead of truly random bits in constructing such matrices. In coding theory, many good codes are constructed from randomly generated matrices; examples include LDPC (low-density parity-check) codes and network codes. While these codes have very good performance, their decoding procedures are usually somewhat complex, partially due to the introduction of randomness. In contrast to those applications, randomness extraction is a one-way process: we do not need to (and cannot) recover the original source from the generated random bits. This feature makes random matrices very attractive for randomness extraction.

In the rest of this section, we construct good matrices without using any randomness. We show that if the weight distribution of a code is binomial, then its generator matrix is a good candidate for extracting randomness from a noisy coin source.

Theorem 6. Let $C$ be a linear binary code with dimension $m$ and codeword length $n$. Assume its weight distribution is binomial and its generator matrix is $G$. Let $X = x_1x_2\ldots x_n$ be an independent sequence such that $P[x_i = 1] \in [\frac{1}{2} - e/2, \frac{1}{2} + e/2]$. Then, given $Y = XG^T$, we have

$$\rho(Y) \le 2^{m-n-1}(1+e)^n$$

According to the theorem above, as $n$ becomes large enough, we can extract almost $n \log_2 \frac{2}{1+e}$ random bits from the noisy coin source defined above by using the generator matrix of a linear code with binomial weight distribution. It turns out that the generator matrices of primitive BCH codes are good candidates. For a primitive BCH code of length $2^k - 1$, it is known that the weight distribution of the code is approximately binomial; see Theorems 21 and 23 in [26]. Namely, the number $b_i$ of codewords of weight $i$ is

$$b_i = a \binom{2^k - 1}{i} (1 + E_i)$$

where $a$ is a constant and the error term $E_i$ tends to zero as $k$ grows.

III. BIT-FIXING SOURCE

In this section, we consider another type of weak random source, called bit-fixing sources, first studied by Cohen and Wigderson [17]. Assume a binary sequence $X$ of length $n$ is generated from a bit-fixing source; then $k$ bits in $X$ are unbiased and independent, and the remaining $n - k$ bits are fixed (or 'embedded' copies of the $k$ independent bits). For such sources, under a linear transformation, there are two cases for the output sequence $Y = XM$: (1) all the bits in $Y$ are independent of each other, hence $\rho(XM) = 0$; or (2) there exists an output bit that is fixed or completely determined by the other output bits, in which case $\rho(XM) \ge 1/2$ for all $n$.

Now we demonstrate that the constructions based on random matrices for noisy coin sources still work for bit-fixing sources. The following theorem shows that the probability of $\rho(XM) = 0$ approaches 1 when $M$ is an $n \times m$ random matrix with $m - k \ll 0$ such that each entry of $M$ is 0 or 1 with probability 1/2.

Theorem 7. Let $X = x_1x_2\ldots x_n$ be a binary sequence generated from a bit-fixing source in which $k$ bits are unbiased and independent, and the other $n - k$ bits are fixed or copies of the $k$ independent random bits. Let $M$ be an $n \times m$ random matrix such that each entry of $M$ is 0 or 1 with probability $\frac{1}{2}$. Given $Y = XM$, we have

$$P_M[\rho(Y) \ne 0] \le 2^{m-k}$$

So, by setting $m - k \ll 0$ and using a random matrix with each entry being 1 with probability $p = 1/2$, we can generate an independent and unbiased sequence from a bit-fixing source with probability almost 1. As for noisy coin sources, we can prove that low-density random matrices also work for bit-fixing sources.

Theorem 8. Let $X = x_1x_2\ldots x_n$ be a binary sequence generated from a bit-fixing source in which $k$ bits are unbiased and independent, and the other $n - k$ bits are fixed or copies of the $k$ independent random bits. Let $M$ be an $n \times m$ random matrix such that each entry of $M$ is 1 with probability $p = \omega(\frac{\log n}{n})$. Assume $Y = XM$. If $\frac{m}{k} < 1$, then as $n$ becomes large enough, $\rho(Y) = 0$ with probability almost 1, i.e.,

$$P_M[\rho(Y) = 0] \to 1$$

We see that random matrices are powerful for extracting randomness from bit-fixing sources. A natural question is whether we can construct a single fixed linear extractor that extracts randomness efficiently from all bit-fixing sources specified by $n$ and $k$. The answer is negative. The reason is that such an extractor requires $XMu^T$ to be an unbiased random bit for all $u \ne 0$ (see the proof above), so $\|Mu^T\| > n - k$ for all $u \ne 0$; otherwise, we can find a bit-fixing source $X$ such that $XMu^T$ is fixed, as follows: assume $X = x_1x_2\ldots x_n$; if $(Mu^T)_i = 1$ we fix $x_i = 0$, and otherwise we let $x_i$ be an unbiased random bit. This further implies that if we view $M^T$ as the generator matrix of a linear code, then the code's minimum distance must be more than $n - k$. But for such a matrix, the efficiency $\frac{m}{n}$ is usually very low. For example, when $k = \frac{n}{2}$, we have to find a linear code whose minimum distance is more than $\frac{n}{2}$; in this case, the dimension of the code, i.e., $m$, is much smaller than $k$.
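Theorem 7 can be checked numerically: for a bit-fixing source whose $n - k$ non-random bits are fixed (rather than copied), $\rho(Y) = 0$ exactly when the $k$ rows of $M$ that multiply the free bits have full column rank $m$ over GF(2), and a union bound over the $2^m - 1$ nonzero vectors $u$ gives $P[\text{rank} < m] \le 2^{m-k}$. A sketch of this check (ours; the parameters are arbitrary):

```python
import numpy as np

def gf2_rank(A: np.ndarray) -> int:
    """Rank of a 0/1 matrix over GF(2) by Gaussian elimination."""
    A = A.copy() % 2
    rank = 0
    rows, cols = A.shape
    for col in range(cols):
        pivot = next((r for r in range(rank, rows) if A[r, col]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]  # swap pivot row into place
        for r in range(rows):
            if r != rank and A[r, col]:
                A[r] ^= A[rank]              # eliminate column entries
        rank += 1
    return rank

# Estimate P[rho(Y) != 0] by sampling the k x m submatrix seen by the
# free bits, and compare with the 2^(m-k) bound of Theorem 7.
rng = np.random.default_rng(2)
k, m, trials = 20, 12, 2000
fails = sum(gf2_rank(rng.integers(0, 2, (k, m))) < m for _ in range(trials))
print(fails / trials, "<=", 2.0 ** (m - k))  # bound: 2^-8 ~ 0.0039
```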

IV. NOISY (HIDDEN) MARKOV SOURCE

Considering the existing physical sources for random number generators, most of them are based on natural noise, such as thermal noise and avalanche noise. Simply put, these sources can be described as noisy (hidden) Markov sources: let $Y = y_1y_2\ldots y_n \in D^n$ be a sampling of a noise signal such that for all $i > 1$,

$$P[y_i | y_{i-1}, \ldots, y_1] = P[y_i | y_{i-1}]$$

Then the binary sequence generated by this source is $X = x_1x_2\ldots x_n$, where $x_i = I(y_i) \in \{0,1\}$ for some function $I$.


To generate random bits, the noisy sources usually have the following property: for all $1 \le i_1 < i_2 < i_3 \le n$,

$$P[I(y_{i_2}) = 1 | y_{i_1}, y_{i_3}] \in \Big[\frac{1}{2} - \frac{e}{2}, \frac{1}{2} + \frac{e}{2}\Big]$$

for some constant $e$. For these noisy (hidden) Markov sources, we have the following lemma.

Lemma 9. Let $X = x_1x_2\ldots x_n$ be a binary sequence generated from a noisy hidden Markov source as described above. Let $z = x_{i_1} + \ldots + x_{i_t} \bmod 2$ for $1 \le i_1 < i_2 < \ldots < i_t \le n$ with some $t$. Then we have

$$\left| P[z = 1] - \frac{1}{2} \right| \le \frac{e^{(t-1)/2}}{2} \qquad (7)$$

Usually, $Y = y_1y_2\ldots y_n \in D^n$ is a time series of real numbers, i.e., $D = \mathbb{R}$, and the function $I$ is a threshold function such that

$$x_i = I(y_i) = \begin{cases} 0 & \text{if } y_i < c \\ 1 & \text{if } y_i \ge c \end{cases}$$

for a constant $c$. Note that Equ. (7) can be rewritten as $|P[z = 1] - \frac{1}{2}| \le \frac{(\sqrt{e})^t}{2\sqrt{e}}$, which is very similar to the result in Lemma 1. If we ignore the constant term, the only difference between them is that $e$ is replaced by $\sqrt{e}$. Based on this observation, as well as the results in Section II for noisy coin sources, we can prove the following theorems for noisy hidden Markov sources.

Theorem 10. Let $X = x_1x_2\ldots x_n$ be a binary sequence generated from a noisy hidden Markov source as described above. Let $M$ be an $n \times m$ random matrix such that each entry of $M$ is 0 or 1 with probability $\frac{1}{2}$. Then, given $Y = XM$, we have

$$E_M[\rho(Y)] \le \frac{2^{m-n-1}}{\sqrt{e}} (1 + \sqrt{e})^n$$
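To illustrate the whole pipeline of this section (a toy model of our own, not the paper's experiment), one can simulate a correlated noise signal, threshold it, and extract with a dense random matrix that is built once and then reused:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, trials = 256, 64, 4000
M = rng.integers(0, 2, (n, m))  # constructed once, then reused (seedless)

def sample_bits() -> np.ndarray:
    """A toy noisy Markov source: an AR(1) signal thresholded at zero."""
    y = np.empty(n)
    y[0] = rng.normal()
    for i in range(1, n):
        y[i] = 0.5 * y[i - 1] + rng.normal()  # correlated noise samples
    return (y >= 0).astype(np.uint8)          # x_i = I(y_i)

out = np.array([(sample_bits() @ M) % 2 for _ in range(trials)])
# Raw bits are correlated; extracted bits should look close to fair.
print("max |P[y_j = 1] - 1/2| over output bits:",
      np.abs(out.mean(axis=0) - 0.5).max())
```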

Theorem 11. Let $X = x_1x_2\ldots x_n$ be a binary sequence generated from a noisy hidden Markov source as described above. Let $M$ be an $n \times m$ random matrix such that each entry of $M$ is 1 with probability $p = \omega(\frac{\log n}{n})$. Assume $Y = XM$. If $\frac{m}{n \log_2 \frac{2}{1+\sqrt{e}}} < 1$, then as $n$ becomes large enough, $\rho(Y)$ converges to 0 in probability, i.e., $\rho(Y) \xrightarrow{p} 0$.

Theorem 12. Let $C$ be a linear binary code with dimension $m$ and codeword length $n$. Assume its weight distribution is binomial and its generator matrix is $G$. Let $X = x_1x_2\ldots x_n$ be a binary sequence generated from a noisy hidden Markov source as described above. Then, given $Y = XG^T$, we have

$$\rho(Y) \le \frac{2^{m-n-1}}{\sqrt{e}} (1 + \sqrt{e})^n$$

These theorems imply that when $n$ is large enough, we can extract almost $n \log_2 \frac{2}{1+\sqrt{e}}$ bits of randomness from a noisy hidden Markov source using linear extractors.

ACKNOWLEDGMENT

This work was supported in part by the Molecular Programming Project funded by the NSF Expeditions in Computing Program under grant CCF-0832824.

REFERENCES

[1] R. Motwani and P. Raghavan, Randomized Algorithms, Cambridge University Press, 1995.
[2] K. Visweswariah, S. R. Kulkarni and S. Verdú, "Source codes as random number generators", IEEE Trans. on Information Theory, vol. 44, no. 2, pp. 462-471, 1998.
[3] T. S. Han, "Folklore in source coding: information-spectrum approach", IEEE Trans. on Information Theory, vol. 51, no. 2, pp. 747-753, 2005.
[4] J. Naor and M. Naor, "Small-bias probability spaces: efficient constructions and applications", SIAM Journal on Computing, vol. 22, no. 4, pp. 838-856, 1993.
[5] P. Lacharme, "Analysis and construction of correctors", IEEE Trans. on Information Theory, vol. 55, pp. 4742-4748, 2009.
[6] J. von Neumann, "Various techniques used in connection with random digits", Appl. Math. Ser., Notes by G. E. Forsythe, Nat. Bur. Stand., vol. 12, pp. 36-38, 1951.
[7] P. Elias, "The efficient construction of an unbiased random sequence", Ann. Math. Statist., vol. 43, pp. 865-870, 1972.
[8] M. Blum, "Independent unbiased coin flips from a correlated biased source: a finite state Markov chain", Combinatorica, vol. 6, pp. 97-108, 1986.
[9] M. Santha and U. V. Vazirani, "Generating quasi-random sequences from semi-random sources", Journal of Computer and System Sciences, vol. 33, pp. 75-87, 1986.
[10] D. Zuckerman, "General weak random sources", in Proceedings of the IEEE Symposium on Foundations of Computer Science (FOCS), pp. 534-543, 1990.
[11] R. Shaltiel, "Recent developments in explicit constructions of extractors", Bulletin of the European Association for Theoretical Computer Science, vol. 77, pp. 67-95, 2002.
[12] Z. Dvir and A. Wigderson, "Kakeya sets, new mergers and older extractors", in Proceedings of the IEEE Symposium on Foundations of Computer Science (FOCS), 2008.
[13] V. Guruswami, C. Umans, and S. Vadhan, "Unbalanced expanders and randomness extractors from Parvaresh-Vardy codes", in Proceedings of the Twenty-Second Annual IEEE Conference on Computational Complexity (CCC), pp. 96-108, 2007.
[14] A. Rao, "Randomness extractors for independent sources and applications", Ph.D. Thesis, Department of Computer Science, University of Texas at Austin, 2007.
[15] R. Raz, "Extractors with weak random seeds", in Proceedings of the 37th Annual ACM Symposium on Theory of Computing (STOC), pp. 11-20, 2005.
[16] B. Barak, R. Impagliazzo, and A. Wigderson, "Extracting randomness using few independent sources", SIAM Journal on Computing, vol. 36, pp. 1095-1118, 2006.
[17] A. Cohen and A. Wigderson, "Dispersers, deterministic amplification, and weak random sources", in Proceedings of the 30th Annual IEEE Symposium on Foundations of Computer Science (FOCS), 1989.
[18] J. Kamp and D. Zuckerman, "Deterministic extractors for bit-fixing sources and exposure-resilient cryptography", SIAM Journal on Computing, vol. 36, pp. 1231-1247, 2006.
[19] A. Gabizon, R. Raz, and R. Shaltiel, "Deterministic extractors for bit-fixing sources by obtaining an independent seed", SIAM Journal on Computing, vol. 36, pp. 1072-1094, 2006.
[20] L. Trevisan and S. P. Vadhan, "Extracting randomness from samplable distributions", in Proceedings of the 41st Annual IEEE Symposium on Foundations of Computer Science (FOCS), pp. 32-42, 2000.
[21] J. Kamp, A. Rao, S. Vadhan and D. Zuckerman, "Deterministic extractors for small-space sources", Journal of Computer and System Sciences, vol. 77, pp. 191-220, 2011.
[22] B. Jun and P. Kocher, "The Intel random number generator", 1999. http://www.cryptography.com/resources/whitepapers/IntelRNG.pdf
[23] L. Zhuo and V. K. Prasanna, "Sparse matrix-vector multiplication on FPGAs", in Proceedings of the 2005 ACM/SIGDA 13th International Symposium on Field-Programmable Gate Arrays, pp. 63-74, 2005.
[24] U. V. Vazirani, "Efficiency considerations in using semi-random sources", in Proceedings of the 19th Annual ACM Symposium on Theory of Computing (STOC), pp. 160-168, 1987.
[25] P. Lacharme, "Post processing functions for a biased physical random number generator", in Proceedings of FSE, vol. 5086, pp. 334-342, 2008.
[26] F. J. MacWilliams and N. J. A. Sloane, The Theory of Error-Correcting Codes, Elsevier/North-Holland, 1977.

