Primality Testing, Integer Factorization, and Discrete Logarithms

Theodoulos Garefalakis
Department of Computer Science, University of Toronto Toronto, Ontario M5S 1A4, Canada
[email protected]

March 13, 1998
1 Introduction

Computational number theory has been a subject of study since ancient times. Interest in the subject was renewed, and increased considerably, during the last two decades because of the applications it found in public key cryptography. Apart from being interesting on a purely intellectual level, public key cryptosystems have several advantages over traditional cryptosystems [34] that make them appealing, particularly for applications where the volume of information that has to be transmitted is large. An obvious requirement of a good cryptographic system is that messages should be easy to encrypt and decrypt for legitimate users, but decryption should be hard for everyone else. Public key cryptosystems base their security on (apparently) computationally hard problems. Number theory turned out to be an excellent source of such problems. For example, the integer factoring problem is the backbone of the RSA cryptosystem [47], and exponentiation ciphers like the Diffie-Hellman key exchange scheme [14, 15] are based on the discrete logarithm problem. Other problems, like primality testing and irreducibility testing (for polynomials over finite fields), are also essential for the implementation of the above cryptosystems. Although the cryptosystems mentioned here are believed to be secure, they have not been proved so. The security of these systems depends on our current inability to efficiently solve certain number theoretic problems. Therefore, it becomes important to benchmark how hard these problems are at present. The purpose of this paper is to survey some historical and modern methods for primality testing, integer factorization, and the discrete logarithm problem, and point
out some theoretical questions related to the algorithms. Our main concern will be the rigor of the running time bounds and the error estimates of the algorithms, as well as the analysis of the algorithms on average (over the inputs). The paper is organized as follows. In Section 2, we define the problem of primality testing and present three somewhat different approaches. In Section 3, we present some integer factoring methods. In Section 4, we discuss the discrete logarithm problem over finite fields, and present the index calculus method and some variants. We conclude with Section 5, where we discuss some analytic techniques that have been useful in the study of these three problems (and other related problems), and suggest some further work.
2 Primality Testing

In this section we discuss algorithms that address the question "given an integer n, is n prime or composite?" Such algorithms are referred to as primality tests if the answer "n is prime" is given with certainty, and as compositeness tests if the answer "n is composite" is never wrong. Abusing this definition, we will sometimes refer to both types of tests as primality tests (a very common abuse in the literature). Let us start by defining the object of interest. An integer n > 1 is prime if and only if n is divisible only by 1 and itself. The definition suggests the first primality test, trial division. If n is not a prime then it has a prime factor no bigger than √n. The test is deterministic, and therefore always produces the right answer, but it has running time exponential in the size of the input (√n operations). The trial division algorithm does more than testing for primality: it actually tries to produce a factor of n. It is currently believed that primality testing and integer factorization are quite different problems, and modern primality tests try to exploit this fact. A satisfactory primality test should have the following properties:
- correctness (the algorithm should always give the correct answer)
- generality (the algorithm should work for all numbers, not only for certain types)
- speed (the algorithm should run in polynomial time)

Currently, such a test does not exist.¹ So, three different approaches have been devised to handle the problem. Each approach gives up exactly one of the above desirable properties. We will discuss examples of each approach.

¹ A randomized polynomial time test does exist, but it is only of theoretical interest. Whether a deterministic polynomial time primality test exists remains an open question.
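As a baseline, the trial division test mentioned above can be sketched in a few lines (a Python sketch; it enjoys correctness and generality, but not speed):

```python
from math import isqrt

def is_prime_trial_division(n: int) -> bool:
    """Deterministic primality test: a composite n has a prime factor <= sqrt(n)."""
    if n < 2:
        return False
    for d in range(2, isqrt(n) + 1):
        if n % d == 0:
            return False
    return True
```

The loop performs about √n divisions, which is exponential in the bit length of n.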
2.1 Probabilistic and ERH-Based Methods

In the first approach, we give up correctness. An algorithm of this type is either probabilistic, and produces the correct answer with high probability (but not always), or it is deterministic, but its correctness relies on unproven (but widely believed) conjectures, namely the Extended Riemann Hypothesis (ERH). In both cases these algorithms do not qualify as solutions to the problem at hand, but since they are fast, they may be suitable for practical purposes. The simplest test in this approach is the Fermat test, which is based on Fermat's little theorem.
Theorem 1 If p is a prime, and a an integer with (a, p) = 1, then a^(p-1) ≡ 1 (mod p).
Using this theorem one can often (in fact most of the time) prove that a number n is not prime by simply finding a base a such that a^(n-1) ≢ 1 (mod n). If such a base a exists, it is called a Fermat witness for n, and one can hope to discover it by randomly choosing bases 2 ≤ a ≤ n - 1. The converse of Fermat's little theorem, however, is not true. Therefore, the Fermat test is certainly not a primality test (it does not prove the primality of n). Unfortunately, it does not even qualify as a compositeness test, since there exist composite numbers for which no Fermat witness coprime to n exists. Such numbers are known as Carmichael numbers, and it was recently proved by Alford, Granville, and Pomerance [4] that there are infinitely many of them. The test, however, is not necessarily doomed. If the number to be tested is chosen at random in the range from 1 to x, then the probability of failure tends to zero as x → ∞, even if we only test the base a = 2 [39, 40, 23, 16]. There is a straightforward way to strengthen the Fermat test. The underlying theorem is Euler's criterion.
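A sketch of the Fermat test with random bases (Python; the number of rounds is an illustrative choice). A False answer is always correct, while True only means "no witness was found":

```python
import random

def fermat_test(n: int, rounds: int = 20) -> bool:
    """Fermat compositeness test: False means n is certainly composite;
    True means "probable prime" -- Carmichael numbers can slip through."""
    if n < 4:
        return n in (2, 3)
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return False  # a is a Fermat witness for n
    return True
```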
Theorem 2 If p is an odd prime and (a, p) = 1, then a^((p-1)/2) ≡ (a/p) (mod p), where (a/p) denotes the Legendre symbol.
Based on this theorem is the Solovay-Strassen test, which tests the congruence of Theorem 2 for a random a, 1 ≤ a ≤ n - 1, with the Legendre symbol replaced by the Jacobi symbol (a/n), which can be computed without knowing the factorization of n. The probabilistic Solovay-Strassen test is a one-sided error Monte Carlo algorithm that uses O((lg n)^3) bit operations on input n. The probability of success is at least 1/2 for every input. It seems plausible that the probability of success is much greater when the input n is taken at random. No such analysis, however, is known. A similar probabilistic test was devised by Rabin [46], and independently by Miller [32], based on the following theorem.
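The Solovay-Strassen test can be sketched as follows (Python; the `jacobi` helper implements the standard quadratic-reciprocity algorithm for the Jacobi symbol, and the round count is an illustrative choice):

```python
import random

def jacobi(a: int, n: int) -> int:
    """Jacobi symbol (a/n) for odd n > 0, via quadratic reciprocity."""
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:           # pull out factors of 2: (2/n)
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                 # reciprocity step
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def solovay_strassen(n: int, rounds: int = 20) -> bool:
    """Checks Euler's criterion a^((n-1)/2) ≡ (a/n) (mod n) for random a."""
    if n < 2 or n % 2 == 0:
        return n == 2
    for _ in range(rounds):
        a = random.randrange(2, n)
        j = jacobi(a, n) % n        # represent -1 as n - 1
        if j == 0 or pow(a, (n - 1) // 2, n) != j:
            return False            # n is certainly composite
    return True                     # error probability at most 2^-rounds
```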
Theorem 3 Let n be odd and write n - 1 = 2^s d, with d odd. If n is prime, then every integer a ∈ {1, ..., n - 1} satisfies a^d ≡ 1 (mod n) or a^(2^i d) ≡ -1 (mod n) for some i with 0 ≤ i < s.
The test again picks a at random and checks whether the conditions of the theorem are satisfied. The Rabin test also runs in time O((lg n)^3), and has probability of success at least 3/4 for every input. Furthermore, any composite number that successfully passes the Miller-Rabin test also passes the Solovay-Strassen test. In that sense the Miller-Rabin test completely supersedes the Solovay-Strassen test. We note that it is possible to make the last two tests deterministic, under the assumption of the ERH. This is based on a theorem by Ankeny regarding the least quadratic non-residue modulo a prime [5]. His theorem implies (see [7]) that if n is composite, then (assuming the ERH) a witness belongs to a set of size 2(log n)^2, which one can search deterministically. The algorithm uses O((lg n)^5) bit operations. The Rabin test can be made deterministic in a very similar manner.
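The Miller-Rabin test is a direct transcription of Theorem 3 (a Python sketch; the round count is illustrative):

```python
import random

def miller_rabin(n: int, rounds: int = 20) -> bool:
    """Miller-Rabin compositeness test; error probability <= 4^-rounds."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    s, d = 0, n - 1                 # write n - 1 = 2^s * d, d odd
    while d % 2 == 0:
        s += 1
        d //= 2
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):      # square up to a^(2^i d)
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False            # a witnesses that n is composite
    return True
```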
2.2 Special Tests

We turn our attention now to the second approach, where we give up generality. The algorithms that fall into this category either assume that some additional information about the input is given, or work only for some special types of input. All of the tests in this section are based on the following theorem of Kraitchik and Lehmer [27].
Theorem 4 Let n > 1 be an integer. Then n is prime if and only if there exists an integer a such that a^(n-1) ≡ 1 (mod n) and a^((n-1)/q) ≢ 1 (mod n) for all primes q | n - 1.
This leads to a straightforward algorithm that proves the primality of every prime n, given the complete factorization of n - 1. It chooses an integer a uniformly at random from 1 up to n - 1, and checks whether the conditions of the theorem are satisfied. The expected number of bit operations used is O((lg n)^4). However, this algorithm diverges if the input number n is composite. Fellows and Koblitz [17] recently showed one way to overcome this unpleasant feature. They presented a deterministic algorithm that, given the complete factorization of n - 1, determines whether n is prime or composite in time O((lg n)^6 / lg lg n). We note that analogous tests exist in the case that the complete factorization of n + 1 (instead of n - 1) is given. Of special interest in computational number theory are numbers of some special forms, such as M_p = 2^p - 1 (Mersenne numbers), F_k = 2^(2^k) + 1 (Fermat numbers), and numbers of the form
n = f · 2^s + 1. We briefly mention that for each of the above types deterministic polynomial time algorithms are known, namely the Lucas-Lehmer test for Mersenne numbers, Pepin's test for Fermat numbers, and Proth's test for numbers of the form n = f · 2^s + 1.
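Of these, the Lucas-Lehmer test is short enough to sketch completely (Python; p is assumed to be an odd prime):

```python
def lucas_lehmer(p: int) -> bool:
    """M_p = 2^p - 1 is prime iff s_{p-2} ≡ 0 (mod M_p), where
    s_0 = 4 and s_{k+1} = s_k^2 - 2.  Assumes p is an odd prime."""
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0
```

For example, p = 3 gives M_3 = 7 and s_1 = 14 ≡ 0 (mod 7), certifying that 7 is prime.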
2.3 Unconditional Primality Tests

In this section we are interested in algorithms that prove the primality of a number of no special form, without any assumptions. These primality tests are based on algebraic number theory, and were developed over the past 15 years. The first test in this category is the cyclotomic rings test, due to Adleman, Pomerance, and Rumely (also known as the APR test). In [3], the authors give both a deterministic and a randomized (Las Vegas type) version, and prove that they run in time (log n)^(O(log log log n)). A more practical version of the above test, which uses Jacobi sums instead of Gauss sums, was discovered by Cohen and Lenstra [11]. The Jacobi sum test also has the above time bound, and can prove the primality of numbers of 100-200 digits in a matter of seconds on a single Cray X-MP processor. An interesting (but unpleasant) feature of the Jacobi sum test is that it does not provide any certificate of the primality of the number, and the only way to verify the answer is to repeat the test. Another modern primality test is the random curve test, due to Goldwasser and Kilian [20]. It is based on the use of elliptic curves over finite fields, and is probabilistic in nature. Unfortunately, if the input number n is composite the test may diverge. However, it will never give the wrong answer, and furthermore, it provides a certificate of the primality of the number that can be verified much faster than it can be found. The algorithm has been analyzed only under heuristic assumptions, and the conjectured running time is O((log n)^12). We conclude this section with an algorithm discovered by Adleman and Huang [2], which is one of the major achievements in theoretical algorithmic number theory. Their algorithm, called the Abelian variety test, is a randomized algorithm (Las Vegas type), and provably runs in expected polynomial time O((log n)^6).
Although their algorithm is totally impractical, it remains the only primality test that can be proved to run in polynomial time. An excellent treatment of theoretical and algorithmic aspects of primality testing can be found in [7]. The Jacobi sum test and the elliptic curve test are presented in detail in [10]. A good survey of the subject can be found in [28].
3 Integer Factorization

We turn now to the problem of integer factorization. Given a composite integer n, a factorization of n is understood to be a complete factorization of n into primes. Most factoring methods will not in general give the complete factorization, but rather a non-trivial divisor d (i.e., 1 < d < n) of n. One then applies the algorithm recursively on the two pieces d and n/d. Finding a non-trivial divisor of n is called splitting n, or sometimes, by abuse of language, factoring n.
3.1 Early Methods

We start our discussion with the simplest algorithm, namely trial division (see [24, 10]). In a worst-case setting, the algorithm takes O(n^(1/2)) arithmetic operations to factor n completely. It is interesting that the "average" performance of the algorithm is provably better. In particular, Knuth and Trabb-Pardo [25] proved that at least half of the integers in the interval [1, n] will be factored in time O(n^0.35). Before turning to modern, more powerful factoring algorithms, we will discuss an algorithm proposed by Pollard, the (p - 1)-method. The efficiency of the method depends on the nature of the prime factors of n. In order to be more precise, we will need a definition.
Definition 1 Let y be a positive integer. An integer n is defined to be y-smooth if all its prime factors are ≤ y. n is defined to be y-powersmooth if all the prime powers dividing n are ≤ y.

Assume now that n has a prime factor p such that p - 1 is y-powersmooth, and let E = E(y) be the least common multiple of the positive integers ≤ y. Then p - 1 | E, and therefore, by Fermat's little theorem, we have a^E ≡ 1 (mod p) for any a coprime to n, which implies that (a^E - 1, n) > 1. Note that the above GCD is not necessarily non-trivial. For example, if n = pq, and both p - 1 and q - 1 are y-powersmooth, then the GCD will be equal to n. If, however, the GCD is tested for increasing values of y, then it is highly improbable² that this will happen. Since n will have at least one prime factor ≤ √n, we only need to consider values of y up to √n, and thus the worst case complexity of the method is O(n^(1/2+ε)), no better than trial division. It can also be shown that the bound is tight (let n = (2p + 1)(2q + 1), with p, q, 2p + 1, and 2q + 1 primes). However, one would expect that n has a prime factor smoother than the above example. Indeed, Pomerance and Sorenson [45] were able to prove that the (p - 1)-method can factor more numbers in a given
² This seems to be an empirical observation. No formal proof of this statement has come to our attention.
amount of time than the trial division algorithm. Nothing is known about the average number of operations needed by the (p - 1)-method to split a number n, when n is taken at random.
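The (p - 1)-method itself is only a few lines. The Python sketch below uses the common simplification of exponentiating by k = 2, 3, ..., B in turn, so that the accumulated exponent B! is a multiple of E(B); the bound B and the base 2 are illustrative choices:

```python
from math import gcd

def pollard_p_minus_1(n: int, bound: int = 10000):
    """Returns a non-trivial divisor of n if some prime p | n has
    p - 1 powersmooth with respect to the bound; None otherwise."""
    a = 2
    for k in range(2, bound + 1):
        a = pow(a, k, n)            # now a = 2^(k!) mod n
        d = gcd(a - 1, n)
        if 1 < d < n:
            return d                # split!
        if d == n:
            return None             # all prime factors collapsed at once
    return None
```

For n = 299 = 13 · 23, the factor 13 appears as soon as the accumulated exponent is divisible by 12 = 13 - 1, i.e., at k = 4.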
3.2 Modern Methods

In this section, we focus on some of the latest and most powerful factoring algorithms known. They can be divided into two groups. The first is called "combination of congruences", while the second might be called "groups of smooth order". Typical members of the "combination of congruences" camp are the continued fraction method, the quadratic sieve, and the number field sieve. To the "groups of smooth order" camp belong the (p - 1)-method and the elliptic curve method. Suppose we want to factor n. The goal of all algorithms in the first camp is to find x, y ∈ Z such that

x^2 ≡ y^2 (mod n)   (1)

but
x ≢ ±y (mod n)   (2)
Then from (1) we have n | (x + y)(x - y), and from (2) we know that n ∤ (x + y) and n ∤ (x - y). Hence, d = (n, x - y) is a proper divisor of n, and we have managed to split n. But how can we generate x and y? The idea is the following. Let F be the set of the first m primes. We will call F the factor base. The next step is to determine congruences
x_i^2 ≡ z_i (mod n)   (3)

such that z_i factors completely over F. Suppose we can find m + 1 such congruences. Then for the corresponding z_i's we have

z_i = ∏_{j=1}^{m} p_j^{e_ij}.

For each i, consider the exponent vector reduced modulo 2, i.e.,

v(z_i) = (e_i1, ..., e_im) mod 2,   1 ≤ i ≤ m + 1.
These vectors lie in an m-dimensional vector space over Z/(2), and therefore the m + 1 vectors must have at least one linear dependency:

v(z_{i_1}) + ... + v(z_{i_k}) ≡ (0, ..., 0) (mod 2),

which means that

z = z_{i_1} · · · z_{i_k} = ∏_{j=1}^{m} p_j^(e_{i_1 j} + ... + e_{i_k j}).
Since all the exponents are even, z is a square, and we have managed to generate a congruence like (1), and we can hope that (2) will also hold, so that we will be able to split n. This is the common idea behind the algorithms of this class. Their differences lie in the way they search for smooth squares modulo n. Dixon's random squares method picks x ∈ {1, ..., n - 1} at random and then checks whether x^2 mod n is smooth. The continued fraction method relies on properties of continued fraction expansions. The quadratic sieve of Pomerance [41] uses quadratic polynomials. Finally, the most recent development, the number field sieve, makes use of algebraic number fields [29]. The modern factoring algorithms have, in their majority, been analyzed under heuristic assumptions. Under such heuristic arguments, the quadratic sieve runs in time L(n), where L(n) = exp((1 + o(1))(log n log log n)^(1/2)), while the number field sieve is conjectured to run asymptotically much faster, in time exp(c (log n)^(1/3) (log log n)^(2/3)). Some "combination of congruences" algorithms have been rigorously analyzed in [42, 48], the best rigorous bound being L(n). The algorithms in the "groups of smooth order" class were inspired by the (p - 1)-method, and explored further the idea that if p - 1 is smooth, i.e., F_p^* has smooth order, then we can factor. The elliptic curve method, due to Lenstra [22], tries to overcome the fact that it is unlikely for F_p^* to have a small smoothness bound, by shifting between different groups until one is found with appropriately smooth order. The algorithm has not been analyzed rigorously. Heuristic arguments support the conjectured running time L(n). An excellent presentation of the early factoring methods can be found in [24]. Early and modern methods, with emphasis on the latter, are surveyed in [10]. A very comprehensive presentation and heuristic analysis of the quadratic sieve and the elliptic curve method can be found in [43].
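The "combination of congruences" scheme can be illustrated with a toy version of Dixon's random squares method (Python). The small factor base and the brute-force search for a GF(2) dependency are for illustration only; real implementations sieve for relations and use (sparse) Gaussian elimination:

```python
import random
from itertools import combinations
from math import gcd

def smooth_exponents(z, base):
    """Exponent vector of z over the factor base, or None if z is not smooth."""
    exps = []
    for p in base:
        e = 0
        while z % p == 0:
            z //= p
            e += 1
        exps.append(e)
    return exps if z == 1 else None

def dixon_split(n, base=(2, 3, 5, 7, 11, 13)):
    """Collect relations x_i^2 ≡ z_i (mod n) with z_i smooth, combine a
    subset whose exponent vectors vanish mod 2 into x^2 ≡ y^2 (mod n),
    and hope gcd(x - y, n) is a proper divisor."""
    rels = []
    while True:
        x = random.randrange(2, n)
        vec = smooth_exponents(x * x % n, base)
        if vec is None:
            continue
        rels.append((x, vec))
        if len(rels) <= len(base):
            continue
        # m + 1 vectors in an m-dimensional space over Z/(2) are dependent;
        # a brute-force search over subsets is fine at this toy scale.
        for r in range(1, len(rels) + 1):
            for sub in combinations(rels, r):
                if any(sum(v[j] for _, v in sub) % 2 for j in range(len(base))):
                    continue
                xs, ys = 1, 1
                for xi, _ in sub:
                    xs = xs * xi % n
                for j, p in enumerate(base):
                    ys = ys * pow(p, sum(v[j] for _, v in sub) // 2, n) % n
                d = gcd(xs - ys, n)
                if 1 < d < n:
                    return d
        rels = rels[1:]   # only trivial splits found; refresh the relation pool
```

On the textbook example n = 84923 this finds a proper divisor after a few thousand random trials.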
4 Discrete Logarithms

In this section we consider the problem of computing discrete logarithms. Let G be a group (written multiplicatively), g ∈ G, and let ⟨g⟩ be the subgroup generated by g. The problem of the discrete logarithm for the group G may be stated as follows. Given g ∈ G and a ∈ ⟨g⟩, find an integer x such that g^x = a.
We will be interested in the special case where G is the multiplicative group of a finite field, i.e., G = F_q^*. Throughout this section, g will denote a primitive element of F_q. We start our discussion with an algorithm by Pohlig and Hellman [37], which computes discrete logarithms over F_q using O(√p) operations, where p is the largest prime factor of q - 1. Although this is, in the worst case, exponential in log q (for example, when q - 1 has a prime factor comparable to q), the algorithm will compute any discrete logarithm in F_q quite rapidly if q - 1 is smooth. This particular observation makes the finite fields with smooth order insecure for cryptographic applications.

We turn now to the basic index calculus method, which is the first subexponential algorithm for the problem, and provided the basis for several more sophisticated algorithms. The idea first appeared in the work of Kraitchik [26], and was rediscovered and analyzed by Adleman [1], Merkle [31], and Pollard [38]. Our presentation will be for finite fields F_q with q = p^n, n > 1, a prime power, i.e., the elements of F_q will be polynomials over F_p of degree < n. The basic method, however, works also in the case that q is a prime. Let f be a monic, irreducible polynomial of degree n over F_p, and let g be a primitive element of F_q. Also let S_m denote the set of all irreducible polynomials over F_p of degree ≤ m, where m is a parameter of the algorithm. The algorithm is probabilistic in nature, and consists of two phases. In the first phase, one constructs a large database of the discrete logarithms of all polynomials in S_m. In the second phase one computes the actual discrete logarithm. It should be noted that the first phase is considerably more time consuming, but is done only once. After the database is constructed, one can compute any discrete logarithm over the particular field fairly fast (but still not in polynomial time). The basic idea for the first phase is to choose a random integer s, 1 ≤ s ≤ q - 1, form the polynomial

h ≡ g^s (mod f),   deg h < n,

and check whether h factors into irreducible factors from S_m. If not, discard it, and try another exponent s. If it does, say

h = ∏_{v ∈ S_m} v^(b_v(h)),

then we obtain the congruence

s ≡ ∑_{v ∈ S_m} b_v(h) log_g v (mod q - 1).

Once we obtain slightly more than #S_m such congruences, we expect to be able to solve the system and determine the log_g v.
In the second phase, we compute the discrete logarithm of a given polynomial h. First, we choose a random number s, 1 ≤ s ≤ q - 1, and compute

h_1 ≡ h g^s (mod f),   deg h_1 < n.   (4)

If the reduced polynomial h_1 is m-smooth, then all its irreducible factors are in S_m, so that

h_1 ≡ h g^s ≡ ∏_{v ∈ S_m} v^(b_v(h_1)) (mod f),

and we obtain

log_g h ≡ -s + ∑_{v ∈ S_m} b_v(h_1) log_g v (mod q - 1).
For the basic version, as presented above, rigorous upper bounds for the running time of the second phase have been obtained. If we let M = M(n) = exp((1 + o(1))(n log n)^(1/2)), then the time required by the second phase is M^0.588.... Heuristic arguments suggest that the first phase can be carried out in time M^1.176... [33]. Several variants of the above method have appeared in the literature. Their focus is to make the algorithm more efficient by increasing the probability that the polynomial used in both phases factors completely into irreducible polynomials in S_m. On the basis of heuristic arguments, one can show that these variants are indeed asymptotically faster. However, none of them has been rigorously analyzed. Furthermore, it is not clear whether they can be modified to work on prime fields F_p. Perhaps the most important of all is the Coppersmith method [12], which works for fields F_(2^n), and is conjectured to have asymptotic running time of the form exp((c + o(1)) n^(1/3) (log n)^(2/3)), where c = 1.351... for the first phase, and c = 1.098... for the second. Of special interest is also the variant by Blake et al. [8], the "Waterloo algorithm", for two reasons: it is one of the algorithms that is actually used in practice, and it does not make any assumptions about the irreducible polynomial f used to define F_q. The Blake et al. change to the basic method affects the running time of the second phase (the actual computation of the discrete logarithm). Indeed, instead of factoring the polynomial defined by Equation (4), call it h_1, they find two polynomials w_1 and w_2 such that

w_1 h_1 ≡ w_2 (mod f),

with deg w_i ≈ n/2 for i = 1, 2. These polynomials can be found very fast using the extended Euclidean algorithm on h_1 and f. Once this is done, the w_i are factored, and if they both
are divisible only by irreducibles from S_m, say

w_i = ∏_{v ∈ S_m} v^(c_v(w_i)),

then the discrete logarithm is computed as

log_g h ≡ -s + ∑_{v ∈ S_m} (c_v(w_2) - c_v(w_1)) log_g v (mod q - 1).
The idea behind this approach is that if w_1, w_2 behave like independently chosen random polynomials of degree ≈ n/2, then the probability that both are m-smooth is greater than the probability of a single polynomial of degree n being m-smooth. Discrete logarithms over prime fields seem to be harder to compute. The only rigorous subexponential algorithm is the basic index calculus algorithm, which applies to prime fields unchanged. If we let L = L(p) = exp((1 + o(1))(log p log log p)^(1/2)), then the running time for the first phase is L^√2, and for the second phase L^(1/√2) (see [42]). Coppersmith, Odlyzko, and Schroeppel [13] proposed several algorithms, and provided heuristic arguments that they run in time L, while the second phase can be used to compute individual logarithms in time L^(1/2). The question whether an algorithm with provable running time L exists remains open. An excellent survey of discrete logarithms and their cryptographic significance is [33]. Another good survey, focusing on computational complexity issues, is [30].
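To make the Pohlig-Hellman observation from the beginning of this section concrete, here is a sketch over a prime field (Python). The factorization of p - 1 is done by trial division, and each prime-power subproblem is solved by baby-step giant-step, which is where the O(√q) cost for the largest prime q | p - 1 comes from; the helper names are ours:

```python
from math import isqrt

def factorize(n):
    """Prime factorization by trial division: returns [(q, e), ...]."""
    fs, d = [], 2
    while d * d <= n:
        e = 0
        while n % d == 0:
            n //= d
            e += 1
        if e:
            fs.append((d, e))
        d += 1
    if n > 1:
        fs.append((n, 1))
    return fs

def bsgs(g, a, p, order):
    """Baby-step giant-step: x with g^x ≡ a (mod p), 0 <= x < order."""
    m = isqrt(order) + 1
    table = {pow(g, j, p): j for j in range(m)}   # baby steps g^j
    step = pow(g, -m, p)                          # g^{-m} mod p (Python 3.8+)
    gamma = a % p
    for i in range(m + 1):                        # giant steps a * g^{-im}
        if gamma in table:
            return i * m + table[gamma]
        gamma = gamma * step % p
    return None

def pohlig_hellman(g, a, p):
    """Solve g^x ≡ a (mod p) for a primitive root g of F_p^*: project into
    each subgroup of prime-power order q^e | p - 1, solve there by BSGS,
    and recombine the residues with the Chinese remainder theorem."""
    n = p - 1
    x, mod = 0, 1
    for q, e in factorize(n):
        qe = q ** e
        xi = bsgs(pow(g, n // qe, p), pow(a, n // qe, p), p, qe)
        # CRT step: combine x ≡ xi (mod qe) with the solution so far
        x = (x + mod * ((xi - x) * pow(mod, -1, qe) % qe)) % (mod * qe)
        mod *= qe
    return x
```

For example, with the primitive root 2 of F_101 (where 101 - 1 = 2^2 · 5^2), the two subproblems have orders 4 and 25, each solved with only a handful of group operations.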
5 Analytic Techniques

We discuss now some techniques that have proved useful in problems related to those presented here. All the techniques are analytic, and come from two different fields: analytic number theory and analytic combinatorics. The work of Pomerance and collaborators on the distribution of pseudoprimes [39, 40, 23] has proved what was observed in practice: that the Fermat test is rarely mistaken. Using similar techniques, it seems possible to obtain similar results for other types of pseudoprimes, e.g., Euler pseudoprimes, which would lead to the average case study of stronger primality tests. The latest developments in algorithms for integer factorization and discrete logarithms make use of smooth numbers in a very essential way. This connection is demonstrated in [44]. Smooth numbers have been extensively studied in analytic number theory. A good survey of the existing results is that of Hildebrand and Tenenbaum [21]. The mirror image of smooth numbers are the numbers without small prime factors. The related Buchstab function has also been studied in the
context of number theory. Following the work of Canfield [9], Garefalakis, Panario, and Richmond showed that the asymptotic behavior of the Buchstab function can be studied using techniques from analytic combinatorics [19]. The work of Panario and Richmond [36], Flajolet, Gourdon, and Panario [18], and Panario, Gourdon, and Flajolet [35] suggests other connections between number theory and combinatorics. Furthermore, the Dickman and Buchstab functions, which were discovered in the context of analytic number theory, appear also in the context of combinatorial decomposable structures, e.g., polynomials over finite fields and permutations. In a recent paper [6], Bach and Peralta refined the notion of smooth numbers and studied what they called semismooth numbers. It seems plausible that the techniques used in [9, 19] can be used to analyze asymptotically the recursive relations presented in [6]. It is not unreasonable to expect that similar results can be proved for combinatorial structures. Such a refinement might give some further insight into the factoring and discrete logarithm problems, and provide the technical tools to devise improved versions of the existing algorithms.
References

[1] L. M. Adleman. A subexponential algorithm for the discrete logarithm problem with applications to cryptography. In Proc. 20th IEEE Found. Comp. Sci. Symp., pages 55-60, 1979.
[2] L. M. Adleman and M. Huang. Primality testing and abelian varieties over finite fields. Lecture Notes in Math., 1512, 1992.
[3] L. M. Adleman, C. Pomerance, and R. S. Rumely. On distinguishing prime numbers from composite numbers. Ann. Math., 117:173-206, 1983.
[4] W. R. Alford, A. Granville, and C. Pomerance. There are infinitely many Carmichael numbers. Ann. Math., 140:703-722, 1994.
[5] N. C. Ankeny. The least quadratic non residue. Ann. Math., 55:65-72, 1952.
[6] E. Bach and R. Peralta. Asymptotic semismoothness probabilities. Math. Comp., 65:1701-1715, 1996.
[7] E. Bach and J. Shallit. Algorithmic Number Theory. The MIT Press, Cambridge, MA, 1996.
[8] I. F. Blake, R. Fuji-Hara, R. C. Mullin, and S. A. Vanstone. Computing logarithms in finite fields of characteristic two. SIAM J. Alg. Disc. Methods, 5:276-285, 1985.
[9] E. R. Canfield. The asymptotic behavior of the Dickman-de Bruijn function. Congressus Numerantium, 35:139-148, 1982.
[10] H. Cohen. A Course in Computational Algebraic Number Theory. Springer-Verlag, Berlin Heidelberg, 1996.
[11] H. Cohen and H. W. Lenstra Jr. Primality testing using Jacobi sums. Math. Comp., 42:297-330, 1984.
[12] D. Coppersmith. Fast evaluation of logarithms in fields of characteristic two. IEEE Trans. Inform. Theory, IT-30:587-594, 1984.
[13] D. Coppersmith, A. M. Odlyzko, and R. Schroeppel. Discrete logarithms in GF(p). Algorithmica, 1:1-15, 1986.
[14] W. Diffie and M. Hellman. New directions in cryptography. IEEE Trans. Inform. Theory, 22:644-654, 1976.
[15] T. ElGamal. A public key cryptosystem and a signature scheme based on discrete logarithms. IEEE Trans. Inform. Theory, 31:469-472, 1985.
[16] P. Erdős and C. Pomerance. On the number of false witnesses for a composite number. Math. Comp., 46:259-279, 1986.
[17] M. R. Fellows and N. Koblitz. Self-witnessing polynomial-time complexity and prime factorization. Codes and Cryptography, 2:231-235, 1992.
[18] P. Flajolet, X. Gourdon, and D. Panario. The complete analysis of a polynomial factorization algorithm over finite fields. Submitted to SIAM J. on Computing (extended abstract in ICALP '96, Lecture Notes in Comp. Sci., vol. 1099).
[19] T. Garefalakis, D. Panario, and B. Richmond. Asymptotics of some number-theoretic functions via combinatorial techniques. Work in progress.
[20] S. Goldwasser and J. Kilian. Almost all primes can be quickly certified. In Proc. 18th Annual ACM Symp. on Theory of Computing, pages 316-329, 1986.
[21] A. Hildebrand and G. Tenenbaum. Integers without large prime factors. J. de Théorie des Nombres de Bordeaux, 5:411-484, 1993.
[22] H. W. Lenstra Jr. Factoring integers with elliptic curves. Ann. Math., 126:649-673, 1987.
[23] S. H. Kim and C. Pomerance. The probability that a random probable prime is composite. Math. Comp., 53:721-741, 1989.
[24] D. E. Knuth. The Art of Computer Programming, vol. 2. Addison-Wesley, 1997.
[25] D. E. Knuth and L. Trabb Pardo. Analysis of a simple factorization algorithm. Theoret. Comp. Sci., 3:321-348, 1976.
[26] M. Kraitchik. Théorie des Nombres, vol. 1. Gauthier-Villars, Paris, 1922.
[27] D. H. Lehmer. Tests for primality by the converse of Fermat's theorem. Bull. Amer. Math. Soc., 33:327-340, 1927.
[28] A. K. Lenstra. Primality testing. Proc. of Symp. in Applied Math., 42:13-25, 1990.
[29] A. K. Lenstra, H. W. Lenstra Jr., M. S. Manasse, and J. M. Pollard. The number field sieve. Lecture Notes in Math., 1554:11-42. Springer-Verlag, Berlin, Heidelberg, New York, 1993.
[30] K. S. McCurley. The discrete logarithm problem. Proc. of Symp. in Applied Math., 42:49-74, 1990.
[31] R. Merkle. Secrecy, authentication, and public key systems. Ph.D. dissertation, Dept. of Electrical Engineering, Stanford Univ., 1979.
[32] G. Miller. Riemann's hypothesis and tests for primality. J. Comp. System Sci., 13:300-317, 1976.
[33] A. M. Odlyzko. Discrete logarithms in finite fields and their cryptographic significance. In Advances in Cryptology: Proceedings of EUROCRYPT 84, Lecture Notes in Computer Science, 209:224-314. Springer-Verlag, 1985.
[34] A. M. Odlyzko. Public key cryptography. AT&T Tech. J., 73:17-23, 1994.
[35] D. Panario, X. Gourdon, and P. Flajolet. An analytic approach to smooth polynomials over finite fields. To appear in Proc. ANTS III, 1998.
[36] D. Panario and B. Richmond. Analysis of Ben-Or's polynomial irreducibility test. Submitted to Random Structures and Algorithms.
[37] S. C. Pohlig and M. Hellman. An improved algorithm for computing logarithms over GF(p) and its cryptographic significance. IEEE Trans. Inform. Theory, IT-24:106-110, 1978.
[38] J. Pollard. Monte Carlo methods for index computations (mod p). Math. Comp., 32:918-924, 1978.
[39] C. Pomerance. On the distribution of pseudoprimes. Math. Comp., 37:587-593, 1981.
[40] C. Pomerance. A new lower bound for the pseudoprime counting function. Illinois J. Math., 26:4-9, 1982.
[41] C. Pomerance. The quadratic sieve factoring algorithm. In Advances in Cryptology: Proceedings of EUROCRYPT 84, Lecture Notes in Computer Science, 209:67-79. Springer-Verlag, 1985.
[42] C. Pomerance. Fast, rigorous factorization and discrete logarithm algorithms. In Discrete Algorithms and Complexity, Proc. of the Japan-US Joint Seminar, pages 119-143. Academic Press, 1986.
[43] C. Pomerance. Factoring. Proc. of Symp. in Applied Math., 42:27-47, 1990.
[44] C. Pomerance. The role of smooth numbers in number theoretic algorithms. In Proc. International Congress of Mathematicians, pages 411-422, 1994.
[45] C. Pomerance and J. Sorenson. Counting the integers factorable via cyclotomic methods. J. of Algorithms, 19:250-265, 1995.
[46] M. O. Rabin. Probabilistic algorithms for testing primality. J. Number Theory, 12:128-138, 1980.
[47] R. L. Rivest, A. Shamir, and L. M. Adleman. A method for obtaining digital signatures and public-key cryptosystems. Comm. ACM, 21:120-126, 1978.
[48] B. Vallée. Generation of elements with small modular squares and provably fast integer factoring algorithms. Math. Comp., 56:823-849, 1991.