
Probabilistic Algorithm for Testing Primality

Thursday, April 25th, 2019

MICHAEL O. RABIN¹

Institute of Mathematics, Hebrew University, Jerusalem, Israel, and Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
Communicated by H. Zassenhaus

Received December 10, 1977

We present a practical probabilistic algorithm for testing large numbers of arbitrary form for primality. The algorithm has the feature that when it determines a number composite then the result is always true, but when it asserts that a number is prime there is a provably small probability of error. The algorithm was used to generate large numbers asserted to be primes of arbitrary and special forms, including very large numbers asserted to be twin primes. Theoretical foundations as well as details of implementation and experimental results are given.

The problem of determining for given integers whether they are prime has occupied mathematicians for centuries. In the words of Gauss in his celebrated “Disquisitiones Arithmeticae” [2, p. 396]:

The problem of distinguishing prime numbers from composite numbers and of resolving the latter into their prime factors is known to be one of the most important and useful in arithmetic. It has engaged the industry and wisdom of ancient and modern geometers to such an extent that it would be superfluous to discuss the problem at length. Nevertheless we must confess that all methods that have been proposed thus far are either restricted to very special cases or are so laborious and prolix that even for numbers that do not exceed the limits of tables constructed by estimable men, i.e., for numbers that do not yield to artificial methods, they try the patience of even the practiced calculator. And these methods do not apply at all to larger numbers.

Gauss proceeds to give several interesting algorithms for the determination of primality. We do not intend to present here a survey or bibliography of the extensive literature concerning testing for primality. All the existing methods require for arbitrary integers n a number of steps O(n^e) where e is some fraction such as 1/3 or 1/4. Thus, experimental results indicate that around n = 10^65 each of these methods encounters some numbers untestable by it.
For integers of certain special forms, the most notable example being the so-called Mersenne numbers n = 2^p - 1, where p is a prime, there do exist rather rapid tests for primality. But arbitrary large integers defy even the most powerful computers, when tested by the conventional methods.
In this paper we present a primality test applicable to arbitrary and very large integers. If we count an arithmetical operation such as addition, multiplication, or division, involving integers 0 ≤ a, b ≤ n as one step then the test of primality of n requires in the worst case c(log2 n)^2 steps where c is about 100. Thus very large numbers are amenable to this test. In practice the number of steps in most cases is about c log2 n for n which are prime, and about 3 log2 n for numbers n that are composite even if they have no small divisors.
The salient features of our method are that it is probabilistic, i.e., uses randomization within the computation, and that it produces the answer with a certain controllable minuscule probability of error. To be more precise, with full details to be given later, the algorithm produces and employs certain random integers 0 < b1 ,..., bk < n. If the outcome of the whole test is that n is composite then it is always correct (barring computational error). If the outcome asserts that n is prime then it may sometimes be wrong. But the probability that a composite number will be erroneously asserted to be prime is smaller than 1/2^(2k). If, say, k = 50, then the probability of error is at most 1/2^100.
This last statement does not mean that an integer n asserted as prime by use of 50 random numbers is prime with probability at least 1 - 1/2^100. Such an interpretation is nonsensical since n is either prime or not. The correct meaning is that if the test is applied to m = 2^100 integers n1, n2 ,..., nm, then the expected number of wrong answers is one. These integers need not be pairwise different and no probability distribution on the integers to be tested for primality is assumed.
Thus when we say that an integer n was asserted to be prime by use of 50 random numbers, this is no proof that n is actually prime. What is stated is that n was asserted to be prime by a procedure that on the average will make no more than one mistake in 2^100 applications (even when testing the same n).
The test and an earlier, slightly weaker, form of the theorem underlying it appeared in [5]. A different probabilistic test for primality was given by Solovay and Strassen [6].
In Section 3 we discuss the implementation and application of the test to the generation of very large numbers asserted to be primes of arbitrary or of a desired prescribed form. Large primes are useful in exact arithmetical computations involving large integers. By computing modulo a prime, one can avoid the use of fractions. Also, recently evolved digitalized signature systems are based on the easy availability of large primes.
In the last section we present some of the experimental results which include examples of very large numbers asserted to be primes, twin primes, etc., very rapidly obtained on a medium-sized computer.

1. THE FUNDAMENTAL THEOREM

Throughout this paper (a, b) denotes the greatest common divisor (g.c.d.) of the integers a, b, and res(a, b) denotes the least nonnegative residue of a when divided by b. By b | a we denote the fact that b divides a, i.e., res(a, b) = 0.²

DEFINITION. Let n be an integer. Consider the following condition to be denoted by Wn(b), on an integer b:

(i) 1 ≤ b < n;
(ii)(a) b^(n-1) ≢ 1 mod n, or
(ii)(b) ∃i s.t.³ 2^i | (n - 1) and 1 < (b^((n-1)/2^i) - 1, n) < n.

Call such an integer b a witness to the compositeness of n.
This condition was first considered by Miller [4], who used it to give a nonprobabilistic test for primality assuming the correctness of the extended Riemann hypothesis (ERH). Our test does not assume ERH and is considerably faster.
Obviously if Wn(b) holds for some b then n is composite. For if (ii)(a) holds then Fermat’s relation is violated. And (ii)(b) means that n has a proper divisor. Thus the adjective “witness to compositeness” is justified. It turns out that if n is composite then witnesses abound.
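For readers who want to experiment, the condition Wn(b) translates almost directly into code. Here is a minimal sketch in Python (ours, not the paper's; the name is_witness is our own), using the built-in modular exponentiation and math.gcd:

```python
# A direct translation of the witness condition W_n(b); names are our own.
from math import gcd

def is_witness(b: int, n: int) -> bool:
    """True if b (1 <= b < n) is a witness to the compositeness of n."""
    if pow(b, n - 1, n) != 1:            # (ii)(a): Fermat's relation fails
        return True
    m = n - 1
    while m % 2 == 0:                    # run over the exponents (n-1)/2^i
        m //= 2
        g = gcd((pow(b, m, n) - 1) % n, n)
        if 1 < g < n:                    # (ii)(b): a proper divisor of n found
            return True
    return False
```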

THEOREM 1. If 4 < n is composite then

eq. (1): 3(n - 1)/4 ≤ c({b | Wn(b)}).

By c(S) we denote the number of elements in the set S.

Because of (i), (1) means that no more than 1/4 of the numbers 1 ≤ b < n are not witnesses. We require some lemmas for the proof of Theorem 1.
Denote by En the set of all c, 1 ≤ c < n, (c, n) = 1. We have c(En) = φ(n), where φ is Euler’s function⁴. The set En is a group under multiplication mod n. If n = p^k, where p is prime, then φ(n) = p^k - p^(k-1); for odd p, En is a cyclic group.
For an integer m denote by e(m) the largest i such that 2^i | m.
For a sequence m = ⟨m1 ,..., mk⟩ of integers define res(b, m) = ⟨s1 ,..., sk⟩, where si = res(b, mi), 1 ≤ i ≤ k. When m is fixed abbreviate res(b, m) = res(b).

LEMMA 2. Let mi | n for 1 ≤ i ≤ k, and (mi, mj) = 1 for 1 ≤ i < j ≤ k, and let m = ⟨m1 ,..., mk⟩. For every s = ⟨s1 ,..., sk⟩, s1 ∈ Em1 ,..., sk ∈ Emk, the number of elements in

En ∩ {b | res(b, m) = s}

is the same.

Proof. Let t be the product of all primes dividing n but not dividing any of the mi, 1 ≤ i ≤ k. Assume t ≠ 1; for t = 1 the proof that follows needs only slight modification. By the Chinese Remainder Theorem there exists an integer b such that res(b, ⟨m, t⟩) = ⟨s1 ,..., sk , 1⟩. Since mi | n, 1 ≤ i ≤ k, and t | n, we may assume 1 ≤ b < n. Now b ∈ En. Otherwise p | n and p | b for some prime p. This p divides some mi, or p | t. Since si = b - qimi, if p | mi then p | si, contradicting (mi , si) = 1. Similarly p | t leads to the contradiction p | 1.
Denote by f the restriction of res( , m) to En. By the previous paragraph, f: En → Em1 × ... × Emk = G is a homomorphism of En onto the direct product G. Thus f⁻¹(⟨s1 ,..., sk⟩) has the same number of elements for all ⟨s1 ,..., sk⟩ ∈ G.

Let, for example, p | n, q | n, where p and q are different primes. The number of b ∈ En for which res(b, ⟨p , q⟩) = ⟨s1 , s2⟩ is the same for every pair 1 ≤ s1 ≤ p - 1, 1 ≤ s2 ≤ q - 1. We shall employ such considerations in the sequel without further elaboration.

LEMMA 3. Let p1 ≠ p2 be primes, q1 = p1^k1, q2 = p2^k2. Assume q1q2 | n. Denote ti = (φ(qi), n - 1), mi = φ(qi)/ti, i = 1, 2.
At most φ(n)/(m1m2) of the integers b ∈ En do not satisfy Wn(b).
If t1 is even then at most φ(n)/(2m1m2) of the b ∈ En do not satisfy Wn(b).

Proof. Let ai be a primitive root mod qi, i = 1, 2. That is, ai^t ≡ 1 mod qi if and only if φ(qi) | t.
Let b ∈ En; then (b, qi) = 1 and b ≡ ai^ri mod qi for some r1, r2. Because q1q2 | n we have that

eq. (2): b^(n-1) ≡ 1 mod n

implies φ(qi) | ri(n - 1), i = 1, 2. Hence it implies mi | ri, i = 1, 2, so r1 = h1m1, r2 = h2m2.
Thus if res(b, ⟨q1, q2⟩) = ⟨s1, s2⟩ then (2) will hold for at most a fraction 1/(m1m2) of all pairs ⟨s1, s2⟩. By Lemma 2 all pairs ⟨s1, s2⟩ appear equally often as residues of b ∈ En, hence (2) will hold for at most φ(n)/(m1m2) of the integers b ∈ En. But if (2) does not hold then Wn(b) is true.
The sharper claim made, when t1 is even, hinges on the fact, to be proved next, that in this case even if (2) holds for b, in at least half the instances (ii)(b) holds, so that b is still a witness.
Assume that t1 is even and e(t1) = e(t2); see the notation preceding Lemma 2. Let i be such that (n - 1)/2^i = d(i) is an integer and tj ∤ d(i), tj/2 | d(i), j = 1, 2. Let b ∈ En satisfy (2) and adopt the above notation concerning a1, a2, r1, r2, etc. If h1 is odd and h2 is even then φ(q1) ∤ h1m1d(i) and φ(q2) | h2m2d(i). Hence

eq. (3): b^d(i) ≢ 1 mod q1,

eq. (4): b^d(i) ≡ 1 mod q2.

For such a b ∈ En, (ii)(b) holds. Similarly if h1 is even and h2 is odd. There are four equally frequent possibilities for the parities of h1 and h2. Thus for half of the b ∈ En satisfying (2), Wn(b) does still hold. Hence at most φ(n)/(2m1m2) integers b ∈ En do not satisfy Wn(b).
Finally let t1 be even, e(t1) < e(t2). Choose i and j so that t2 | d(i), t1 ∤ d(i), and t1/2^j | d(i) where j is minimal with the last property. Still using the above notation for a b ∈ En satisfying (2), we have φ(q2) | h2m2d(i) so that (4) holds. But b^d(i) ≡ 1 mod q1 holds only if φ(q1) | h1m1d(i), i.e., only if 2^j | h1. Unless 2^j | h1 we have (3), so Wn(b) is not true for at most φ(n)/(2^j m1m2) integers b ∈ En. But 1 ≤ j.

Proof of Theorem 1. Let 4 < n be composite. If 1 ≤ b < n and (b, n) ≠ 1 then (ii)(a), and consequently Wn(b), holds. So to prove (1) it suffices to show that at least 3φ(n)/4 of the b ∈ En satisfy Wn(b).
Assume that n = p^k, 1 < k. Here φ(n) = p^(k-1)(p - 1) and p ∤ (n - 1), so p^(k-1) ≤ φ(p^k)/(φ(p^k), n - 1) = m. By the argument at the beginning of the proof of Lemma 3 applied to a single prime, we have that at most φ(n)/m of the b ∈ En satisfy (2) and hence do not satisfy Wn(b). If 9 < n then 5 ≤ p or 2 < k, hence 4 ≤ m, and (1) follows. If n = 9, there are only two non-witnesses.
Next let n have at least two different prime divisors p1, p2, let qi = pi^ki be the maximal powers such that q1 | n, q2 | n, and assume that, say, φ(q1) ∤ (n - 1). Then 2 ≤ φ(q1)/(φ(q1), n - 1) = m1. If φ(q2) ∤ (n - 1) then 2 ≤ m2, and we have finished by the first statement of Lemma 3. Thus assume φ(q2) | (n - 1). If p2 ≠ 2 then t2 = φ(q2) is even and by the second assertion of Lemma 3 at most φ(n)/(2m1) ≤ φ(n)/4 (here m2 = 1) of the b ∈ En do not satisfy Wn(b). If p2 = 2 then φ(n) ≤ (n - 1)/2 and at most φ(n)/m1 ≤ (n - 1)/4 of the 1 ≤ b < n satisfy (2).
Finally let n = p1^k1 p2^k2 ... pr^kr, 2 ≤ r, satisfy φ(pi^ki) | (n - 1), 1 ≤ i ≤ r. Then ki = 1, 1 ≤ i ≤ r. An easy calculation shows 3 ≤ r. Such numbers do exist, e.g., (5), (6), and satisfy (2) for all b ∈ En. In honor of their discoverer [1] we call them Carmichael numbers. Let p1, p2, p3 be three different prime divisors of n, (pi - 1) | (n - 1), 1 ≤ i ≤ 3.
Assume e(p1 - 1) = e(p2 - 1) = e(p3 - 1), and let i be such that for (n - 1)/2^i = d(i) we have (pj - 1) ∤ d(i), (pj - 1)/2 | d(i), 1 ≤ j ≤ 3. If b ≡ aj^rj mod pj, where aj is a primitive root mod pj, 1 ≤ j ≤ 3, then by the analysis in the proof of Lemma 3, 1 < (b^d(i) - 1, n) < n unless the rj are all even or all odd. For if, say, r1 is even and r2 is odd then p1 | (b^d(i) - 1) and p2 ∤ (b^d(i) - 1). Thus not Wn(b) holds for at most (2/8)φ(n) of the b ∈ En.
The similar analysis of the cases e(p3 - 1) = e(p2 - 1) < e(p1 - 1) and e(p3 - 1) < e(p2 - 1) ≤ e(p1 - 1) is left to the reader.
Let n = p * q * r be a Carmichael number with p ≡ q ≡ r ≡ -1 mod 4, p < q < r. Then n ≡ -1 mod 4. Consequently 4 ∤ (n - 1) and the only possible i in (ii)(b) is i = 1. Thus Wn(b) holds for all b ∉ En, i.e., for the

n(1 - (1 - 1/p)(1 - 1/q)(1 - 1/r)) ≤ n(1 - (1 - 1/p)^3) ≤ 3n/p

numbers b with (b, n) ≠ 1, and also for exactly (3/4)φ(n) of the b ∈ En.
Adding up:

c({b | Wn(b)}) ≤ (3/4 + 3/p)(n - 1).

If Carmichael numbers of the above type with arbitrarily large smallest prime factor do exist, then (1) is asymptotically best possible.
A systematic computer search for Carmichael numbers of this form was conducted by Oren by looking at triplets of primes 100 ≤ p, q, r ≤ 3000. Many instances were found.
For example, for

eq. (5): 652969351 = 271 * 811 * 2971

the fraction of witnesses is 0.7513. For

eq. (6): 2000436751 = 487 * 1531 * 2683

the fraction of witnesses is 0.7507. Thus the constant 3/4 in Theorem 1 seems to be best possible.
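The reported fractions are easy to check approximately by random sampling (our own experiment, not part of the paper; witness_fraction is our name). The witness test is restated so the snippet runs on its own:

```python
# Estimate the witness fraction of a number by sampling random b in [1, n).
import random
from math import gcd

def is_witness(b, n):
    if pow(b, n - 1, n) != 1:
        return True
    m = n - 1
    while m % 2 == 0:
        m //= 2
        if 1 < gcd((pow(b, m, n) - 1) % n, n) < n:
            return True
    return False

def witness_fraction(n, trials=100_000):
    return sum(is_witness(random.randrange(1, n), n) for _ in range(trials)) / trials

print(witness_fraction(652969351))   # expect about 0.7513 for (5)
print(witness_fraction(2000436751))  # expect about 0.7507 for (6)
```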

2. THE ALGORITHM

Given a number n we choose a k, determined by the desired reliability, and randomly pick 1 ≤ b1 ,..., bk < n.
Let n - 1 = 2^l · m, where m is odd⁵. For b = bi compute b^m mod n. This can be done by about 1.5 log2 m multiplications, see [3, p. 398]. Whenever a number d > n is obtained, find d1 = res(d, n) and continue. Thus only products of numbers a, b < n and divisions of d < n^2 by n are involved. Next calculate the residues mod n of b^(2m),..., b^(2^l · m) = b^(n-1).
Since log2 n = l + log2 m, the evaluation of all the necessary powers of b will require 1.5 log2 n multiplications of numbers a, b < n and 1.5 log2 n divisions of numbers d < n^2 by n, altogether 3 log2 n steps.
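Python's built-in pow(b, e, n) already performs this repeated squaring; the explicit sketch below (power_mod is our name) only illustrates why every step involves numbers below n^2:

```python
# Square-and-multiply exponentiation: each product multiplies two numbers
# below n, and each reduction divides a number below n^2 by n.
def power_mod(b: int, e: int, n: int) -> int:
    result = 1
    b %= n
    while e > 0:
        if e & 1:                  # this bit of e is set: multiply the factor in
            result = (result * b) % n
        b = (b * b) % n            # square for the next bit
        e >>= 1
    return result
```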
If the residue of b^(n-1) is not 1 then Wn(b) holds. If that residue is 1, find (res(b^(2^i · m), n) - 1, n) for 0 ≤ i ≤ l - 1. Each g.c.d. computation can be performed by doing at most log2 n subtractions and divisions by 2. If any one of these g.c.d.'s is neither 1 nor n then Wn(b) is true. Thus for each bi we have computationally determined whether Wn(bi) is true, and the total required number of steps is 3 log2 n + l · log2 n.
If, for any 1 ≤ i ≤ k, Wn(bi) is true then n is composite. If, for all 1 ≤ i ≤ k, Wn(bi) is not true, then the test asserts that n is prime.
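Putting the pieces together, here is a minimal sketch of the whole test in Python (our rendering; rabin_test and the variable names are not from the paper):

```python
# The test of Section 2: k random bases, "composite" on the first witness.
import random
from math import gcd

def rabin_test(n: int, k: int = 50) -> bool:
    """False if n is certainly composite; True if n is asserted to be prime.
    By Theorem 2 the error probability for composite n is below 1/2^(2k)."""
    if n < 5:
        return n in (2, 3)
    l, m = 0, n - 1
    while m % 2 == 0:                    # write n - 1 = 2^l * m with m odd
        l += 1
        m //= 2
    for _ in range(k):
        b = random.randrange(1, n)
        x = pow(b, m, n)
        residues = [x]                   # residues of b^m, b^(2m), ..., b^(n-1)
        for _ in range(l):
            x = (x * x) % n
            residues.append(x)
        if residues[-1] != 1:            # (ii)(a): b^(n-1) is not 1 mod n
            return False
        for y in residues[:-1]:          # (ii)(b): look for a proper divisor
            if 1 < gcd((y - 1) % n, n) < n:
                return False
    return True
```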

THEOREM 2. The above algorithm requires for n - 1 = 2^l · m, m odd, at most k(3 log2 n + l · log2 n) steps. If n is prime then the result is always correct. For each composite n the test may declare n to be prime, but the probability of such error is smaller than 1/2^(2k).

Proof. Only the last statement requires proof. An error will occur only when the n to be tested is composite and the b1 ,..., bk chosen in this particular run of the algorithm are all nonwitnesses. Because of Theorem 1, the probability of randomly picking a nonwitness is smaller than 1/4. The probability of independently picking k nonwitnesses is smaller than 1/4^k = 1/2^(2k). Thus the probability that for any given number n, a particular run of the algorithm will produce an erroneous answer is smaller than 1/2^(2k).
Note that the theorem is formulated with respect to each n, no averaging over a range of n’s is necessary. For a given composite n the test may (rarely) give a wrong answer in one run and the correct answer in other runs. Thus there are no n’s on which the algorithm behaves poorly.

3. IMPLEMENTATION

In implementing the algorithm we incorporate some laborsaving steps. First we test the given n for divisibility by any prime p < N, where, say, N = 1000.⁶
If n passed this sieve then we apply the algorithm. Suppose that we pick k = 30. An examination of the proof of Theorem 1 shows that for “most” composite n the probability of finding a witness in one try is much bigger than 3/4. In fact, in actual runs when n was composite the witness was always the first or second b chosen. But even for n for which (1) is almost exact, the probability of error is 1/2^60 < 10^(-18). One expected error in a billion billion tests! This seems small when compared to the frequency of machine errors present in practical computations.
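A sketch of this laborsaving arrangement, building on the rabin_test sketch from Section 2 (small_primes and probably_prime are our names):

```python
# Trial division by every prime below N = 1000, then the probabilistic test.
def small_primes(limit: int) -> list:
    """Primes below limit, by the sieve of Eratosthenes."""
    sieve = bytearray([1]) * limit
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(range(i * i, limit, i)))
    return [i for i in range(limit) if sieve[i]]

SMALL_PRIMES = small_primes(1000)

def probably_prime(n: int, k: int = 30) -> bool:
    for p in SMALL_PRIMES:
        if n % p == 0:
            return n == p        # a small divisor settles the question at once
    return rabin_test(n, k)      # rabin_test as sketched in Section 2
```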
The main application of the primality test is to produce large numbers asserted to be primes with specified properties.
The overall strategy is to generate numbers of the specified form and test for primality until a number is asserted to be prime. Such searches lead to success rather quickly because in most cases the Prime Number Theorem ensures that the density of primes in the sequence mi searched is like 1/l if the numbers satisfy ln(mi) ∼ l. In other cases an appropriate density statement can be derived from stronger hitherto unproven assumptions such as the extended Riemann hypothesis (ERH). In every instance actually tried, the search did terminate within practical time.
As mentioned before, any primality test should incorporate trying small divisors. Thus in practice the full test is applied only to the numbers asserted to be primes.
If desired, it is often possible to ensure that n - 1 = 2 * m, where m is odd, so that the second part of checking Wn, that relating to (ii)(b), involves only b^((n-1)/2). But even if this feature is not incorporated, a heuristic argument shows that the expected number of g.c.d. computations arising from (ii)(b) is two. Note that the notion of expected number used here does not have a precise meaning as in Theorem 2 and is invoked only heuristically.
Suppose we want to find an “arbitrary” prime having 250 binary digits. Start by randomly generating a sequence m = α1α2 ... α250, αi ∈ {0, 1}, α1 = 1. View m as an integer written in binary notation, say it is even. Successively form the sums m + 1, m + 3 ,...; if desired omit those for which 4 | (n - 1). For each n = m + i test for primality. Continue until a number is asserted to be prime.
Because of the Prime Number Theorem, the search will not require testing a prohibitive number of sums m + i. The density of primes around n = 2^250 is 1/ln n > 1/250. Thus, one can incorporate a stopping rule: If within 500 tries no prime was found, drop the number m and restart with a new number m, [log2 m] = 250. In this way a number asserted to be prime n = m + i, i < 1000, will usually be found very rapidly.
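A sketch of this search with the stopping rule, reusing the probably_prime sketch above (the function name is ours):

```python
# Search for an "arbitrary" 250-bit number asserted to be prime.
import random

def random_250_bit_asserted_prime() -> int:
    while True:
        m = random.getrandbits(249) | (1 << 249) | 1   # 250 binary digits, odd
        for i in range(0, 1000, 2):                    # about 500 odd candidates
            if probably_prime(m + i, k=30):
                return m + i
        # stopping rule reached: drop this m and restart with a fresh one
```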
For certain applications it is important to have a prime n so that n - 1 has a large prime factor. This is achieved in a similar fashion. Suppose that we want n - 1 to have a prime factor p so that [log2 p] = 200. Find, as before, such a number asserted to be prime. Form the numbers 2ip + 1, i = 1, 2,... . Test each number for primality. Again a number q = lp + 1 of the desired form is rapidly asserted as prime. If the arithmetical series with difference p does not rapidly yield a number asserted to be prime (this has never happened), then p can be discarded and another prime p1 can be used for restarting the search.
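And a sketch of this second search under the same assumptions (asserted_prime_with_large_factor is our name; we take i = 1, 2, 3,...):

```python
# Given p asserted to be prime (about 200 bits), search 2p + 1, 4p + 1, ...
# for a q asserted to be prime, so that p divides q - 1.
def asserted_prime_with_large_factor(p, max_tries=1000):
    for i in range(1, max_tries + 1):
        q = 2 * i * p + 1
        if probably_prime(q, k=30):
            return q
    return None    # never observed in practice; discard p and start over
```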
These are just examples of searches actually conducted. The variations pertaining to other forms of primes, or to the testing of given numbers are obvious.

4. SOME EXPERIMENTAL RESULTS

V. Pratt has programmed this algorithm and together we planned some experiments. The computations were done on a medium-sized computer⁷. Numbers with several hundred binary digits were generated and tested. Still, the computations, including searches, did not take more than minutes per number asserted to be prime.
The first test was verification of the known results concerning primality and compositeness of n = 2^p - 1, p ≤ 500, p prime. Within about 10 min all answers were produced without a single error. This was mainly done to test the program. For Mersenne primes our test is about 1/k as fast as the Lucas test, which applies only to Mersenne numbers.
Next the test was applied to generate some large numbers asserted to be primes along the lines explained in Section 3. Rather than load an arbitrary binary sequence m, the search started from powers of 2 and proceeded by decrements. The numbers

2^300 - 153, 2^400 - 593,

were asserted to be the largest primes below 2^300 and 2^400, respectively, since the other numbers in the intervals are all composite.
Finally the test was applied to discover what we believe are the hitherto largest known numbers asserted to be twin primes. In order to speed up the sieving process the search was started with a number m which is a product of many small primes. Then pairs of the form m * l + 821, m * l + 823, and m * l + 827, m * l + 829 were tried. The pairs 821, 823 and 827, 829 are themselves twin primes. Within half an hour

(∏_{pi < 300} pi) * 338 + 821, (∏_{pi < 300} pi) * 338 + 823,

were asserted to be twin primes. These numbers are of order of magnitude 10^123. Five subsequent hours of search failed to discover additional pairs. This is the only case where any search required more than a few minutes. Of course, no Prime Number Theorem density estimates apply to twin primes. The reader’s guess as to the heuristic implications of this seeming gap in twin primes is as good as ours.
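A sketch of this twin-prime search, again building on the earlier probably_prime and small_primes sketches; reaching l = 338 should reproduce the pair reported above:

```python
# m is the product of all primes below 300; the offsets 821/823 (themselves
# twin primes, larger than 300) keep both candidates free of small factors.
def primorial(limit: int) -> int:
    out = 1
    for p in small_primes(limit):   # small_primes from the Section 3 sketch
        out *= p
    return out

m = primorial(300)
for l in range(1, 10_000):
    if probably_prime(m * l + 821) and probably_prime(m * l + 823):
        print("asserted twin primes:", m * l + 821, m * l + 823)
        break
```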

In conclusion, let us raise a question concerning possible theoretical applications of this primality test, in addition to the practical applications mentioned in the Introduction. What conjectures pertaining to the distribution of primes can one formulate, lend support to, or disprove by use of this test? For example, are there some refined estimates for the density of primes around 10^100, perhaps consequences of some strong number- or function-theoretic conjectures, which one could experimentally check using the primality test? The author, together with V. Pratt, has tested the density of primes of the form n = x^2 + 1 for n ∼ 2^50, 2^100, 2^150, 2^200, and found very good agreement with the Hardy-Littlewood conjecture on this density. These and other experimental results will be reported elsewhere.
Note added May 10, 1978: Donald Knuth (and, it seems, several others) observed that the primality test can dispense with the g.c.d. computation. His observation is based on the following corollary of Theorem 1.
Using the notation of Section 2, let n - 1 = 2^l · m, where m is odd. For 0 ≤ x < n, denote x0 ≡ x^m mod n, xi ≡ (xi-1)^2 mod n, 1 ≤ i ≤ l. Thus xl ≡ x^(n-1) mod n.

PROPOSITION. If 4 < n is composite and odd then for at least (3/4)(n - 1) of the 1 ≤ x < n either xl ≠ 1, or for some 1 ≤ i ≤ l we have xi = 1, xi-1 ≠ 1, and xi-1 ≠ n - 1.

Proof. By Theorem 1, if n is composite then for at least (3/4)(n - 1) of the 1 ≤ x < n Wn(x) holds. If Wn(x) holds then xl ≠ 1, or xl = 1 and for some 0 ≤ j < l we have 1 < (xj - 1, n) < n.
In the second case let i - 1, j ≤ i - 1 < l, be the last index such that xi-1 ≠ 1; thus xi = 1. We have xj ≡ 1 mod p, where p is a proper divisor of n. Hence xi-1 ≡ 1 mod p, since

xi-1 ≡ xj^(2^(i-1-j)) mod n.

Now xi-1 = n - 1 is impossible, because it entails xi-1 - 1 = n - 2 being divisible by p; since also p | n, this forces p | 2, whereas p ≠ 2 because n is odd. Thus xi-1 ≠ n - 1, and by the choice of i also xi-1 ≠ 1, while xi = 1.
Note that if the condition in the statement of the proposition holds for some x then n must be composite. Namely, if xl ≠ 1 then Fermat’s relation does not hold. And if xi ≡ (xi-1)^2 ≡ 1 mod n with xi-1 ≠ 1 and xi-1 ≠ n - 1, then 1 has more than two square roots mod n, so n is composite.
The test for primality runs as in Section 2 except that no g.c.d.'s need be computed even when xl = 1. Simply calculate x0 ≡ x^m mod n; if x0 = 1 or x0 = n - 1 then x is not a witness. Otherwise square mod n repeatedly: if some xi = 1 is reached while xi-1 ≠ n - 1, or if xl ≠ 1, then n is composite. In any case we need never square xl-1.
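In modern terminology this gcd-free form is the strong pseudoprime test. A minimal sketch, assuming odd n > 4 (knuth_test is our name for it):

```python
# The gcd-free variant: square x0 = x^m mod n repeatedly and declare n
# composite if 1 is reached from anywhere other than n - 1, or if x_l != 1.
import random

def knuth_test(n: int, k: int = 50) -> bool:
    """For odd n > 4: False if certainly composite, True if asserted prime."""
    l, m = 0, n - 1
    while m % 2 == 0:
        l += 1
        m //= 2
    for _ in range(k):
        x = pow(random.randrange(1, n), m, n)
        if x == 1 or x == n - 1:
            continue                 # this base is not a witness
        for _ in range(l - 1):
            x = (x * x) % n
            if x == n - 1:
                break                # will square to 1 harmlessly
        else:
            return False             # x_l != 1, or 1 was reached past x != n-1
    return True
```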

REFERENCES
1. R. D. CARMICHAEL, On composite numbers p which satisfy the Fermat congruence a^(p-1) ≡ 1 mod p, Amer. Math. Monthly 19 (1912), 22-27.
2. C. F. GAUSS, “Disquisitiones Arithmeticae” (A. A. Clarke, Transl.), Yale Univ. Press, New Haven, Conn./London, 1966.
3. D. E. KNUTH, “The Art of Computer Programming,” Vol. 2, Addison-Wesley, Reading, Mass., 1969.
4. G. L. MILLER, Riemann’s hypothesis and a test for primality, J. Comput. System Sci. 13 (1976), 300-317.
5. M. O. RABIN, Probabilistic algorithms, in “Algorithms and Complexity, Recent Results and New Directions” (J. F. Traub, Ed.), pp. 21-40, Academic Press, New York, 1976.
6. R. SOLOVAY AND V. STRASSEN, A fast Monte-Carlo test for primality, SIAM J. Comput. 6 (1977), 84-85.

  1. Originally published in Journal of Number Theory 12 (1980), 128-138. Some minor edits were made to the text to make it amenable to this publishing format.
  2. And similarly, by b ∤ a we denote the fact that b does not divide a, i.e., res(a, b) ≠ 0.
  3. There exists i such that.
  4. Euler's totient function counts the positive integers up to a given integer n that are relatively prime to n.
  5. The choice of the letter "l" here is unfortunate, since it can be hard to distinguish from the number 1, depending on the font. I have left it as the letter l to be consistent with the paper.
  6. Or, you could pick the biggest primorial that fits in your memory and test n against it by calculating the g.c.d.
  7. As computers have changed over the years, calling it a medium-sized computer is not very descriptive.