MICHAEL O. RABIN^{1}

Institute of Mathematics, Hebrew University, Jerusalem, Israel, and Massachusetts Institute of Technology, Cambridge, Massachusetts 02139

Communicated by H. Zassenhaus

Received December 10, 1977

We present a practical probabilistic algorithm for testing large numbers of arbitrary form for primality. The algorithm has the feature that when it determines a number composite then the result is always true, but when it asserts that a number is prime there is a provably small probability of error. The algorithm was used to generate large numbers asserted to be primes of arbitrary and special forms, including very large numbers asserted to be twin primes. Theoretical foundations as well as details of implementation and experimental results are given.

The problem of determining for given integers whether they are prime has occupied mathematicians for centuries. In the words of Gauss in his celebrated “Disquisitiones Arithmeticae” [2, p. 396]:

The problem of distinguishing prime numbers from composite numbers and of resolving the latter into their prime factors is known to be one of the most important and useful in arithmetic. It has engaged the industry and wisdom of ancient and modern geometers to such an extent that it would be superfluous to discuss the problem at length. Nevertheless we must confess that all methods that have been proposed thus far are either restricted to very special cases or are so laborious and prolix that even for numbers that do not exceed the limits of tables constructed by estimable men, i.e., for numbers that do not yield to artificial methods, they try the patience of even the practiced calculator. And these methods do not apply at all to larger numbers.

Gauss proceeds to give several interesting algorithms for the determination of primality. We do not intend to present here a survey or bibliography of the extensive literature concerning testing for primality. All the existing methods require for arbitrary integers n a number of steps O(n^{e}) where e is some fraction such as 1/3 or 1/4. Thus, experimental results indicate that around n = 10^{65} each of these methods encounters some numbers untestable by it.

For integers of certain special forms, the most notable example being the so-called Mersenne numbers n = 2^{p} - 1, where p is a prime, there do exist rather rapid tests for primality. But arbitrary large integers defy even the most powerful computers, when tested by the conventional methods.

In this paper we present a primality test applicable to arbitrary and very large integers. If we count an arithmetical operation such as addition, multiplication, or division, involving integers 0 ≤ a, b ≤ n as one step then the test of primality of n requires in the worst case c(log_{2} n)^{2} steps where c is about 100. Thus very large numbers are amenable to this test. In practice the number of steps in most cases is about c log_{2} n for n which are prime, and about 3 log_{2} n for numbers n that are composite even if they have no small divisors.

The salient features of our method are that it is *probabilistic*, i.e., uses randomization within the computation, and that it produces the answer with a certain controllable, minuscule probability of error. To be more precise, with full details to be given later, the algorithm produces and employs certain random integers 0 < b_{1} ,..., b_{k} < n. If the outcome of the whole test is that n is composite then it is always correct (barring computational error). If the outcome asserts that n is prime then it may sometimes be wrong. But the probability that a composite number will be erroneously asserted to be prime is smaller than 1/2^{2k}. If, say, k = 50, then the probability of error is at most 1/2^{100}.

This last statement does *not* mean that an integer n asserted as prime by use of 50 random numbers is prime with probability at least 1 - 1/2^{100}. Such an interpretation is nonsensical since n is either prime or not. The correct meaning is that if the test is applied to m = 2^{100} integers n_{1}, n_{2} ,..., n_{m}, then the expected number of wrong answers is one. These integers need not be pairwise different and no probability distribution on the integers to be tested for primality is assumed.

Thus when we say that an integer n was asserted to be prime by use of 50 random numbers, this is no proof that n is actually prime. What is stated is that n was asserted to be prime by a procedure that on the average will make no more than one mistake in 2^{100} applications (even when testing the same n).

The test and an earlier, slightly weaker, form of the theorem underlying it appeared in [5]. A different probabilistic test for primality was given by Solovay and Strassen [6].

In Section 3 we discuss the implementation and application of the test to the generation of very large numbers asserted to be primes of arbitrary or of a desired prescribed form. Large primes are useful in exact arithmetical computations involving large integers. By computing modulo a prime, one can avoid the use of fractions. Also, recently evolved digitalized signature systems are based on the easy availability of large primes.

In the last section we present some of the experimental results which include examples of very large numbers asserted to be primes, twin primes, etc., very rapidly obtained on a medium-sized computer.

1. THE FUNDAMENTAL THEOREM

Throughout this paper (a, b) denotes the greatest common divisor (g.c.d.) of the integers a, b, and res(a, b) denotes the least nonnegative residue of a when divided by b. By b | a we denote the fact that b divides a, i.e., res(a, b) = 0.^{2}

DEFINITION. Let n be an integer. Consider the following condition on an integer b, to be denoted by W_{n}(b):

(i) 1 ≤ b < n;

(ii)(a) b^{n-1} ≢ 1 mod n, or

(ii)(b) ∃i s.t.^{3} 2^{i} | (n - 1) and 1 < (b^{(n-1)/2^{i}} - 1, n) < n.

Call such an integer b a witness to the compositeness of n.

This condition was first considered by Miller [4], who used it to give a nonprobabilistic test for primality assuming the correctness of the extended Riemann hypothesis (ERH). Our test does not assume ERH and is considerably faster.

Obviously if W_{n}(b) holds for some b then n is composite. For if (ii)(a) holds then Fermat’s relation is violated. And (ii)(b) means that n has a proper divisor. Thus the adjective “witness to compositeness” is justified. It turns out that if n is composite then witnesses abound.

THEOREM 1. If 4 < n is composite then

eq. (1): 3(n - 1)/4 ≤ c({b | W_{n}(b)}).

By c(S) we denote the number of elements in the set S.

Because of (i), (1) means that no more than 1/4 of the numbers 1 ≤ b < n are not witnesses. We require some lemmas for the proof of Theorem 1.

Denote by E_{n} the set of all c, 1 ≤ c < n, (c, n) = 1. We have c(E_{n}) = φ(n), where φ is Euler’s function^{4}. The set E_{n} is a group under multiplication mod n. If n = p^{k}, where p is prime, then φ(n) = p^{k} - p^{k-1}; for odd p, E_{n} is a cyclic group.

For an integer m denote by e(m) the largest i such that 2^{i} | m.

For a sequence m = ⟨m_{1} ,..., m_{k}⟩ of integers define res(b, m) = ⟨s_{1} ,..., s_{k}⟩, where s_{i} = res(b, m_{i}), 1 ≤ i ≤ k. When m is fixed abbreviate res(b, m) = res(b).

LEMMA 2. Let m_{i} | n for 1 ≤ i ≤ k, and (m_{i}, m_{j}) = 1, 1 ≤ i < j ≤ k, and let m = ⟨m_{1} ,..., m_{k}⟩. For every s = ⟨s_{1} ,..., s_{k}⟩, s_{1} ∈ E_{m_1} ,..., s_{k} ∈ E_{m_k}, the number of elements in

E_{n} ∩ {b | res(b, m) = s}

is the same.

Proof. Let t be the product of all primes dividing n but not any of the m_{i}, 1 ≤ i ≤ k. Assume t ≠ 1; for t = 1 the proof that follows needs only slight modification. By the Chinese Remainder Theorem there exists an integer b such that res(b, ⟨m, t⟩) = ⟨s_{1} ,..., s_{k} , 1⟩. Since m_{i} | n, 1 ≤ i ≤ k, and t | n, we may assume 1 ≤ b < n. Now b ∈ E_{n}. Otherwise p | n and p | b for some prime p. This p divides some m_{i}, or p | t. Since s_{i} = b - q_{i}m_{i} for some integer q_{i}, if p | m_{i} then p | s_{i}, contradicting (m_{i} , s_{i}) = 1. Similarly p | t leads to the contradiction p | 1.

Denote the restriction res( , m) | E_{n} by f. By the previous paragraph, f: E_{n} → E_{m_1} × ... × E_{m_k} = G is a homomorphism of E_{n} onto the direct product G. Thus f^{-1}(⟨s_{1} ,..., s_{k}⟩) has the same number of elements for all ⟨s_{1} ,..., s_{k}⟩ ∈ G.

Let, for example, p | n, q | n, where p and q are different primes. The number of b ∈ E_{n} for which res(b, ⟨p, q⟩) = ⟨s_{1} , s_{2}⟩ is the same for every pair 1 ≤ s_{1} ≤ p - 1, 1 ≤ s_{2} ≤ q - 1. We shall employ such considerations in the sequel without further elaboration.

LEMMA 3. Let p_{1} ≠ p_{2} be primes, q_{1} = p_{1}^{k_1}, q_{2} = p_{2}^{k_2}. Assume q_{1}q_{2} | n. Denote t_{i} = (φ(q_{i}), n - 1), m_{i} = φ(q_{i})/t_{i}, i = 1, 2.

At most φ(n)/m_{1}m_{2} of the integers b ∈ E_{n} do not satisfy W_{n}(b).

If t_{1} is even then at most φ(n)/(2m_{1}m_{2}) of the b ∈ E_{n} do not satisfy W_{n}(b).

Proof. Let a_{i} be a primitive root mod q_{i}, i = 1,2. That is, a_{i}^{t} ≡ 1 mod q_{i} if and only if φ(q_{i}) | t.

Let b ∈ E_{n}; then (b, q_{i}) = 1 and b ≡ a_{i}^{r_i} mod q_{i}, i = 1, 2, for some r_{1}, r_{2}. Because q_{1}q_{2} | n we have that

eq. (2): b^{n - 1} ≡ 1 mod n

implies φ(q_{i}) | r_{i}(n - 1), i = 1, 2. Hence it implies m_{i} | r_{i}, i = 1, 2, so r_{1} = h_{1}m_{1}, r_{2} = h_{2}m_{2}.

Thus if res(b, ⟨q_{1}, q_{2}⟩) = ⟨s_{1}, s_{2}⟩ then (2) holds for at most a fraction 1/(m_{1}m_{2}) of all pairs ⟨s_{1}, s_{2}⟩. By Lemma 2 all pairs ⟨s_{1}, s_{2}⟩ appear equally often as residues of b ∈ E_{n}, hence (2) will hold for at most φ(n)/(m_{1}m_{2}) of the integers b ∈ E_{n}. But if (2) does not hold then W_{n}(b) is true.

The sharper claim made, when t_{1} is even, hinges on the fact, to be proved next, that in this case even if (2) holds for b, in at least half the instances (ii)(b) holds, so that b is still a witness.

Assume that t_{1} is even and e(t_{1}) = e(t_{2}); see the notation preceding Lemma 2. Let i be such that (n - 1)/2^{i} = d(i) is an integer and t_{j} ∤ d(i), t_{j}/2 | d(i), j = 1, 2. Let b ∈ E_{n} satisfy (2) and adopt the above notation concerning a_{1}, a_{2}, r_{1}, r_{2}, etc. If h_{1} is odd and h_{2} is even then φ(q_{1}) ∤ h_{1}m_{1}d(i) and φ(q_{2}) | h_{2}m_{2}d(i). Hence

eq. (3): b^{d(i)} ≢ 1 mod q_{1},

eq. (4): b^{d(i)} ≡ 1 mod q_{2}.

For such a b ∈ E_{n} (ii)(b) holds. Similarly if h_{1} is even and h_{2} is odd. There are four equally frequent possibilities for the parities of h_{1} and h_{2}. Thus for half of the b ∈ E_{n} satisfying (2), W_{n}(b) does still hold. Hence at most φ(n)/(2m_{1}m_{2}) integers b ∈ E_{n} do not satisfy W_{n}(b).

Finally let t_{1} be even, e(t_{1}) < e(t_{2}). Choose i and j so that t_{2} | d(i), t_{1} ∤ d(i), and t_{1}/2^{j} | d(i), where j is minimal with the last property. Still using the above notation for a b ∈ E_{n} satisfying (2), we have φ(q_{2}) | h_{2}m_{2}d(i) so that (4) holds. But b^{d(i)} ≡ 1 mod q_{1} holds only if φ(q_{1}) | h_{1}m_{1}d(i), i.e., only if 2^{j} | h_{1}. Unless 2^{j} | h_{1} we have (3), so that W_{n}(b) is not true for at most φ(n)/(2^{j}m_{1}m_{2}) integers b ∈ E_{n}. But 1 ≤ j.

Proof of Theorem 1. Let 4 < n be composite. If 1 ≤ b < n and (b, n) ≠ 1 then (ii)(a), and consequently W_{n}(b), holds. Thus to prove (1) it suffices to show that at least 3φ(n)/4 of the b ∈ E_{n} satisfy W_{n}(b).

Assume that n = p^{k}, 1 < k. Here φ(n) = p^{k-1}(p - 1) and p ∤ (n - 1), so p^{k-1} ≤ φ(p^{k})/(φ(p^{k}), n - 1) = m. By the argument at the beginning of the proof of Lemma 3 applied to a single prime, we have that at most φ(n)/m of the b ∈ E_{n} satisfy (2) and hence satisfy *not* W_{n}(b). If 9 < n then 5 ≤ p or 2 < k; hence 4 ≤ m and (1) follows. If n = 9, there are only two non-witnesses.

Next let n have at least two different prime divisors p_{1}, p_{2}, let q_{i} = p_{i}^{ki} be the maximal powers such that q_{1} | n, q_{2} | n, and assume that, say, φ(q_{1}) ∤ (n - 1). Then 2 ≤ φ(q_{1})/(φ(q_{1}), n - 1) = m_{1}. If φ(q_{2}) ∤ (n - 1) then 2 ≤ m_{2}, and we have finished by the first statement of Lemma 3. Thus assume φ(q_{2}) | (n - 1). If p_{2} ≠ 2 then t_{2} = φ(q_{2}) is even and by the second assertion of Lemma 3 at most φ(n)/2m_{1} ≤ φ(n)/4 (here m_{2} = 1) of the b ∈ E_{n} satisfy *not* W_{n}(b). If p_{2} = 2 then φ(n) ≤ (n - 1)/2 and at most φ(n)/m_{1} ≤ (n - 1)/4 of the 1 ≤ b < n satisfy (2).

Finally let n = p_{1}^{k_1}p_{2}^{k_2} ... p_{r}^{k_r}, 2 ≤ r, satisfy φ(p_{i}^{k_i}) | (n - 1), 1 ≤ i ≤ r. Then k_{i} = 1, 1 ≤ i ≤ r. An easy calculation shows 3 ≤ r. Such numbers do exist, e.g., (5), (6), and satisfy (2) for all b ∈ E_{n}. In honor of their discoverer [1] we call them Carmichael numbers. Let p_{1}, p_{2}, p_{3} be three different prime divisors of n, (p_{i} - 1) | (n - 1), 1 ≤ i ≤ 3.

Assume e(p_{1} - 1) = e(p_{2} - 1) = e(p_{3} - 1), and let i be such that for (n - 1)/2^{i} = d(i) we have (p_{j} - 1) ∤ d(i), (p_{j} - 1)/2 | d(i), 1 ≤ j ≤ 3. If b ≡ a_{j}^{r_j} mod p_{j}, where a_{j} is a primitive root mod p_{j}, 1 ≤ j ≤ 3, then by the analysis in the proof of Lemma 3, 1 < (b^{d(i)} - 1, n) < n unless the r_{j} are all even or all odd. For if, say, r_{1} is even and r_{2} is odd then p_{1} | (b^{d(i)} - 1) and p_{2} ∤ (b^{d(i)} - 1). Thus *not* W_{n}(b) holds for at most (2/8)φ(n) of the b ∈ E_{n}.

The similar analysis of the cases e(p_{3} - 1) = e(p_{2} - 1) < e(p_{1} - 1) and e(p_{3} - 1) < e(p_{2} - 1) ≤ e(p_{1} - 1) is left to the reader.

Let n = pqr be a Carmichael number with p ≡ q ≡ r ≡ -1 mod 4, p < q < r. Then n ≡ -1 mod 4. Consequently 4 ∤ (n - 1) and i in (ii)(b) is just i = 1. Thus W_{n}(b) holds for all b ∉ E_{n}, i.e., for the

n(1 - (1 - 1/p)(1 - 1/q)(1 - 1/r)) ≤ n (1 - (1 - 1/p)^{3}) ≤ 3n/p

numbers (b, n) ≠ 1, and also for exactly (3/4)φ(n) of the b ∈ E_{n}.

Adding up, since exactly (n - 1) - φ(n) of the 1 ≤ b < n fail to be coprime to n, and φ(n) ≥ (n - 1)(1 - 3/p):

c({b | W_{n}(b)}) = (n - 1) - φ(n)/4 ≤ (3/4 + 3/p)(n - 1).

If Carmichael numbers of the above type with arbitrarily large smallest prime factor do exist, then (1) is asymptotically best possible.

A systematic computer search for Carmichael numbers of this form was conducted by Oren by looking at triplets of primes 100 ≤ p, q, r ≤ 3000. Many instances were found.

For example, for

eq. (5): 652969351 = 271 * 811 * 2971

the fraction of witnesses is 0.7513. For

eq. (6): 2000436751 = 487 * 1531 * 2683

the fraction of witnesses is 0.7507. Thus the constant 3/4 in Theorem 1 seems to be best possible.
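These fractions can be recomputed directly. A small Python check (ours, not part of the paper) uses the exact non-witness count φ(n)/4 derived above for Carmichael numbers of this form:

```python
# Verify the reported witness fractions for (5) and (6). For a Carmichael
# number n = p*q*r with p, q, r ≡ -1 mod 4, the argument above shows that
# exactly phi(n)/4 of the b in E_n are non-witnesses, while every b with
# (b, n) != 1 is a witness.
for p, q, r in [(271, 811, 2971), (487, 1531, 2683)]:
    n = p * q * r
    phi = (p - 1) * (q - 1) * (r - 1)
    print(n, 1 - phi / (4 * (n - 1)))   # prints 0.75131... and 0.75076...
```

The computed values match the reported fractions 0.7513 and 0.7507.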

2. THE ALGORITHM

Given a number n we choose a k, determined by the desired reliability, and randomly pick 1 ≤ b_{1} ,..., b_{k} < n.

Let n - 1 = 2^{l}m, where m is odd^{5}. For b = b_{i} compute b^{m} mod n. This can be done by about 1.5 log_{2}m multiplications, see [3, p. 398]. Every time a number d with n < d is obtained, find d_{1} = res(d, n) and continue. Thus only products of numbers a, b < n and divisions of d < n^{2} by n are involved. Next calculate the residues mod n of b^{2m}, b^{4m},..., b^{2^{l}m} = b^{n-1}.

Since log_{2}n = l + log_{2}m, the evaluation of all the necessary powers of b will require 1.5 log_{2}n multiplications of a, b < n and 1.5 log_{2}n divisions of a d < n^{2} by n, altogether 3 log_{2}n steps.

If the residue of b^{n-1} is not 1 then W_{n}(b) holds. If that residue is 1, find (res(b^{2^{i}m}, n) - 1, n) for 0 ≤ i ≤ l - 1. Each g.c.d. computation can be performed by doing at most log_{2}n subtractions and divisions by 2. If any one of these g.c.d.'s is neither 1 nor n then W_{n}(b) is true. Thus for each b_{i} we can computationally determine whether W_{n}(b_{i}) is true, and the total required number of steps is 3 log_{2}n + l * log_{2}n.

If, for any 1 ≤ i ≤ k, W_{n}(b_{i}) is true then n is composite. If, for all 1 ≤ i ≤ k, W_{n}(b_{i}) is not true, then the test asserts that n is prime.
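As a concrete illustration, here is the test in Python (a sketch, ours; names such as is_witness and rabin_test are not from the paper, and Python's built-in pow and math.gcd stand in for the step-counted operations described above):

```python
import math
import random

def is_witness(b, n):
    """Decide condition W_n(b) for odd n > 4: True means b witnesses
    the compositeness of n."""
    m, l = n - 1, 0                     # write n - 1 = 2^l * m with m odd
    while m % 2 == 0:
        m //= 2
        l += 1
    powers = [pow(b, m, n)]             # residues of b^m, b^2m, ..., b^(n-1)
    for _ in range(l):
        powers.append(powers[-1] * powers[-1] % n)
    if powers[-1] != 1:                 # (ii)(a): Fermat's relation fails
        return True
    for x in powers[:-1]:               # (ii)(b): exponents (n-1)/2^i, 1 <= i <= l
        if 1 < math.gcd(x - 1, n) < n:
            return True
    return False

def rabin_test(n, k=50):
    """Assert primality of odd n > 4; a composite n slips through with
    probability below 1/4^k."""
    return not any(is_witness(random.randrange(1, n), n) for _ in range(k))
```

On the Carmichael numbers (5) and (6) above, rabin_test would typically declare compositeness after one or two random b, in line with the remark that witnesses abound.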

THEOREM 2. The above algorithm requires for n - 1 = 2^{l}m, m odd, at most k * (3 * log_{2}n + l * log_{2}n) steps. If n is prime then the result is always correct. For each composite n the test may declare n to be prime, but the probability of such error is smaller than 1/2^{2k}.

Proof. Only the last statement requires proof. An error will occur only when the n to be tested is composite and the b_{1} ,..., b_{k} chosen in this particular run of the algorithm are all nonwitnesses. Because of Theorem 1, the probability of randomly picking a nonwitness is smaller than 1/4. The probability of independently picking k nonwitnesses is smaller than 1/4^{k} = 1/2^{2k}. Thus the probability that for any given number n, a particular run of the algorithm will produce an erroneous answer is smaller than 1/2^{2k}.

Note that the theorem is formulated with respect to each n, no averaging over a range of n’s is necessary. For a given composite n the test may (rarely) give a wrong answer in one run and the correct answer in other runs. Thus there are no n’s on which the algorithm behaves poorly.

3. IMPLEMENTATION

In implementing the algorithm we incorporate some laborsaving steps. First we test the given n for divisibility by any prime p < N, where, say, N = 1000.^{6}
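A minimal sketch of such a pre-sieve (the function name and bound parameter are ours); stepping through odd trial divisors suffices, because the first divisor found is necessarily prime:

```python
def small_prime_divisor(n, bound=1000):
    """Return a divisor p < bound of n (prime, since divisors are tried in
    increasing order), or None if n passes the sieve."""
    if n % 2 == 0 and n != 2:
        return 2
    p = 3
    while p < bound:
        if n % p == 0 and n != p:
            return p
        p += 2
    return None
```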

If n passed this sieve then we apply the algorithm. Suppose that we pick k = 30. An examination of the proof of Theorem 1 shows that for “most” composite n the probability of finding a witness in one try is much bigger than 3/4. In fact, in actual runs when n was composite the witness was always the first or second b chosen. But even for n for which (1) is almost exact, the probability of error is 1/2^{60} < 10^{-18}. One expected error in a billion billion tests! This seems small when compared to the frequency of machine errors present in practical computations.

The main application of the primality test is to produce large numbers asserted to be primes with specified properties.

The overall strategy is to generate numbers of the specified form and test for primality until a number is asserted to be prime. Such searches lead to success rather quickly because in most cases the Prime Number Theorem ensures that the density of primes in the sequence m_{i} searched is like 1/l if the numbers satisfy ln(m_{i}) ∼ l. In other cases an appropriate density statement can be derived from stronger hitherto unproven assumptions such as the extended Riemann hypothesis (ERH). In every instance actually tried, the search did terminate within practical time.

As mentioned before, any primality test should incorporate trying small divisors. Thus in practice the full test is applied only to the numbers asserted to be primes.

If desired, it is often possible to ensure that n - 1 = 2 * m, where m is odd so that the second part of checking W_{n}, that relating to (ii)(b), involves only b^{(n-1)/2}. But even if this feature is not incorporated, a heuristic argument shows that the expected number of g.c.d. computations arising from (ii)(b) is two. Note that the notion of expected number used here does not have a precise meaning as in Theorem 2 and is invoked only heuristically.

Suppose we want to find an “arbitrary” prime having 250 binary digits. Start by randomly generating a sequence m = α_{1}α_{2} ... α_{250}, α_{i} ∈ {0, 1}, α_{1} = 1. View m as an integer written in binary notation, say it is even. Successively form the sums m + 1, m + 3 ,...; if desired omit those for which 4 | (n - 1). For each n = m + i test for primality. Continue until a number is asserted to be prime.

Because of the Prime Number Theorem, the search will not require testing a prohibitive number of sums m + i. The density of primes around n = 2^{250} is 1/ln n > 1/250. Thus, one can incorporate a stopping rule: If within 500 tries no prime was found, drop the number m and restart with a new number m, [log_{2}m] = 250. In this way a number asserted to be prime n = m + i, i < 1000, will usually be found very rapidly.
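A sketch of this search in Python (our code, reusing the rabin_test and small_prime_divisor sketches above; the optional refinement of skipping n with 4 | (n - 1) is omitted):

```python
import random

def random_prime_candidate(bits=250, tries=500, k=30):
    """Search m + 1, m + 3, ... above a random even 250-bit m; restart with
    a fresh m if `tries` consecutive odd candidates all fail."""
    while True:
        m = random.getrandbits(bits - 1) | (1 << (bits - 1))  # alpha_1 = 1
        m -= m % 2                                            # make m even
        for i in range(1, 2 * tries, 2):                      # i = 1, 3, ..., i < 1000
            n = m + i
            if small_prime_divisor(n) is None and rabin_test(n, k):
                return n
```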

For certain applications it is important to have a prime n such that n - 1 has a large prime factor. This is achieved in a similar fashion. Suppose that we want n - 1 to have a prime factor p with [log_{2}p] = 200. Find, as before, such a number p asserted to be prime. Form the numbers 2ip + 1, i = 1, 2,... . Test each number for primality. Again a number q = 2ip + 1 of the desired form is rapidly asserted to be prime. If the arithmetic progression with difference 2p does not rapidly yield a number asserted to be prime (this has never happened), then p can be discarded and another prime p_{1} can be used for restarting the search.
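In the same vein, a sketch (ours) of this second search; p is a number already asserted to be prime with about 200 binary digits:

```python
def prime_with_large_factor(p, k=30):
    """Search q = 2*i*p + 1, i = 1, 2, ..., until q is asserted prime;
    by construction q - 1 then has the large prime factor p."""
    i = 1
    while True:
        q = 2 * i * p + 1
        if small_prime_divisor(q) is None and rabin_test(q, k):
            return q
        i += 1
```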

These are just examples of searches actually conducted. The variations pertaining to other forms of primes, or to the testing of given numbers are obvious.

4. SOME EXPERIMENTAL RESULTS

V. Pratt has programmed this algorithm and together we planned some experiments. The computations were done on a medium-sized computer^{7}. Numbers with several hundred binary digits were generated and tested. Still, the computations, including searches, did not take more than minutes per number asserted to be prime.

The first test was verification of the known results concerning primality and compositeness of n = 2^{p} - 1, p ≤ 500, p prime. Within about 10 min all answers were produced without a single error. This was mainly done to test the program. For Mersenne primes our test is about 1/k as fast as the Lucas test, which applies only to Mersenne numbers.

Next the test was applied to generate some large numbers asserted to be primes along the lines explained in Section 3. Rather than load an arbitrary binary sequence m, the search started from powers of 2 and proceeded by decrements. The numbers

2^{300} - 153, 2^{400} - 593,

were asserted to be the largest primes below 2^{300} and 2^{400}, respectively, since the other numbers in the intervals are all composite.

Finally the test was applied to discover what we believe are the hitherto largest known numbers asserted to be twin primes. In order to speed up the sieving process the search was started with a number m which is a product of many small primes. Then pairs of the form m * l + 821, m * l + 823, and m * l + 827, m * l + 829 were tried. The pairs 821, 823 and 827, 829 are themselves twin primes. Within half an hour

(∏_{p_i < 300} p_{i}) * 338 + 821, (∏_{p_i < 300} p_{i}) * 338 + 823,

were asserted to be twin primes. These numbers are of order of magnitude 10^{123}. Five subsequent hours of search failed to discover additional pairs. This is the only case where any search required more than a few minutes. Of course, no Prime Number Theorem density estimates apply to twin primes. The reader’s guess as to the heuristic implications of this seeming gap in twin primes is as good as ours.
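The search strategy can be sketched as follows (our reconstruction; rabin_test is the sketch from Section 2). The primorial m guarantees that no candidate is divisible by any prime below 300, because 821, 823, 827, and 829 are themselves prime:

```python
from math import prod

def twin_prime_search(k=30):
    """Try the pairs m*l + a, m*l + a + 2 for a in {821, 827}, l = 1, 2, ...,
    where m is the product of all primes below 300."""
    small = [p for p in range(2, 300) if all(p % d for d in range(2, p))]
    m = prod(small)
    l = 1
    while True:
        for a in (821, 827):
            u, v = m * l + a, m * l + a + 2
            if rabin_test(u, k) and rabin_test(v, k):
                return l, u, v      # the paper reports success at l = 338
        l += 1
```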

In conclusion, let us raise a question concerning possible theoretical applications of this primality test, in addition to the practical applications mentioned in the Introduction. What conjectures pertaining to the distribution of primes can one formulate, lend support to, or disprove by use of this test? For example, are there some refined estimates for the density of primes around 10^{100}, perhaps consequences of some strong number- or function-theoretic conjectures, which one could experimentally check using the primality test? The author, together with V. Pratt, has tested the density of primes of the form n = x^{2} + 1 for n ∼ 2^{50}, 2^{100}, 2^{150}, 2^{200}, and found very good agreement with the Hardy-Littlewood conjecture on this density. These and other experimental results will be reported elsewhere.

Note added May 10, 1978: Donald Knuth (and, it seems, several others) observed that the primality test can dispense with the g.c.d. computation. His observation is based on the following corollary of Theorem 1.

Using the notation of Section 2, let n - 1 = 2^{l} * m, where m is odd. Let 0 ≤ x < n, denote x_{0} ≡ x^{m} mod n, x_{i} ≡ x_{i-1}^{2} mod n, 1 ≤ i ≤ l. Thus x_{l} ≡ x^{n-1} mod n.

PROPOSITION. If 4 < n is composite and odd then for at least (3/4)(n - 1) of the 1 ≤ x < n either x_{l} ≠ 1 or for some 1 ≤ i ≤ l we have x_{i} = 1 and x_{i-1} ∉ {1, n - 1}.

Proof. By Theorem 1, if n is composite then for at least (3/4)(n - 1) of the 1 ≤ x < n W_{n}(x) holds. If W_{n}(x) holds then x_{l} ≠ 1, or x_{l} = 1 and for some 0 ≤ j ≤ l we have 1 < (x_{j} - 1, n) < n.

In the second case let i - 1, j ≤ i - 1 < l, be the last index such that x_{i-1} ≠ 1; thus x_{i} = 1. We have x_{j} ≡ 1 mod p, where p is a proper divisor of n. Hence x_{i-1} ≡ 1 mod p, since

x_{i-1} ≡ x_{j}^{2^{i-1-j}} mod n.

Now x_{i-1} = n - 1 is impossible because it would entail x_{i-1} - 1 = n - 2 being divisible by p; but p | n and n is odd, so p ≠ 2. Thus x_{i-1} ∉ {1, n - 1} and x_{i} = 1.

Note that if the condition in the statement of the proposition holds for some x then n must be composite. Namely, if x_{l} ≠ 1 then Fermat’s relation does not hold. And if x_{i-1} ∉ {1, n - 1} and x_{i} ≡ x_{i-1}^{2} ≡ 1 mod n, then 1 has more than two square roots mod n, so n is composite.

The test for primality runs as in Section 2 except that when x_{l} = 1 we need not compute the (x_{i} - 1, n). Simply calculate x_{0} ≡ x^{m} mod n, then square mod n repeatedly until x_{i} = 1 with x_{i-1} ∉ {1, n - 1}, or until x_{l} is reached. In any case we need never square x_{l-1}.
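In Python, the streamlined per-base check might read as follows (a sketch, ours; this is the now-standard “strong pseudoprime” test, equivalent to the condition of the proposition):

```python
def is_strong_witness(x, n):
    """Knuth's g.c.d.-free variant: return True when the sequence
    x_0 = x^m, x_i = x_{i-1}^2 (mod n) certifies that odd n > 4 is composite."""
    m, l = n - 1, 0                 # write n - 1 = 2^l * m with m odd
    while m % 2 == 0:
        m //= 2
        l += 1
    xi = pow(x, m, n)               # x_0
    if xi == 1 or xi == n - 1:
        return False                # x is not a witness
    for _ in range(l - 1):          # produce x_1, ..., x_{l-1}; x_{l-1} is never squared
        xi = xi * xi % n
        if xi == n - 1:
            return False            # the next square reaches 1 legitimately
        if xi == 1:
            return True             # 1 reached from some x_{i-1} not in {1, n-1}
    return True                     # either x_l != 1, or 1 has a nontrivial square root
```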

REFERENCES

1. R. D. CARMICHAEL, On composite numbers p which satisfy the Fermat congruence a^{p-1} ≡ 1 mod p, *Amer. Math. Monthly* **19** (1912), 22-27.

2. C. F. GAUSS, “Disquisitiones Arithmeticae” (A. A. Clarke, Transl.), Yale Univ. Press, New Haven, Conn./London, 1966.

3. D. E. KNUTH, “The Art of Computer Programming,” Vol. 2, Addison-Wesley, Reading, Mass., 1969.

4. G. L. MILLER, Riemann’s hypothesis and a test for primality, *J. Comput. System Sci.* **13** (1976), 300-317.

5. M. O. RABIN, Probabilistic algorithms, in “Algorithms and Complexity: New Directions and Recent Results” (J. F. Traub, Ed.), pp. 21-40, Academic Press, New York, 1976.

6. R. SOLOVAY AND V. STRASSEN, A fast Monte-Carlo test for primality, *SIAM J. Comput.* **6** (1977), 84-85.

1. Originally published in Journal of Number Theory, 12, 128-138 (1980) [pdf version]. Some minor edits were made to the text to be amenable to this publishing format.
2. And similarly, by b ∤ a we denote the fact that b does not divide a, i.e., res(a, b) ≠ 0.
3. There exists i such that.
4. Euler's totient function counts the positive integers up to a given integer n that are relatively prime to n.
5. The choice of the letter "l" here is unfortunate, since it can be hard to distinguish from the number 1, depending on the font. I have left it as the letter l to be consistent with the paper.
6. Or, you could pick the biggest primorial that fits in your memory size and test for primality against that by calculating the g.c.d.
7. As computers have changed over the years, calling it a medium-sized computer is not very descriptive.

Tags: Cryptography, Math, OCR, Scientific_Literature