Posts Tagged ‘Scientific_Literature’

Constructing Digital Signatures from a One Way Function

Monday, August 19th, 2019

Op. 52

Constructing Digital Signatures from a One Way Function

Leslie Lamport
Computer Science Laboratory
SRI International

18 October 1979
CSL 98

333 Ravenswood Ave. Menlo Park, California 94025
(415) 326-6200 Cable: SRI INTL MPK TWX: 910-373-1246

1. Introduction

A digital signature created by a sender P for a document m is a data item op(m) having the property that upon receiving m and op(m) one can determine (and if necessary prove in a court of law) that P generated the document m .

A one way function is a function that is easy to compute, but whose inverse is difficult to compute1. More precisely, a one way function T is a function from a set of data objects to a set of values having the following two properties:

  1. Given any value v , it is computationally infeasible to find a data object d such that T(d) = v .

  2. Given any data object d , it is computationally infeasible to find a different data object d' such that T(d') = T(d) .

If the set of data objects is larger than the set of values, then such a function is sometimes called a one way hashing function.

We will describe a method for constructing digital signatures from such a one way function T . Our method is an improvement of a method devised by Rabin2. Like Rabin's, it requires the sender P to deposit a piece of data o in some trusted public repository for each document he wishes to sign. This repository must have the following properties:

  • o can be read by anyone who wants to verify P's signature.

  • It can be proven in a court of law that P was the creator of o .

Once o has been placed in the repository, P can use it to generate a signature for any single document he wishes to send.

Rabin's method has the following drawbacks not present in ours.

  1. The document must be sent to a single recipient Q , who then requests additional information from P to validate the signature. P cannot divulge any additional validating information without compromising information that must remain private to prevent someone else from generating a new document m' with a valid signature op(m').

  2. For a court of law to determine if the signature is valid, it is necessary for P to give the court additional private information.

This has the following implications.

  • P -- or a trusted representative of P -- must be available to the court.
  • P must maintain private information whose accidental disclosure would enable someone else to forge his signature on a document.

With our method, P generates a signature that is verifiable by anyone, with no further action on P's part. After generating the signature, P can destroy the private information that would enable someone else to forge his signature. The advantages of our method over Rabin's are illustrated by the following considerations when the signed document m is a check from P payable to Q .

  1. It is easy for Q to endorse the check payable to a third party R by sending him the signed message "make m payable to R ". However, with Rabin's scheme, R cannot determine if the check m was really signed by P , so he must worry about forgery by Q as well as whether or not P can cover the check. With our method, there is no way for Q to forge the check, so the endorsed check is as good as a check payable directly to R signed by P . (However, some additional mechanism must be introduced to prevent Q from cashing the original check after he has signed it over to R .)

  2. If P dies without leaving the executors of his estate the information he used to generate his signatures, then Rabin's method cannot prevent Q from undetectably altering the check m -- for example, by changing the amount of money payable. Such posthumous forgery is impossible with our method.
  3. With Rabin's method, to be able to successfully challenge any attempt by Q to modify the check before cashing it, P must maintain the private information he used to generate his signature. If anyone (not just Q ) stole that information, that person could forge a check from P payable to him. Our method allows P to destroy this private information after signing the check.

2. The Algorithm

We assume a set M of possible documents, a set K of possible keys3, and a set V of possible values. Let S denote the set of all subsets of {1, ... , 40} containing exactly 20 elements. (The numbers 40 and 20 are arbitrary, and could be replaced by 2n and n. We are using these numbers because they were used by Rabin, and we wish to make it easy for the reader to compare our method with his.)

We assume the following two functions.

  1. A function F : K → V with the following two properties:

    1. Given any value v in V , it is computationally infeasible to find a key k in K such that F(k) = v .

    2. For any small set of values v1, ... , vm , it is easy to find a key k such that F(k) is not equal to any of the vi .
  2. A function G : M → S with the property that given any document m in M , it is computationally infeasible to find a document m' ≠ m such that G(m') = G(m) .

For the function F , we can use any one way function T whose domain is the set of keys. The second property of F follows easily from the second property of the one way function T . We will discuss later how the function G can be constructed from an ordinary one way function.

For convenience, we assume that P wants to generate only a single signed document. Later, we indicate how he can sign many different documents. The sender P first chooses 40 keys ki such that all the values F(ki) are distinct. (Our second assumption about F makes this easy to do.) He puts in a public repository the data item o = (F(k1), ... , F(k40)) . Note that P does not divulge the keys ki , which by our first assumption about F cannot be computed from o .
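
A minimal sketch of this setup in Python, assuming SHA-256 as a stand-in for the one way function F and 32-byte random strings as the keys (neither choice is prescribed by the paper):

    import hashlib
    import os

    def F(key):
        # stand-in one way function: SHA-256 of the key bytes
        return hashlib.sha256(key).digest()

    def generate_keys(n=40):
        # choose n keys whose values F(k_i) are all distinct, retrying in the
        # (vanishingly unlikely) event of a collision
        keys, values = [], []
        while len(keys) < n:
            k = os.urandom(32)
            v = F(k)
            if v not in values:
                keys.append(k)
                values.append(v)
        return keys, values    # keys stay private; o = tuple(values) is published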

To generate a signature for a document m , P first computes G(m) to obtain a set {i1, ... , i20} of integers. The signature consists of the 20 keys ki1, ... , ki20 . More precisely, we have op(m) = (ki1, ... , ki20) , where the ij are defined by the following two requirements:

  1. G(m) = {i1, ... , i20}

  2. i1 < ... < i20

After generating the signature, P can destroy all record of the 20 keys ks with s not in G(m) .
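
Continuing the sketch, signing releases the 20 keys selected by G(m). How G is built is the subject of Section 3, so here G is simply assumed to map a document to a 20-element subset of {1, ... , 40}:

    def sign(m, keys, G):
        # release the keys selected by G(m), taken in increasing order of index
        indices = sorted(G(m))                            # i_1 < ... < i_20
        signature = tuple(keys[i - 1] for i in indices)
        # the 20 unused keys may now be destroyed
        return signature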

To verify that a 20-tuple (h1, ... , h20) is a valid signature op(m) for the document m , one first computes G(m) to find the ij and then uses o to check that for all j , hj is the ijth key. More precisely, the signature is valid if and only if for each j with 1 ≤ j ≤ 20 : F(hj) = oij , where oi denotes the ith component of o , and the ij are defined by the above two requirements.
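
Verification needs only the document, the signature, and the published o; a sketch under the same assumptions:

    def verify(m, signature, o, G):
        # check F(h_j) = o_(i_j) for each j, where G(m) = {i_1 < ... < i_20}
        indices = sorted(G(m))
        return len(signature) == len(indices) and all(
            F(h) == o[i - 1] for h, i in zip(signature, indices))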

To demonstrate that this method correctly implements digital signatures, we prove that it has the following properties.

  1. If P does not reveal any of the keys ki , then no-one else can generate a valid signature op(m) for any document m .

  2. If P does not reveal any of the keys ki except the ones that are contained in the signature op(m) , then no-one else can generate a valid signature op(m') for any document m' ≠ m .

The first property is obvious, since the signature op(m) must contain 20 keys ki such that F(ki) = oi , and our first assumption about F states that it is computationally infeasible to find the keys ki just knowing the values F(ki) .

To prove the second property, note that since no-one else can obtain any of the keys ki , we must have op(m') = op(m) . Moreover, since the oi are all distinct, for the validation test to be passed by op(m') we must also have G(m') = G(m) . However, our assumption about G states that it is computationally infeasible to find such a document m' . This proves the correctness of our method.

For P to send many different documents, he must use a different o for each one. This means that there must be a sequence of 40-tuples o1, o2, ... and the document must indicate which oi is used to generate that document's signature. The details are simple, and will be omitted.

3. Constructing the Function G

One way functions have been proposed whose domain is the set of documents and whose range is a set of integers of the form {0, ... , 2^n - 1} for any reasonably large value of n . (It is necessary for n to be large enough to make exhaustive searching over the range of T computationally infeasible.) Such functions are described in [1] and [2]. The obvious way to construct the required function G is to let T be such a one way function, and define G(m) to equal R(T(m)) , where R : {0, ... , 2^n - 1} → S .

It is easy to construct a function R having the required range and domain. For example, one can compute R(s) inductively as follows (a sketch of one possible completion appears after this list):

  1. Divide s by 40 to obtain a quotient q and a remainder r

  2. Use r to choose an element x from {1, ... , 40} (This is easy to do, since 0 ≤ r < 40.)
  3. Use q to choose 19 elements from the set {1, ... , 40} - {x} as follows:
    1. Divide q by 39 to obtain quotient ...
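
The remaining divisions are left to the reader in the original; one plausible way to complete the construction is the following mixed-radix selection, sketched in Python (everything beyond the steps listed above is an assumption, not the paper's text). Note that this R is many to one, which is exactly the concern raised next.

    def R(s):
        # map a nonnegative integer s to a 20-element subset of {1, ..., 40}
        # by repeatedly dividing by the number of remaining candidates
        candidates = list(range(1, 41))
        chosen = set()
        for size in range(40, 20, -1):      # 40, 39, ..., 21 candidates remain
            s, r = divmod(s, size)
            chosen.add(candidates.pop(r))   # the remainder r picks one element
        return chosen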

It requires careful analysis of the properties of the one way function T to be sure that the resulting function G has the required property. We suspect that for most one way functions T , this method would work. However, we cannot prove this.

The reason constructing G in this manner might not work is that the function R from {0, ... , 2^n - 1} into S is a many to one mapping, and the resulting "collapsing" of the domain might defeat the one way nature of T . However, it is easy to show that if the function R is one to one, then property (ii) of T implies that G has the required property. To construct G we need only find an easily computable one to one function R from {0, ... , 2^n - 1} into S , for a reasonably large value of n .

We can simplify our task by observing that the function G need not be defined on the entire set of documents. It suffices that for any document m , it is easy to modify m in a harmless way to get a new document that is in the domain of G . For example, one might include a meaningless number as part of the document, and choose different values of that number until he obtains a document that is in the domain of G . This is an acceptable procedure if (i) it is easy to determine whether a document is in the domain, and (ii) the expected number of choices one must make before finding a document in the domain is small.

With this in mind, we let n = 40 and define R(s) as follows: if the binary representation of s contains exactly 20 ones, then R(s) = {i : the ith bit of s equals one} , otherwise R(s) is undefined. Approximately 13% of all 40-bit numbers contain exactly 20 ones. Hence, if the one way function T is sufficiently randomizing, there is a 0.13 probability that any given document will be in the domain of G . This means that, randomly choosing documents (or modifications to a document), the expected number of choices before finding one in the domain of G is approximately 8. Moreover, after 17p choices, the probability of not having found a document in the domain of G is about 1/10^p. (If we use 60 keys instead of 40, the expected number of choices to find a document in the domain becomes about 10, and 22p choices are needed to reduce the probability of not finding one to 1/10^p.)
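
A sketch of this construction, using SHA-256 truncated to 40 bits as a stand-in for the one way function T and a counter appended to the document as the "meaningless number" (both choices are illustrative assumptions, not the paper's):

    import hashlib

    def G(m):
        # try successive counters until the 40-bit hash has exactly 20 one bits;
        # the counter is returned because the modified document (m plus the
        # counter) is what must actually be signed
        counter = 0
        while True:
            digest = hashlib.sha256(m + counter.to_bytes(8, "big")).digest()
            s = int.from_bytes(digest, "big") & ((1 << 40) - 1)
            if bin(s).count("1") == 20:
                return {i + 1 for i in range(40) if (s >> i) & 1}, counter
            counter += 1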

If the one way function T is easy to compute, then these numbers indicate that the expected amount of effort to compute G is reasonable. However, it does seem undesirable to have to try so many documents before finding one in the domain of G . We hope that someone can find a more elegant method for constructing the function G , perhaps by finding a one to one function R which is defined on a larger subset of {0, ... , 2^n - 1} .

Note: We have thus far insisted that G(m) be a subset of {1, ... , 40} consisting of exactly 20 elements. It is clear that the generation and verification procedure can be applied if G(m) is any proper subset. An examination of our correctness proof shows that if we allow G(m) to have any number of elements less than 40, then our method would still have the same correctness properties if G satisfies the following property:

  • For any document m , it is computationally infeasible to find a different document m' such that G(m') is a subset of G(m) .

By taking the range of G to be the collection of 20 element subsets, we ensure that G(m') cannot be a proper subset of G(m) . However, it may be possible to construct a function G satisfying this requirement without constraining the range of G in this way.

-------

Editor's note: Some characters in the original paper were changed to members of the Roman alphabet.

  1. Diffie, W. and Hellman, M. "New Directions in Cryptography". IEEE Trans. on Information Theory IT-22 (November 1976), 644-654.
  2. Rabin, M. "Digitalized Signatures", in Foundations of Secure Computing, Academic Press (1978), 155-168.
  3. The elements of K are not keys in the usual cryptographic sense, but are arbitrary data items. We call them keys because they play the same role as the keys in Rabin's algorithm.

Ideas

Wednesday, May 1st, 2019

Have you ever had a great idea and then later realized that you were not the first one to think of it?

Asphalt paver machine

When I was about four I invented a machine that would make roads. I even drew a really nice picture in crayons of what I came up with. It had a big hopper in front, would pour the material down and spread it flat, and had big treads on the sides so it could move forward, building a road behind it. Some years later I was amazed as my family drove past a highway under construction, and they were using the machine that I had drawn! Something like the picture shown.

When I was in college, after studying a bit of chemistry and physics, I did some thinking about nuclear fusion. Some metals, such as palladium, can absorb large amounts of hydrogen. I reasoned that if one was to make a target of palladium and expose it to deuterium it would hold the deuterium in place. One could then use some high voltage to accelerate more deuterium ions toward it, resulting in nuclear fusion. I was thinking this would be a great way to make a power-producing fusion reactor! Then I found out that this had already been invented some decades prior. The resulting nuclear reactors are not used to generate power, but they are used in the medical field as a source of high energy neutrons.

Also while in college I gave some thought to unusual electric motor configurations. I was fascinated by the homopolar motor. I drew up an idea of a homopolar motor made with multiple windings around two co-axial magnets which were arranged with their north poles pointed in opposite directions. Some years later I found some of my old notes and drawings, and so I did some searching on the internet to try to decide if there was any way they would work. I was surprised to find that a patent had been issued in the intervening years for such a device1, and the drawings in the patent are very similar to what I had drawn some years before.

  1. Although the existence of a patent does not necessarily mean that it would actually work.

Probabilistic Algorithm for Testing Primality

Thursday, April 25th, 2019

MICHAEL O. RABIN1

Institute of Mathematics, Hebrew University, Jerusalem, Israel, and Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
Communicated by H. Zassenhaus

Received December 10, 1977

We present a practical probabilistic algorithm for testing large numbers of arbitrary form for primality. The algorithm has the feature that when it determines a number composite then the result is always true, but when it asserts that a number is prime there is a provably small probability of error. The algorithm was used to generate large numbers asserted to be primes of arbitrary and special forms, including very large numbers asserted to be twin primes. Theoretical foundations as well as details of implementation and experimental results are given.

The problem of determining for given integers whether they are prime has occupied mathematicians for centuries. In the words of Gauss in his celebrated “Disquisitiones Arithmeticae” [2, p. 396]:

The problem of distinguishing prime numbers from composite numbers and of resolving the latter into their prime factors is known to be one of the most important and useful in arithmetic. It has engaged the industry and wisdom of ancient and modern geometers to such an extent that it would be superfluous to discuss the problem at length. Nevertheless we must confess that all methods that have been proposed thus far are either restricted to very special cases or are so laborious and prolix that even for numbers that do not exceed the limits of tables constructed by estimable men, i.e., for numbers that do not yield to artificial methods, they try the patience of even the practiced calculator. And these methods do not apply at all to larger numbers.

Gauss proceeds to give several interesting algorithms for the determination of primality. We do not intend to present here a survey or bibliography of the extensive literature concerning testing for primality. All the existing methods require for arbitrary integers n a number of steps O(n^e) where e is some fraction such as 1/3 or 1/4. Thus, experimental results indicate that around n = 10^65 each of these methods encounters some numbers untestable by it.
For integers of certain special forms, the most notable example being the so-called Mersenne numbers n = 2^p - 1, where p is a prime, there do exist rather rapid tests for primality. But arbitrary large integers defy even the most powerful computers, when tested by the conventional methods.
In this paper we present a primality test applicable to arbitrary and very large integers. If we count an arithmetical operation such as addition, multiplication, or division, involving integers 0 ≤ a, b ≤ n as one step then the test of primality of n requires in the worst case c(log2 n)^2 steps where c is about 100. Thus very large numbers are amenable to this test. In practice the number of steps in most cases is about c log2 n for n which are prime, and about 3 log2 n for numbers n that are composite even if they have no small divisors.
The salient features of our method are that it is probabilistic, i.e., uses randomization within the computation, and that it produces the answer with a certain controllable miniscule probability of error. To be more precise, with full details to be given later, the algorithm produces and employs certain random integers 0 < b1 ,..., bk < n. If the outcome of the whole test is that n is composite then it is always correct (barring computational error). If the outcome asserts that n is prime then it may sometimes be wrong. But the probability that a composite number will be erroneously asserted to be prime is smaller than 1/2^(2k). If, say, k = 50, then the probability of error is at most 1/2^100.
This last statement does not mean that an integer n asserted as prime by use of 50 random numbers is prime with probability at least 1 - 1/2^100. Such an interpretation is nonsensical since n is either prime or not. The correct meaning is that if the test is applied to m = 2^100 integers n1, n2 ,..., nm, then the expected number of wrong answers is one. These integers need not be pairwise different and no probability distribution on the integers to be tested for primality is assumed.
Thus when we say that an integer n was asserted to be prime by use of 50 random numbers, this is no proof that n is actually prime. What is stated is that n was asserted to be prime by a procedure that on the average will make no more than one mistake in 2^100 applications (even when testing the same n).
The test and an earlier, slightly weaker, form of the theorem underlying it appeared in [5]. A different probabilistic test for primality was given by Solovay and Strassen [6].
In Section 3 we discuss the implementation and application of the test to the generation of very large numbers asserted to be primes of arbitrary or of a desired prescribed form. Large primes are useful in exact arithmetical computations involving large integers. By computing modulo a prime, one can avoid the use of fractions. Also, recently evolved digitalized signature systems are based on the easy availability of large primes.
In the last section we present some of the experimental results which include examples of very large numbers asserted to be primes, twin primes, etc., very rapidly obtained on a medium-sized computer.

1. THE FUNDAMENTAL THEOREM

Throughout this paper (a, b) denotes the greatest common divisor (g.c.d.) of the integers a, b, and res(a, b) denotes the least nonnegative residue of a when divided by b. By b | a we denote the fact that b divides a, i.e., res(a, b) = 0.2

DEFINITION. Let n be an integer. Consider the following condition to be denoted by Wn(b), on an integer b:

(i) 1 ≤ b < n;
(ii)(a) b^(n-1) ≢ 1 mod n, or
(ii)(b) ∃i s.t.3 2^i | (n - 1) and 1 < (b^((n-1)/2^i) - 1, n) < n.

Call such an integer b a witness to the compositeness of n.
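
A direct transcription of this condition into Python might look as follows (math.gcd plays the role of ( , ); the sketch follows the definition literally rather than the streamlined procedure of Section 2):

    from math import gcd

    def is_witness(b, n):
        # W_n(b): does b witness the compositeness of n?
        if not 1 <= b < n:
            return False
        if pow(b, n - 1, n) != 1:                       # condition (ii)(a)
            return True
        i = 1
        while (n - 1) % 2 ** i == 0:                    # every i with 2^i | (n - 1)
            d = gcd(pow(b, (n - 1) // 2 ** i, n) - 1, n)
            if 1 < d < n:                               # condition (ii)(b)
                return True
            i += 1
        return False
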
This condition was first considered by Miller [4], who used it to give a nonprobabilistic test for primality assuming the correctness of the extended Riemann hypothesis (ERH). Our test does not assume ERH and is considerably faster.
Obviously if Wn(b) holds for some b then n is composite. For if (ii)(a) holds then Fermat’s relation is violated. And (ii)(b) means that n has a proper divisor. Thus the adjective “witness to compositeness” is justified. It turns out that if n is composite then witnesses abound.

THEOREM 1. If 4 < n is composite then

eq. (1): 3(n - 1)/4 ≤ c({b | Wn(b)}).

By c(S) we denote the number of elements in the set S.

Because of (i), (1) means that no more than 1/4 of the numbers 1 ≤ b < n are not witnesses. We require some lemmas for the proof of Theorem 1.
Denote by En the set of all c, 1 ≤ c < n, (c, n) = 1. We have c(En) = φ(n), where φ is Euler’s function4. The set En is a group under multiplication mod n. If n = p^k, where p is prime, then φ(n) = p^k - p^(k-1); for odd p, En is a cyclic group.
For an integer m denote by e(m) the largest i such that 2^i | m.
For a sequence m = ⟨m1 ,..., mk⟩ of integers define res(b, m) = ⟨s1 ,..., sk⟩, where si = res(b, mi), 1 ≤ i ≤ k. When m is fixed abbreviate res(b, m) = res(b).

LEMMA 2. Let mi | n for 1 ≤ i ≤ k, and (mi, mj) = 1, 1 ≤ i < j ≤ k, and let m = ⟨m1 ,..., mk⟩. For every s = ⟨s1 ,..., sk⟩, s1 ∈ Em1 ,..., sk ∈ Emk, the number of elements in

En ∩ {b | res(b, m) = s}

is the same.

Proof. Let t be the product of all primes dividing n, but not any of the mi, 1 ≤ i ≤ k. Assume t ≠ 1, for t = 1 the proof that follows has to be just slightly modified. By the Chinese Remainder Theorem there exists an integer b such that res(b, ⟨m, t⟩) = ⟨s1 ,..., sk , 1⟩. Since mi | n, 1 ≤ i ≤ k, and t | n, we may assume 1 ≤ b < n. Now b ∈ En. Otherwise for some prime, p | n and p | b. This p divides some mi, or p | t. Since si = b - qimi, if p | mi then p | si contradicting (mi , si) = 1. Similarly p | t leads to the contradiction p | 1.
Denote the restriction res( , m) | En by f. By the previous paragraph, f: En → Em1 × ... × Emk = G is a homomorphism of En onto the direct product G. Thus f^(-1)(⟨s1 ,..., sk⟩) has the same number of elements for all ⟨s1 ,..., sk⟩ ∈ G.

Let, for example, p | n, q | n, where p and q are different primes. The number of b ∈ En for which res(b, ⟨p , q⟩) = ⟨s1 , s2⟩ is the same for every pair 1 ≤ s1 ≤ p - 1, 1 ≤ s2 ≤ q - 1. We shall employ in the sequel such considerations without further elaboration.

LEMMA 3. Let p1 ≠ p2 be primes, q1 = p1^k1, q2 = p2^k2. Assume q1q2 | n. Denote ti = (φ(qi), n - 1), mi = φ(qi)/ti, i = 1, 2.
At most φ(n)/(m1m2) of the integers b ∈ En do not satisfy Wn(b).
If t1 is even then at most φ(n)/(2m1m2) of the b ∈ En do not satisfy Wn(b).

Proof. Let ai be a primitive root mod qi, i = 1, 2. That is, ai^t ≡ 1 mod qi if and only if φ(qi) | t.
Let b ∈ En; then (b, qi) = 1 and b ≡ ai^(ri) mod qi, for some r1, r2. Because q1q2 | n we have that

eq. (2): b^(n-1) ≡ 1 mod n

implies φ(qi) | ri(n - 1), i = 1, 2. Hence it implies mi | ri, i = 1, 2, so r1 = h1m1, r2 = h2m2.
Thus if res(b, ⟨q1, q2⟩) = ⟨s1, s2⟩ then for at most 1/(m1m2) out of all pairs ⟨s1, s2⟩ will (2) hold. By Lemma 2 all pairs ⟨s1, s2⟩ appear equally often as residue of b ∈ En, hence (2) will hold for at most φ(n)/(m1m2) of the integers b ∈ En. But if (2) does not hold then Wn(b) is true.
The sharper claim made, when t1 is even, hinges on the fact, to be proved next, that in this case even if (2) holds for b, in at least half the instances (ii)(b) holds, so that b is still a witness.
Assume that t1 is even and e(t1) = e(t2); see the notation preceding Lemma 2. Let i be such that (n - 1)/2^i = d(i) is an integer and tj ∤ d(i), tj/2 | d(i), j = 1, 2. Let b ∈ En satisfy (2) and adopt the above notation, concerning a1, a2, r1, r2, etc. If h1 is odd and h2 is even then φ(q1) ∤ h1m1d(i) and φ(q2) | h2m2d(i). Hence

eq. (3): b^(d(i)) ≢ 1 mod q1,

eq. (4): b^(d(i)) ≡ 1 mod q2.

For such a b ∈ En (ii)(b) holds. Similarly if h1 is even and h2 is odd. There are four equally frequent possibilities for the parities of h1 and h2. Thus for half of the b ∈ En satisfying (2), Wn(b) does still hold. Hence at most φ(n)/(2m1m2) integers b ∈ En do not satisfy Wn(b).
Finally let t1 be even, e(t1) < e(t2). Choose i and j so that t2 | d(i), t1 ∤ d(i), and t1/2^j | d(i) where j is minimal with the last property. Still using the above notation for a b ∈ En satisfying (2), we have φ(q2) | h2m2d(i) so that (4) holds. But b^(d(i)) ≡ 1 mod q1 holds only if φ(q1) | h1m1d(i), i.e., only if 2^j | h1. Unless 2^j | h1 we have (3), so that Wn(b) is not true for at most φ(n)/(2^j m1m2) integers b ∈ En. But 1 ≤ j.

Proof of Theorem 1. Let 4 < n be composite. If 1 ≤ b < n and (b, n) ≠ 1 then (ii)(a) and consequently Wn(b) holds. So that to prove (1) it suffices to show that at least 3φ(n)/4 of the b ∈ En satisfy Wn(b).
Assume that n = p^k, 1 < k. Here φ(n) = p^(k-1)(p - 1) and p ∤ (n - 1), so p^(k-1) ≤ φ(p^k)/(φ(p^k), n - 1) = m. By the argument at the beginning of the proof of Lemma 3 applied to a single prime, we have that at most φ(n)/m of the b ∈ En satisfy (2) and hence satisfy not Wn(b). If 9 < n then 5 ≤ p or 2 < k hence 4 ≤ m and (1) follows. If n = 9, there are only two non-witnesses.
Next let n have at least two different prime divisors p1, p2, let qi = pi^ki be the maximal powers such that q1 | n, q2 | n, and assume that, say, φ(q1) ∤ (n - 1). Then 2 ≤ φ(q1)/(φ(q1), n - 1) = m1. If φ(q2) ∤ (n - 1) then 2 ≤ m2, and we have finished by the first statement of Lemma 3. Thus assume φ(q2) | (n - 1). If p2 ≠ 2 then t2 = φ(q2) is even and by the second assertion of Lemma 3 at most φ(n)/(2m1) ≤ φ(n)/4 (here m2 = 1) of the b ∈ En satisfy not Wn(b). If p2 = 2 then φ(n) ≤ (n - 1)/2 and at most φ(n)/m1 ≤ (n - 1)/4 of the 1 ≤ b < n satisfy (2).
Finally let n = p1^k1 p2^k2 ... pr^kr, 2 ≤ r, satisfy φ(pi^ki) | (n - 1), 1 ≤ i ≤ r. Then ki = 1, 1 ≤ i ≤ r. An easy calculation shows 3 ≤ r. Such numbers do exist, e.g., (5), (6), and satisfy (2) for all b ∈ En. In honor of their discoverer [1] we call them Carmichael numbers. Let p1, p2, p3 be three different prime divisors of n, (pi - 1) | (n - 1), 1 ≤ i ≤ 3.
Assume e(p1 - 1) = e(p2 - 1) = e(p3 - 1), and let i be such that for (n - 1)/2^i = d(i) we have (pj - 1) ∤ d(i), (pj - 1)/2 | d(i), 1 ≤ j ≤ 3. If b ≡ ai^(ri) mod pi, where ai is a primitive root mod pi, 1 ≤ i ≤ 3, then by the analysis in the proof of Lemma 3, 1 < (b^(d(i)) - 1, n) < n unless the ri are all even or all odd. For if, say, r1 is even and r2 is odd then p1 | (b^(d(i)) - 1) and p2 ∤ (b^(d(i)) - 1). Thus not Wn(b) holds for at most (2/8)φ(n) of the b ∈ En.
The similar analysis of the cases e(p3 - 1) = e(p2 - 1) < e(p1 - 1) and e(p3 - 1) < e(p2 - 1) ≤ e(p1 - 1) is left to the reader.
Let n = p·q·r be a Carmichael number with p ≡ q ≡ r ≡ -1 mod 4, p < q < r. Then n ≡ -1 mod 4. Consequently 4 ∤ (n - 1) and the only possible i in (ii)(b) is i = 1. Thus Wn(b) holds for all b ∉ En, i.e., for the

n(1 - (1 - 1/p)(1 - 1/q)(1 - 1/r)) ≤ n(1 - (1 - 1/p)^3) ≤ 3n/p

numbers b with (b, n) ≠ 1, and also for exactly (3/4)φ(n) of the b ∈ En.
Adding up:

c({b | Wn(b)}) ≤ (3/4 + 3/p)(n - 1).

If Carmichael numbers of the above type with arbitrarily large smallest prime factor do exist, then (1) is asymptotically best possible.
A systematic computer search for Carmichael numbers of this form was conducted by Oren by looking at triplets of primes 100 ≤ p, q, r ≤ 3000. Many instances were found.
For example, for

eq. (5): 652969351 = 271 * 811 * 2971

the fraction of witnesses is 0.7513. For

eq. (6): 2000436751 = 487 * 1531 * 2683

the fraction of witnesses is 0.7507. Thus the constant 3/4 in Theorem 1 seems to be best possible.

2. THE ALGORITHM

Given a number n we choose a k, determined by the desired reliability, and randomly pick 1 ≤ b1 ,..., bk < n.
Let n - 1 = 2^l · m, where m is odd5. For b = bi compute b^m mod n. This can be done by about 1.5 log2 m multiplications, see [3, p. 398]. Every time a number n < d is obtained, find d1 = res(d, n) and continue. Thus only products of numbers a, b < n and divisions of d < n^2 by n are involved. Next calculate the residues mod n of b^(2m) ,..., b^(2^l · m) = b^(n-1).
Since log2 n = l + log2 m, the evaluation of all the necessary powers of b will require 1.5 log2 n multiplications of a, b < n and 1.5 log2 n divisions of a d < n^2 by n, altogether 3 log2 n steps.
If the residue of b^(n-1) is not 1 then Wn(b) holds. If that residue is 1, find (res(b^(2^i · m), n) - 1, n) for 1 ≤ i ≤ l. Each g.c.d. computation can be performed by doing at most log2 n subtractions and divisions by 2. If any one of these g.c.d.'s is neither 1 nor n then Wn(b) is true. Thus for each bi we computationally determine whether Wn(bi) is true, and the total required number of steps is 3 log2 n + l · log2 n.
If, for any 1 ≤ i ≤ k, Wn(bi) is true then n is composite. If, for all 1 ≤ i ≤ k, Wn(bi) is not true, then the test asserts that n is prime.
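
Gathering these steps into a sketch (reusing the is_witness check written out after the definition in Section 1; the careful accounting of squarings is omitted for clarity):

    import random

    def probably_prime(n, k=50):
        # "composite" answers are always correct; "prime" answers are wrong with
        # probability below 1/4**k when n is composite
        if n < 4:
            return n in (2, 3)
        for _ in range(k):
            b = random.randrange(1, n)
            if is_witness(b, n):
                return False        # n is certainly composite
        return True                 # n is asserted to be prime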

THEOREM 2. The above algorithm requires for n - 1 = 2^l · m, m odd, at most k · (2 · log2 n + l · log2 n) steps. If n is prime then the result is always correct. For each composite n the test may declare n to be prime, but the probability of such error is smaller than 1/2^(2k).

Proof. Only the last statement requires proof. An error will occur only when the n to be tested is composite and the b1 ,..., bk chosen in this particular run of the algorithm are all nonwitnesses. Because of Theorem 1, the probability of randomly picking a nonwitness is smaller than 1/4. The probability of independently picking k nonwitnesses is smaller than 1/4^k = 1/2^(2k). Thus the probability that for any given number n, a particular run of the algorithm will produce an erroneous answer is smaller than 1/2^(2k).
Note that the theorem is formulated with respect to each n, no averaging over a range of n’s is necessary. For a given composite n the test may (rarely) give a wrong answer in one run and the correct answer in other runs. Thus there are no n’s on which the algorithm behaves poorly.

3. IMPLEMENTATION

In implementing the algorithm we incorporate some laborsaving steps. First we test the given n for divisibility by any prime p < N, where, say, N = 1000.6
If n passed this sieve then we apply the algorithm. Suppose that we pick k = 30. An examination of the proof of Theorem 1 shows that for “most” composite n the probability of finding a witness in one try is much bigger than 3/4. In fact, in actual runs when n was composite the witness was always the first or second b chosen. But even for n for which (1) is almost exact, the probability of error is 1/2^60 < 10^(-18). One expected error in a billion billion tests! This seems small when compared to the frequency of machine errors present in practical computations.
The main application of the primality test is to produce large numbers asserted to be primes with specified properties.
The overall strategy is to generate numbers of the specified form and test for primality until a number is asserted to be prime. Such searches lead to success rather quickly because in most cases the Prime Number Theorem ensures that the density of primes in the sequence mi searched is like 1/l if the numbers satisfy ln(mi) ∼ l. In other cases an appropriate density statement can be derived from stronger hitherto unproven assumptions such as the extended Riemann hypothesis (ERH). In every instance actually tried, the search did terminate within practical time.
As mentioned before, any primality test should incorporate trying small divisors. Thus in practice the full test is applied only to the numbers asserted to be primes.
If desired, it is often possible to ensure that n - 1 = 2 * m, where m is odd so that the second part of checking Wn, that relating to (ii)(b), involves only b(n-1)/2. But even if this feature is not incorporated, a heuristic argument shows that the expected number of g.c.d. computations arising from (ii)(b) is two. Note that the notion of expected number used here does not have a precise meaning as in Theorem 2 and is invoked only heuristically.
Suppose we want to find an “arbitrary” prime having 250 binary digits. Start by randomly generating a sequence m = α1α2 ... α250, αi ∈ {0, 1}, α1 = 1. View m as an integer written in binary notation, say it is even. Successively form the sums m + 1, m + 3 ,...; if desired omit those for which 4 | (n - 1). For each n = m + i test for primality. Continue until a number is asserted to be prime.
Because of the Prime Number Theorem, the search will not require testing a prohibitive number of sums m + i. The density of primes around n = 2^250 is 1/ln n > 1/250. Thus, one can incorporate a stopping rule: If within 500 tries no prime was found, drop the number m and restart with a new number m, [log2 m] = 250. In this way a number asserted to be prime n = m + i, i < 1000, will usually be found very rapidly.
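
A sketch of this search, assuming the probably_prime test sketched above; the 250-bit size and the 500-try stopping rule follow the text:

    import random

    def random_prime(bits=250, tries=500):
        while True:
            m = random.getrandbits(bits - 1) | (1 << (bits - 1))  # force the top bit
            m -= m % 2                                            # make m even
            for i in range(1, 2 * tries, 2):                      # m + 1, m + 3, ...
                if probably_prime(m + i):
                    return m + i
            # stopping rule reached: discard m and restart with a fresh one
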
For certain applications it is important to have a prime n so that n - 1 has a large prime factor. This is achieved in a similar fashion. Suppose that we want n - 1 to have a prime factor p so that [log2p] = 200. Find, as before, such a number asserted to be prime. Form the numbers 2ip + 1, i = 1, 3,... . Test each number for primality. Again a number q = lp + 1 of the desired form is rapidly asserted as prime. If the arithmetical series with difference p does not rapidly find a number asserted to be prime (this has never happened), then p can be discarded and another prime p1 can be used for restarting the search.
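
The companion search for a prime q such that q - 1 has a 200-bit prime factor p might look like the following sketch (the bound on how many candidates to try before restarting is an arbitrary choice of this sketch):

    def prime_with_large_factor(p_bits=200, tries=1000):
        while True:
            p = random_prime(bits=p_bits)      # the large prime factor of q - 1
            for i in range(1, tries + 1):
                q = 2 * i * p + 1              # candidates of the form 2ip + 1
                if probably_prime(q):
                    return q, p
            # per the text, this restart has never been needed in practice
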
These are just examples of searches actually conducted. The variations pertaining to other forms of primes, or to the testing of given numbers are obvious.

4. SOME EXPERIMENTAL RESULTS

V. Pratt has programmed this algorithm and together we planned some experiments. The computations were done on a medium-sized computer7. Numbers with several hundred binary digits were generated and tested. Still, the computations, including searches, did not take more than minutes per number asserted to be prime.
The first test was verification of the known results concerning primality and compositeness of n = 2^p - 1, p ≤ 500, p prime. Within about 10 min all answers were produced without a single error. This was mainly done to test the program. For Mersenne primes our test is about 1/k as fast as the Lucas test which applies only to Mersenne numbers.
Next the test was applied to generate some large numbers asserted to be primes along the lines explained in Section 3. Rather than load an arbitrary binary sequence m, the search started from powers of 2 and proceeded by decrements. The numbers

2^300 - 153, 2^400 - 593,

were asserted to be the largest primes below 2^300 and 2^400, respectively, since the other numbers in the intervals are all composite.
Finally the test was applied to discover what we believe are the hitherto largest known numbers asserted to be twin primes. In order to speed up the sieving process the search was started with a number m which is a product of many small primes. Then pairs of the form m * l + 821, m * l + 823, and m * l + 827, m * l + 829 were tried. The pairs 821, 823 and 827, 829 are themselves twin primes. Within half an hour

(∏pi<300 pi) * 338 + 821, (∏pi<300 pi) * 338 + 823,

were asserted to be twin-primes. These numbers are of order of magnitude 10^123. Five subsequent hours of search failed to discover additional pairs. This is the only case where any search required more than a few minutes. Of course, no Prime Number Theorem density estimates apply to twin primes. The reader’s guess as to the heuristic implications of this seeming gap in twin primes is as good as ours.

In conclusion, let us raise a question concerning possible theoretical applications of this primality test, in addition to the practical applications mentioned in the Introduction. What conjectures pertaining to the distribution of primes can one formulate, lend support to, or disprove by use of this test? For example, are there some refined estimates for the density of primes around 10^100, perhaps consequences of some strong number- or function-theoretic conjectures, which one could experimentally check using the primality test? The author, together with V. Pratt, has tested the density of primes of the form n = x^2 + 1 for n ∼ 2^50, 2^100, 2^150, 2^200, and found very good agreement with the Hardy-Littlewood conjecture on this density. These and other experimental results will be reported elsewhere.
Note added May 10, 1978: Donald Knuth (and, it seems, several others) observed that the primality test can dispense with the g.c.d. computation. His observation is based on the following corollary of Theorem 1.
Using the notation of Section 2, let n - 1 = 2^l · m, where m is odd. Let 0 ≤ x < n; denote x0 ≡ x^m mod n, xi ≡ (xi-1)^2 mod n, 1 ≤ i ≤ l. Thus xl ≡ x^(n-1) mod n.

PROPOSITION. If 4 < n is composite and odd then for at least (3/4)(n - 1) of the 1 ≤ x < n either xl ≠ 1 or for some 1 ≤ i ≤ l we have xi = 1 and xi-1 ≠ n - 1.

Proof. By Theorem 1, if n is composite then for at least (3/4)(n - 1) of the 1 ≤ x < n Wn(x) holds. If Wn(x) holds then xl ≠ 1, or xl = 1 and for some 0 ≤ j ≤ l we have 1 < (xj - 1, n) < n.
In the second case let i - 1, j ≤ i - 1 < l, be the last index such that xi-1 ≠ 1; thus xi = 1. We have xj ≡ 1 mod p, where p is a proper divisor of n. Hence xi-1 ≡ 1 mod p, since

xi-1 ≡ xj^(2^(i-1-j)) mod n.

Now xi-1 = n - 1 is impossible because it entails xi-1 - 1 = n - 2 being divisible by p; since p | n this would force p | 2, but p ≠ 2 because n is odd. Thus xi-1 ≠ n - 1 and xi = 1.
Note that if the condition in the statement of the proposition holds for some x then n must be composite. Namely, if xl ≠ 1 then Fermat’s relation does not hold. And if xi-1 ≠ n - 1, xi ≡ (xi-1)^2 ≡ 1 mod n, then 1 has more than two square roots mod n so n is composite.
The test for primality runs as in Section 2 except that when xl = 1 we need not compute the (xi - 1, n). Simply calculate x0 ≡ x^m mod n, then square mod n repeatedly until xi-1 ≠ n - 1 and xi = 1, or until xl is reached. In any case we need never square xl-1.
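
A sketch of the resulting g.c.d.-free check for a single base x, assuming n is odd and greater than 4 (this is the form in which the test is commonly implemented today):

    def is_strong_witness(x, n):
        # write n - 1 = 2**l * m with m odd (n is assumed odd and > 4)
        l, m = 0, n - 1
        while m % 2 == 0:
            m //= 2
            l += 1
        x = pow(x, m, n)                  # x_0
        if x == 1 or x == n - 1:
            return False                  # not a witness
        for _ in range(l - 1):            # x_1, ..., x_(l-1)
            x = pow(x, 2, n)
            if x == n - 1:
                return False              # the chain will reach x_l = 1 harmlessly
            if x == 1:
                return True               # 1 gained a nontrivial square root mod n
        return True                       # either x_l != 1 or x_(l-1) is such a root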

REFERENCES
1. R. D. CARMICHAEL, On composite numbers p which satisfy the Fermat congruence a^(p-1) ≡ 1 mod p, Amer. Math. Monthly 19 (1912), 22-27.
2. C. F. GAUSS, “Disquisitiones Arithmeticae” (A. A. Clarke, Transl.), Yale Univ. Press, New Haven, Conn./London, 1966.
3. D. E. KNUTH, “The Art of Computer Programming,” Vol. 2, Addison-Wesley, Reading, Mass., 1969.
4. G. L. MILLER, Riemann’s hypothesis and tests for primality, J. Comput. and System Sci. 13 (1976), 300-317.
5. M. O. RABIN, Probabilistic algorithms, in “Algorithms and Complexity, New Directions and Recent Results” (J. F. Traub, Ed.), pp. 21-40, Academic Press, New York, 1976.
6. R. SOLOVAY AND V. STRASSEN, A fast Monte-Carlo test for primality, SIAM J. Comput. 6 (1977), 84-85.

  1. Originally published in Journal of Number Theory, 12, 128-138 (1980) [pdf version]. Some minor edits were made to the text to be amenable to this publishing format.
  2. And similarly, by b ∤ a we denote the fact that b does not divide a, i.e., res(a, b) ≠ 0.
  3. There exists i such that
  4. Euler's totient function counts the positive integers up to a given integer n that are relatively prime to n.
  5. The choice of the letter "l" here is unfortunate, since it can be hard to distinguish from the number 1, depending on the font. I have left it as the letter l to be consistent with the paper.
  6. Or, you could pick the biggest primorial that fits in your memory size and test for primality against that by calculating the g.c.d.
  7. As computers have changed over the years, calling it a medium-sized computer is not very descriptive.

Keccak Reference

Tuesday, April 23rd, 2019

In the interest of preserving the historical document which is the Keccak Reference Standard1, I have included a copy on this website. This is the pdf version of the document.

  1. Guido Bertoni, et al.