Using the .htaccess File to Allow Two Computers Access

May 15th, 2019

To control access to parts of a website, one can use a .htaccess file to restrict which IP addresses are allowed to view the files in a folder. I have two computers which I generally use to access the admin functions of my website, so I wanted to set up the .htaccess file so that either computer could log into the WP-Admin functionality. After looking through various online suggestions, I figured I should be able to do something like this to allow either IP address:

Order Deny,Allow
Deny from All
Allow from xx.xx.xx.xx
Allow from yy.yy.yy.yy

But when I put this in my .htaccess file and tried to load the website, I just got an error saying that Apache was misconfigured. If you see anything wrong with the code above, I would love to hear why it does not work, since every site I looked at seemed to suggest this should work for what I am trying to do.

The setting that I have been using, and which has been working, is this:

Require IP xx.xx.xx.xx

But when I use this, each time I switch computers I have to ssh into the host and manually alter three different .htaccess files to use the correct IP address before I can log into WP-Admin, which is annoyingly tedious. So I did some more digging into the more technical documentation, where I found the option to use this form:

<RequireAny>
Require IP xx.xx.xx.xx
Require IP yy.yy.yy.yy
</RequireAny>

This lets either computer access the folder.
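
For reference, here is the complete block as it would sit in the .htaccess file. This is a minimal sketch assuming an Apache 2.4 host, where the Require directive comes from mod_authz_core; 2.4 moved the older Order/Deny/Allow directives into the optional mod_access_compat module, which would explain the misconfiguration error above if the host does not load that module.

# .htaccess - restrict the admin folder to two machines (Apache 2.4 syntax)
<RequireAny>
Require ip xx.xx.xx.xx
Require ip yy.yy.yy.yy
</RequireAny>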

Hello from a Linux Computer

May 13th, 2019

Back in 2007 I got my first MacBook. I have been using Macs ever since, except at work, where I have generally been issued a Windows machine; the exception has been the computer running the NMR, which has been a Linux system everywhere I have worked.

Recently I decided that it would be better to have a Linux computer. I have heard good things about Gentoo, so I thought I would start there. I happen to have an old MacBook Pro sitting around that nobody is using, so I decided to try installing a new operating system on it. Apparently Macs can be finicky about the particular settings used. I spent the last week trying to follow the Gentoo installation guide, but the system never booted up.

Being rather frustrated at this point, I thought I would start with something easier, so I created an Ubuntu minimal installation disk. After fighting with the Gentoo install for the past week, the Ubuntu install was a breeze. There were just a couple of simple questions to answer, and the install program did the rest. Now I have a working Linux system.

I imagine that it will take me some time to get all the settings just the way I want them, but I at least have something to work with. Right away I noticed there is no curl, so I had to install that.

New Class of Polymers which Allow Efficient Recycling

May 10th, 2019

In the news I saw a reference to this paper on recyclable polymers1. The basic idea described is that these researchers have developed a class of polymers which allow efficient recycling. Combining a bis-triketone monomer with a diamine forms a polymer, and this polymer can be broken back down to the monomers simply by soaking it in acid.

The important advancement here is that once the starting monomers are regenerated, they can be separated from any additives, cleaned of the impurities picked up during the polymer's use, and then used as starting materials for the generation of new polymer. This allows the recycled product to be just as high quality as the original product.

Figure 1. A triketone.

Let's take a look at the chemical reactions going on here. First, when they say they are using a triketone, it is specifically a molecule with a central carbon surrounded by three carbonyl moieties; an example is shown in Figure 1. A triketone is more appropriate than a normal ketone for this sort of reaction because it has a fairly acidic proton on that central carbon. The acidity is caused by stabilization of the resulting anion by the surrounding carbonyl groups, which can be represented using resonance structures as shown in Figure 2.

Figure 2. Resonance structure of a triketone anion.

The reaction creating the polymer is the reaction of a triketone with an amine, which forms a diketoimine and releases a molecule of water, as shown in Figure 3.

Figure 3. Formation of a diketoimine.

This reaction can be reversed by hydrolysis under acidic conditions to regenerate the original triketone and amine, as shown in Figure 4.

Figure 4. Hydrolysis of diketoimine.

The formation of a polymer would use a diamine and a bis-triketone as starting materials. When mixed, they form a polymer, as shown in Figure 5. Here I have shown the reaction using 1,2-diaminoethane (ethylenediamine) and the bis-triketone derived from adipic acid.

Figure 5. Polymerization.

Figure 6. A triamine.

The reaction shown above would result in a linear polymer. A branched or networked polymer can be created by using a triamine, such as the one shown in Figure 6.

Figure 7. A bis-triketone derived from terephthalic acid.

The physical properties of the resulting polymer can be controlled by the choice of monomers. A mixture of diamine and triamine monomers could be used to tailor the polymer for a specific use. Instead of using the aliphatic bis-triketone derived from adipic acid, one could start with the aromatic bis-triketone derived from terephthalic acid, which is shown in Figure 7. This would give more rigidity to the resulting polymer. Alternatively, one could form a polymer with less rigidity by using a bis-triketone derived from a longer-chain diacid, such as dodecanedioic acid.

The real advantage this polymer presents is that additives such as colorants, plasticizers, or inorganic fibers for strength can be added without reducing the recyclability of the polymer. Once the polymer is broken down by acid, the monomers can be purified using regular solution purification techniques.

I also wanted to call out the formation of the bis-triketone, because I think the chemistry here, while not new in itself, is interesting as an example of organic synthesis. They start with two equivalents of dimedone2 and one equivalent of adipic acid3. These were dissolved in methylene chloride with three equivalents of DMAP4, which will deprotonate both the adipic acid and the dimedone. They then added a solution of DCC5 in methylene chloride, which accepts the oxygen from the adipic acid, forming N,N'-dicyclohexylurea and allowing the coupling of the adipic acid and dimedone. The N,N'-dicyclohexylurea precipitates and was removed by filtration, and then the DMAP was removed by washing with hydrochloric acid. This scheme is shown below in Figure 86.

Figure 8. Reaction to form a bis-triketone.

  1. Peter R. Christensen, Angelique M. Scheuermann, Kathryn E. Loeffler, Brett A. Helms, Closed-loop recycling of plastics enabled by dynamic covalent diketoenamine bonds, Nature Chemistry, volume 11, pages 442–448 (2019). []
  2. 5,5-Dimethyl-1,3-cyclohexanedione []
  3. Hexanedioic acid. You need 2 eq. of dimedone because the reaction adds one dimedone to each end of the adipic acid. []
  4. 4-Dimethylaminopyridine, a base. You might think this would require four equivalents, but the DCC acts as a base as well, so you don't need so much. []
  5. Dicyclohexylcarbodiimide []
  6. I admit that I am being lazy about arrow pushing electrons here, but I think this shows the important part of the reaction. []

BTC price setting auction May 2019

May 9th, 2019

This appeared in the log today:

auctionbot: Buy order # 1047 has ENDED: 500 WFF, WU esta bien SOLD by PeterL for 94mn ecu. Attn: BingoBoingo

This might look cryptic, but it is actually a very important piece of information. What it is saying is that 500 USD1 were sold for 94mn ecu2, which is equivalent3 to 0.094 bitcoin. Or, for people who think of things in terms of the USD price, that is an exchange rate of 5319.15 USD/BTC.
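
To spell out the arithmetic (the peg works out to 10^9 ecu per bitcoin, as implied by the stated equivalence):

94,000,000 ecu ÷ 1,000,000,000 ecu/BTC = 0.094 BTC
500 USD ÷ 0.094 BTC ≈ 5319.15 USD/BTC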

This particular monthly auction is important because it establishes the exchange rate which the Pizarro ISP uses to set the rates they charge customers.

  1. US dollars, aka WFF, which stands for wired filthy fiats []
  2. ecu = euloran copper unit []
  3. Equivalent because the makers of the game Eulora, who issue these ecu, have pegged them to the bitcoin. []

Ideas

May 1st, 2019

Have you ever had a great idea and then later realized that you were not the first one to think of it?

Asphalt paver machine

When I was about four I invented a machine that would make roads. I even drew a really nice picture in crayon of what I came up with. It had a big hopper in front that would pour the material down and spread it flat, and it had big treads on the sides so it could move forward, building a road behind it. Some years later I was amazed as my family drove past a highway under construction: they were using the machine that I had drawn! Something like the picture shown.

When I was in college, after studying a bit of chemistry and physics, I did some thinking about nuclear fusion. Some metals, such as palladium, can absorb large amounts of hydrogen. I reasoned that if one were to make a target of palladium and expose it to deuterium, it would hold the deuterium in place. One could then use some high voltage to accelerate more deuterium ions toward it, resulting in nuclear fusion. I was thinking this would be a great way to make a power-producing fusion reactor! Then I found out that this had already been invented some decades prior. The resulting devices are not used to generate power, but they are used in the medical field as a source of high-energy neutrons.

Also while in college I gave some thought to unusual electric motor configurations. I was fascinated by the homopolar motor. I drew up an idea of a homopolar motor made with multiple windings around two co-axial magnets which were arranged with their north poles pointed in opposite directions. Some years later I found some of my old notes and drawings, so I did some searching on the internet to try to determine whether it would work. I was surprised to find that a patent had been issued in the intervening years for such a device1, and the drawings in the patent are very similar to what I had drawn years before.

  1. Although, there being a patent does not necessarily mean that it would actually work. []

Macro for Chemical Notation in MS Word

April 29th, 2019

If you ever find yourself entering chemical formulas1 into a Microsoft Word document, you will quickly realize that it is much harder than it needs to be. Either you switch back and forth between the keyboard and the mouse to click "subscript", or use the keyboard shortcut Ctrl-(=), or enter the formula and then go back, select all the numbers (using Ctrl-mouse), and set them to subscript all at once - all of these are too much work, especially if you have a large number of formulas to enter. It is much better to use the following macro, which makes formatting a chemical formula take just a couple of keystrokes. (Directions for installing it into Microsoft Word are below.)

#******************************#
Sub ChemNotation()
'
' ChemNotation Macro: format the digits in a chemical formula as subscripts
'
Dim myrange As Range
Dim i As Long
Dim Char As String

' Grab the word at the cursor; if it is too short, the cursor is likely
' sitting just past the formula, so extend the selection one word left.
Set myrange = Selection.Words(1)
If (Len(myrange) < 2) Then
    Selection.MoveLeft Unit:=wdWord, Count:=1, Extend:=wdExtend
    Set myrange = Selection.Words(1)
End If

' Subscript every digit; make sure everything else is not subscripted
For i = 1 To Len(myrange)
    Char = Mid$(myrange, i, 1)
    If (IsNumeric(Char)) Then
        myrange.Characters(i).Font.Subscript = True
    Else
        myrange.Characters(i).Font.Subscript = False
    End If
Next i

' Step past the formula and turn subscript off for whatever is typed next
Selection.MoveRight Unit:=wdCharacter, Count:=1
Selection.Font.Subscript = False

End Sub
#******************************#

Directions:

  • Open Microsoft Word
  • Go to the "View" tab in the ribbon
  • Click on the "Macros" button ("View Macros", or press Alt-F8)
  • Type the name "ChemNotation" in the Macro name box, then click "Create"
  • Copy and paste the text above into the editing field
  • Open File->Options->Quick Access Toolbar (or right-click on the ribbon and select "Customize Quick Access Toolbar")
  • Use the drop-down box "Choose commands from" to select "Macros", then add the macro to the toolbar
  • Select the macro and click "Modify"; change the name and icon as desired
  • To use the macro: type in the formula, then, with the cursor at the end of the formula or after selecting the formula, press Alt followed by the number of the macro's icon in the toolbar. (I have found that in normal writing I can type the formula and hit Alt, then the shortcut number (3 on my Quick Access bar), but if I am working in a table cell it only works if I have selected the formula beforehand.)
  1. Things like H2O, CO2, or C14H28O2. []

Application Note: Levofloxacin Assay

April 29th, 2019

Peter M. Lambert1

Impact Analytical, Inc., Midland, Michigan

Levofloxacin is an antibiotic2, the structure of which is shown in Structure 1.

Structure 1. Levofloxacin Hemihydrate

Levofloxacin was analyzed by high performance liquid chromatography (HPLC) following the USP Levofloxacin monograph3.

Experimental

A buffer was prepared by weighing 8.4561 g ammonium acetate, 1.2481 g copper(II) sulfate pentahydrate, and 1.3062 g L-isoleucine into a 1000 mL volumetric flask and filling to volume with water.

The mobile phase solution was prepared by mixing 700 mL buffer with 300 mL methanol.

A reference standard solution was prepared by weighing 51.86 mg levofloxacin USP standard into a 50 mL volumetric flask and filling to volume with mobile phase.

System suitability was established by five replicate injections of the standard solution. The following conditions were used.

System: Waters Alliance HPLC
Column: Phenomenex Luna C18(2), 5 μm particle size, 250 × 4.6 mm
Column Temp.: 45 °C
Injection volume: 25 μL
Run time: 25 minutes
Flow rate: 0.8 mL/minute
UV Wavelength: 360 nm

Results

The system suitability passed, with an average tailing factor of 0.7 (criterion: 0.5 – 1.5) and a relative standard deviation of 0.1% (criterion: NMT 1.0%). A representative HPLC chromatogram of the levofloxacin standard is shown in Figure 1. The retention time of levofloxacin was about 16.8 minutes.

Figure 1. HPLC chromatogram of Levofloxacin USP standard.

  1. Correspondence can be sent to: lambert@impactanalytical.com []
  2. Anderson VR, Perry CM, Levofloxacin: a review of its use as a high-dose, short-course treatment for bacterial infection, Drugs. 2008;68(4):535-65. []
  3. United States Pharmacopeia, USP Monograph: Levofloxacin, USP41-NF36. []

Allowable Baby Names

April 26th, 2019

From the Logs:

Mircea Popescu: Names like "Trace Mayer" SHOULD be obsolete. Get a fucking human name why don't you. What is this Mug Costanza and Rifling Jones bs. Unless of course there's some King Trace in the glorious history of that great nation of Africa that I'm unfamiliar with.
Asciilifeform: Hey, if we had a Joe Stack, why not king trace. (is there a, e.g., Henry Heap?)
MP: Joe is fine. what the fuck reason did Mrs Mayer have to eschew calling her boy Rachel or whatever the fuck they do in her tribe.

PeterL: What kind of funny name is "Mircea"?
MP: PeterL ah, but in the god forsaken country of Romania, there's a bunch of kings called that.

PeterL: So you can only name people after kings?
MP: PeterL yes. You may only name kids after famous dead people. Either kings or saints. Pick.

PeterL: How about family members?
MP: PeterL well family members were named through the same process, so...

So, to build on the wisdom of Mircea Popescu, I figure it would be nice to have a baby name book made not with definitions of the names but with historical instances of the names. Here I list all the boy names drawn from presidents of the US, kings of England or Scotland (which is where my family came from before moving to the US), governors of Michigan (where I currently live), and my own family. The girls are mostly wives of those listed above; otherwise the list would only be about three names.

Here is a complete list of the boy names of my tribe:

Aaron Abraham Albert Alexander Alfred Andrew Arthur Austin
Barack Benjamin Boris Brigham
Calvin Charles Chester
David Donald Duncan Dwight
Edgar Edmund Edward Edwin Elliott Elwood Ezra
Franklin
George Gerald Gordon Grover
Harold Harry Heber Henry Herbert
James John Joseph Josiah
Kenneth Kinsley
Lorenzo Lyndon
Macbeth Malcolm Martin Maurice Millard Moses
Peter
Reed Richard Robert Ronald Russel Rutherford
Spencer Stephen
Theodore Thomas
Ulysses
Ward Warren Wilber Wilford William Woodrow
Zachary

And here is a complete list of the girl names of my tribe:

Abigail Adelaide Alexandra Allegra Anna Anne Augusta
Barbara Bess Beth Betty
Camilla Caroline Carrie Catherine Charlotte Cheryl Clara Clarissa Cora
Dantzel Diane Dolley
Edith Eleanor Eliza Elizabeth Ellen Elva Emily Emma
Flora Florence Frances
Grace Gretchen
Hannah Helen Henrietta Hillary
Ida Ila Ivana
Jacqueline Jane Janeen Jennifer Julia
Lady Laura Lou Louisa Louise Lucia Lucretia Lucy
Mamie Margaret Maria Marina Marjorie Marla Martha Mary Melania Michelle Miriam
Nancy
Patricia
Rachel Rosalynn
Sarah Sophia
Theresa
Victoria
Wendy

You may note that this list is segregated by boys/girls. I do not agree with the current trend of having gender-neutral names.

If you think of an obvious name I missed, please comment and I will add it to the list!

Probabilistic Algorithm for Testing Primality

April 25th, 2019

MICHAEL O. RABIN1

Institute of Mathematics, Hebrew University, Jerusalem, Israel, and Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
Communicated by H. Zassenhaus

Received December 10, 1977

We present a practical probabilistic algorithm for testing large numbers of arbitrary form for primality. The algorithm has the feature that when it determines a number composite then the result is always true, but when it asserts that a number is prime there is a provably small probability of error. The algorithm was used to generate large numbers asserted to be primes of arbitrary and special forms, including very large numbers asserted to be twin primes. Theoretical foundations as well as details of implementation and experimental results are given.

The problem of determining for given integers whether they are prime has occupied mathematicians for centuries. In the words of Gauss in his celebrated “Disquisitiones Arithmeticae” [2, p. 396]:

The problem of distinguishing prime numbers from composite numbers and of resolving the latter into their prime factors is known to be one of the most important and useful in arithmetic. It has engaged the industry and wisdom of ancient and modern geometers to such an extent that it would be superfluous to discuss the problem at length. Nevertheless we must confess that all methods that have been proposed thus far are either restricted to very special cases or are so laborious and prolix that even for numbers that do not exceed the limits of tables constructed by estimable men, i.e., for numbers that do not yield to artificial methods, they try the patience of even the practiced calculator. And these methods do not apply at all to larger numbers.

Gauss proceeds to give several interesting algorithms for the determination of primality. We do not intend to present here a survey or bibliography of the extensive literature concerning testing for primality. All the existing methods require for arbitrary integers n a number of steps O(n^e) where e is some fraction such as 1/3 or 1/4. Thus, experimental results indicate that around n = 10^65 each of these methods encounters some numbers untestable by it.
For integers of certain special forms, the most notable example being the so-called Mersenne numbers n = 2^p - 1, where p is a prime, there do exist rather rapid tests for primality. But arbitrary large integers defy even the most powerful computers, when tested by the conventional methods.
In this paper we present a primality test applicable to arbitrary and very large integers. If we count an arithmetical operation such as addition, multiplication, or division, involving integers 0 ≤ a, b ≤ n as one step then the test of primality of n requires in the worst case c(log2 n)^2 steps where c is about 100. Thus very large numbers are amenable to this test. In practice the number of steps in most cases is about c log2 n for n which are prime, and about 3 log2 n for numbers n that are composite even if they have no small divisors.
The salient features of our method are that it is probabilistic, i.e., uses randomization within the computation, and that it produces the answer with a certain controllable minuscule probability of error. To be more precise, with full details to be given later, the algorithm produces and employs certain random integers 0 < b1, ..., bk < n. If the outcome of the whole test is that n is composite then it is always correct (barring computational error). If the outcome asserts that n is prime then it may sometimes be wrong. But the probability that a composite number will be erroneously asserted to be prime is smaller than 1/2^(2k). If, say, k = 50, then the probability of error is at most 1/2^100.
This last statement does not mean that an integer n asserted as prime by use of 50 random numbers is prime with probability at least 1 - 1/2^100. Such an interpretation is nonsensical since n is either prime or not. The correct meaning is that if the test is applied to m = 2^100 integers n1, n2, ..., nm, then the expected number of wrong answers is one. These integers need not be pairwise different and no probability distribution on the integers to be tested for primality is assumed.
Thus when we say that an integer n was asserted to be prime by use of 50 random numbers, this is no proof that n is actually prime. What is stated is that n was asserted to be prime by a procedure that on the average will make no more than one mistake in 2^100 applications (even when testing the same n).
The test and an earlier, slightly weaker, form of the theorem underlying it appeared in [5]. A different probabilistic test for primality was given by Solovay and Strassen [6].
In Section 3 we discuss the implementation and application of the test to the generation of very large numbers asserted to be primes of arbitrary or of a desired prescribed form. Large primes are useful in exact arithmetical computations involving large integers. By computing modulo a prime, one can avoid the use of fractions. Also, recently evolved digitalized signature systems are based on the easy availability of large primes.
In the last section we present some of the experimental results which include examples of very large numbers asserted to be primes, twin primes, etc., very rapidly obtained on a medium-sized computer.

1. THE FUNDAMENTAL THEOREM

Throughout this paper (a, b) denotes the greatest common divisor (g.c.d.) of the integers a, b, and res(a, b) denotes the least nonnegative residue of a when divided by b. By b | a we denote the fact that b divides a, i.e., res(a, b) = 0.2

DEFINITION. Let n be an integer. Consider the following condition to be denoted by Wn(b), on an integer b:

(i) 1 ≤ b < n;
(ii)(a) b^(n-1) ≢ 1 mod n, or
(ii)(b) ∃i s.t.3 2^i | (n - 1) and 1 < (b^((n-1)/2^i) - 1, n) < n.

Call such an integer b a witness to the compositeness of n.
This condition was first considered by Miller [4], who used it to give a nonprobabilistic test for primality assuming the correctness of the extended Riemann hypothesis (ERH). Our test does not assume ERH and is considerably faster.
Obviously if Wn(b) holds for some b then n is composite. For if (ii)(a) holds then Fermat’s relation is violated. And (ii)(b) means that n has a proper divisor. Thus the adjective “witness to compositeness” is justified. It turns out that if n is composite then witnesses abound.

THEOREM 1. If 4 < n is composite then

eq. (1): 3(n - 1)/4 ≤ c({b | Wn(b)}).

By c(S) we denote the number of elements in the set S.

Because of (i), (1) means that no more than 1/4 of the numbers 1 ≤ b < n are not witnesses. We require some lemmas for the proof of Theorem 1.
Denote by En the set of all c, 1 ≤ c < n, (c, n) = 1. We have c(En) = φ(n), where φ is Euler's function4. The set En is a group under multiplication mod n. If n = p^k, where p is prime, then φ(n) = p^k - p^(k-1); for odd p, En is a cyclic group.
For an integer m denote by e(m) the largest i such that 2^i | m.
For a sequence m = ⟨m1, ..., mk⟩ of integers define res(b, m) = ⟨s1, ..., sk⟩, where si = res(b, mi), 1 ≤ i ≤ k. When m is fixed abbreviate res(b, m) = res(b).

LEMMA 2. Let mi | n for 1 ≤ i ≤ k, and (mi, mj) = 1, 1 ≤ i < j ≤ k, and let m = ⟨m1, ..., mk⟩. For every s = ⟨s1, ..., sk⟩, s1 ∈ Em1, ..., sk ∈ Emk, the number of elements in

En ∩ {b | res(b, m) = s}

is the same.

Proof. Let t be the product of all primes dividing n, but not any of the mi, 1 ≤ i ≤ k. Assume t ≠ 1; for t = 1 the proof that follows has to be just slightly modified. By the Chinese Remainder Theorem there exists an integer b such that res(b, ⟨m, t⟩) = ⟨s1, ..., sk, 1⟩. Since mi | n, 1 ≤ i ≤ k, and t | n, we may assume 1 ≤ b < n. Now b ∈ En. Otherwise for some prime, p | n and p | b. This p divides some mi, or p | t. Since si = b - qimi, if p | mi then p | si contradicting (mi, si) = 1. Similarly p | t leads to the contradiction p | 1.
Denote the restriction res( , m) | En by f. By the previous paragraph, f: En → Em1 × ... × Emk = G is a homomorphism of En onto the direct product G. Thus f^(-1)(⟨s1, ..., sk⟩) has the same number of elements for all ⟨s1, ..., sk⟩ ∈ G.

Let, for example, p | n, q | n, where p and q are different primes. The number of b ∈ En for which res(b, ⟨p, q⟩) = ⟨s1, s2⟩ is the same for every pair 1 ≤ s1 ≤ p - 1, 1 ≤ s2 ≤ q - 1. We shall employ in the sequel such considerations without further elaboration.

LEMMA 3. Let p1 ≠ p2 be primes, q1 = p1^k1, q2 = p2^k2. Assume q1q2 | n. Denote ti = (φ(qi), n - 1), mi = φ(qi)/ti, i = 1, 2.
At most φ(n)/m1m2 of the integers b ∈ En do not satisfy Wn(b).
If t1 is even then at most φ(n)/(2m1m2) of the b ∈ En do not satisfy Wn(b).

Proof. Let ai be a primitive root mod qi, i = 1, 2. That is, ai^t ≡ 1 mod qi if and only if φ(qi) | t.
Let b ∈ En; then (b, qi) = 1 and b ≡ ai^ri mod qi, for some r1, r2. Because q1q2 | n we have that

eq. (2): b^(n-1) ≡ 1 mod n

implies φ(qi) | ri(n - 1), i = 1, 2. Hence it implies mi | ri, i = 1, 2, so r1 = h1m1, r2 = h2m2.
Thus if res(b, ⟨q1, q2⟩) = ⟨s1, s2⟩ then for at most 1/m1m2 out of all pairs ⟨s1, s2⟩ will (2) hold. By Lemma 2 all pairs ⟨s1, s2⟩ appear equally often as residue of b ∈ En, hence (2) will hold for at most φ(n)/m1m2 of the integers b ∈ En. But if (2) does not hold then Wn(b) is true.
The sharper claim made, when t1 is even, hinges on the fact, to be proved next, that in this case even if (2) holds for b, in at least half the instances (ii)(b) holds, so that b is still a witness.
Assume that t1 is even and e(t1) = e(t2); see the notation preceding Lemma 2. Let i be such that (n - 1)/2^i = d(i) is an integer and tj ∤ d(i), tj/2 | d(i), j = 1, 2. Let b ∈ En satisfy (2) and adopt the above notation, concerning a1, a2, r1, r2, etc. If h1 is odd and h2 is even then φ(q1) ∤ h1m1d(i) and φ(q2) | h2m2d(i). Hence

eq. (3): b^d(i) ≢ 1 mod q1,

eq. (4): b^d(i) ≡ 1 mod q2.

For such a b ∈ En (ii)(b) holds. Similarly if h1 is even and h2 is odd. There are four equally frequent possibilities for the parities of h1 and h2. Thus for half of the b ∈ En satisfying (2), Wn(b) does still hold. Hence at most φ(n)/(2m1m2) integers b ∈ En do not satisfy Wn(b).
Finally let t1 be even, e(t1) < e(t2). Choose i and j so that t2 | d(i), t1 ∤ d(i), and t1/2^j | d(i) where j is minimal with the last property. Still using the above notation for a b ∈ En satisfying (2), we have φ(q2) | h2m2d(i) so that (4) holds. But b^d(i) ≡ 1 mod q1 holds only if φ(q1) | h1m1d(i), i.e., only if 2^j | h1. Unless 2^j | h1 we have (3), so that Wn(b) is not true for at most φ(n)/(2^j m1m2) integers b ∈ En. But 1 ≤ j.

Proof of Theorem 1. Let 4 < n be composite. If 1 ≤ b < n and (b, n) ≠ 1 then (ii)(a) and consequently Wn(b) holds. So that to prove (1) it suffices to show that at least 3φ(n)/4 of the b ∈ En satisfy Wn(b).
Assume that n = p^k, 1 < k. Here φ(n) = p^(k-1)(p - 1) and p ∤ (n - 1), so p^(k-1) ≤ φ(p^k)/(φ(p^k), n - 1) = m. By the argument at the beginning of the proof of Lemma 3 applied to a single prime, we have that at most φ(n)/m of the b ∈ En satisfy (2) and hence satisfy not Wn(b). If 9 < n then 5 ≤ p or 2 < k, hence 4 ≤ m and (1) follows. If n = 9, there are only two non-witnesses.
Next let n have at least two different prime divisors p1, p2, let qi = pi^ki be the maximal powers such that q1 | n, q2 | n, and assume that, say, φ(q1) ∤ (n - 1). Then 2 ≤ φ(q1)/(φ(q1), n - 1) = m1. If φ(q2) ∤ (n - 1) then 2 ≤ m2, and we have finished by the first statement of Lemma 3. Thus assume φ(q2) | (n - 1). If p2 ≠ 2 then t2 = φ(q2) is even and by the second assertion of Lemma 3 at most φ(n)/(2m1) ≤ φ(n)/4 (here m2 = 1) of the b ∈ En satisfy not Wn(b). If p2 = 2 then φ(n) ≤ (n - 1)/2 and at most φ(n)/m1 ≤ (n - 1)/4 of the 1 ≤ b < n satisfy (2).
Finally let n = p1^k1 p2^k2 ... pr^kr, 2 ≤ r, satisfy φ(pi^ki) | (n - 1), 1 ≤ i ≤ r. Then ki = 1, 1 ≤ i ≤ r. An easy calculation shows 3 ≤ r. Such numbers do exist, e.g., (5), (6), and satisfy (2) for all b ∈ En. In honor of their discoverer [1] we call them Carmichael numbers. Let p1, p2, p3 be three different prime divisors of n, (pi - 1) | (n - 1), 1 ≤ i ≤ 3.
Assume e(p1 - 1) = e(p2 - 1) = e(p3 - 1), and let i be such that for (n - 1)/2^i = d(i) we have (pj - 1) ∤ d(i), (pj - 1)/2 | d(i), 1 ≤ j ≤ 3. If b ≡ aj^rj mod pj, where aj is a primitive root mod pj, 1 ≤ j ≤ 3, then by the analysis in the proof of Lemma 3, 1 < (b^d(i) - 1, n) < n unless the rj are all even or all odd. For if, say, r1 is even and r2 is odd then p1 | (b^d(i) - 1) and p2 ∤ (b^d(i) - 1). Thus not Wn(b) holds for at most (2/8)φ(n) of the b ∈ En.
The similar analysis of the cases e(p3 - 1) = e(p2 - 1) < e(p1 - 1) and e(p3 - 1) < e(p2 - 1) ≤ e(p1 - 1), is left to the reader.
Let n = pqr be a Carmichael number with p ≡ q ≡ r ≡ -1 mod 4, p < q < r. Then n ≡ -1 mod 4. Consequently 4 ∤ (n - 1) and the only possible i in (ii)(b) is i = 1. Thus Wn(b) holds for all b ∉ En, i.e., for the

n(1 - (1 - 1/p)(1 - 1/q)(1 - 1/r)) ≤ n(1 - (1 - 1/p)^3) ≤ 3n/p

numbers (b, n) ≠ 1, and also for exactly (3/4)φ(n) of the b ∈ En.
Adding up:

c({b | Wn(b)}) ≤ (3/4 + 3/p)(n - 1).

If Carmichael numbers of the above type with arbitrarily large smallest prime factor do exist, then (1) is asymptotically best possible.
A systematic computer search for Carmichael numbers of this form was conducted by Oren by looking at triplets of primes 100 ≤ p, q, r ≤ 3000. Many instances were found.
For example, for

eq. (5): 652969351 = 271 * 811 * 2971

the fraction of witnesses is 0.7513. For

eq. (6): 2000436751 = 487 * 1531 * 2683

the fraction of witnesses is 0.7507. Thus the constant 3/4 in Theorem 1 seems to be best possible.

2. THE ALGORITHM

Given a number n we choose a k, determined by the desired reliability, and randomly pick 1 ≤ b1, ..., bk < n.
Let n - 1 = 2^l * m, where m is odd5. For b = bi compute b^m mod n. This can be done by about 1.5 log2 m multiplications, see [2, p. 398]. Every time a number n < d is obtained, find d1 = res(d, n) and continue. Thus only products of numbers a, b < n and divisions of d < n^2 by n are involved. Next calculate the residues mod n of b^(2m), ..., b^(2^l * m) = b^(n-1).
Since log2 n = l + log2 m, the evaluation of all the necessary powers of b will require 1.5 log2 n multiplications of a, b < n and 1.5 log2 n divisions of a d < n^2 by n, altogether 3 log2 n steps.
If the residue of b^(n-1) is not 1 then Wn(b) holds. If that residue is 1, find (res(b^(2^i * m), n) - 1, n) for 1 ≤ i ≤ l. Each g.c.d. computation can be performed by doing at most log2 n subtractions and divisions by 2. If any one of these g.c.d.'s is neither 1 nor n then Wn(b) is true. Thus for each bi we computationally determined whether Wn(bi) is true, and the total required number of steps is 3 log2 n + l * log2 n.
If, for any 1 ≤ i ≤ k, Wn(bi) is true then n is composite. If, for all 1 ≤ i ≤ k, Wn(bi) is not true, then the test asserts that n is prime.
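
To make the procedure concrete, here is a short sketch of the test in Python, following the description above with the g.c.d. computations included. This is my own illustration, not part of the original paper, and the names are my own invention:

import math
import random

def is_witness(b, n, l, m):
    # Check Rabin's condition Wn(b), where n - 1 = 2^l * m with m odd.
    x = pow(b, m, n)                   # residue of b^m mod n
    powers = [x]                       # residues of b^m, b^(2m), ..., b^(2^l * m)
    for _ in range(l):
        x = (x * x) % n
        powers.append(x)
    if powers[-1] != 1:                # (ii)(a): b^(n-1) not congruent to 1 mod n
        return True
    for y in powers[:-1]:              # (ii)(b): some (b^((n-1)/2^i) - 1, n) proper
        g = math.gcd(y - 1, n)
        if 1 < g < n:
            return True
    return False

def probably_prime(n, k=50):
    # Assert primality with error probability smaller than 1/2^(2k).
    if n < 4:
        return n in (2, 3)
    l, m = 0, n - 1
    while m % 2 == 0:                  # factor n - 1 = 2^l * m, m odd
        l += 1
        m //= 2
    for _ in range(k):
        b = random.randrange(1, n)
        if is_witness(b, n, l, m):
            return False               # n is certainly composite
    return True                        # n is asserted to be prime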

THEOREM 2. The above algorithm requires for n - 1 = 2^l * m, m odd, at most k * (2 * log2 n + l * log2 n) steps. If n is prime then the result is always correct. For each composite n the test may declare n to be prime, but the probability of such error is smaller than 1/2^(2k).

Proof. Only the last statement requires proof. An error will occur only when the n to be tested is composite and the b1, ..., bk chosen in this particular run of the algorithm are all nonwitnesses. Because of Theorem 1, the probability of randomly picking a nonwitness is smaller than 1/4. The probability of independently picking k nonwitnesses is smaller than 1/4^k = 1/2^(2k). Thus the probability that for any given number n, a particular run of the algorithm will produce an erroneous answer is smaller than 1/2^(2k).
Note that the theorem is formulated with respect to each n, no averaging over a range of n’s is necessary. For a given composite n the test may (rarely) give a wrong answer in one run and the correct answer in other runs. Thus there are no n’s on which the algorithm behaves poorly.

3. IMPLEMENTATION

In implementing the algorithm we incorporate some laborsaving steps. First we test the given n for divisibility by any prime p < N, where, say, N = 1000.6
If n passed this sieve then we apply the algorithm. Suppose that we pick k = 30. An examination of the proof of Theorem 1 shows that for “most” composite n the probability of finding a witness in one try is much bigger than 3/4. In fact, in actual runs when n was composite the witness was always the first or second b chosen. But even for n for which (1) is almost exact, the probability of error is 1/2^60 < 10^(-18). One expected error in a billion billion tests! This seems small when compared to the frequency of machine errors present in practical computations.
The main application of the primality test is to produce large numbers asserted to be primes with specified properties.
The overall strategy is to generate numbers of the specified form and test for primality until a number is asserted to be prime. Such searches lead to success rather quickly because in most cases the Prime Number Theorem ensures that the density of primes in the sequence mi searched is like 1/l if the numbers satisfy ln(mi) ∼ l. In other cases an appropriate density statement can be derived from stronger hitherto unproven assumptions such as the extended Riemann hypothesis (ERH). In every instance actually tried, the search did terminate within practical time.
As mentioned before, any primality test should incorporate trying small divisors. Thus in practice the full test is applied only to the numbers that survive this sieve.
If desired, it is often possible to ensure that n - 1 = 2 * m, where m is odd, so that the second part of checking Wn, that relating to (ii)(b), involves only b^((n-1)/2). But even if this feature is not incorporated, a heuristic argument shows that the expected number of g.c.d. computations arising from (ii)(b) is two. Note that the notion of expected number used here does not have a precise meaning as in Theorem 2 and is invoked only heuristically.
Suppose we want to find an “arbitrary” prime having 250 binary digits. Start by randomly generating a sequence m = α1α2 ... α250, αi ∈ {0, 1}, α1 = 1. View m as an integer written in binary notation, say it is even. Successively form the sums m + 1, m + 3 ,...; if desired omit those for which 4 | (n - 1). For each n = m + i test for primality. Continue until a number is asserted to be prime.
Because of the Prime Number Theorem, the search will not require testing a prohibitive number of sums m + i. The density of primes around n = 2^250 is 1/ln n > 1/250. Thus, one can incorporate a stopping rule: If within 500 tries no prime was found, drop the number m and restart with a new number m, [log2 m] = 250. In this way a number asserted to be prime n = m + i, i < 1000, will usually be found very rapidly.
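
As a sketch of this search in Python (again my own illustration, reusing the probably_prime function from the sketch in Section 2 and omitting the optional 4 | (n - 1) refinement):

def random_asserted_prime(bits=250, tries=500):
    # Search m + 1, m + 3, ... from a random even m with leading bit 1.
    while True:
        m = random.getrandbits(bits) | (1 << (bits - 1))
        m -= m % 2                          # make m even
        for i in range(1, 2 * tries, 2):    # odd offsets 1, 3, ..., 999
            if probably_prime(m + i):
                return m + i
        # stopping rule: no prime among `tries` candidates, restart with new m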
For certain applications it is important to have a prime n so that n - 1 has a large prime factor. This is achieved in a similar fashion. Suppose that we want n - 1 to have a prime factor p so that [log2 p] = 200. Find, as before, such a number asserted to be prime. Form the numbers 2ip + 1, i = 1, 3, ... . Test each number for primality. Again a number q = lp + 1 of the desired form is rapidly asserted as prime. If the arithmetical series with difference p does not rapidly find a number asserted to be prime (this has never happened), then p can be discarded and another prime p1 can be used for restarting the search.
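
The same idea in code, as a rough sketch with invented names (stepping through successive values of i rather than only the odd ones):

def asserted_prime_with_large_factor(p_bits=200):
    # Find a prime q = 2ip + 1, where p is an asserted prime of about p_bits bits.
    p = random_asserted_prime(p_bits)
    i = 1
    while True:
        q = 2 * i * p + 1
        if probably_prime(q):
            return q, p
        i += 1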
These are just examples of searches actually conducted. The variations pertaining to other forms of primes, or to the testing of given numbers are obvious.

4. SOME EXPERIMENTAL RESULTS

V. Pratt has programmed this algorithm and together we planned some experiments. The computations were done on a medium-sized computer7. Numbers with several hundred binary digits were generated and tested. Still, the computations, including searches, did not take more than minutes per number asserted to be prime.
The first test was verification of the known results concerning primality and compositeness of n = 2^p - 1, p ≤ 500, p prime. Within about 10 min all answers were produced without a single error. This was mainly done to test the program. For Mersenne primes our test is about 1/k as fast as the Lucas test, which applies only to Mersenne numbers.
Next the test was applied to generate some large numbers asserted to be primes along the lines explained in Section 3. Rather than load an arbitrary binary sequence m, the search started from powers of 2 and proceeded by decrements. The numbers

2^300 - 153, 2^400 - 593,

were asserted to be the largest primes below 2^300 and 2^400, respectively, since the other numbers in the intervals are all composite.
Finally the test was applied to discover what we believe are the hitherto largest known numbers asserted to be twin primes. In order to speed up the sieving process the search was started with a number m which is a product of many small primes. Then pairs of the form m * l + 821, m * l + 823, and m * l + 827, m * l + 829 were tried. The pairs 821, 823 and 827, 829 are themselves twin primes. Within half an hour

(∏_(pi < 300) pi) * 338 + 821, (∏_(pi < 300) pi) * 338 + 823,

were asserted to be twin primes. These numbers are of order of magnitude 10^123. Five subsequent hours of search failed to discover additional pairs. This is the only case where any search required more than a few minutes. Of course, no Prime Number Theorem density estimates apply to twin primes. The reader's guess as to the heuristic implications of this seeming gap in twin primes is as good as ours.

In conclusion, let us raise a question concerning possible theoretical applications of this primality test, in addition to the practical applications mentioned in the Introduction. What conjectures pertaining to the distribution of primes can one formulate, lend support to, or disprove by use of this test? For example, are there some refined estimates for the density of primes around 10^100, perhaps consequences of some strong number- or function-theoretic conjectures, which one could experimentally check using the primality test? The author, together with V. Pratt, has tested the density of primes of the form n = x^2 + 1 for n ∼ 2^50, 2^100, 2^150, 2^200, and found very good agreement with the Hardy-Littlewood conjecture on this density. These and other experimental results will be reported elsewhere.
Note added May 10, 1978: Donald Knuth (and, it seems, several others) observed that the primality test can dispense with the g.c.d. computation. His observation is based on the following corollary of Theorem 1.
Using the notation of Section 2, let n - 1 = 2^l * m, where m is odd. Let 0 ≤ x < n, denote x0 ≡ x^m mod n, xi ≡ (xi-1)^2 mod n, 1 ≤ i ≤ l. Thus xl ≡ x^(n-1) mod n.

PROPOSITION. If 4 < n is composite and odd then for at least (3/4)(n - 1) of the 1 ≤ x < n either xl ≠ 1 or for some 1 ≤ i ≤ l we have xi = 1 and xi-1 ≠ n - 1.

Proof. By Theorem 1, if n is composite then for at least (3/4)(n - 1) of the 1 ≤ x < n Wn(x) holds. If Wn(x) holds then xl ≠ 1, or xl = 1 and for some 0 ≤ j ≤ l we have 1 < (xj - 1, n) < n.
In the second case let i - 1, j ≤ i - 1 < l, be the last index such that xi-1 ≠ 1; thus xi = 1. We have xj ≡ 1 mod p, where p is a proper divisor of n. Hence xi-1 ≡ 1 mod p, since

xi-1 ≡ xj^(2^(i-1-j)) mod n.

Now xi-1 = n - 1 is impossible because it entails xi-1 - 1 = n - 2 being divisible by p, but p ≠ 2. Thus xi-1 ≠ n - 1 and xi = 1.
Note that if the condition in the statement of the proposition holds for some x then n must be composite. Namely, if xl ≠ 1 then Fermat's relation does not hold. And if xi-1 ≠ n - 1, xi ≡ (xi-1)^2 ≡ 1 mod n, then 1 has more than two square roots mod n so n is composite.
The test for primality runs as in Section 2 except that when xl = 1 we need not compute the (xi - 1, n). Simply calculate x0 ≡ x^m mod n, then square mod n repeatedly until xi-1 ≠ n - 1 and xi = 1, or until xl is reached. In any case we need never square xl-1.
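
Knuth's simplification is the form in which the test is usually implemented today: each witness check reduces to one modular exponentiation followed by repeated squaring, with no g.c.d. computations at all. A sketch in Python, again my own illustration:

def is_witness_no_gcd(x, n, l, m):
    # Knuth's g.c.d.-free witness check, where n - 1 = 2^l * m with m odd.
    xi = pow(x, m, n)              # x_0 = x^m mod n
    if xi == 1 or xi == n - 1:
        return False               # the squarings run on to x_l = 1 harmlessly
    for _ in range(l - 1):
        xi = (xi * xi) % n         # x_i = (x_(i-1))^2 mod n
        if xi == n - 1:
            return False           # next square is 1 via the trivial root -1
        if xi == 1:
            return True            # 1 reached without passing n - 1: composite
    return True                    # either x_l is not 1, or 1 has a nontrivial square root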

REFERENCES
1. R. D. CARMICHAEL, On composite numbers p which satisfy the Fermat congruence a^(p-1) ≡ 1 mod p, Amer. Math. Monthly 19 (1912), 22-27.
2. C. F. GAUSS, “Disquisitiones Arithmeticae” (A. A. Clarke, Transl.), Yale Univ. Press, New Haven, Conn./London, 1966.
3. D. E. KNUTH, “The Art of Computer Programming,” Vol. 2, Addison-Wesley, Reading, Mass., 1969.
4. G. L. MILLER, Riemann's hypothesis and a test for primality, J. Comput. and System Sci. 13 (1976), 300-317.
5. M. O. RABIN, Probabilistic algorithms, in “Algorithms and Complexity, New Directions and Recent Results” (J. F. Traub, Ed.), pp. 21-40, Academic Press, New York, 1976.
6. R. SOLOVAY AND V. STRASSEN, A fast Monte-Carlo test for primality, SIAM J. Comput. 6 (1977), 84-85.

  1. Originally published in Journal of Number Theory, 12, 128-138 (1980) [pdf version]. Some minor edits were made to the text to make it amenable to this publishing format. []
  2. And similarly, by b ∤ a we denote the fact that b does not divide a, i.e., res(a, b) ≠ 0. []
  3. There exists i such that []
  4. Euler's totient function counts the positive integers up to a given integer n that are relatively prime to n. []
  5. The choice of the letter "l" here is unfortunate, since it can be hard to distinguish from the number 1, depending on the font. I have left it as the letter l to be consistent with the paper. []
  6. Or, you could pick the biggest primorial that fits in your memory size and test for primality against that by calculating the g.c.d. []
  7. As computers have changed over the years, calling it a medium-sized computer is not very descriptive. []

Keccak Reference

April 23rd, 2019

In the interest of preserving the historical document which is the Keccak Reference Standard1, I have included a copy on this website. This is the pdf version of the document.

  1. Guido Bertoni, et al []