How many iterations of Rabin-Miller should be used to generate cryptographic safe primes?
I just answered the same question on Stack Overflow, so I will duplicate my answer here (I don't think duplication is the way to go; maybe the question should be migrated):
Let's assume that you select a prime p by selecting random values until you hit one for which Miller-Rabin says: that one looks like a prime. You use at most n rounds for the Miller-Rabin test. (For a so-called "safe prime", things are not changed, except that you run two nested tests: one on p and one on (p-1)/2.)
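For concreteness, here is a minimal Python sketch of that selection loop, including the nested variant for safe primes. The function names, the small-prime list, and the candidate-generation details are illustrative choices of mine, not part of this answer; a production system would use a vetted library and a proper sieve.

```python
import random

SMALL_PRIMES = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)

def miller_rabin(n, rounds):
    """Textbook Miller-Rabin with random bases (illustrative, not constant-time)."""
    if n < 2:
        return False
    for p in SMALL_PRIMES:              # cheap trial division first
        if n % p == 0:
            return n == p
    d, s = n - 1, 0                     # write n - 1 = d * 2^s with d odd
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)  # random base in [2, n - 2]
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                # a is a witness: n is composite
    return True                         # no witness found in `rounds` tries

def random_probable_prime(bits, n_rounds=40):
    """Draw random odd `bits`-bit candidates until Miller-Rabin accepts one."""
    while True:
        cand = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if miller_rabin(cand, n_rounds):
            return cand

def random_safe_prime(bits, n_rounds=40):
    """Safe prime p: both p and (p - 1) / 2 pass the (nested) test."""
    while True:
        p = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if miller_rabin((p - 1) // 2, n_rounds) and miller_rabin(p, n_rounds):
            return p
```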
The probability that a random 1024-bit integer is prime is about 1/900. Now, you do not want to do anything stupid, so you generate only odd values (an even 1024-bit integer is guaranteed non-prime) and, more generally, you run the Miller-Rabin test only on values which are not "obviously" non-prime, i.e. not divisible by a small prime. So you end up trying about 300 values with Miller-Rabin before hitting a prime (on average). When the value is non-prime, Miller-Rabin will detect it with probability at least 3/4 at each round, so the number of Miller-Rabin rounds you will run on average for a single non-prime value is at most 1+(1/4)+(1/16)+... = 4/3. For the 300 values, this means about 400 rounds of Miller-Rabin, regardless of what you choose for n.
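The arithmetic above can be checked in a couple of lines (the 300-candidate figure is this answer's own average for 1024-bit values):

```python
# Expected Miller-Rabin rounds spent on one random composite, assuming each
# round independently detects compositeness with probability 3/4: this is
# the geometric series 1 + 1/4 + 1/16 + ... = 1 / (1 - 1/4) = 4/3.
rounds_per_composite = 1 / (1 - 1 / 4)

# About 300 composite candidates survive the small-prime sieve before a
# 1024-bit prime is found, so the work spent on composites is roughly:
print(300 * rounds_per_composite)  # ~400 rounds, independent of n
```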
(Edit: actually, for a randomly chosen non-prime, Miller-Rabin works better than that (see @Brett's answer), and the average number of rounds will be closer to 300 than 400. This does not substantially change my argument.)
So if you select n to be, e.g., 40, then the cost implied by n is less than 10% of the total computational cost. The random prime selection process is dominated by the tests on non-primes, which are not impacted by the value of n you choose. I talked here about 1024-bit integers; for bigger numbers the choice of n is even less important, since primes become sparser as size increases (for 2048-bit integers, the "10%" above becomes "5%").
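As a quick sanity check on that 10% figure: the n = 40 rounds are only ever run in full on the one candidate that turns out to be prime, on top of the ~400 rounds spent rejecting composites:

```python
n = 40                   # rounds run on the final (prime) candidate
composite_rounds = 400   # estimated above for 1024-bit candidates
print(n / (n + composite_rounds))  # ~0.09: under 10% of the total work
```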
Hence you can choose n=40 and be happy with it (or at least know that reducing n will not buy you much anyway). On the other hand, using an n greater than 40 is meaningless, because this would get you to probabilities lower than the risk of a simple miscomputation. Computers are hardware; they can have random failures. For instance, a primality test function could return "true" for a non-prime value because a cosmic ray (a high-energy particle hurtling through the Universe at high speed) happens to hit just the right transistor at the right time, flipping the return value from 0 ("false") to 1 ("true"). This is very unlikely -- but no less likely than probability 2^-80. See this stackoverflow answer for a few more details. The bottom line is that regardless of how you make sure that an integer is prime, you still have an unavoidable probabilistic element, and 40 rounds of Miller-Rabin already give you the best that you can hope for.
To sum up, use 40 rounds.
(t = 40) iterations of M-R with randomized bases is far more than necessary to reach p(1024, t) < 2^-80. No more than 1/4 of odd composite values are SPRPs (strong probable primes) to a random base, but the resulting (1/4)^t bound is extremely pessimistic. The papers DLP and RBJ (the latter also proving the DLP conjecture: p(k, t) < (1/4)^t for k >= 2, t >= 1) combine results proving that (t = 3) iterations are sufficient to yield p(1024, t) < 2^-80. See also: Handbook of Applied Cryptography, ch. 4, table 4.4. This same (t = 3) value is used for 1024-bit values in the OpenSSL library.
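For the curious, the DLP (Damgård-Landrock-Pomerance) bound can be evaluated directly. The Python snippet below is my transcription of the DLP estimate p(k, t) <= k^(3/2) * 2^t * t^(-1/2) * 4^(2 - sqrt(t*k)); I'm omitting the exact parameter-range conditions from the paper, so treat this as a sketch rather than a substitute for the cited results:

```python
from math import sqrt, log2

def dlp_bound_log2(k, t):
    """log2 of the DLP upper bound on p(k, t), the probability that a
    random k-bit odd composite survives t Miller-Rabin rounds."""
    return 1.5 * log2(k) + t - 0.5 * log2(t) + 2 * (2 - sqrt(t * k))

print(dlp_bound_log2(1024, 3))  # ~ -89.7, comfortably below the -80 target
```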
Update 2019-02-02: I've decided to add a link to my own code based on these papers. While the mrtab utility is the key piece, the LUTs and assertion-testing programs it's built on are all part of the package. I'd always appreciate any feedback on the mrtab project.