How many iterations of Rabin-Miller should I use for cryptographic safe primes?

Each iteration of Rabin-Miller lets a composite number slip through with probability at most 1/4, so every additional round cuts the chance of a false positive by a factor of at least 4.

So after 64 iterations, there is only 1 chance in 2^128 that the number is composite.
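
To make "iteration" concrete, here is a minimal sketch of the test in Python (my own illustration, not taken from any particular library). Each pass through the witness loop is one Rabin-Miller iteration, and a composite number survives any single pass with probability at most 1/4:

    import random

    def miller_rabin(n, rounds=64):
        """Return False if n is certainly composite, True if n survived
        `rounds` independent witness tests (so it is very probably prime)."""
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13):   # dispose of small cases quickly
            if n % p == 0:
                return n == p
        d, s = n - 1, 0                  # write n - 1 as d * 2^s with d odd
        while d % 2 == 0:
            d, s = d // 2, s + 1
        for _ in range(rounds):          # one loop body = one "iteration"
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue                 # this witness tells us nothing
            for _ in range(s - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False             # a is a witness that n is composite
        return True                      # a composite slips through with prob <= 4**-rounds

The 1/4 figure is a worst-case bound per round; for randomly chosen candidates the real error rate is far smaller, which is what the Damgard-Landrock-Pomerance result quoted further down this page quantifies.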

Assuming you are using these for a public key algorithm (e.g. RSA), and assuming you are combining that with a symmetric algorithm using (say) 128-bit keys, an adversary can guess your key with that probability.

The bottom line is to choose the number of iterations so that this failure probability is in the same ballpark as the security level of the other parameters you are choosing for your algorithm.

[update, to elaborate]

The answer depends entirely on what algorithms you are going to use the numbers for, and what the best known attacks are against those algorithms.

For example, according to Wikipedia:

As of 2003 RSA Security claims that 1024-bit RSA keys are equivalent in strength to 80-bit symmetric keys, 2048-bit RSA keys to 112-bit symmetric keys and 3072-bit RSA keys to 128-bit symmetric keys.

So, if you are planning to use these primes to generate (say) a 1024-bit RSA key, then there is no reason to run more than 40 iterations or so of Rabin-Miller. Why? Because by the time you hit a failure, an attacker could crack one of your keys anyway.

Of course, there is no reason not to perform more iterations, time permitting. There just isn't much point to doing so.

On the other hand, if you are generating 2048-bit RSA keys, then 56 (or so) iterations of Rabin-Miller is more appropriate.
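
If you want that rule of thumb as code, a sketch along these lines works (the size-to-strength table is just the Wikipedia figures quoted above, and rounds = strength/2 because each Rabin-Miller round buys a factor of 4, i.e. 2 bits, of confidence):

    # Rough symmetric-equivalent strengths for common RSA modulus sizes
    # (the 2003-era figures quoted above; ballpark values, not gospel).
    RSA_STRENGTH_BITS = {1024: 80, 2048: 112, 3072: 128}

    def rabin_miller_rounds(modulus_bits):
        """Enough rounds that the 4**-t bound matches the rest of the system."""
        strength = RSA_STRENGTH_BITS[modulus_bits]
        return (strength + 1) // 2

    for bits in (1024, 2048, 3072):
        print(bits, "->", rabin_miller_rounds(bits), "rounds")
    # 1024 -> 40 rounds, 2048 -> 56 rounds, 3072 -> 64 rounds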

Cryptography is typically built as a composition of primitives, like prime generation, RSA, SHA-2, and AES. If you want to make one of those primitives 2^900 times stronger than the others, you can, but it is a little like putting a 10-foot-steel vault door on a log cabin.

There is no fixed answer to your question. It depends on the strength of the other pieces going into your cryptographic system.

All that said, 2^-128 is a ludicrously tiny probability, so I would probably just use 64 iterations :-).


From the libgcrypt source (cipher/primegen.c, around line 1295):

    /* We use 64 Rabin-Miller rounds which is better and thus
       sufficient.  We do not have a Lucas test implementaion thus
       we can't do it in the X9.31 preferred way of running a few
       Rabin-Miller followed by one Lucas test. */


The paper Average case error estimates for the strong probable prime test by Damgard-Landrock-Pomerance points out that, if you randomly select a k-bit odd number n and apply t independent Rabin-Miller tests in succession, the probability that n is composite satisfies much stronger bounds.

In fact, for 3 <= t <= k/9 and k >= 21, the error probability p(k,t) is bounded by

    p(k,t) <= k^(3/2) * 2^t * t^(-1/2) * 4^(2 - sqrt(t*k))

For a k=1024 bit prime, t=6 iterations give you an error rate less than 10^(-40).
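
You can sanity-check that figure by plugging k = 1024 and t = 6 into the bound as written above; a quick sketch:

    import math

    def dlp_error_bound(k, t):
        """Damgard-Landrock-Pomerance average-case bound on the probability
        that a random k-bit odd number surviving t Rabin-Miller rounds is
        actually composite (valid for 3 <= t <= k/9 and k >= 21)."""
        return k ** 1.5 * 2 ** t * t ** -0.5 * 4 ** (2 - math.sqrt(t * k))

    print(dlp_error_bound(1024, 6))   # about 9e-41, i.e. below 10^(-40)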


Let's assume that you select a prime p by selecting random values until you hit one for which Miller-Rabin says: that one looks like a prime. You use n rounds at most for the Miller-Rabin test. (For a so-called "safe prime", things are not changed, except that you run two nested tests.)
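
To make the "two nested tests" concrete: for a safe prime p = 2q + 1 you test both q and p. Here is a rough sketch, assuming the miller_rabin function from the sketch near the top of this page (the bit manipulation is just one illustrative way to get a full-length odd candidate):

    import random
    # assumes miller_rabin(n, rounds) from the sketch near the top of this page

    def random_safe_prime(bits=1024, rounds=40):
        """Draw odd (bits-1)-bit candidates q until both q and p = 2q + 1
        pass the probabilistic test, then return p."""
        while True:
            q = random.getrandbits(bits - 1) | (1 << (bits - 2)) | 1
            if miller_rabin(q, rounds) and miller_rabin(2 * q + 1, rounds):
                return 2 * q + 1

A real implementation would also sieve both q and 2q + 1 by small primes before spending any Miller-Rabin rounds, exactly as described below.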

The probability that a random 1024-bit integer is prime is about 1/900. Now, you do not want to do anything stupid, so you generate only odd values (an even 1024-bit integer is guaranteed non-prime), and, more generally, you run the Miller-Rabin test only if the value is not "obviously" non-prime, i.e. divisible by a small prime. So you end up trying about 300 values with Miller-Rabin, on average, before hitting a prime. When the value is non-prime, Miller-Rabin will detect it with probability 3/4 at each round, so the number of Miller-Rabin rounds you will run on average for a single non-prime value is 1+(1/4)+(1/16)+... = 4/3. For the 300 values, this means about 400 rounds of Miller-Rabin, regardless of what you choose for n.

So if you select n to be, e.g., 40, then the cost implied by n is less than 10% of the total computational cost. The random prime selection process is dominated by the test on non-primes, which are not impacted by the value of n you choose. I talked here about 1024-bit integers; for bigger numbers the choice of n is even less important since primes become sparser as size increases (for 2048-bit integers, the "10%" above become "5%").
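
Redoing that arithmetic explicitly (the 1/900 density, and the implied factor of 3 from discarding even and small-prime-divisible candidates, are the same rough figures used above):

    # Back-of-the-envelope cost model for generating one 1024-bit prime.
    density = 1 / 900        # rough probability that a random 1024-bit integer is prime
    sieve_factor = 3         # evens + small-prime multiples are discarded before Miller-Rabin
    candidates_tested = (1 / density) / sieve_factor           # ~300 values reach Miller-Rabin
    rounds_per_composite = 1 + 1/4 + 1/16 + 1/64 + 1/256       # geometric series -> 4/3
    wasted_rounds = candidates_tested * rounds_per_composite   # ~400, independent of n

    for n in (40, 56, 64):
        total = wasted_rounds + n    # only the final, prime candidate costs n rounds
        print(f"n={n}: {n / total:.0%} of all rounds are spent on the prime you keep")
    # n=40: ~9%, n=56: ~12%, n=64: ~14% -- the bulk of the work is rejecting composites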

Hence you can choose n=40 and be happy with it (or at least know that reducing n will not buy you much anyway). On the other hand, using an n greater than 40 is meaningless, because this would get you to probabilities lower than the risk of a simple miscomputation. Computers are hardware; they can have random failures. For instance, a primality test function could return "true" for a non-prime value because a cosmic ray (a high-energy particle hurtling through the Universe at high speed) happens to hit just the right transistor at the right time, flipping the return value from 0 ("false") to 1 ("true"). This is very unlikely -- but no less likely than probability 2^(-80). See this stackoverflow answer for a few more details. The bottom line is that regardless of how you make sure that an integer is prime, you still have an unavoidable probabilistic element, and 40 rounds of Miller-Rabin already give you the best that you can hope for.

To sum up, use 40 rounds.