Recommended # of rounds for bcrypt

Short Version

Use the number of rounds that makes computing a single hash take at least 250 ms on your hardware.

Long Version

When BCrypt was first published, in 1999, the authors listed their implementation's default cost factors:

  • normal user: 6
  • super user: 8

A bcrypt cost of 6 means 64 rounds (2^6 = 64).
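
The cost-to-rounds relationship is just a power of two; a minimal sketch (not tied to any particular bcrypt library):

```python
# bcrypt's cost parameter is a base-2 logarithm: key setup runs
# 2^cost iterations of its expensive expansion loop.
def bcrypt_rounds(cost):
    return 2 ** cost

print(bcrypt_rounds(6))  # 64  (the 1999 "normal user" default)
print(bcrypt_rounds(8))  # 256 (the 1999 "super user" default)
```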

They also note:

"Of course, whatever cost people choose should be reevaluated from time to time."

  • At the time of deployment in 1976, crypt could hash fewer than 4 passwords per second (250 ms per password).
  • In 1977, on a VAX-11/780, crypt could be evaluated about 3.6 times per second (277 ms per password).

That gives you a flavor of the kind of delays that the original implementers were considering when they wrote it:

  • ~250 ms for normal users
  • ~1 second for super users.

But, of course, the longer you can stand, the better. Every BCrypt implementation I've seen uses 10 as the default cost, and my implementation used that as well. I believe it is time for me to increase the default cost to 12.

We've decided we want to target no less than 250 ms per hash.

My desktop PC is an Intel Core i7-2700K CPU @ 3.50 GHz. I originally benchmarked a BCrypt implementation on 1/23/2014:

| Cost | Iterations        |    Duration |
|------|-------------------|-------------|
|  8   |    256 iterations |     38.2 ms | <-- minimum allowed by this implementation
|  9   |    512 iterations |     74.8 ms |
| 10   |  1,024 iterations |    152.4 ms | <-- current default (BCRYPT_COST=10)
| 11   |  2,048 iterations |    296.6 ms |
| 12   |  4,096 iterations |    594.3 ms |
| 13   |  8,192 iterations |  1,169.5 ms |
| 14   | 16,384 iterations |  2,338.8 ms |
| 15   | 32,768 iterations |  4,656.0 ms |
| 16   | 65,536 iterations |  9,302.2 ms |
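
Because each +1 in cost doubles the work, a single measured row lets you extrapolate the rest of the table. A quick sketch, using the cost-8 measurement above as the baseline:

```python
# Extrapolate bcrypt timings from one data point: each +1 in cost
# doubles the duration. Baseline: the measured cost-8 row (38.2 ms).
def estimated_ms(target_cost, base_cost=8, base_ms=38.2):
    return base_ms * 2 ** (target_cost - base_cost)

print(round(estimated_ms(12), 1))  # 611.2 -- close to the measured 594.3 ms
```

The extrapolation drifts from the measured numbers because real runs include per-call overhead, but it is good enough to pick a starting cost.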


Future Proofing

Rather than having a fixed constant, it should be a fixed minimum.

Rather than having your password hash function be:

String HashPassword(String password)
{
   return BCrypt.HashPassword(password, BCRYPT_DEFAULT_COST);
}

it should be something like:

String HashPassword(String password)
{  
   /*
     Rather than using a fixed default cost, run a micro-benchmark
     to figure out how fast the CPU is.
     Use that to make sure that it takes **at least** 250ms to calculate
     the hash
   */
   Int32 costFactor = this.CalculateIdealCost();
   //Never use a cost lower than the default hard-coded cost
   if (costFactor < BCRYPT_DEFAULT_COST) 
      costFactor = BCRYPT_DEFAULT_COST;

   return BCrypt.HashPassword(password, costFactor);
}

Int32 CalculateIdealCost()
{
    //Benchmark using a cost of 5 (the second-lowest allowed)
    Int32 cost = 5;

    var sw = new Stopwatch();
    sw.Start();
    BCrypt.HashPassword("microbenchmark", cost);
    sw.Stop();

    Double durationMS = sw.Elapsed.TotalMilliseconds;

    //Increasing cost by 1 would double the run time.
    //Keep increasing cost until the estimated duration is over 250 ms
    while (durationMS < 250)
    {
       cost += 1;
       durationMS *= 2;
    }

    return cost;
}

And ideally this would be part of everyone's BCrypt library, so rather than relying on users of the library to periodically increase the cost, the cost periodically increases itself.
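
A natural companion to a self-increasing cost is upgrading stored hashes whenever a user logs in. A hedged sketch in Python, using the standard library's scrypt as a stand-in for bcrypt (its n parameter is likewise a power-of-two work factor); the record layout and function names are illustrative, not from any particular library:

```python
import hashlib
import os

# Create a password record at a given cost. scrypt's n = 2^cost plays
# the role of bcrypt's iteration count here.
def make_record(password, cost):
    salt = os.urandom(16)
    digest = hashlib.scrypt(password, salt=salt, n=2 ** cost, r=8, p=1)
    return {"cost": cost, "salt": salt, "digest": digest}

# Verify against the stored cost; on success, transparently re-hash at
# the current (higher) cost so old records catch up over time.
def verify_and_upgrade(record, password, current_cost):
    digest = hashlib.scrypt(password, salt=record["salt"],
                            n=2 ** record["cost"], r=8, p=1)
    if digest != record["digest"]:
        return False, record
    if record["cost"] < current_cost:
        record = make_record(password, current_cost)
    return True, record
```

The upgrade can only happen at login, because that is the only moment the server holds the plaintext password.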


I think the answer to all of your questions is already contained in Thomas Pornin's answer. You linked to it, so you presumably know about it, but I suggest that you read it again.

The basic principles are: don't choose a number of rounds; instead, choose the amount of time password verification will take on your server, then calculate the number of rounds based upon that. You want verification to take as long as you can stand.

For some examples of concrete numbers, see Thomas Pornin's answer. He suggests a reasonable goal would be for password verification/hashing to take 241 milliseconds per password. (Note: Thomas initially wrote "8 milliseconds", which is wrong -- this is the figure for a patience of one day instead of one month.) That still lets your server verify 4 passwords per second (more if you can do it in parallel). Thomas estimates that, if this is your goal, about 20,000 rounds is in the right ballpark.

However, the optimal number of rounds changes with your processor. Ideally, you would benchmark how long hashing takes on your processor and choose the number accordingly. This doesn't take long: whip up a quick script and work out how many rounds are needed so that password hashing takes about 240 milliseconds on your server (or longer, if you can bear it).
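
A concrete sketch of such a benchmark, using the standard library's PBKDF2 rather than bcrypt since the principle is the same (the function name and the 240 ms target are illustrative):

```python
import hashlib
import time

# Time a probe run of PBKDF2-HMAC-SHA256, then scale the iteration count
# so one hash takes roughly target_ms on this machine. PBKDF2's cost is
# linear in the round count, so a simple proportion works; for bcrypt you
# would instead step the cost parameter in powers of two.
def rounds_for_target(target_ms=240.0, probe_rounds=20_000):
    start = time.perf_counter()
    hashlib.pbkdf2_hmac("sha256", b"microbenchmark", b"static-salt-16by",
                        probe_rounds)
    elapsed_ms = (time.perf_counter() - start) * 1000
    # Never go below the probe count, even on a very slow machine.
    return max(probe_rounds, int(probe_rounds * target_ms / elapsed_ms))
```

Run this once at deployment (or at service start-up), not per request, and store the chosen round count alongside each hash.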


Stronger Key Derivation via Sequential Memory-Hard Functions is a very good paper on the topic of key stretching. On page 14 it compares various hashing algorithms by the estimated hardware cost of breaking a hash, which is a useful way of thinking about these things. (As a side note, ChromeOS uses scrypt if a TPM isn't available.)

The idea is that you want these password hashes to stay unbroken for as long as possible. With Moore's law, that is an exponentially fast-moving target. scrypt uses a tunable amount of memory and CPU, and that tuning can be made heavier as a function of time: each time the client logs in, you can re-hash the password at the stronger setting. For PBKDF2 this could look like rounds = 2^(current_year - 2000), or something like that.
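
A hedged sketch of that year-based schedule with stdlib PBKDF2 (the 2^(year - 2000) formula is the text's own illustration, not a standard, and the function names are hypothetical):

```python
import hashlib
from datetime import date

# Work factor that doubles every year: rounds = 2^(current_year - 2000).
def yearly_rounds(year=None):
    if year is None:
        year = date.today().year
    return 2 ** (year - 2000)

def hash_password(password, salt):
    return hashlib.pbkdf2_hmac("sha256", password, salt, yearly_rounds())

print(yearly_rounds(2014))  # 16384
```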

It's important to note that you can't just offload this processing onto the client and expect your protocol to stay secure. Every client-side hashing authentication protocol I know of requires the server to make an identical calculation in order to verify the credentials (NTLM, NTLMv2, SRP, WPA-PSK...).
