Nginx Setting server_names_hash_max_size and server_names_hash_bucket_size
Solution 1:
Just some technical details that I dug out of the source code:
- The general recommendation is to keep both values as small as possible.
- If nginx complains, increase `max_size` first, as long as it complains. If the number exceeds some big value (32769, for instance), increase `bucket_size` to a multiple of the default value on your platform, as long as it complains. If it no longer complains, decrease `max_size` back as long as it does not complain. Now you have the best setup for your set of server names (each set of server_names may need a different setup).
- Bigger `max_size` means more memory consumed (once per worker or per server; please comment if you know which).
- Bigger `bucket_size` means more CPU cycles (for every domain name lookup) and more transfers from main memory to cache.
- `max_size` is not directly related to the number of server_names: if the number of servers doubles, you may need to increase `max_size` 10 times or even more to avoid collisions. If you cannot avoid them, you have to increase `bucket_size`.
- `bucket_size` is said to be rounded up to the next power of two; judging from the source code, it should be enough to make it a multiple of the default value, which keeps transfers to cache optimal.
- An average domain name should fit into 32 bytes, even with the hash array overhead. If you increase `bucket_size` to 512 bytes, it will accommodate 16 domain names with a colliding hash key. This is not something you want: when a collision happens, the lookup searches linearly. You want as few collisions as possible.
- If `max_size` is less than 10000 and `bucket_size` is small, you can run into a long loading time, because nginx will try to find the optimal hash size in a loop.
- If `max_size` is bigger than 10000, "only" 1000 loops will be performed before it complains.
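As a concrete sketch of where these directives go (the values below are illustrative assumptions, not recommendations for your set of server names — tune them per the procedure above):

```nginx
http {
    # Raise max_size first; it is cheap compared to a bigger bucket_size.
    server_names_hash_max_size    2048;

    # Only increase bucket_size if raising max_size alone does not silence
    # the warning; keep it a multiple of your platform default (32/64/128).
    server_names_hash_bucket_size 128;

    # ... the rest of your configuration ...
}
```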
Solution 2:
The list of server
names that nginx serves is stored in a hash table for fast lookup. As you increase the number of entries, you have to increase the size of the hash table and/or the number of hash buckets in the table.
Given the nature of your setup, I can't think of any way for you to easily reduce the number of server
names you're storing in the table. I will suggest, though, that you not "restart" nginx, but rather simply have it reload its configuration. For instance:
service nginx reload
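Before reloading, it is also worth validating the configuration, so a typo does not leave you with a running nginx that can no longer pick up changes:

```shell
# Test the configuration files for syntax and consistency errors first
sudo nginx -t

# If the test passes, reload applies the new config without dropping connections
sudo nginx -s reload
```

`nginx -s reload` and `service nginx reload` are equivalent here; both signal the master process to re-read the configuration and rebuild the hash table.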
Solution 3:
Increase the `server_names_hash_bucket_size` directive in your nginx.conf.
I had it at 64 and changed it to 128.
Problem solved.
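For reference, this is what that change looks like in the `http` block (128 worked for me; the default on your platform may be 32, 64, or 128 depending on the CPU cache line size):

```nginx
http {
    server_names_hash_bucket_size 128;  # was 64
}
```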
Solution 4:
@Michael Hampton is absolutely right in his answer. The hash table is constructed and compiled during restart or reload, and afterwards it runs very fast. I suspect this hash table could grow a lot more without degrading performance noticeably. I'd suggest using a size that is a power of two, such as 4096, due to the nature of the C code.
Solution 5:
I'm not 100% sure it's the same in your case, but I was getting the same warning because I was calling proxy_set_header for X-Forwarded-Proto twice:
proxy_set_header X-Forwarded-Proto ...;
This was happening because I was including proxy_params, which contains this line among others:
proxy_set_header X-Forwarded-Proto $scheme;
Removing that line from my site's configuration made the warning go away.
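To illustrate how the duplicate creeps in, here is a minimal (hypothetical) site configuration with the redundant line marked — the file path, upstream port, and location are assumptions for the sketch:

```nginx
# /etc/nginx/sites-available/example (hypothetical)
server {
    listen 80;
    server_name example.com;

    location / {
        include proxy_params;                        # already sets X-Forwarded-Proto
        proxy_set_header X-Forwarded-Proto $scheme;  # duplicate -- remove this line
        proxy_pass http://127.0.0.1:8080;
    }
}
```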