Why does HikariCP recommend a fixed-size pool for better performance?

I suggest you read this page and watch the attached video. The Oracle Performance Group demonstrates how an application with a pool of 96 connections easily handles 10,000 front-end users and 20,000 transactions per second.

PostgreSQL recommends a formula of:

connections = ((core_count * 2) + effective_spindle_count)

Where core_count is the number of CPU cores and effective_spindle_count is the number of disks in the RAID array. For many servers, this formula works out to a maximum pool size of only 10-20 connections.
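
As a rough illustration (not from the linked page), here is that formula in Java. The spindle count below is a hypothetical value; substitute your own hardware figures:

    public class PoolSizeEstimate {
        public static void main(String[] args) {
            int coreCount = Runtime.getRuntime().availableProcessors(); // CPU cores on this host
            int effectiveSpindleCount = 2;                              // assumed: 2-disk RAID
            int poolSize = (coreCount * 2) + effectiveSpindleCount;
            System.out.println("Suggested maximum pool size: " + poolSize);
        }
    }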

The odds are that even with 100 connections your database is heavily oversaturated. Do you have 50 CPU cores? If your drives are spinning platters rather than SSDs, each head can only be in one place at a time, and unless the entire dataset is in memory there is no way to service that many requests (100-200) at once.

UPDATE: Directly answering your question about fixed-pool sizing: you will likely get the best performance from your application with a pool whose maximum connection count is tuned right at the "knee", the peak throughput your DB can handle. That is likely a small number. If you have "spike demand", as many applications do, trying to spin up new connections to grow the pool at the instant of the spike is counter-productive (it creates even more load on the server). A small, constant pool will give you predictable performance.
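
In HikariCP terms, a fixed pool just means setting maximumPoolSize and minimumIdle to the same value. A minimal sketch (the JDBC URL and credentials are placeholders):

    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;

    public class FixedPoolExample {
        public static void main(String[] args) {
            HikariConfig config = new HikariConfig();
            config.setJdbcUrl("jdbc:postgresql://localhost:5432/mydb"); // placeholder URL
            config.setUsername("app");                                   // placeholder credentials
            config.setPassword("secret");
            config.setMaximumPoolSize(10); // tuned to the DB's "knee": small and constant
            config.setMinimumIdle(10);     // same as max, so the pool never shrinks or grows
            try (HikariDataSource ds = new HikariDataSource(config)) {
                // hand ds to your JDBC / ORM layer here
            }
        }
    }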


It really depends on the application's behavior, that is, whether it runs long-running or short-running transactions. Sometimes I feel it's better to keep some idle connections in the pool if we want to respond to clients in a synchronous manner.
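
If you do want the pool to keep a few warm connections but still grow for bursts, HikariCP supports that through minimumIdle; a sketch with placeholder connection details:

    import com.zaxxer.hikari.HikariConfig;
    import com.zaxxer.hikari.HikariDataSource;

    public class ElasticPoolExample {
        public static void main(String[] args) {
            HikariConfig config = new HikariConfig();
            config.setJdbcUrl("jdbc:postgresql://localhost:5432/mydb"); // placeholder URL
            config.setUsername("app");                                   // placeholder credentials
            config.setPassword("secret");
            config.setMaximumPoolSize(20); // ceiling for spike demand
            config.setMinimumIdle(5);      // keep a few warm connections for synchronous callers
            config.setIdleTimeout(60_000); // retire surplus idle connections after 60 seconds
            HikariDataSource ds = new HikariDataSource(config);
            // use ds, then ds.close() on shutdown
        }
    }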