ThreadPoolExecutor with unbounded queue not creating new threads
This gotcha is covered in this blog post:
This thread pool construction will simply not work as expected. The reason is the logic inside ThreadPoolExecutor: new threads beyond the core pool size are only added when an attempt to offer a task to the queue fails. In our case, we use an unbounded LinkedBlockingQueue, where offering a task always succeeds. It effectively means that the pool will never grow above the core pool size toward the maximum pool size.
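To see the gotcha in action, here is a small sketch (the pool sizes and task counts are illustrative, not from the original post): with a core size of 2, a max size of 10, and an unbounded queue, the pool never grows past 2 even with 20 tasks submitted.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class UnboundedQueueGotcha {
    static int observedPoolSize;
    static int observedQueueSize;

    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 10, 60L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
        CountDownLatch release = new CountDownLatch(1);

        // Submit far more tasks than the core size; each blocks until released.
        for (int i = 0; i < 20; i++) {
            pool.execute(() -> {
                try {
                    release.await();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        Thread.sleep(200); // let the core threads start

        // offer() always succeeds on the unbounded queue, so no extra threads:
        observedPoolSize = pool.getPoolSize();       // 2, stuck at corePoolSize
        observedQueueSize = pool.getQueue().size();  // the other 18 tasks wait
        System.out.println("pool size = " + observedPoolSize
                + ", queued = " + observedQueueSize);

        release.countDown();
        pool.shutdown();
    }
}
```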
If you also need to decouple the minimum from the maximum pool size, you will have to do some extended coding. I am not aware of a solution in the Java libraries or Apache Commons. The solution is to create a coupled BlockingQueue that is aware of the TPE, will go out of its way to reject a task if it knows the TPE has no threads available, and will then manually requeue it. It is covered in more detail in the linked post. Ultimately your construction will look like:
public static ExecutorService newScalingThreadPool(int min, int max, long keepAliveTime) {
    ScalingQueue queue = new ScalingQueue();
    ThreadPoolExecutor executor =
        new ScalingThreadPoolExecutor(min, max, keepAliveTime, TimeUnit.MILLISECONDS, queue);
    executor.setRejectedExecutionHandler(new ForceQueuePolicy());
    queue.setThreadPoolExecutor(executor);
    return executor;
}
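For reference, here is a minimal sketch of what the ScalingQueue and ForceQueuePolicy pieces might look like. The implementations below are my own assumptions based on the description above, not the linked post's exact code: ScalingThreadPoolExecutor is replaced by a plain ThreadPoolExecutor for brevity, and the getActiveCount() check is inherently racy, so treat this as illustrative rather than production-ready.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.RejectedExecutionHandler;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ScalingPoolSketch {
    // Refuses the offer when every current thread is busy, so the
    // executor tries to add a thread (up to the maximum) instead.
    static class ScalingQueue<E> extends LinkedBlockingQueue<E> {
        private ThreadPoolExecutor executor;

        void setThreadPoolExecutor(ThreadPoolExecutor executor) {
            this.executor = executor;
        }

        @Override
        public boolean offer(E e) {
            int busy = executor.getActiveCount() + size();
            return busy < executor.getPoolSize() && super.offer(e);
        }
    }

    // Invoked once the pool is at max and the queue refused the task:
    // force the task back onto the queue.
    static class ForceQueuePolicy implements RejectedExecutionHandler {
        @Override
        public void rejectedExecution(Runnable r, ThreadPoolExecutor executor) {
            try {
                executor.getQueue().put(r);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new RejectedExecutionException(e);
            }
        }
    }

    public static ExecutorService newScalingThreadPool(int min, int max, long keepAliveTime) {
        ScalingQueue<Runnable> queue = new ScalingQueue<>();
        ThreadPoolExecutor executor =
                new ThreadPoolExecutor(min, max, keepAliveTime, TimeUnit.MILLISECONDS, queue);
        executor.setRejectedExecutionHandler(new ForceQueuePolicy());
        queue.setThreadPoolExecutor(executor);
        return executor;
    }
}
```

With min=1 and max=3, submitting three long-running tasks grows the pool to 3 threads before anything is queued, which is exactly the behavior the plain constructor fails to deliver.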
However, more simply, set corePoolSize to maxPoolSize and don't worry about this nonsense.
As mentioned by @djechlin, this is part of the (surprising to many) defined behavior of the ThreadPoolExecutor. I believe I've found a somewhat elegant solution around this behavior that I show in my answer here: How to get the ThreadPoolExecutor to increase threads to max before queueing?
Basically you extend LinkedBlockingQueue to have it always return false for queue.offer(...), which will add additional threads to the pool if necessary. If the pool is already at max threads and they are all busy, the RejectedExecutionHandler will be called. It is the handler which then does the put(...) into the queue.
See my code there.
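A minimal sketch of that approach (the names below are mine; the linked answer has a fuller version): the queue lies by always returning false from offer(...), so the executor keeps spawning threads up to the max, and the rejection handler forces the task onto the queue only once the pool is saturated.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class GrowBeforeQueue {
    public static ThreadPoolExecutor create(int core, int max) {
        LinkedBlockingQueue<Runnable> queue = new LinkedBlockingQueue<Runnable>() {
            @Override
            public boolean offer(Runnable r) {
                // Always refuse, so the executor spawns a thread if it can.
                return false;
            }
        };
        ThreadPoolExecutor pool =
                new ThreadPoolExecutor(core, max, 60L, TimeUnit.SECONDS, queue);
        pool.setRejectedExecutionHandler((r, executor) -> {
            try {
                // Pool is saturated: now actually queue the task.
                executor.getQueue().put(r);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new RejectedExecutionException(e);
            }
        });
        return pool;
    }
}
```

With core=1 and max=4, eight blocking tasks produce four running threads first and only then four queued tasks, instead of one thread and seven queued tasks.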
There is a workaround to this problem. Consider the following implementation:
int corePoolSize = 40;
int maximumPoolSize = 40;
ThreadPoolExecutor threadPoolExecutor = new ThreadPoolExecutor(corePoolSize, maximumPoolSize,
        60L, TimeUnit.SECONDS, new LinkedBlockingQueue<Runnable>());
threadPoolExecutor.allowCoreThreadTimeOut(true);
By setting allowCoreThreadTimeOut(true), the threads in the pool are allowed to terminate after the specified timeout (60 seconds in this example). With this solution, it is the corePoolSize constructor argument that determines the maximum pool size in practice: the pool grows up to corePoolSize and then starts adding jobs to the queue. The pool will likely never grow bigger than that, because it will not spawn new threads until the queue is full (which, given that the LinkedBlockingQueue has a capacity of Integer.MAX_VALUE, may never happen). Consequently, there is little point in setting maximumPoolSize to a larger value than corePoolSize.
Consideration: the thread pool has 0 idle threads after the timeout has expired, which means there will be some latency before threads are created again (normally, you would always have corePoolSize threads available).
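A short sketch of that trade-off (using a deliberately short 100 ms keep-alive instead of 60 seconds so the effect is quick to observe): after the timeout expires, the pool shrinks to zero threads, and the next burst of work pays the thread-creation latency again.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class CoreTimeoutDemo {
    static int poolSizeAfterTimeout;

    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                4, 4, 100L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>());
        pool.allowCoreThreadTimeOut(true);

        // A burst of trivial tasks spins up the core threads.
        for (int i = 0; i < 4; i++) {
            pool.execute(() -> { /* trivial task */ });
        }
        System.out.println("during burst: " + pool.getPoolSize());

        Thread.sleep(1000); // well past the 100 ms keep-alive
        poolSizeAfterTimeout = pool.getPoolSize(); // 0: all core threads timed out
        System.out.println("after timeout: " + poolSizeAfterTimeout);

        pool.shutdown();
    }
}
```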
More details can be found in the JavaDoc of ThreadPoolExecutor.