Java Executor with throttling/throughput control
No more than, say, 100 tasks can be processed in a second; if more tasks get submitted, they should get queued and executed later.
You should look into Executors.newFixedThreadPool(int limit). This will allow you to limit the number of threads that run simultaneously. If you submit more tasks than there are threads, they will be queued and executed later.
ExecutorService threadPool = Executors.newFixedThreadPool(100);
Future<?> result1 = threadPool.submit(runnable1);
Future<?> result2 = threadPool.submit(runnable2);
Future<SomeClass> result3 = threadPool.submit(callable1);
...
The snippet above shows how you would work with an ExecutorService that allows no more than 100 threads to run simultaneously.
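For example, here is a small, self-contained sketch (task bodies and counts are made up for illustration): submitting 200 tasks to a pool of 100 threads runs at most 100 at once, while the remaining tasks wait in the pool's internal queue.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class FixedPoolDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService threadPool = Executors.newFixedThreadPool(100);

        // submit 200 tasks: 100 start immediately, the other 100 are queued
        for (int i = 0; i < 200; i++) {
            final int id = i;
            threadPool.submit(() -> {
                System.out.println("task " + id + " on " + Thread.currentThread().getName());
                try {
                    Thread.sleep(500); // simulate some work
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        threadPool.shutdown();
        threadPool.awaitTermination(1, TimeUnit.MINUTES);
    }
}

Note that this limits concurrency (how many tasks run at once), not throughput (how many tasks start per second).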
Update:
After going over the comments, here is what I have come up with (admittedly crude): how about manually keeping track of the tasks that are to be executed? Store them first in an ArrayList and then submit them to the Executor based on how many tasks have already been executed in the last second.
So, let's say 200 tasks have been submitted into our maintained ArrayList. We can iterate and add 100 to the Executor; when a second passes, we can add a few more based on how many have completed in the Executor, and so on. A rough sketch of this idea follows below.
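A minimal sketch of that idea, assuming a hypothetical ThrottledSubmitter class: incoming tasks go into a queue, and a single scheduler thread drains at most maxPerSecond of them into the worker pool once per second (class name, pool size and rate are made up for illustration).

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ThrottledSubmitter {
    private final Queue<Runnable> pending = new ConcurrentLinkedQueue<>();
    private final ExecutorService workers = Executors.newFixedThreadPool(10);
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private final int maxPerSecond;

    public ThrottledSubmitter(int maxPerSecond) {
        this.maxPerSecond = maxPerSecond;
        // once per second, move up to maxPerSecond queued tasks into the pool
        scheduler.scheduleAtFixedRate(this::drain, 1, 1, TimeUnit.SECONDS);
    }

    public void submit(Runnable task) {
        pending.add(task); // queued, not executed yet
    }

    private void drain() {
        for (int i = 0; i < maxPerSecond; i++) {
            Runnable task = pending.poll();
            if (task == null) {
                break; // nothing left to submit this second
            }
            workers.execute(task);
        }
    }
}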
The Java Executor doesn't offer such a limitation, only a limit on the number of threads, which is not what you are looking for.
In general, the Executor is the wrong place to apply such a limit anyway; it should happen at the moment the thread actually calls the outside server. You can do this, for example, with a limiting Semaphore that threads wait on before they submit their requests.
Calling Thread:
public void run() {
    // ...
    requestLimiter.acquire(); // blocks until a permit for this period is available
    connection.send();
    // ...
}
While at the same time you schedule a (single) secondary thread to periodically (e.g. every 60 seconds) release the acquired permits:
public void run() {
    // ...
    requestLimiter.drainPermits(); // make sure no more than max are released by draining the Semaphore empty
    requestLimiter.release(MAX_NUM_REQUESTS);
    // ...
}
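Here is a sketch that wires the two snippets together; MAX_NUM_REQUESTS, the pool size, the refill interval and the println stand-in for connection.send() are illustrative values, not part of the original answer.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class SemaphoreThrottleDemo {
    private static final int MAX_NUM_REQUESTS = 100;
    private static final Semaphore requestLimiter = new Semaphore(MAX_NUM_REQUESTS);

    public static void main(String[] args) {
        // worker pool that sends the actual requests
        ExecutorService workers = Executors.newFixedThreadPool(10);

        // single scheduler thread that refills the permits periodically
        ScheduledExecutorService refiller = Executors.newSingleThreadScheduledExecutor();
        refiller.scheduleAtFixedRate(() -> {
            requestLimiter.drainPermits();           // never exceed the maximum
            requestLimiter.release(MAX_NUM_REQUESTS);
        }, 1, 1, TimeUnit.SECONDS);

        for (int i = 0; i < 500; i++) {
            final int id = i;
            workers.execute(() -> {
                try {
                    requestLimiter.acquire();        // blocks once the budget for this period is used up
                    System.out.println("sending request " + id); // stand-in for connection.send()
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
    }
}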
Take a look at Guava's RateLimiter:
A rate limiter. Conceptually, a rate limiter distributes permits at a configurable rate. Each acquire() blocks if necessary until a permit is available, and then takes it. Once acquired, permits need not be released. Rate limiters are often used to restrict the rate at which some physical or logical resource is accessed. This is in contrast to Semaphore which restricts the number of concurrent accesses instead of the rate (note though that concurrency and rate are closely related, e.g. see Little's Law).
It's thread-safe, but still @Beta. Might be worth a try anyway.
You would have to wrap each call to the Executor with the rate limiter. For a cleaner solution you could create some kind of wrapper for the ExecutorService, as sketched below.
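An illustrative wrapper along those lines (the class name RateLimitedExecutor is made up): every task handed to execute() first waits for a RateLimiter permit, then is delegated to the underlying Executor.

import java.util.concurrent.Executor;
import com.google.common.util.concurrent.RateLimiter;

public class RateLimitedExecutor implements Executor {
    private final Executor delegate;
    private final RateLimiter rateLimiter;

    public RateLimitedExecutor(Executor delegate, double permitsPerSecond) {
        this.delegate = delegate;
        this.rateLimiter = RateLimiter.create(permitsPerSecond);
    }

    @Override
    public void execute(Runnable task) {
        rateLimiter.acquire(); // blocks until the rate budget allows another task
        delegate.execute(task);
    }
}

Callers then submit to the wrapper exactly as to a plain Executor, e.g. new RateLimitedExecutor(Executors.newFixedThreadPool(10), 100.0).execute(task); wrapping the full ExecutorService interface works the same way but requires delegating its other methods as well.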
From the javadoc:
final RateLimiter rateLimiter = RateLimiter.create(2.0); // rate is "2 permits per second"

void submitTasks(List<Runnable> tasks, Executor executor) {
    for (Runnable task : tasks) {
        rateLimiter.acquire(); // may wait
        executor.execute(task);
    }
}