When to cache Tasks?
The relevant difference is what happens when the method is called multiple times before the cache has been populated.
If you only cache the result, as is done in the first snippet, then if two (or three, or fifty) calls to the method are made before any of them has finished, they'll all start the actual operation to generate the results (in this case, performing a network request). So you now have two, three, fifty, or whatever network requests that you're making, all of which are going to put their results in the cache when they finish.
When you cache the task, rather than the results of the operation, if a second, third, or fiftieth call is made to this method after someone else starts their request, but before any of those requests have completed, they're all going to be given the same task representing that one network operation (or whatever long-running operation it is). That means that you're only ever sending one network request, or only ever performing one expensive computation, rather than duplicating that work when you have multiple requests for the same result.
Also, consider the case where one request gets sent, and a second call is made to the method when the first is 95% done. In the first snippet, since there is no cached result yet, the second call will start from scratch and do 100% of the work. In the second snippet, the second invocation is handed a Task that's already 95% done, so it gets its result much sooner than it would with the first approach, in addition to the whole system doing a lot less work.
In both cases, if you don't ever call the method when there is no cache, and another method has already started doing the work, then there is no meaningful difference between the two approaches.
You can create a fairly simple reproducible example to demonstrate this behavior. Here we have a toy long-running operation, and methods that either cache the result or cache the Task it returns. When we fire off 5 of the operations all at once, you'll see that the result caching performs the long-running operation 5 times, and the task caching performs it just once.
using System;
using System.Collections.Concurrent;
using System.Linq;
using System.Threading.Tasks;

public class AsynchronousCachingSample
{
    private static async Task<string> SomeLongRunningOperation()
    {
        Console.WriteLine("I'm starting a long running operation");
        await Task.Delay(1000);
        return "Result";
    }

    private static ConcurrentDictionary<string, string> resultCache =
        new ConcurrentDictionary<string, string>();

    private static async Task<string> CacheResult(string key)
    {
        string output;
        if (!resultCache.TryGetValue(key, out output))
        {
            output = await SomeLongRunningOperation();
            resultCache.TryAdd(key, output);
        }
        return output;
    }

    private static ConcurrentDictionary<string, Task<string>> taskCache =
        new ConcurrentDictionary<string, Task<string>>();

    private static Task<string> CacheTask(string key)
    {
        Task<string> output;
        if (!taskCache.TryGetValue(key, out output))
        {
            output = SomeLongRunningOperation();
            taskCache.TryAdd(key, output);
        }
        return output;
    }

    public static async Task Test()
    {
        int repetitions = 5;

        Console.WriteLine("Using result caching:");
        await Task.WhenAll(Enumerable.Repeat(false, repetitions)
            .Select(_ => CacheResult("Foo")));

        Console.WriteLine("Using task caching:");
        await Task.WhenAll(Enumerable.Repeat(false, repetitions)
            .Select(_ => CacheTask("Foo")));
    }
}
It's worth noting that the specific implementation of the second approach you've provided has a few notable properties. It's possible for the method to be called twice in such a way that both calls check the cache, find nothing, and each start the long-running operation before either has cached the Task representing it. So while it'd be much harder to hit than with the first snippet, it is still possible for the long-running operation to run twice. Preventing that would require more robust locking around checking the cache, starting a new operation, and populating the cache. If performing the long-running operation multiple times on rare occasions would just waste a bit of time, then the current code is probably fine, but if it's important that the operation never be performed more than once (say, because it causes side effects) then the current code isn't complete.
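One common way to get that stronger guarantee, sketched here as an illustration rather than taken from the answer above, is to wrap the operation in a Lazy&lt;Task&lt;T&gt;&gt; and let ConcurrentDictionary.GetOrAdd pick a single winner; the losing Lazy instances are discarded before their factories ever run (the class and method names below are my own):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

public static class SingleFlightCache
{
    private static int started; // counts how many times the operation actually runs

    private static async Task<string> SomeLongRunningOperation()
    {
        Interlocked.Increment(ref started);
        await Task.Delay(100);
        return "Result";
    }

    private static readonly ConcurrentDictionary<string, Lazy<Task<string>>> cache =
        new ConcurrentDictionary<string, Lazy<Task<string>>>();

    public static Task<string> GetAsync(string key)
    {
        // GetOrAdd may invoke the value factory more than once under contention,
        // but only the Lazy that wins publication ever has .Value read, so the
        // long-running operation itself starts at most once per key.
        var lazy = cache.GetOrAdd(key, _ => new Lazy<Task<string>>(
            () => SomeLongRunningOperation(),
            LazyThreadSafetyMode.ExecutionAndPublication));
        return lazy.Value;
    }

    public static int StartedCount => started;
}
```

With this shape, any number of concurrent callers for the same key share one operation, without taking a lock on the cache-hit path.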
I have trouble understanding how this actually helps more than just storing the results.
When a method is marked with the async modifier, the compiler will automatically transform the underlying method into a state machine, as Stephen demonstrates in previous slides. This means that calling the first method will always trigger the creation of a Task.
In the second example, notice Stephen removed the async modifier, and the signature of the method is now public static Task<string> GetContentsAsync(string url). This means the responsibility of creating the Task is on the implementer of the method and not the compiler. By caching Task<string>, the only "penalty" of creating the Task (actually two tasks, as ContinueWith will also create one) is paid when it's unavailable in the cache, and not for each method call.

In this particular example, IMO, the point wasn't to re-use the network operation that is already ongoing when the first task executes; it was simply to reduce the number of allocated Task objects.
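To make the allocation difference concrete, here is a sketch of the two method shapes being compared (the download logic is a placeholder, not Stephen's actual code): the async version wraps its result in a fresh Task on every call, while the non-async version hands back the already-allocated cached Task on a hit.

```csharp
using System.Collections.Concurrent;
using System.Threading.Tasks;

public static class ContentsCache
{
    private static readonly ConcurrentDictionary<string, Task<string>> cache =
        new ConcurrentDictionary<string, Task<string>>();

    // Placeholder for the real network call.
    private static Task<string> DownloadAsync(string url) =>
        Task.FromResult("<html>stub for " + url + "</html>");

    // async version: the compiler-generated state machine wraps the result
    // in a *new* Task<string> on every single call, cache hit or not.
    public static async Task<string> GetContentsAsyncAllocating(string url)
    {
        Task<string> cached;
        if (cache.TryGetValue(url, out cached))
            return await cached;
        var task = DownloadAsync(url);
        cache.TryAdd(url, task);
        return await task;
    }

    // non-async version: on a cache hit the *same* Task<string> object is
    // returned, so repeat callers cost no new allocation.
    public static Task<string> GetContentsAsync(string url)
    {
        Task<string> cached;
        if (cache.TryGetValue(url, out cached))
            return cached;
        var task = DownloadAsync(url);
        cache.TryAdd(url, task);
        return task;
    }
}
```

Calling the non-async version twice for the same URL returns the very same Task instance, which is exactly the allocation the async version cannot avoid.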
How do we know when to cache tasks?
Think of caching a Task as if it were anything else, and this question can be viewed from a broader perspective: when should I cache something? The answer is broad, but I think the most common use case is when you have an expensive operation on the hot path of your application. Should you always be caching tasks? Definitely not. The overhead of the state-machine allocation is usually negligible. If needed, profile your app, and then (and only then) consider whether caching would be of use in your particular case.
Let's assume you are talking to a remote service which takes the name of a city and returns its zip codes. The service is remote and under load, so we are talking to a method with an asynchronous signature:
interface IZipCodeService
{
    Task<ICollection<ZipCode>> GetZipCodesAsync(string cityName);
}
Since the service takes a while for every request, we would like to implement a local cache for it. Naturally the cache will also have an asynchronous signature, maybe even implementing the same interface (see the Facade pattern). A synchronous signature would break the best practice of never calling asynchronous code synchronously with .Wait(), .Result or similar. At least the cache should leave that up to the caller.
So let's do a first iteration on this:
class ZipCodeCache : IZipCodeService
{
    private readonly IZipCodeService realService;
    private readonly ConcurrentDictionary<string, ICollection<ZipCode>> zipCache =
        new ConcurrentDictionary<string, ICollection<ZipCode>>();

    public ZipCodeCache(IZipCodeService realService)
    {
        this.realService = realService;
    }

    public Task<ICollection<ZipCode>> GetZipCodesAsync(string cityName)
    {
        ICollection<ZipCode> zipCodes;
        if (zipCache.TryGetValue(cityName, out zipCodes))
        {
            // Already in cache. Returning cached value
            return Task.FromResult(zipCodes);
        }
        return this.realService.GetZipCodesAsync(cityName).ContinueWith((task) =>
        {
            this.zipCache.TryAdd(cityName, task.Result);
            return task.Result;
        });
    }
}
As you can see, the cache does not cache Task objects but the returned ZipCode collections. By doing so it has to construct a Task for every cache hit by calling Task.FromResult, and I think that is exactly what Stephen Toub tries to avoid. A Task object comes with overhead, especially for the garbage collector: you are not only creating garbage, but each Task also brings bookkeeping (and, for faulted tasks, finalization work) that the runtime has to handle.
The only option to work around this is by caching the whole Task object:
class ZipCodeCache2 : IZipCodeService
{
    private readonly IZipCodeService realService;
    private readonly ConcurrentDictionary<string, Task<ICollection<ZipCode>>> zipCache =
        new ConcurrentDictionary<string, Task<ICollection<ZipCode>>>();

    public ZipCodeCache2(IZipCodeService realService)
    {
        this.realService = realService;
    }

    public Task<ICollection<ZipCode>> GetZipCodesAsync(string cityName)
    {
        Task<ICollection<ZipCode>> zipCodes;
        if (zipCache.TryGetValue(cityName, out zipCodes))
        {
            return zipCodes;
        }
        return this.realService.GetZipCodesAsync(cityName).ContinueWith((task) =>
        {
            this.zipCache.TryAdd(cityName, task);
            return task.Result;
        });
    }
}
As you can see, the creation of Tasks via Task.FromResult is gone. Furthermore, it is not possible to avoid this Task creation when using the async/await keywords, because internally they will create a Task to return no matter what your code has cached. Something like:
public async Task<ICollection<ZipCode>> GetZipCodesAsync(string cityName)
{
    Task<ICollection<ZipCode>> zipCodes;
    if (zipCache.TryGetValue(cityName, out zipCodes))
    {
        return zipCodes;
    }
    // ...
}
will not compile, because inside an async method each return statement must supply an ICollection<ZipCode>, not the cached Task<ICollection<ZipCode>> itself.
Don't get confused by Stephen Toub's ContinueWith flags TaskContinuationOptions.OnlyOnRanToCompletion and TaskContinuationOptions.ExecuteSynchronously. They are (only) another performance optimization and are not related to the main objective of caching Tasks.
As with every cache, you should consider some mechanism which cleans the cache from time to time and removes entries which are too old or invalid. You could also implement a policy which limits the cache to n entries and tries to keep the items requested most often by introducing some counting.
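As a sketch of the time-based cleanup idea (the wrapper type and expiry interval below are my own invention, not part of the answer), each cached task can be stored together with a timestamp, and stale entries dropped on access:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

// Hypothetical wrapper: pairs each cached task with the time it was stored.
class ExpiringTaskCache<TKey, TValue>
{
    private readonly ConcurrentDictionary<TKey, (Task<TValue> Task, DateTime Stored)> cache =
        new ConcurrentDictionary<TKey, (Task<TValue>, DateTime)>();
    private readonly TimeSpan maxAge;

    public ExpiringTaskCache(TimeSpan maxAge) => this.maxAge = maxAge;

    public Task<TValue> GetOrAdd(TKey key, Func<TKey, Task<TValue>> factory)
    {
        if (cache.TryGetValue(key, out var entry))
        {
            // Entry is still fresh: hand back the cached task.
            if (DateTime.UtcNow - entry.Stored < maxAge)
                return entry.Task;

            // Entry expired: drop it so a new task is started below.
            cache.TryRemove(key, out _);
        }

        var task = factory(key);
        cache.TryAdd(key, (task, DateTime.UtcNow));
        return task;
    }
}
```

Within the expiry window, repeat calls for the same key get the same cached task; after it, the next caller starts a fresh operation.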
I did some benchmarking with and without caching of Tasks. You can find the code here: http://pastebin.com/SEr2838A. The results look like this on my machine (with .NET 4.6):
Caching ZipCodes: 00:00:04.6653104
Gen0: 3560 Gen1: 0 Gen2: 0
Caching Tasks: 00:00:03.9452951
Gen0: 1017 Gen1: 0 Gen2: 0