How to cache an IDisposable object

You are correct that MemoryCache does not call Dispose; however, you can tell it to call Dispose when it evicts an item.

using System;
using System.IO;
using System.Runtime.Caching;

static void Main(string[] args)
{
    var policy = new CacheItemPolicy
    {
        // Fires for every removal: expiration, eviction, Remove(), Set().
        RemovedCallback = RemovedCallback,
        SlidingExpiration = TimeSpan.FromMinutes(5)
    };
    Stream myStream = GetMyStream();
    MemoryCache.Default.Add("myStream", myStream, policy);
}

private static void RemovedCallback(CacheEntryRemovedArguments arg)
{
    // CacheEntryRemovedReason.Removed means an explicit Remove() or an
    // overwriting Set(); in that case ownership passes back to the caller,
    // so only dispose for expiration/eviction reasons.
    if (arg.RemovedReason != CacheEntryRemovedReason.Removed)
    {
        var item = arg.CacheItem.Value as IDisposable;
        if (item != null)
            item.Dispose();
    }
}

The above example creates a Stream object, and if it goes unused for 5 minutes it will have Dispose() called on it. If the stream is removed by a Remove() call, or overwritten by a Set() call, it will not have Dispose() called on it.
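For instance, after an explicit Remove() the caller owns the entry again and is responsible for disposing it; a small sketch, reusing the "myStream" entry from above:

// Remove() fires the callback with CacheEntryRemovedReason.Removed,
// so the callback does NOT dispose the stream; the caller must.
var removed = (Stream)MemoryCache.Default.Remove("myStream");
removed?.Dispose();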


The first thing to consider is whether it's a good idea to cache such an item at all. Many disposable objects hold onto relatively limited resources, or resources that will time out in some way. These do not cache well, and it's best simply not to cache them.

On the other hand, some disposable objects don't really need to be disposed, but they share a base class with many that do, or they implement an interface that must allow for disposal at a particular point (IEnumerator<T>), so you may know it's actually fine not to dispose them at all. In such a case you can cheerfully ignore the issue, but be careful of changes in implementation in later versions, unless Dispose() is explicitly documented as safe to ignore.

Yet another possibility is to cache something that allows quicker construction of an object, which is the approach I'd recommend with Stream: don't cache Stream objects at all, but rather cache the bytes that could be read from them. When calling code wants to read the stream, first construct a new MemoryStream with that byte array as the buffer. If the stream can be accessed from outside the assembly, wrap that stream in another stream that enforces a read-only policy (if it is only accessible inside your own code you can skip that as an optimisation, by just being careful never to write to the stream). Then return that stream. The calling code can treat it like a stream obtained any other way, including calling Dispose() when it's done, but you can still hand out those streams faster because of the caching.
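Here's a minimal sketch of that idea. The StreamCache name and the GetBytes helper are hypothetical, and it uses the non-writable MemoryStream constructor to enforce the read-only policy instead of a wrapper stream:

using System.Collections.Concurrent;
using System.IO;

static class StreamCache
{
    // Cache the immutable bytes, not a Stream.
    private static readonly ConcurrentDictionary<string, byte[]> cache =
        new ConcurrentDictionary<string, byte[]>();

    public static Stream Open(string key)
    {
        byte[] buffer = cache.GetOrAdd(key, GetBytes);

        // Every caller gets its own cursor over the shared buffer.
        // writable: false makes writes throw, protecting the cached bytes.
        return new MemoryStream(buffer, writable: false);
    }

    private static byte[] GetBytes(string key)
    {
        // Hypothetical placeholder: read the underlying source once per key.
        return File.ReadAllBytes(key);
    }
}

Each caller can dispose its MemoryStream whenever it likes; only the byte array is shared, and a byte array needs no disposal.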


I wrote a class called Scoped<T> to solve this problem. You can store scoped objects in a cache, and when retrieving them create a lifetime from the scope. The scope implements thread-safe reference counting and will keep the scoped item alive (not disposed) until the cache and all lifetimes are disposed.

This is what it looks like in use, plugged into a cache:

int capacity = 666;
var lru = new ConcurrentLru<int, Scoped<SomeDisposable>>(capacity);
var valueFactory = new SomeDisposableValueFactory();

using (var lifetime = lru.GetOrAdd(1, valueFactory.Create).CreateLifetime())
{
    // lifetime.Value is guaranteed to be alive until the lifetime is disposed
}

class SomeDisposableValueFactory
{
   public Scoped<SomeDisposable> Create(int key)
   {
      return new Scoped<SomeDisposable>(new SomeDisposable(key));
   }
}

class SomeDisposable : IDisposable
{
   public SomeDisposable(int key) {}
   public void Dispose() {}
}

The code is here on GitHub: https://github.com/bitfaster/BitFaster.Caching

Install-Package BitFaster.Caching

I use this for caching pooled MemoryStream instances, to prevent the situation you described - a consumer of the cache is a relatively long-running operation, and a burst of web requests causes the LRU to fully cycle and evict the in-use item. The scope keeps it alive until the last user is finished.
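To see the reference counting at work, here's a sketch using the same Scoped<SomeDisposable> as above; the eviction is simulated by calling Dispose() on the scope directly, standing in for the cache evicting the entry:

var scoped = new Scoped<SomeDisposable>(new SomeDisposable(1));

using (var lifetime = scoped.CreateLifetime())
{
    // Simulate the cache evicting the entry while it is still in use.
    scoped.Dispose();

    // The reference count is still non-zero, so lifetime.Value remains
    // valid and the underlying SomeDisposable is not yet disposed.
    var stillAlive = lifetime.Value;
}
// The last lifetime is gone; SomeDisposable.Dispose() runs now.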