Redis cache vs using memory directly

Currently we are more attracted to serverless architecture, where each request can go to a different container. In this case Redis can play a very important role.

We can't use a simple in-memory cache in a serverless setup because we can't be sure a request will be served by the same container where the cache is stored.

In this case, we have to use Redis: it stores the cache at a remote location, so we can still access it even when the container changes in a serverless architecture.
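As a rough sketch, a serverless handler backed by a shared Redis cache could look something like this (assuming the ioredis client; `loadUserFromDb` and the key names are made up for illustration):

```typescript
import Redis from "ioredis";

// One client per container; every container talks to the same Redis instance.
const redis = new Redis(); // defaults to localhost:6379; point it at your managed Redis in production

async function getUser(id: string): Promise<unknown> {
  const cacheKey = `user:${id}`;

  // Any container can read this entry, regardless of which container wrote it.
  const cached = await redis.get(cacheKey);
  if (cached !== null) {
    return JSON.parse(cached);
  }

  const user = await loadUserFromDb(id);                      // made-up DB call
  await redis.set(cacheKey, JSON.stringify(user), "EX", 300); // expire after 5 minutes
  return user;
}

// Stand-in for the real database lookup.
async function loadUserFromDb(id: string): Promise<{ id: string; name: string }> {
  return { id, name: "example" };
}
```

With a plain in-process cache, a second request routed to a different container would miss; here the cached entry is visible to all of them.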


Redis is a remote data structure server. It is certainly slower than just storing the data in local memory (since it involves socket roundtrips to fetch/store the data). However, it also brings some interesting properties:

  • Redis can be accessed by all the processes of your applications, possibly running on several nodes (something local memory cannot achieve).

  • Redis memory storage is quite efficient, and done in a separate process. If the application runs on a platform whose memory is garbage collected (Node.js, Java, etc.), it allows handling a much bigger memory cache/store. In practice, very large heaps do not perform well with garbage-collected languages.

  • Redis can persist the data on disk if needed.

  • Redis is a bit more than a simple cache: it provides various data structures, various item eviction policies, blocking queues, pub/sub, atomicity, Lua scripting, and more (a short sketch follows this list).

  • Redis can replicate its activity with a master/slave mechanism in order to implement high availability.
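To give a taste of those extras, here is a small sketch (again with ioredis; the key names are made up) of an atomic counter and a blocking queue backed by a Redis list, two things a local in-memory cache cannot give you across processes:

```typescript
import Redis from "ioredis";

const redis = new Redis();

// Atomic increment: correct even when many processes/containers update the same counter.
async function countPageView(page: string): Promise<number> {
  return redis.incr(`views:${page}`);
}

// Producer side of a simple queue backed by a Redis list.
async function enqueueJob(job: object): Promise<void> {
  await redis.lpush("jobs", JSON.stringify(job));
}

// Consumer side: BRPOP blocks (here up to 5 seconds) until a job arrives.
async function dequeueJob(): Promise<object | null> {
  const result = await redis.brpop("jobs", 5);
  return result ? JSON.parse(result[1]) : null;
}
```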

Basically, if you need your application to scale on several nodes sharing the same data, then something like Redis (or any other remote key/value store) will be required.