rocksdb out of memory

Are you seeing the memory usage grow quickly or over a longer period of time?

We have found and fixed a few RocksDB resource leaks that would cause memory leaks:

  • BloomFilters can leak (https://issues.apache.org/jira/browse/KAFKA-8323). This was fixed in 2.2.1 and will also be in the pending 2.3.0 release.
  • Custom RocksDB configs are prone to creating leaks (https://issues.apache.org/jira/browse/KAFKA-8324). This will be fixed in 2.3.0.

There are some indications that there may be others (https://issues.apache.org/jira/browse/KAFKA-8367), either in our usage of RocksDB or in RocksDB itself.

Oh, one other idea: if you're using iterators from the state stores, either in your processors or via Interactive Queries, you have to close them when you're done with them.
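To illustrate the pattern, here's a minimal sketch using try-with-resources. Note that `KeyValueIterator` and `scanAndClose` below are simplified stand-ins I made up for this example; the real Kafka Streams `org.apache.kafka.streams.state.KeyValueIterator` is likewise `AutoCloseable`, and closing it is what releases the underlying native RocksDB resources:

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class IteratorCloseDemo {

    // Hypothetical stand-in for the Kafka Streams KeyValueIterator,
    // which is also AutoCloseable and backs native RocksDB resources.
    static class KeyValueIterator<V> implements Iterator<V>, AutoCloseable {
        private final Iterator<V> inner;
        boolean closed = false;

        KeyValueIterator(Iterator<V> inner) { this.inner = inner; }

        @Override public boolean hasNext() { return inner.hasNext(); }
        @Override public V next() { return inner.next(); }

        // In a real store, this releases the native RocksDB iterator.
        @Override public void close() { closed = true; }
    }

    // Returns true if the iterator was closed after the scan.
    static boolean scanAndClose(List<String> data) {
        KeyValueIterator<String> it = new KeyValueIterator<>(data.iterator());
        // try-with-resources guarantees close() runs even if an
        // exception is thrown mid-scan, so nothing leaks.
        try (KeyValueIterator<String> iter = it) {
            while (iter.hasNext()) {
                iter.next();
            }
        }
        return it.closed;
    }

    public static void main(String[] args) {
        System.out.println(scanAndClose(Arrays.asList("a", "b", "c")));
    }
}
```

The same try-with-resources shape works for the real store iterators returned by `all()`, `range()`, etc.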

Beyond looking for leaks, I'm afraid I don't have much insight into diagnosing RocksDB's memory usage. You could also restrict the memtable size, but I don't think we set it very large by default anyway.
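For reference, restricting the memtable would look something like the sketch below, via a custom `RocksDBConfigSetter` (the 64 MB / 2-buffer numbers are just illustrative, not recommendations, and per KAFKA-8324 above, custom configs could themselves leak before 2.3.0). `setWriteBufferSize` and `setMaxWriteBufferNumber` are real RocksDB options:

```java
import java.util.Map;
import org.apache.kafka.streams.state.RocksDBConfigSetter;
import org.rocksdb.Options;

// Sketch: cap the memtable (write buffer) for each RocksDB store.
public class BoundedMemtableConfigSetter implements RocksDBConfigSetter {
    @Override
    public void setConfig(String storeName, Options options,
                          Map<String, Object> configs) {
        // Illustrative values only: 64 MB per memtable, at most 2 in memory.
        options.setWriteBufferSize(64 * 1024 * 1024);
        options.setMaxWriteBufferNumber(2);
    }
}
```

You'd register it in the Streams config under `rocksdb.config.setter` (`StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG`).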

Hope this helps,

-John


I found out what was causing this.

I thought that my Kafka Streams application would have only one RocksDB instance. But there is one instance per stream partition. So this configuration:

blockCacheSize=1350 * 1024 * 1024

does not necessarily mean that the RocksDB memory is restricted to 1350 MB. If the application has, e.g., 8 stream partitions assigned, it also has 8 block caches and can thus take up to 1350 MB * 8 = ~11 GB of memory.
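The arithmetic can be made explicit with a small helper (`worstCaseBytes` is just a name for this sketch), which multiplies the per-instance block cache size by the number of assigned partitions:

```java
public class CacheFootprint {
    // Per-instance block cache size, matching the configuration above.
    static final long BLOCK_CACHE_BYTES = 1350L * 1024 * 1024;

    // Rough worst-case block-cache footprint: one RocksDB instance
    // (hence one block cache) per assigned stream partition.
    static long worstCaseBytes(int assignedPartitions) {
        return BLOCK_CACHE_BYTES * assignedPartitions;
    }

    public static void main(String[] args) {
        // 8 partitions * 1350 MB = 10800 MB, i.e. roughly 11 GB.
        System.out.println(worstCaseBytes(8) / (1024.0 * 1024 * 1024) + " GiB");
    }
}
```

So capping total memory means dividing your budget by the expected number of partitions (and state stores) the instance may host.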