Elasticsearch 7.x circuit breaker - data too large - troubleshoot

The reason is that the heap of the node is pretty full, and being caught by the circuit breaker is nice because it prevents the nodes from running into OOMs, going stale and crashing.

Elasticsearch 6.2.0 introduced this circuit breaker and 7.0.0 improved it with circuit-breaking based on real memory usage. With the version upgrade from ES-5.4 to ES-7.2, you are running straight into this improvement.

I see 3 solutions so far:

  1. Increase heap size if possible
  2. Reduce the size of your bulk requests if feasible (a small sketch of this follows the list)
  3. Scale out your cluster, as the shards are consuming a lot of heap, leaving nothing to process the large request. More nodes help to distribute the shards and requests across the cluster, which leads to a lower average heap usage on all nodes.
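
A minimal sketch of option 2, assuming the official elasticsearch-py client and a placeholder index name (`my-index`): the bulk helper splits one huge payload into many smaller bulk requests, so a single request is less likely to push a node over the breaker limit.

```python
from elasticsearch import Elasticsearch, helpers

es = Elasticsearch(["http://localhost:9200"])  # placeholder address

def docs():
    # Placeholder generator; yield your real documents here.
    for i in range(1_000_000):
        yield {"_index": "my-index", "_id": i, "value": i}

# chunk_size / max_chunk_bytes cap the size of each individual bulk request
# instead of sending everything to the cluster in one request.
helpers.bulk(es, docs(), chunk_size=500, max_chunk_bytes=10 * 1024 * 1024)
```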

As an UGLY workaround (it does not solve the issue) one could increase the breaker limit after reading and understanding the implications:
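
For example (a sketch only; the node address is a placeholder and 98% is an arbitrary value), raising `indices.breaker.total.limit` through the cluster settings API buys some headroom but leaves the heap pressure untouched:

```python
import requests

ES = "http://localhost:9200"  # placeholder; point this at your cluster

# Transiently raise the parent circuit breaker limit (the default is 95% of
# the heap when the real-memory breaker is active). This only buys headroom,
# it does not reduce heap usage.
resp = requests.put(
    f"{ES}/_cluster/settings",
    json={"transient": {"indices.breaker.total.limit": "98%"}},
)
resp.raise_for_status()
print(resp.json())
```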


So I've spent some time researching how exactly ES implemented the new circuit breaker mechanism, trying to understand why we were suddenly getting those errors. Here is what I found:

  1. The circuit breaker mechanism has existed since the very early versions.
  2. We started experiencing issues with it when moving from version 5.4 to 7.2.
  3. In version 7.0 ES introduced a new way of circuit-breaking: breaking based on real memory usage (why and how: https://www.elastic.co/blog/improving-node-resiliency-with-the-real-memory-circuit-breaker, code: https://github.com/elastic/elasticsearch/pull/31767).
  4. In our internal upgrade to ES 7.2, we changed the JDK from 8 to 11.
  5. Also as part of our internal upgrade we changed the default jvm.options configuration, switching from the officially recommended CMS GC to G1GC, which at that point had only fairly recently gained Elasticsearch support (the sketch after this list shows how to check which GC your nodes are actually running).
  6. Considering all of the above, I found this bug, fixed in version 7.4, regarding the use of the circuit breaker together with G1GC: https://github.com/elastic/elasticsearch/pull/46169
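
To confirm the situation on your own cluster, you can check which garbage collector each node runs and how close the parent breaker is to its limit. A sketch, assuming an unauthenticated node on a placeholder address (exact response fields may vary slightly between versions):

```python
import requests

ES = "http://localhost:9200"  # placeholder; point this at your cluster

# Which JDK version and garbage collector is each node actually running?
jvm = requests.get(f"{ES}/_nodes/jvm").json()
for node in jvm["nodes"].values():
    print(node["name"], node["jvm"]["version"], node["jvm"]["gc_collectors"])

# How close is the parent circuit breaker to tripping on each node?
breakers = requests.get(f"{ES}/_nodes/stats/breaker").json()
for node in breakers["nodes"].values():
    parent = node["breakers"]["parent"]
    print(node["name"], parent["estimated_size"], "/", parent["limit_size"],
          "tripped:", parent["tripped"])
```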

How to fix:

  1. Change the configuration back to CMS GC.
  2. Or, take the fix. The fix for the bug is just a configuration change that can easily be applied and tested in your deployment (see the sketch below).
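
My understanding of the 7.4 fix is that it boils down to jvm.options entries along the following lines when G1GC is enabled: reserve more heap headroom and start concurrent GC cycles earlier, so a collection happens before the real-memory breaker trips. Treat the exact values as an assumption and double-check them against the PR for your version:

```
## jvm.options - G1GC settings along the lines of the 7.4 fix
## (the "10-:" prefix means the flag only applies on JDK 10 and later)
10-:-XX:+UseG1GC
10-:-XX:G1ReservePercent=25
10-:-XX:InitiatingHeapOccupancyPercent=30
```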