Comparison of lz4 vs lz4_hc vs blosc vs snappy vs fastlz

Like most questions, the answer usually ends up being: It depends :)

The other answers gave you good pointers, but another thing to take into account is RAM usage in both compression and decompression stages, as well as decompression speed in MB/s.

Decompression speed typically trades off against compression ratio, so you may think you chose the perfect algorithm to save some bandwidth/disk storage, but then whatever is consuming that data downstream has to spend much more time, CPU cycles and/or RAM to decompress it. And RAM usage might seem inconsequential, but maybe the downstream system is an embedded/low-power device? Maybe RAM is plentiful, but CPU is limited? All those things need to be taken into account.
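For instance, here's a minimal sketch of how you might measure that trade-off yourself. It assumes the `lz4` Python package (pip install lz4), and the synthetic payload is just for illustration; you'd want to run it on your own representative data:

```python
import time
import zlib
import lz4.frame

# Synthetic, moderately repetitive payload (~3.5 MB) purely for illustration.
payload = b"some moderately repetitive payload " * 100_000

for name, compress, decompress in [
    ("zlib-9", lambda d: zlib.compress(d, 9), zlib.decompress),
    ("lz4", lz4.frame.compress, lz4.frame.decompress),
]:
    blob = compress(payload)

    # Time only the decompression stage, since that's what the
    # downstream consumer pays for on every read.
    start = time.perf_counter()
    out = decompress(blob)
    elapsed = time.perf_counter() - start
    assert out == payload

    print(f"{name:7s} ratio={len(payload) / len(blob):6.2f} "
          f"decompression={len(payload) / 1e6 / elapsed:8.1f} MB/s")
```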

Here's an example of a benchmark suite run across various algorithms that takes many of these considerations into account:

https://catchchallenger.first-world.info/wiki/Quick_Benchmark:_Gzip_vs_Bzip2_vs_LZMA_vs_XZ_vs_LZ4_vs_LZO


These might help you:

- lz4 vs snappy: http://java-performance.info/performance-general-compression/
- benchmarks for lz4, snappy, lz4hc, blosc: https://web.archive.org/web/20170706065303/http://blosc.org:80/synthetic-benchmarks.html (no longer available at http://www.blosc.org/synthetic-benchmarks.html)


Yann Collet's lz4, hands down.

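If it helps, here's a minimal sketch of the lz4 vs lz4hc trade-off using the Python `lz4` bindings; the payload is an assumption for illustration:

```python
import lz4.frame

data = b"abcd" * 250_000  # ~1 MB of highly compressible input

fast = lz4.frame.compress(data)  # default fast lz4 mode
hc = lz4.frame.compress(
    data, compression_level=lz4.frame.COMPRESSIONLEVEL_MINHC
)  # lz4hc: smaller output, slower compression, same fast decompression

print("lz4   :", len(fast), "bytes")
print("lz4hc :", len(hc), "bytes")
assert lz4.frame.decompress(hc) == data
```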


If you are only aiming for high compression density, look at LZMA and large-window Brotli. These two give the best compression density among the widely available open-source algorithms. Brotli is slower at compression, but ~5x faster at decompression.
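As a rough illustration, here's a minimal sketch comparing density with Python's stdlib `lzma` and the `brotli` package (pip install brotli). The payload and quality settings are assumptions, and note that the standard Brotli bindings cap the window via `lgwin`; true large-window Brotli needs a build that supports wider windows:

```python
import lzma
import brotli

# Repetitive text payload, only for illustration; real-world density
# differences depend heavily on the actual data.
text = b"The quick brown fox jumps over the lazy dog. " * 50_000

xz = lzma.compress(text, preset=9 | lzma.PRESET_EXTREME)
br = brotli.compress(text, quality=11)  # quality 11 = max density, slowest

print("original:", len(text))
print("lzma    :", len(xz))
print("brotli  :", len(br))
assert brotli.decompress(br) == text
```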