bzip2 is too slow. Multiple cores are available
Solution 1:
There are many compression algorithms around, and bzip2 is one of the slower ones. Plain gzip tends to be significantly faster, usually at not much worse compression. When speed is most important, lzop is my favourite. Poor compression, but oh so fast.
I decided to have some fun and compare a few algorithms, including their parallel implementations. The input file is the output of the pg_dumpall command on my workstation, a 1913 MB SQL file. The hardware is an older quad-core i5. The times are wall-clock times of just the compression. Parallel implementations are set to use all 4 cores. The table is sorted by compression speed.
Algorithm      Compressed size     Compression           Decompression
lzop            398MB   20.8%       4.2s   455.6MB/s      3.1s   617.3MB/s
lz4             416MB   21.7%       4.5s   424.2MB/s      1.6s  1181.3MB/s
brotli (q0)     307MB   16.1%       7.3s   262.1MB/s      4.9s   390.5MB/s
brotli (q1)     234MB   12.2%       8.7s   220.0MB/s      4.9s   390.5MB/s
zstd            266MB   13.9%      11.9s   161.1MB/s      3.5s   539.5MB/s
pigz (x4)       232MB   12.1%      13.1s   146.1MB/s      4.2s   455.6MB/s
gzip            232MB   12.1%      39.1s    48.9MB/s      9.2s   208.0MB/s
lbzip2 (x4)     188MB    9.9%      42.0s    45.6MB/s     13.2s   144.9MB/s
pbzip2 (x4)     189MB    9.9%     117.5s    16.3MB/s     20.1s    95.2MB/s
bzip2           189MB    9.9%     273.4s     7.0MB/s     42.8s    44.7MB/s
pixz (x4)       132MB    6.9%     456.3s     4.2MB/s      7.9s   242.2MB/s
xz              132MB    6.9%    1027.8s     1.9MB/s     17.3s   110.6MB/s
brotli (q11)    141MB    7.4%    4979.2s     0.4MB/s      3.6s   531.6MB/s
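The measurements can be reproduced along these lines. This is only a sketch of the method, not the exact script I used; the sample data generated below stands in for your own dump file, and only gzip/pigz/pbzip2 are shown as examples:

```shell
# Benchmark sketch: substitute your real dump for the stand-in file.
f=dump.sql
seq 1 200000 > "$f"                    # stand-in data; use your real dump

time gzip -c "$f" > "$f.gz"            # single-threaded gzip
# time pigz -p 4 -c "$f" > "$f.gz"     # parallel gzip, 4 workers
# time pbzip2 -p4 -c "$f" > "$f.bz2"   # parallel bzip2, 4 workers

time gzip -dc "$f.gz" > /dev/null      # time only the decompressor

ls -l "$f" "$f.gz"                     # compare sizes for the ratio
```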
If the 16 cores of your server are idle enough that all can be used for compression, pbzip2 will probably give you a very significant speed-up. But if you need more speed still and can tolerate ~20% larger files, gzip is probably your best bet.
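Either way, for a dump this size it's worth piping straight from pg_dumpall into the compressor so the ~2 GB of uncompressed SQL never hits the disk. A sketch (the pg_dumpall/psql invocations are placeholders for whatever connection options you already use):

```shell
# Stream the dump straight into the compressor; swap pbzip2 for
# gzip, zstd, etc. as needed:
pg_dumpall | pbzip2 -c > dump.sql.bz2

# Restore decompresses on the fly, too (connection flags omitted):
# bzip2 -dc dump.sql.bz2 | psql postgres
```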
Update: I added brotli (see TOOGAM's answer) results to the table. brotli's compression quality setting has a very large impact on compression ratio and speed, so I added three settings (q0, q1, and q11). The default is q11, but it is extremely slow, and still worse than xz. q1 looks very good though: the same compression ratio as gzip, but 4-5 times as fast!
Update: Added lbzip2 (see gmatht's comment) and zstd (Johnny's comment) to the table, and sorted it by compression speed. lbzip2 puts the bzip2 family back in the running by compressing three times as fast as pbzip2, with a great compression ratio! zstd also looks reasonable, but brotli (q1) beats it in both ratio and speed.

My original conclusion that plain gzip is the best bet is starting to look almost silly. Although for ubiquity, it still can't be beat ;)
Solution 2:
Use pbzip2.
The manual says:
pbzip2 is a parallel implementation of the bzip2 block-sorting file compressor that uses pthreads and achieves near-linear speedup on SMP machines. The output of this version is fully compatible with bzip2 v1.0.2 or newer (ie: anything compressed with pbzip2 can be decompressed with bzip2).
It auto-detects the number of processors you have and creates threads accordingly.
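In practice it drops in wherever bzip2 was used. A quick sketch (big.sql is a placeholder filename):

```shell
pbzip2 -c big.sql > big.sql.bz2        # compress to stdout, all cores
pbzip2 -p4 -c big.sql > big.sql.bz2    # cap at 4 threads explicitly
pbzip2 -d big.sql.bz2                  # decompress

# The output is ordinary bzip2, so machines without pbzip2 can still
# decompress it:
bzip2 -dc big.sql.bz2 > big.sql
```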
Solution 3:
- Google's brotli is a newer format that has recently gained wide support in browsers, as it offers impressive compression, impressive speed, and perhaps the most impressive combination/balance of those two features.
Some data:
Comparison of Brotli, Deflate, Zopfli, LZMA, LZHAM and Bzip2 Compression Algorithms
- e.g., this chart reports numbers showing Brotli to be roughly 6-14 times faster than Bzip2.
CanIUse.com: feature: brotli shows support by Microsoft Edge, Mozilla Firefox, Google Chrome, Apple Safari, Opera (but not Opera Mini or Microsoft Internet Explorer).
Comparison: Brotli vs deflate vs zopfli vs lzma vs lzham vs bzip2
- Entries toward the top of this chart achieve a tighter compression ratio (higher = tighter). If compression speed is your priority, though, what you want to look at is which lines reach further right on the chart.
- Facebook's ZStandard is another option: it strives to shave bits, but also focuses heavily on storing data in a way that reduces missed branch predictions, allowing for faster speed. Its home page is at: Smaller and faster data compression with ZStandard
- Lizard doesn't compress quite as tightly as Brotli or ZStandard, but it may come somewhat close in compression ratio while being quite a bit faster (at least according to this chart about speed, although it reports decompression)
You didn't mention an operating system. If you're on Windows, 7-Zip with ZStandard (Releases) is a version of 7-Zip that has been modified to support all of these algorithms.
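On the command line, brotli's speed/ratio balance is controlled by the -q quality flag. A minimal sketch (dump.sql is a placeholder filename):

```shell
# Quality runs 0 (fastest) to 11 (tightest, and the default).
# Low qualities give gzip-like ratios at a fraction of the time:
brotli -q 1 -c dump.sql > dump.sql.br

# Decompress to stdout:
brotli -d -c dump.sql.br > dump.sql.out
```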
Solution 4:
Use zstd. If it's good enough for Facebook, it's probably good enough for you as well.
On a more serious note, it's actually pretty good. I use it for everything now because it just works, and it lets you trade speed for ratio on a large scale (most often, speed matters more than size anyway since storage is cheap, but speed is a bottleneck).
At compression levels that achieve overall compression comparable to bzip2, it's significantly faster, and if you are willing to pay extra CPU time, you can almost achieve results similar to LZMA (although it will then be slower than bzip2). At slightly worse compression ratios, it is much, much faster than bzip2 or any other mainstream alternative.
Now, you are compressing a SQL dump, which is just about as embarrassingly trivial to compress as data gets. Even the poorest compressors score well on that kind of data.
So you can run zstd with a lower compression level, which will run dozens of times faster and still achieve 95-99% of the same compression on that data.
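A quick sketch of the level knob (dump.sql is a placeholder filename):

```shell
# Levels run 1 (fastest) through 19 (tightest; up to 22 with --ultra).
# For highly compressible SQL text a low level is usually plenty:
zstd -1 -c dump.sql > dump.sql.zst

# -T0 spreads work across all cores; --fast=N goes below level 1:
# zstd -T0 --fast=3 -c dump.sql > dump.sql.zst

# Decompress to stdout:
zstd -d -c dump.sql.zst > dump.sql.out
```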
As a bonus, if you will be doing this often and want to invest some extra time, you can "train" the zstd compressor ahead of time, which increases both compression ratio and speed. Note that for training to work well, you will need to feed it individual records, not the whole thing. The way the tool works, it expects many small and somewhat similar samples for training, not one huge blob.
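Training looks roughly like this. The directory and file names are placeholders, and how you split the dump into one-file-per-record samples is up to you; zstd just wants many small, similar files:

```shell
# Train a shared dictionary from a directory of small samples:
zstd --train samples/* -o records.dict

# Reference the dictionary when compressing and decompressing:
zstd -D records.dict -c record0001.txt > record0001.txt.zst
zstd -D records.dict -d -c record0001.txt.zst > record0001.txt.out
```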