Best compression for ZFS send/recv
Solution 1:
Here is what I've learned doing the exact same thing you are doing. I suggest using mbuffer. When testing in my environment it only helped on the receiving end; without it at all, the send would slow down while the receive caught up.
Some examples: http://everycity.co.uk/alasdair/2010/07/using-mbuffer-to-speed-up-slow-zfs-send-zfs-receive/
Homepage with options and syntax: http://www.maier-komor.de/mbuffer.html
The send command from my replication script:
zfs send -i tank/pool@oldsnap tank/pool@newsnap | ssh -c arcfour remotehostip "mbuffer -s 128k -m 1G | zfs receive -F tank/pool"
This runs mbuffer on the remote host as a receive buffer so the sending side runs as fast as possible. I run a 20 Mbit line and found that having mbuffer on the sending side as well didn't help; also, my main ZFS box uses all of its RAM as cache, so giving even 1 GB to mbuffer would require me to reduce some cache sizes.
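For comparison, if you want to test a send-side buffer on your own link anyway, the variant would look roughly like this (same placeholder host and dataset names as above):
zfs send -i tank/pool@oldsnap tank/pool@newsnap | mbuffer -s 128k -m 1G | ssh -c arcfour remotehostip "mbuffer -s 128k -m 1G | zfs receive -F tank/pool"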
Also, and this isn't really my area of expertise, I think it's best to just let ssh do the compression. In your example I think you are using bzip2 and then ssh with its own compression on top, so SSH is trying to compress an already-compressed stream. I ended up using arcfour as the cipher because it's the least CPU-intensive, and that was important for me. You may get better results with another cipher, but I'd definitely suggest letting SSH do the compression (or turning off ssh compression if you really want to use something it doesn't support).
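For reference, ssh lets you toggle its compression explicitly on the command line: -C forces it on, and -o Compression=no turns it off if you'd rather run your own compressor in the pipe. A minimal sketch, with the same placeholder names as above:
# Compress the stream yourself and keep ssh-level compression off
zfs send -i tank/pool@oldsnap tank/pool@newsnap \
  | gzip \
  | ssh -o Compression=no remotehostip "gunzip | zfs receive -F tank/pool"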
What's really interesting is that using mbuffer when sending and receiving on localhost speeds things up as well:
zfs send tank/pool@snapshot | mbuffer -s 128k -m 4G -o - | zfs receive -F tank2/pool
I found that 4 GB seems to be the sweet spot for localhost transfers. It just goes to show that zfs send/receive works best when there is no latency or any other pause in the stream.
Just my experience; hope this helps. It took me a while to figure all this out.
Solution 2:
Things have changed in the years since this question was posted:
1: ZFS now supports compressed replication: just add the -c flag to the zfs send command, and blocks that were compressed on disk will remain compressed as they pass through the pipe to the other end. There may still be more compression to be gained, because the default compression in ZFS is lz4.
2: The best compressor to use in this case is zstd (Zstandard). It now has an 'adaptive' mode that will change the compression level (between the 19+ levels supported, plus the new higher-speed zstd-fast levels) based on the speed of the link between zfs send and zfs recv. It compresses as much as it can while keeping the queue of data waiting to go out the pipe to a minimum. If your link is fast it won't waste time compressing the data more, and if your link is slow it will keep working to compress the data more and save you time in the end. It also supports threaded compression, so it can take advantage of multiple cores, which gzip and bzip2 do not, outside of special versions like pigz. A combined sketch of both points follows.
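As a rough sketch of how the two points fit together (host and dataset names are placeholders, and --adapt needs a reasonably recent zstd):
# Keep on-disk compression across the wire (-c) and let zstd adapt its
# compression level to the link speed, using all cores (-T0)
zfs send -c -i tank/pool@oldsnap tank/pool@newsnap \
  | zstd -T0 --adapt \
  | ssh remotehost "zstd -d | zfs receive -F tank/pool"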
Solution 3:
This is an answer to your specific question:
You can try rzip, but it works in ways that are a bit different from compress/bzip/gzip:
rzip expects to be able to read over the whole file, so it can't be run in a pipeline. This will greatly increase your local storage requirements and you won't be able to run a backup and send the backup over the wire in one single pipe. That said, the resulting files, at least according to this test, are quite a bit smaller.
If your resource constraint is your pipe, you'll be running backups 24x7 anyway, so you'll need to copy snapshots constantly and hope you can keep up.
Your new command would be:
remotedir=/big/filesystem/on/remote/machine/
snaploc=/some/big/filesystem/

# Pick a timestamped snapshot file name that doesn't already exist locally
while
    now=$(date +%s)
    snap=snapshot.$now.zfssnap
    test -f "$snaploc/$snap"
do
    sleep 1
done

# Dump the incremental stream to a file, compress it with rzip (which
# replaces the file with $snap.rz), ship the compressed file over ssh,
# then decompress and receive it on the far side before cleaning up
zfs send -i tank/vm@2009-10-10 tank/vm@2009-10-12 > "$snaploc/$snap" &&
rzip "$snaploc/$snap" &&
ssh offsite-backup "
    cat > $remotedir/$snap.rz &&
    rzip -d $remotedir/$snap.rz &&
    zfs recv -F tank/vm < $remotedir/$snap &&
    rm $remotedir/$snap" < "$snaploc/$snap.rz" &&
rm "$snaploc/$snap.rz"
You will want to add better error handling, and you'll want to consider using something like rsync to transfer the compressed files, so that if the transfer fails in the middle you can pick up where you left off.
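As one sketch of the rsync idea (host and paths follow the script above), --partial keeps a half-transferred file around so a rerun of the same command can resume it:
# Resumable copy of the compressed snapshot; rerun after a failure to pick up again
rsync --partial "$snaploc/$snap.rz" offsite-backup:"$remotedir/"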
Solution 4:
It sounds like you've tried all of the best compression mechanisms and are still being limited by the line speed. Assuming running a faster line is out of the question, have you considered just running the backups less frequently so that they have more time to run?
Short of that, is there some way to lower the amount of data being written? Without knowing your application stack it's hard to say how, but doing things like making sure apps overwrite existing files instead of creating new ones might help, as would making sure you aren't saving backups of temp/cache files that you won't need.
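If it helps to quantify things first, zfs send can do a dry run that reports the estimated size of an incremental stream without sending anything (snapshot names here are just placeholders):
# Print the estimated incremental stream size without sending any data
zfs send -nv -i tank/vm@2009-10-10 tank/vm@2009-10-12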
Solution 5:
My experience is that zfs send is quite bursty despite being much faster (on average) than the following compression step. My backup inserts considerable buffering after zfs send and more after gzip:
zfs send $SNAP | mbuffer $QUIET -m 100M | gzip | mbuffer -q -m 20M | gpg ... > file
In my case the output device is USB-connected (not network-connected), but the buffering is important for a similar reason: the overall backup time is lower when the USB drive is kept 100% busy. You may not send fewer bytes overall (as you asked), but you can still finish sooner. Buffering keeps the CPU-bound compression step from becoming IO-bound.