How do you synchronise huge sparse files (VM disk images) between machines?
Solution 1:
rsync --ignore-existing --sparse ...
to create any new files in sparse mode, followed by
rsync --inplace ...
to update all existing files (including the previously created sparse ones) in place.
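A minimal sketch of that two-pass approach, assuming a hypothetical libvirt image directory and destination host (adjust both to your setup):

# pass 1: create files missing on the destination, writing them sparsely
rsync -rv --ignore-existing --sparse /var/lib/libvirt/images/ user@remotehost:/var/lib/libvirt/images/
# pass 2: update files that already exist, rewriting only changed blocks in place
rsync -rv --inplace /var/lib/libvirt/images/ user@remotehost:/var/lib/libvirt/images/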
Solution 2:
Rsync only transfers the changes to each file, and with --inplace it should only rewrite the blocks that changed without recreating the file. From their features page:
rsync is a file transfer program for Unix systems. rsync uses the "rsync algorithm" which provides a very fast method for bringing remote files into sync. It does this by sending just the differences in the files across the link, without requiring that both sets of files are present at one of the ends of the link beforehand.
Using --inplace should work for you. The command below will show you progress, compress the transfer (at the default compression level), transfer the contents of the local storage directory recursively (that first trailing slash matters), make the changes to the files in place, and use ssh for the transport.
rsync -v -z -r --inplace --progress -e ssh /path/to/local/storage/ \
[email protected]:/path/to/remote/storage/
I often use the -a flag as well, which does a few more things. It's equivalent to -rlptgoD; I'll leave the exact behavior for you to look up in the man page.
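As a sketch, the same command with -a in place of -r (the host is a placeholder; -a additionally preserves things like permissions, timestamps and symlinks, which usually matters for VM images):

rsync -avz --inplace --progress -e ssh /path/to/local/storage/ \
    user@remotehost:/path/to/remote/storage/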
Solution 3:
Take a look at the Zumastor Linux Storage Project; it implements "snapshot" backup using binary "rsync" via the ddsnap tool.
From the man-page:
ddsnap provides block device replication given a block level snapshot facility capable of holding multiple simultaneous snapshots efficiently. ddsnap can generate a list of snapshot chunks that differ between two snapshots, then send that difference over the wire. On a downstream server, write the updated data to a snapshotted block device.
Solution 4:
I ended up writing software to do this:
http://www.virtsync.com
This is commercial software costing $49 per physical server.
I can now replicate a 50GB sparse file (which has 3GB of content) in under 3 minutes across residential broadband.
chris@server:~$ time virtsync -v /var/lib/libvirt/images/vsws.img backup.barricane.com:/home/chris/
syncing /var/lib/libvirt/images/vsws.img to backup.barricane.com:/home/chris/vsws.img (dot = 1 GiB)
[........>.........................................]
done - 53687091200 bytes compared, 4096 bytes transferred.
real 2m47.201s
user 0m48.821s
sys 0m43.915s
Solution 5:
To sync huge files or block devices with low to moderate differences you can either do a plain copy or use bdsync; rsync is absolutely not fit for this particular case*.
bdsync worked for me and seems mature enough; its history of bugs is encouraging (minor issues, prompt resolution). In my tests its speed was close to the theoretical maximum you could get** (that is, you can sync in about the time you need to read the file). Finally, it's open source and costs nothing.
bdsync reads the files on both hosts and exchanges checksums to compare them and detect differences, all at the same time. It then creates a compressed patch file on the source host. You move that file to the destination host and run bdsync a second time to patch the destination file.
When using it over a rather fast link (e.g. 100Mbit Ethernet) and for files with small differences (as is most often the case with VM disks), it reduces the time to sync to the time you need to read the file. Over a slow link you need a bit more time, because you have to copy the compressed changes from one host to the other (it seems you can save time using a nice trick, but I haven't tested it; see the sketch just below). For files with many changes, the time to write the patch file to disk should also be taken into account (and you need enough free space on both hosts to hold it).
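An untested sketch of that idea: since bdsync --patch reads the patch from standard input, you could in principle stream the compressed changes straight through ssh instead of storing them first (variable names match the script below; treat this as an assumption, not a verified recipe):

# stream the patch through pigz and ssh without writing it to disk
bdsync "ssh $REMOTE_HOST bdsync --server" "$LOCAL_FILE" "$REMOTE_FILE" --diffsize=resize \
    | pigz \
    | ssh $REMOTE_HOST "pigz -d | bdsync --patch=\"$REMOTE_FILE\" --diffsize=resize"

The trade-off: nothing hits the disk, but a failure mid-stream leaves you with no patch file to retry from.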
Here's how I typically use bdsync. These commands are run on $LOCAL_HOST to "copy" $LOCAL_FILE to $REMOTE_FILE on $REMOTE_HOST. I use pigz (a faster gzip) to compress the changes, ssh to run bdsync on the remote host, and rsync/ssh to copy the changes. Do note that I'm checking whether the patch has been applied successfully, but I only print "Update successful" when it has. You may wish to do something more clever in case of failure.
REMOTE_HOST=1.2.3.4
LOCAL_FILE=/path/to/source/file
REMOTE_FILE=/path/to/destination/file
PATCH=a_file_name
LOC_TMPDIR=/tmp/
REM_TMPDIR=/tmp/
# if you do use /tmp/ make sure it fits large patch files
# find changes and create a compressed patch file
bdsync "ssh $REMOTE_HOST bdsync --server" "$LOCAL_FILE" "$REMOTE_FILE" --diffsize=resize | pigz > "$LOC_TMPDIR/$PATCH"
# move patch file to remote host
rsync "$LOC_TMPDIR/$PATCH" $REMOTE_HOST:$REM_TMPDIR/$PATCH
# apply patch to remote file
(
ssh -T $REMOTE_HOST <<ENDSSH
pigz -d < $REM_TMPDIR/$PATCH | bdsync --patch="$REMOTE_FILE" --diffsize=resize && echo "ALL-DONE"
rm $REM_TMPDIR/$PATCH
ENDSSH
) | grep -q "ALL-DONE" && echo "Update successful" && rm "$LOC_TMPDIR/$PATCH"
# (optional) update remote file timestamp to match local file
MTIME=$(stat -c %Y "$LOCAL_FILE")
ssh $REMOTE_HOST touch -c -d "@$MTIME" "$REMOTE_FILE" </dev/null
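One design note on the script above: the here-document delimiter (ENDSSH) is unquoted, so $REM_TMPDIR, $PATCH and $REMOTE_FILE are expanded by the local shell before the commands are sent, which is what makes these local variables visible to the remote side.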
*: rsync is hugely inefficient with huge files. Even with --inplace it will first read the whole file on the destination host, only AFTERWARDS begin reading the file on the source host, and finally transfer the differences (just run dstat or similar while rsync is running and observe). The result is that even for files with small differences it takes about double the time you need to read the file in order to sync it.
**: Under the assumption that you have no other way to tell which parts of the files have changed. LVM snapshots use bitmaps to record the changed blocks, so they can be dramatically faster (the README of lvmsync has more info).
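For completeness, a hedged sketch of that snapshot-based approach with lvmsync (device and host names are hypothetical; the lvmsync README documents the exact workflow):

# take a snapshot to mark a consistent point in time
lvcreate --snapshot --size 10G --name vm-snap /dev/vg0/vm
# one-time full copy of the point-in-time snapshot to the destination
dd if=/dev/vg0/vm-snap bs=1M | ssh remotehost "dd of=/dev/vg0/vm bs=1M"
# later (e.g. with the VM paused): send only the blocks changed since the snapshot
lvmsync /dev/vg0/vm-snap remotehost:/dev/vg0/vm

After each sync you would typically drop the snapshot and take a fresh one as the baseline for the next run.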