Good block size for disk-cloning with diskdump (dd)
64k seems to be a good pick:
Results:
block size   time   records
no bs=       78s    144584+0
bs=512       78s    144584+0
bs=1k        38s    72292+0
bs=2k        38s    36146+0
bs=4k        38s    18073+0
bs=5k        39s    14458+1
bs=50k       38s    1445+1
bs=500k      39s    144+1
bs=512k      39s    144+1
bs=1M        39s    72+1
bs=5M        39s    14+1
bs=10M       39s    7+1
(taken from here).
This matches my own findings regarding read/write buffering from when I was speeding up an I/O-heavy converter program at work.
dd will happily copy using whatever block size you specify, and will copy a partial block at the end if the data doesn't divide evenly.
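As a quick check of the arithmetic behind those record counts (taking the 144584+0 baseline above to mean 144584 blocks of 512 bytes):

$ echo $(( 144584 * 512 ))      # total bytes copied
74027008
$ echo $(( 74027008 / 5120 ))   # full 5k blocks; the leftover bytes are the partial record
14458

So at bs=5k, dd reads 14458 full records plus one partial record at the end, which is exactly the 14458+1 shown in the results.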
Basically, the block size (bs) parameter seems to set the amount of memory that's used to read a lump from one disk before trying to write that lump to the other. (bs=n sets both the input and output block size; dd also lets you set them independently via ibs= and obs=.)
If you have lots of RAM, then making the BS large (but entirely contained in RAM) means that the I/O sub-system is utilised as much as possible by doing very large reads and writes, exploiting the RAM. Making the BS small means that the I/O overhead as a proportion of total activity goes up.
Of course, in this there is a law of diminishing returns. My rough approximation is that a block size in the range of about 128K to 32M is probably going to give performance such that the overheads are small compared to the plain I/O, and going larger won't make a lot of difference. The reason the range is as wide as 128K to 32M is that it depends on your OS, hardware, and so on.
If it were me, I'd do a few experiments timing a copy/clone using a BS of 128K and again using (say) 16M. If one is appreciably faster, use it; if not, use the smaller BS of the two.
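For example, a rough timing experiment might look like this. The device names are placeholders; substitute your real source and destination and double-check them before running, since dd will overwrite the target without asking:

$ time sudo dd if=/dev/sdX of=/dev/sdY bs=128K
$ time sudo dd if=/dev/sdX of=/dev/sdY bs=16M

(dd also prints its own elapsed time and throughput to stderr when it finishes, so the time wrapper is optional.)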
As others have said, there is no universally correct block size; what is optimal for one situation or one piece of hardware may be terribly inefficient for another. Also, depending on the health of the disks it may be preferable to use a different block size than what is "optimal".
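For example, when rescuing data from a failing drive, a smaller block size combined with conv=noerror,sync limits how much data is lost around each bad sector, because dd skips (and, with sync, zero-pads) a whole input block when a read fails. A sketch, with placeholder names:

$ sudo dd if=/dev/sdX of=rescue.img bs=4K conv=noerror,sync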
One thing that is pretty reliable on modern hardware is that the default block size of 512 bytes tends to be almost an order of magnitude slower than a more optimal alternative. When in doubt, I've found that 64K is a pretty solid modern default. Though 64K usually isn't THE optimal block size, in my experience it tends to be a lot more efficient than the default. 64K also has a pretty solid history of being reliably performant: You can find a message from the Eug-Lug mailing list, circa 2002, recommending a block size of 64K here: http://www.mail-archive.com/[email protected]/msg12073.html
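In practice, that means if you can't benchmark first, a 64K clone is a reasonable starting point (placeholder device names again):

$ sudo dd if=/dev/sdX of=/dev/sdY bs=64K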
For determining THE optimal output block size, I've written the following script, which uses dd to write a 128M test file at a range of block sizes, from the default of 512 bytes up to a maximum of 64M. Be warned: this script uses dd internally, so use it with caution.
dd_obs_test.sh:
#!/bin/bash

# Since we're dealing with dd, abort if any errors occur
set -e

TEST_FILE=${1:-dd_obs_testfile}
TEST_FILE_EXISTS=0
if [ -e "$TEST_FILE" ]; then TEST_FILE_EXISTS=1; fi
TEST_FILE_SIZE=134217728

if [ $EUID -ne 0 ]; then
  echo "NOTE: Kernel cache will not be cleared between tests without sudo. This will likely cause inaccurate results." 1>&2
fi

# Header
PRINTF_FORMAT="%8s : %s\n"
printf "$PRINTF_FORMAT" 'block size' 'transfer rate'

# Block sizes of 512b 1K 2K 4K 8K 16K 32K 64K 128K 256K 512K 1M 2M 4M 8M 16M 32M 64M
for BLOCK_SIZE in 512 1024 2048 4096 8192 16384 32768 65536 131072 262144 524288 1048576 2097152 4194304 8388608 16777216 33554432 67108864
do
  # Calculate the number of blocks required to reach the test file size
  COUNT=$(($TEST_FILE_SIZE / $BLOCK_SIZE))
  if [ $COUNT -le 0 ]; then
    echo "Block size of $BLOCK_SIZE estimated to require $COUNT blocks, aborting further tests."
    break
  fi

  # Clear the kernel cache to ensure more accurate results
  [ $EUID -eq 0 ] && [ -e /proc/sys/vm/drop_caches ] && echo 3 > /proc/sys/vm/drop_caches

  # Create a test file with the specified block size
  DD_RESULT=$(dd if=/dev/zero of="$TEST_FILE" bs=$BLOCK_SIZE count=$COUNT conv=fsync 2>&1 1>/dev/null)

  # Extract the transfer rate from dd's STDERR output
  TRANSFER_RATE=$(echo "$DD_RESULT" | \grep --only-matching -E '[0-9.]+ ([MGk]?B|bytes)/s(ec)?')

  # Clean up the test file, but only if we created it
  # (i.e. it did not exist before the run)
  if [ $TEST_FILE_EXISTS -eq 0 ]; then rm "$TEST_FILE"; fi

  # Output the result
  printf "$PRINTF_FORMAT" "$BLOCK_SIZE" "$TRANSFER_RATE"
done
I've only tested this script on a Debian (Ubuntu) system and on OS X Yosemite, so it will probably take some tweaking to work on other Unix flavors.
By default the command will create a test file named dd_obs_testfile in the current directory. Alternatively, you can test against a custom file by providing a path after the script name:
$ ./dd_obs_test.sh /path/to/disk/test_file
The output of the script is a list of the tested block sizes and their respective transfer rates like so:
$ ./dd_obs_test.sh
block size : transfer rate
512 : 11.3 MB/s
1024 : 22.1 MB/s
2048 : 42.3 MB/s
4096 : 75.2 MB/s
8192 : 90.7 MB/s
16384 : 101 MB/s
32768 : 104 MB/s
65536 : 108 MB/s
131072 : 113 MB/s
262144 : 112 MB/s
524288 : 133 MB/s
1048576 : 125 MB/s
2097152 : 113 MB/s
4194304 : 106 MB/s
8388608 : 107 MB/s
16777216 : 110 MB/s
33554432 : 119 MB/s
67108864 : 134 MB/s
(Note: The unit of the transfer rates will vary by OS)
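(That unit variance is also why the script's grep pattern accepts several spellings: the summary line dd writes to stderr differs between implementations. Illustratively, and varying by version, GNU dd prints something like "134217728 bytes (134 MB) copied, 1.2 s, 112 MB/s", while BSD/OS X dd prints something like "134217728 bytes transferred in 1.200000 secs (111848106 bytes/sec)".)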
To test optimal read block size, you could use more or less the same process, but instead of reading from /dev/zero and writing to the disk, you'd read from the disk and write to /dev/null. A script to do this might look like so:
dd_ibs_test.sh:
#!/bin/bash

# Since we're dealing with dd, abort if any errors occur
set -e

TEST_FILE=${1:-dd_ibs_testfile}
TEST_FILE_SIZE=134217728

# Exit if the test file already exists, so we never overwrite real data
if [ -e "$TEST_FILE" ]; then
  echo "Test file $TEST_FILE exists, aborting."
  exit 1
fi
TEST_FILE_EXISTS=1

if [ $EUID -ne 0 ]; then
  echo "NOTE: Kernel cache will not be cleared between tests without sudo. This will likely cause inaccurate results." 1>&2
fi

# Create the test file from random data
echo 'Generating test file...'
BLOCK_SIZE=65536
COUNT=$(($TEST_FILE_SIZE / $BLOCK_SIZE))
dd if=/dev/urandom of="$TEST_FILE" bs=$BLOCK_SIZE count=$COUNT conv=fsync > /dev/null 2>&1

# Header
PRINTF_FORMAT="%8s : %s\n"
printf "$PRINTF_FORMAT" 'block size' 'transfer rate'

# Block sizes of 512b 1K 2K 4K 8K 16K 32K 64K 128K 256K 512K 1M 2M 4M 8M 16M 32M 64M
for BLOCK_SIZE in 512 1024 2048 4096 8192 16384 32768 65536 131072 262144 524288 1048576 2097152 4194304 8388608 16777216 33554432 67108864
do
  # Clear the kernel cache to ensure more accurate results
  [ $EUID -eq 0 ] && [ -e /proc/sys/vm/drop_caches ] && echo 3 > /proc/sys/vm/drop_caches

  # Read the test file out to /dev/null with the specified block size
  DD_RESULT=$(dd if="$TEST_FILE" of=/dev/null bs=$BLOCK_SIZE 2>&1 1>/dev/null)

  # Extract the transfer rate from dd's STDERR output
  TRANSFER_RATE=$(echo "$DD_RESULT" | \grep --only-matching -E '[0-9.]+ ([MGk]?B|bytes)/s(ec)?')

  printf "$PRINTF_FORMAT" "$BLOCK_SIZE" "$TRANSFER_RATE"
done

# Clean up the test file we created
if [ $TEST_FILE_EXISTS -ne 0 ]; then rm "$TEST_FILE"; fi
An important difference in this case is that the script itself writes the test file. Do not point this command at an existing file, or the existing file will be overwritten with random data!
For my particular hardware, I found that 128K was the optimal input block size on an HDD and 32K was optimal on an SSD.
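Since dd accepts independent input and output block sizes, findings like these can be combined in one invocation rather than compromising on a single bs (placeholder names; note that specifying bs= would supersede both):

$ sudo dd if=/dev/sdX of=backup.img ibs=128K obs=64K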
Though this answer covers most of my findings, I've run into this situation enough times that I wrote a blog post about it: http://blog.tdg5.com/tuning-dd-block-size/. You can find more specifics on the tests I performed there.
This StackOverflow post may also be helpful: dd: How to calculate optimal blocksize?