How can I best copy large numbers of small files over scp?
You can pipe tar across an ssh session:
$ tar czf - <files> | ssh user@host "cd /wherever && tar xvzf -"
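The same pipe also works in the other direction; here is a rough sketch for pulling files from the remote host instead (user@host and the paths are placeholders):
$ ssh user@host "cd /wherever && tar czf - ." | tar xzf - -C /local/dest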
Tar with bzip2 compression trades CPU for bandwidth: the stronger compression takes load off the network and puts it on the CPU instead.
$ tar -C /path/to/src/dir -jcf - ./ | ssh user@server 'tar -C /path/to/dest/dir -jxf -'
I'm not using -v because screen output might slow down the process. If you want verbose output, use it on the local side of tar (-jcvf), not on the remote side.
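For reference, the verbose variant that note describes, with -v only on the local (compressing) side, would look like this:
$ tar -C /path/to/src/dir -jcvf - ./ | ssh user@server 'tar -C /path/to/dest/dir -jxf -'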
If you repeatedly copy to the same destination path, for example when updating a backup copy, your best choice is rsync with compression.
$ rsync -az -e ssh /path/to/src/dir/ user@server:/path/to/dest/dir/
Notice that both the src and dest paths end with a / (a trailing slash on the source tells rsync to copy the directory's contents rather than the directory itself). Again, the -v and -P flags are left out on purpose; add them if you need verbose output.
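As a sketch for the backup-update case: if you also want deletions to propagate, rsync's --delete option removes files from the destination that no longer exist in the source (use it with care):
$ rsync -az --delete -e ssh /path/to/src/dir/ user@server:/path/to/dest/dir/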
Use rsync; it uses SSH by default.
Usage:
rsync -aPz /source/path destination.server:remote/path
The rsync switches take care of compression (-z) and file metadata such as permissions, ownership, and timestamps (-a); -P displays the progress of every file.
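Two common variations, sketched here with example values (port 2222 and the bandwidth cap are placeholders): -e lets you pass extra SSH options, and --bwlimit caps the transfer rate in KiB/s:
$ rsync -aPz -e 'ssh -p 2222' --bwlimit=5000 /source/path destination.server:remote/path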
You can use scp -C, which enables compression, but if possible, use rsync.
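For completeness, a minimal scp sketch (host and paths are placeholders); -C enables compression and -r copies directories recursively:
$ scp -Cr /path/to/src/dir user@host:/path/to/dest/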