How do I recursively copy/download a whole webdav directory?
This answer summarises suggestions given in comments by @Ocaso and @Rinzwind.
I used this:
wget -r -nH -np --cut-dirs=1 --no-check-certificate -U Mozilla \
    --user={uname} --password={pwd} \
    https://my-host/my-webdav-dir/my-dir-in-webdav
Not perfect (it downloaded lots of 'index.html?C=M;O=D' files and the like), but otherwise it worked OK.
The "-r" downloads recursively, following links.
The "-np" prevents ascending to parent directories (else you download the whole website!).
The "-nH" prevents creating a directory called "my-host" (which I didn't want).
The "--cut-dirs=1" prevents creating a directory called "my-webdav-dir".
The "--no-check-certificate" is because I'm using a self-signed certificate on the webdav server (I'm also forcing https).
The "-U Mozilla" sets the user agent in the http request to "Mozilla" - my webdav server didn't actually need this, but I've included it anyway.
Alternatively, you can mount the WebDAV share so it can be accessed as part of your own file system.
sudo mount -t davfs https://your.remote/path /your/local/mount/point
Note: /your/local/mount/point has to be a real, existing directory for this to work.
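For example, you can create the mount point first (the path is just the placeholder from above):

sudo mkdir -p /your/local/mount/point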
As far as I know, you only need to run the following to get the mount command to work:
sudo apt-get install davfs2
(If more configuration is required I apologise, it was a long time ago that I did this.)
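If you want the mount to be more convenient, here is a sketch of optional davfs2 configuration, assuming the standard file locations (the URL, credentials, and mount point are all placeholders):

# /etc/davfs2/secrets -- stores credentials so you aren't prompted each time
https://your.remote/path  {uname}  {pwd}

# /etc/fstab -- lets you mount with just 'mount /your/local/mount/point'
https://your.remote/path  /your/local/mount/point  davfs  rw,user,noauto  0  0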
(I added this as an answer because I feel Liam's answer didn't give enough info.)
Actually, with cadaver you can cd into the directory you want to download files from and run mget *.
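For example, a typical cadaver session might look like this (the server path is reused from the wget example above; cadaver prompts for credentials as needed):

cadaver https://my-host/my-webdav-dir/
dav:/my-webdav-dir/> cd my-dir-in-webdav
dav:/my-webdav-dir/my-dir-in-webdav/> mget *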