Copy multiple files from an S3 bucket

Here is a bash script that reads filenames from a file filename.txt and copies each object:

#!/bin/bash
set -e
# Read one object key per line from filename.txt and copy it from the bucket
while IFS= read -r line
do
  aws s3 cp "s3://bucket-name/$line" dest-path/
done < filename.txt
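
If you don't already have filename.txt, one way to generate it (a sketch, assuming a flat bucket with no nested prefixes, and the same hypothetical bucket-name as above) is to pull the object keys out of the listing; aws s3 ls prints date, time, size, and key columns:

# Keep only the fourth column (the object key) of each listing line
aws s3 ls s3://bucket-name/ | awk '{print $4}' > filename.txt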

You might want to use "sync" instead of "cp". The following will download only the files with a ".txt" extension into your local folder:

aws s3 sync --exclude="*" --include="*.txt" s3://mybucket/mysubbucket .
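
If you are unsure what a sync will touch, the --dryrun flag prints the operations it would perform without executing them:

aws s3 sync --dryrun --exclude="*" --include="*.txt" s3://mybucket/mysubbucket .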

As per the documentation, you can use --include and --exclude filters with s3 cp as well, so you can do something like this:

aws s3 cp s3://bucket/folder/ . --recursive --exclude="*" --include="2017-12-20*"

Make sure you get the order of the --exclude and --include filters right, as it changes the meaning: filters are applied in the order they appear, and a later filter takes precedence over an earlier one.
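
For example, reversing the filters from the command above so that --exclude="*" comes last makes it override the include, and nothing is copied:

aws s3 cp s3://bucket/folder/ . --recursive --include="2017-12-20*" --exclude="*"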


You can also use the --recursive option, as described in the documentation for the cp command. It copies all objects under the specified prefix.

Example:

aws s3 cp s3://folder1/folder2/folder3 . --recursive

will grab all files under folder1/folder2/folder3 and copy them to the local directory.
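
You can also point the copy at a named local directory, which the CLI creates if it does not exist (a sketch reusing the same hypothetical path):

aws s3 cp s3://folder1/folder2/folder3 ./folder3-copy --recursive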