How to create multiple tar archives for a huge folder?
I wrote this bash script to do it. It builds an array containing the names of the files that go into each tar, then starts tar on all of them in parallel. It might not be the most efficient way, and I'd expect it to consume a large amount of memory, but it will get the job done. You will need to adjust the options at the start of the script.
You might also want to change the tar options cvjf in the last line, for example removing the verbose output v for performance, or changing the compression j (bzip2) to z (gzip), and so on.
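For example, to switch to gzip compression and drop the verbose output, the last line of the script would become something like the sketch below (the .tar.bz2 suffix built inside the loop would also need to become .tar.gz to match):
printf '%s\n' "${tar_files[@]}" | xargs -n$((num_files_per_tar+1)) -P$num_procs tar czf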
Script
#!/bin/bash
# User configuration
#===================
files=(*.log) # Set the file pattern to be used, e.g. (*.txt) or (*)
num_files_per_tar=5 # Number of files per tar
num_procs=4 # Number of tar processes to start
tar_file_dir='/tmp' # Tar files dir
tar_file_name_prefix='tar' # prefix for tar file names
tar_file_name="$tar_file_dir/$tar_file_name_prefix"
# Main algorithm
#===============
num_tars=$(( (${#files[@]} + num_files_per_tar - 1) / num_files_per_tar )) # number of tar files to create, rounded up so leftover files still get archived
tar_files=() # will hold the names of files for each tar
tar_start=0 # gets updated to where each tar's slice starts
# Loop over the files, adding their names to be tarred
for ((i=0; i<num_tars; i++))
do
tar_files[$i]="$tar_file_name$i.tar.bz2 ${files[@]:tar_start:num_files_per_tar}"
tar_start=$((tar_start+num_files_per_tar))
done
# Start tar in parallel for each of the strings we just constructed
printf '%s\n' "${tar_files[@]}" | xargs -n$((num_files_per_tar+1)) -P$num_procs tar cjvf
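To run it, save the script somewhere (make_tars.sh below is just a name I picked for illustration), and start it from inside the folder you want to archive, because the glob in files=(...) is expanded relative to the current directory:
cd /path/to/huge/folder
bash /path/to/make_tars.sh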
Explanation
First, all the file names that match the selected pattern are stored in the array files. Next, the for loop slices this array and forms strings from the slices; the number of slices is equal to the number of desired tarballs. The resulting strings are stored in the array tar_files, and the loop also prepends the name of the resulting tarball to each string. The elements of tar_files take the following form (assuming 5 files per tarball):
tar_files[0]="tar0.tar.bz2 file1 file2 file3 file4 file5"
tar_files[1]="tar1.tar.bz2 file6 file7 file8 file9 file10"
...
The last line of the script uses xargs to start multiple tar processes (up to the specified maximum number), each of which processes one element of the tar_files array, in parallel.
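In effect, xargs then runs commands equivalent to the following, up to num_procs of them at a time (an illustrative expansion of the example above, not part of the script):
tar cjvf tar0.tar.bz2 file1 file2 file3 file4 file5
tar cjvf tar1.tar.bz2 file6 file7 file8 file9 file10
...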
Test
List of files:
$ ls
a c e g i k m n p r t
b d f h j l o q s
Generated tarballs:
$ ls /tmp/tar*
tar0.tar.bz2 tar1.tar.bz2 tar2.tar.bz2 tar3.tar.bz2
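To double-check what went into each archive, tar's t mode lists the contents without extracting, for example:
tar tjf /tmp/tar0.tar.bz2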
Here's another script. You can choose whether you want precisely one million files per segment or precisely 30 segments. I've gone with the former in this script, but the split command allows either choice (see the variant after the script).
#!/bin/bash
#
DIR="$1" # The source of the millions of files
TARDEST="$2" # Where the tarballs should be placed
# Create the million-file segments
rm -f /tmp/chunk.*
find "$DIR" -type f | split -l 1000000 - /tmp/chunk.
# Create corresponding tarballs
for CHUNK in $(cd /tmp && echo chunk.*)
do
test -f "$CHUNK" || continue
echo "Creating tarball for chunk '$CHUNK'" >&2
tar cTf "/tmp/$CHUNK" "$TARDEST/$CHUNK.tar"
rm -f "/tmp/$CHUNK"
done
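If you wanted precisely 30 segments instead of a fixed number of files per segment, GNU split can divide the list at line boundaries into a fixed number of chunks; only the split line changes (a sketch, assuming GNU coreutils):
find "$DIR" -type f | split -n l/30 - /tmp/chunk.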
There are a number of niceties that could be applied to this script. The use of /tmp/chunk. as the file-list prefix should probably be pushed out into a constant declaration, and the code shouldn't really assume it can delete anything matching /tmp/chunk.*, but I've left it this way as a proof of concept rather than a polished utility. If I were using this I would use mktemp to create a temporary directory for holding the file lists.
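A minimal sketch of that mktemp-based variant might look like the following (LISTDIR is a name I made up; DIR and TARDEST are the same as in the script above):
LISTDIR=$(mktemp -d)    # private directory for the file lists
find "$DIR" -type f | split -l 1000000 - "$LISTDIR/chunk."
for CHUNK in "$LISTDIR"/chunk.*
do
    test -f "$CHUNK" || continue
    tar cTf "$CHUNK" "$TARDEST/$(basename "$CHUNK").tar"
done
rm -rf "$LISTDIR"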
This one does precisely what was requested:
#!/bin/bash
ctr=0
# Read up to 1M lines, strip newline chars, put the results into an array named "asdf";
# stop once readarray comes back with an empty array (end of input)
while readarray -n 1000000 -t asdf && ((${#asdf[@]})); do
    ctr=$((ctr+1))
    # "${asdf[@]}" expands each entry in the array such that any special characters in
    # the filename won't cause problems
    tar czf /destination/path/asdf.${ctr}.tgz "${asdf[@]}"
    # If you don't want compression, use this instead:
    #tar cf /destination/path/asdf.${ctr}.tar "${asdf[@]}"
# process substitution is the canonical way to generate output
# for consumption by read/readarray in bash
done < <(find /source/path -not -type d)
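Each pass writes /destination/path/asdf.1.tgz, asdf.2.tgz, and so on, and any single batch can be restored independently, e.g. (with /restore/target standing in for wherever you want the files back):
tar xzf /destination/path/asdf.1.tgz -C /restore/target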
readarray (in bash) can also be used to execute a callback function, so that could potentially be re-written to resemble:
function something() {...}
find /source/path -not -type d \
| readarray -n 1000000 -t -C something asdf
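Note that readarray at the end of a pipeline normally runs in a subshell (unless bash's lastpipe option is enabled), so the array itself isn't visible afterwards; the callback is the useful part. For illustration, a progress-reporting callback might look like this sketch (progress_cb is a name I made up; -c sets how often the callback fires, and readarray passes it the index of the next array element plus the line just read):
progress_cb() { echo "listed roughly $1 files so far" >&2; }
find /source/path -not -type d \
    | readarray -t -C progress_cb -c 1000000 asdf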
GNU parallel could be leveraged to do something similar (untested; I don't have parallel installed where I'm at, so I'm winging it):
find /source/path -not -type d -print0 \
| parallel -j4 -d '\0' -N1000000 tar czf '/destination/path/thing_backup.{#}.tgz'
Since that's untested you could add the --dry-run arg to see what it'll actually do. I like this one the best, but not everyone has parallel installed. -j4 makes it use 4 jobs at a time, and -d '\0' combined with find's -print0 makes it ignore special characters in the filenames (whitespace, etc.). The rest should be self-explanatory.
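For reference, the dry-run form is just the same command with --dry-run added, which prints each tar command instead of executing it:
find /source/path -not -type d -print0 \
    | parallel --dry-run -j4 -d '\0' -N1000000 tar czf '/destination/path/thing_backup.{#}.tgz'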
Something similar could be done with parallel, but I don't like it because it generates random filenames:
find /source/path -not -type d -print0 \
| parallel -j4 -d '\0' -N1000000 --tmpdir /destination/path --files tar cz
I don't [yet?] know of a way to make it generate sequential filenames.
xargs could also be used, but unlike parallel there's no straightforward way to generate the output filename, so you'd end up doing something stupid/hacky like this (the trailing bash is just a placeholder for $0 so that "$@" keeps every filename):
find /source/path -not -type d -print0 \
| xargs -P 4 -0 -L 1000000 bash -euc 'tar czf $(mktemp --suffix=".tgz" /destination/path/backup_XXX) "$@"' bash
The OP said they didn't want to use split ... I thought that seemed weird, as cat will re-join the pieces just fine. This produces a tar and splits it into 3 GB chunks:
tar c /source/path | split -b $((3*1024*1024*1024)) - /destination/path/thing.tar.
... and this un-tars them into the current directory:
cat $(\ls -1 /destination/path/thing.tar.* | sort) | tar x
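As an aside, GNU split also accepts size suffixes, so the 3 GB figure can be written directly instead of as a shell arithmetic expression:
tar c /source/path | split -b 3G - /destination/path/thing.tar.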