Calling multiple bash scripts and running them in parallel, not in sequence
for((i=1;i<100;i++)); do nohup bash script${i}.sh & done
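If you go this route, a minimal sketch like the one below (using the same script names as above) makes the parent shell block until every background job has exited; the wait builtin is all that is needed:

for ((i=1; i<100; i++)); do nohup bash script${i}.sh & done; wait   # wait returns only after all background jobs finish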
A better way is to use GNU Parallel. GNU Parallel is simple, and it gives us more control over the jobs, including how many of them run at the same time.
In the command below, script{1..3}.sh gets expanded and the resulting names are passed as arguments to bash in parallel. Here -j0 tells parallel to run as many jobs as possible at once; by default parallel runs one job per CPU core.
$ parallel -j0 bash :::: <(ls script{1..3}.sh)
You can also pass the script names directly on the command line:
$ parallel -j0 bash ::: script{1..3}.sh
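If you want a fixed degree of parallelism instead of -j0, you can pass the job count explicitly; for example, the following sketch (reusing the same script names) keeps at most two of the three scripts running at once:

$ parallel -j2 bash ::: script{1..3}.sh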
If the second form gives you an error message, it usually means that the --tollef option is set in /etc/parallel/config; delete that option and everything will work fine.
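One way to check for and clean up that setting (just a sketch; sed -i edits the file in place and you will likely need root privileges) is:

$ grep -e '--tollef' /etc/parallel/config
$ sudo sed -i '/--tollef/d' /etc/parallel/config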
You can read GNU Parallel's man page for many more options.
If you are running the jobs from a remote machine, it is better to use screen so that the session does not get closed by network problems. nohup is not strictly necessary, because recent versions of bash ship with huponexit turned off, which prevents the parent shell from sending the HUP signal to its children when it exits. If it is not already off, turn it off with
$ shopt -u huponexit
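For the remote case, a minimal sketch with screen could look like the following; the session name parjobs is arbitrary, -dmS starts a detached named session that survives a dropped SSH connection, and -r reattaches to it later:

$ screen -dmS parjobs parallel -j0 bash ::: script{1..3}.sh
$ screen -r parjobs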
We can also use xargs to run multiple scripts in parallel.
$ ls script{1..5}.sh|xargs -n 1 -P 0 bash
Here each script name is passed to bash as a separate argument. -P 0 means that as many processes as possible run in parallel. This is also safer than relying on bash's default job control (&).
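If the script names could ever contain spaces or other unusual characters, a slightly more robust sketch (same hypothetical script names, with a fixed limit of four processes) feeds xargs NUL-delimited input instead of parsing ls output:

$ printf '%s\0' script{1..5}.sh | xargs -0 -n 1 -P 4 bash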