What is the fastest way to run a script?
Terminals these days are slower than they used to be, mainly because graphics cards no longer care about 2D acceleration. So indeed, printing to a terminal can slow down a script, particularly when scrolling is involved.
Consequently, ./script.sh is slower than ./script.sh >script.log, which in turn is slower than ./script.sh >/dev/null, because the latter involves less work. However, whether this makes enough of a difference for any practical purpose depends on how much output your script produces, and how fast. If your script writes 3 lines and exits, or if it prints 3 pages every few hours, you probably don't need to bother with redirections.
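If you want the redirection to apply no matter how the script is invoked, one option (a minimal sketch, not from the answers above) is to redirect inside the script itself with exec:

```shell
#!/bin/sh
# Sketch: redirect this script's own stdout once, near the top, so all
# subsequent output is discarded without the caller redirecting anything.
exec >/dev/null
echo "this line goes to /dev/null"
```

The same trick works with exec >script.log if you want a log file instead.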
Edit: Some quick (and completely broken) benchmarks:
In a Linux console, 240x75:

$ time (for i in {1..100000}; do echo $i 01234567890123456789012345678901234567890123456789; done)

real    3m52.053s
user    0m0.617s
sys     3m51.442s
In an xterm, 260x78:

$ time (for i in {1..100000}; do echo $i 01234567890123456789012345678901234567890123456789; done)

real    0m1.367s
user    0m0.507s
sys     0m0.104s
Redirect to a file, on a Samsung SSD 850 PRO 512GB disk:
$ time (for i in {1..100000}; do echo $i 01234567890123456789012345678901234567890123456789; done >file)

real    0m0.532s
user    0m0.464s
sys     0m0.068s
Redirect to /dev/null:

$ time (for i in {1..100000}; do echo $i 01234567890123456789012345678901234567890123456789; done >/dev/null)

real    0m0.448s
user    0m0.432s
sys     0m0.016s
I would have instinctively agreed with Satō Katsura's answer; it makes sense. However, it's easy enough to test.
I tested writing a million lines to the screen, writing (appending) to a file, and redirecting to /dev/null. I tested each of these in turn, then did five replicates. These are the commands I used.
$ time (for i in {1..1000000}; do echo foo; done)
$ time (for i in {1..1000000}; do echo foo; done > /tmp/file.log)
$ time (for i in {1..1000000}; do echo foo; done > /dev/null)
I then plotted the total times below.
As you can see, Satō Katsura's presumptions were correct. That said, as his answer also notes, the output is unlikely to be the limiting factor in practice, so the choice of output destination will rarely have a substantial effect on the overall speed of the script.
FWIW, my original answer had different code, which had the file appending and /dev/null redirect inside the loop.
$ rm /tmp/file.log; touch /tmp/file.log; time (for i in {1..1000000}; do echo foo >> /tmp/file.log; done)
$ time (for i in {1..1000000}; do echo foo > /dev/null; done)
As John Kugelman points out in the comments, this adds a lot of overhead. As the question stands, this is not really the right way to test it, but I'll leave it here as it clearly shows the cost of re-opening a file repeatedly from within the script itself.
In this case, the results are reversed.
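A hypothetical middle ground, not benchmarked above, is to open the log file once on a spare file descriptor and write to that descriptor inside the loop. This keeps the redirection on the individual echo while avoiding the per-iteration open/close that makes the looped version so slow:

```shell
# Open the log file once on file descriptor 3, write to it repeatedly,
# then close the descriptor when done.
exec 3>>/tmp/file.log
for i in {1..1000000}; do echo foo >&3; done
exec 3>&-
```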
Another way to speed up a script is to use a faster shell interpreter. Compare the speeds of a POSIX busy loop run under bash v4.4, ksh v93u+20120801, and dash v0.5.8.
bash:

time echo 'n=0; while [ $n -lt 1000000 ]; do \
echo $((n*n*n*n*n*n*n)); n=$((n+1)); done' | bash -s > /dev/null

Output:

real    0m25.146s
user    0m24.814s
sys     0m0.272s
ksh:

time echo 'n=0; while [ $n -lt 1000000 ]; do \
echo $((n*n*n*n*n*n*n)); n=$((n+1)); done' | ksh -s > /dev/null

Output:

real    0m11.767s
user    0m11.615s
sys     0m0.010s
dash:

time echo 'n=0; while [ $n -lt 1000000 ]; do \
echo $((n*n*n*n*n*n*n)); n=$((n+1)); done' | dash -s > /dev/null

Output:

real    0m4.886s
user    0m4.690s
sys     0m0.184s
dash supports a subset of the commands and syntax available in bash and ksh. A bash script that restricts itself to that common subset should also work under dash.
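One quick (though admittedly incomplete) way to check whether a script stays within that subset is to ask a POSIX shell to parse it without executing anything; myscript.sh below is a placeholder name:

```shell
# Parse-only check: exits non-zero if the script uses syntax the shell
# rejects. Substitute dash for sh to test against dash specifically.
sh -n myscript.sh && echo "parses cleanly"
```

Note that this catches only syntax errors, not runtime bashisms such as calls to bash-only builtins.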
Some bash scripts that use newer features can be converted to another interpreter. If a bash script relies heavily on those features, it may not be worth the bother: some newer bash features are improvements that are both easier to code and more efficient (despite bash being generally slower), so the dash equivalent (which might involve running several external commands) could end up slower.
When in doubt, run a test...