Calculate average execution time of a program using Bash
Total execution time vs. sum of single execution times
Careful! Dividing the sum of N rounded execution times is imprecise!
Instead, we could divide the total execution time of N iterations by N.
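To see why this matters, assume a command that really takes about 2 ms, as in the demo further below (the numbers here are illustrative): each single run prints a rounded time, so summing those loses everything below the rounding step.

sum of 1000 rounded runs : 1000 x 0.00  ->  average 0.000000
one timed loop of 1000   : real 1.95    ->  average 1.95/1000 = 0.00195

Hence the following function times the whole loop once: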
avg_time_alt() {
    local -i n=$1
    local foo real sys user
    shift
    (($# > 0)) || return            # bail if no command given
    # Time the whole loop once, instead of summing rounded per-run times.
    # The loop runs in the process substitution's subshell, so decrementing
    # n there leaves the n used for the division below untouched.
    { read foo real; read foo user; read foo sys ;} < <(
        { time -p for((;n--;)){ "$@" &>/dev/null ;} ;} 2>&1
    )
    printf "real: %.5f\nuser: %.5f\nsys : %.5f\n" $(
        bc -l <<<"$real/$n;$user/$n;$sys/$n;" )
}
Note: this uses bc instead of awk to compute the average. For the demo, we also create a small bc script to use as a test workload:
printf >/tmp/test-pi.bc "scale=%d;\npi=4*a(1);\nquit\n" 60
This script computes π to 60 decimal places, then exits quietly. (You can adapt the number of decimals to your host's speed.)
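The generated file then contains:

scale=60;
pi=4*a(1);
quit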
Demo:
avg_time_alt 1000 sleep .001
real: 0.00195
user: 0.00008
sys : 0.00016
avg_time_alt 1000 bc -ql /tmp/test-pi.bc
real: 0.00172
user: 0.00120
sys : 0.00058
Whereas codeforester's function (shown further below), which averages the already-rounded per-run times, will answer:
avg_time 1000 sleep .001
real 0.000000
user 0.000000
sys 0.000000
avg_time 1000 bc -ql /tmp/test-pi.bc
real 0.000000
user 0.000000
sys 0.000000
Alternative, inspired by choroba's answer, using Linux's /proc
Ok, you could consider:
avgByProc() {
    local foo start end n=$1 e=$1 values times
    shift
    export n
    # Grab a nanosecond timestamp from the 3rd line of /proc/timer_list
    # ("now at %d nsecs").
    {
        read foo
        read foo
        read foo foo start foo
    } </proc/timer_list
    mapfile values < <(
        for((;n--;)){ "$@" &>/dev/null ;}
        # Fields 14-17 of /proc/self/stat are utime, stime, cutime and
        # cstime, in clock ticks (assumed here to be 1/100th sec).
        read -a endstat < /proc/self/stat
        {
            read foo
            read foo
            read foo foo end foo
        } </proc/timer_list
        printf -v times "%s/100/$e;" "${endstat[@]:13:4}"
        bc -l <<<"$((end-start))/10^9/$e;$times"
    )
    printf -v fmt "%-7s: %%.5f\\n" real utime stime cutime cstime
    printf "$fmt" ${values[@]}   # unquoted: word splitting trims bc's newlines
}
This is based on /proc:
man 5 proc | grep [su]time\\\|timer.list | sed 's/^/> /'
>  (14) utime %lu
>  (15) stime %lu
>  (16) cutime %ld
>  (17) cstime %ld
> /proc/timer_list (since Linux 2.6.21)
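For a quick check (the man page numbers fields from 1, while the bash array is 0-based, so utime lands at index 13):

read -a stat < /proc/self/stat
echo "utime=${stat[13]} stime=${stat[14]} cutime=${stat[15]} cstime=${stat[16]}"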
Now:
avgByProc 1000 sleep .001
real : 0.00242
utime : 0.00015
stime : 0.00021
cutime : 0.00082
cstime : 0.00020
Here utime and stime represent user time and system time for bash itself, while cutime and cstime represent child user time and child system time, which are the most interesting.
Note: in this case (sleep), the command doesn't use a lot of resources.
avgByProc 1000 bc -ql /tmp/test-pi.bc
real : 0.00175
utime : 0.00015
stime : 0.00025
cutime : 0.00108
cstime : 0.00032
This becomes clearer: bc does its computation in the child process, so most of the time shows up in cutime.
Of course, as timer_list and self/stat are read successively but not atomically, differences may appear between real (nanosecond based) and c?[su]time (tick based, i.e. 1/100th sec)!
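For instance, with ticks of 1/100th sec, each of the four c?[su]time sums is quantized in 0.01 s steps; spread over 1000 runs, that is a granularity of 0.00001 s on each printed average, while real comes from a nanosecond counter.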
You could write a loop, collect the output of the time command, and pipe it to awk to compute the average:
avg_time() {
    #
    # usage: avg_time n command ...
    #
    local n=$1 i; shift
    (($# > 0)) || return                   # bail if no command given
    for ((i = 0; i < n; i++)); do
        { time -p "$@" &>/dev/null; } 2>&1 # ignore the output of the command
                                           # but collect time's output in stdout
    done | awk '
        /real/ { real = real + $2; nr++ }
        /user/ { user = user + $2; nu++ }
        /sys/  { sys  = sys  + $2; ns++ }
        END {
            if (nr>0) printf("real %f\n", real/nr);
            if (nu>0) printf("user %f\n", user/nu);
            if (ns>0) printf("sys %f\n",  sys/ns)
        }'
}
Example:
avg_time 5 sleep 1
would give you
real 1.000000
user 0.000000
sys 0.000000
This can be easily enhanced to:
- sleep for a given amount of time between executions (sketched below)
- sleep for a random time (within a certain range) between executions
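A minimal sketch of the first variant, assuming a fixed delay passed as a second argument; the name avg_time_delayed and the delay parameter are illustrative additions, not part of the original function:

avg_time_delayed() {
    #
    # usage: avg_time_delayed n delay command ...
    #
    local n=$1 delay=$2 i; shift 2
    (($# > 0)) || return          # bail if no command given
    for ((i = 0; i < n; i++)); do
        { time -p "$@" &>/dev/null; } 2>&1
        sleep "$delay"            # pause between executions; not wrapped
                                  # in time, so it doesn't skew the averages
    done | awk '
        /real/ { real += $2; nr++ }
        /user/ { user += $2; nu++ }
        /sys/  { sys  += $2; ns++ }
        END {
            if (nr>0) printf("real %f\n", real/nr);
            if (nu>0) printf("user %f\n", user/nu);
            if (ns>0) printf("sys %f\n",  sys/ns)
        }'
}

For the random variant, the sleep could become something like sleep ".$((RANDOM % 10))" (0 to 0.9 s).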
Meaning of time -p, from man time:
-p When in the POSIX locale, use the precise traditional format "real %f\nuser %f\nsys %f\n" (with numbers in seconds) where the number of decimals in the output for %f is unspecified but is sufficient to express the clock tick accuracy, and at least one.
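On many systems that means two decimals, which is why a single 1 ms run rounds down to zero:

time -p sleep .001
real 0.00
user 0.00
sys 0.00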
You may want to check out this command-line benchmarking tool as well:
sharkdp/hyperfine
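A minimal invocation (hyperfine chooses the number of runs itself; options such as --runs and --warmup let you override its defaults):

hyperfine 'sleep 0.001'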