Difference between AbsoluteTiming and Timing
Which one we use depends upon what we are trying to determine. If our goal is to measure algorithmic time complexity, Timing (used carefully) is the tool. If we want to measure how long a computation took to run in our environment, AbsoluteTiming is what we need.
Timing measures the amount of CPU time consumed by the kernel to evaluate a given expression. The result is only approximate since, depending upon the underlying platform, it may or may not include CPU time used for system calls, page faults, process swaps, etc. It will also not include any CPU time used by parallel processes and threads, even other Mathematica kernels.
AbsoluteTiming measures the amount of elapsed time (i.e. wall-clock time) to evaluate an expression. Again, the result is approximate due to platform-specific overhead and clock resolution.
Let's look at some examples.
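As a quick single-kernel sketch of the distinction before the parallel examples (the exact numbers will vary with machine and platform): Pause consumes essentially no CPU time, so Timing reports nearly zero, while AbsoluteTiming reports the full second we waited.
Pause[1] // Timing
(* roughly {0., Null} -- the kernel is idle while it pauses *)
Pause[1] // AbsoluteTiming
(* roughly {1.00, Null} -- a full second of wall-clock time elapsed *)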
Let's try evaluating a computation-heavy expression across multiple kernels. First, we'll measure the CPU time using Timing:
bigSum[n_] := Sum[RandomInteger[10]&[], {i, 1, n}] (* &[] yields a fresh RandomInteger[10] on every iteration *)
SeedRandom[0]
ParallelTable[bigSum[i] // Timing, {i, {2^22, 2^23}}] // Timing
(* {0.015,{{2.98,20964693},{5.913,41923486}}} *)
We see that the master kernel racked up only 0.015 seconds of CPU time since it was spending most of its time twiddling its thumbs waiting for the subkernels to finish. The two subkernels were busy though, using 2.98 and 5.913 seconds of CPU time each. The total CPU time used for the entire computation was 0.015s + 2.98s + 5.913s = 8.908s.
Now let's measure the same computation using AbsoluteTiming to get the elapsed time:
SeedRandom[0]
ParallelTable[bigSum[i] // AbsoluteTiming, {i, {2^22, 2^23}}] // AbsoluteTiming
(* {5.9904000,{{2.9952000,20982605},{5.9592000,41944028}}} *)
We see that the first subkernel was done in 2.995s of elapsed time. The second subkernel needed 5.959s. The master kernel took just a little bit longer since it had to assemble the results, running for 5.990s. Unlike CPU times, these quantities do not add up; the total elapsed time for the expression was simply the largest of them, 5.990s.
We can contrast these results with those from a computation that is not CPU intensive:
ParallelTable[(Pause[i*5];i) // Timing, {i, 1, 2}] // Timing
(* {0.,{{0.,1},{0.,2}}} *)
This time we see that, for practical purposes, none of the kernels used any CPU time. They did, however, take real time to execute:
ParallelTable[(Pause[i*5];i) // AbsoluteTiming, {i, 1, 2}] // AbsoluteTiming
(*{11.7624000,{{5.0076000,1},{10.0152000,2}}}*)
From these results we can see that Timing is valuable when we are trying to determine the CPU load of a computation. This measure has a strong correlation to the time complexity of an algorithm, provided we take care to track the CPU time in all relevant processes.
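For instance, here is a sketch of how we might account for all of the CPU time in the parallel example above (assuming the same result shape: the master kernel's time first, then each subkernel's Timing pair):
{masterCPU, pairs} = ParallelTable[bigSum[i] // Timing, {i, {2^22, 2^23}}] // Timing;
totalCPU = masterCPU + Total[pairs[[All, 1]]]
(* total CPU time across the master kernel and both subkernels *)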
AbsoluteTiming is valuable when we don't really care about CPU resource usage or time complexity, but are primarily interested in how long a computation will take (to know whether we should take a coffee break or a vacation while we wait). It can also be useful to estimate the computational cost of external processes that we cannot monitor directly (e.g. protected system processes or remote machines).
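As a sketch of the latter point (assuming a Unix-like system where the sleep shell command is available): the external process consumes none of the kernel's CPU time, so Timing sees almost nothing, yet AbsoluteTiming shows how long we actually waited for it.
Run["sleep 2"] // Timing
(* CPU time close to zero: the work happens in another process *)
Run["sleep 2"] // AbsoluteTiming
(* roughly two seconds of elapsed time *)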
Beware that neither Timing nor AbsoluteTiming will account for time taken to render any computed results in the front end:
Format[slowRender[]] := Null /; (Pause[5]; False) (* formatting the output pauses for 5 seconds, but only after the timed evaluation has finished *)
slowRender[] // Timing // AbsoluteTiming
(* {6.15813*10^-6, {0., slowRender[]}} *)
The kernel code that measures timing is unaware of the activities of the front end. Rendering time can be significant for large amounts of result data or for complex visualizations.
Update, 2015
The examples in this response were written in 2012 using Mathematica version 8 on Windows. As noted in Incorrect Timing of Total, version 10.3 offloads more processing to subsidiary threads whose CPU time cannot be tracked using Timing (nor AbsoluteTiming, presuming there is more than one thread). Be aware of the possibility of such behaviour when the goal is to account for all CPU time consumed.
The documentation pages for both Timing and AbsoluteTiming allude to this problem:
On certain computer systems with multiple CPUs, the Wolfram Language kernel may sometimes spawn additional threads on different CPUs. On some operating systems, Timing may ignore these additional threads. On other operating systems, it may give the total time spent in all threads, which may exceed the result from AbsoluteTiming.
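A hedged sketch of the thread issue (behaviour depends on version, operating system, and hardware; on a single-threaded setup the two results will simply agree):
data = RandomReal[1, 10^8]; (* a large packed array; reduce the size if memory is tight *)
Timing[Total[data]]
AbsoluteTiming[Total[data]]
(* on systems where Total runs in several threads, the CPU time reported by Timing
   may be noticeably smaller -- or, per the documentation, larger -- than the
   elapsed time reported by AbsoluteTiming *)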
I used to use Timing[] for all my performance evaluations ... and have completely stopped doing so ... I now use AbsoluteTiming[] for everything. The reason for my switch is predominantly the advent of parallel processors and multiple kernels.
If you evaluate on a multiprocessor machine:
Timing[ blah ]
... Mathematica returns the time taken for ONLY the master kernel to issue the instruction to the slave kernels and manage them ... which might be 0.01 seconds; the slaves might spend an hour each on the calculation, but the time recorded by the master kernel is 0.01 seconds and that is what is reported back to you. In other words, Timing[] in a multiprocessor environment has become highly misleading and largely pointless. It is not the timing function you generally want.
So, I now use AbsoluteTiming for everything! It does what you expect Timing to do. For me, Timing is to be avoided.