Linux perf events: cpu-clock and task-clock - what is the difference
According to this message, they measure the same thing.
They just differ in when they sample.
cpu-clock is wall-clock based, so samples are taken at regular intervals of wall time. I believe task-clock is relative to the task's run time, so samples are taken at regular intervals of the process's runtime.
When I run it on my multi-threaded app, it indeed shows nearly identical values.
1) By default, `perf stat` shows `task-clock`, and does not show `cpu-clock`. Therefore we can tell `task-clock` was expected to be much more useful.
2) `cpu-clock` was simply broken, and has not been fixed for many years. It is best to ignore it.

It was intended that `cpu-clock` for `sleep 1` would show about 1 second, while `task-clock` would show close to zero. It would have made sense to use `cpu-clock` to read wall-clock time; you could then look at the ratio between `cpu-clock` and `task-clock`.

But in the current implementation, `cpu-clock` is equivalent to `task-clock`. It is even possible that "fixing" the existing counter would break some userspace program that depends on the current behavior. If such a program exists, Linux might not be able to "fix" this counter and might need to define a new counter instead.
Exception: starting with v4.7-rc1, when profiling a CPU or CPUs, as opposed to a specific task, e.g. with `perf stat -a`, perf shows `cpu-clock` instead of `task-clock`. In this specific case the two counters were intended to be equivalent, and the original intention for `cpu-clock` makes more sense here. So for `perf stat -a`, you can simply ignore this difference and interpret the counter as `task-clock`.
If you write your own code which profiles a CPU or CPUs, as opposed to a specific task, perhaps it would be clearest to follow the implementation of `perf stat -a`. But you might link to this question, to explain what your code is doing :-).
Subject: Re: perf: some questions about perf software events
From: Peter Zijlstra

On Sat, 2010-11-27 at 14:28 +0100, Franck Bui-Huu wrote:
Peter Zijlstra writes:
On Wed, 2010-11-24 at 12:35 +0100, Franck Bui-Huu wrote:
[...]
Also I'm currently not seeing any real differences between cpu-clock and task-clock events. They both seem to count the time elapsed when the task is running on a CPU. Am I wrong ?
No, Francis already noticed that, I probably wrecked it when I added the multi-pmu stuff, its on my todo list to look at (Francis also handed me a little patchlet), but I keep getting distracted with other stuff :/
OK.
Does it make sense to adjust the period for both of them ?
Also, when creating a task clock event, passing 'pid=-1' to sys_perf_event_open() doesn't really make sense, does it ?
Same with cpu clock and 'pid=n': whatever value, the event measure the cpu wall time clock.
Perhaps proposing only one clock in the API and internally bind this clock to the cpu or task clock depending on pid or cpu parameters would have been better ?
No, it actually makes sense to count both cpu and task clock on a task (cpu clock basically being wall-time).
On a more superficial level, `perf stat` output for `cpu-clock` can differ slightly from that for `task-clock` in perf versions earlier than v4.7-rc1. For example, it may print "CPUs utilized" for `task-clock` but not for `cpu-clock`.
Generally speaking: The cpu-clock event measures the passage of time. It uses the Linux CPU clock as the timing source.
Here is an in-depth article on finding execution hot spots with perf: http://sandsoftwaresound.net/perf/perf-tutorial-hot-spots/
The `task-clock` value tells you how parallel your job has been, i.e. how many CPUs were used. This compendium contains detailed information on the output generated by perf: https://doc.zih.tu-dresden.de/hpc-wiki/bin/view/Compendium/PerfTools
There is also a whole lot of information here: https://stackoverflow.com/a/20378648/8223204