What is the accuracy of interval timers in Linux?
You're right not to expect timers to fire early, and they don't. The apparent early firing is because you're not measuring the time since the previous timer expired; you're measuring the time since the previous gettimeofday() call. If there was a delay between the timer expiring and the process actually getting scheduled, you will see that gettimeofday() call running late and the next one running early by the same amount.
Instead of logging the difference between successive gettimeofday() calls, try logging the absolute times returned, then compare each returned time against N * 100 ms after the initial time.
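For instance, here is a minimal sketch of that measurement (my own reconstruction, not your original program), using a 100 ms setitimer() period and comparing each wake-up against start + N * 100 ms rather than against the previous reading:

    #include <signal.h>
    #include <stdio.h>
    #include <sys/time.h>
    #include <unistd.h>

    static volatile sig_atomic_t ticks;

    static void on_alarm(int sig) { (void)sig; ticks++; }

    int main(void)
    {
        struct sigaction sa = { .sa_handler = on_alarm };
        sigaction(SIGALRM, &sa, NULL);

        struct timeval start;
        gettimeofday(&start, NULL);               /* reference point for N * 100 ms */

        struct itimerval iv = {
            .it_interval = { .tv_sec = 0, .tv_usec = 100000 },  /* 100 ms period */
            .it_value    = { .tv_sec = 0, .tv_usec = 100000 },  /* first expiry  */
        };
        setitimer(ITIMER_REAL, &iv, NULL);

        for (;;) {
            pause();                              /* wait for the next SIGALRM */
            int n = ticks;                        /* which expiry we just saw */
            struct timeval now;
            gettimeofday(&now, NULL);

            long elapsed_us  = (now.tv_sec - start.tv_sec) * 1000000L
                             + (now.tv_usec - start.tv_usec);
            long expected_us = (long)n * 100000L; /* start + N * 100 ms */
            printf("tick %3d: lateness %+ld us\n", n, elapsed_us - expected_us);
            if (n >= 100)
                break;
        }
        return 0;
    }

With this comparison, scheduling delay shows up as a positive lateness on one tick without making the next tick look early.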
If you want PREEMPT_RT to help you, you will need to set a real-time scheduling policy for your test program (SCHED_FIFO or SCHED_RR), which requires root.
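A minimal sketch of doing that for a single-threaded test program (the priority value 80 is an arbitrary choice):

    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        /* SCHED_FIFO priorities run from 1 to 99; 80 is an arbitrary choice */
        struct sched_param sp = { .sched_priority = 80 };

        /* pid 0 means the calling process; fails with EPERM without root
         * (or CAP_SYS_NICE) */
        if (sched_setscheduler(0, SCHED_FIFO, &sp) == -1) {
            perror("sched_setscheduler");
            return 1;
        }

        /* ... run the timer test from here on under the real-time policy ... */
        return 0;
    }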
I made some changes to your code, mainly replacing the timer calls as follows, and ran the process as an RT process (SCHED_FIFO); a sketch of the substitution follows the list.
setitimer() -> timer_create()/timer_settime()
gettimeofday() -> clock_gettime()
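A rough sketch of that substitution (my reconstruction, not the exact test code): a CLOCK_MONOTONIC POSIX timer delivering SIGRTMIN every 1 ms, with each wake-up timestamped by clock_gettime() and compared against start + N * 1 ms. On older glibc you need to link with -lrt.

    #include <signal.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    static volatile sig_atomic_t ticks;

    static void on_tick(int sig) { (void)sig; ticks++; }

    int main(void)
    {
        struct sigaction sa = { .sa_handler = on_tick };
        sigaction(SIGRTMIN, &sa, NULL);

        /* deliver SIGRTMIN on every expiry of a CLOCK_MONOTONIC timer */
        struct sigevent sev = {
            .sigev_notify = SIGEV_SIGNAL,
            .sigev_signo  = SIGRTMIN,
        };
        timer_t tid;
        if (timer_create(CLOCK_MONOTONIC, &sev, &tid) == -1) {
            perror("timer_create");
            return 1;
        }

        struct timespec start;
        clock_gettime(CLOCK_MONOTONIC, &start);

        struct itimerspec its = {
            .it_interval = { .tv_sec = 0, .tv_nsec = 1000000 },  /* 1 ms period */
            .it_value    = { .tv_sec = 0, .tv_nsec = 1000000 },  /* first expiry */
        };
        timer_settime(tid, 0, &its, NULL);

        for (;;) {
            pause();                              /* wait for the next SIGRTMIN */
            int n = ticks;
            struct timespec now;
            clock_gettime(CLOCK_MONOTONIC, &now);

            long elapsed_ns  = (now.tv_sec - start.tv_sec) * 1000000000L
                             + (now.tv_nsec - start.tv_nsec);
            long expected_ns = (long)n * 1000000L;               /* N * 1 ms */
            printf("tick %d: lateness %+ld ns\n", n, elapsed_ns - expected_ns);
            if (n >= 10000)
                break;
        }
        timer_delete(tid);
        return 0;
    }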
My testbed is an i9-9900K CPU running a PREEMPT_RT-patched Linux 5.0.21 kernel. The timer interval is 1 ms, and the program ran for about 10 hours to generate the following result.
I also ran Cyclictest (based on nanosleep()) on my machine, and it showed better latency control (maximum latency under 15 µs). So, in my opinion, if you want to build a high-resolution timer yourself, a standalone RT thread running nanosleep() on an isolated core may be helpful; a sketch of that approach is below. I am new to RT systems, so comments are welcome.