How expensive is calling time(NULL) in a server loop?
It's a system call, as the other answers say, and the other answers give you a good way to measure the cost on your system. (Once in the kernel it doesn't have to do much work, so the cost is close to pure syscall overhead, and Linux has done what it can to make syscalls efficient. In that sense you can consider it pretty well optimized.)
Unlike the other answers, I wouldn't consider this so cheap as to be automatically not worth worrying about. If this is in an inner loop, it depends on what else you're doing in your inner loop. If this is a server processing requests, it's probably making many syscalls per request, and one more will indeed not be much of a change in the cost of each request. However, I have seen code where the syscall overhead from calling time() (or gettimeofday(), which is what it really boils down to) does have a detrimental impact.
If you're worried about the cost, the next thing to ask is what cheaper ways of finding the time are available. In general, there's no cheaper good way. If you're on x86, you can ask the CPU directly with the rdtsc instruction (and other CPU architectures likely have an analog) -- it's a single, unprivileged assembly instruction, so you can drop it into your code anywhere. But there are a lot of pitfalls: rdtsc doesn't always increase at a predictable rate, especially if the CPU speed changes for power management (this depends on the precise CPU model); the values may not be synchronized across CPUs; and so on. The OS keeps track of all this and will give you the friendly, easy-to-use version of the information when you call gettimeofday().
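
For the curious, reading the TSC from C looks roughly like this (a sketch using the __rdtsc() intrinsic that GCC and Clang provide in x86intrin.h; all the caveats above apply, so the deltas are raw tick counts, not wall-clock time):

#include <stdio.h>
#include <x86intrin.h>   /* __rdtsc() on GCC/Clang, x86 only */

int main(void)
{
    unsigned long long start, end;

    start = __rdtsc();
    /* ... the work you want to measure ... */
    end = __rdtsc();

    /* The difference is in TSC ticks, not seconds: the tick rate can
     * vary and may not match across cores, which is exactly the
     * bookkeeping that gettimeofday() hides from you. */
    printf("elapsed ticks: %llu\n", end - start);
    return 0;
}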
Fetching the current time involves a system call to Linux. As suggested by Vilx, it's rather easy to benchmark:
#include <time.h>

int main(void)
{
    int i;

    /* Call time() ten million times; time the whole run externally,
       e.g. with the shell's `time` builtin. */
    for (i = 0; i < 10000000; i++)
        time(NULL);
    return 0;
}
Running this program takes 6.26s on my wimpy 1.6GHz Atom 330 with a 64-bit kernel, equating to approximately 1002 CPU cycles per call (6.26s * 1.6G cycles per second / 10M iters ≈ 1002 cycles).
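
If you'd rather have the program time itself than reach for an external stopwatch, a self-timing variant is easy to write (a sketch assuming POSIX clock_gettime with CLOCK_MONOTONIC; on glibc older than 2.17 you'd need to link with -lrt):

#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec start, end;
    int i;

    clock_gettime(CLOCK_MONOTONIC, &start);
    for (i = 0; i < 10000000; i++)
        time(NULL);
    clock_gettime(CLOCK_MONOTONIC, &end);

    /* Report the average cost of one time(NULL) call in nanoseconds. */
    double elapsed = (end.tv_sec - start.tv_sec)
                   + (end.tv_nsec - start.tv_nsec) / 1e9;
    printf("%.1f ns per call\n", elapsed / 10000000 * 1e9);
    return 0;
}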
This certainly doesn't warrant much concern, as noted by others.
On modern Linux it's not a system call at all, and it's really fast: it takes less than 10 cycles. It's implemented in the vDSO, which is a user-space call.
See: https://github.com/torvalds/linux/blob/dd53a4214d4ff450b66ca7d2e51d9369e3266ebf/arch/x86/entry/vdso/vclock_gettime.c#L318
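
One way to check this on your own box (a sketch; vdso_check.c is just an illustrative name, and the exact behavior depends on your glibc and kernel) is to hammer time() in a loop and run the binary under strace -c. If the vDSO is serving the call, no time- or clock-related system calls show up in the summary:

/* vdso_check.c -- compile with `gcc -O2 vdso_check.c` and run under
 * `strace -c ./a.out`. On a modern x86-64 kernel with a current glibc,
 * time() is answered by the vDSO in user space, so the strace summary
 * should show no time/clock_gettime syscalls from the loop.
 * (Assumption: your libc routes time() through the vDSO.) */
#include <time.h>

int main(void)
{
    time_t acc = 0;
    int i;

    for (i = 0; i < 1000000; i++)
        acc += time(NULL);   /* use the result so the calls aren't elided */
    return (int)(acc & 1);
}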