Why is buffering in C++ important?
The main issue with writing to disk is that the time taken is not a linear function of the number of bytes, but an affine one with a huge constant.
In computing terms, this means that for IO you get good throughput (lower than memory, but still quite good), yet poor latency (a tad better than network, normally).
If you look at evaluation articles of HDD or SSD, you'll notice that the read/write tests are separated in two categories:
- throughput in random reads
- throughput in contiguous reads
The latter is normally significantly greater than the former.
Normally, the OS and the IO library should abstract this for you, but as you noticed, if your routine is IO intensive, you might gain by increasing the buffer size. This is normal: the library is generally tailored for all kinds of uses and thus offers a good middle ground for average applications. If your application is not "average", then it might not perform as fast as it could.
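If you want to experiment with a bigger buffer, one knob is `std::streambuf::pubsetbuf`. A minimal sketch (the buffer size and file name are arbitrary placeholders, and since the effect of `pubsetbuf` on an already-open file stream is implementation-defined, the buffer is installed before `open`):

```cpp
#include <fstream>
#include <vector>

int main() {
    // A larger stream buffer means fewer underlying write syscalls.
    std::vector<char> buf(1 << 20);  // 1 MiB buffer, size chosen arbitrarily

    std::ofstream out;
    // Install the buffer before opening; behavior after open is implementation-defined.
    out.rdbuf()->pubsetbuf(buf.data(), static_cast<std::streamsize>(buf.size()));
    out.open("output.txt");          // hypothetical file name

    for (int i = 0; i < 200000; ++i)
        out << "line " << i << '\n'; // '\n' instead of std::endl avoids a flush per line
}
```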
As far as file operations are concerned, writing to memory (RAM) is always faster than writing directly to the file on disk.
For illustration, let's define:
- each write IO operation to a file on the disk costs 1 ms
- each write IO operation to a file on the disk over a network costs 5 ms
- each write IO operation to the memory costs 0.5 ms
Let's say we have to write some data to a file 100 times.
Case 1: Directly Writing to File On Disk
100 times x 1 ms = 100 ms
Case 2: Directly Writing to File On Disk Over Network
100 times x 5 ms = 500 ms
Case 3: Buffering in Memory before Writing to File on Disk
(100 times x 0.5 ms) + 1 ms = 51 ms
Case 4: Buffering in Memory before Writing to File on Disk Over Network
(100 times x 0.5 ms) + 5 ms = 55 ms
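As a rough sketch of Case 3 in code (the file name and record format are made up for illustration): accumulate the 100 cheap writes in memory, then pay the expensive disk write once.

```cpp
#include <fstream>
#include <sstream>

int main() {
    // Case 3: 100 cheap writes into an in-memory buffer...
    std::ostringstream memory;
    for (int i = 0; i < 100; ++i)
        memory << "record " << i << '\n';

    // ...then a single expensive write to the file on disk.
    std::ofstream disk("data.txt");  // hypothetical file name
    disk << memory.str();
}
```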
Conclusion
Buffering in memory is always faster than the direct operation. However, if your system is low on memory and has to swap to the page file, it becomes slow again. Thus you have to balance your IO operations between memory and disk/network.
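One common way to strike that balance is to flush the in-memory buffer in fixed-size chunks, so memory use stays bounded while the disk still sees large, infrequent writes. A sketch, with an arbitrarily chosen threshold and file name:

```cpp
#include <fstream>
#include <sstream>

int main() {
    const int kFlushEvery = 1000;        // flush threshold, chosen arbitrarily
    std::ofstream disk("data.txt");      // hypothetical file name
    std::ostringstream memory;

    for (int i = 0; i < 200000; ++i) {
        memory << "record " << i << '\n';
        if ((i + 1) % kFlushEvery == 0) {
            disk << memory.str();        // one large write to disk
            memory.str("");              // reset the in-memory buffer
            memory.clear();
        }
    }
    disk << memory.str();                // write whatever is left over
}
```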
What compiler/platform are you using? I see no significant difference here (RedHat, gcc 4.1.2); both programs take 5-6 seconds to finish (but "user" time is about 150 ms). If I redirect output to a file (through the shell), total time is about 300 ms (so most of the 6 seconds is spent waiting for my console to catch up to the program).
In other words, output should be buffered by default, so I'm curious why you're seeing such a huge speedup.
3 tangentially-related notes:
- Your program has an off-by-one error in that you only print 199999 times instead of the stated 200000 (either start with `i = 0` or end with `i <= 200000`).
- You're mixing `printf` syntax with `cout` syntax when outputting `count`... the fix for that is obvious enough.
- Disabling `sync_with_stdio` produces a small speedup (about 5%) for me when outputting to the console, but the impact is negligible when redirecting to a file. This is a micro-optimization which you probably wouldn't need in most cases (IMHO). A sketch of all three fixes together follows this list.