Can't understand Belady's anomaly
The reason that, with FIFO, adding page frames can increase the fault rate for some access patterns is that FIFO never refreshes a page's position in the queue when the page is hit. With more frames, a recently requested page can keep sitting near the bottom (oldest end) of the FIFO queue and get evicted just before it is needed again; with fewer frames, that same request would have been a fault, re-inserting the page at the newest end and keeping it resident for the requests that follow.
Consider the third time that "3" is requested in the Wikipedia example here: http://en.wikipedia.org/wiki/Belady%27s_anomaly
With 3 page frames that request is a hit (3 was re-loaded on the fifth request), but with 4 page frames it is a fault (3 was evicted when 4 was loaded on the seventh request).
Page faults are marked with an "f".
Example 1 (3 page frames):
Page Requests  3   2   1   0   3   2   4   3   2   1   0   4
Newest Page    3f  2f  1f  0f  3f  2f  4f  4   4   1f  0f  0
                   3   2   1   0   3   2   2   2   4   1   1
Oldest Page            3   2   1   0   3   3   3   2   4   4
Example 2 (4 page frames):
Page Requests  3   2   1   0   3   2   4   3   2   1   0   4
Newest Page    3f  2f  1f  0f  0   0   4f  3f  2f  1f  0f  4f
                   3   2   1   1   1   0   4   3   2   1   0
                       3   2   2   2   1   0   4   3   2   1
Oldest Page                3   3   3   2   1   0   4   3   2
In the first example (with 3 page frames), there are 9 page faults.
In the second example (with 4 page frames), there are 10 page faults.
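To make this concrete, here is a minimal sketch (Python; fifo_faults is just an illustrative helper name, not a library function) that simulates FIFO replacement on the reference string from the Wikipedia example and reproduces the 9- and 10-fault counts above:

```python
from collections import deque

def fifo_faults(requests, num_frames):
    """Count page faults for FIFO replacement with the given number of frames."""
    frames = deque()              # leftmost = oldest page, rightmost = newest page
    faults = 0
    for page in requests:
        if page in frames:
            continue              # hit: FIFO does NOT move the page to the newest position
        faults += 1               # miss: load the page, evicting the oldest if the cache is full
        if len(frames) == num_frames:
            frames.popleft()
        frames.append(page)
    return faults

requests = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(fifo_faults(requests, 3))   # 9 faults with 3 frames
print(fifo_faults(requests, 4))   # 10 faults with 4 frames
```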
When using FIFO, increasing the size of the cache changes the order in which items are removed, which in some cases can increase the fault rate.
Belady's Anomaly does not say anything about the general trend of fault rates with respect to cache size, so your reasoning (viewing the cache as a pipe) is not wrong in the general case.
In summary: Belady's Anomaly points out that a larger FIFO cache can cause items to be re-inserted at the newest end of the queue later (or less often) than a smaller cache would, and a particular (and possibly rare) access pattern can exploit this so that the larger cache ends up with the higher fault rate.
This statement is wrong because it is overgeneralized:
"Belady's Anomaly states that when using a FIFO page replacement policy, when adding more page space we'll have more page faults."
This is a corrected version:
"Belady's Anomaly states that when using a FIFO page replacement policy, when adding more page space, some memory access patterns will actually result in more page faults."
In other words... whether the anomaly is observed depends on the actual memory access pattern.
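If you want to see how pattern-dependent this is, here is a small, hypothetical experiment (not from the original example; the page count, string length, and trial count are arbitrary choices) that re-runs the same FIFO simulation over random reference strings and counts how often 4 frames faults more than 3:

```python
import random
from collections import deque

def fifo_faults(requests, num_frames):
    # Same FIFO fault counter as in the earlier sketch.
    frames, faults = deque(), 0
    for page in requests:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.popleft()
            frames.append(page)
    return faults

# Random 12-request strings over 5 distinct pages (the shape of the Wikipedia
# example); count how many of them fault more with 4 frames than with 3.
random.seed(0)
trials = 10_000
anomalous = 0
for _ in range(trials):
    refs = [random.randrange(5) for _ in range(12)]
    if fifo_faults(refs, 4) > fifo_faults(refs, 3):
        anomalous += 1
print(f"{anomalous} of {trials} random strings faulted more with 4 frames")
```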