Apache Benchmark - concurrency and number of requests

It means a single test with a total of 100 requests, keeping 20 requests open at all times. I think the misconception you have is that requests all take the same amount of time, which is virtually never the case. Instead of issuing requests in batches of 20, ab simply starts with 20 requests and issues a new one each time an existing request finishes.

For example, testing with ab -n 10 -c 3 would start with 3 concurrent requests:

[1, 2, 3]

Let's say #2 finishes first, ab replaces it with a fourth:

[1, 4, 3]

... then #1 may finish, replaced by a fifth:

[5, 4, 3]

... Then #3 finishes:

[5, 4, 6]

... and so on, until a total of 10 requests have been made. (As requests 8, 9, and 10 complete the concurrency tapers off to 0 of course.)
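The replacement behavior above can be sketched with a small simulation. This is a hypothetical model of ab's scheduling, not its actual source: each request gets a random duration, and whenever one finishes, a new one is issued until the total is reached.

```python
# Hypothetical simulation of ab's scheduling: start `concurrency`
# requests, replace each one as it finishes, until `total` are issued.
import heapq
import random

def simulate(total, concurrency, seed=0):
    rng = random.Random(seed)
    in_flight = []   # min-heap of (finish_time, request_id)
    timeline = []    # (event_time, requests_in_flight_after_event)
    issued = 0

    # Start with `concurrency` requests in flight.
    while issued < min(concurrency, total):
        issued += 1
        heapq.heappush(in_flight, (rng.uniform(0.1, 1.0), issued))

    # Each time a request finishes, issue the next one (if any remain).
    while in_flight:
        now, _ = heapq.heappop(in_flight)
        if issued < total:
            issued += 1
            heapq.heappush(in_flight, (now + rng.uniform(0.1, 1.0), issued))
        timeline.append((now, len(in_flight)))
    return issued, timeline

issued, timeline = simulate(total=10, concurrency=3)
print(issued)                          # all 10 requests get issued
print(max(c for _, c in timeline))     # concurrency never exceeds 3
print(timeline[-1][1])                 # tapers off to 0 at the end
```

Note that concurrency only tapers at the very end, exactly as in the walkthrough: the last few events pop requests without pushing replacements.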

Make sense?

As to your question about why you see results with more failures than total requests... I don't know the answer to that. I can't say I've seen that. Can you post links or test cases that show this?

Update: In looking at the source, ab tracks four types of errors which are detailed below the "Failed requests: ..." line:

  • Connect - (err_conn in source) Incremented when ab fails to set up the HTTP connection.
  • Receive - (err_recv in source) Incremented when a read of the connection fails.
  • Length - (err_length in source) Incremented when the response length differs from the length of the first good response received.
  • Exceptions - (err_except in source) Incremented when ab sees an error while polling the connection socket (e.g. the connection is killed by the server?).
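The "once per request" accounting can be illustrated with a sketch. The counter names (err_conn, err_recv, err_length, err_except) come from ab's source; the classification order and the request fields here are illustrative assumptions, not ab's actual logic:

```python
# Hedged sketch of per-request failure accounting: at most one of the
# four counters is bumped for any single request, so the total failed
# count can never exceed the total number of requests.
from collections import Counter

def tally_failures(requests, expected_length):
    counters = Counter(err_conn=0, err_recv=0, err_length=0, err_except=0)
    failed = 0
    for req in requests:
        # Check in order; the first matching condition wins (assumption).
        if not req.get("connected", True):
            counters["err_conn"] += 1
        elif req.get("read_error"):
            counters["err_recv"] += 1
        elif req.get("poll_error"):
            counters["err_except"] += 1
        elif req.get("length") != expected_length:
            counters["err_length"] += 1
        else:
            continue  # good request, nothing counted
        failed += 1   # exactly one failure recorded for this request
    return failed, counters

requests = [
    {"connected": True, "length": 1024},      # good
    {"connected": False},                     # Connect error
    {"connected": True, "read_error": True},  # Receive error
    {"connected": True, "length": 512},       # Length mismatch
]
failed, counters = tally_failures(requests, expected_length=1024)
print(failed)  # 3 failures out of 4 requests
```

If a version of ab bumped more than one counter per request (or counted retries separately), you could see a "failed" total larger than the request total, which would match the behavior described in that article.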

The logic around when these occur and how they are counted (and how the total bad count is tracked) is, of necessity, a bit complex. It looks like the current version of ab should only count a failure once per request, but perhaps the author of that article was using a prior version that somehow counted more than one failure per request? That's my best guess.

If you're able to reproduce the behavior, definitely file a bug.