What's the shortest way to count the number of items in a generator/iterator?
Calls to `itertools.imap()` in Python 2 or `map()` in Python 3 can be replaced by an equivalent generator expression:

```python
sum(1 for dummy in it)
```

This also uses a lazy generator, so it avoids materializing a full list of all iterator elements in memory.
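For instance, counting the elements of a throwaway generator this way (the generator here is purely illustrative):

```python
# A generator of the even numbers below 10: 0, 2, 4, 6, 8
it = (n for n in range(10) if n % 2 == 0)

# Each element contributes 1 to the sum; no list is ever built
count = sum(1 for dummy in it)
print(count)  # → 5
```

Note that, like every approach on this page, this consumes the iterator.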
A method that's meaningfully faster than `sum(1 for i in it)` when the iterable may be long (and not meaningfully slower when it's short), while maintaining fixed memory overhead (unlike `len(list(it))`) to avoid swap thrashing and reallocation overhead for larger inputs:
```python
# On Python 2 only, get a zip that lazily generates results instead of returning a list
from future_builtins import zip

from collections import deque
from itertools import count

def ilen(it):
    # Make a stateful counting iterator
    cnt = count()
    # zip it with the input iterator, then drain until input exhausted at C level
    deque(zip(it, cnt), 0)  # cnt must be second zip arg to avoid advancing too far
    # Since count is 0-based, the next value is the count
    return next(cnt)
```
Like `len(list(it))`, it performs the loop in C code on CPython (`deque`, `count` and `zip` are all implemented in C); avoiding byte code execution per loop is usually the key to performance in CPython.
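To show the function in use (the definition is repeated here so the snippet stands alone; the sample inputs are illustrative):

```python
from collections import deque
from itertools import count

def ilen(it):  # same as the definition above
    cnt = count()
    deque(zip(it, cnt), 0)
    return next(cnt)

# Works on any iterable, including ones with no len()
print(ilen(iter(range(1000))))   # → 1000
print(ilen(c for c in "abc"))    # → 3
print(ilen(iter(())))            # → 0
```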
It's surprisingly difficult to come up with fair test cases for comparing performance (`list` cheats using `__length_hint__`, which isn't likely to be available for arbitrary input iterables; `itertools` functions that don't provide `__length_hint__` often have special operating modes that work faster when the value returned on each loop is released/freed before the next value is requested, which `deque` with `maxlen=0` will do). The test case I used was to create a generator function that would take an input and return a C level generator that lacked special `itertools` return container optimizations or `__length_hint__`, using Python 3.3's `yield from`:
```python
def no_opt_iter(it):
    yield from it
```
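One way to see why this wrapper makes the test fair is `operator.length_hint` (available since Python 3.4), which falls back to a default when the object exposes neither `__len__` nor `__length_hint__`; plain generator objects expose neither, so the hint disappears once the input is wrapped:

```python
from operator import length_hint

def no_opt_iter(it):
    yield from it

data = (0,) * 100
print(length_hint(data))               # tuple knows its length: 100
print(length_hint(no_opt_iter(data)))  # generator has no hint: default of 0
```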
Then using IPython's `%timeit` magic (substituting different constants for 100):

```python
>>> %%timeit -r5 fakeinput = (0,) * 100
... ilen(no_opt_iter(fakeinput))
```
When the input isn't large enough that `len(list(it))` would cause memory issues, on a Linux box running Python 3.5 x64, my solution takes about 50% longer than `def ilen(it): return len(list(it))`, regardless of input length.
For the smallest of inputs, the setup costs of calling `deque`/`zip`/`count`/`next` mean it takes infinitesimally longer this way than `def ilen(it): return sum(1 for x in it)` (about 200 ns more on my machine for a length-0 input, which is a 33% increase over the simple `sum` approach), but for longer inputs, it runs in about half the time per additional element; for length-5 inputs, the cost is equivalent, and somewhere in the length 50-100 range, the initial overhead is unnoticeable compared to the real work; the `sum` approach takes roughly twice as long.
Basically, if memory use matters or inputs don't have bounded size and you care about speed more than brevity, use this solution. If inputs are bounded and smallish, `len(list(it))` is probably best, and if they're unbounded but simplicity/brevity counts, you'd use `sum(1 for x in it)`.
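Without IPython, a rough version of the same comparison can be sketched with the standard `timeit` module; the repetition count and input size below are arbitrary choices, not figures from the timings quoted above:

```python
import timeit
from collections import deque
from itertools import count

def ilen_deque(it):
    cnt = count()
    deque(zip(it, cnt), 0)
    return next(cnt)

def ilen_sum(it):
    return sum(1 for x in it)

def ilen_list(it):
    return len(list(it))

def no_opt_iter(it):
    yield from it  # strip length hints / itertools optimizations

fakeinput = (0,) * 100
for fn in (ilen_deque, ilen_sum, ilen_list):
    t = timeit.timeit(lambda fn=fn: fn(no_opt_iter(fakeinput)), number=10000)
    print("%s: %.3fs" % (fn.__name__, t))
```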
A short way is:

```python
def ilen(it):
    return len(list(it))
```
Note that if you are generating a lot of elements (say, tens of thousands or more), then putting them in a list may become a performance issue. However, this is a simple expression of the idea where the performance isn't going to matter for most cases.
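A quick sketch of both the convenience and the caveat: this materializes every element at once, and like every approach here it consumes the iterator, so a second call on the same generator returns 0:

```python
def ilen(it):
    return len(list(it))

gen = (x * x for x in range(10))
print(ilen(gen))  # → 10
print(ilen(gen))  # → 0; the generator is already exhausted
```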