Does Python's reduce() short circuit?
It doesn't. Your alternatives in this case are any() and all().
import operator
from functools import reduce  # reduce() lives in functools on Python 3

result = reduce(operator.and_, [False] * 1000)
result = reduce(operator.or_, [True] * 1000)
can be replaced by
result = all([False] * 1000)
result = any([True] * 1000)
which do short-circuit.
The timing results show the difference:
In [1]: import operator
In [2]: timeit result = reduce(operator.and_, [False] * 1000)
10000 loops, best of 3: 113 us per loop
In [3]: timeit result = all([False] * 1000)
100000 loops, best of 3: 5.59 us per loop
In [4]: timeit result = reduce(operator.or_, [True] * 1000)
10000 loops, best of 3: 113 us per loop
In [5]: timeit result = any([True] * 1000)
100000 loops, best of 3: 5.49 us per loop
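The gap comes entirely from early exit. A quick way to confirm this (the trace wrapper is just for illustration, not part of the original code):

```python
consumed = []

def trace(values):
    """Yield values while recording how many have been consumed."""
    for v in values:
        consumed.append(v)
        yield v

result = all(trace([False] * 1000))
print(result, len(consumed))   # False 1 -- all() stopped at the very first False
```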
Not only does reduce() not short-circuit, it cannot possibly short-circuit over all the items being reduced, because it only considers the items two at a time. Additionally, it has no idea of the conditions under which the function being used short-circuits. (It would be sorta nifty if functions could have a property that indicates the value at which they begin to short-circuit, which reduce() could then recognize and use, but they don't.)
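You can verify that reduce() visits every pair even when the outcome is already fixed after the first step (the traced wrapper is illustrative only):

```python
from functools import reduce
import operator

calls = []

def and_traced(a, b):
    """operator.and_ with call counting."""
    calls.append((a, b))
    return operator.and_(a, b)

result = reduce(and_traced, [False] * 5)
print(result, len(calls))   # False 4 -- four calls for five items, none skipped
```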
It may well be (see "The fate of reduce() in Python 3000") that an alternative reduce() implementation could do a good job here.
This approach has worked well for me to make the design more transparent.
def ipairs(seq):
    """Yield adjacent pairs: (s0, s1), (s1, s2), ...
    Note: None marks "no previous item", so seq must not contain None."""
    prev = None
    for item in seq:
        if prev is not None:
            yield (prev, item)
        prev = item

def iapply(seq, func):
    """Lazily apply func to each adjacent pair."""
    for a, b in ipairs(seq):
        yield func(a, b)

def satisfy(seq, cond):
    """True if every adjacent pair satisfies cond (short-circuits via all)."""
    return all(iapply(seq, cond))

def is_uniform(seq):
    """True if all items in seq are equal."""
    return satisfy(seq, lambda a, b: a == b)
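A quick usage check (the helpers are restated in compact form so the snippet runs on its own):

```python
def ipairs(seq):
    # yield adjacent pairs: (s0, s1), (s1, s2), ...
    prev = None
    for item in seq:
        if prev is not None:
            yield (prev, item)
        prev = item

def is_uniform(seq):
    # True when every adjacent pair is equal
    return all(a == b for a, b in ipairs(seq))

print(is_uniform([3, 3, 3]))   # True
print(is_uniform([3, 3, 4]))   # False
print(is_uniform([]))          # True -- vacuously: there are no pairs to compare
```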
As you can see, reduce() has been decomposed into iapply on top of ipairs.
Please note it is not equivalent to
def ireduce(seq, func):
    """Fold func over seq, feeding each result back into the next call.
    Returns None for an empty seq; assumes seq contains no None values."""
    prev = None
    for item in seq:
        if prev is None:
            prev = item
        else:
            prev = func(prev, item)
    return prev
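To see the non-equivalence concretely (definitions restated so the snippet runs standalone): satisfy tests every adjacent pair of the original items, whereas ireduce feeds each boolean result back into the next comparison.

```python
def ipairs(seq):
    # yield adjacent pairs: (s0, s1), (s1, s2), ...
    prev = None
    for item in seq:
        if prev is not None:
            yield (prev, item)
        prev = item

def satisfy(seq, cond):
    # every adjacent pair of original items must satisfy cond
    return all(cond(a, b) for a, b in ipairs(seq))

def ireduce(seq, func):
    # folds each result back in: func(func(s0, s1), s2), ...
    prev = None
    for item in seq:
        if prev is None:
            prev = item
        else:
            prev = func(prev, item)
    return prev

eq = lambda a, b: a == b
print(satisfy([2, 2, 2], eq))   # True:  (2 == 2) and (2 == 2)
print(ireduce([2, 2, 2], eq))   # False: (2 == 2) -> True, then (True == 2) -> False
```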