How do I check if there are duplicates in a flat list?
Use set() to remove duplicates if all values are hashable:
>>> your_list = ['one', 'two', 'one']
>>> len(your_list) != len(set(your_list))
True
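If you need this check in several places, you can wrap the same idea in a small helper. A minimal sketch (the name has_duplicates is mine, not from the standard library):
def has_duplicates(seq):
    # Only valid when every item is hashable; see the other answers
    # for what to do with unhashable items.
    return len(seq) != len(set(seq))

has_duplicates(['one', 'two', 'one'])  # True
has_duplicates(['one', 'two'])         # False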
I thought it would be useful to compare the timings of the different solutions presented here. For this I used my own library simple_benchmark:
So indeed for this case the solution from Denis Otkidach is fastest.
Some of the approaches also exhibit a much steeper curve; these are the approaches that scale quadratically with the number of elements (Alex Martelli's first solution, wjandrea's solution, and both of Xavier Decoret's solutions). Also important to mention: the pandas solution from Keiku has a very big constant factor, but for larger lists it almost catches up with the other solutions.
And in case the duplicate is at the first position (this is useful to see which solutions short-circuit):
Here several approaches don't short-circuit: Keiku, Frank, Xavier_Decoret (first solution), Turn, Alex Martelli (first solution), and the approach presented by Denis Otkidach (which was fastest in the no-duplicate case).
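One way to verify short-circuiting by hand (a sketch, not part of the benchmark) is to wrap the input in a generator that counts how many elements actually get consumed:
def counting(iterable, counter):
    # Yields the items unchanged while counting how many are consumed.
    for x in iterable:
        counter[0] += 1
        yield x

counter = [0]
s = set()
any(x in s or s.add(x) for x in counting([0, 0, 1, 2, 3], counter))
print(counter[0])  # 2 -- stops right after the first duplicate

counter = [0]
len(set(counting([0, 0, 1, 2, 3], counter)))
print(counter[0])  # 5 -- set() always consumes the whole input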
I also included a function from my own library here, iteration_utilities.all_distinct, which can compete with the fastest solution in the no-duplicates case and runs in constant time in the duplicate-at-the-beginning case (although it is not the fastest there).
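For reference, all_distinct returns True when no element occurs twice, which is why the benchmark wrapper below negates it (assuming iteration_utilities is installed):
>>> from iteration_utilities import all_distinct
>>> all_distinct([1, 2, 3])
True
>>> all_distinct([1, 2, 1])
False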
The code for the benchmark:
from collections import Counter
from functools import reduce
import pandas as pd
from simple_benchmark import BenchmarkBuilder
from iteration_utilities import all_distinct
b = BenchmarkBuilder()

@b.add_function()
def Keiku(l):
    return pd.Series(l).duplicated().sum() > 0

@b.add_function()
def Frank(num_list):
    # NB: unlike most functions here, this returns True when there
    # are NO duplicates.
    unique = []
    dupes = []
    for i in num_list:
        if i not in unique:
            unique.append(i)
        else:
            dupes.append(i)
    if len(dupes) != 0:
        return False
    else:
        return True

@b.add_function()
def wjandrea(iterable):
    seen = []
    for x in iterable:
        if x in seen:
            return True
        seen.append(x)
    return False

@b.add_function()
def user(iterable):
    clean_elements_set = set()
    clean_elements_set_add = clean_elements_set.add
    for possible_duplicate_element in iterable:
        if possible_duplicate_element in clean_elements_set:
            return True
        else:
            clean_elements_set_add(possible_duplicate_element)
    return False

@b.add_function()
def Turn(l):
    return Counter(l).most_common()[0][1] > 1

def getDupes(l):
    seen = set()
    seen_add = seen.add
    for x in l:
        if x in seen or seen_add(x):
            yield x

@b.add_function()
def F1Rumors(l):
    # NB: a falsy duplicate (e.g. 0) is not detected by this check.
    try:
        if next(getDupes(l)): return True  # Found a dupe
    except StopIteration:
        pass
    return False

def decompose(a_list):
    # Returns (set of seen elements, set of duplicated elements).
    return reduce(
        lambda u, o: (u[0].union([o]), u[1].union(u[0].intersection([o]))),
        a_list,
        (set(), set()))

@b.add_function()
def Xavier_Decoret_1(l):
    # NB: returns True when there are NO duplicates.
    return not decompose(l)[1]

@b.add_function()
def Xavier_Decoret_2(l):
    # NB: returns True when there are NO duplicates.
    try:
        def func(s, o):
            if o in s:
                raise Exception
            return s.union([o])
        reduce(func, l, set())
        return True
    except:
        return False

@b.add_function()
def pyrospade(xs):
    s = set()
    return any(x in s or s.add(x) for x in xs)

@b.add_function()
def Alex_Martelli_1(thelist):
    return any(thelist.count(x) > 1 for x in thelist)

@b.add_function()
def Alex_Martelli_2(thelist):
    seen = set()
    for x in thelist:
        if x in seen: return True
        seen.add(x)
    return False

@b.add_function()
def Denis_Otkidach(your_list):
    return len(your_list) != len(set(your_list))

@b.add_function()
def MSeifert04(l):
    return not all_distinct(l)
And for the arguments:
# No duplicate run
@b.add_arguments('list size')
def arguments():
    for exp in range(2, 14):
        size = 2**exp
        yield size, list(range(size))

# Duplicate at beginning run
@b.add_arguments('list size')
def arguments():
    for exp in range(2, 14):
        size = 2**exp
        yield size, [0, *list(range(size))]

# Running and plotting
r = b.run()
r.plot()
Recommended for short lists only:
any(thelist.count(x) > 1 for x in thelist)
Do not use on a long list -- it can take time proportional to the square of the number of items in the list!
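To see the quadratic growth for yourself, here is a rough timing sketch (absolute numbers depend on your machine; the growth pattern is what matters):
import timeit

stmt = "any(lst.count(x) > 1 for x in lst)"
for n in (1000, 2000, 4000):
    # Worst case: no duplicates, so every count() scans the whole list.
    t = timeit.timeit(stmt, setup=f"lst = list(range({n}))", number=10)
    print(n, round(t, 3))  # doubling n roughly quadruples the time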
For longer lists with hashable items (strings, numbers, &c):
def anydup(thelist):
    seen = set()
    for x in thelist:
        if x in seen: return True
        seen.add(x)
    return False
If your items are not hashable (sublists, dicts, etc) it gets hairier, though it may still be possible to get O(N log N) if they're at least comparable. But you need to know or test the characteristics of the items (hashable or not, comparable or not) to get the best performance you can -- O(N) for hashables, O(N log N) for non-hashable comparables; otherwise it's down to O(N squared) and there's nothing one can do about it :-(.
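For the comparable-but-not-hashable case, the O(N log N) route is: sort (which puts equal items next to each other), then scan adjacent pairs. A minimal sketch (anydup_sorted is my name for it, not from the answer above):
def anydup_sorted(thelist):
    # Works for unhashable items as long as they support comparison.
    prev = object()  # sentinel that compares unequal to every real item
    for x in sorted(thelist):  # O(N log N); sorted() copies, input untouched
        if x == prev:
            return True
        prev = x
    return False

anydup_sorted([[1], [2], [1]])  # True -- lists are unhashable but sortable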
This is old, but the answers here led me to a slightly different solution. If you are up for abusing comprehensions, you can get short-circuiting this way.
xs = [1, 2, 1]
s = set()
any(x in s or s.add(x) for x in xs)
You can use a similar approach to actually retrieve the duplicates:
s = set()
duplicates = set(x for x in xs if x in s or s.add(x))
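For the xs above, that comprehension yields:
>>> duplicates
{1}
The trick relies on set.add returning None, which is falsy, so x in s or s.add(x) both records the element and serves as the filter in a single pass.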