What's a faster operation, re.match/search or str.find?

The question of which is faster is best answered by using timeit.

from timeit import timeit
import re

def find(string, text):
    if string.find(text) > -1:
        pass

def re_find(string, text):
    # note: re.match only matches at the start of the string
    if re.match(text, string):
        pass

def best_find(string, text):
    if text in string:
        pass

print(timeit("find(string, text)", "from __main__ import find; string='lookforme'; text='look'"))
print(timeit("re_find(string, text)", "from __main__ import re_find; string='lookforme'; text='look'"))
print(timeit("best_find(string, text)", "from __main__ import best_find; string='lookforme'; text='look'"))

The output is:

0.441393852234
2.12302494049
0.251421928406

So you should use the in operator not only because it is easier to read, but also because it is faster.
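
Much of the regex cost in that benchmark comes from looking up or compiling the pattern on every call (the re module does cache compiled patterns internally, but the lookup still costs something). To see how much of the gap that accounts for, you can extend the benchmark with a precompiled pattern; this is just a sketch using the same setup strings:

from timeit import timeit

# Time a precompiled pattern so compilation happens once, in the setup.
print(timeit(
    "compiled.match(string)",
    "import re; string = 'lookforme'; compiled = re.compile('look')"
))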


Use this:

if 'lookforme' in s:
    ...  # do something with the match

A regex needs to be compiled first, which adds some overhead, and Python's normal string search is very efficient anyway.
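
If you do end up using a regex in a loop, compiling it once up front avoids paying that overhead on every call. A minimal sketch, with a made-up pattern and data:

import re

# Compile once, reuse many times (hypothetical pattern and input lines).
pattern = re.compile('look')

lines = ['lookforme', 'nothing here', 'look again']
matches = [line for line in lines if pattern.search(line)]
print(matches)  # ['lookforme', 'look again']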

If you search for the same term a lot, or when you need to do something more complex, a regex becomes more useful.
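
For example, a plain substring test cannot say "match look only as a whole word", while a regex can. A small sketch with invented example strings:

import re

word = re.compile(r'\blook\b')

print('look' in 'lookforme')             # True  (the substring is there)
print(bool(word.search('lookforme')))    # False (not a whole word)
print(bool(word.search('take a look')))  # True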


re.compile speeds up regexes a lot if you are searching for the same thing over and over. But I just got a huge speedup by using "in" to cull out bad cases before I match. Anecdotal, I know. ~Ben
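
That culling pattern looks roughly like this; the pattern and helper function are hypothetical, not Ben's actual code:

import re

pattern = re.compile(r'look(ing|ed)?\b')

def cull_then_match(line):
    # Cheap substring test first; only run the regex on plausible lines.
    if 'look' not in line:
        return None
    return pattern.search(line)

print(cull_then_match('no match here'))      # None, the regex never runs
print(cull_then_match('they were looking'))  # a match object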