python pandas.Series.str.contains WHOLE WORD
No, the regex /bis/b|/bsmall/b will fail because you are using /b, not \b, which means "word boundary". Change that and you get a match. I would recommend \b(is|small)\b instead; this regex is a little faster and a little more legible, at least to me. Remember to put it in a raw string (r"\b(is|small)\b") so you don't have to escape the backslashes.
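For reference, here is a minimal sketch of that regex used with str.contains, rebuilding the DataFrame from the example output shown below; case=False makes the match case-insensitive, and the non-capturing group (?:...) avoids the "match groups" warning that str.contains raises for capturing groups:
import pandas as pd

df = pd.DataFrame({'col_name': ['This is Donald.',
                                'His hands are so small',
                                'Why are his fingers so short?']})

# Rows containing "is" OR "small" as whole words, case-insensitively.
mask = df['col_name'].str.contains(r"\b(?:is|small)\b", case=False)
print(df.loc[mask, 'col_name'])
# 0           This is Donald.
# 1    His hands are so small
# Name: col_name, dtype: object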
If you instead want to require that all of your target words appear in a sentence, you may want to first convert everything to lowercase, remove punctuation and surrounding whitespace, and then convert the result into a set of words.
import re
import string

# Lowercase, strip punctuation, and split each sentence into a set of words.
punct = '[{}]'.format(re.escape(string.punctuation))
df['words'] = [set(words) for words in
               df['col_name']
               .str.lower()
               .str.replace(punct, '', regex=True)  # regex=True: newer pandas treats the pattern as literal by default
               .str.strip()
               .str.split()
               ]
>>> df
                        col_name                                words
0                This is Donald.                   {this, is, donald}
1         His hands are so small         {small, his, so, are, hands}
2  Why are his fingers so short?  {short, fingers, his, so, are, why}
You can now use boolean indexing to see if all of your target words are in these new word sets.
target_words = ['is', 'small']
# Convert target words to lower case just to be safe.
target_words = [word.lower() for word in target_words]
df['match'] = df.words.apply(lambda words: all(target_word in words
for target_word in target_words))
print(df)
# Output:
#                         col_name                                words  match
# 0                This is Donald.                   {this, is, donald}  False
# 1         His hands are so small         {small, his, so, are, hands}  False
# 2  Why are his fingers so short?  {short, fingers, his, so, are, why}  False
target_words = ['so', 'small']
target_words = [word.lower() for word in target_words]
df['match'] = df.words.apply(lambda words: all(target_word in words
for target_word in target_words))
print(df)
# Output:
#                         col_name                                words  match
# 0                This is Donald.                   {this, is, donald}  False
# 1         His hands are so small         {small, his, so, are, hands}   True
# 2  Why are his fingers so short?  {short, fingers, his, so, are, why}  False
To extract the matching rows:
>>> df.loc[df.match, 'col_name']
# Output:
# 1    His hands are so small
# Name: col_name, dtype: object
To make this all into a single statement using boolean indexing:
df.loc[[all(target_word in word_set for target_word in target_words)
        for word_set in (set(words) for words in
                         df['col_name']
                         .str.lower()
                         .str.replace(punct, '', regex=True)  # punct as defined above
                         .str.strip()
                         .str.split())], :]
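If you need this check in more than one place, the same steps could be wrapped in a small helper; contains_all_words below is just a hypothetical name for that sketch, not an existing pandas method:
import re
import string

def contains_all_words(series, target_words):
    # Boolean mask: True where every target word appears as a whole word.
    punct = '[{}]'.format(re.escape(string.punctuation))
    word_sets = (series.str.lower()
                       .str.replace(punct, '', regex=True)
                       .str.strip()
                       .str.split()
                       .apply(set))
    targets = {word.lower() for word in target_words}
    return word_sets.apply(targets.issubset)

# Hypothetical usage:
# df.loc[contains_all_words(df['col_name'], ['so', 'small']), 'col_name']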