Tokenize a paragraph into sentences and then into words in NLTK
You probably intended to loop over sent_text:

import nltk

# 'text' is your input paragraph as a single string
sent_text = nltk.sent_tokenize(text)  # this gives us a list of sentences

# now loop over each sentence and tokenize it separately
for sentence in sent_text:
    tokenized_text = nltk.word_tokenize(sentence)
    tagged = nltk.pos_tag(tokenized_text)
    print(tagged)
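If sent_tokenize, word_tokenize, or pos_tag raises a LookupError, the underlying models need a one-time download first:

```python
import nltk

# one-time model downloads: 'punkt' backs the tokenizers,
# 'averaged_perceptron_tagger' backs pos_tag
nltk.download('punkt')
nltk.download('averaged_perceptron_tagger')
```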
import nltk
textsample = "This thing seemed to overpower and astonish the little dark-brown dog, and wounded him to the heart. He sank down in despair at the child's feet. When the blow was repeated, together with an admonition in childish sentences, he turned over upon his back, and held his paws in a peculiar manner. At the same time with his ears and his eyes he offered a small prayer to the child."
sentences = nltk.sent_tokenize(textsample)
words = nltk.word_tokenize(textsample)
sentences
[w for w in words if w.isalpha()]
The last line above keeps only purely alphabetic tokens, so punctuation and other special characters are filtered out. The sentence output is as below:
['This thing seemed to overpower and astonish the little dark-brown dog, and wounded him to the heart.',
"He sank down in despair at the child's feet.",
'When the blow was repeated, together with an admonition in childish sentences, he turned over upon his back, and held his paws in a peculiar manner.',
'At the same time with his ears and his eyes he offered a small prayer to the child.']
The words output, after removing special characters, is as below:
['This',
'thing',
'seemed',
'to',
'overpower',
'and',
'astonish',
'the',
'little',
'dog',
'and',
'wounded',
'him',
'to',
'the',
'heart',
'He',
'sank',
'down',
'in',
'despair',
'at',
'the',
'child',
'feet',
'When',
'the',
'blow',
'was',
'repeated',
'together',
'with',
'an',
'admonition',
'in',
'childish',
'sentences',
'he',
'turned',
'over',
'upon',
'his',
'back',
'and',
'held',
'his',
'paws',
'in',
'a',
'peculiar',
'manner',
'At',
'the',
'same',
'time',
'with',
'his',
'ears',
'and',
'his',
'eyes',
'he',
'offered',
'a',
'small',
'prayer',
'to',
'the',
'child']
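Note that str.isalpha() also drops hyphenated tokens (which is why 'dark-brown' is missing above) and the possessive clitic 's that word_tokenize splits off "child's". If you want to keep hyphenated words, one option is a regex filter instead of isalpha(); here is a minimal sketch on a small hand-picked token list:

```python
import re

# a few tokens of the kind word_tokenize produces for the sample text
words = ['This', 'dark-brown', 'dog', ',', "'s", 'heart', '.']

# isalpha() rejects anything containing a non-letter, including the hyphen
alpha_only = [w for w in words if w.isalpha()]
print(alpha_only)  # ['This', 'dog', 'heart']

# letters, optionally joined by internal hyphens
word_like = [w for w in words if re.fullmatch(r"[A-Za-z]+(-[A-Za-z]+)*", w)]
print(word_like)  # ['This', 'dark-brown', 'dog', 'heart']
```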
Here's a shorter version. This gives you a data structure with each individual sentence, and each token within the sentence. I prefer the TweetTokenizer for messy, real-world language. The sentence tokenizer is considered decent, but be careful not to lowercase your words until after this step, as it may hurt the accuracy of detecting the boundaries of messy text.
from nltk.tokenize import TweetTokenizer, sent_tokenize

# 'input_text' is your paragraph as a single string
tokenizer_words = TweetTokenizer()
tokens_sentences = [tokenizer_words.tokenize(t) for t in sent_tokenize(input_text)]
print(tokens_sentences)
Here's what the output looks like, which I cleaned up so the structure stands out:
[
['This', 'thing', 'seemed', 'to', 'overpower', 'and', 'astonish', 'the', 'little', 'dark-brown', 'dog', ',', 'and', 'wounded', 'him', 'to', 'the', 'heart', '.'],
['He', 'sank', 'down', 'in', 'despair', 'at', 'the', "child's", 'feet', '.'],
['When', 'the', 'blow', 'was', 'repeated', ',', 'together', 'with', 'an', 'admonition', 'in', 'childish', 'sentences', ',', 'he', 'turned', 'over', 'upon', 'his', 'back', ',', 'and', 'held', 'his', 'paws', 'in', 'a', 'peculiar', 'manner', '.'],
['At', 'the', 'same', 'time', 'with', 'his', 'ears', 'and', 'his', 'eyes', 'he', 'offered', 'a', 'small', 'prayer', 'to', 'the', 'child', '.']
]
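If you later need one flat token list rather than the nested structure, you can chain the sentences together; shown here on a hand-typed subset of the output above:

```python
from itertools import chain

# two abbreviated sentences from the nested output
tokens_sentences = [
    ['He', 'sank', 'down', '.'],
    ['At', 'the', 'same', 'time', '.'],
]

# flatten the list of sentence-token lists into one token list
flat_tokens = list(chain.from_iterable(tokens_sentences))
print(flat_tokens)  # ['He', 'sank', 'down', '.', 'At', 'the', 'same', 'time', '.']
```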