Keras Tokenizer num_words doesn't seem to work
There is nothing wrong with what you are doing. word_index
is computed the same way regardless of how many most-frequent words you will use later (as you may see here). So when you call any transformative method, Tokenizer
will use only the most common words (as capped by num_words), and at the same time it will keep the counter of all words - even when it's obvious that it will not use them later.
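A minimal sketch of this behavior (assuming TensorFlow's bundled Keras; the corpus and the num_words value are made up for illustration): word_index records every word seen, while texts_to_sequences honors the num_words cap.

```python
from tensorflow.keras.preprocessing.text import Tokenizer

texts = ["apple apple apple banana banana cherry date"]

tok = Tokenizer(num_words=3)  # cap the vocabulary used by transformations
tok.fit_on_texts(texts)

# word_index still contains every word seen during fitting
print(tok.word_index)   # {'apple': 1, 'banana': 2, 'cherry': 3, 'date': 4}

# texts_to_sequences only emits indices strictly below num_words
# (note the off-by-one: num_words=3 keeps indices 1 and 2)
print(tok.texts_to_sequences(texts))  # [[1, 1, 1, 2, 2]]
```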
Just an add-on to Marcin's answer ("it will keep the counter of all words - even when it's obvious that it will not use it later.").
The reason it keeps a counter of all words is that you can call fit_on_texts
multiple times. Each call updates the internal counters, and when transformations are called, it will use the top words based on the updated counters.
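A short sketch of that (again assuming TensorFlow's bundled Keras; the two small corpora are made up): calling fit_on_texts twice accumulates the counters, and the top words are re-ranked before any transformation.

```python
from tensorflow.keras.preprocessing.text import Tokenizer

tok = Tokenizer(num_words=3)

tok.fit_on_texts(["cat cat dog"])       # counts: cat=2, dog=1
tok.fit_on_texts(["dog dog dog bird"])  # counts now: cat=2, dog=4, bird=1

print(dict(tok.word_counts))  # {'cat': 2, 'dog': 4, 'bird': 1}

# 'dog' overtook 'cat', so it now gets index 1; with num_words=3
# only indices 1 and 2 survive in the output, so 'bird' is dropped
print(tok.texts_to_sequences(["cat dog bird"]))  # [[2, 1]]
```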
Hope it helps.