Understanding LDA implementation using gensim
To help with understanding gensim's LDA implementation, I have recently written blog posts implementing topic modeling from scratch on 70,000 dumped Simple Wikipedia articles in Python.
There you will find a detailed explanation of how gensim's LDA can be used for topic modeling. It covers the usage of:
- the ElementTree library for extracting article text from the XML dump file,
- regex filters to clean the articles,
- NLTK stop-word removal and lemmatization,
- LDA from the gensim library.
A rough sketch of this pipeline is shown below.
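This is only a minimal sketch of that pipeline, not the exact code from the posts; the file name and XML tag names are assumptions (a real MediaWiki dump uses namespaced tags), and the NLTK stopwords/wordnet corpora must be downloaded once beforehand.

```python
import re
import xml.etree.ElementTree as ET

from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from gensim import corpora
from gensim.models import LdaModel

# 1. Extract article text from the XML dump (hypothetical file/tag names).
tree = ET.parse("simplewiki-dump.xml")
articles = [page.findtext("text") or "" for page in tree.iter("page")]

# 2. Clean each article with a regex filter (keep letters only, lowercase).
articles = [re.sub(r"[^a-zA-Z\s]", " ", a).lower() for a in articles]

# 3. Remove NLTK stop words and lemmatize the remaining tokens.
stops = set(stopwords.words("english"))
lemmatizer = WordNetLemmatizer()
texts = [[lemmatizer.lemmatize(w) for w in a.split() if w not in stops]
         for a in articles]

# 4. Build a dictionary and bag-of-words corpus, then train gensim's LDA.
dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=10, passes=5)
```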
I hope this helps in understanding gensim's LDA implementation.
- Topic Modelling (Part 1): Creating Article Corpus from Simple Wikipedia dump
- Topic Modelling (Part 2): Discovering Topics from Articles with Latent Dirichlet Allocation
Word cloud (10 words) of a few topics that I got as an outcome.
The answer you're looking for is in the gensim tutorial. lda.printTopics(k) prints the most contributing words for k randomly selected topics. One can assume that this is (partially) the distribution of words over each of the given topics, i.e. the probability of each listed word appearing in the topic printed to its left.
Usually, one would run LDA on a large corpus. Running LDA on a ridiculously small sample won't give the best results.
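As a toy illustration (the corpus below is made up, and in recent gensim versions the method is spelled print_topics rather than printTopics):

```python
from gensim import corpora
from gensim.models import LdaModel

# A deliberately tiny corpus, just to show the mechanics.
texts = [["amazon", "sells", "many", "things"],
         ["microsoft", "announces", "nokia", "acquisition"],
         ["apple", "announces", "product"]]

dictionary = corpora.Dictionary(texts)
corpus = [dictionary.doc2bow(t) for t in texts]
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2)

# Print the most contributing words for each of the 2 topics.
for topic in lda.print_topics(num_topics=2, num_words=5):
    print(topic)
```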
I think this tutorial will help you understand everything very clearly - https://www.youtube.com/watch?v=DDq3OVp9dNA
I too faced a lot of problems understanding it at first. I'll try to outline a few points in a nutshell.
In Latent Dirichlet Allocation,
- the order of words in a document is not important (the bag-of-words model);
- a document is a distribution over topics;
- each topic, in turn, is a distribution over the words in the vocabulary;
- LDA is a probabilistic generative model, and the hidden topic structure is inferred via its posterior distribution.
Imagine the process of creating a document to be something like this:
- Choose a distribution over topics.
- For each word in the document, draw a topic from that distribution, then draw a word from the chosen topic.
LDA is sort of backtracking along this line: given that you have a bag of words representing a document, what could be the topics it is representing? The sketch below runs this generative story forward.
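Here is a toy sketch of that generative story in plain numpy (not gensim's inference code; the vocabulary and sizes are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["amazon", "sells", "things", "apple", "microsoft", "announces"]
num_topics, doc_length = 2, 8

# Each topic is a distribution over the vocabulary (drawn from a Dirichlet).
topic_word = rng.dirichlet(np.ones(len(vocab)), size=num_topics)

# A document is a distribution over topics.
doc_topic = rng.dirichlet(np.ones(num_topics))

# Generate the document word by word: draw a topic, then a word from it.
words = []
for _ in range(doc_length):
    z = rng.choice(num_topics, p=doc_topic)           # pick a topic
    words.append(rng.choice(vocab, p=topic_word[z]))  # pick a word from it
print(words)
```

Inference in LDA is the reverse: given only the generated words, recover plausible topic_word and doc_topic distributions.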
So, in your case, the first topic (0)
INFO : topic #0: 0.181*things + 0.181*amazon + 0.181*many + 0.181*sells + 0.031*nokia + 0.031*microsoft + 0.031*apple + 0.031*announces + 0.031*acquisition + 0.031*product
is more about things, amazon, and many, as they have a higher proportion, and not so much about microsoft or apple, which have significantly lower values.
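If you would rather read those proportions programmatically than parse the log line, gensim's LdaModel exposes show_topic, which returns (word, probability) pairs; here lda stands for an already-trained model:

```python
# Inspect the word distribution of topic 0 (assumes a trained LdaModel named lda).
for word, prob in lda.show_topic(0, topn=10):
    print(f"{word}: {prob:.3f}")
```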
I would suggest reading this blog for a much better understanding (Edwin Chen is a genius!) - http://blog.echen.me/2011/08/22/introduction-to-latent-dirichlet-allocation/
Since the above answers were posted, there are now some very nice visualization tools for gaining an intuition of LDA using gensim.
Take a look at the pyLDAvis package. Here is a great notebook overview. And here is a very helpful video description geared toward the end user (9 min tutorial).
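As a rough sketch, preparing the pyLDAvis view from a trained gensim model looks something like this (the integration module is pyLDAvis.gensim_models in recent releases and pyLDAvis.gensim in older ones; lda, corpus, and dictionary are assumed to come from earlier training code):

```python
import pyLDAvis
import pyLDAvis.gensim_models  # pyLDAvis.gensim in older releases

# Prepare the interactive visualization from a trained model.
vis = pyLDAvis.gensim_models.prepare(lda, corpus, dictionary)
pyLDAvis.save_html(vis, "lda_vis.html")  # open the file in a browser
```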
Hope this helps!