Wordcount program is stuck in hadoop-2.3.0
I was facing the same issue. I added the following property to my yarn-site.xml, and it solved the issue.
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>Hostname-of-your-RM</value>
  <description>The hostname of the RM.</description>
</property>
Without the resource manager hostname, things go awry in a multi-node setup: each node defaults to looking for a local resource manager and never announces its resources to the master node. So your MapReduce execution request probably found no mappers to run in, because the request was sent to the master and the master didn't know about the slave slots.
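The mechanics behind this: in Hadoop 2.x, yarn.resourcemanager.hostname defaults to 0.0.0.0, and the RM's other addresses are all derived from it, so setting that one property fixes them all. Here is a sketch of the defaults you are implicitly overriding (the ports are the stock ones; you normally don't need to list these explicitly):

<!-- yarn-site.xml: these all default to ${yarn.resourcemanager.hostname}:<port> -->
<property>
  <name>yarn.resourcemanager.address</name>
  <value>${yarn.resourcemanager.hostname}:8032</value>  <!-- client job submission -->
</property>
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>${yarn.resourcemanager.hostname}:8030</value>  <!-- AM-to-scheduler traffic -->
</property>
<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>${yarn.resourcemanager.hostname}:8031</value>  <!-- NodeManager registration -->
</property>

With the hostname left at 0.0.0.0, each slave's NodeManager tries to register with itself rather than with the master, which is exactly the "no resources announced" failure described above. You can confirm registration on the RM web UI (port 8088 by default): the active node count should match your cluster size.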
Reference: http://www.alexjf.net/blog/distributed-systems/hadoop-yarn-installation-definitive-guide/
I had the exact same situation a while back when switching to YARN. Basically, MRv1 had the concept of task slots and MRv2 has containers, and the two differ greatly in how tasks are scheduled and run on the nodes.
The reason your job is stuck is that it is unable to find/start a container. If you go through the full logs of the Resource Manager, Application Master, and similar daemons, you may find that nothing happens after it starts to allocate a new container.
To solve the problem, you have to tweak the memory settings in yarn-site.xml and mapred-site.xml. While doing the same myself, I found this and this tutorial especially helpful. I would suggest you start with very basic memory settings and optimize them later on. First check with a word count example, then move on to more complex ones.
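For reference, here is a minimal sketch of the settings involved, sized for a small node with roughly 2 GB of RAM available to YARN. The property names are the standard Hadoop 2.x ones; the values are illustrative assumptions, so adapt them to your hardware:

<!-- yarn-site.xml: what each NodeManager offers, and the container size bounds -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>2048</value>
</property>
<property>
  <name>yarn.scheduler.minimum-allocation-mb</name>
  <value>256</value>
</property>
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>2048</value>
</property>

<!-- mapred-site.xml: per-task container sizes and the JVM heaps inside them -->
<property>
  <name>yarn.app.mapreduce.am.resource.mb</name>
  <value>512</value>
</property>
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>256</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>256</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx204m</value>  <!-- keep the heap around 80% of the container size -->
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx204m</value>
</property>

The sanity check is that the Application Master container plus at least one task container fit inside yarn.nodemanager.resource.memory-mb, and that every requested size falls between the scheduler's minimum and maximum allocation. Otherwise YARN can never grant a container and the job hangs exactly as described.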