Out of Memory Error in Hadoop
You can assign more memory by editing the conf/mapred-site.xml file and adding the property:
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx1024m</value>
</property>
This will start the Hadoop JVMs with more heap space.
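For context, the property goes inside the <configuration> element of conf/mapred-site.xml. A minimal sketch of the edited file (the 1024m value is just an example; pick a size that fits your machine):
<?xml version="1.0"?>
<configuration>
  <property>
    <name>mapred.child.java.opts</name>
    <!-- heap limit passed to each map/reduce child JVM -->
    <value>-Xmx1024m</value>
  </property>
</configuration>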
Another possibility is editing hadoop-env.sh, which contains the line:
export HADOOP_CLIENT_OPTS="-Xmx128m $HADOOP_CLIENT_OPTS"
Changing 128m to 1024m helped in my case (Hadoop 1.0.0.1 on Debian).
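If you prefer to make that change from the command line, a one-liner along these lines should work (a sketch, assuming the default -Xmx128m line is present verbatim in conf/hadoop-env.sh):
sed -i 's/-Xmx128m/-Xmx1024m/' conf/hadoop-env.sh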
For anyone using RPM or DEB packages, the documentation and common advice are misleading. These packages install the Hadoop configuration files into /etc/hadoop, and those take priority over other settings.
/etc/hadoop/hadoop-env.sh sets the maximum Java heap memory for Hadoop. By default it is:
export HADOOP_CLIENT_OPTS="-Xmx128m $HADOOP_CLIENT_OPTS"
This Xmx setting is too low. Change it to the following and rerun:
export HADOOP_CLIENT_OPTS="-Xmx2048m $HADOOP_CLIENT_OPTS"
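To check that the larger heap is actually being picked up after you rerun, you can inspect the -Xmx flag on the running Hadoop Java processes. This is just a quick sanity check, not an official Hadoop command:
ps aux | grep '[j]ava' | grep -o -- '-Xmx[0-9]*[mMgG]'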
After trying many combinations, I finally concluded that the same error on my environment (Ubuntu 12.04, Hadoop 1.0.4) was due to two issues:
- The same heap setting issue Zach Gamer mentioned above.
- Don't forget to execute "ssh localhost" first. Believe it or not, a missing SSH connection to localhost can also surface as a Java heap space error (see the sketch below for setting it up).
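For the second point, a quick way to check and set up passphraseless SSH to localhost, following the standard Hadoop single-node setup steps (paths assume a default OpenSSH install):
ssh localhost
# if that prompts for a password or fails, generate a key and authorize it:
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys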