unable to check nodes on hadoop [Connection refused]

Also edit your /etc/hosts file and change 127.0.1.1 to 127.0.0.1. Proper DNS resolution is very important for Hadoop and a bit tricky, too. Also add the following property to your core-site.xml file:

    <property>
        <name>hadoop.tmp.dir</name>
        <value>/path_to_temp_directory</value>
    </property>

The default location for this property is the /tmp directory, which gets emptied after each system restart, so you lose all of your HDFS data at every restart. Also add these properties to your hdfs-site.xml file:

    <property>
        <name>dfs.name.dir</name>
        <value>/path_to_name_directory</value>
    </property>

    <property>
        <name>dfs.data.dir</name>
        <value>/path_to_data_directory</value>
    </property>
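If it helps, here is a minimal sketch of preparing those directories up front; the /app/hadoop paths and the hduser:hadoop user/group are only placeholders for whatever your setup actually uses:

    # create the directories referenced by hadoop.tmp.dir, dfs.name.dir and dfs.data.dir
    sudo mkdir -p /app/hadoop/tmp /app/hadoop/name /app/hadoop/data
    # hand them over to the user that runs the Hadoop daemons
    sudo chown -R hduser:hadoop /app/hadoop
    sudo chmod -R 755 /app/hadoop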

Another possibility is that the namenode is not running.
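A quick way to confirm is jps, which lists the running Java processes; on a healthy single-node setup you should see something like NameNode, DataNode and SecondaryNameNode (the exact list depends on your Hadoop version):

    # list running Hadoop daemons; if NameNode is missing, clients get "Connection refused"
    jps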

You can remove the HDFS files:

rm -rf /tmp/hadoop*

Reformat HDFS:

bin/hadoop namenode -format

And restart the Hadoop services:

bin/start-all.sh (Hadoop 1.x)

or

sbin/start-all.sh (Hadoop 2.x)
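Note that start-all.sh is deprecated in Hadoop 2.x; if you prefer, you can start the daemons separately (a sketch, assuming you run it from the Hadoop installation directory):

    # start HDFS (NameNode, DataNodes, SecondaryNameNode)
    sbin/start-dfs.sh
    # start YARN (ResourceManager, NodeManagers)
    sbin/start-yarn.sh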

I had the same problem, and this solved it:

The problem lies with the permissions given to the folders: chmod them to 755 or greater for the folders under /home/username/hadoop/*.
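For example (the path is just the one from above; adjust it to wherever your Hadoop directories actually live):

    # make the Hadoop directories readable/executable by everyone, writable by the owner
    chmod -R 755 /home/username/hadoop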


There are a few things that you need to take care of before starting the Hadoop services.

Check what this returns:

hostname --fqdn 

In your case this should return localhost. Also comment out the IPv6 entries in /etc/hosts.
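A minimal /etc/hosts for a single-node setup might look like this (the hostname below is only a placeholder, and the IPv6 lines are commented out):

    127.0.0.1   localhost
    127.0.0.1   your-hostname

    # IPv6 entries commented out
    # ::1     ip6-localhost ip6-loopback
    # fe00::0 ip6-localnet
    # ff02::1 ip6-allnodes
    # ff02::2 ip6-allrouters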

Did you format the namenode before starting HDFS?

hadoop namenode -format

How did you install Hadoop? The location of the log files depends on that. It is usually /var/log/hadoop/ if you have used Cloudera's distribution.
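For example, to find the most recent NameNode log there (the exact file names vary by distribution and hostname, so treat the glob as a placeholder):

    # list log files, newest first, then skim the NameNode log for errors
    ls -lt /var/log/hadoop/
    tail -n 50 /var/log/hadoop/*namenode*.log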

If you are a complete newbie, I suggest installing Hadoop using Cloudera SCM, which is quite easy. I have posted my approach to installing Hadoop with Cloudera's distribution.

Also

Make sure the DFS location has write permission. It usually sits at /usr/local/hadoop_store/hdfs.

That is a common reason.
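For example, to check and, if needed, fix the permissions (path from above; the hduser:hadoop user/group is only a placeholder for whoever runs the daemons):

    # verify the HDFS storage directory is owned and writable by the daemon user
    ls -ld /usr/local/hadoop_store/hdfs
    sudo chown -R hduser:hadoop /usr/local/hadoop_store/hdfs
    sudo chmod -R 755 /usr/local/hadoop_store/hdfs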
