EMR Spark - TransportClient: Failed to send RPC
When I set up Hadoop and Spark on my laptop and tried to launch Spark with "spark-shell --master yarn", I got the same error message.
Solution:
sudo vim /usr/local/hadoop/etc/hadoop/yarn-site.xml
Add the following property. It raises the allowed ratio of virtual to physical memory for YARN containers above the default of 2.1, so containers are no longer killed for exceeding the virtual-memory limit (a killed container is what surfaces on the Spark side as the failed RPC):
<property>
  <name>yarn.nodemanager.vmem-pmem-ratio</name>
  <value>5</value>
</property>
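Before restarting, you can sanity-check that the property really landed in yarn-site.xml with a small script like this (just a sketch of mine, assuming the same install path as above):

import xml.etree.ElementTree as ET

# Parse the same yarn-site.xml edited above and look for the new property
tree = ET.parse("/usr/local/hadoop/etc/hadoop/yarn-site.xml")
for prop in tree.getroot().findall("property"):
    if prop.findtext("name") == "yarn.nodemanager.vmem-pmem-ratio":
        print("vmem-pmem-ratio =", prop.findtext("value"))
        break
else:
    print("yarn.nodemanager.vmem-pmem-ratio is not set")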
Then restart the Hadoop services:
stop-all.sh
start-all.sh
I finally resolved the problem: it was due to insufficient disk space. One line of the Hadoop logs showed:
Hadoop YARN: 1/1 local-dirs are bad: /var/lib/hadoop-yarn/cache/yarn/nm-local-dir; 1/1 log-dirs are bad: /var/log/hadoop-yarn/containers
Googling it I found http://gethue.com/hadoop-yarn-11-local-dirs-are-bad-varlibhadoop-yarncacheyarnnm-local-dir-11-log-dirs-are-bad-varloghadoop-yarncontainers/
"If you are getting this error, make some disk space!"
To see this error I had to enable access to the YARN logs and web interfaces in EMR. See
http://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-web-interfaces.html
To reach the log ports on the cluster's EC2 instances I also had to change their security groups, i.e.:
the master instance was listening on 172.30.12.84:8088 and the core instance on 172.30.12.21:8042.
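I made the security-group change in the AWS console, but for reference, a rough boto3 equivalent would look like this (the group ID and the allowed CIDR are placeholders you have to replace with your own):

import boto3

ec2 = boto3.client("ec2")
# Open the ResourceManager (8088) and NodeManager (8042) web UI ports
for port in (8088, 8042):
    ec2.authorize_security_group_ingress(
        GroupId="sg-xxxxxxxx",  # placeholder: the cluster instances' security group
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            "IpRanges": [{"CidrIp": "203.0.113.0/24"}],  # placeholder: your IP range
        }],
    )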
Finally, I fixed the problem by changing the instance types in etl.py to ones with bigger disks (see the sketch after the list):
master: m3.2xlarge
core: c3.4xlarge
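I can't paste etl.py here, but as a hedged sketch, the change amounts to something like this if the cluster is created with boto3's run_job_flow (the name, release label, roles and core instance count are placeholders; only the two instance types come from the fix above):

import boto3

emr = boto3.client("emr")
# Launch the cluster with bigger-disk instance types for master and core
response = emr.run_job_flow(
    Name="etl-cluster",            # placeholder
    ReleaseLabel="emr-5.0.0",      # placeholder
    Applications=[{"Name": "Spark"}],
    Instances={
        "InstanceGroups": [
            {"Name": "Master", "InstanceRole": "MASTER",
             "InstanceType": "m3.2xlarge", "InstanceCount": 1},
            {"Name": "Core", "InstanceRole": "CORE",
             "InstanceType": "c3.4xlarge", "InstanceCount": 2},  # count is a placeholder
        ],
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    JobFlowRole="EMR_EC2_DefaultRole",   # default EMR roles; adjust to yours
    ServiceRole="EMR_DefaultRole",
)
print(response["JobFlowId"])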