Putting a remote file into Hadoop without copying it to local disk
Is the node where you generated the data able to reach each of your cluster nodes (the NameNode and all the DataNodes)?
If you do have network connectivity, then you can simply execute the hadoop fs -put command from the machine where the data is generated (assuming you have the Hadoop binaries installed there too):
#> hadoop fs -fs masternode:8020 -put test.bin hadoopFolderName/
Hadoop provides a couple of REST interfaces; check out Hoop and WebHDFS. Using them from non-Hadoop environments, you should be able to upload the file without first copying it to the master.
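As a rough sketch with WebHDFS (assuming WebHDFS is enabled and the default HTTP ports of older Hadoop releases, 50070 for the NameNode and 50075 for the DataNodes; the host name namenode, the target path /user/me/test.bin and the user name are placeholders), the upload is a two-step operation: the NameNode answers the create request with a 307 redirect pointing at a DataNode, and the file body is then sent to that URL:

# 1) Ask the NameNode where to write; the reply is a 307 Temporary Redirect
#    whose Location header points at a DataNode.
curl -i -X PUT "http://namenode:50070/webhdfs/v1/user/me/test.bin?op=CREATE&user.name=me"

# 2) Send the file body to the Location URL returned in step 1.
curl -i -X PUT -T test.bin "<Location URL from step 1>"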
Try this (untested):
cat test.txt | ssh username@masternode "hadoop dfs -put - hadoopFoldername/test.txt"
I've used similar tricks to copy directories around:
tar cf - . | ssh remote "(cd /destination && tar xvf -)"
This sends the output of the local tar into the input of the remote tar.
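The two tricks can be combined as well. An untested sketch (the archive name backup.tar is just an illustration) that streams a whole local directory into HDFS as a single tarball, without any intermediate file:

tar cf - . | ssh username@masternode "hadoop dfs -put - hadoopFoldername/backup.tar"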
Create a named pipe and then do the transfer through it. That way the file is never stored on the local disk.
mkfifo transfer_pipe
scp remote_host:remote_file transfer_pipe &   # scp writes into the pipe in the background
hdfs dfs -put transfer_pipe <hdfs_path>       # -put reads from the pipe and streams it into HDFS
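Once the transfer finishes, the FIFO can be removed with rm transfer_pipe; since a named pipe only buffers data in kernel memory, the file contents never touch the local disk.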