reading a file in hdfs from pyspark
Since you don't provide an authority, the URI should look like this:
hdfs:///inputFiles/CountOfMonteCristo/BookText.txt
otherwise inputFiles
is interpreted as a hostname. With a correct configuration you shouldn't need the scheme at all and can use:
/inputFiles/CountOfMonteCristo/BookText.txt
instead.
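Either form of the path can then be passed to a distributed read. A minimal sketch, assuming a SparkSession named spark and the path from the question:

# distributed read of the whole file, one row per line
df = spark.read.text('hdfs:///inputFiles/CountOfMonteCristo/BookText.txt')
df.show(5, truncate=False)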
There are two general ways to read files in Spark: one for huge distributed files, to process them in parallel, and one for reading small files like lookup tables and configuration files on HDFS. For the latter, you might want to read the file on the driver node or on the workers as a single read (not a distributed read). In that case, you should use the SparkFiles
module like below.
import json
from pyspark import SparkFiles

# spark is a SparkSession instance
spark.sparkContext.addFile('hdfs:///user/bekce/myfile.json')
with open(SparkFiles.get('myfile.json'), 'rb') as handle:
    j = json.load(handle)
    # ... or do whatever else with the handle
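Here addFile ships the file to every node and SparkFiles.get resolves the local path it was copied to, so the open() above is a plain local read rather than a distributed one. This is why the pattern suits small lookup or configuration files, not large datasets.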
You can also access HDFS files via a full path with an explicit host if no configuration is provided (namenodehost is localhost if HDFS runs in your local environment):
hdfs://namenodehost/inputFiles/CountOfMonteCristo/BookText.txt
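For example, the same distributed read with an explicit authority; the port below is an assumption (8020 is a common default) and should match the fs.defaultFS setting of your cluster:

# namenodehost and the port are placeholders; check fs.defaultFS in your Hadoop config
rdd = spark.sparkContext.textFile('hdfs://namenodehost:8020/inputFiles/CountOfMonteCristo/BookText.txt')
print(rdd.count())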