Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
The error indicates that your cluster has insufficient resources for the current job. Since you have not started the slaves (i.e. the workers), the cluster has no resources to allocate to your job. Starting the slaves will fix it:
`start-slave.sh spark://<master-ip>:7077`
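For reference, a minimal sketch of the full sequence on a standalone cluster, assuming `SPARK_HOME` points at your Spark installation, the default ports 7077/8080 are in use, and `<master-ip>` is a placeholder:

```sh
# Start the standalone master; its web UI comes up on http://<master-ip>:8080
$SPARK_HOME/sbin/start-master.sh

# Start a worker and register it with the master
$SPARK_HOME/sbin/start-slave.sh spark://<master-ip>:7077
```

Once the worker has registered, it should appear in the master's web UI along with the cores and memory it offers.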
I had the same problem, and it was because the workers could not communicate with the driver. You need to set `spark.driver.port` (and open that port on your driver machine), `spark.driver.host`, and `spark.driver.bindAddress` in your `spark-submit` invocation from the driver.
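A rough sketch of such a submit command follows; the master URL, `<driver-ip>`, port 40000, and `my_app.py` are all placeholders, and you should use whatever port you have actually opened on the driver machine:

```sh
# spark.driver.host        : address the executors use to reach the driver
# spark.driver.bindAddress : local interface the driver listens on
# spark.driver.port        : fixed port, so it can be opened in the firewall
spark-submit \
  --master spark://<master-ip>:7077 \
  --conf spark.driver.host=<driver-ip> \
  --conf spark.driver.bindAddress=0.0.0.0 \
  --conf spark.driver.port=40000 \
  my_app.py
```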
Solution to your problem
Reason
- The Spark master doesn't have any resources to execute the job, because no worker (slave) node is registered with it.
Fix
- You have to start the slave node and connect it to the master node, e.g. `/SPARK_HOME/sbin> ./start-slave.sh spark://localhost:7077` (if your master is on your local node).
Conclusion
- Start your master node and slave node before running spark-submit, so that enough resources are allocated to execute the job (a sketch of the submit step follows below).
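As an illustration of that submit step, a minimal sketch, assuming the master and a worker are already running on localhost; `my_app.py` and the resource numbers are placeholders to adjust to what your workers actually offer:

```sh
spark-submit \
  --master spark://localhost:7077 \
  --executor-memory 1g \
  --total-executor-cores 1 \
  my_app.py
```

If the requested memory or cores exceed what the registered workers offer, the same "has not accepted any resources" message reappears.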
Alternate way
- Alternatively, you can make the necessary changes in the spark-env.sh file, which is not recommended (a sketch follows below).
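If you do go that route, a rough sketch of the relevant entries in `$SPARK_HOME/conf/spark-env.sh`; the values are illustrative and should stay within what the machine can actually spare:

```sh
export SPARK_MASTER_HOST=localhost   # address the master binds to and workers connect to
export SPARK_WORKER_CORES=2          # cores each worker offers to executors
export SPARK_WORKER_MEMORY=2g        # memory each worker offers to executors
```

Restart the master and workers after editing the file so the new values take effect.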