Enable case sensitivity for spark.sql globally
As it turns out, setting
spark.sql.caseSensitive true
in $SPARK_HOME/conf/spark-defaults.conf
DOES work after all. It just has to be set in the driver's configuration as well, not only on the master or the workers. Apparently I forgot that the last time I tried.
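If editing spark-defaults.conf is not an option, the same setting can be supplied to the driver when the session is created. A minimal PySpark sketch (the app name is illustrative):
from pyspark.sql import SparkSession

# Configure the option at session creation so the driver has it from the
# start; equivalent to the spark-defaults.conf entry above.
spark = (SparkSession.builder
         .appName('case-sensitive-app')   # illustrative name
         .config('spark.sql.caseSensitive', 'true')
         .getOrCreate())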
Yet another way, for PySpark (this one applies to the current session only). Using a SparkSession object named spark:
spark.conf.set('spark.sql.caseSensitive', True)
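To see what the setting changes, here is a minimal illustrative sketch (the column names are made up), run after the line above:
# Two columns whose names differ only in case.
df = spark.createDataFrame([(1, 2)], ['value', 'VALUE'])

# Unambiguous while spark.sql.caseSensitive is true; with it false,
# the same select raises an AnalysisException (ambiguous reference).
df.select('VALUE').show()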