Spark SQL case insensitive filter for column conditions
You can either use case-insensitive regex:
import spark.implicits._  // for toDF and $ (already in scope in spark-shell)

val df = sc.parallelize(Seq(
  (1L, "Fortinet"), (2L, "foRtinet"), (3L, "foo")
)).toDF("k", "v")
df.where($"v".rlike("(?i)^fortinet$")).show
// +---+--------+
// | k| v|
// +---+--------+
// | 1|Fortinet|
// | 2|foRtinet|
// +---+--------+
or simple equality with lower / upper:
import org.apache.spark.sql.functions.{lower, upper}
df.where(lower($"v") === "fortinet").show
// +---+--------+
// | k| v|
// +---+--------+
// | 1|Fortinet|
// | 2|foRtinet|
// +---+--------+
df.where(upper($"v") === "FORTINET").show
// +---+--------+
// | k| v|
// +---+--------+
// | 1|Fortinet|
// | 2|foRtinet|
// +---+--------+
For simple filters I would prefer rlike, although performance should be similar; for join conditions equality is a much better choice. See How can we JOIN two Spark SQL dataframes using a SQL-esque "LIKE" criterion? for details.
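For example, a case-insensitive equi-join can normalize both keys with lower before comparing them. A minimal sketch, assuming a second DataFrame (called other here, with a vendor column) that you want to match against v:

import org.apache.spark.sql.functions.lower

val other = Seq((10L, "FORTINET"), (20L, "bar")).toDF("id", "vendor")

// comparing lowered keys keeps this an equi-join, so Spark can still plan it
// as a broadcast or sort-merge join instead of a cartesian product
df.join(other, lower(df("v")) === lower(other("vendor"))).show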
Try using the lower / upper string functions:
dataFrame.filter(lower(dataFrame.col("vendor")).equalTo("fortinet"))
or
dataFrame.filter(upper(dataFrame.col("vendor")).equalTo("FORTINET"))
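The same case-insensitive match can also be written as plain SQL against a temporary view. A minimal sketch, assuming a SparkSession named spark and an arbitrary view name:

// register the DataFrame so it can be queried with SQL
dataFrame.createOrReplaceTempView("vendors")

// lower() in Spark SQL mirrors the lower function used above
spark.sql("SELECT * FROM vendors WHERE lower(vendor) = 'fortinet'").show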