Where do you need to use lit() in PySpark SQL?
Import the function:
from pyspark.sql.functions import lit
A simple example could be:
df.withColumn("columnName", lit(column_value))
For example:
from datetime import datetime
df = df.withColumn("Today's Date", lit(datetime.now()))
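Putting the pieces together, here is a minimal, self-contained sketch; the SparkSession setup and the sample data (name, age) are made up for illustration:

from datetime import datetime
from pyspark.sql import SparkSession
from pyspark.sql.functions import lit

spark = SparkSession.builder.getOrCreate()
df = spark.createDataFrame([("Alice", 34), ("Bob", 45)], ["name", "age"])

# lit() lifts a Python scalar into a Column, so it can be used
# wherever Spark expects a Column expression
df = df.withColumn("load_ts", lit(datetime.now()))
df.show(truncate=False)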
To keep it simple, you need a Column (it can be one created using lit, but that is not the only option) when the JVM counterpart expects a column and there is no internal conversion in the Python wrapper, or when you want to call a Column-specific method.
In the first case the only strict rule is the one that applies to UDFs. A UDF (Python or JVM) can be called only with arguments which are of Column type. This also typically applies to functions from pyspark.sql.functions. In other cases it is always best to check the documentation and docstrings first and, if that is not sufficient, the docs of the corresponding Scala counterpart.
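To illustrate the UDF rule, here is a hedged sketch (the add function and the age column are made up; it assumes the df from the sketch above):

from pyspark.sql.functions import col, lit, udf
from pyspark.sql.types import IntegerType

# arguments passed to a UDF must be Column objects
add = udf(lambda x, y: x + y, IntegerType())

df.select(add(col("age"), lit(10)))  # OK: both arguments are Columns
# df.select(add(col("age"), 10))     # raises TypeError: a plain int is not a Column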
In the second case the rules are simple. If, for example, you want to compare a column to a value, then the value has to be on the RHS:
col("foo") > 0 # OK
or the value has to be wrapped in a literal:
lit(0) < col("foo") # OK
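Either form can be used directly in a filter; a small sketch, assuming a DataFrame df with a numeric column foo:

from pyspark.sql.functions import col, lit

df.where(col("foo") > 0)       # value on the RHS
df.where(lit(0) < col("foo"))  # value lifted to a Column on the LHS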
In Python many operators (<, ==, <=, &, |, +, -, *, /) can use a non-column object on the LHS:
0 < col("foo")
but such applications are not supported in Scala.
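The reason this works in Python is operator reflection: when the int on the LHS cannot handle the comparison, Python falls back to the reflected method on the Column (here Column.__gt__), so all three spellings below build the same expression (foo is a made-up column name):

from pyspark.sql.functions import col, lit

e1 = 0 < col("foo")        # resolved through Column.__gt__
e2 = col("foo") > 0
e3 = lit(0) < col("foo")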
It goes without saying that you have to use lit if you want to access any of the pyspark.sql.Column methods while treating a standard Python scalar as a constant column. For example, you'll need
c = lit(1)
not
c = 1
to call
c.between(0, 3)  # type: pyspark.sql.Column
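As a usage note, a constant Column built this way behaves like any other; a hedged sketch, assuming the df from the first example:

from pyspark.sql.functions import lit

# between() is a Column method, so the scalar 1 has to be lifted
# into a Column with lit() before it can be called
df.select(lit(1).between(0, 3).alias("const_in_range")).show()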