PySpark: replace strings in a Spark DataFrame column
For Spark 1.5 or later, you can use the functions package:
from pyspark.sql.functions import regexp_replace

# Replace every occurrence of 'lane' with 'ln' in the 'address' column
newDf = df.withColumn('address', regexp_replace('address', 'lane', 'ln'))
Quick explanation:
- The function withColumn is called to add a column to the data frame (or replace it, if a column with that name already exists).
- The function regexp_replace generates the new column by replacing every substring that matches the pattern.
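If it helps, here is a minimal end-to-end sketch; the SparkSession setup, sample rows, and column names are assumptions added purely for illustration:

from pyspark.sql import SparkSession
from pyspark.sql.functions import regexp_replace

spark = SparkSession.builder.getOrCreate()

# Hypothetical sample data, just for illustration
df = spark.createDataFrame(
    [(1, "123 Maple lane"), (2, "456 Oak Ave")],
    ["id", "address"],
)

# 'address' already exists, so withColumn replaces it in place
newDf = df.withColumn("address", regexp_replace("address", "lane", "ln"))
newDf.show()
# "123 Maple lane" becomes "123 Maple ln"; "456 Oak Ave" is unchanged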
For Scala:
import org.apache.spark.sql.functions.regexp_replace
import org.apache.spark.sql.functions.col

// Removes literal asterisks; '*' is a regex metacharacter, so it must be escaped as "\\*"
data.withColumn("addr_new", regexp_replace(col("addr_line"), "\\*", ""))
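For reference, a PySpark equivalent of the Scala snippet above would look roughly like this (data and addr_line are the same hypothetical names):

from pyspark.sql.functions import col, regexp_replace

# Strip literal '*' characters; the pattern argument is a regex, hence the escape
data = data.withColumn("addr_new", regexp_replace(col("addr_line"), r"\*", ""))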