Dynamically rename multiple columns in PySpark DataFrame
I wrote an easy and fast function for you to use. Enjoy! :)
def rename_cols(rename_df):
    # Replace '.' with '_' in every column name, one rename at a time
    for column in rename_df.columns:
        new_column = column.replace('.', '_')
        rename_df = rename_df.withColumnRenamed(column, new_column)
    return rename_df
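To see what the function does without spinning up a Spark session, here is the same replacement applied to plain strings (the column names below are hypothetical examples, not from your data):

```python
# A minimal sketch of the renaming logic rename_cols applies,
# shown on plain strings so it runs without Spark:
columns = ['emp.city', 'emp.dno', 'emp.sal', 'id']
renamed = [c.replace('.', '_') for c in columns]
print(renamed)  # ['emp_city', 'emp_dno', 'emp_sal', 'id']
```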
You can use something similar to this great solution from @zero323:
df.toDF(*(c.replace('.', '_') for c in df.columns))
Alternatively:
from pyspark.sql.functions import col
replacements = {c: c.replace('.', '_') for c in df.columns if '.' in c}
df.select([col(c).alias(replacements.get(c, c)) for c in df.columns])
The replacement dictionary would then look like:
{'emp.city': 'emp_city', 'emp.dno': 'emp_dno', 'emp.sal': 'emp_sal'}
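You can check the comprehension on plain strings, without a DataFrame (again with hypothetical column names); names without a '.' are simply left out of the dictionary and fall back to themselves via `replacements.get(c, c)`:

```python
# Build the replacements dict from hypothetical column names;
# 'id' has no '.' so it is omitted and stays unchanged.
columns = ['emp.city', 'emp.dno', 'emp.sal', 'id']
replacements = {c: c.replace('.', '_') for c in columns if '.' in c}
print(replacements)
# {'emp.city': 'emp_city', 'emp.dno': 'emp_dno', 'emp.sal': 'emp_sal'}
print(replacements.get('id', 'id'))  # 'id'
```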
UPDATE:
If I have a DataFrame with spaces in the column names as well, how do I replace both '.' and spaces with '_'?
import re
df.toDF(*(re.sub(r'[\.\s]+', '_', c) for c in df.columns))
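Applied to plain strings (hypothetical column names for illustration), the regex collapses any run of dots and/or whitespace into a single underscore:

```python
import re

# '[\.\s]+' matches one or more '.' or whitespace characters,
# so adjacent dots and spaces become a single '_':
columns = ['emp city', 'emp.dno', 'first name.last']
cleaned = [re.sub(r'[\.\s]+', '_', c) for c in columns]
print(cleaned)  # ['emp_city', 'emp_dno', 'first_name_last']
```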