How to replace a value of a certain type in a column in PySpark 2+ - code example

Example: A DataFrame is a distributed collection of data grouped into named columns.

# A DataFrame is a distributed collection of data grouped into named columns.
# sqlContext is assumed to be an existing SQLContext; in PySpark 2+ the usual
# entry point is a SparkSession (spark.read.parquet).

people = sqlContext.read.parquet("...")

# Selecting an attribute of the DataFrame returns a Column expression
ageCol = people.age
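
The Column returned above can be used directly in select and filter expressions. A minimal sketch, assuming a local SparkSession and a small in-memory DataFrame (the appName, names, and values below are illustrative):

from pyspark.sql import SparkSession

# Assumed setup: a local SparkSession; appName is illustrative
spark = SparkSession.builder.appName("column-example").getOrCreate()

people = spark.createDataFrame(
  [("Alice", 25), ("Bob", 35)], ["name", "age"])

# A Column such as people.age is an expression, not data; it is evaluated
# when used inside select() or filter()
people.select(people.age + 1).show()
people.filter(people.age > 30).show()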

# To create DataFrames using the SQLContext, read two Parquet sources
people = sqlContext.read.parquet("...")
department = sqlContext.read.parquet("...")

# Filter by age, join on the department id, group, and aggregate per column
people.filter(people.age > 30).join(
  department, people.deptId == department.id).groupBy(
  department.name, "gender").agg({"salary": "avg", "age": "max"})
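
The title asks how to replace values of a certain type in a column, which the snippet above does not show. A minimal sketch of one way to do it, assuming a DataFrame whose age column arrives as strings with some non-numeric entries (the DataFrame, column names, and replacement values are illustrative), using when/otherwise with a cast, plus DataFrame.replace for known literals:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import IntegerType

spark = SparkSession.builder.appName("replace-example").getOrCreate()

# Illustrative data: age comes in as strings, one of them non-numeric
df = spark.createDataFrame(
  [("Alice", "25"), ("Bob", "n/a"), ("Carol", "31")], ["name", "age"])

# Replace anything that cannot be cast to int with a sentinel, otherwise keep the cast value
df = df.withColumn(
  "age",
  F.when(F.col("age").cast(IntegerType()).isNull(), F.lit(-1))
   .otherwise(F.col("age").cast(IntegerType())))

# DataFrame.replace covers the case where the values to swap are known literals
df2 = df.replace(-1, 0, subset=["age"])
df2.show()

when/otherwise keeps the whole operation as a single column expression, while replace (and na.replace) is simpler when the values to swap are a fixed set of literals of matching type.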