PySpark code examples

Sample data (each row: a row key followed by key/value pairs):

Row-Key-001, K1, 10, A2, 20, K3, 30, B4, 42, K5, 19, C20, 20
Row-Key-002, X1, 20, Y6, 10, Z15, 35, X16, 42
Row-Key-003, L4, 30, M10, 5, N12, 38, O14, 41, P13, 8

Example 1: Create a DataFrame with a single pyspark.sql.types.LongType column named id, containing elements in a range

spark.range(1, 7, 2).collect()
# [Row(id=1), Row(id=3), Row(id=5)]

spark.range(3).collect()
# [Row(id=0), Row(id=1), Row(id=2)]

Example 2: A DataFrame is a distributed collection of data grouped into named columns

# SQLContext is the legacy entry point; in Spark 2.0+ prefer spark.read.parquet
people = sqlContext.read.parquet("...")

# Select a column from the DataFrame as a Column expression
ageCol = people.age

# Create a second DataFrame to join against
department = sqlContext.read.parquet("...")

people.filter(people.age > 30).join(
  department, people.deptId == department.id).groupBy(
  department.name, "gender").agg({"salary": "avg", "age": "max"})