Partition a DataFrame based on a column (PySpark) code example

Example: partition a dataset into two DataFrames based on a column threshold. Note the snippet itself uses pandas-style boolean indexing (with the integer `0` as a column label); the complementary subset is added so the two pieces actually partition the data:

    s = 30
    df1 = df[df[0] >= s]   # rows where column 0 is at or above the threshold
    df2 = df[df[0] < s]    # the remaining rows
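A minimal runnable sketch of the split, written with pandas to match the snippet's indexing style; the sample data and the integer column label `0` are assumptions for illustration, while the threshold `s = 30` comes from the original. A PySpark equivalent using `DataFrame.filter` is shown in comments.

```python
import pandas as pd

# Hypothetical data: a DataFrame whose column labeled 0 holds numeric values.
df = pd.DataFrame({0: [10, 25, 30, 45]})

s = 30
df1 = df[df[0] >= s]  # rows at or above the threshold -> values 30, 45
df2 = df[df[0] < s]   # the remaining rows            -> values 10, 25

# Together df1 and df2 partition df: no overlap, nothing dropped.
assert len(df1) + len(df2) == len(df)

# PySpark equivalent (assuming a SparkSession `spark` and a column named "value"):
#   sdf = spark.createDataFrame([(10,), (25,), (30,), (45,)], ["value"])
#   sdf1 = sdf.filter(sdf["value"] >= s)
#   sdf2 = sdf.filter(sdf["value"] < s)
```

For writing the result to disk split by a column's distinct values, PySpark's `df.write.partitionBy("value")` is the idiomatic route instead of filtering in memory.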