pyspark repartition without knowing the number of partitions — code example

Example: repartition a DataFrame based on a column. Passing only the column name to repartition lets Spark choose the partition count for you (it falls back to the spark.sql.shuffle.partitions default, 200 unless configured otherwise), so no explicit number is needed:

df1 = df.repartition("s")

Note that the snippet originally shown here, s = 30; df1 = df[df[0] >= s], is pandas-style boolean filtering (splitting rows at a threshold value), not Spark repartitioning — it selects the rows whose first column is at least 30 rather than redistributing rows across partitions.
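To see what repartitioning by a column means without needing a running SparkSession, here is a dependency-free sketch in plain Python. It is an analogy, not Spark code: repartition_by_column and DEFAULT_SHUFFLE_PARTITIONS are hypothetical names introduced for illustration, and the partition count is kept small for readability (Spark's real default is 200).

```python
# Conceptual sketch (plain Python, no Spark required) of what
# PySpark's df.repartition("col") does: each row is assigned to a
# partition by hashing the partitioning column, and the partition
# count falls back to a default when none is given.
DEFAULT_SHUFFLE_PARTITIONS = 4  # Spark's default is 200; small here for readability


def repartition_by_column(rows, key, num_partitions=DEFAULT_SHUFFLE_PARTITIONS):
    """Hash-partition `rows` (a list of dicts) on `key` into `num_partitions` buckets."""
    partitions = [[] for _ in range(num_partitions)]
    for row in rows:
        # Rows with equal key values hash alike, so they land in the same bucket.
        partitions[hash(row[key]) % num_partitions].append(row)
    return partitions


rows = [{"s": v} for v in (10, 20, 30, 30, 40)]
parts = repartition_by_column(rows, "s")
```

The key property this mimics is that all rows sharing a value of the partitioning column end up in the same partition, which is what makes per-key operations after a column-based repartition efficient in Spark.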