Which operations preserve RDD order?
All operations preserve the order, except those that explicitly do not. Ordering is always "meaningful", not just after a `sortBy`. For example, if you read a file (`sc.textFile`), the lines of the RDD will be in the order that they were in the file.
Without trying to give a complete list: `map`, `filter` and `flatMap` preserve the order; `sortBy`, `partitionBy` and `join` do not.
The reason is that most RDD operations work on `Iterator`s inside the partitions, so `map` or `filter` simply has no way to mess up the order. You can take a look at the code to see for yourself.
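To make the iterator argument concrete, here is a plain-Python sketch (not Spark code) of how a narrow transformation runs inside one partition: `map` and `filter` are applied lazily to the partition's iterator, element by element, so the relative order of elements cannot change.

```python
# Plain-Python sketch of per-partition iterator transforms (not Spark code).
# Spark's map/filter are essentially this, applied to each partition's iterator.

def map_partition(iterator, f):
    # Apply f to each element lazily; order of traversal is untouched.
    return (f(x) for x in iterator)

def filter_partition(iterator, pred):
    # Drop elements failing pred; survivors keep their relative order.
    return (x for x in iterator if pred(x))

partition = iter([3, 1, 4, 1, 5, 9])
result = list(filter_partition(map_partition(partition, lambda x: x * 10),
                               lambda x: x > 10))
print(result)  # [30, 40, 50, 90] — original relative order preserved
```

There is simply no point in this pipeline where two elements could swap places.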
You may now ask: what if I have an RDD with a `HashPartitioner` and I use `map` to change the keys? The elements will stay in place, but the RDD is no longer partitioned by the key. You can use `partitionBy` to restore the partitioning with a shuffle.
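A toy simulation in plain Python (not Spark code, and `hash_partition` is a made-up stand-in for `HashPartitioner`) of the situation above: after a `map` changes the keys, the records stay in their partitions, so the partition index no longer matches `hash(key) % numPartitions`.

```python
# Toy simulation: hash partitioning, a key-changing map, and a re-partition.

NUM_PARTITIONS = 4

def hash_partition(records, n):
    # Mimics HashPartitioner: a record goes to partition hash(key) % n.
    parts = [[] for _ in range(n)]
    for key, value in records:
        parts[hash(key) % n].append((key, value))
    return parts

def is_hash_partitioned(parts, n):
    # Check the partitioner invariant: every record sits in hash(key) % n.
    return all(hash(k) % n == i
               for i, part in enumerate(parts)
               for k, _ in part)

records = [(k, k * k) for k in range(10)]
parts = hash_partition(records, NUM_PARTITIONS)
assert is_hash_partitioned(parts, NUM_PARTITIONS)

# map changes the key in place, per partition; nothing moves:
mapped = [[(k + 1, v) for k, v in part] for part in parts]
print(is_hash_partitioned(mapped, NUM_PARTITIONS))  # False: partitioning lost

# partitionBy (a shuffle) rebuilds the invariant:
restored = hash_partition([rec for part in mapped for rec in part],
                          NUM_PARTITIONS)
print(is_hash_partitioned(restored, NUM_PARTITIONS))  # True
```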
In Spark 2.0.0+, `coalesce` doesn't guarantee partition order during the merge. `DefaultPartitionCoalescer` has an optimization algorithm based on partition locality: when partitions carry locality information, `DefaultPartitionCoalescer` tries to merge partitions on the same host. Only when there is no locality information does it simply group partitions by their index, which preserves partition order.
UPDATE:
If you load a DataFrame from files such as Parquet, Spark breaks the order when it plans the file splits. You can see this in DataSourceScanExec.scala#L629, or in Spark 3.x in FileScan#L152 if you use it. Spark sorts the splits by size, and splits smaller than `spark.sql.files.maxPartitionBytes` end up in the last partitions.
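A simplified sketch of that planning step (an assumption modeled on the behavior described above, not Spark's exact code; `MAX_PARTITION_BYTES` stands in for `spark.sql.files.maxPartitionBytes`): splits are sorted by size, largest first, then packed greedily into partitions up to a byte budget, which is where the original file order is lost.

```python
# Simplified sketch of size-based split packing (not Spark's actual code).

MAX_PARTITION_BYTES = 128  # stand-in for spark.sql.files.maxPartitionBytes

def plan_partitions(splits, max_bytes):
    # splits: list of (file_name, size_in_bytes), in original file order.
    ordered = sorted(splits, key=lambda s: s[1], reverse=True)  # order lost here
    partitions, current, current_size = [], [], 0
    for split in ordered:
        if current and current_size + split[1] > max_bytes:
            partitions.append(current)      # close the current partition
            current, current_size = [], 0
        current.append(split)
        current_size += split[1]
    if current:
        partitions.append(current)
    return partitions

splits = [("a", 40), ("b", 100), ("c", 10), ("d", 90), ("e", 5)]
print(plan_partitions(splits, MAX_PARTITION_BYTES))
# [[('b', 100)], [('d', 90)], [('a', 40), ('c', 10), ('e', 5)]]
```

Note how the small splits are packed together into the last partition, regardless of where their files appeared in the input listing.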
So, if you need to load a sorted dataset from files, you need to implement your own reader.