Who can give a clear explanation for `combineByKey` in Spark?

The groupByKey call makes no attempt at merging/combining values before the shuffle, so every key/value pair has to be moved across the network, which makes it an expensive operation.

Thus the combineByKey call is exactly such an optimization. With combineByKey, values are merged into one value per key within each partition, and then each partition's per-key value is merged into a single final value. It's worth noting that the type of the combined value does not have to match the type of the original values, and often it won't. The combineByKey function takes three functions as arguments (a short sketch follows the list):

  1. A function that creates a combiner. In the aggregateByKey function the first argument was simply an initial zero value. In combineByKey we instead provide a function that accepts our current value as a parameter and returns the new value that will be merged with additional values.

  2. The second function is a merging function that takes a value and merges/combines it into the previously collected value (the accumulator for that key).

  3. The third function combines the merged values together. It takes the values produced at the partition level and combines them until we end up with a single value per key.
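
For example, here is a minimal PySpark sketch of the three functions computing a per-key average. It assumes an existing SparkContext named sc; the data and variable names are made up for illustration:

    # Illustrative data; assumes a SparkContext called sc.
    rdd = sc.parallelize([("a", 3), ("a", 5), ("b", 7), ("a", 4)])

    sum_counts = rdd.combineByKey(
        lambda value: (value, 1),                         # createCombiner: first value for a key in a partition
        lambda acc, value: (acc[0] + value, acc[1] + 1),  # mergeValue: fold another value into (sum, count)
        lambda a, b: (a[0] + b[0], a[1] + b[1]))          # mergeCombiners: combine accumulators across partitions

    averages = sum_counts.mapValues(lambda p: p[0] / float(p[1]))
    print(averages.collect())   # something like [('a', 4.0), ('b', 7.0)]

Notice that the accumulator is a (sum, count) tuple even though the input values are plain numbers; that is the type change mentioned above.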

In other words, to understand combineByKey, it’s useful to think of how it handles each element it processes. As combineByKey goes through the elements in a partition, each element either has a key it hasn’t seen before or has the same key as a previous element.

If it’s a new element, combineByKey uses a function we provide, called createCombiner(), to create the initial value for the accumulator on that key. It’s important to note that this happens the first time a key is found in each partition, rather than only the first time the key is found in the RDD.

If the key has already been seen while processing that partition, it will instead use the provided function, mergeValue(), with the current value of that key's accumulator and the new value.

Since each partition is processed independently, we can have multiple accumulators for the same key. When we are merging the results from each partition, if two or more partitions have an accumulator for the same key, we merge the accumulators using the user-supplied mergeCombiners() function.
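
One way to see this in action is to look at how an RDD is split across partitions with glom(). A small sketch, assuming a SparkContext named sc and a forced 2-partition split:

    pairs = sc.parallelize([("a", 1), ("b", 2), ("a", 3), ("a", 4)], 2)
    print(pairs.glom().collect())
    # Something like [[('a', 1), ('b', 2)], [('a', 3), ('a', 4)]]:
    # the key 'a' appears in both partitions, so createCombiner() runs once for 'a'
    # in each partition and mergeCombiners() later joins the two accumulators.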

References:

  • Learning Spark, Chapter 4.
  • The "Using combineByKey in Apache-Spark" blog entry.

'@' is only added when values are merged within the same partition, i.e. by the second (mergeValue) function. In your example, it seems there is only one element per partition, so that function is never called. Try:

data.combineByKey(lambda v: str(v) + "_", lambda c, v: c + "@" + str(v), lambda c1, c2: c1 + "$" + c2).collect()

and see the difference.
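
To make the difference concrete, here is a small sketch (assuming an existing SparkContext named sc) that runs the same combineByKey on the same pairs, once in a single partition and once spread over three partitions:

    # Illustrative data; the partition count decides which separator shows up.
    one_part   = sc.parallelize([(1, 1), (1, 2), (1, 3)], 1)
    three_part = sc.parallelize([(1, 1), (1, 2), (1, 3)], 3)

    def tag(rdd):
        return rdd.combineByKey(
            lambda v: str(v) + "_",          # createCombiner
            lambda c, v: c + "@" + str(v),   # mergeValue (within a partition)
            lambda c1, c2: c1 + "$" + c2     # mergeCombiners (across partitions)
        ).collect()

    print(tag(one_part))    # values meet inside one partition -> roughly [(1, '1_@2@3')]
    print(tag(three_part))  # one value per partition          -> roughly [(1, '1_$2_$3_')]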