PySpark RDD code examples: union and sortByKey

Example 1: Return the union of this RDD and another one

# union(other)

rdd = sc.parallelize([1, 1, 2, 3])
rdd.union(rdd).collect()
#  [1, 1, 2, 3, 1, 1, 2, 3]
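Note that union() simply concatenates the two RDDs and does not de-duplicate, which is why each element appears twice above. The same semantics can be sketched in plain Python (no Spark needed) — `rdd_union` is a hypothetical helper for illustration, not a Spark API:

```python
# Sketch of RDD.union() semantics: concatenation, duplicates preserved.
def rdd_union(a, b):
    return a + b

combined = rdd_union([1, 1, 2, 3], [1, 1, 2, 3])
# combined == [1, 1, 2, 3, 1, 1, 2, 3]
```

To drop the duplicates afterwards, Spark provides a separate distinct() transformation on the unioned RDD.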

Example 2: Sort this RDD, assumed to consist of (key, value) pairs, by key

# sortByKey(ascending=True, numPartitions=None, keyfunc=lambda x: x)
tmp = [('a', 1), ('b', 2), ('1', 3), ('d', 4), ('2', 5)]
sc.parallelize(tmp).sortByKey().first()
# ('1', 3)
sc.parallelize(tmp).sortByKey(True, 1).collect()
# [('1', 3), ('2', 5), ('a', 1), ('b', 2), ('d', 4)]
sc.parallelize(tmp).sortByKey(True, 2).collect()
# [('1', 3), ('2', 5), ('a', 1), ('b', 2), ('d', 4)]
tmp2 = [('Mary', 1), ('had', 2), ('a', 3), ('little', 4), ('lamb', 5)]
tmp2.extend([('whose', 6), ('fleece', 7), ('was', 8), ('white', 9)])
sc.parallelize(tmp2).sortByKey(True, 3, keyfunc=lambda k: k.lower()).collect()
# [('a', 3), ('fleece', 7), ('had', 2), ('lamb', 5),...('white', 9)]
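The keyfunc argument transforms each key before comparison — here lowercasing it so 'Mary' sorts among the lowercase words rather than before them. The resulting order can be sketched in plain Python (no Spark needed); `sort_by_key` is a hypothetical stand-in for the RDD transformation:

```python
# Sketch of sortByKey(keyfunc=...) semantics: sort pairs by a
# transformed key, leaving the stored keys themselves unchanged.
def sort_by_key(pairs, ascending=True, keyfunc=lambda k: k):
    return sorted(pairs, key=lambda kv: keyfunc(kv[0]), reverse=not ascending)

tmp2 = [('Mary', 1), ('had', 2), ('a', 3), ('little', 4), ('lamb', 5),
        ('whose', 6), ('fleece', 7), ('was', 8), ('white', 9)]
result = sort_by_key(tmp2, keyfunc=lambda k: k.lower())
# result starts [('a', 3), ('fleece', 7), ('had', 2), ('lamb', 5), ...]
```

Without the lowercasing keyfunc, 'Mary' would sort first, because uppercase letters compare less than lowercase ones in Python's default string ordering.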
