argmax in Spark DataFrames: how to retrieve the row with the maximum value
If the schema is Orderable (i.e., it contains only atomic types, arrays of atomics, or recursively orderable structs), you can use simple aggregations:
Python:
from pyspark.sql import functions as F

df.select(F.max(
    F.struct("values", *(x for x in df.columns if x != "values"))
)).first()
Scala:
import org.apache.spark.sql.functions.{col, max, struct}
import spark.implicits._  // for the $"..." syntax; `spark` is the active SparkSession

df.select(max(struct(
  $"values" +: df.columns.collect { case x if x != "values" => col(x) }: _*
))).first
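For a concrete, end-to-end illustration of what this returns, here is a Scala sketch (the sample data, column names, and the mx alias are hypothetical, not part of the original snippets). Aliasing the struct and re-expanding it with mx.* gives the winning row back as ordinary columns; the field listed first in the struct drives the comparison, the remaining fields just ride along.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, max, struct}

val spark = SparkSession.builder.getOrCreate()
import spark.implicits._

// Hypothetical sample data; "values" is the column we take the argmax over.
val df = Seq(("a", 1, 100), ("b", 2, 120), ("c", 10, 1000))
  .toDF("name", "id", "values")

// max over a struct compares fields left to right, so listing "values" first
// makes the aggregation keep the whole row with the largest "values".
val mxCol = max(struct($"values" +: df.columns.collect { case c if c != "values" => col(c) }: _*))

val best = df.select(mxCol.alias("mx")).select("mx.*").first()
// best: [1000,c,10]  (columns: values, name, id)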
Otherwise you can reduce over a Dataset (Scala only), but it requires additional deserialization:
type T = ???  // replace with the concrete type of the "values" column

df.reduce((a, b) => if (a.getAs[T]("values") > b.getAs[T]("values")) a else b)
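As a minimal sketch of what the placeholder expands to, assuming an integer "values" column and the same hypothetical sample data as above:

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder.getOrCreate()
import spark.implicits._

// Hypothetical data with an integer "values" column.
val df = Seq(("a", 1, 100), ("b", 2, 120), ("c", 10, 1000)).toDF("name", "id", "values")

// With T = Int the sketch above becomes a plain pairwise comparison of rows;
// each comparison deserializes both rows, which is the extra cost mentioned above.
val best = df.reduce((a, b) => if (a.getAs[Int]("values") > b.getAs[Int]("values")) a else b)
// best: [c,10,1000]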
You can also orderBy and limit(1) / take(1):
Scala:
import org.apache.spark.sql.functions.desc

df.orderBy(desc("values")).limit(1)
// or
df.orderBy(desc("values")).take(1)
Python:
df.orderBy(F.desc('values')).limit(1)
# or
df.orderBy(F.desc("values")).take(1)
Maybe it's an incomplete answer, but you can also use the DataFrame's underlying RDD, apply the max method, and get the maximum record by a chosen key.
a = sc.parallelize([
    ("a", 1, 100),
    ("b", 2, 120),
    ("c", 10, 1000),
    ("d", 14, 1000)
]).toDF(["name", "id", "salary"])

a.rdd.max(key=lambda x: x["salary"])  # Row(name=u'c', id=10, salary=1000)
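For completeness, a hedged Scala sketch of the same RDD-based approach, rebuilding the same hypothetical data (variable names mirror the Python snippet; with ties on salary, either tied row may come back depending on partition order):

import org.apache.spark.sql.{Row, SparkSession}

val spark = SparkSession.builder.getOrCreate()
import spark.implicits._

val a = Seq(("a", 1, 100), ("b", 2, 120), ("c", 10, 1000), ("d", 14, 1000))
  .toDF("name", "id", "salary")

// Same idea as above: drop down to the RDD and pick the Row with the
// largest "salary" under an explicit Ordering.
val best = a.rdd.max()(Ordering.by((r: Row) => r.getAs[Int]("salary")))
// one of the rows with salary = 1000, e.g. [c,10,1000]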