Is there better way to display entire Spark SQL DataFrame?
It is generally not advisable to display an entire DataFrame to stdout, because that means you need to pull the entire DataFrame (all of its values) to the driver (unless the DataFrame is already local, which you can check with df.isLocal).
Unless you know ahead of time that your dataset is sufficiently small that the driver JVM process has enough memory available to accommodate all values, it is not safe to do this. That's why the DataFrame API's show() displays only the first 20 rows by default.
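A minimal sketch of that guard, assuming you only want to dump everything when the data is known to be small (the 1000-row cutoff is an arbitrary illustration, not a Spark recommendation):
// Only render the whole DataFrame when it is already local or verifiably small.
val limit = 1000
if (df.isLocal || df.count() <= limit) {
  df.show(limit, false) // false disables value truncation
} else {
  df.show() // fall back to the default first 20 rows
}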
You could use df.collect, which returns Array[T], and then iterate over each row and print it:
df.collect.foreach(println)
but you lose all the formatting implemented in df.showString(numRows: Int) (which show() uses internally).
So no, I guess there is no better way.
As others suggested, printing out the entire DF is a bad idea. However, you can use df.rdd.foreachPartition(f) to print the output partition by partition without flooding the driver JVM (as collect does).
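A rough sketch of that approach (note that with a non-local master, the println output lands in the executors' stdout, not on the driver console):
// Print each partition's rows without collecting the whole DataFrame.
// On a cluster, this output appears in the executor logs, not on the driver.
df.rdd.foreachPartition { rows =>
  rows.foreach(println)
}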
Try with:
df.show(35, false)
It will display 35 rows with their full column values (without truncating them).
One way is to use the count() function to get the total number of records and pass that to show(). Note that show() expects an Int while count() returns a Long, so the count has to be cast.
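A minimal sketch of this, assuming the row count fits in an Int (hence the toInt cast):
// Show every row; beware that this still pulls all rows to the driver.
df.show(df.count().toInt, false) // false keeps long values untruncated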