Spark DataFrame - Select n random rows
I would prefer to do this in PySpark.
df.sample(withReplacement=False, fraction=desired_fraction)
See the documentation for DataFrame.sample for details.
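Since sample() takes a fraction rather than an exact count, one approach is to derive the fraction from the row count and cap the result with limit(). A minimal sketch, where desired_n stands in for the number of rows you want; note that sampling is approximate, so you may get slightly fewer than desired_n rows:

count = df.count()
fraction = min(1.0, desired_n / count)  # fraction may not exceed 1.0
df.sample(withReplacement=False, fraction=fraction).limit(desired_n)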
You can try the sample() method. Unfortunately, it takes a fraction rather than a row count. You can write a function like this:
def getRandom(dataset: Dataset[_], n: Int): Dataset[_] = {
  val count = dataset.count()
  // never ask for more rows than the dataset contains
  val howManyTake = if (count > n) n else count
  dataset.sample(withReplacement = false, fraction = 1.0 * howManyTake / count).limit(n)
}
Explanation: we must express the sample as a fraction of the data. If the DataFrame has 2000 rows and you want 100 of them, the fraction is 100/2000 = 0.05. If you ask for more rows than the DataFrame contains, the fraction must be 1.0. limit() is invoked to make sure the rounding works out and you don't get more rows than you specified.
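Since the question prefers PySpark, a rough equivalent of the function above could look like this (get_random is just an illustrative name; like the Scala version, it assumes a non-empty DataFrame):

def get_random(df, n):
    count = df.count()
    how_many_take = n if count > n else count
    # sample approximately the right fraction, then trim any rounding overshoot
    return df.sample(withReplacement=False, fraction=1.0 * how_many_take / count).limit(n)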
Edit: I see the takeSample method mentioned in another answer. But remember:
- It's a method of RDD, not Dataset, so you must do something like:
val rows = dataset.toDF().rdd.takeSample(false, 1000, System.currentTimeMillis())
spark.createDataFrame(spark.sparkContext.parallelize(rows.toSeq), dataset.schema)
- takeSample collects all sampled values to the driver. If you request a very large number of rows, you risk an OutOfMemoryError, because takeSample materializes its result on the driver. Use it carefully.
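The same caveat holds in PySpark: RDD.takeSample returns a plain Python list of Row objects on the driver, so you have to rebuild a DataFrame from it yourself. A sketch, assuming the sample fits in driver memory and spark is your SparkSession:

import time

# collect 1000 random rows to the driver, then turn them back into a DataFrame
rows = df.rdd.takeSample(withReplacement=False, num=1000, seed=int(time.time()))
sampled_df = spark.createDataFrame(rows, schema=df.schema)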
You can shuffle the rows and then take the top ones:
import org.apache.spark.sql.functions.rand
dataset.orderBy(rand()).limit(n)
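The PySpark equivalent; pass a seed to rand() if you need a reproducible shuffle:

from pyspark.sql.functions import rand

df.orderBy(rand()).limit(n)

Note that orderBy(rand()) sorts the whole DataFrame, so this returns exactly n rows but can be expensive on large datasets.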