Using monotonically_increasing_id() for assigning row numbers to a PySpark DataFrame
From the documentation:
A column that generates monotonically increasing 64-bit integers.
The generated ID is guaranteed to be monotonically increasing and unique, but not consecutive. The current implementation puts the partition ID in the upper 31 bits, and the record number within each partition in the lower 33 bits. The assumption is that the data frame has less than 1 billion partitions, and each partition has less than 8 billion records.
Thus, it is not like an auto-increment ID in RDBs, and it is not reliable for merging.
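For instance, here is a minimal sketch (assuming an active SparkSession named spark) that shows the jump between partitions:
from pyspark.sql.functions import monotonically_increasing_id
# two partitions, so the ids in the second partition start at 1 << 33
df = spark.range(6).repartition(2)
df = df.withColumn("mono_id", monotonically_increasing_id())
df.show()
# rows in one partition get 0, 1, 2, ..., while rows in the other
# start at 8589934592 (1 << 33): increasing and unique, but not consecutive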
If you need an auto-increment behavior like in RDBs and your data is sortable, then you can use row_number:
df.createOrReplaceTempView('df')
spark.sql('select row_number() over (order by some_column) as num, * from df').show()
+---+-----------+
|num|some_column|
+---+-----------+
|  1|    .......|
|  2|    .......|
|  3|    .......|
+---+-----------+
If your data is not sortable and you don't mind using RDDs to create the indexes and then falling back to DataFrames, you can use rdd.zipWithIndex()
An example can be found here
In short:
# since you have a dataframe, use the rdd interface to create indexes with zipWithIndex()
# this yields an RDD of (Row, index) tuples
df = df.rdd.zipWithIndex()
# return back to dataframe
df = df.toDF()
df.show()
# your data | indexes
+---------------------+---+
| _1 | _2|
+---------------------+---+
|[data col1,data col2]| 0|
|[data col1,data col2]| 1|
|[data col1,data col2]| 2|
+---------------------+---+
You will probably need some more transformations after that to get your dataframe to where you need it to be; a sketch of one such step follows. Note: not a very performant solution.
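For example, a possible follow-up (a sketch based on the _1/_2 layout shown above) that flattens the struct back into top-level columns and names the index:
from pyspark.sql import functions as F
# "_1" holds the original Row as a struct, "_2" holds the index
df = df.select("_1.*", F.col("_2").alias("index"))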
Hope this helps. Good luck!
Edit: Come to think of it, you can combine monotonically_increasing_id with row_number:
from pyspark.sql.functions import monotonically_increasing_id
# create a monotonically increasing id
df = df.withColumn("idx", monotonically_increasing_id())
# since the id is increasing (though not consecutive), you can sort by it and use row_number
df.createOrReplaceTempView('df')
new_df = spark.sql('select row_number() over (order by idx) as num, * from df')
Not sure about performance though; be aware that a window with an order by and no partition by moves all rows to a single partition (Spark logs a warning about this).
Full examples of the ways to do this and the risks can be found here
Using API functions you can simply do the following:
from pyspark.sql.window import Window as W
from pyspark.sql import functions as F
# assign an increasing (but not consecutive) id to sort by
df1 = df1.withColumn("idx", F.monotonically_increasing_id())
# row_number over that ordering gives consecutive numbers starting at 1
windowSpec = W.orderBy("idx")
df1.withColumn("idx", F.row_number().over(windowSpec)).show()
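As a quick usage sketch on a toy dataframe (the letter column is just an illustration), this should print something like:
df1 = spark.createDataFrame([("a",), ("b",), ("c",)], ["letter"])
df1 = df1.withColumn("idx", F.monotonically_increasing_id())
df1.withColumn("idx", F.row_number().over(W.orderBy("idx"))).show()
# +------+---+
# |letter|idx|
# +------+---+
# |     a|  1|
# |     b|  2|
# |     c|  3|
# +------+---+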
I hope the answer is helpful.
I found the solution by @mkaran useful, but for me there was no ordering column to use with the window function. I wanted to maintain the order of the rows of the dataframe as their indexes (what you would see in a pandas dataframe). Hence the solution in the edit section came to use. Since it is a good solution (if performance is not a concern), I would like to share it as a separate answer.
from pyspark.sql.window import Window
from pyspark.sql import functions as F

# Add an increasing id column
df_index = df.withColumn("idx", F.monotonically_increasing_id())
# Create the window specification
w = Window.orderBy("idx")
# Use row_number with the window specification
df_index = df_index.withColumn("index", F.row_number().over(w))
# Drop the temporary increasing id column
df_index = df_index.drop("idx")
df is your original dataframe and df_index is the new dataframe.