How to insert Spark DataFrame to Hive Internal table?
Neither of the options here worked for me, or they have probably been deprecated since the answers were written.
According to the latest Spark API docs (for Spark 2.1), the way to do this is the insertInto() method from the DataFrameWriter class.
I'm using the Python PySpark API but it would be the same in Scala:
df.write.insertInto("target_db.target_table", overwrite=False)
The above worked for me.
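Note that insertInto() resolves columns by position rather than by name, and the target table must already exist. A minimal end-to-end sketch, where the database target_db and table target_table are illustrative names and the table's column order matches the DataFrame:

from pyspark.sql import SparkSession

# Hive support is required to write managed Hive tables (Spark 2.x)
spark = SparkSession.builder.enableHiveSupport().getOrCreate()

df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "value"])

# insertInto() needs an existing table and matches columns by position
spark.sql("CREATE DATABASE IF NOT EXISTS target_db")
spark.sql("CREATE TABLE IF NOT EXISTS target_db.target_table (id BIGINT, value STRING)")
df.write.insertInto("target_db.target_table", overwrite=False)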
df.saveAsTable("tableName", "append")
is deprecated. Instead, you should use the second approach (for the non-deprecated form of the call itself, see the sketch at the end of this answer).
sqlContext.sql("CREATE TABLE IF NOT EXISTS mytable as select * from temptable")
It will create the table if it does not exist. When you run your code a second time, you need to drop the existing table first; otherwise your code will exit with an exception.
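If you do want to recreate the table on every run, a sketch of the drop-and-recreate pattern, assuming the DataFrame df has been registered as temptable:

# Spark 1.6 style: expose the DataFrame to SQL, then rebuild the table
df.registerTempTable("temptable")
sqlContext.sql("DROP TABLE IF EXISTS mytable")
sqlContext.sql("CREATE TABLE mytable AS SELECT * FROM temptable")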
Another approach, if you don't want to drop the table: create the table separately, then insert your data into it.
The below code will append data to the existing table
sqlContext.sql("insert into table mytable select * from temptable")
And the below code will overwrite the data in the existing table
sqlContext.sql("insert overwrite table mytable select * from temptable")
This answer is based on Spark 1.6.2. If you are using a different version of Spark, I would suggest checking the appropriate documentation.
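For reference, the deprecated df.saveAsTable("tableName", "append") call mentioned above has a non-deprecated DataFrameWriter form; a minimal sketch:

# DataFrameWriter API (Spark 1.4+): append to the table, creating it if absent
df.write.mode("append").saveAsTable("tableName")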