How to write data to Redshift that is a result of a dataframe created in Python?
import pandas_redshift as pr

pr.connect_to_redshift(dbname = <dbname>,
                       host = <host>,
                       port = <port>,
                       user = <user>,
                       password = <password>)

pr.connect_to_s3(aws_access_key_id = <aws_access_key_id>,
                 aws_secret_access_key = <aws_secret_access_key>,
                 bucket = <bucket>,
                 subdirectory = <subdirectory>)

# Write the DataFrame to S3 and then to Redshift
pr.pandas_to_redshift(data_frame = data_frame,
                      redshift_table_name = 'gawronski.nba_shots_log')
Details: https://github.com/agawronski/pandas_redshift
You can use to_sql to push data to a Redshift database. I've been able to do this using a connection to my database through a SQLAlchemy engine. Just be sure to set index = False in your to_sql call. The table will be created if it doesn't exist, and you can specify whether you want your call to replace the table, append to the table, or fail if the table already exists.
from sqlalchemy import create_engine
import pandas as pd
conn = create_engine('postgresql://username:password@yourhost:5439/yourdatabase')
df = pd.DataFrame([{'A': 'foo', 'B': 'green', 'C': 11},{'A':'bar', 'B':'blue', 'C': 20}])
df.to_sql('your_table', conn, index=False, if_exists='replace')
Note that you may need to pip install psycopg2 in order to connect to Redshift through SQLAlchemy. See the to_sql documentation for the full set of options.
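For example, to add rows to an existing table instead of replacing it (a minimal sketch reusing the conn engine and df from above; 'your_table' is a placeholder name):

# Append to the table; use if_exists='fail' to raise an error if the table already exists
df.to_sql('your_table', conn, index=False, if_exists='append')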
I tried using pandas df.to_sql() but it was tremendously slow. It was taking me well over 10 minutes to insert 50 rows. See this open issue (as of writing).
I tried using odo from the blaze ecosystem (as per the recommendations in the issue discussion), but faced a ProgrammingError which I didn't bother to investigate.
Finally, what worked:
import psycopg2

# Fill in the blanks for the conn object
conn = psycopg2.connect(user = 'user',
                        password = 'password',
                        host = 'host',
                        dbname = 'db',
                        port = 666)
cursor = conn.cursor()

# np_data is a NumPy array of rows; adjust the %s placeholders
# and the column list below to match the number of columns
args_str = b','.join(cursor.mogrify("(%s,%s,...)", x) for x in tuple(map(tuple, np_data)))
cursor.execute("insert into table (a,b,...) VALUES " + args_str.decode("utf-8"))

cursor.close()
conn.commit()
conn.close()
Yep, plain old psycopg2. This is for a NumPy array, but converting from a df to an ndarray shouldn't be too difficult. This gave me around 3k rows/minute.
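The conversion itself is typically a one-liner (a minimal sketch, assuming df holds the data with columns in the same order as the target table):

# Turn the DataFrame into the NumPy array of rows used by the mogrify loop above
np_data = df.to_numpy()  # or df.values on older pandas versions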
However, the fastest solution, as per recommendations from other teammates, is to use the COPY command after dumping the dataframe as a TSV/CSV into an S3 bucket and then copying it over into Redshift. You should look into this if you're copying really huge datasets. (I will update here if and when I try it out)
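For reference, a rough sketch of that S3 + COPY approach might look like the following (untested here, per the note above). It assumes boto3 and psycopg2 are installed; the bucket name, object key, IAM role ARN, and table name are all hypothetical placeholders:

import io
import boto3
import psycopg2

# Dump the DataFrame to an in-memory CSV and upload it to S3
csv_buffer = io.StringIO()
df.to_csv(csv_buffer, index=False, header=False)
boto3.client('s3').put_object(Bucket='your-bucket',
                              Key='staging/your_table.csv',
                              Body=csv_buffer.getvalue())

# COPY the file from S3 into Redshift; the IAM role must allow Redshift to read the bucket
conn = psycopg2.connect(user='user', password='password',
                        host='host', dbname='db', port=5439)
cursor = conn.cursor()
cursor.execute("""
    COPY your_table
    FROM 's3://your-bucket/staging/your_table.csv'
    IAM_ROLE 'arn:aws:iam::123456789012:role/YourRedshiftRole'
    CSV;
""")
conn.commit()
cursor.close()
conn.close()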