Python Pandas to_sql, how to create a table with a primary key?
with engine.connect() as con:
    con.execute('ALTER TABLE for_import_ml ADD PRIMARY KEY ("ID");')

for_import_ml is a table name in the database.
Adding a slight variation to tomp's answer (I would comment, but I don't have enough reputation points).
I am using pgAdmin with Postgres (on Heroku) to check, and it works.
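For context on why the ALTER TABLE is needed at all: to_sql by itself emits a CREATE TABLE with no primary key constraint. A minimal sketch showing this with an in-memory SQLite database (the column names here are made up for illustration):

```python
import sqlite3

import pandas as pd

df = pd.DataFrame({'ID': [1, 2], 'value': ['a', 'b']})

con = sqlite3.connect(':memory:')
df.to_sql('for_import_ml', con, index=False)

# Inspect the DDL pandas generated: no PRIMARY KEY clause appears,
# which is why the constraint has to be added afterwards.
ddl = con.execute(
    "SELECT sql FROM sqlite_master WHERE name = 'for_import_ml'"
).fetchone()[0]
print(ddl)
con.close()
```

Note that SQLite itself does not support ALTER TABLE ... ADD PRIMARY KEY, so the ALTER statement above is for engines like Postgres or MySQL. Also, with SQLAlchemy 1.4+/2.0 you need to wrap the raw DDL string in sqlalchemy.text() before passing it to con.execute().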
Disclaimer: this answer is more experimental than practical, but it may be worth mentioning.
I found that the class pandas.io.sql.SQLTable has a named argument keys, and if you assign it the name of a field, that field becomes the primary key.
Unfortunately, you can't just pass this argument through from the DataFrame.to_sql() function. To use it you should:
create a pandas.io.sql.SQLDatabase instance:

engine = sa.create_engine('postgresql:///somedb')
pandas_sql = pd.io.sql.pandasSQL_builder(engine, schema=None, flavor=None)
define a function analogous to pandas.io.sql.SQLDatabase.to_sql(), but with an additional **kwargs argument that is passed to the pandas.io.sql.SQLTable object created inside it (I've just copied the original to_sql() method and added **kwargs):

def to_sql_k(self, frame, name, if_exists='fail', index=True,
             index_label=None, schema=None, chunksize=None, dtype=None, **kwargs):
    if dtype is not None:
        from sqlalchemy.types import to_instance, TypeEngine
        for col, my_type in dtype.items():
            if not isinstance(to_instance(my_type), TypeEngine):
                raise ValueError('The type of %s is not a SQLAlchemy '
                                 'type ' % col)

    table = pd.io.sql.SQLTable(name, self, frame=frame, index=index,
                               if_exists=if_exists, index_label=index_label,
                               schema=schema, dtype=dtype, **kwargs)
    table.create()
    table.insert(chunksize)
call this function with your SQLDatabase instance and the dataframe you want to save:

to_sql_k(pandas_sql, df2save, 'tmp',
         index=True, index_label='id', keys='id', if_exists='replace')
And we get something like
CREATE TABLE public.tmp
(
id bigint NOT NULL DEFAULT nextval('tmp_id_seq'::regclass),
...
)
in the database.
P.S. You can, of course, monkey-patch DataFrame, io.sql.SQLDatabase and io.sql.to_sql() to use this workaround for convenience.
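On recent pandas versions the same trick needs less copying, since you can construct the SQLTable with keys= directly and skip the duplicated to_sql() body. The following is only a sketch: pandas.io.sql is private API, and names like pandasSQL_builder and the SQLTable signature have changed between versions, so treat every call here as version-dependent.

```python
import pandas as pd
import sqlalchemy as sa

engine = sa.create_engine('sqlite://')  # in-memory database for illustration
df2save = pd.DataFrame({'id': [1, 2], 'value': ['a', 'b']})

# Private pandas internals: build the SQLDatabase wrapper, then create the
# table with keys='id' so the CREATE TABLE gets a PRIMARY KEY constraint.
pandas_sql = pd.io.sql.pandasSQL_builder(engine)
table = pd.io.sql.SQLTable(
    'tmp', pandas_sql, frame=df2save,
    index=False, keys='id', if_exists='replace',
)
table.create()
table.insert(chunksize=None)

# Reflect the table to confirm which columns ended up in the primary key.
pk_cols = sa.inspect(engine).get_pk_constraint('tmp')['constrained_columns']
print(pk_cols)
```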
As of pandas 0.15, at least for some flavors, you can use the dtype argument to define a primary key column. You can even activate AUTOINCREMENT this way. For sqlite3, this would look like so:
import sqlite3
import pandas as pd
df = pd.DataFrame({'MyID': [1, 2, 3], 'Data': [3, 2, 6]})
with sqlite3.connect('foo.db') as con:
    df.to_sql('df', con=con, dtype={'MyID': 'INTEGER PRIMARY KEY AUTOINCREMENT'})
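If you want to confirm what that dtype string actually produces, you can read the generated DDL back out of sqlite_master. A small check of the same call, using an in-memory database instead of foo.db and skipping the index column:

```python
import sqlite3

import pandas as pd

df = pd.DataFrame({'MyID': [1, 2, 3], 'Data': [3, 2, 6]})

with sqlite3.connect(':memory:') as con:
    df.to_sql('df', con=con, index=False,
              dtype={'MyID': 'INTEGER PRIMARY KEY AUTOINCREMENT'})
    # The dtype string is used verbatim as the column's type in the
    # CREATE TABLE, so the DDL carries both PRIMARY KEY and AUTOINCREMENT.
    ddl = con.execute(
        "SELECT sql FROM sqlite_master WHERE name = 'df'"
    ).fetchone()[0]
    print(ddl)
```

SQLite only honors AUTOINCREMENT on a column declared exactly as INTEGER PRIMARY KEY, which is why the whole clause goes into the dtype string.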
Simply add the primary key after uploading the table with pandas.

group_export.to_sql(con=engine, name='example_table', if_exists='replace',
                    index=False)  # the flavor='mysql' argument was removed in newer pandas versions

with engine.connect() as con:
    con.execute('ALTER TABLE `example_table` ADD PRIMARY KEY (`ID_column`);')