Using a psycopg2 converter to retrieve bytea data from PostgreSQL

The format you see in the debugger is easy to parse: it is the PostgreSQL hex binary format (http://www.postgresql.org/docs/9.1/static/datatype-binary.html). psycopg2 can parse that format and return a buffer containing the data, and you can use that buffer to build an array. Instead of writing a typecaster from scratch, write one that invokes the original typecaster and postprocesses its result. Sorry, but I can't remember its name right now and I'm writing from a mobile: you may get further help from the mailing list.
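For reference, the hex format is just a literal \x prefix followed by two hex digits per byte, so it is straightforward to decode by hand (the value below is a made-up example, not one from the question):

```python
# PostgreSQL hex bytea output: a '\x' prefix, then two hex digits per byte
raw = r"\x48656c6c6f"          # hypothetical value as it might appear in a debugger
data = bytes.fromhex(raw[2:])  # strip the '\x' prefix and decode the hex digits
# data == b'Hello'
```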


Edit: complete solution.

The default bytea typecaster (the object that can parse the PostgreSQL binary representation and return a buffer object from it) is psycopg2.BINARY. We can use it to create a typecaster that returns a NumPy array instead:

In [1]: import psycopg2

In [2]: import numpy as np

In [3]: a = np.eye(3)

In [4]: a
Out[4]:
array([[ 1.,  0.,  0.],
       [ 0.,  1.,  0.],
       [ 0.,  0.,  1.]])

In [5]: cnn = psycopg2.connect('')


# The adapter: converts from python to postgres
# note: this only works on numpy versions whose arrays
# support the buffer protocol,
# e.g. it works on 1.5.1 but not on 1.0.4 in my tests.

In [12]: def adapt_array(a):
  ....:     return psycopg2.Binary(a)
  ....:

In [13]: psycopg2.extensions.register_adapter(np.ndarray, adapt_array)


# The typecaster: from postgres to python

In [21]: def typecast_array(data, cur):
  ....:     if data is None: return None
  ....:     buf = psycopg2.BINARY(data, cur)
  ....:     return np.frombuffer(buf)
  ....:

In [24]: ARRAY = psycopg2.extensions.new_type(psycopg2.BINARY.values,
    'ARRAY', typecast_array)

In [25]: psycopg2.extensions.register_type(ARRAY)


# Now it works "as expected"

In [26]: cur = cnn.cursor()

In [27]: cur.execute("select %s", (a,))

In [28]: cur.fetchone()[0]
Out[28]: array([ 1.,  0.,  0.,  0.,  1.,  0.,  0.,  0.,  1.])

As you know, np.frombuffer(buf) loses the array shape (and assumes a float64 dtype), so you will have to figure out a way to preserve them.
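One simple workaround, sketched here with a hard-coded shape (in practice the shape would have to be stored out of band, e.g. in a separate column), is to reshape after decoding:

```python
import numpy as np

a = np.eye(3)
buf = a.tobytes()              # stand-in for the buffer returned by the typecaster
flat = np.frombuffer(buf)      # dtype defaults to float64; shape is lost -> (9,)
restored = flat.reshape(3, 3)  # only possible if the shape is known out of band
```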


For the case of NumPy arrays one can avoid the buffer strategy, with its disadvantages such as the loss of shape and data type. Following a Stack Overflow question about storing a numpy array in sqlite3, one can easily adapt the approach for PostgreSQL.

import io
import psycopg2 as psql
import numpy as np

# converts from python to postgres
def _adapt_array(array):
    out = io.BytesIO()
    np.save(out, array)
    out.seek(0)
    return psql.Binary(out.read())

# converts from postgres to python
def _typecast_array(value, cur):
    if value is None:
        return None

    data = psql.BINARY(value, cur)
    bdata = io.BytesIO(data)
    bdata.seek(0)
    return np.load(bdata)

con = psql.connect('')

psql.extensions.register_adapter(np.ndarray, _adapt_array)
t_array = psql.extensions.new_type(psql.BINARY.values, "numpy", _typecast_array)
psql.extensions.register_type(t_array)

cur = con.cursor()

Now one can create and fill a table (with a defined as in the previous post)

cur.execute("create table test (arr BYTEA)")
cur.execute("insert into test values(%s)", (a,))

And restore the numpy object

cur.execute("select * from test")
cur.fetchone()[0]

Result:

array([[ 1.,  0.,  0.],
       [ 0.,  1.,  0.],
       [ 0.,  0.,  1.]])
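The np.save / np.load round trip underlying this approach can be checked without a database; unlike the raw buffer strategy, it preserves both shape and dtype:

```python
import io
import numpy as np

a = np.eye(3)
out = io.BytesIO()
np.save(out, a)                      # same serialization the adapter uses
out.seek(0)
b = np.load(io.BytesIO(out.read()))  # same deserialization as the typecaster
```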