Best way to use a PostgreSQL database as a simple key value store
The PostgreSQL extension for this is hstore. It works much the way you would expect from other key-value store systems: just load the extension. The syntax is its own, but if you have ever used Redis or Mongo you will pick it up quickly. Don't make it harder than it is. I understand; we often don't get to pick our tools and have to make do.
Here is the documentation page:
http://www.postgresql.org/docs/9.1/static/hstore.html
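A minimal sketch of the basics (the table and keys here are just illustrative):
CREATE EXTENSION IF NOT EXISTS hstore;

-- hstore keys and values are both text
CREATE TABLE kv_store (data hstore);

INSERT INTO kv_store VALUES ('"name" => "alice", "role" => "admin"');

SELECT data -> 'name' FROM kv_store;                            -- read one value
UPDATE kv_store SET data = data || '"role" => "user"'::hstore;  -- add or overwrite a key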
Another option is to use JSON or JSONB values with a hash index on the key. (A hash index cannot enforce uniqueness, but a generated UUID key makes collisions practically impossible.)
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";

CREATE TABLE key_values (
    key   uuid DEFAULT uuid_generate_v4(),
    value jsonb
);

-- A hash index supports only equality lookups, which is all a key-value store needs.
CREATE INDEX idx_key_values ON key_values USING hash (key);
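If you let the default generate the key, RETURNING hands it back in the same round trip (the payload is just an example):
INSERT INTO key_values (value)
VALUES ('{"somelarge_json": "bla"}')
RETURNING key;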
Some queries:
postgres=# SELECT * FROM key_values WHERE key = '1cfc4dbf-a1b9-46b3-8c15-a03f51dde891';
Time: 0.514 ms
postgres=# SELECT * FROM key_values WHERE key = '1cfc4dbf-a1b9-46b3-8c15-a03f51dde890';
Time: 1.747 ms
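To verify the hash index is actually being used rather than a sequential scan, check the plan:
postgres=# EXPLAIN SELECT * FROM key_values WHERE key = '1cfc4dbf-a1b9-46b3-8c15-a03f51dde891';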
postgres=# do $$
begin
  for r in 1..1000 loop
    INSERT INTO key_values (value)
    VALUES ('{"somelarge_json": "bla"}');
  end loop;
end;
$$;
DO
Time: 58.327 ms
You can't run efficient range queries the way you can with a B-tree, but for pure equality lookups it should give better read/write performance, and the index should be about 60% smaller.
If you are forced to use a relational database, I would suggest finding structure in your data and taking advantage of it, since you forgo the speed advantage you would get from a dedicated key-value store for unstructured data. The more structure you find, the better you come out of your predicament, even if you only find structure in the keys.
Also consider whether you will need sequential or random access to your data, and in what ratio, and structure your database around that requirement. Are you going to query your values by type, for example? Each of these questions can affect how you should structure your database.
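If you do need to query inside the jsonb values, by type for instance, a GIN index makes containment queries efficient (a sketch, assuming your values carry a "type" field):
CREATE INDEX idx_key_values_value ON key_values USING gin (value);

-- find all rows whose value contains {"type": "user"}
SELECT * FROM key_values WHERE value @> '{"type": "user"}';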
One specific consideration about blobs in PostgreSQL: large objects are internally stored in pg_largeobject (loid oid, pageno int4, data bytea). The chunk size is defined by LOBLKSIZE, typically 2 kB (a quarter of the default 8 kB block size). So if you can use bytea columns in your table instead of blobs and keep each key/value pair under the block size, you avoid that indirection through a second table. You could also increase the block size, but note that BLCKSZ is a compile-time option, so that means building PostgreSQL yourself rather than just editing configuration.
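A minimal sketch of the bytea alternative (the table name is illustrative):
CREATE TABLE kv_blobs (
    key   uuid DEFAULT uuid_generate_v4(),
    value bytea  -- stays inline with the row while small enough; no pg_largeobject indirection
);

INSERT INTO kv_blobs (value) VALUES (decode('deadbeef', 'hex'));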
I'd suggest looking for structure in your data and patterns in your data access, and then asking your question again with more detail.