What is the optimal data type for an MD5 field?
The data type uuid is perfectly suited for the task. It only occupies 16 bytes as opposed to 37 bytes in RAM for the varchar or text representation. (Or 33 bytes on disk, but the odd number would require padding in many cases, making it 40 bytes effectively.) And the uuid type has some more advantages.
Example:
SELECT md5('Store hash for long string, maybe for index?')::uuid AS md5_hash;
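If you want to verify the size difference yourself, a quick check with pg_column_size() (which reports the byte footprint of a value) looks something like this:
SELECT pg_column_size(md5('Store hash for long string, maybe for index?'))       AS md5_as_text  -- 32 hex digits plus varlena overhead
     , pg_column_size(md5('Store hash for long string, maybe for index?')::uuid) AS md5_as_uuid; -- 16 bytes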
See:
- Convert hex in text representation to decimal number
- Would index lookup be noticeably faster with char vs varchar when all values are 36 chars
You might consider other (cheaper) hashing functions if you don't need the cryptographic component of md5, but I would go with md5 for your use case (mostly read-only).
A word of warning: For your case (immutable once written) a functionally dependent (pseudo-natural) PK is fine. But the same would be a pain where updates on text are possible. Think of correcting a typo: the PK and all depending indexes, FK columns in "dozens of other tables" and other references would have to change as well. Table and index bloat, locking issues, slow updates, lost references, ...
If text can change in normal operation, a surrogate PK would be a better choice. I suggest a bigserial column: its range of -9223372036854775808 to +9223372036854775807 (that's nine quintillion two hundred twenty-three quadrillion three hundred seventy-two trillion thirty-six something billion) gives you plenty of distinct values for billions of rows. It might be a good idea in any case: 8 instead of 16 bytes for dozens of FK columns and indexes! Or use a random UUID for much bigger cardinalities or distributed systems. You can always store said md5 (as uuid) additionally to find rows in the main table from the original text quickly; a minimal sketch follows below the related link. Related:
- Default value for UUID column in Postgres
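Here is a minimal sketch of that layout, assuming a hypothetical table and hypothetical column names:
CREATE TABLE long_text (
   long_text_id bigserial PRIMARY KEY  -- cheap 8-byte surrogate key for FK references
 , content      text NOT NULL
 , content_md5  uuid NOT NULL UNIQUE   -- md5 of content stored as uuid; UNIQUE adds the lookup index
);

-- Find a row from the original text:
SELECT long_text_id
FROM   long_text
WHERE  content_md5 = md5('Store hash for long string, maybe for index?')::uuid;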
As for your query:
- Optimizing a Postgres query with a large IN
To address @Daniel's comment: If you prefer a representation without hyphens, remove the hyphens for display:
SELECT replace('90b7525e-84f6-4850-c2ef-b407fae3f271', '-', '')
But I wouldn't bother. The default representation is just fine. And the problem's really not the representation here.
If other parties take a different approach and throw strings without hyphens into the mix, that's no problem either. Postgres accepts several reasonable text representations as input for a uuid. The manual:
PostgreSQL also accepts the following alternative forms for input: use of upper-case digits, the standard format surrounded by braces, omitting some or all hyphens, adding a hyphen after any group of four digits. Examples are:
A0EEBC99-9C0B-4EF8-BB6D-6BB9BD380A11
{a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11}
a0eebc999c0b4ef8bb6d6bb9bd380a11
a0ee-bc99-9c0b-4ef8-bb6d-6bb9-bd38-0a11
{a0eebc99-9c0b4ef8-bb6d6bb9-bd380a11}
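A quick way to confirm this: cast one of the hyphen-less forms and the value comes back in the canonical representation:
SELECT 'a0eebc999c0b4ef8bb6d6bb9bd380a11'::uuid;  -- a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11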
What's more, the md5() function returns text; you would use decode() to convert to bytea, and the default representation of that is:
SELECT decode(md5('Store hash for long string, maybe for index?'), 'hex')
\220\267R^\204\366HP\302\357\264\007\372\343\362q
You would have to encode() again to get the original text representation:
SELECT encode(my_md5_as_bytea, 'hex');
To top it off, values stored as bytea would occupy 20 bytes in RAM (and 17 bytes on disk, 24 with padding) due to the internal varlena overhead, which is particularly unfavorable for size and performance of simple indexes.
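You can confirm that overhead with the same kind of pg_column_size() check as above; the 20 bytes are the 16 bytes of hash plus the 4-byte varlena header:
SELECT pg_column_size(decode(md5('Store hash for long string, maybe for index?'), 'hex'));  -- 20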
Everything works in favor of a uuid here.
I would store the MD5 in a text or varchar column. There is no performance difference between the various character data types. You might want to constrain the length of the md5 values by using varchar(xxx) to make sure the md5 value never exceeds a certain length.
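For example, something along these lines (hypothetical table; a hex-encoded md5 is always 32 characters):
create table the_table (
  name_key varchar(32) not null,  -- hex-encoded md5, never longer than 32 characters
  payload  text
);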
Large IN lists are usually not really fast; it's better to do something like this:
with md5vals (md5) as (
  values ('one'), ('two'), ('three')
)
select t.*
from the_table t
join md5vals m on t.name_key = m.md5;
Another option that is sometimes said to be faster is to use an array:
select t.*
from the_table t
where name_key = ANY (array['one', 'two', 'three']);
As you are just comparing for equality, a regular BTree index should be fine. Both queries should be able to make use of such an index (especially if they are selecting only a small fraction of the rows).
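For example (index name is arbitrary):
create index idx_the_table_name_key on the_table (name_key);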