Which is more efficient for searches on JSON data in Postgres: GIN or multiple indexed columns?
The advantage of JSON is versatility: you can add any keys without changing the table definition. And maybe convenience, if your application can read and write JSON directly.
Separate columns beat a combined `json` or `jsonb` column in every performance aspect, and in several other aspects, too: a more sophisticated type system, the full range of functionality (`CHECK`, `UNIQUE`, and foreign key constraints, default values, etc.), a smaller table, smaller indexes, and faster queries.
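To illustrate the difference, here is a minimal sketch of the two designs; all table and column names are invented for the example:

```sql
-- jsonb variant: flexible keys, but no per-key types or constraints
CREATE TABLE person_json (
    id   bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    data jsonb NOT NULL
);

-- separate columns: proper types plus the full range of constraints
CREATE TABLE person (
    id      bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    name    text NOT NULL,
    email   text UNIQUE,                      -- unique constraint
    age     int  CHECK (age >= 0),            -- check constraint
    country text NOT NULL DEFAULT 'unknown'   -- default value
);
```

The second table stores each value in its native type (no key names repeated per row), which is why both the table and its indexes end up smaller.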
For prefix matching on `text` columns you might use a `text_pattern_ops` index:
- Why would you index text_pattern_ops on a text column?
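A short sketch of what that looks like, assuming a hypothetical table `person` with a `text` column `name`:

```sql
-- B-tree index with text_pattern_ops: supports left-anchored LIKE
-- patterns even when the database uses a non-C collation
CREATE INDEX person_name_pattern_idx ON person (name text_pattern_ops);

-- can use the index (pattern anchored at the start):
SELECT * FROM person WHERE name LIKE 'Ann%';
```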
Or, more generally, a trigram index supporting any `LIKE` pattern:
- Full Text Search With PostgreSQL
- Pattern matching with LIKE, SIMILAR TO or regular expressions in PostgreSQL
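A trigram index needs the additional module `pg_trgm`. A sketch, with an invented table `doc`:

```sql
CREATE EXTENSION IF NOT EXISTS pg_trgm;

CREATE TABLE doc (id bigint PRIMARY KEY, body text);

-- GIN trigram index: supports LIKE / ILIKE even with leading wildcards
CREATE INDEX doc_body_trgm_idx ON doc USING gin (body gin_trgm_ops);

SELECT * FROM doc WHERE body ILIKE '%postgres%';
```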
While you stick with JSON (`jsonb` in particular), there are also different indexing strategies. GIN or B-tree is not the only decision to make: partial indexes, expression indexes, and different operator classes (in particular `jsonb_path_ops`) are all options. Related:
- How to get particular object from jsonb array in PostgreSQL?
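To sketch those options side by side, assuming a hypothetical table `event` with a `jsonb` column `payload`:

```sql
CREATE TABLE event (id bigint PRIMARY KEY, payload jsonb NOT NULL);

-- default GIN operator class: supports the ?, ?|, ?& and @> operators
CREATE INDEX event_payload_gin ON event USING gin (payload);

-- jsonb_path_ops: smaller and faster, but only supports containment
-- (@>) and jsonpath operators (@?, @@)
CREATE INDEX event_payload_path ON event USING gin (payload jsonb_path_ops);

-- expression index on a single key: a plain B-tree usable for
-- equality and range queries on that key
CREATE INDEX event_type_idx ON event ((payload ->> 'type'));

-- partial index: restricted to the rows you actually query
CREATE INDEX event_error_code_idx ON event ((payload ->> 'code'))
WHERE payload ->> 'level' = 'error';
```

Which one pays off depends on your queries: `jsonb_path_ops` is the typical choice when you only ever test containment, while an expression index on a hot key often beats a whole-column GIN index for simple lookups.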