PostgreSQL query very slow with limit 1
You're running into an issue that relates, I think, to the lack of statistics on row correlations. Consider reporting it to the pgsql-bugs mailing list for reference if this happens on the latest version of Postgres.
The interpretation I'd suggest for your plans is:
limit 1 makes Postgres look for a single row, and in doing so it assumes that your object_id is common enough that one will show up reasonably quickly in an index scan. Based on the stats you gave, it probably thinks it'll need to read ~70 rows on average to find one row that fits; it just doesn't realize that object_id and timestamp correlate to the point where it's actually going to read a large portion of the table.
limit 3, in contrast, makes it realize that the object_id is uncommon enough, so it seriously considers (and ends up using) a top-n sort of an expected 1700 rows with the object_id you want, on the grounds that doing so is likely cheaper. For instance, it might know that the distribution of these rows is such that they're all packed in the same area on the disk.
No limit clause means it'll fetch the 1700 rows anyway, so it goes straight for the index on object_id.
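To see which plan the planner picks in each case, you can compare the plans directly with EXPLAIN ANALYZE. This is a sketch using the table and column names from the question; the exact plan nodes will vary with your data:

```sql
-- With LIMIT 1, expect a (backward) scan on the timestamp index that
-- filters on objectID; with LIMIT 3, expect an index scan on objectID
-- followed by a top-n sort.
EXPLAIN ANALYZE
SELECT * FROM object_values
WHERE objectID = 53708
ORDER BY timestamp DESC
LIMIT 1;

EXPLAIN ANALYZE
SELECT * FROM object_values
WHERE objectID = 53708
ORDER BY timestamp DESC
LIMIT 3;
```

Comparing the estimated row counts against the actual ones in the two outputs is what exposes the correlation misestimate described above.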
Solution, btw: add an index on (object_id, timestamp) or (object_id, timestamp desc).
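A minimal sketch of that suggested index, again using the names from the question (the index name is illustrative):

```sql
-- Composite index matching the WHERE + ORDER BY of the problem query.
-- With this, Postgres can walk straight to the newest row for a given
-- objectID, so LIMIT 1 becomes a cheap index scan instead of a crawl
-- through the timestamp index.
CREATE INDEX object_values_objectid_ts_idx
    ON object_values (objectID, timestamp DESC);
```

The DESC on timestamp matches the ORDER BY ... DESC of the query, though a plain ascending index also works here since Postgres can scan btree indexes backward.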
I started having similar symptoms on an update-heavy table, and what was needed in my case was
analyze $table_name;
In this case the statistics needed to be refreshed, which fixed the slow query plans.
Supporting docs: https://www.postgresql.org/docs/current/sql-analyze.html
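If you suspect stale statistics, you can check when the table was last analyzed before running ANALYZE by hand. This uses the standard pg_stat_user_tables system view; the table name is the one from the question:

```sql
-- When did a manual or automatic ANALYZE last touch this table?
SELECT relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'object_values';

-- Refresh the planner statistics for just this table.
ANALYZE object_values;
```

On update-heavy tables it may also be worth tuning the table's autovacuum/autoanalyze thresholds so the statistics don't go stale in the first place.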
You can avoid this issue by adding an unneeded ORDER BY clause to the query.
SELECT * FROM object_values WHERE (objectID = 53708) ORDER BY timestamp, objectID DESC limit 1;
Not a fix, but sure enough, switching from limit 1 to limit 50 (for me) and returning the first result row is way faster. This was on Postgres 9.x. Just thought I'd note it, since the OP also mentioned this workaround.