How to make DISTINCT ON faster in PostgreSQL?
For only 400 stations, this query will be massively faster:
SELECT s.station_id, l.submitted_at, l.level_sensor
FROM   station s
CROSS  JOIN LATERAL (
   SELECT submitted_at, level_sensor
   FROM   station_logs
   WHERE  station_id = s.station_id
   ORDER  BY submitted_at DESC NULLS LAST
   LIMIT  1
   ) l;
dbfiddle here (comparing plans for this query, Abelisto's alternative, and your original)
Resulting EXPLAIN ANALYZE as provided by the OP:
Nested Loop  (cost=0.56..356.65 rows=102 width=20) (actual time=0.034..0.979 rows=98 loops=1)
  ->  Seq Scan on stations s  (cost=0.00..3.02 rows=102 width=4) (actual time=0.009..0.016 rows=102 loops=1)
  ->  Limit  (cost=0.56..3.45 rows=1 width=16) (actual time=0.009..0.009 rows=1 loops=102)
        ->  Index Scan using station_id__submitted_at on station_logs  (cost=0.56..664062.38 rows=230223 width=16) (actual time=0.009$
              Index Cond: (station_id = s.id)
Planning time: 0.542 ms
Execution time: 1.013 ms  -- !!
The only index you need is the one you created: station_id__submitted_at. The UNIQUE constraint uniq_sid_sat also does the job, basically. Maintaining both seems like a waste of disk space and write performance.
I added NULLS LAST to the ORDER BY in the query because submitted_at isn't defined NOT NULL. Ideally, if applicable, add a NOT NULL constraint to the column submitted_at, drop the additional index, and remove NULLS LAST from the query.
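A minimal sketch of that path, assuming submitted_at holds no NULL values yet and you keep the UNIQUE constraint uniq_sid_sat, whose index then covers the query via a backward index scan:

ALTER TABLE station_logs ALTER COLUMN submitted_at SET NOT NULL;  -- fails if NULL values exist
DROP INDEX station_id__submitted_at;                              -- redundant next to uniq_sid_sat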
If submitted_at can be NULL, create this UNIQUE index to replace both your current index and unique constraint:
CREATE UNIQUE INDEX station_logs_uni ON station_logs(station_id, submitted_at DESC NULLS LAST);
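Once the new index is in place, the two objects it replaces can go (names taken from the question):

ALTER TABLE station_logs DROP CONSTRAINT uniq_sid_sat;
DROP INDEX station_id__submitted_at;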
Consider:
- Does creating a unique constraint on a Postgres column remove the need to index it?
- How does PostgreSQL enforce the UNIQUE constraint / what type of index does it use?
This is assuming a separate table station with one row per relevant station_id (typically the PK) - which you should have either way. If you don't have it, create it. Again, very fast with this rCTE technique:
CREATE TABLE station AS
WITH RECURSIVE cte AS (
   (  -- parentheses required to allow ORDER BY / LIMIT in this leg
   SELECT station_id
   FROM   station_logs
   ORDER  BY station_id
   LIMIT  1
   )
   UNION ALL
   SELECT l.station_id
   FROM   cte c
   ,      LATERAL (
      SELECT station_id
      FROM   station_logs
      WHERE  station_id > c.station_id  -- jump to the next higher station_id
      ORDER  BY station_id
      LIMIT  1
      ) l
   )
TABLE cte;
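Having created the table this way, you will typically want station_id as its primary key (the text above assumes it is; adapt to your schema):

ALTER TABLE station ADD PRIMARY KEY (station_id);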
I use that rCTE in the fiddle as well. You could use a similar query to solve your task directly, without the station table - if you can't be convinced to create it. A sketch follows below.
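A sketch of that direct variant, assuming the columns from your query (untested against your data):

WITH RECURSIVE cte AS (
   (
   SELECT station_id, submitted_at, level_sensor
   FROM   station_logs
   ORDER  BY station_id, submitted_at DESC NULLS LAST
   LIMIT  1
   )
   UNION ALL
   SELECT l.station_id, l.submitted_at, l.level_sensor
   FROM   cte c
   ,      LATERAL (
      SELECT station_id, submitted_at, level_sensor
      FROM   station_logs
      WHERE  station_id > c.station_id  -- jump to the next station
      ORDER  BY station_id, submitted_at DESC NULLS LAST
      LIMIT  1
      ) l
   )
TABLE cte;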
Detailed instructions, explanation and alternatives:
- Optimize GROUP BY query to retrieve latest record per user
- Select first row in each GROUP BY group?
Optimize index
Your query should be very fast now. Only if you still need to optimize read performance ...
It might make sense to add level_sensor as the last column to the index to allow index-only scans, as joanolo commented.
Con: It makes the index bigger, which adds a little cost to all queries using it.
Pro: If you actually get index-only scans out of it, the query at hand does not have to visit heap pages at all, which makes it about twice as fast. But that may be an insubstantial gain for an already very fast query.
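For reference, a sketch of what that widened index could look like (the name is illustrative; it is a plain index, because a UNIQUE index across all three columns would no longer enforce uniqueness on (station_id, submitted_at) alone):

CREATE INDEX station_logs_covering_idx
ON station_logs (station_id, submitted_at DESC NULLS LAST, level_sensor);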
However, I don't expect that to work for your case. You mentioned:

... around 20k rows per day per station_id.

Typically, that indicates an unceasing write load (roughly one row per station_id every 4-5 seconds). And you are interested in the latest row. Index-only scans only work for heap pages that are visible to all transactions (the bit in the visibility map is set). You would have to run extremely aggressive VACUUM settings for the table to keep up with the write load, and it would still not work most of the time. If my assumptions are correct, index-only scans are out; don't add level_sensor to the index.
OTOH, if my assumptions hold, and your table is growing very big, a BRIN index might help. Related:
- Speed up creation of Postgres partial index
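A minimal sketch of such a BRIN index, assuming rows are inserted in roughly submitted_at order (BRIN depends on that physical correlation; the index name is illustrative):

CREATE INDEX station_logs_submitted_at_brin
ON station_logs USING brin (submitted_at);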
Or, even more specialized and more efficient: A partial index for only the latest additions to cut off the bulk of irrelevant rows:
CREATE INDEX station_id__submitted_at_recent_idx ON station_logs(station_id, submitted_at DESC NULLS LAST)
WHERE submitted_at > '2017-06-24 00:00';
Choose a timestamp for which you know that younger rows must exist.
You have to add a matching WHERE condition to all queries, like:
...
WHERE station_id = s.station_id
AND submitted_at > '2017-06-24 00:00'
...
You have to adapt the index and the query from time to time, as the cutoff timestamp grows stale. A sketch of one rotation follows below.
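One way to rotate the index without blocking writes (the newer cutoff and the _new suffix are illustrative):

CREATE INDEX CONCURRENTLY station_id__submitted_at_recent_idx_new
ON station_logs (station_id, submitted_at DESC NULLS LAST)
WHERE submitted_at > '2017-12-01 00:00';

DROP INDEX CONCURRENTLY station_id__submitted_at_recent_idx;

ALTER INDEX station_id__submitted_at_recent_idx_new
RENAME TO station_id__submitted_at_recent_idx;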
Related answers with more details:
- Index optimization with dates
- Add datetime constraint to a PostgreSQL multi-column partial index
- Indexed ORDER BY with LIMIT 1
Try the classic way:
create index idx_station_logs__station_id on station_logs(station_id);
create index idx_station_logs__submitted_at on station_logs(submitted_at);
analyse station_logs;
with t as (
select station_id, max(submitted_at) submitted_at
from station_logs
group by station_id)
select *
from t join station_logs l on (
l.station_id = t.station_id and l.submitted_at = t.submitted_at);
dbfiddle
EXPLAIN ANALYZE as provided by the OP:
Nested Loop  (cost=701344.63..702110.58 rows=4 width=155) (actual time=6253.062..6253.544 rows=98 loops=1)
  CTE t
    ->  HashAggregate  (cost=701343.18..701344.07 rows=89 width=12) (actual time=6253.042..6253.069 rows=98 loops=1)
          Group Key: station_logs.station_id
          ->  Seq Scan on station_logs  (cost=0.00..598894.12 rows=20489812 width=12) (actual time=0.034..1841.848 rows=20489812 loop$
  ->  CTE Scan on t  (cost=0.00..1.78 rows=89 width=12) (actual time=6253.047..6253.085 rows=98 loops=1)
  ->  Index Scan using station_id__submitted_at on station_logs l  (cost=0.56..8.58 rows=1 width=143) (actual time=0.004..0.004 rows=$
        Index Cond: ((station_id = t.station_id) AND (submitted_at = t.submitted_at))
Planning time: 0.542 ms
Execution time: 6253.701 ms